\section{Introduction}
The electromagnetic form factors of a nucleon provide information on its internal
momentum space distribution of charge and magnetization, thus furnishing a unique
window into the quark and gluon substructure of the nucleon. Building a bridge
between QCD and the observed nucleon properties is a key challenge for modern
hadron physics, and recent form factor measurements demonstrate that a robust
understanding of nucleon properties founded in QCD is only beginning to emerge.
A key example of the impact of such measurements is provided by the polarization
transfer experiments~\cite{Jones:1999rz,Gayou:2001qd,Gayou:2001qt,Puckett:2010ac,Puckett:2011xg},
which revealed that the ratio of the proton's electric to magnetic Sachs form factors,
$\mu_p\,G_{Ep}(Q^2)/G_{Mp}(Q^2)$, is not constant but instead decreases almost
linearly with $Q^2$. These experiments overturned decades of received wisdom,
which held that the nucleon contains similar distributions
of charge and magnetization. Nucleon form factor data at large $Q^2$ can
also be used to test the scaling behaviour predicted by perturbative QCD,
which, for example, makes the prediction that $Q^2\,F_{2p}(Q^2)/F_{1p}(Q^2)$ should
tend to a constant as $Q^2 \to \infty$~\cite{Brodsky:1973kr,Brodsky:1974vy}.
However, recent data extending to $Q^2 \simeq 8\,$GeV$^2$~\cite{Puckett:2010ac,Puckett:2011xg} instead find scaling behaviour much closer to $Q\,F_{2p}(Q^2)/F_{1p}(Q^2) \simeq \text{constant}$,
which has been attributed to the quark component of the nucleon wave function
possessing sizeable orbital angular momentum~\cite{Ralston:2003mt}.
An interesting recent example, which demonstrates that there is much
of a fundamental nature still to learn in hadron physics, involves the muonic
hydrogen experiments~\cite{Pohl:2010zza,Antognini:1900ns} that found a proton charge radius
some 4\% smaller than that measured in elastic electron scattering or
electronic hydrogen, representing a 7$\sigma$ discrepancy. As yet there is no
accepted resolution to this puzzle~\cite{Cloet:2010qa,Miller:2011yw,Carroll:2011rv}.
It is clear, therefore, that a quantitative theoretical understanding of
nucleon form factors in terms of the fundamental degrees of freedom of QCD, namely
the quarks and gluons, remains an important goal. This task is particularly
challenging because nucleon form factors parameterize the amplitude for a nucleon
to interact through a current and remain a nucleon, for arbitrary space-like
momentum transfer. Therefore, long distance non-perturbative effects associated
with quark binding and confinement must play an important role at all $Q^2$, while,
because of asymptotic freedom at short distances, perturbative QCD must also be
relevant at large momentum transfer. This scenario is somewhat in contrast to
that found with the structure functions measured in deep inelastic scattering,
which can be factorized into short distance Wilson coefficients, calculable
in perturbative QCD, and the long distance parton distribution functions
(PDFs) which encode non-perturbative information on the structure of the bound
state. A consequence of factorization is that once the PDFs are known at a
scale $Q_0^2 \gg \Lambda_{\text{QCD}}^2$, the $Q^2$ evolution of the PDFs, on
the Bjorken $x$ domain relevant to hadron structure, is governed by
the DGLAP evolution equations~\cite{Gribov:1972rt,Altarelli:1977zs,Dokshitzer:1977sg}.
An analogous factorization is not possible for the
nucleon electromagnetic form factors.
Here we investigate the nucleon electromagnetic form factors using the
Nambu--Jona-Lasinio (NJL) model~\cite{Nambu:1961tp,Nambu:1961fr,Vogl:1991qt,Hatsuda:1994pi,Klevansky:1992qe}, which is a Poincar\'e covariant quantum field theory with many of the same
low-energy properties as QCD. For example, it encapsulates
the key emergent phenomena of dynamical
chiral symmetry breaking and confinement.\footnote{Standard implementations of the
NJL model are not confining. This can be seen in results for hadron propagators
which develop imaginary pieces in particular kinematical domains, indicating
that the hadron can decay into quarks. In the version of the NJL model used here
quark confinement is introduced via a particular regularization prescription
which eliminates these unphysical thresholds. This regularization procedure
is discussed in Sect.~\ref{sec:NJL}.} This model also has the same flavour
symmetries as QCD and should therefore provide a robust chiral effective theory
of QCD valid at low to intermediate energies. The NJL model is solved
non-perturbatively, using the standard leading-order truncation. Finally, to
faithfully incorporate the consequences of chiral symmetry, we also include pion
degrees of freedom in a perturbative manner. This proves
essential~\cite{Thomas:1981vc,Theberge:1982xs,Thomas:1982kv} for a
good description of the nucleon form factors below $Q^2 \sim 1\,$GeV$^2$.
The outline of the paper is as follows: Sect.~\ref{sec:NJL} gives an introduction
to the NJL model, encompassing the gap equation, the Bethe-Salpeter equation
and the relativistic Faddeev equation. In Sect.~\ref{sec:nucleon_current} we explain
how to calculate the matrix elements of the quark electromagnetic current which
give the nucleon electromagnetic form factors. A key ingredient is the dressed
quark-photon vertex; the interaction of a virtual photon with a non-pointlike
constituent, or dressed quark, which is detailed in Sect.~\ref{sec:QPV}.
Pion loop effects at the constituent quark level are also discussed and results for
dressed quark form factors are presented. Because the nucleon emerges as a
quark--diquark bound state, a critical step in determining the nucleon form factors is
to determine the electromagnetic current for the relevant diquarks. This is
discussed in Sect.~\ref{sec:diquark_results} for scalar and axialvector diquarks,
together with form factor results for the pion and rho mesons, which are
the $\bar{q}q$ analogs of these diquarks. The electromagnetic current of the
nucleon is determined in Sect.~\ref{sec:nucleon_results}, where the role of
pion loop effects is discussed in detail. Careful attention is paid to the flavour
decomposition of the nucleon form factors and the interpretation of their
$Q^2$ dependence in terms of the interplay between the roles of diquark correlations
and pionic effects within the nucleon. Comparisons with experiment are presented
and inferences drawn regarding features of the data and connections to
the quark structure within the nucleon.
Conclusions are presented in Sect.~\ref{sec:conclusion}.
\section{Nambu--Jona-Lasinio Model \label{sec:NJL}}
The Nambu--Jona-Lasinio (NJL) model, while originally a theory of elementary nucleons~\cite{Nambu:1961tp,Nambu:1961fr}, is now interpreted as a QCD motivated chiral effective quark theory characterized by a 4-fermion contact interaction between the quarks~\cite{Vogl:1991qt,Hatsuda:1994pi,Klevansky:1992qe}. A salient feature of the model is that it is a Poincar\'e covariant quantum field theory where interactions dynamically break chiral symmetry, giving rise to dynamically generated dressed quark masses, a pion that is a $\bar{q}q$ bound state with the properties of a pseudo-Goldstone boson and a large mass splitting between low lying chiral partners. The NJL model has a long history of success in the study of meson properties~\cite{Vogl:1991qt,Klevansky:1992qe} and more recently as a tool to investigate baryons as 3-quark bound states using the relativistic Faddeev equation~\cite{Ishii:1993np,Ishii:1993rt,Ishii:1995bu}. Recent examples include the study of nucleon parton distribution functions (PDFs)~\cite{Mineo:2003vc,Cloet:2005rt,Cloet:2005pp,Cloet:2006bq,Cloet:2007em}, quark fragmentation functions~\cite{Ito:2009zc,Matevosyan:2010hh} and transverse momentum dependent PDFs~\cite{Matevosyan:2011vj,Matevosyan:2012ga}. Finally, we mention that the NJL model has been used to study the self-consistent modification of the structure of the nucleon in-medium and its role in the binding of atomic nuclei~\cite{Bentz:2001vc}.
The $SU(2)$ flavour NJL Lagrangian relevant to this study, in the $\bar{q}q$ interaction channel, reads\footnote{The complete $SU(2)$ flavour NJL interaction Lagrangian can in principle also contain the chiral singlet terms
\begin{multline*}
\tfrac{1}{2}\,G_\eta\left[ \left(\bar{\psi}\,\vec{\tau}\,\psi\right)^2 - \left(\bar{\psi}\,\gamma_5\,\psi\right)^2\right]
- \tfrac{1}{2}\,G_f\left(\bar{\psi}\,\gamma^\mu\gamma_5\,\psi\right)^2 \\
- \tfrac{1}{2}\,G_T\left[\left(\bar{\psi}\,i\sigma^{\mu\nu}\psi\right)^2 - \left(\bar{\psi}\,i\sigma^{\mu\nu}\vec{\tau}\,\psi\right)^2\right].
\end{multline*}
The complete Lagrangian explicitly breaks $U_A(1)$ symmetry unless $G_\eta = G_\pi$ and $G_T = 0$. These are the conditions imposed on the NJL Lagrangian by chiral symmetry if the chiral group is enlarged to three flavours, where the $U_A(1)$ symmetry is usually broken by introducing a 6-fermion interaction~\cite{Vogl:1991qt,Klevansky:1992qe}.}
\begin{align}
\mathcal{L} &= \bar{\psi}\left(i\sh{\partial}-\hat{m}\right)\psi \nonumber \\
&\hspace{5mm}
+ \frac{1}{2}\,G_\pi\left[\left(\bar{\psi}\psi\right)^2 - \left(\bar{\psi}\,\gamma_5\vec{\tau}\,\psi\right)^2\right]
- \frac{1}{2}\,G_\omega\left(\bar{\psi}\,\gamma^\mu\,\psi\right)^2 \nonumber \\
&\hspace{5mm}
- \frac{1}{2}\,G_\rho\left[\left(\bar{\psi}\,\gamma^\mu\vec{\tau}\,\psi\right)^2 + \left(\bar{\psi}\,\gamma^\mu\gamma_5\vec{\tau}\,\psi\right)^2\right],
\label{eq:njllagrangian}
\end{align}
where $\hat{m} \equiv \text{diag}[m_u,\,m_d]$ is the current quark mass matrix and the 4-fermion coupling constants in each chiral channel are labelled by $G_\pi$, $G_\omega$ and $G_\rho$. Throughout this paper we take $m_u = m_d = m$. The interaction Lagrangian can be Fierz symmetrized, with the consequence that after a redefinition of the 4-fermion couplings one need only consider direct terms in the elementary interaction~\cite{Ishii:1995bu}. The elementary quark--antiquark interaction kernel is then given by
\begin{align}
\mathcal{K}_{\alpha\beta,\gamma\delta} &= \sideset{}{_\Omega} \sum K_\Omega\,\Omega_{\alpha\beta}\, \bar{\Omega}_{\gamma\delta}\nonumber \\
&\hspace{-8mm}
= 2i\,G_\pi \left[\left(\mbox{\bb 1}\right)_{\alpha\beta}\left(\mbox{\bb 1}\right)_{\gamma\delta} - \left(\gamma_5\tau_i\right)_{\alpha\beta}\left(\gamma_5\tau_i\right)_{\gamma\delta}\right] \nonumber \\
&\hspace{-8mm}
-2i\,G_\rho \left[\left(\gamma_\mu\tau_i\right)_{\alpha\beta}\left(\gamma^\mu\tau_i\right)_{\gamma\delta} + \left(\gamma_\mu\gamma_5\tau_i\right)_{\alpha\beta}\left(\gamma^\mu\gamma_5\tau_i\right)_{\gamma\delta}\right] \nonumber \\
&\hspace{-8mm}
-2i\,G_\omega\left(\gamma_\mu\right)_{\alpha\beta}\left(\gamma^\mu\right)_{\gamma\delta},
\label{eq:qbarqkernel}
\end{align}
where the indices label Dirac, colour and isospin.
The building blocks of mesons and baryons in the NJL model are the quark propagators. The NJL dressed quark propagator is obtained by solving the gap equation, which at the level of approximation used here is illustrated in Fig.~\ref{fig:gapequation} and reads\footnote{In principle there is an infinite tower of higher order terms that can appear in the NJL gap equation kernel, with meson loops an important example. However, in keeping with the standard treatment, these higher order terms are not included. We will however include a single pion loop as a perturbative correction to the quark-photon vertex. This is discussed in Sect.~\ref{sec:QPV}.}
\begin{align}
S^{-1}(k) = S_0^{-1}(k) - \sum_\Omega K_\Omega\, \Omega \int \frac{d^4\ell}{(2\pi)^4} \ \text{Tr}\left[\bar{\Omega}\,S(\ell)\right],
\label{eq:gapequation}
\end{align}
where $S_0^{-1}(k) = \sh{k} - m + i\varepsilon$ is the bare quark propagator and the trace is over Dirac, colour and isospin indices. The only piece of the $\bar{q}q$ interaction kernel given in Eq.~\eqref{eq:qbarqkernel} that contributes to the gap equation expressed in Eq.~\eqref{eq:gapequation} is the isoscalar-scalar interaction $2i\,G_\pi\left(\mbox{\bb 1}\right)_{\alpha\beta}\left(\mbox{\bb 1}\right)_{\gamma\delta}$. This yields a solution of the form
\begin{align}
S(k) = \frac{1}{\sh{k} - M + i\varepsilon}.
\label{eq:quarkpropagator}
\end{align}
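The remaining channels of the interaction kernel in Eq.~\eqref{eq:qbarqkernel} drop out of Eq.~\eqref{eq:gapequation} because the corresponding traces vanish. Schematically, using Eq.~\eqref{eq:quarkpropagator} and assuming a symmetric regularization,
\begin{align*}
\text{Tr}\left[\gamma_5\,\tau_i\,S(\ell)\right] = 0, \qquad
\text{Tr}\left[\gamma^\mu\gamma_5\,\tau_i\,S(\ell)\right] = 0, \qquad
\int \frac{d^4\ell}{(2\pi)^4}\ \text{Tr}\left[\gamma^\mu\,S(\ell)\right] = 0,
\end{align*}
where the first two traces vanish by the Dirac algebra and $\text{Tr}\left[\tau_i\right]=0$, while the vector trace is proportional to $\ell^\mu$ and vanishes upon symmetric integration.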
The interaction kernel in the gap equation of Fig.~\ref{fig:gapequation} is local and therefore the dressed quark mass, $M$, is a constant and satisfies
\begin{align}
M = m + 12\,i\,G_\pi \int \frac{d^4\ell}{(2\pi)^4}\,\mathrm{Tr}_D\left[S(\ell)\right],
\label{eq:gap}
\end{align}
where the remaining trace is over Dirac indices. For sufficiently strong coupling, $G_\pi > G_{\text{critical}}$, Eq.~\eqref{eq:gap} supports a non-trivial solution with $M > m$, which survives even in the chiral limit ($m=0$).\footnote{In the proper-time regularization scheme defined in Eq.~\eqref{eq:propertime} the critical coupling in the chiral limit has the value $G_{\text{critical}} = \frac{\pi^2}{3}\left(\Lambda_{UV}^2 - \Lambda_{IR}^2\right)^{-1}$.} This solution is a consequence of dynamical chiral symmetry breaking (DCSB) in the Nambu-Goldstone mode, and it is readily demonstrated, by calculating the total energy~\cite{Buballa:2003qv}, that this phase corresponds to the ground state of the vacuum.
\begin{figure}[tbp]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{gap_equation.pdf}
\caption{(Colour online) The NJL gap equation in the Hartree-Fock approximation, where the thin line represents the elementary quark propagator, $S_0^{-1}(k) = \sh{k} - m + i\varepsilon$, and the shaded circle the $\bar{q}q$ interaction kernel given in Eq.~\eqref{eq:qbarqkernel}.
Higher order terms, attributed to meson loops, for example, are not included in the gap equation kernel.}
\label{fig:gapequation}
\end{figure}
The NJL model is a non-renormalizable quantum field theory; therefore, a regularization prescription must be specified to fully define the model. We choose the proper-time regularization scheme~\cite{Ebert:1996vx,Hellstern:1997nv,Bentz:2001vc}, which is introduced formally via the relation
\begin{align}
\frac{1}{X^n} &= \frac{1}{(n-1)!}\int_0^\infty d\tau \, \tau^{n-1}\, e^{-\tau\,X}, \nonumber \\
&\hspace{15mm}
\longrightarrow \frac{1}{(n-1)!}\int_{1/\Lambda^2_{UV}}^{1/\Lambda^2_{IR}} d\tau \, \tau^{n-1}\, e^{-\tau\,X},
\label{eq:propertime}
\end{align}
where $X$ represents a product of propagators that have been combined using Feynman parametrization. Only the ultraviolet cutoff, $\Lambda_{UV}$, is needed to render the theory finite; however, for bound states of quarks we also include the infrared cutoff, $\Lambda_{IR}$. This has the effect of eliminating unphysical thresholds for the decay of hadrons into free quarks and therefore simulates aspects of quark confinement in QCD.
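To illustrate the scheme, consider the gap equation of Eq.~\eqref{eq:gap}. Using $\mathrm{Tr}_D\left[S(\ell)\right] = 4M/\left(\ell^2 - M^2 + i\varepsilon\right)$, Wick rotating to Euclidean momenta and applying Eq.~\eqref{eq:propertime} with $n=1$ gives, as a sketch of the standard manipulation,
\begin{align*}
M = m + 48\,G_\pi\,M \int \frac{d^4\ell_E}{(2\pi)^4}\, \frac{1}{\ell_E^2 + M^2}
\;\longrightarrow\;
m + \frac{3\,G_\pi\,M}{\pi^2} \int_{1/\Lambda^2_{UV}}^{1/\Lambda^2_{IR}} d\tau\, \frac{e^{-\tau M^2}}{\tau^2}.
\end{align*}
In the chiral limit, dividing by $M$ and taking $M \to 0$ gives $1 = \frac{3\,G_\pi}{\pi^2}\left(\Lambda_{UV}^2 - \Lambda_{IR}^2\right)$, which reproduces the critical coupling given earlier.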
\begin{figure}[t]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{bethe_salpeter_meson_paper.pdf}
\caption{(Colour online) NJL Bethe-Salpeter equation for the quark--antiquark $t$-matrix, represented as the double line with the vertices. The single line corresponds to the dressed quark propagator and the BSE $\bar{q}q$ interaction kernel, consistent with the gap equation kernel used in Eq.~\eqref{eq:gap}, is given by Eq.~\eqref{eq:qbarqkernel}.}
\label{fig:bethesalpeter}
\end{figure}
Mesons in the NJL model are quark--antiquark bound states whose properties are determined by first solving the Bethe-Salpeter equation (BSE). The kernels of the gap and BSEs are intimately related, as exemplified by the vector and axialvector Ward--Takahashi identities, which relate the quark propagator to inhomogeneous Bethe-Salpeter vertices~\cite{Maris:2003vk}. The NJL BSE, consistent with the gap equation of Fig.~\ref{fig:gapequation}, is illustrated in Fig.~\ref{fig:bethesalpeter} and reads
\begin{align}
\mathcal{T}(q) = \mathcal{K} +
\int \frac{d^4k}{(2\pi)^4}\, \mathcal{K}\, S(k+q)\, S(k)\,\mathcal{T}(q),
\label{eq:mesonbse}
\end{align}
where $q$ is the total momentum of the two-body system, $\mathcal{T}$ is the two-body $t$-matrix and $\mathcal{K}$ is the $\bar{q}q$ interaction kernel given in Eq.~\eqref{eq:qbarqkernel}. Dirac, colour and isospin indices have been suppressed in Eq.~\eqref{eq:mesonbse}. Solutions to the BSE in the $\bar{q}q$ channels with quantum numbers that correspond to those of the pion,\footnote{The NJL Lagrangian of Eq.~\eqref{eq:njllagrangian} implies that the $G_\rho\left(\bar{\psi}\,\gamma^\mu\gamma_5\vec{\tau}\,\psi\right)^2$ $\bar{q}q$ interaction should also contribute in the pionic channel, giving rise to $\pi$--$a_1$ mixing. However, since the $a_1$ meson is much heavier than the pion, the amount of mixing is small and we therefore ignore $\pi$--$a_1$ mixing in this work.} rho and omega have the form
\begin{align}
\label{eq:piontmatrix}
\mathcal{T}_\pi(q)_{\alpha\beta,\gamma\delta} &= \left(\gamma_5\tau_i\right)_{\alpha\beta}\hspace{1.5mm}\, \tau_\pi(q) \hspace{3mm}\left(\gamma_5\tau_i\right)_{\gamma\delta}, \\
\mathcal{T}_\rho(q)_{\alpha\beta,\gamma\delta} &= \left(\gamma_\mu\tau_i\right)_{\alpha\beta}\ \, \tau^{\mu\nu}_\rho(q)\ \left(\gamma_\nu\tau_i\right)_{\gamma\delta}, \\
\label{eq:omegatmatrix}
\mathcal{T}_\omega(q)_{\alpha\beta,\gamma\delta} &= \left(\gamma_\mu\right)_{\alpha\beta}\hspace{4mm}\, \tau^{\mu\nu}_\omega(q)\, \, \left(\gamma_\nu\right)_{\gamma\delta},
\end{align}
where $\tau_i$ are the Pauli matrices and
\begin{align}
\label{eq:tpion}
\tau_\pi(q) &= \frac{-2i\,G_\pi}{1 + 2\,G_\pi\,\Pi_{PP}(q^2)}, \\
\label{eq:trho}
\tau^{\mu\nu}_\rho(q)
&= \frac{-2i\,G_\rho}{1+2\,G_\rho\,\Pi_{VV}(q^2)}
\left[g^{\mu\nu} + 2\,G_\rho\,\Pi_{VV}(q^2)\,\frac{q^\mu q^\nu}{q^2}\right], \\
\label{eq:tomega}
\tau^{\mu\nu}_\omega(q)
&= \frac{-2i\,G_\omega}{1+2\,G_\omega\,\Pi_{VV}(q^2)}
\left[g^{\mu\nu} + 2\,G_\omega\,\Pi_{VV}(q^2)\,\frac{q^\mu q^\nu}{q^2}\right].
\end{align}
The functions $\tau_\pi(q)$, $\tau^{\mu\nu}_\rho(q)$ and $\tau^{\mu\nu}_\omega(q)$ are the reduced $t$-matrices, which are interpreted as propagators for the pion, rho and omega mesons. The bubble diagrams in Eqs.~\eqref{eq:tpion}--\eqref{eq:tomega} have the form
\begin{align}
\label{eq:bubble_PP}
&\Pi_{PP}\left(q^2\right)\delta_{ij} = 3i \int \frac{d^4k}{(2\pi)^4}\ \mathrm{Tr}\left[\gamma_5\,\tau_i\,S(k)\,\gamma_5\,\tau_j\,S(k+q)\right], \\
\label{eq:bubble_VV}
&\Pi_{VV}(q^2)\left(g^{\mu\nu} - \frac{q^\mu q^\nu}{q^2}\right)\delta_{ij} \nonumber \\
&\hspace{11mm}
= 3i \int \frac{d^4k}{(2\pi)^4}\ \mathrm{Tr}\left[\gamma^\mu\tau_i\,S(k)\,\gamma^\nu\tau_j\,S(k+q)\right],
\end{align}
where the traces are over Dirac and isospin indices. Meson masses are then defined by the pole in the corresponding two-body $t$-matrix.
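Explicitly, since the reduced $t$-matrices of Eqs.~\eqref{eq:tpion}--\eqref{eq:tomega} develop poles where their denominators vanish, the meson masses are the solutions of
\begin{align*}
1 + 2\,G_\pi\,\Pi_{PP}(q^2)\big\rvert_{q^2 = m_\pi^2} = 0, \qquad
1 + 2\,G_{\rho,\omega}\,\Pi_{VV}(q^2)\big\rvert_{q^2 = m_{\rho,\omega}^2} = 0.
\end{align*}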
In a covariant formulation a two-body $t$-matrix, near a bound state pole of mass $m_i$, behaves as
\begin{align}
\mathcal{T}(q) \to \frac{\Gamma_i(q)\,\overline{\Gamma}_i(q)}{q^2 - m_i^2},
\end{align}
where $\Gamma_i(q)$ is the normalized homogeneous Bethe-Salpeter vertex function for the bound state. Expanding the $t$-matrices in Eqs.~\eqref{eq:piontmatrix}--\eqref{eq:omegatmatrix} about the pole masses gives
\begin{align}
\hspace*{-1.5mm}\Gamma^i_\pi &= \sqrt{Z_\pi}\,\gamma_5\,\tau_i, ~~
\Gamma^{\mu,i}_\rho = \sqrt{Z_\rho}\,\gamma^\mu\,\tau_i, ~~
\Gamma^\mu_\omega = \sqrt{Z_\omega}\,\gamma^\mu,
\end{align}
where $i$ is an isospin index and the normalization factors are given by
\begin{align}
\label{eq:Zpi}
Z_\pi^{-1} &= -\frac{\partial}{\partial q^2}\,\Pi_{PP}(q^2)\Big\rvert_{q^2 = m_\pi^2}, \\[0.1ex]
Z_{\rho,\omega}^{-1} &= -\frac{\partial}{\partial q^2}\,\Pi_{VV}(q^2)\Big\rvert_{q^2 = m_{\rho,\omega}^2}.
\end{align}
These residues are interpreted as the effective meson-quark-quark coupling constants. Homogeneous Bethe-Salpeter vertex functions are an essential ingredient in, for example, triangle diagrams that determine the meson form factors.
Baryons in the NJL model are naturally described as bound states of three dressed quarks. The properties of these bound states are determined by the relativistic Faddeev equation whose solution gives the Poincar\'e covariant Faddeev amplitude. To construct the interaction kernel of the Faddeev equation we require the elementary quark-quark interaction kernel. Using Fierz transformations to rewrite Eq.~\eqref{eq:njllagrangian} as a sum of $qq$ interactions, keeping only the isoscalar--scalar ($0^+,T=0$) and isovector--axialvector ($1^+,T=1$) two-body channels, the NJL interaction Lagrangian takes the form
\begin{multline}
\mathcal{L}_{I,qq} = G_s \Bigl[\bar{\psi}\,\gamma_5\, C\,\tau_2\,\beta_A\, \bar{\psi}^T\Bigr]
\Bigl[\psi^T\,C^{-1}\gamma_5\,\tau_2\,\beta_A\, \psi\Bigr] \\
+ G_a \Bigl[\bar{\psi}\,\gamma_\mu\,C\,\tau_i\tau_2\,\beta_A\, \bar{\psi}^T\Bigr]
\Bigl[\psi^T\,C^{-1}\gamma^{\mu}\,\tau_2\tau_i\, \beta_A\, \psi\Bigr],
\label{eq:qqlagrangian}
\end{multline}
where $C = i\gamma_2\gamma_0$ is the charge conjugation matrix and the couplings $G_s$ and $G_a$ give the strength of the scalar and axialvector $qq$ interactions. Because only colour $\bar{3}$ $qq$ states can couple to a third quark to form a colourless three-quark state, we must have $\beta_A = \sqrt{\tfrac{3}{2}}\,\lambda_A~(A=2,5,7)$~\cite{Ishii:1995bu}. The Lagrangian of Eq.~\eqref{eq:qqlagrangian} gives the following elementary $qq$ interaction kernel
\begin{align}
\mathcal{K}_{\alpha\beta,\gamma\delta} &= 4i\,G_s\left(\gamma_5\,C\,\tau_2\,\beta_A\right)_{\alpha\beta}\left(C^{-1}\,\gamma_5\,\tau_2\,\beta_{A}\right)_{\gamma\delta} \nonumber \\
&\hspace{-5mm}
+ 4i\,G_a\left(\gamma_\mu\,C\,\tau_i\tau_2\,\beta_A\right)_{\alpha\beta}\left(C^{-1}\,\gamma^\mu\,\tau_2\tau_i\,\beta_{A}\right)_{\gamma\delta}.
\label{eq:qqkernel}
\end{align}
This kernel has been truncated to support only scalar and axialvector diquark correlations because the pseudoscalar and vector diquark components of the nucleon must predominantly be in $\ell = 1$ states and are therefore suppressed. Pseudoscalar and vector diquarks are also usually found to be considerably heavier than their scalar and axialvector counterparts~\cite{Roberts:2011cf}.
Using Eq.~\eqref{eq:qqkernel} as the interaction kernel in the Faddeev equation allows us to first sum all two-body $qq$ interactions to form the scalar and axialvector diquark $t$-matrices. Diquark correlations in the nucleon are therefore a natural consequence of the strong coupling in the colour $\bar{3}$ quark-quark interaction channel. The BSE in the $qq$ channel for our NJL model reads
\begin{align}
\mathcal{T}(q) = \mathcal{K} +
\frac{1}{2}\int \frac{d^4k}{(2\pi)^4}\, \mathcal{K}\,S(k+q)\,S(-k)\,\mathcal{T}(q),
\label{eq:diquarkbse}
\end{align}
where $\mathcal{K}$ is given in Eq.~\eqref{eq:qqkernel} and there is a symmetry factor of $\tfrac{1}{2}$ relative to the $\bar{q}q$ BSE of Eq.~\eqref{eq:mesonbse}. The solutions to the BSE in the scalar and axialvector diquark channels are
\begin{align}
\mathcal{T}_s(q)_{\alpha\beta,\gamma\delta} &= \left(\gamma_5\,C\,\tau_2\,\beta_A\right)_{\alpha\beta}\tau_s(q)\left(C^{-1}\gamma_5\,\tau_2\,\beta_A\right)_{\gamma\delta}, \\
\mathcal{T}_a(q)_{\alpha\beta,\gamma\delta} &= \left(\gamma_\mu\,C\,\tau_i\tau_2\,\beta_A\right)_{\alpha\beta} \tau^{\mu\nu}_a(q) \left(C^{-1}\gamma_\nu\,\tau_2\tau_i\,\beta_A\right)_{\gamma\delta},
\end{align}
where
\begin{align}
\label{eq:tscalar}
\tau_s(q) &= \frac{-4i\,G_s}{1 + 2\,G_s\,\Pi_{PP}(q^2)}, \\
\label{eq:taxial}
\tau^{\mu\nu}_a(q)
&= \frac{-4i\,G_a}{1+2\,G_a\,\Pi_{VV}(q^2)}
\left[g^{\mu\nu} + 2\,G_a\,\Pi_{VV}(q^2)\,\frac{q^\mu q^\nu}{q^2}\right].
\end{align}
The scalar and axialvector diquark masses are defined as the poles\footnote{In QCD these poles should not exist, since diquarks, as coloured objects, are not part of the physical spectrum. Nevertheless, diquark states play a very important role in many phenomenological studies, for example in the spin and flavour dependence of nucleon PDFs~\cite{Close:1988br,Schreiber:1991qx}. They have also been observed in lattice QCD studies~\cite{Hess:1998sd} as well as model studies of QCD, for example, in the rainbow-ladder truncation of the Dyson-Schwinger equations (DSEs). In the DSE approach diagrams beyond the rainbow-ladder truncation have been shown to remove the pole in the diquark $t$-matrix~\cite{Bender:1996bb}.} in Eqs.~\eqref{eq:tscalar} and \eqref{eq:taxial}, respectively, and the homogeneous Bethe-Salpeter vertices read
\begin{align}
\Gamma_s &= \sqrt{Z_s}\,\gamma_5\,C\,\tau_2\,\beta_A,~ &
\Gamma^{\mu,i}_a &= \sqrt{Z_a}\,\gamma^\mu\,C\,\tau_i\,\tau_2\,\beta_A,
\label{eq:homogeneousBSvertex}
\end{align}
where $i$ is an isospin index. The pole residues are given by
\begin{align}
\label{eq:Zs}
Z_s^{-1} &= -\frac{1}{2}\,\frac{\partial}{\partial q^2}\,\Pi_{PP}(q^2)\Big\rvert_{q^2 = M_s^2}, \\[0.2ex]
Z_a^{-1} &= -\frac{1}{2}\,\frac{\partial}{\partial q^2}\,\Pi_{VV}(q^2)\Big\rvert_{q^2 = M_a^2},
\end{align}
where $M_s$ and $M_a$ are the scalar and axialvector diquark masses. These pole residues are interpreted as the effective diquark--quark-quark couplings.
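In analogy with the meson sector, the diquark masses are obtained from the vanishing of the denominators in Eqs.~\eqref{eq:tscalar} and \eqref{eq:taxial}:
\begin{align*}
1 + 2\,G_s\,\Pi_{PP}(q^2)\big\rvert_{q^2 = M_s^2} = 0, \qquad
1 + 2\,G_a\,\Pi_{VV}(q^2)\big\rvert_{q^2 = M_a^2} = 0.
\end{align*}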
\begin{figure}[t]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{faddeev_vertex_paper.pdf}
\caption{Homogeneous Faddeev equation for the nucleon in the NJL model. The single line represents the quark propagator and the double line the diquark propagators. Both scalar and axialvector diquarks are included in these calculations.}
\label{fig:homofaddeev}
\end{figure}
The homogeneous Faddeev equation is illustrated in Fig.~\ref{fig:homofaddeev}, where
diquark correlations have been made explicit.
The relativistic Faddeev equation in the NJL model has been solved numerically in
Refs.~\cite{Buck:1992wz,Mineo:1999eq,Mineo:2002bg},
where the integrals were regularized using
the Lepage--Brodsky and transverse momentum cutoff schemes.
In the proper-time regularization
scheme used here, solving the Faddeev equation is much more challenging and
we therefore employ the static approximation to the quark exchange kernel.
In this approximation the propagator of the exchanged quark becomes
$S(k) \to -\tfrac{1}{M}$~\cite{Buck:1992wz}.
The nucleon vertex function then takes the form
\begin{align}
&\Gamma_N(p) = \sqrt{-Z_N}\ \Gamma = \sqrt{-Z_N}\ \begin{bmatrix} \Gamma_s(p) \\ \Gamma_a^{\mu,i}(p) \end{bmatrix} \nonumber \\
&\hspace{0mm}
= \sqrt{-Z_N}
\begin{bmatrix} \alpha_1 \\ \left(\alpha_2\,\frac{p^{\mu}}{M_N}\,\gamma_5\ + \alpha_3\,\gamma^\mu\gamma_5\right) \frac{\tau_i}{\sqrt{3}} \end{bmatrix} \chi(t)\,u(p),
\label{eq:faddeevvertex}
\end{align}
where $i$ is an isospin index and $\chi(t)$ is the nucleon isospinor:
\begin{align}
\chi\!\left( \tfrac{1}{2}\right) &= \begin{pmatrix} 1 \\ 0 \end{pmatrix}, &
\chi\!\left(-\tfrac{1}{2}\right) &= \begin{pmatrix} 0 \\ 1 \end{pmatrix}.
\end{align}
The first element in the column vector of Eq.~\eqref{eq:faddeevvertex}
represents the piece of the nucleon vertex function consisting
of a quark and scalar diquark, while the second element
represents the quark and axialvector diquark component.
The nucleon mass is labelled by $M_N$ and the Dirac spinor
is normalized such that $\bar{u}_N\,u_N =1$.
$Z_N$ is the nucleon vertex function normalization and $\alpha_1,~\alpha_2,~\alpha_3$
are obtained by solving the Faddeev equation.
After projection onto positive parity, spin one-half and
isospin one-half, the homogeneous Faddeev equation is given by~\cite{Ishii:1995bu}
\begin{align}
\Gamma_N(p,s) = K(p)\, \Gamma_N(p,s),
\label{eq:faddeev}
\end{align}
which in matrix form reads
\begin{align}
\begin{bmatrix} \Gamma_s \\ \Gamma_a^\mu \end{bmatrix}
= \frac{3}{M}
\begin{bmatrix} \Pi_{Ns} & \sqrt{3}\gamma_{\alpha}\gamma_5\,\Pi^{\alpha\beta}_{Na}\\
\sqrt{3}\gamma_5\gamma^{\mu}\,\Pi_{Ns} & -\gamma_{\alpha}\gamma^{\mu}\,\Pi^{\alpha\beta}_{Na}
\end{bmatrix}
\begin{bmatrix} \Gamma_s \\ \Gamma_{a,\beta} \end{bmatrix}.
\end{align}
The quark-diquark bubble diagrams are defined as
\begin{align}
\label{eq:snucleon}
\Pi_{Ns}(p) &= \int \frac{d^4k}{(2\pi)^4}\, \tau_s(p-k)\,S(k), \\
\label{eq:anucleon}
\Pi^{\mu\nu}_{Na}(p) &= \int \frac{d^4k}{(2\pi)^4}\, \tau^{\mu\nu}_a(p-k)\,S(k).
\end{align}
The vertex normalization of Eq.~\eqref{eq:faddeevvertex} is given by
\begin{align}
Z_N = \left[\overline{\Gamma}\,\frac{\partial\,\Pi_N(p)}{\partial p^2}\,\Gamma\right]_{p^2=M_N^2}^{-1},
\end{align}
where
\begin{align}
\Pi_N(p) = \begin{bmatrix} \Pi_{Ns}(p) & 0 \\ 0 & \Pi^{\alpha\beta}_{Na}(p) \end{bmatrix}.
\label{eq:nucleon_bubble_matrix}
\end{align}
Regulating expressions such as those in Eqs.~\eqref{eq:snucleon}
and \eqref{eq:anucleon} using
the proper-time scheme is tedious. Therefore, to render the Faddeev equation and
form factor calculations tractable we make the pole approximation
to the meson and diquark $t$-matrices, for example,
Eqs.~\eqref{eq:tscalar} and \eqref{eq:taxial} become
\begin{align}
\label{eq:scalarpropagatorpoleform}
\tau_s(q) &\to -\frac{i\,Z_s}{q^2 - M_s^2+i\,\varepsilon}, \\
\label{eq:axialpropagatorpoleform}
\tau^{\mu\nu}_a(q) &\to -\frac{i\,Z_a}{q^2 - M_a^2+i\,\varepsilon}
\left(g^{\mu\nu} - \frac{q^{\mu}q^{\nu}}{M_a^2}\right).
\end{align}
Similar expressions are obtained in the meson sector.
In summary, the model parameters consist of the two regularization scales $\Lambda_{IR}$ and $\Lambda_{UV}$, the dressed quark mass $M$,\footnote{Alternatively, one could specify a current quark mass, as one determines the other through the gap equation.} and the Lagrangian coupling constants $G_\pi$, $G_\rho$, $G_\omega$, $G_s$, $G_a$. The infrared regularization scale is associated with confinement and should therefore be of the order of $\Lambda_{\text{QCD}}$; we choose $\Lambda_{IR} = 0.240\,$GeV, and for the dressed quark mass we take $M = 0.4\,$GeV. The physical pion mass ($m_\pi = 140\,$MeV) and decay constant ($f_\pi = 92\,$MeV) determine $\Lambda_{UV}$ and $G_\pi$. The physical masses of the rho ($m_\rho = 770\,$MeV) and omega ($m_\omega = 782\,$MeV) mesons constrain $G_\rho$ and $G_\omega$, respectively, while the physical nucleon ($M_N = 940\,$MeV) and $\Delta$ ($M_\Delta = 1232\,$MeV) baryon masses determine $G_s$ and $G_a$.\footnote{The relativistic Faddeev equation for the Delta baryon is discussed, for example, in Ref.~\cite{Ishii:1995bu}.} Numerical values are given in Tab.~\ref{tab:parameters}.
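As a simple numerical cross-check of these parameters, the regularized gap equation can be evaluated directly. The following Python sketch assumes the Euclidean proper-time form of Eq.~\eqref{eq:gap}, $M = m + \tfrac{3\,G_\pi M}{\pi^2}\int_{1/\Lambda_{UV}^2}^{1/\Lambda_{IR}^2} d\tau\, e^{-\tau M^2}/\tau^2$, and infers the current quark mass $m$ implied by the values in Tab.~\ref{tab:parameters}; it is illustrative only, not part of the published fit.

```python
import math

# Illustrative cross-check (not the published fit): the proper-time regularized
# NJL gap equation is assumed to take the Euclidean form
#   M = m + (3 G_pi M / pi^2) * Int_{1/L_UV^2}^{1/L_IR^2} dtau exp(-tau M^2)/tau^2.
# Given the Tab. 1 parameters (GeV, GeV^-2) and M = 0.4 GeV, solve for m.

L_IR, L_UV = 0.240, 0.645   # infrared / ultraviolet cutoffs (GeV)
M, G_pi = 0.4, 19.0         # dressed quark mass (GeV), coupling (GeV^-2)

def simpson(f, a, b, n=2000):
    """Composite Simpson's rule with n (even) subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4.0 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2.0 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3.0

# Proper-time integral between the two cutoffs
integral = simpson(lambda t: math.exp(-t * M**2) / t**2,
                   1.0 / L_UV**2, 1.0 / L_IR**2)

m = M * (1.0 - 3.0 * G_pi / math.pi**2 * integral)  # current quark mass (GeV)
print(f"current quark mass m = {1000.0 * m:.1f} MeV")
```

With the tabulated parameters this yields $m$ of roughly 17~MeV, the expected order of magnitude for an NJL current quark mass; the precise value in the published fit depends on the full parameter determination.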
Using the parameters given in Tab.~\ref{tab:parameters}, we obtain the following results for the residues of the two-body $t$-matrices: $Z_\pi = 17.9$, $Z_\rho = 6.96$, $Z_\omega = 6.63$, $Z_s = 11.1$ and $Z_a = 6.73$. For the nucleon vertex function of Eq.~\eqref{eq:faddeevvertex} we find $Z_N = 28.1$ and $(\alpha_1,\,\alpha_2,\,\alpha_3) = (0.55,\,0.05,\,-0.40)$, where the scalar and axialvector diquark masses are $M_s = 0.768\,$GeV and $M_a = 0.903\,$GeV.
\begin{table}[tbp]
\addtolength{\tabcolsep}{5.0pt}
\addtolength{\extrarowheight}{2.2pt}
\begin{tabular}{cccccccc}
\hline\hline
$\Lambda_{IR}$ & $\Lambda_{UV}$ & $M$ & $G_\pi$ & $G_\rho$ & $G_\omega$ & $G_s$ & $G_a$ \\
\hline
0.240 & 0.645 & 0.4 & 19.0 & 11.0 & 10.4 & 5.8 & 4.9 \\
\hline\hline
\end{tabular}
\caption{Model parameters constrained to reproduce the physical
pion, rho and omega masses;
the pion decay constant; and the nucleon and delta baryon masses.
The infrared regulator
and the dressed quark mass are assigned their values \textit{a priori}.
The regularization parameters
and dressed quark mass are in units of GeV,
while the couplings are in units of GeV$^{-2}$.}
\label{tab:parameters}
\end{table}
\section{Nucleon Electromagnetic Current \label{sec:nucleon_current}}
The electromagnetic current of an on-shell nucleon, expressed in terms of the Dirac and Pauli form factors, has the form
\begin{align}
&j^\mu_{\lambda' \, \lambda}(p',p)
= \left<p',\,\lambda'\left|J^\mu_{\text{em}}\right|p,\,\lambda\right> \nonumber \\
&
= \bar{u}(p',\,\lambda')\left[\gamma^\mu\,F_{1}(Q^2) + \frac{i\sigma^{\mu\nu}q_\nu}{2\,M_N}\,F_{2}(Q^2)\right]u(p,\,\lambda),
\end{align}
where $q = p' - p$ is the 4-momentum transfer, $Q^2 \equiv -q^2$, and $\lambda$, $\lambda'$ represent
the initial and final nucleon helicities, respectively. The nucleon's electric and magnetic Sachs form factors~\cite{Ernst:1960zza},
which diagonalize the Rosenbluth cross-section, are then given by
\begin{align}
G_E(Q^2) &= F_{1}(Q^2) - \frac{Q^2}{4\,M_N^2}\,F_{2}(Q^2), \\
G_M(Q^2) &= F_{1}(Q^2) + F_{2}(Q^2).
\end{align}
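These relations are straightforward to check numerically. The following minimal sketch evaluates the Sachs combinations from $F_1$ and $F_2$; the proton inputs $F_1(0)=1$ and $F_2(0)=\kappa_p=1.793$ are standard illustrative values, not results of the present model.

```python
# Sachs form factors from the Dirac and Pauli form factors:
#   G_E = F1 - Q^2/(4 M_N^2) F2,   G_M = F1 + F2
M_N = 0.940  # nucleon mass in GeV

def sachs(F1, F2, Q2):
    """Return (G_E, G_M) at momentum transfer Q2 (GeV^2)."""
    tau = Q2 / (4.0 * M_N ** 2)
    return F1 - tau * F2, F1 + F2

# At Q^2 = 0 the proton inputs F1(0) = 1, F2(0) = kappa_p reproduce
# the charge G_E(0) = 1 and the magnetic moment G_M(0) = 1 + kappa_p.
kappa_p = 1.793  # experimental proton anomalous moment (illustrative input)
G_E0, G_M0 = sachs(1.0, kappa_p, 0.0)
```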
Hadron form factors can be decomposed into a sum over the quark charges multiplied by quark sector form factors, such that
\begin{align}
F_h(Q^2) = \sideset{}{_q}\sum \, e_q\, F_h^q(Q^2).
\label{eq:quark_sectorform_factors}
\end{align}
The quark sector form factors $F^q_h(Q^2)$ represent the contribution of the current quarks of flavour $q$ to the total hadron form factor $F_h(Q^2)$. The proton and neutron form factors expressed in terms of quark sector form factors read
\begin{align}
\label{eq:protonflavoursector}
F_{ip}(Q^2) &= e_u\,F^u_{ip}(Q^2) + e_d\,F^d_{ip}(Q^2) + \ldots \\
\label{eq:neutronflavoursector}
F_{in}(Q^2) &= e_u\,F^u_{in}(Q^2) + e_d\,F^d_{in}(Q^2) + \ldots
\end{align}
where $i = 1,\,2$. Note that in light of the experimental discovery that strange quarks contribute very little to the nucleon electromagnetic form factors~\cite{Young:2006jc,Aniol:2005zf,Armstrong:2005hs,Baunack:2009gy}, we will neglect their contribution to Eqs.~\eqref{eq:protonflavoursector} and \eqref{eq:neutronflavoursector}. Assuming equal $u$ and $d$ current quark masses and neglecting electroweak corrections, the $u$ and $d$ quark sector form factors of the nucleon must satisfy the charge symmetry constraints:
\begin{align}
F^d_{in}(Q^2) = F^u_{ip}(Q^2) \quad \text{and} \quad F^u_{in}(Q^2) = F^d_{ip}(Q^2).
\end{align}
Experimentally, if electroweak and heavy quark effects are small, the $u$ and $d$ quark sector form factors are given accurately by
\begin{align}
\label{eq:flavourseparation}
F^u_{ip} &= 2\,F_{ip} + F_{in}, & F^d_{ip} &= F_{ip} + 2\,F_{in}.
\end{align}
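Eq.~\eqref{eq:flavourseparation} is simply the inversion of Eqs.~\eqref{eq:protonflavoursector} and \eqref{eq:neutronflavoursector} under the charge symmetry constraints. A short consistency check, using hypothetical $Q^2=0$ Dirac inputs, is:

```python
e_u, e_d = 2.0 / 3.0, -1.0 / 3.0  # u and d quark charges

def proton_quark_sectors(F_p, F_n):
    """Proton quark sector form factors, F^u = 2 F_p + F_n and
    F^d = F_p + 2 F_n, from inverting the charge decomposition."""
    return 2.0 * F_p + F_n, F_p + 2.0 * F_n

# hypothetical inputs: F_1p(0) = 1, F_1n(0) = 0
F_u, F_d = proton_quark_sectors(1.0, 0.0)   # gives 2 and 1, the quark numbers

# the decomposition and charge symmetry are recovered exactly:
assert abs(e_u * F_u + e_d * F_d - 1.0) < 1e-12  # proton form factor
assert abs(e_u * F_d + e_d * F_u - 0.0) < 1e-12  # neutron, using F^u_n = F^d_p
```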
Recent accurate data for the neutron form factors have enabled a precise determination of the quark sector proton form factors~\cite{Cates:2011pz}. We will discuss results for these quark sector form factors in Sect.~\ref{sec:nucleon_results}.
The slope of an electromagnetic form factor at $Q^2 = 0$ determines the squared rms charge or magnetic radius of a hadron. Unless stated otherwise, all squared rms radii are defined by
\begin{align}
\left<r^2\right> = -\frac{6}{\eta}\ \frac{\partial\, f(Q^2)}{\partial Q^2}\bigg\lvert_{Q^2 = 0},
\quad
\eta =\begin{cases}
1 & \text{if~}f(0)=0,\\[0.4ex]
f(0) & \text{if~}f(0)\neq 0,
\end{cases}
\label{eq:radii}
\end{align}
where $f(Q^2)$ is an arbitrary form factor. This definition reproduces the standard nucleon results for the charge and magnetic radii defined by the Sachs form factors:
\begin{align}
\label{eq:charge_radius}
\left<r_E^2\right> &= -6\ \ \frac{\partial\, G_E(Q^2)}{\partial Q^2}\Big\lvert_{Q^2 = 0}, \\[0.2ex]
\label{eq:magnetic_radius}
\left<r_M^2\right> &= -\frac{6}{G_M(0)}\ \frac{\partial\, G_M(Q^2)}{\partial Q^2}\Big\lvert_{Q^2 = 0},
\end{align}
but also generalizes to radii defined with respect to the Dirac and Pauli form factors and quark sector form factors. Hadronic radii, in units of fm, will be obtained from the result of Eq.~\eqref{eq:radii} using
\begin{align}
r \equiv \text{sign}\left(\left<r^2\right>\right)\ \sqrt{\left|\left<r^2\right>\right|}.
\end{align}
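Eq.~\eqref{eq:radii} can also be implemented numerically. The sketch below extracts the radius of a hypothetical dipole form factor, $(1+Q^2/\Lambda^2)^{-2}$, for which the analytic result is $\left<r^2\right> = 12/\Lambda^2$; the dipole mass $\Lambda^2 = 0.71\,$GeV$^2$ is the familiar illustrative choice, not a model output.

```python
import math

HBARC = 0.19733  # GeV.fm conversion factor

def rms_radius(f, h=1e-6):
    """Signed rms radius (fm) from the Q^2 -> 0 slope of form factor f,
    following the eta convention of the text; Q^2 is in GeV^2."""
    f0 = f(0.0)
    eta = f0 if f0 != 0.0 else 1.0
    slope = (f(h) - f0) / h                  # forward finite difference
    r2 = -6.0 * slope / eta                  # in GeV^-2
    return math.copysign(math.sqrt(abs(r2)), r2) * HBARC

# dipole with Lambda^2 = 0.71 GeV^2: analytically r = sqrt(12/0.71) ~ 0.81 fm
L2 = 0.71
r_dip = rms_radius(lambda Q2: (1.0 + Q2 / L2) ** -2)
```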
\begin{figure}[t]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{form_factor_diagrams_nucleon_paper.pdf}
\caption{(Colour online) Feynman diagrams representing the nucleon electromagnetic current. The diagram on the left is called the \textit{quark diagram} and the diagram on the right the \textit{diquark diagram}. In the diquark diagram the photon interacts with each quark inside the non-pointlike diquark.}
\label{fig:nucleon_em_current_feynman_diagrams}
\end{figure}
To calculate the nucleon electromagnetic current, and therefore the Dirac and Pauli form factors, one must know how the nucleon described in Sect.~\ref{sec:NJL} couples to the photon, in a manner that guarantees electromagnetic gauge invariance. The necessary Feynman diagrams are illustrated in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams} and a proof of gauge invariance is given in App.~\ref{sec:gaugeinvariance}. We include both scalar and axialvector diquarks in our nucleon wave function and therefore the diagrams in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams} represent six distinct Feynman diagrams. The diagram on the left, referred to as the \textit{quark diagram}, represents the processes where the photon couples to a dressed quark with either a scalar or axialvector diquark as a spectator. The \textit{diquark diagram}, on the right in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams}, represents four Feynman diagrams; the photon can couple to a scalar diquark, an axialvector diquark or cause a transition between these two diquark states. Importantly, in the \textit{diquark diagram} the photon couples to the quarks inside each diquark, thereby resolving internal diquark structure and resulting in, for example, diquarks with a finite size. The coupling of a photon to a dressed quark and to the diquarks is discussed in Sects.~\ref{sec:QPV} and \ref{sec:diquark_results}.
\section{Quark--Photon Vertex \label{sec:QPV}}
The quark-photon vertex in the NJL model, and other field theoretic approaches, is given by the solution to an inhomogeneous BSE. The NJL model version of this equation, consistent with the truncation used in the gap and Bethe-Salpeter equations discussed in Sect.~\ref{sec:NJL}, is represented diagrammatically in Fig.~\ref{fig:quarkphotonvertex}. The large oval represents the quark-photon vertex, $\,\Lambda^\mu_{\gamma Q}(p',p)$, the 4-fermion interaction kernel is given in Eq.~\eqref{eq:qbarqkernel} and the elementary vertex, which gives the inhomogeneous driving term, has the form $\gamma^\mu\,\hat{Q}$ (where $\hat{Q}$ is the quark charge operator). The second equality in Fig.~\ref{fig:quarkphotonvertex} expresses this equation in an equivalent form using the $\bar{q}q$ $t$-matrices.
The quark charge operator is
\begin{align}
\hat{Q} &= \begin{pmatrix} e_u & 0 \\ 0 & e_d\end{pmatrix}
=\frac{1}{6} + \frac{\tau_3}{2},
\end{align}
where $e_u = \tfrac{2}{3}$ and $e_d = -\tfrac{1}{3}$ are the $u$ and $d$ quark charges. The quark-photon vertex therefore has both an isoscalar and isovector component, which may in general be expressed in the form
\begin{align}
\Lambda^\mu_{\gamma Q}(p',p)
&= \frac{1}{6}\,\Lambda^\mu_{\omega}(p',p) + \frac{\tau_3}{2}\,\Lambda^\mu_{\rho}(p',p).
\label{eq:quark_photon_vertex_iso}
\end{align}
The quark-photon vertex, separated into flavour sectors defined by the dressed quarks, reads
\begin{align}
\Lambda^\mu_{\gamma Q}(p',p)
= \Lambda^\mu_{U}(p',p)\ \frac{1+\tau_3}{2} + \Lambda^\mu_{D}(p',p)\ \frac{1-\tau_3}{2}.
\label{eq:quark_photon_vertex}
\end{align}
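Comparing Eqs.~\eqref{eq:quark_photon_vertex_iso} and \eqref{eq:quark_photon_vertex} gives $\Lambda^\mu_U = \tfrac{1}{6}\Lambda^\mu_\omega + \tfrac{1}{2}\Lambda^\mu_\rho$ and $\Lambda^\mu_D = \tfrac{1}{6}\Lambda^\mu_\omega - \tfrac{1}{2}\Lambda^\mu_\rho$. This bookkeeping can be verified in flavour space, treating the Dirac structure as a scalar placeholder and using hypothetical values for the $\omega$ and $\rho$ components:

```python
# represent 2x2 diagonal flavour matrices by their (up, down) entries
def diag_combo(c_I, c_tau3):
    """c_I * identity + c_tau3 * tau_3 in flavour space."""
    return (c_I + c_tau3, c_I - c_tau3)

L_om, L_rho = 0.7, 1.3   # hypothetical omega and rho vertex components
iso_up, iso_dn = diag_combo(L_om / 6.0, L_rho / 2.0)   # isospin decomposition

L_U = L_om / 6.0 + L_rho / 2.0   # dressed up quark component
L_D = L_om / 6.0 - L_rho / 2.0   # dressed down quark component

assert abs(iso_up - L_U) < 1e-12 and abs(iso_dn - L_D) < 1e-12

# pointlike limit L_om = L_rho = 1: the charge operator 1/6 + tau_3/2
q_up, q_dn = diag_combo(1.0 / 6.0, 1.0 / 2.0)   # gives (2/3, -1/3)
```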
In general each dressed quark component of the quark-photon vertex contains contributions from both the $u$ and $d$ current quarks. This will prove important when we consider quark sector form factors and associated charge symmetry constraints. Note that throughout this manuscript we use a capital $Q = (U,\,D)$ to indicate that an object is associated with dressed quarks and a lowercase $q = (u,\,d)$ to represent the current quarks of the NJL and QCD Lagrangians.
\begin{figure}[t]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{bethe_salpeter_photon_quark_v6.pdf}
\caption{(Colour online) Inhomogeneous Bethe-Salpeter equation whose solution gives the quark-photon vertex, represented as the large shaded oval. The small dot is the inhomogeneous driving term, while the shaded circle is the $\bar{q}q$ interaction kernel given in Eq.~\eqref{eq:qbarqkernel}. Only the $\rho$ and $\omega$ interaction channels contribute. This integral equation can equivalently be represented using the elementary quark-photon interaction and the $\rho$ and $\omega$ $t$-matrices, given in Eqs.~\eqref{eq:trho} and \eqref{eq:tomega}. This case is depicted by the second equality.}
\label{fig:quarkphotonvertex}
\end{figure}
The quark-photon vertex has in general 12 Lorentz structures~\cite{Maris:1999bh}, 4 longitudinal and 8 pieces transverse to the photon momentum, where each Lorentz structure is accompanied by a scalar function of the three variables $q^2$, $p'^2$ and $p^2$.\footnote{The 12 Lorentz structures in the quark-photon vertex are not all independent, since, for example, the Ward--Takahashi identity and time reversal invariance place additional constraints.} The standard NJL $\bar{q}q$ interaction kernel, as employed in Sect.~\ref{sec:NJL} for the gap and BSE equations, is momentum independent, which implies that the quark-photon vertex can only depend on the momentum transfer $q = p'-p$, not $p'$ and $p$ separately. Therefore, in this work, the contributions to the vertex functions of Eq.~\eqref{eq:quark_photon_vertex_iso} from the NJL BSE take the form
\begin{align}
\label{eq:quark_photon_vertex_parts_omega}
\Lambda^{(\text{bse})\mu}_{\omega}(q) &= \gamma^\mu + \left(\!\gamma^\mu - \frac{q^\mu\sh{q}}{q^2}\right)\!\hat{F}_{1\omega}(q^2) + \frac{i\sigma^{\mu\nu}q_\nu}{2\,M} F_{2\omega}(q^2), \\
\label{eq:quark_photon_vertex_parts_rho}
\Lambda^{(\text{bse})\mu}_{\rho}(q) &= \gamma^\mu + \left(\!\gamma^\mu - \frac{q^\mu\sh{q}}{q^2}\right)\!\hat{F}_{1\rho}(q^2) + \frac{i\sigma^{\mu\nu}q_\nu}{2\,M} F_{2\rho}(q^2).
\end{align}
With the quark propagator of Eq.~\eqref{eq:quarkpropagator}, these results satisfy the Ward--Takahashi identity:
\begin{align}
q_\mu\,\Lambda^\mu_{\gamma Q}(p',p) &= \hat{Q}\left[S^{-1}(p') - S^{-1}(p)\right],
\end{align}
demanded by $U(1)$ vector gauge invariance.
Current conservation at the hadron level implies that the $q^\mu\sh{q}/q^2$ term in Eqs.~\eqref{eq:quark_photon_vertex_parts_omega} and \eqref{eq:quark_photon_vertex_parts_rho} cannot contribute to hadron form factors. We therefore write our effective vertex as
\begin{align}
\Lambda^{(\text{bse})\mu}_{i}(q) &= \gamma^\mu\,F_{1i}(q^2) + \frac{i\sigma^{\mu\nu}q_\nu}{2\,M}\, F_{2i}(q^2),
\label{eq:onshellquarkvertex}
\end{align}
where $i = (\omega,\,\rho)$ and $F_{1i}(q^2) = 1 + \hat{F}_{1i}(q^2)$. This vertex has the same form as the electromagnetic current for an on-shell spin-half fermion. For a pointlike quark $F_{1\omega}(q^2) = 1 = F_{1\rho}(q^2)$ and $F_{2\omega}(q^2) = 0 = F_{2\rho}(q^2)$. However, interactions in the NJL model not only dynamically generate a dressed quark mass but also generate non-trivial dressed quark form factors.
The inhomogeneous BSE for the quark-photon vertex, depicted in Fig.~\ref{fig:quarkphotonvertex}, has the form
\begin{align}
&\Lambda^\mu_{\gamma Q}(p',p) = \gamma^\mu\left(\frac{1}{6} + \frac{\tau_3}{2}\right) + \sideset{}{_\Omega}\sum \,K_\Omega\ \Omega \nonumber \\
&\hspace{6mm}
\times
\ i\!\int \frac{d^4k}{(2\pi)^4}\ \mathrm{Tr}\left[\bar{\Omega}\,S(k+q)\,\Lambda^\mu_{\gamma Q}(p',p)\,S(k)\right],
\label{eq:quark_photon_bse}
\end{align}
where $\sum_\Omega \,K_\Omega\ \Omega_{\alpha\beta}\,\bar{\Omega}_{\lambda\varepsilon}$ represents the interaction kernel given in Eq.~\eqref{eq:qbarqkernel}. The Dirac and isospin structure of $\Lambda^\mu_{\gamma Q}(p',p)$, given in Eqs.~\eqref{eq:quark_photon_vertex_iso}, \eqref{eq:quark_photon_vertex_parts_omega} and \eqref{eq:quark_photon_vertex_parts_rho}, implies that of the interaction channels in Eq.~\eqref{eq:qbarqkernel} only the isovector--vector, $-2i\,G_\rho\left(\gamma_\mu\vec{\tau}\right)_{\alpha\beta}\left(\gamma^\mu\vec{\tau}\right)_{\gamma\delta}$, and isoscalar--vector, $-2i\,G_\omega\left(\gamma_\mu\right)_{\alpha\beta}\left(\gamma^\mu\right)_{\gamma\delta}$, pieces can contribute.
\begin{figure}[t]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{up_constituent_quark_form_factors.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{down_constituent_quark_form_factors.pdf}}
\caption{(Colour online) \textit{Upper panel:} Dressed up quark form factors: the dashed line is the Dirac form factor obtained from the BSE of Eq.~\eqref{eq:quark_photon_bse}; the solid and dash-dotted lines are respectively the Dirac and Pauli form factors generated by also including the pion loop corrections illustrated in Fig.~\ref{fig:quarkpionphotonvertex}. \textit{Lower panel:} Dressed down quark form factors: each curve represents an analogous form factor to those in the upper panel.}
\label{fig:constituentquarkformfactors}
\end{figure}
The dressed quark form factors obtained from the inhomogeneous BSE, associated with the electromagnetic current of Eq.~\eqref{eq:onshellquarkvertex}, are
\begin{align}
F_{1i}(q^2) = \frac{1}{1 + 2\,G_i\,\Pi_{VV}(q^2)}, \hspace{12mm}
F_{2i}(q^2) = 0,
\label{eq:bseformfactors}
\end{align}
where $i = \omega,\,\rho$. Comparison with Eqs.~\eqref{eq:trho} and \eqref{eq:tomega} indicates that $F_{1\omega}$ and $F_{1\rho}$ have a pole at $q^2 = m_\omega^2$ and $m_\rho^2$, respectively. The NJL BSE kernel of Eq.~\eqref{eq:qbarqkernel} does not generate Pauli form factors for the dressed quarks because it does not
include the tensor--tensor 4-fermion interaction. The dressed up and down quark form factors given by the BSE therefore read
\begin{align}
\label{eq:bseconstituent}
F^{\text{(bse)}}_{1Q}(Q^2) &= \frac{1}{6}\,F_{1\omega}(Q^2) \pm \frac{1}{2}\,F_{1\rho}(Q^2),
\end{align}
where the plus sign is associated with a dressed up quark. The superscript (bse) indicates that these form factors are obtained solely from the BSE.
Results for the dressed quark BSE form factors are illustrated as the dashed lines in Fig.~\ref{fig:constituentquarkformfactors}. A notable feature of these results is that they do not drop to zero as $Q^2 \to \infty$, but instead behave as
\begin{align}
F^{\text{(bse)}}_{1U}(Q^2) &\stackrel{Q^2 \to \infty}{=} e_u, & F^{\text{(bse)}}_{1D}(Q^2) &\stackrel{Q^2 \to \infty}{=} e_d,
\end{align}
signifying that at infinite $Q^2$ the photon interacts with a bare current quark. This result is consistent with QCD expectations based on asymptotic freedom.
Pion loop corrections to the quark-photon vertex will also be considered and treated as a perturbation to the dressed quark form factors obtained from the BSE, as given in Eqs.~\eqref{eq:bseformfactors} and \eqref{eq:bseconstituent}.
In this case the dressed quark propagator receives an additional self-energy correction, illustrated in Fig.~\ref{fig:quarkpionloop}.\footnote{Chiral symmetry as expressed in the NJL Lagrangian of Eq.~\eqref{eq:njllagrangian} demands that the sigma meson be included also; however, since the sigma has charge zero and (in this work) $m_\pi/m_\sigma \simeq 0.18$, these additional corrections are small and will not be included.} In addition to the pionic self-energies on the dressed quarks, pion exchange between quarks should also be included in the two-body kernels that enter the Bethe-Salpeter and Faddeev equations. However, it is straightforward to show that in the limit where the nucleon and $\Delta$ are mass degenerate, including only the self-energy corrections on the dressed quarks yields essentially the correct leading non-analytic behaviour of the electromagnetic form factors as a function of quark mass. Further, in form factor calculations diagrams with a photon coupling to an exchanged pion do not contribute, because of the cancellation between $\pi^+$ and $\pi^-$ exchange. This self-energy is evaluated using a pole approximation, where the external quark is assumed on-mass-shell. The pion loop therefore shifts the dressed quark mass by a constant, giving a quark propagator of the form
\begin{align}
\tilde{S}(k) = Z\,S(k), \qquad Z = 1 + \frac{\partial\, \Sigma(p)}{\partial \sh{p}}\bigg\rvert_{\sh{p}=M},
\end{align}
where $S(k)$ is the usual Feynman propagator for a dressed quark of mass\footnote{When including pion loops on the dressed quarks we renormalize the $G_\pi$ coupling in the NJL Lagrangian to keep the dressed quark mass fixed.} $M$ and the self-energy reads\footnote{The pion--quark-quark vertex in Fig.~\ref{fig:quarkpionloop} can be read directly from the pion $t$-matrix, given by Eq.~\eqref{eq:piontmatrix}, and takes the form $\gamma_5\,\tau_i$. A pseudovector component to the vertex would be generated through $\pi$--$a_1$ mixing in the BSE kernel, however the strength of this vertex is suppressed by $m_\pi/m_{a_1} \sim 0.1$ relative to the dominant pseudoscalar component. Therefore, we do not include a $\pi$--$a_1$ mixing in the pion--quark-quark vertex.}
\begin{align}
\Sigma(p) = -\int \frac{d^4k}{(2\pi)^4}\ \gamma_5\,\tau_i\,S(p-k)\,\gamma_5\,\tau_i\,\tau_\pi(k).
\label{eq:quarkselfenergypion}
\end{align}
When evaluating $\Sigma(p)$ the reduced pion $t$-matrix is approximated by its pole form, that is
\begin{align}
\tau_\pi(k) \to \frac{i\,Z_\pi}{k^2 - m_\pi^2 + i\varepsilon}.
\end{align}
The quark wave function renormalization factor, $Z$, represents the probability of striking a dressed quark without its pion cloud and is essential to maintain charge conservation. Using the parameters in Tab.~\ref{tab:parameters} gives $Z = 0.80$.
\begin{figure}[tbp]
\centering\includegraphics[width=0.6\columnwidth,clip=true,angle=0]{quark_pion_loop_v2.pdf}
\caption{(Colour online) Pion loop contribution to the dressed quark self-energy. The pion couples to the dressed quark via $\gamma_5\,\tau_i$ and the pion $t$-matrix is approximated by its pole form.}
\label{fig:quarkpionloop}
\end{figure}
\begin{figure}[t]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{photon_pion_quark_vertex_paper}
\caption{(Colour online) Pion loop contributions to the quark-photon vertex. The quark wave function renormalization factor $Z$ represents the probability of striking a dressed quark without a pion cloud. In the first two diagrams the photon couples to the dressed quark with a vertex of the general form given by Eq.~\eqref{eq:quark_photon_vertex_iso}; and defined by Eqs.~\eqref{eq:onshellquarkvertex} and \eqref{eq:bseformfactors}. The shaded oval in the third diagram represents the quark--pion vertex, which we approximate by its pole form. It is therefore given by $\left(\ell'+\ell\right)^\mu F_\pi(Q^2)$, where $F_\pi(Q^2)$ is the usual pion form factor (see Eq.~\eqref{eq:pionformfactor} and associated discussion).}
\label{fig:quarkpionphotonvertex}
\end{figure}
The quark electromagnetic current, including pion loops, is illustrated in Fig.~\ref{fig:quarkpionphotonvertex}. Evaluating this current between on-shell dressed quarks gives, for the dressed quark sector currents of Eq.~\eqref{eq:quark_photon_vertex}:
\begin{align}
\Lambda_Q^\mu(p',p) &= \gamma^\mu\,F_{1Q}(Q^2) + \frac{i\sigma^{\mu\nu}q_\nu}{2\,M}\,F_{2Q}(Q^2),
\label{eq:dressedquarkcurrent}
\end{align}
where $Q =(U,\,D)$. The dressed quark form factors read
\begin{align}
\label{eq:f1U_quarkpion}
&F_{1U} = Z\left[\tfrac{1}{6}\,F_{1\omega}+\tfrac{1}{2}\,F_{1\rho}\right]
+ \left[F_{1\omega}-F_{1\rho}\right]f_{1}^{(q)} + F_{1\rho}\, f_{1}^{(\pi)}, \\
&F_{1D} = Z\left[\tfrac{1}{6}\,F_{1\omega}-\tfrac{1}{2}\,F_{1\rho}\right]
+ \left[F_{1\omega}+F_{1\rho}\right]f_{1}^{(q)} - F_{1\rho}\, f_{1}^{(\pi)}, \\
\label{eq:f2U_quarkpion}
&F_{2U} = \left[F_{1\omega}-F_{1\rho}\right] f_{2}^{(q)} + F_{1\rho}\, f_{2}^{(\pi)}, \\
\label{eq:f2D_quarkpion}
&F_{2D} = \left[F_{1\omega}+F_{1\rho}\right] f_{2}^{(q)} - F_{1\rho}\, f_{2}^{(\pi)},
\end{align}
where the $Q^2$ dependence of each form factor has been omitted. The body form factors, $f_{1}^{(q)}$ and $f_{2}^{(q)}$, originate from the second diagram in Fig.~\ref{fig:quarkpionphotonvertex}, while $f_{1}^{(\pi)}$ and $f_{2}^{(\pi)}$ are the body form factors from the third diagram, which also contain the pion body form factor (see discussion associated with Eq.~\eqref{eq:pionformfactor}). These body form factors are illustrated in Fig.~\ref{fig:quarkbodyformfactor}. When evaluating the pion loop diagrams in Figs.~\ref{fig:quarkpionloop} and \ref{fig:quarkpionphotonvertex} we use the proper-time regularization scheme; however, in this case the pions should not be confined and we therefore set $\Lambda_{IR} = 0\,$GeV. This procedure guarantees that the leading-order non-analytic behaviour of the hadron form factors as a function of the pion mass is retained.
\begin{figure}[tbp]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{quark_body_form_factors.pdf}
\caption{(Colour online) Dressed quark body form factors associated with the pion loop corrections, where $f_{1}^{(q)}$ and $f_{2}^{(q)}$ originate from the second diagram in Fig.~\ref{fig:quarkpionphotonvertex} and $f_{1}^{(\pi)}$ and $f_{2}^{(\pi)}$ from the third diagram.}
\label{fig:quarkbodyformfactor}
\end{figure}
\begin{figure}[t]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F1_quark_sector_quark_form_factor.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F2_quark_sector_quark_form_factor.pdf}}
\caption{(Colour online) \textit{Upper panel:} Dressed up quark Dirac form factors, that also include pion cloud effects, separated into quark sectors. The solid line is the $u$-quark sector of the dressed up quark and the dashed line represents $d$-quark sector. \textit{Lower panel:} Dressed up quark Pauli form factors separated into quark sectors. The solid line is the $u$-quark sector and the dashed line the $d$-quark sector.}
\label{fig:constituentquarkflavoursectorformfactors}
\end{figure}
Results for the Dirac and Pauli dressed quark form factors, including pion loop effects, are given in Fig.~\ref{fig:constituentquarkformfactors}. The pion cloud softens the Dirac form factors; however, its most important consequence is the non-zero Pauli form factor for the dressed quarks. At infinite $Q^2$ the dressed quark Dirac form factors now become
\begin{align}
F_{1U}(Q^2) &\stackrel{Q^2 \to \infty}{=} Z\,e_u, & F_{1D}(Q^2) &\stackrel{Q^2 \to \infty}{=} Z\,e_d,
\end{align}
whereas the Pauli form factors vanish for large $Q^2$. We find dressed quark anomalous magnetic moments of
\begin{align}
\kappa_U = 0.10 \qquad \text{and} \qquad \kappa_D = -0.17,
\label{eq:quarkanomalousresults}
\end{align}
defined as $\kappa_Q \equiv F_{2Q}(0)$. The quark charge and magnetic radii, defined with respect to the Sachs form factors and Eq.~\eqref{eq:radii}, take the values
\begin{align}
r^U_E &= 0.59\,\text{fm}, & r^U_M &= 0.60\,\text{fm}, \\
r^D_E &= 0.73\,\text{fm}, & r^D_M &= 0.67\,\text{fm}.
\end{align}
Decomposing the dressed quark form factors into quark sectors gives
\begin{align}
F_{1U}(Q^2) &= e_u\,F^u_{1U}(Q^2) + e_d\,F_{1U}^d(Q^2), \\
F_{1D}(Q^2) &= e_u\,F^u_{1D}(Q^2) + e_d\,F_{1D}^d(Q^2),
\end{align}
where the flavour sector dressed up quark form factors read
\begin{align}
\label{eq:F1uU}
&F^u_{1U} = Z\,\tfrac{1}{2}\left[F_{1\omega} + F_{1\rho}\right]
+ \left[3\,F_{1\omega} - F_{1\rho}\right]f_{1}^{(q)} + F_{1\rho}\, f_{1}^{(\pi)}, \\
&F^d_{1U} = Z\,\tfrac{1}{2}\left[F_{1\omega} - F_{1\rho}\right]
+ \left[3\,F_{1\omega} + F_{1\rho}\right]f_{1}^{(q)} - F_{1\rho}\, f_{1}^{(\pi)}, \\
&F^u_{2U} = \left[3\,F_{1\omega} - F_{1\rho}\right]f_{2}^{(q)} + F_{1\rho}\, f_{2}^{(\pi)}, \\
\label{eq:F2dU}
&F^d_{2U} = \left[3\,F_{1\omega} + F_{1\rho}\right]f_{2}^{(q)} - F_{1\rho}\, f_{2}^{(\pi)}.
\end{align}
The flavour sector dressed down quark form factors are given by
\begin{align}
F_{iD}^u &= F_{iU}^d \qquad \text{and} \qquad F_{iD}^d = F_{iU}^u,
\label{eq:chargesymmetry}
\end{align}
where $i=(1,\,2)$. These results therefore satisfy charge symmetry; they are illustrated in Fig.~\ref{fig:constituentquarkflavoursectorformfactors}. For the quark sector anomalous magnetic moments we find
\begin{align}
\kappa^u_U = 0.02 \qquad \text{and} \qquad \kappa^d_U = -0.25,
\label{eq:quark_sector_amm_dressed_quarks}
\end{align}
and therefore the $d$ current quarks carry the bulk of the dressed up quark anomalous magnetic moment. This will have important implications for the nucleon form factors.
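The quark sector moments in Eq.~\eqref{eq:quark_sector_amm_dressed_quarks} are consistent with the dressed quark moments of Eq.~\eqref{eq:quarkanomalousresults}: weighting by the quark charges, and using the charge symmetry relations of Eq.~\eqref{eq:chargesymmetry} for the down quark, reproduces the quoted values at their stated precision.

```python
e_u, e_d = 2.0 / 3.0, -1.0 / 3.0
k_uU, k_dU = 0.02, -0.25   # quark sector moments of the dressed up quark

k_U = e_u * k_uU + e_d * k_dU   # dressed up quark anomalous moment
k_D = e_u * k_dU + e_d * k_uU   # dressed down quark, via charge symmetry
# k_U ~ 0.097 and k_D ~ -0.173, i.e. 0.10 and -0.17 at quoted precision
```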
\section{Diquark and Meson Form Factors \label{sec:diquark_results}}
Diquark correlations inside the nucleon are critical to our picture of its structure. An essential step in calculating the nucleon form factors is therefore to first determine the interaction of the virtual photon with the diquarks. A further reason to discuss the diquark form factors is that the scalar and axialvector diquarks are the $qq$ analogs of the $\pi$ and $\rho$ mesons.
The electromagnetic current of a diquark is represented by the Feynman diagrams illustrated in Fig.~\ref{fig:diquarkemcurrent} and is expressed as
\begin{align}
&j^\mu(p',p) = i\int \frac{d^4k}{(2\pi)^4} \nonumber \\
&\mathrm{Tr}\left[\overline{\Gamma}(p')\,S(p'+k)\,\Lambda^\mu_{\gamma Q}(p',p)\,S(p+k)\,\Gamma(p)\,S^T(-k)\right],
\label{eq:diquarkemcurrent}
\end{align}
where the superscript $T$ indicates transpose. The Bethe-Salpeter vertices are represented by $\Gamma(p)$ and are given in Eq.~\eqref{eq:homogeneousBSvertex}. The dressed quark-photon vertex $\Lambda^\mu_{\gamma Q}(p',p)$ is given in Eqs.~\eqref{eq:quark_photon_vertex} and \eqref{eq:dressedquarkcurrent}.
Hadron form factors will be determined using the three variants for the dressed quark form factors discussed in Sect.~\ref{sec:QPV} and illustrated, for the non-trivial variants, in Fig.~\ref{fig:constituentquarkformfactors}. Results obtained by treating the dressed quarks as pointlike will be labelled with a superscript (bare), while those obtained using the dressed quark form factors from the BSE, Eq.~\eqref{eq:bseconstituent}, will be labelled with a superscript (bse). Our full results, where the quark form factors also include pion loop corrections, Eqs.~\eqref{eq:f1U_quarkpion}--\eqref{eq:f2D_quarkpion}, carry no superscript label.
\begin{figure}[tbp]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{diquark_form_factor_paper.pdf}
\caption{(Colour online) Feynman diagrams that represent the diquark electromagnetic current. The shaded circles are the diquark Bethe-Salpeter vertices and the shaded oval is the quark-photon vertex. The Feynman diagrams for the meson form factors are analogous; however, the flow of baryon number on one of the quark lines must be reversed.}
\label{fig:diquarkemcurrent}
\end{figure}
The electromagnetic current for a scalar diquark, or any on-shell spin-zero particle, has the general form
\begin{align}
j_s^\mu(p',p) = \left(p'+p\right)^\mu\,F_s(Q^2),
\end{align}
and is therefore parameterized by a single form factor. Evaluating Eq.~\eqref{eq:diquarkemcurrent} for the scalar diquark gives
\begin{multline}
F_s(Q^2) = \left[F_{1U}(Q^2) + F_{1D}(Q^2)\right]f_s^V(Q^2) \\
+ \left[F_{2U}(Q^2) + F_{2D}(Q^2)\right]f_s^T(Q^2),
\end{multline}
where $f_s^V$, $f_s^T$ are the scalar diquark body form factors associated with the vector and tensor photon couplings to the dressed quarks (see Eq.~\eqref{eq:dressedquarkcurrent}). Results for the scalar diquark form factor are given in Fig.~\ref{fig:scalardiquarkff} for the three variants of dressed quark form factors. Vertex corrections introduced by the BSE result in a softer form factor (dashed line) in comparison with results obtained using pointlike dressed quark form factors (dash-dotted line). Including pion loop corrections only slightly alters the scalar diquark form factor (solid line).
\begin{figure}[tbp]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{scalar_diquark_form_factor.pdf}
\caption{(Colour online) Results for the scalar diquark and pion form factors. For the pion we show only the full result; for the scalar diquark we show results for pointlike dressed quarks, for the quark-photon vertex given by the BSE, and for the full case where pion loop corrections are also included.}
\label{fig:scalardiquarkff}
\end{figure}
\begin{figure}[tbp]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{Q2_scalar_diquark_form_factor.pdf}
\caption{(Colour online) Scalar diquark and pion form factors multiplied by $Q^2$. The pion form factor data is from Refs.~\cite{Amendolia:1986wj,Amendolia:1984nz,Horn:2006tm,Tadevosyan:2007yd,Blok:2008jy}.}
\label{fig:Q2scalardiquarkff}
\end{figure}
The $\bar{q}q$ analog of the scalar diquark is the pion, where for the $\pi^+$ the electromagnetic form factor is given by
\begin{multline}
F_\pi(Q^2) = \left[F_{1U}(Q^2) - F_{1D}(Q^2)\right]f_s^V(Q^2) \\
+ \left[F_{2U}(Q^2) - F_{2D}(Q^2)\right]f_s^T(Q^2).
\label{eq:pionformfactor}
\end{multline}
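At $Q^2=0$ these expressions must reduce to the hadron charges. As a sketch of this constraint, assume the body form factors are normalized as $f_s^V(0)=1$ and $f_s^T(0)=0$ (the analog of the normalization quoted later for the axialvector body form factors, stated here as an assumption), and use the dressed quark charges and anomalous moments from the text:

```python
e_u, e_d = 2.0 / 3.0, -1.0 / 3.0
fV0, fT0 = 1.0, 0.0          # assumed body form factor values at Q^2 = 0
F1U0, F1D0 = e_u, e_d        # dressed quark charges (charge conservation)
F2U0, F2D0 = 0.10, -0.17     # dressed quark anomalous moments from the text

F_s0 = (F1U0 + F1D0) * fV0 + (F2U0 + F2D0) * fT0    # scalar [ud] diquark
F_pi0 = (F1U0 - F1D0) * fV0 + (F2U0 - F2D0) * fT0   # pi^+
# F_s(0) = 1/3 and F_pi(0) = 1: the correct [ud] diquark and pi+ charges
```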
The body form factors in Eq.~\eqref{eq:pionformfactor} are the same as those for the scalar diquark, except they are now functions of the pion mass instead of the scalar diquark mass.\footnote{There is also a factor of two because of the different definition for the Bethe-Salpeter normalization given in Eq.~\eqref{eq:Zpi}, compared to that in Eq.~\eqref{eq:Zs}.} We do not include pion loop corrections on the dressed quarks in the case of the pion form factor, because at the hadronic level there is no three-pion vertex. The full result for the pion form factor is given as the dotted curve in Fig.~\ref{fig:scalardiquarkff}. The scalar diquark and pion form factors multiplied by $Q^2$ are presented in Fig.~\ref{fig:Q2scalardiquarkff}, where good agreement with pion form factor data from Refs.~\cite{Amendolia:1986wj,Amendolia:1984nz,Horn:2006tm,Tadevosyan:2007yd,Blok:2008jy} is seen. At large $Q^2$ both form factors plateau, where we find $Q^2\,F_\pi(Q^2) \to 0.48$ and $Q^2\,F_s(Q^2) \to 0.30$. The pion form factor result is consistent with the perturbative QCD prediction~\cite{Lepage:1979zb,Farrar:1979aw}:
\begin{align}
Q^2\,F_\pi(Q^2) \stackrel{Q^2\to \infty}{\longrightarrow} 16\,\pi\,f_\pi^2\,\alpha_s(Q^2),
\label{eq:perturbativepionformfactor}
\end{align}
in the sense that the strong coupling constant, $\alpha_s(Q^2)$, corresponds to a constant in the NJL model and therefore $Q^2\,F_\pi(Q^2)$ should become constant as $Q^2 \to \infty$. Taking Eq.~\eqref{eq:perturbativepionformfactor} literally, our pion form factor result implies $\alpha_s(Q^2) = 1.12$; using an NNLO result for the running coupling~\cite{Gluck:1998xa}, this corresponds to an NJL model scale of $Q_0^2 \sim 0.18\,$GeV$^2$, consistent with previous estimates~\cite{Cloet:2005rt,Cloet:2005pp,Cloet:2006bq}. Our calculated pion form factor reaches its plateau by $Q^2 \simeq 6\,$GeV$^2$, which corresponds to the same scale at which the Dyson-Schwinger equation results
of Ref.~\cite{Chang:2013nia} reach a maximum, after which the result of Ref.~\cite{Chang:2013nia} decreases because of the logarithmic running of $\alpha_s(Q^2)$ in QCD.
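The quoted coupling follows directly from the plateau value and Eq.~\eqref{eq:perturbativepionformfactor}; a one-line check, using the physical $f_\pi = 92\,$MeV, reproduces it up to rounding of the plateau value:

```python
import math

f_pi = 0.092          # pion decay constant in GeV
plateau = 0.48        # large-Q^2 value of Q^2 F_pi(Q^2) in GeV^2

# invert Q^2 F_pi -> 16 pi f_pi^2 alpha_s for the effective coupling
alpha_s = plateau / (16.0 * math.pi * f_pi ** 2)
# alpha_s ~ 1.13, consistent with the quoted 1.12
```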
\begin{table}[tbp]
\addtolength{\tabcolsep}{5pt}
\addtolength{\extrarowheight}{2.2pt}
\begin{tabular}{l|cccc}
\hline\hline
& $r^{\text{(bare)}}_E$ & $r^{\text{(bse)}}_E$ & $r_E$ & $r^{\text{exp}}_E$ \\
\hline
scalar diquark & 0.46 & 0.62 & 0.63 & \\
pion & 0.46 & 0.62 & 0.62 & 0.663 $\pm$ 0.006 \\
\hline\hline
\end{tabular}
\caption{Charge radii for the scalar diquark and pion,
each shown for the three variants for the dressed quark form factors.
The experimental value for the pion is from
Refs.~\cite{Dally:1982zk,Amendolia:1986wj}. All radii are in units of fm.}
\label{tab:pion_scalar_diquark_radii}
\end{table}
Results for the scalar diquark and pion charge radii are given in Tab.~\ref{tab:pion_scalar_diquark_radii}, for the three variants of the dressed quark form factors. The charge radii of the pion and scalar diquark are found to be very similar, with the pion radius approximately 5\% smaller than the experimental value from Refs.~\cite{Dally:1982zk,Amendolia:1986wj}.
The electromagnetic current for an axialvector diquark, or any on-shell spin-one particle, has the general form~\cite{Frankfurt:1977vc}
\begin{align}
j_a^{\mu,\alpha\beta}(p',p) &= \left[g^{\alpha\beta}F_{1a}(Q^2) - \frac{q^\alpha q^\beta}{2\,M_a^2}\,F_{2a}(Q^2)\right]\left(p'+p\right)^\mu \nonumber \\
&\hspace{14mm}
- \left(q^\alpha g^{\mu\beta} - q^\beta g^{\mu\alpha}\right)F_{3a}(Q^2),
\label{eq:axialcurrent}
\end{align}
where the Lorentz indices $\mu$, $\alpha$, $\beta$ represent the polarizations of the photon, initial axialvector diquark and final axialvector diquark, respectively. The Lorentz covariant form factors of Eq.~\eqref{eq:axialcurrent} are often re-expressed as the Sachs-like charge, magnetic and quadrupole form factors for a spin-one particle, given by
\begin{align}
\label{eq:gc}
G_C(Q^2) &= F_1(Q^2) + \frac{2}{3}\,\eta\,G_Q(Q^2), \\
G_M(Q^2) &= F_3(Q^2), \\
\label{eq:gq}
G_Q(Q^2) &= F_1(Q^2) + \left(1 + \eta\right)F_2(Q^2) - F_3(Q^2),
\end{align}
where $\eta = \frac{Q^2}{4\,m_H^2}$ and $m_H$ is the relevant hadron mass. At $Q^2 = 0$ these form factors give, respectively,
the charge, magnetic moment and quadrupole moment of a spin-one particle, in units of $e$, $e/(2\,m_H)$ and $e/m_H^2$. The charge, magnetic and quadrupole radii -- $\left<r_C^2\right>$, $\left<r_M^2\right>$, $\left<r_Q^2\right>$ -- are defined with respect to these Sachs-like form factors.
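The map from the Lorentz covariant form factors to the Sachs-like set is the simple linear one of Eqs.~\eqref{eq:gc}--\eqref{eq:gq}. As a minimal sketch, the snippet below evaluates it (computing $G_Q$ first, since $G_C$ depends on it) and checks the canonical pointlike spin-one values, charge $1$, $\mu = 2$ and $\mathcal{Q} = -1$, which correspond to $F_1 = 1$, $F_2 = 0$, $F_3 = 2$ at $Q^2 = 0$:

```python
def sachs_spin_one(F1, F2, F3, Q2, mH):
    """Sachs-like charge, magnetic and quadrupole form factors of a
    spin-one particle of mass mH, following Eqs. (gc)-(gq)."""
    eta = Q2 / (4.0 * mH ** 2)
    GQ = F1 + (1.0 + eta) * F2 - F3
    GM = F3
    GC = F1 + (2.0 / 3.0) * eta * GQ
    return GC, GM, GQ

# Pointlike spin-one particle at Q^2 = 0: charge 1, mu = 2, quadrupole -1
GC, GM, GQ = sachs_spin_one(1.0, 0.0, 2.0, Q2=0.0, mH=1.0)
assert (GC, GM, GQ) == (1.0, 2.0, -1.0)
```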
\begin{figure}[t]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{axial_vector_body_form_factors.pdf}
\caption{(Colour online) Axialvector diquark body form factors. These body form factors must still be multiplied by the appropriate dressed quark form factors to obtain the axialvector diquark form factors.}
\label{fig:axialdiquarkbodyformfactors}
\end{figure}
Evaluating the Feynman diagrams of Fig.~\ref{fig:diquarkemcurrent}, using the axialvector diquark Bethe-Salpeter vertex given in Eq.~\eqref{eq:homogeneousBSvertex} and the quark-photon vertex of Eq.~\eqref{eq:quark_photon_vertex} gives, for an axialvector diquark with quark content $\{ud\}$, the form factor result
\begin{align}
F^{\{ud\}}_{ia}(Q^2) &= \left[F_{1U}(Q^2) + F_{1D}(Q^2)\right]f_{i}^V(Q^2) \nonumber \\
&\,+ \left[F_{2U}(Q^2) + F_{2D}(Q^2)\right]f_{i}^T(Q^2),
\label{eq:axialvectorffs}
\end{align}
where $i \in 1,\,2,\,3$ correspond to the form factors in Eq.~\eqref{eq:axialcurrent}. Expressions for axialvector diquarks of the $\{uu\}$ and $\{dd\}$ type are simply given by Eq.~\eqref{eq:axialvectorffs} with the appropriate substitution of the dressed quark form factors. The vector and tensor body form factors, $f_{i}^V$ and $f_{i}^T$, are illustrated in Fig.~\ref{fig:axialdiquarkbodyformfactors}. A notable feature of these form factors is that charge conservation implies $f_1^V(0) = 1$ and $f_1^T(0) = 0$. The magnetic moment equals $f_3^V(0) = 2.09$, which, because of relativistic effects, is slightly larger than the canonical value of $\mu_1 = 2$ for a spin-one particle. For the quadrupole moment the body form factors imply $\mathcal{Q} = -0.83$, which, because of relativistic effects, is about 17\% smaller in magnitude than the canonical value of $\mathcal{Q} = -1$.
\begin{figure}[tbp]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{axial_vector_ud_F1_form_factor.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{axial_vector_ud_F2_form_factor.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{axial_vector_ud_F3_form_factor.pdf}}
\caption{(Colour online) Form factors for an axialvector diquark with quark content $\{ud\}$.}
\label{fig:axialdiquarkffs}
\end{figure}
Results for the form factors of Eq.~\eqref{eq:axialcurrent} for an axialvector diquark with quark content $\{ud\}$ are presented in Fig.~\ref{fig:axialdiquarkffs}. In each case the vertex dressing from the quark-photon inhomogeneous BSE results in a softening of the form factors, compared to the case of pointlike dressed quarks. Although the pion loop effects leave $F_{1a}$ almost unchanged, both $F_{2a}$ and $F_{3a}$ receive sizeable negative corrections. The origin of these corrections can be traced back to Eq.~\eqref{eq:axialcurrent} and the results in Fig.~\ref{fig:axialdiquarkbodyformfactors}. The tensor body form factors $f_2^T$ and $f_3^T$ are large and positive for small $Q^2$.
This, together with the large negative anomalous magnetic moment of the dressed down quark (see Eq.~\eqref{eq:quarkanomalousresults}),
results in sizeable corrections to $F_{2a}$ and $F_{3a}$ from pion loop effects.
\begin{table*}[t]
\addtolength{\tabcolsep}{4.9pt}
\addtolength{\extrarowheight}{2.2pt}
\begin{tabular}{l|ccccccccccccccccccc}
\hline\hline
& $\mu^{\text{(bse)}}$ & $\mu$ & & $\mathcal{Q}^{\text{(bse)}}$ & $\mathcal{Q}$ & & $r^{\text{(bse)}}_C$ & $r_C$ & & $r^{\text{(bse)}}_M$ & $r_M$ & & $r^{\text{(bse)}}_Q$ & $r_Q$ \\[0.2ex]
\hline
$\{uu\}$ axialvector diquark & \ph{-}2.78 & \ph{-}3.14 && -1.10 & -1.20 && 0.65 & 0.76 & & 0.61 & 0.74 & & 0.61 & 0.74 \\
$\{ud\}$ axialvector diquark & \ph{-}0.70 & \ph{-}0.55 && -0.28 & -0.24 && 0.37 & 0.38 & & 0.60 & 0.62 & & 0.61 & 0.64 \\
$\{dd\}$ axialvector diquark & -1.39 & -2.04 && \ph{-}0.55 & \ph{-}0.73 && 0.65 & 0.84 & & 0.61 & 0.80 & & 0.62 & 0.79 \\
$\rho^+$ meson & \ph{-}2.08 & \ph{-}2.57 && -0.87 & -1.06 && 0.67 & 0.82 & & 0.62 & 0.77 & & 0.62 & 0.77 \\
\hline\hline
\end{tabular}
\caption{Results for the magnetic moment, quadrupole moment
and the charge, magnetic and quadrupole radii of
the axialvector diquarks and $\rho^+$ meson.
In each case we present results for various levels of sophistication for
the constituent quark form factors.
All radii are in units of fm, the magnetic moment has units $e/(2\,m_H)$ and
the quadrupole moment $e/m_H^2$, where $m_H$ is the
mass of the relevant diquark or meson.}
\label{tab:rho_axial_diquark_moments}
\end{table*}
The Lorentz covariant form factors for the $\rho^+$ meson, associated with the current of Eq.~\eqref{eq:axialcurrent}, are given by
\begin{align}
F_{i\rho}(Q^2) &= \left[F_{1U}(Q^2) - F_{1D}(Q^2)\right]f_{i}^V(Q^2) \nonumber \\
&\,+ \left[F_{2U}(Q^2) - F_{2D}(Q^2)\right]f_{i}^T(Q^2),
\label{eq:rhoformfactors}
\end{align}
where $i \in 1,\,2,\,3$ and the body form factors are now functions of the rho mass instead of the axialvector diquark mass.
Results for the Sachs-like spin-one form factors defined in Eqs.~\eqref{eq:gc}--\eqref{eq:gq} are illustrated in Fig.~\ref{fig:axialvectorsachsformfactor} for a $\{ud\}$--type axialvector diquark and the $\rho^+$ meson. In this figure we show only the full results, which include pion cloud effects. The zero in the charge form factor occurs at $Q^2 \simeq 6.6\,$GeV$^2$ for $G^{\{ud\}}_{C}$ and at $Q^2 \simeq 2.6\,$GeV$^2$ for $G^{\rho^+}_{C}$.
Static properties of the $\rho^+$ and the axialvector diquarks are given in Tab.~\ref{tab:rho_axial_diquark_moments} for the variants of the dressed quark form factors. We find that pion loop effects have a substantial impact on the static properties of the axialvector diquarks and rho mesons. For example, the pion cloud increases the magnitude of the $\rho^+$ magnetic moment by 24\% and the quadrupole moment by 22\%,
while for the $\{ud\}$--type axialvector diquarks we find a reduction of the magnetic moment by 21\% and the magnitude of the quadrupole moment by 14\%. The sign difference between these corrections for the $\rho^+$ and $\{ud\}$--type axialvector diquark arises because the dressed down quark form factors enter the respective currents with the opposite sign -- see Eqs.~\eqref{eq:axialvectorffs} and \eqref{eq:rhoformfactors} -- and the dressed down quark has a large anomalous magnetic moment. For the $\rho^+$ meson the pion cloud uniformly increases the charge, magnetic and quadrupole radii by approximately 16\%, whereas for the $\{ud\}$--type axialvector diquarks the pion cloud has little effect on the charge and quadrupole radii but increases the magnetic radius by 38\%.
\begin{figure}[t]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{axial_vector_ud_sachs_form_factors.pdf}} \\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{rho_plus_sachs_form_factors.pdf}}
\caption{(Colour online) \textit{Upper panel:} results for the charge, magnetic and quadrupole form factors of a $\{ud\}$ axialvector diquark.
\textit{Lower panel}: results for the charge, magnetic and
quadrupole form factors of a $\rho^+$ meson.}
\label{fig:axialvectorsachsformfactor}
\end{figure}
As an interesting check on the large $Q^2$ behaviour of our rho or axialvector diquark form factor results,
we make a comparison with the relations derived in Ref.~\cite{Brodsky:1992px}. That is, at large timelike or spacelike momenta, the ratio of the form factors for a spin-one particle should behave as
\begin{align}
G_C(Q^2):G_M(Q^2):G_Q(Q^2) = \left(1 - \tfrac{2}{3}\eta\right):\,2\,:\,-1,
\label{eq:hillerbrodsky}
\end{align}
where corrections are of the order $\Lambda_{\text{QCD}}/Q$ and $\Lambda_{\text{QCD}}/M_\rho$. For our spin-one results we find that the $G_C/G_Q$ constraint is satisfied to better than $15\%$ for $Q^2 = 10\,$GeV$^2$, to better than $3\%$ for $Q^2=100\,$GeV$^2$, and for $Q^2 > 1000\,$GeV$^2$ our result takes the value given in Eq.~\eqref{eq:hillerbrodsky}. The calculated ratios $G_C/G_M$ and $G_M/G_Q$
saturate within $15\%$ of the values in Eq.~\eqref{eq:hillerbrodsky}. However, this deviation is well within the leading correction of $\Lambda_{\text{QCD}}/M_\rho \sim 0.3$.
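The content of Eq.~\eqref{eq:hillerbrodsky} can also be stated at the level of the Lorentz covariant form factors: inserting the asymptotic ratios $F_1 : F_2 : F_3 = 1 : 0 : 2$ into Eqs.~\eqref{eq:gc}--\eqref{eq:gq} reproduces the universal Sachs ratios at every $\eta$. This is our own consistency sketch of the algebra, not a result of Ref.~\cite{Brodsky:1992px}:

```python
def sachs_spin_one(F1, F2, F3, eta):
    """Eqs. (gc)-(gq), written in terms of eta = Q^2/(4 m_H^2)."""
    GQ = F1 + (1.0 + eta) * F2 - F3
    GM = F3
    GC = F1 + (2.0 / 3.0) * eta * GQ
    return GC, GM, GQ

# F1 : F2 : F3 = 1 : 0 : 2 gives G_C : G_M : G_Q = (1 - 2 eta/3) : 2 : -1
for eta in (0.5, 2.0, 10.0):
    GC, GM, GQ = sachs_spin_one(1.0, 0.0, 2.0, eta)
    assert abs(GC - (1.0 - 2.0 * eta / 3.0)) < 1e-12
    assert (GM, GQ) == (2.0, -1.0)
```

Written this way, the asymptotic ratios make explicit that $G_C$ develops a zero at $\eta = 3/2$, that is, at $Q^2 = 6\,m_H^2$.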
The remaining diquark electromagnetic current that contributes to the nucleon form factors is the transition current between scalar and axialvector diquarks. This current has the form
\begin{align}
j^{\mu,\alpha}_{sa}(p',p) &= \pm\ \frac{1}{M_s+M_a}\ i\varepsilon^{\alpha\mu\sigma\lambda}p_\sigma'p_\lambda\,F_{sa}(Q^2),
\label{eq:satransition}
\end{align}
where the plus sign indicates a scalar $\to$ axialvector transition and the reverse process has the minus sign. The Lorentz indices $\mu$ and $\alpha$ represent the polarizations of the photon and the axialvector diquark. Evaluating the Feynman diagram of Fig.~\ref{fig:diquarkemcurrent} for this transition process gives
\begin{align}
F_{sa}(Q^2) &= \left[F_{1U}(Q^2) - F_{1D}(Q^2)\right]f_{sa}^V(Q^2) \nonumber \\
&\,+ \left[F_{2U}(Q^2) - F_{2D}(Q^2)\right]f_{sa}^T(Q^2),
\end{align}
where $f_{sa}^V(Q^2)$ and $f_{sa}^T(Q^2)$ are the vector and tensor body form factors. The electromagnetic transition form factor describing the $\gamma^*\pi^+ \to \rho^+$ process is given by
\begin{align}
F_{\pi\rho}(Q^2) &= \left[F_{1U}(Q^2) + F_{1D}(Q^2)\right]f_{sa}^V(Q^2) \nonumber \\
&\,+ \left[F_{2U}(Q^2) + F_{2D}(Q^2)\right]f_{sa}^T(Q^2),
\end{align}
where the body form factors are now functions of the $\pi$ and $\rho$ masses. Results for $F_{sa}$ and $F_{\pi\rho}$ are presented
in Fig.~\ref{fig:mixingdiquarkformfactor}. The vertex dressing from the BSE produces a softer form factor, and for the
diquark transition the large isovector combination of the constituent quark Pauli form factors, arising from the pion cloud, gives a sizeable correction for $Q^2 \lesssim 1\,$GeV$^2$. Results for the transition moment and transition radius are given in Tab.~\ref{tab:transition_form_factors}.
\begin{figure}[t]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{mixing_diquark_form_factor.pdf}
\caption{(Colour online) Results for the
scalar $\leftrightarrow$ axialvector diquark and $\pi \leftrightarrow \rho$
electromagnetic transition form factors.}
\label{fig:mixingdiquarkformfactor}
\end{figure}
\begin{table}[t]
\addtolength{\tabcolsep}{9.5pt}
\addtolength{\extrarowheight}{2.2pt}
\begin{tabular}{l|ccccccc}
\hline\hline
& $\kappa_T^{\text{(bse)}}$ & $\kappa_T$ & & $r_T^{\text{(bse)}}$ & $r_T$ \\
\hline
$s \leftrightarrow a$ & 2.66 & 3.61 & & 0.75 & 0.99 \\
$\pi \leftrightarrow \rho$ & 0.62 & 0.49 & & 0.54 & 0.54 \\
\hline\hline
\end{tabular}
\caption{Results for the transition moment, defined as $\kappa_T \equiv F(0)$, and the transition radius (normalized by $\kappa_T$) for the scalar $\leftrightarrow$ axialvector diquark and pion $\leftrightarrow$ rho transitions. Radii are in units of fm.}
\label{tab:transition_form_factors}
\end{table}
\section{Nucleon Form Factor Results \label{sec:nucleon_results}}
The Feynman diagrams that contribute to the nucleon's electromagnetic current
are illustrated in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams},
where the coupling of the photon to the dressed quarks and diquarks has been discussed
in Sects.~\ref{sec:QPV} and \ref{sec:diquark_results}, respectively. Using a
quark-photon vertex of the form given in Eq.~\eqref{eq:quark_photon_vertex} demarcates the nucleon form factors into flavour sectors defined by the dressed quarks, such that
\begin{align}
\label{eq:protondirac}
F_{ip}(Q^2) &= F_{ip}^U(Q^2) + F_{ip}^D(Q^2), \\
F_{in}(Q^2) &= F_{in}^U(Q^2) + F_{in}^D(Q^2),
\end{align}
where $i=(1,\,2)$. The dressed quark flavour sector nucleon form factors
are given by the product of dressed quark form factors
(e.g. Eqs.~\eqref{eq:f1U_quarkpion}--\eqref{eq:f2D_quarkpion}) with the nucleon
body form factors, such that
\begin{align}
\label{eq:FpQ}
F_{ip}^Q &= F_{1Q}\,f_{ip}^{Q,V} + F_{2Q}\,f_{ip}^{Q,T}, \\
\label{eq:FnQ}
F_{in}^Q &= F_{1Q}\,f_{in}^{Q,V} + F_{2Q}\,f_{in}^{Q,T},
\end{align}
where $Q=(U,\,D)$ and the $Q^2$ dependence of each form factor has been omitted. The superscript $V$ indicates a vector body form factor and the superscript $T$ a tensor body form factor, which arise from the quark current of Eq.~\eqref{eq:dressedquarkcurrent}.
\begin{figure}[t]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F1_body_form_factors_vector.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F2_body_form_factors_vector.pdf}}
\caption{(Colour online) Nucleon Dirac (\textit{upper panel}) and Pauli (\textit{lower panel}) body form factors which result from a vector coupling to the quarks in the Feynman diagrams of Fig.~\ref{fig:nucleon_em_current_feynman_diagrams}. To obtain their contribution to the nucleon form factors these results must be multiplied by the appropriate isospin factors, as in Eqs.~\eqref{eq:f1U_V} and \eqref{eq:f1D_V}, and the dressed quark Dirac form factors.}
\label{fig:vector_body_form_factors}
\end{figure}
\begin{figure}[t]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F1_body_form_factors_tensor.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F2_body_form_factors_tensor.pdf}}
\caption{(Colour online) Nucleon Dirac (\textit{upper panel}) and Pauli (\textit{lower panel}) body form factors which result from a tensor coupling to the quarks in the Feynman diagrams of Fig.~\ref{fig:nucleon_em_current_feynman_diagrams}. To obtain their contribution to the nucleon form factors these results must be multiplied by the appropriate isospin factors, as in Eqs.~\eqref{eq:f1U_V} and \eqref{eq:f1D_V}, and the dressed quark Pauli form factors.}
\label{fig:tensor_body_form_factors}
\end{figure}
\begin{table*}[tbp]
\addtolength{\tabcolsep}{4.5pt}
\addtolength{\extrarowheight}{2.0pt}
\begin{tabular}{c|cccccccccc|cccc}
\hline\hline
& $f^{s,V}_{i\mathcal{Q}}$ & $f^{a,V}_{i\mathcal{Q}}$ & $f^{s,V}_{i\mathcal{D}}$ & $f^{a,V}_{i\mathcal{D}}$ & $f^{sa,V}_{i\mathcal{D}}$ &
$f^{s,T}_{i\mathcal{Q}}$ & $f^{a,T}_{i\mathcal{Q}}$ & $f^{s,T}_{i\mathcal{D}}$ & $f^{a,T}_{i\mathcal{D}}$ & $f^{sa,T}_{i\mathcal{D}}$ &
$f^{U,V}_{ip}$ & $f^{D,V}_{ip}$ & $f^{U,T}_{ip}$ & $f^{D,T}_{ip}$ \\[0.6ex]
\hline
Dirac & 0.688 & \ph{-}0.312 & \ph{-}0.688 & 0.312 & 0 & 0 & 0 & 0 & 0 & 0 & 2\ph{.00} & \ph{-}1\ph{.00} & 0\ph{.00} & \ph{-}0\ph{.00}\\
Pauli & 1.134 & -0.451 & -0.546 & 0.472 & 0.666 & 1.482 & 0.008 & 0.0 & 0.659 & 0.893 & 1.61 & -1.07 & 3.10 & -0.29\\
\hline\hline
\end{tabular}
\caption{Nucleon Dirac and Pauli body form factors evaluated at $Q^2 = 0$. The subscript $i=1,2$ corresponds to either the first or second row of the Table. An entry with only one significant figure takes that exact value because of charge conservation. The last four columns give results for the vector and tensor versions of Eqs.~\eqref{eq:f1U_V}--\eqref{eq:f1D_V} at $Q^2 = 0$. To obtain nucleon form factor results at $Q^2 = 0$ these results must be multiplied by the appropriate quark charge for the vector coupling diagrams and by the appropriate dressed quark anomalous magnetic moment for the tensor coupling diagrams.}
\label{tab:nucleon_body_form_factors}
\end{table*}
The proton body form factors in Eq.~\eqref{eq:FpQ}, which represent the sum of the six Feynman diagrams of Fig.~\ref{fig:nucleon_em_current_feynman_diagrams}, have the structure
\begin{align}
\label{eq:f1U_V}
\hspace*{-1.5mm}f_{ip}^{U,V} &= f^{s,V}_{i\mathcal{Q}} + \tfrac{1}{3}f^{a,V}_{i\mathcal{Q}} + f^{s,V}_{i\mathcal{D}} + \tfrac{5}{3}f^{a,V}_{i\mathcal{D}} + \tfrac{1}{\sqrt{3}}f^{sa,V}_{i\mathcal{D}}, \\
\label{eq:f1D_V}
\hspace*{-1.5mm}f_{ip}^{D,V} &= \hspace{11mm} \tfrac{2}{3}f^{a,V}_{i\mathcal{Q}} + f^{s,V}_{i\mathcal{D}} + \tfrac{1}{3}f^{a,V}_{i\mathcal{D}}
- \tfrac{1}{\sqrt{3}}f^{sa,V}_{i\mathcal{D}}.
\end{align}
For equal current quark masses the neutron body form factors in Eq.~\eqref{eq:FnQ} are given by
\begin{align}
f_{in}^{D,V} = f_{ip}^{U,V} \quad \text{and} \quad f_{in}^{U,V} = f_{ip}^{D,V},
\label{eq:neutronbody}
\end{align}
and therefore the nucleon body form factors satisfy the constraints imposed by charge symmetry. Expressions for the nucleon tensor body form factors are obtained from Eqs.~\eqref{eq:f1U_V}--\eqref{eq:neutronbody} with $V \to T$. The nomenclature for these nucleon body form factors is as follows: a subscript $\mathcal{Q}$ implies that the photon couples directly to a quark (\textit{quark diagram}) and a subscript $\mathcal{D}$ implies that the photon couples to (a quark inside) a diquark (\textit{diquark diagram}); a superscript $s$ indicates
that the diagram contains only a scalar diquark, a superscript $a$ that it contains only an axialvector diquark, and a superscript $sa$ implies the sum of the two diagrams in which the photon induces a transition between scalar and axialvector diquarks. The numerical coefficients in Eqs.~\eqref{eq:f1U_V} and \eqref{eq:f1D_V} arise from the isospin structure of the proton Faddeev vertex and the quark-photon vertex, given in Eqs.~\eqref{eq:faddeevvertex} and \eqref{eq:quark_photon_vertex}, respectively.
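The isospin weights in Eqs.~\eqref{eq:f1U_V} and \eqref{eq:f1D_V} can be checked numerically against the $Q^2 = 0$ entries of Table~\ref{tab:nucleon_body_form_factors}; the sketch below also verifies charge conservation, using the dressed quark charges $e_U = 2/3$ and $e_D = -1/3$ for the vector coupling:

```python
import math

def flavour_body(fsQ, faQ, fsD, faD, fsaD):
    """Proton U- and D-sector body form factors, Eqs. (f1U_V)-(f1D_V)."""
    fU = fsQ + faQ / 3.0 + fsD + 5.0 * faD / 3.0 + fsaD / math.sqrt(3.0)
    fD = 2.0 * faQ / 3.0 + fsD + faD / 3.0 - fsaD / math.sqrt(3.0)
    return fU, fD

# Q^2 = 0 entries of Table (nucleon_body_form_factors), vector coupling
f1U, f1D = flavour_body(0.688, 0.312, 0.688, 0.312, 0.0)      # Dirac
f2U, f2D = flavour_body(1.134, -0.451, -0.546, 0.472, 0.666)  # Pauli
assert abs(f1U - 2.0) < 1e-12 and abs(f1D - 1.0) < 1e-12
assert round(f2U, 2) == 1.61 and round(f2D, 2) == -1.07

# Tensor coupling, Pauli row of the Table
f2Ut, f2Dt = flavour_body(1.482, 0.008, 0.0, 0.659, 0.893)
assert round(f2Ut, 2) == 3.10 and round(f2Dt, 2) == -0.29

# Charge conservation: e_U * f1U + e_D * f1D gives the nucleon charges
assert abs(2.0 / 3.0 * f1U - 1.0 / 3.0 * f1D - 1.0) < 1e-12   # proton
assert abs(2.0 / 3.0 * f1D - 1.0 / 3.0 * f1U) < 1e-12         # neutron
```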
Nucleon body form factor results for each diagram
in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams},
as expressed by Eqs.~\eqref{eq:f1U_V}--\eqref{eq:f1D_V}, are presented
in Fig.~\ref{fig:vector_body_form_factors} for the vector coupling to the dressed
quarks and in Fig.~\ref{fig:tensor_body_form_factors} for the tensor coupling.
Table~\ref{tab:nucleon_body_form_factors} gives
the $Q^2=0$ values of the nucleon body form factors. Charge conservation for the
vector coupling implies that in this case diagrams with the same quark--diquark
content must be equal at $Q^2 = 0$. Furthermore, with the normalization used here,
the sum of quark diagrams and of diquark diagrams must each equal one in
the vector case. For the vector coupling, charge conservation also forbids the
scalar--axialvector diquark diagram ($sa$) from contributing to the charge.
However, this diagram does give an important contribution to the nucleon anomalous magnetic moment. For
the vector coupling diagrams the only object with a magnetic moment is the
axialvector diquark. Thus the non-zero values for the other $f_2$ body form factor
diagrams, in the lower panel of Fig.~\ref{fig:vector_body_form_factors},
indicate that the associated pieces of the nucleon wave function have sizeable
$p$- and $d$-wave components. Therefore the nucleon wave function contains
a significant amount of quark orbital angular momentum.
Figure~\ref{fig:tensor_body_form_factors} and Table~\ref{tab:nucleon_body_form_factors}
demonstrate that the tensor coupling diagrams do not contribute to the nucleon charge,
which is consistent with constraints imposed by the Ward--Takahashi identity for
the nucleon electromagnetic current. However, these diagrams do have an important
impact on the anomalous magnetic moment. The $Q^2$ behavior of the form factors
is also influenced by the tensor coupling diagrams. However, once multiplied by
the dressed quark Pauli form factors, their contribution diminishes rapidly
with $Q^2$, being of little importance for $Q^2 \gtrsim 1\,$GeV$^2$.
In Sect.~\ref{sec:NJL} the nucleon Faddeev equation was solved by first making a pole approximation for the diquark $t$-matrices -- see for example Eqs.~\eqref{eq:scalarpropagatorpoleform} and \eqref{eq:axialpropagatorpoleform}. For a consistent nucleon form factor calculation we must therefore approximate all two-body $t$-matrices by their pole form, which also includes the quark-photon vertex obtained from the inhomogeneous BSE illustrated in Fig.~\ref{fig:quarkphotonvertex}. Expressing $\Pi_{VV}(q^2) = q^2\,\hat{\Pi}_{VV}(q^2)$ in Eq.~\eqref{eq:bseformfactors} and expanding $\hat{\Pi}_{VV}(q^2)$ about either the rho or omega pole mass, we obtain the following results for the pole forms of the BSE form factors:
\begin{align}
F_{1i}(Q^2) &= \frac{1}{1 + Q^2/ m_i^2}, \qquad i \in \omega,\,\rho,
\label{eq:vmdformfactors}
\end{align}
which is the familiar vector meson dominance result. The dressed quark form factors therefore maintain the vector meson pole structure
in the time-like region obtained in the original BSE results of Eq.~\eqref{eq:bseformfactors}. For the nucleon form factor calculations the result in Eq.~\eqref{eq:vmdformfactors} will replace the full BSE result of Eq.~\eqref{eq:bseformfactors} used in the dressed quark form factors, for example, in Eq.~\eqref{eq:bseconstituent} and Eqs.~\eqref{eq:f1U_quarkpion}--\eqref{eq:f2D_quarkpion}.
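Because the pole form in Eq.~\eqref{eq:vmdformfactors} is a monopole, the size it generates for the dressed quarks is analytic, $\left<r^2\right> = -6\,F'(0) = 6/m^2$. A minimal sketch of this scale, assuming $m_\rho \simeq 0.775\,$GeV and the standard $\hbar c$ conversion:

```python
import math

HBARC = 0.1973  # GeV fm

def monopole_radius(m):
    """Radius of F(Q^2) = 1/(1 + Q^2/m^2): <r^2> = -6 F'(0) = 6/m^2 (m in GeV)."""
    return math.sqrt(6.0) / m * HBARC  # in fm

r_rho = monopole_radius(0.775)  # length scale from the rho-pole dressing
assert 0.62 < r_rho < 0.63      # ~0.62 fm
```

The $\sim 0.62\,$fm obtained here indicates the length scale introduced by the $\rho$-pole dressing of the quark-photon vertex.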
Dirac and Pauli form factor results for the proton and neutron are presented
in Figs.~\ref{fig:proton_form_factors} and \ref{fig:neutron_form_factors},
respectively, while results for the Sachs form factors are given
in Figs.~\ref{fig:proton_sachs} and \ref{fig:neutron_sachs}. The three curves
in each figure represent results for the three variants of the dressed
quark form factors used in Eqs.~\eqref{eq:protondirac}--\eqref{eq:FnQ}.
The dot-dashed curve is the result where the dressed quarks are treated as pointlike and therefore their Dirac form factors are constants equal to the quark charges and the Pauli form factors are zero. These results are labelled with the superscript (bare). Results for the nucleon form factors that include the dressing of the quark-photon vertex by vector mesons, generated by Eq.~\eqref{eq:vmdformfactors}, are illustrated by the dashed lines, with the superscript (bse). Finally, we use dressed quark form factors that also incorporate effects from pion loops, which generate a non-zero Pauli form factor for the dressed quarks. These results are illustrated as the solid lines (without a superscript label).
The full results for the nucleon form factors, including pion loop effects,
display good agreement with the empirical parametrizations from
Ref.~\cite{Kelly:2004hm}, which are illustrated as the dotted curves
in Figs.~\ref{fig:proton_form_factors} through \ref{fig:neutron_sachs}.
Both the proton and neutron Dirac form factors are slightly softer than the empirical
parametrizations, whereas the Pauli form factors are in almost perfect agreement.
The dressing of the quark-photon vertex by the pole form of the
BSE (Eq.~\eqref{eq:vmdformfactors}) results in a significant softening of all
nucleon form factors, proving critical for a realistic $Q^2$ dependence of
the form factors. Pion loop corrections result in a further 50\% reduction of
the neutron Dirac form factor for low to moderate $Q^2$ and significantly enhance
the nucleon Pauli form factors for $Q^2 \lesssim 1\,$GeV$^2$.
These enhancements correspond to increases in the magnitude of the proton and
neutron anomalous magnetic moments by $25\%$ and $45\%$, as indicated in Table~\ref{tab:nucleon_kappa_radii}. For the proton and neutron magnetic moments we find $\mu_p = 2.78\,\mu_N$ and $\mu_n = -1.81\,\mu_N$, which agree well with the experimental values of $\mu_p = 2.793\,\mu_N$ and $\mu_n =-1.913\,\mu_N$~\cite{Beringer:1900zz}. To obtain the physical result $\left|\kappa_n\right| > \kappa_p$ for the nucleon anomalous magnetic
moments, we find that the dressed quark anomalous magnetic moments of
Eq.~\eqref{eq:quarkanomalousresults} are critical. In particular, $\kappa_U$ must
be positive and $\kappa_D$ negative, with $\left|\kappa_D\right| > \kappa_U$.
We obtain $\left|\kappa_D\right| > \kappa_U$ because the second diagram
in Fig.~\ref{fig:quarkpionphotonvertex} only contributes to the dressed down quark
anomalous magnetic moment (c.f. Eqs.~\eqref{eq:f2U_quarkpion} and \eqref{eq:f2D_quarkpion}),
giving an additional negative contribution.
\begin{figure}[t]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F1_proton_form_factor.pdf}} \\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F2_proton_form_factor.pdf}}
\caption{(Colour online) Results for the proton Dirac (\textit{upper panel}) and Pauli (\textit{lower panel}) form factors. In each case the \textit{dot-dashed} curve (superscript (bare)) gives the result when the constituent quark form factors are those of an elementary Dirac particle, the \textit{dashed} curve (superscript (bse)) includes the quark-photon vertex dressing effects from the BSE and the \textit{solid} curve is the full result which also includes pion loop effects. The dotted curve is the empirical result from Ref.~\cite{Kelly:2004hm}.}
\label{fig:proton_form_factors}
\end{figure}
\begin{figure}[t]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F1_neutron_form_factor.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F2_neutron_form_factor.pdf}}
\caption{(Colour online) Results for the neutron Dirac (\textit{upper panel}) and Pauli (\textit{lower panel}) form factors. In each case the \textit{dot-dashed} curve (superscript (bare)) gives the results when the constituent quark form factors are those of an elementary Dirac particle, the \textit{dashed} curve (superscript (bse)) includes the quark-photon vertex dressing effects from the BSE and the \textit{solid} curve is the full result which also includes pion loop effects. The dotted curve is the empirical result from Ref.~\cite{Kelly:2004hm}.}
\label{fig:neutron_form_factors}
\end{figure}
\begin{figure}[t]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{GE_proton_form_factor.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{GM_proton_form_factor.pdf}}
\caption{(Colour online) Results for the proton Sachs electric (\textit{upper panel}) and magnetic (\textit{lower panel}) form factors. In each case the dot-dashed curve (superscript (bare)) is the result when the constituent quark form factors are those of an elementary Dirac particle, the dashed curve (superscript (bse)) includes the quark-photon vertex dressing effects from the BSE and the solid curve is the full result which also includes pion loop effects. The dotted curve is the empirical result from Ref.~\cite{Kelly:2004hm}.}
\label{fig:proton_sachs}
\end{figure}
\begin{figure}[t]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{GE_neutron_form_factor.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{GM_neutron_form_factor.pdf}}
\caption{(Colour online) Results for the neutron Sachs electric (\textit{upper panel}) and magnetic (\textit{lower panel}) form factors. In each case the dot-dashed curve (superscript (bare)) is the result when the constituent quark form factors are those of an elementary Dirac particle, the dashed curve
(superscript (bse)) includes the quark-photon vertex dressing effects from the BSE and the solid curve is the full result which also includes pion loop effects. The dotted curve is the empirical result from Ref.~\cite{Kelly:2004hm}.}
\label{fig:neutron_sachs}
\end{figure}
Results for the charge and magnetic radii, defined by Eqs.~\eqref{eq:charge_radius} and \eqref{eq:magnetic_radius}, are given in Table~\ref{tab:nucleon_kappa_radii} for the two cases where the dressed quark form factors are given by Eq.~\eqref{eq:vmdformfactors} and where pion loop effects are also included. The pion loop effects result in a $65\%$ increase in the magnitude of the neutron charge radius and a $19\%$ increase in its magnetic radius, while the proton charge radius increases by 6\% and the magnetic radius by 12\%. All nucleon radii agree well with the empirical values taken from Ref.~\cite{Kelly:2004hm}. A recent global fit to data~\cite{Zhan:2011ji} found the proton charge and magnetic radius to be
\begin{align}
r_{Ep} &= 0.875 \pm 0.008(\text{exp}) \pm 0.006(\text{fit})~\text{fm}, \\
r_{Mp} &= 0.867 \pm 0.009(\text{exp}) \pm 0.018(\text{fit})~\text{fm},
\end{align}
and a recent Mainz experiment found~\cite{Bernauer:2010wm}
\begin{align}
r_{Ep} &= 0.879\ph{1}(5)_{\text{stat}}(4)_{\text{syst}}(2)_{\text{model}}(4)_{\text{group}}~\text{fm}, \\
r_{Mp} &= 0.777(13)_{\text{stat}}(9)_{\text{syst}}(5)_{\text{model}}(2)_{\text{group}}~\text{fm}.
\end{align}
Our proton results agree well with those of Ref.~\cite{Zhan:2011ji}. The origin of
the sizeable discrepancy between the two experimental results for the proton magnetic
radius is discussed, for example, in Ref.~\cite{Sick:2012zz}. In addition, in
view of the muonic hydrogen controversy~\cite{Pohl:2010zza}, the experimental
errors quoted in both places appear to be rather low.
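The radii in Table~\ref{tab:nucleon_kappa_radii} follow from the slope of the form factors at $Q^2 = 0$. As a sketch of such an extraction, the snippet below applies the standard definition $\left<r_E^2\right> = -6\,G_E'(0)/G_E(0)$, which we take to be the content of Eq.~\eqref{eq:charge_radius}, to an illustrative dipole parametrization (not our model result):

```python
HBARC = 0.1973  # GeV fm

def dipole(Q2, L2=0.71):
    """Illustrative dipole form factor with the standard scale L2 = 0.71 GeV^2."""
    return 1.0 / (1.0 + Q2 / L2) ** 2

def charge_radius(G, h=1e-6):
    """r_E in fm from the slope at Q^2 = 0: <r^2> = -6 G'(0)/G(0)."""
    slope = (G(h) - G(0.0)) / h  # forward difference, in GeV^-2
    return (-6.0 * slope / G(0.0)) ** 0.5 * HBARC

r = charge_radius(dipole)
# Analytic dipole result: r = sqrt(12/L2) * hbar c, roughly 0.81 fm
assert abs(r - (12.0 / 0.71) ** 0.5 * HBARC) < 1e-3
```

The finite-difference estimate reproduces the analytic dipole radius to well below the quoted experimental uncertainties.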
\begin{table*}[tbp]
\addtolength{\tabcolsep}{7.6pt}
\addtolength{\extrarowheight}{2.2pt}
\begin{tabular}{l|cccccccccccccc}
\hline\hline
& $\mu^{\text{(bse)}}$ & $\mu$ & $\mu^{\text{exp}}$ && $r_E^{\text{(bse)}}$ & $r_E$ & $r_E^{\text{exp}}$ && $r_M^{\text{(bse)}}$ & $r_M$ & $r_M^{\text{exp}}$ \\
\hline
proton & \ph{-}2.43 & \ph{-}2.78 & \ph{-}2.793 && \ph{-}0.81 & \ph{-}0.86 & \ph{-}0.863$\pm$0.004 && 0.76 & 0.84 & 0.848$\pm$0.003\\
neutron & -1.25 & -1.81 & -1.913 && -0.20 & -0.34 & -0.335$\pm$0.055 && 0.74 & 0.88 & 0.907$\pm$0.016\\
\hline\hline
\end{tabular}
\caption{Results for the nucleon magnetic moments and radii, with dressed quark form factors given by Eqs.~\eqref{eq:bseconstituent} and \eqref{eq:vmdformfactors}, labelled with a superscript (bse), and results that also include pion cloud effects at the dressed quark level (these results do not carry a superscript label). Experimental results, labelled with a superscript exp, are taken from Ref.~\cite{Kelly:2004hm}.}
\label{tab:nucleon_kappa_radii}
\end{table*}
\begin{figure}[tbp]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F1_up_proton_form_factor.pdf}} \\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F2_up_proton_form_factor.pdf}}
\caption{(Colour online) Proton up quark sector Dirac and Pauli form factors. The empirical results are obtained using Ref.~\cite{Kelly:2004hm} and Eq.~\eqref{eq:flavourseparation}.}
\label{fig:proton_up_form_factors}
\end{figure}
\begin{figure}[tbp]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F1_down_proton_form_factor.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F2_down_proton_form_factor.pdf}}
\caption{(Colour online) Proton down quark sector Dirac and Pauli form factors. The empirical results are obtained using Ref.~\cite{Kelly:2004hm} and Eq.~\eqref{eq:flavourseparation}.}
\label{fig:proton_down_form_factors}
\end{figure}
The flavour sector nucleon form factors defined by the dressed quarks,
as given in
Eqs.~\eqref{eq:FpQ}--\eqref{eq:FnQ},
do not satisfy the standard charge symmetry relations, that is
\begin{align}
\frac{F^U_{ip}}{e_u} &\neq \frac{F^D_{in}}{e_d} &&\text{and}& \frac{F^D_{ip}}{e_d} &\neq \frac{F^U_{in}}{e_u},
\end{align}
where $i=(1,\,2)$.\footnote{Here we must divide out the quark charges because they are included in the definition of the dressed quark form factors; see Eqs.~\eqref{eq:FpQ}--\eqref{eq:FnQ}.}
The reason for this lies not with the nucleon
body form factors, c.f. Eq.~\eqref{eq:neutronbody}, but
with the form factors of the dressed quarks.
Dressed quarks are quasi-particles that contain an infinite number of
$u$ and $d$ current quarks.
Hence a dressed up quark form factor, for example, contains
contributions from both $u$ and $d$ current quarks.
To obtain the nucleon quark sector form factors,
defined in general in Eq.~\eqref{eq:quark_sectorform_factors}, the
dressed quark form factors must be expressed in
their quark sector form as given in Eqs.~\eqref{eq:F1uU}--\eqref{eq:F2dU}.
The nucleon quark sector form factors are therefore given by
\begin{align}
F^q_{ip} &= F_{1Q}^q\,f_{ip}^{Q,V} + F_{2Q}^q\,f_{ip}^{Q,T}, \\
F^q_{in} &= F_{1Q}^q\,f_{in}^{Q,V} + F_{2Q}^q\,f_{in}^{Q,T},
\end{align}
where $i=(1,\,2)$, $q=(u,\,d)$ and there is an implied sum
over $Q=(U,\,D)$. These results satisfy
the charge symmetry constraints
\begin{align}
F^u_{in} &= F^d_{ip} &&\text{and} & F^d_{in} &= F^u_{ip}.
\end{align}
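For clarity, the implied sum over $Q=(U,\,D)$ may be written out explicitly; for the $u$ quark sector of the proton, for example,
\begin{equation}
F^u_{ip} = F_{1U}^u\,f_{ip}^{U,V} + F_{2U}^u\,f_{ip}^{U,T}
         + F_{1D}^u\,f_{ip}^{D,V} + F_{2D}^u\,f_{ip}^{D,T},
\end{equation}
so that both dressed quark flavours contribute to each quark sector form factor of the nucleon.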
\begin{figure}[tbp]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F1_up_sector_diagrams_total.pdf}} \\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F2_up_sector_diagrams_total.pdf}}
\caption{(Colour online) Total contributions to the proton $u$-sector form factors from each Feynman diagram in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams}. These results include both the vector and tensor coupling contributions and the sum gives the total $u$-sector Dirac and Pauli proton form factors (solid lines in Fig.~\ref{fig:proton_up_form_factors}).}
\label{fig:flavour_sector_up_diagrams}
\end{figure}
\begin{figure}[tbp]
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F1_down_sector_diagrams_total.pdf}}\\
\subfloat{\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{F2_down_sector_diagrams_total.pdf}}
\caption{(Colour online) Total contributions to the proton $d$-sector form factors from each Feynman diagram in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams}. These results include both the vector and tensor coupling contributions and the sum gives the total $d$-sector Dirac and Pauli proton form factors (solid lines in Fig.~\ref{fig:proton_down_form_factors}).}
\label{fig:flavour_sector_down_diagrams}
\end{figure}
\begin{table*}[t]
\addtolength{\tabcolsep}{7.0pt}
\addtolength{\extrarowheight}{2.2pt}
\begin{tabular}{c|cccccccccccccc}
\hline\hline
$q$ & $\kappa^{q,\text{(bse)}}$ & $\kappa^q$ & $\kappa^{q,\text{exp}}$ & & $r_E^{q,\text{(bse)}}$ & $r_E^q$ & $r_E^{q,\text{exp}}$ & & $r_M^{q,\text{(bse)}}$ & $r_M^q$ & $r_M^{q,\text{exp}}$ \\
\hline
$u$ sector & \ph{-}1.61 & \ph{-}1.74 & \ph{-}1.673 && 0.79 & 0.82 & 0.829$\pm$0.097 && \ph{-}0.77 & 0.83 & 0.816$\pm$0.087\\
$d$ sector & -1.07 & -1.85 & -2.033 && 0.75 & 0.71 & 0.720$\pm$0.118 && -0.53 & 0.98 & 1.048$\pm$0.319\\
\hline\hline
\end{tabular}
\caption{Results for the quark sector contribution to
the proton anomalous magnetic moments and radii,
with constituent quark form factors given by
Eqs.~\eqref{eq:bseconstituent} and \eqref{eq:vmdformfactors} (labelled with (bse))
and results that also include the pion cloud.
The experimental values for the quark sector anomalous
magnetic moments and radii are obtained from
Ref.~\cite{Kelly:2004hm} using Eq.~\eqref{eq:flavourseparation}.}
\label{tab:nucleon_kappa_radii_quark_sector}
\end{table*}
Quark sector proton form factor results are presented in Figs.~\ref{fig:proton_up_form_factors} and \ref{fig:proton_down_form_factors}, for the three stages of sophistication in the description of the dressed quark form factors. Empirical results, shown by the dotted line, were obtained from Ref.~\cite{Kelly:2004hm} using
Eq.~\eqref{eq:flavourseparation}. While the agreement between our full results,
which include pion loop effects, and the empirical parametrization is very good,
for the $u$ quark sector we find that our Dirac form factor is slightly too soft
and the Pauli form factor a little too hard. For the $d$ quark sector
the Dirac form factor is in excellent agreement with the empirical parametrization,
whereas the Pauli form factor is slightly too soft. As we shall see, such small
differences can produce apparently large effects in the combination required
to compute $G_E$.
\begin{table*}[t]
\addtolength{\tabcolsep}{4.9pt}
\addtolength{\extrarowheight}{2.2pt}
\begin{tabular}{c|cccccc|c|ccccccc}
\hline\hline
& $r_1^{\text{(bse)}}$ & $r_1$ & $r_1^{\text{exp}}$ & $r_2^{\text{(bse)}}$ & $r_2$ & $r_2^{\text{exp}}$ & & $r_1^{q,\text{(bse)}}$ & $r_1^q$ & $r_1^{q,\text{exp}}$ & $r_2^{q,\text{(bse)}}$ & $r_2^q$ & $r_2^{q,\text{exp}}$ \\
\hline
proton & 0.75 & 0.79 & 0.791 & 0.77 & 0.85 & 0.879 & $u$-sector & 0.76 & 0.79 & 0.795 & 0.77 & 0.88 & 0.841 \\
neutron & 0.20 & 0.09 & 0.119 & 0.76 & 0.88 & 0.911 & $d$-sector & 0.80 & 0.80 & 0.809 & 0.76 & 0.88 & 0.938 \\
\hline\hline
\end{tabular}
\caption{Results for radii defined by Eq.~\eqref{eq:radii}, for the proton and neutron Dirac and Pauli form factors, and for the quark sector proton Dirac and Pauli form factors. In each case we show results where the dressed quark form factors are given by Eqs.~\eqref{eq:bseconstituent} and \eqref{eq:vmdformfactors} (labelled with (bse)) and results that also include the pion cloud. The empirical values are obtained from Ref.~\cite{Kelly:2004hm} and for the quark sector results using Eq.~\eqref{eq:flavourseparation}.}
\label{tab:nucleon_radii_quark_sector}
\end{table*}
An interesting feature of these results is the role of the pionic corrections to the quark sector Pauli form factors. In contrast to the usual proton and neutron Pauli form factors, which each receive significant corrections from the pion cloud, for the quark sector form factors only $F^d_{2p}$ receives sizeable pionic corrections. For example, pion loop effects increase the magnitude of the $d$ sector anomalous magnetic moment by 73\%, whereas the $u$ quark sector only receives an 8\% correction. This result is a consequence of the Pauli quark sector form factors for the dressed quarks, where from Eq.~\eqref{eq:quark_sector_amm_dressed_quarks} we see that the $d$ quark sector contribution to the dressed up quark anomalous magnetic moment has a magnitude twelve times larger than the $u$ sector contribution, and the proton consists of two dressed up quarks and one dressed down quark. When compared with experiment the $d$-sector anomalous magnetic moment is 10\% too small and the $u$-sector 4\% too large.
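These percentages follow directly from the entries of Table~\ref{tab:nucleon_kappa_radii_quark_sector}; the short snippet below, with the anomalous magnetic moments copied from that table, simply reproduces the quoted numbers:

```python
# Pion cloud corrections to the quark sector anomalous magnetic moments,
# using Table entries as input: (bse) = dressed quark form factors only,
# full = including pion loop effects at the dressed quark level.
kappa_bse  = {"u": 1.61, "d": -1.07}
kappa_full = {"u": 1.74, "d": -1.85}

correction = {q: (abs(kappa_full[q]) - abs(kappa_bse[q])) / abs(kappa_bse[q])
              for q in kappa_bse}
# the d sector magnitude grows by ~73%, the u sector by only ~8%
```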
Table~\ref{tab:nucleon_kappa_radii_quark_sector} presents results for the quark
sector contribution to the proton anomalous magnetic moments and radii.
We find that the pion cloud has only a minor impact on the $d$-sector
charge radius and the $u$-sector radii,
whereas the $d$-sector magnetic radius actually changes sign once pion loop
effects are included. Again the origin of this lies with the large value of
$\kappa_U^d$ in Eq.~\eqref{eq:quark_sector_amm_dressed_quarks}. With pion
cloud corrections included, all our results for the charge and magnetic
quark sector radii agree well with experiment.
Table~\ref{tab:nucleon_radii_quark_sector} gives results for the Dirac and Pauli radii, defined by Eq.~\eqref{eq:radii}, for the proton and neutron and, for the proton, the corresponding quark sector radii. The agreement with the empirical results of Ref.~\cite{Kelly:2004hm} is very good for the proton and neutron radii. For the proton quark sector radii, the Dirac radii results are in good agreement; however, the $u$ quark sector Pauli radius is slightly larger than experiment and the $d$ quark sector is 7\% smaller.
\begin{table*}[tp]
\addtolength{\tabcolsep}{6.0pt}
\addtolength{\extrarowheight}{2.2pt}
\begin{tabular}{c|cccccc|cccccccc}
\hline\hline
$q$ & $F^{s,q}_{1\mathcal{Q},p}$ & $F^{a,q}_{1\mathcal{Q},p}$ & $F^{s,q}_{1\mathcal{D},p}$ & $F^{a,q}_{1\mathcal{D},p}$ & $F^{sa,q}_{1\mathcal{D},p}$ & $F^{q}_{1p}$
& $F^{s,q}_{2\mathcal{Q},p}$ & $F^{a,q}_{2\mathcal{Q},p}$ & $F^{s,q}_{2\mathcal{D},p}$ & $F^{a,q}_{2\mathcal{D},p}$ & $F^{sa,q}_{2\mathcal{D},p}$ & $F^{q}_{2p}$ \\[0.6ex]
\hline
$u$-sector & 0.69 & 0.10 & 0.69 & 0.52 & 0 & 2 & \ph{-}1.16 & -0.15 & -0.55 & \ph{-}0.75 & \ph{-}0.52 & \ph{-}1.73 \\
$d$-sector & 0 & 0.21 & 0.69 & 0.10 & 0 & 1 & -0.37 & -0.30 & -0.55 & -0.11 & -0.52 & -1.85 \\
\hline\hline
\end{tabular}
\caption{Contributions to the nucleon quark-sector form factors from the various diagrams at $Q^2 = 0$. The vector contributions are obtained from the appropriate body form factors at $Q^2 = 0$ multiplied by isospin factors and quark charges. Therefore these results do not change with the various approximations for the dressed quark form factors. The tensor contributions are only non-zero if the dressed quarks have an anomalous magnetic moment, and in this framework this occurs solely from pion loop effects. Entries given as ``0'' are identically zero because of charge conservation.}
\label{tab:nucleon_quark_sector_diagrams_Q20}
\end{table*}
\begin{figure}[tbp]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{Q4F1_quark_sector_proton_form_factor.pdf}
\caption{(Colour online) Quark sector contributions to the proton Dirac form factor multiplied by $Q^4$. Experimental data are taken from Ref.~\cite{Cates:2011pz}.}
\label{fig:Q4F1_form_factors}
\end{figure}
Figures~\ref{fig:flavour_sector_up_diagrams} and \ref{fig:flavour_sector_down_diagrams} present results for the total contribution of each diagram in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams} to the proton quark sector form factors. That is, the proton quark sector form factors are decomposed into
\begin{align}
F^q_{ip} &= F^{s,q}_{i\mathcal{Q},p} + F^{a,q}_{i\mathcal{Q},p} + F^{s,q}_{i\mathcal{D},p} + F^{a,q}_{i\mathcal{D},p} + F^{sa,q}_{i\mathcal{D},p},
\end{align}
where $i=(1,\,2)$, $q=(u,\,d)$ and each function represents the total contribution of one of the Feynman diagrams in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams} to the given quark sector. Table~\ref{tab:nucleon_quark_sector_diagrams_Q20} gives results for the quark sector diagrams of Fig.~\ref{fig:nucleon_em_current_feynman_diagrams} evaluated at $Q^2 = 0$. For the Dirac form factors we see the dominance of the scalar diquark in the proton wave function, where these diagrams carry 69\% of both the $u$ and $d$ quark sector charges. Axialvector diquarks also play an important role for the $u$ quark sector form factors, carrying 26\% of the charge and 35\% of the anomalous magnetic moment. In the $d$ quark sector, $F^{s,d}_{2\mathcal{Q},p}$ would be zero without the effect of the pion cloud. The latter produces a contribution that constitutes 20\% of the $d$ sector anomalous magnetic moment.
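As a consistency check on this decomposition, charge conservation requires the Dirac entries of Table~\ref{tab:nucleon_quark_sector_diagrams_Q20} to sum to the quark sector charges (2 for the $u$ sector, 1 for the $d$ sector), while the Pauli entries sum to the quark sector anomalous magnetic moments; a minimal check with the tabulated values:

```python
# Diagram contributions at Q^2 = 0, in the order they appear in the Table:
# scalar quark, axial quark, scalar diquark, axial diquark, mixed diquark.
F1 = {"u": [0.69, 0.10, 0.69, 0.52, 0.0],
      "d": [0.0,  0.21, 0.69, 0.10, 0.0]}
F2 = {"u": [1.16, -0.15, -0.55,  0.75,  0.52],
      "d": [-0.37, -0.30, -0.55, -0.11, -0.52]}

charge = {q: sum(F1[q]) for q in F1}  # quark sector charges: 2 (u), 1 (d)
kappa  = {q: sum(F2[q]) for q in F2}  # anomalous moments: 1.73 (u), -1.85 (d)
```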
\begin{figure}[tbp]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{Q4F2_quark_sector_proton_form_factor.pdf}
\caption{(Colour online) Quark sector contributions to the proton
Pauli form factor multiplied by $Q^4$. Experimental data are taken from Ref.~\cite{Cates:2011pz}.}
\label{fig:Q4F2_form_factors}
\end{figure}
Recent accurate neutron form factor data have enabled a precise experimental determination of the quark sector proton form factors, using Eq.~\eqref{eq:protonflavoursector}. The experimental quark sector results from Ref.~\cite{Cates:2011pz}, along with our results, are presented in Fig.~\ref{fig:Q4F1_form_factors} for the Dirac form factors and in Fig.~\ref{fig:Q4F2_form_factors} for the Pauli form factors. Prima facie, these experimental results
are remarkable. For $Q^2$ beyond 1--2\,GeV$^2$ the $d$ quark sector of the proton
Dirac form factor is much softer than the $u$ quark sector. On the other hand, for
the Pauli quark sector form factors, it is the $u$ quark sector that is softer for
low $Q^2$. However, at around $Q^2 \sim 1.5\,$GeV$^2$ there is a cross-over and
the $d$ quark sector form factor starts approaching zero more rapidly.
The empirical results illustrated in Fig.~\ref{fig:Q4F1_form_factors} are straightforward to understand within our framework. The dominant contributions to the quark sector Dirac form factors come from the two Feynman diagrams which involve only a quark and a scalar diquark. This is clear from the upper panels of Figs.~\ref{fig:flavour_sector_up_diagrams} and \ref{fig:flavour_sector_down_diagrams}. The upper panel of Fig.~\ref{fig:constituentquarkflavoursectorformfactors} demonstrates that the current $d$ quarks that contribute to $F_{1p}^d$ must primarily come from the dressed down quark, and these contributions are suppressed by order $1/Q^2$ relative to the current $u$ quarks from the quark diagram that contributes to $F_{1p}^u$. Thus the dominance of scalar diquark correlations in the nucleon clearly provides a very natural explanation of the data in Fig.~\ref{fig:Q4F1_form_factors}.
The zero-crossing in our result for $F_{1p}^d$ at $Q^2 \simeq 4.7\,$GeV$^2$ is also straightforward to understand. We first note that the large $Q^2$ behaviour of the form factors is governed by the quark diagrams in Fig.~\ref{fig:nucleon_em_current_feynman_diagrams}, because when the photon couples to a quark inside a diquark, the diquark form factors
provide at least an additional factor of $1/Q^2$ relative to the quark diagrams. Considering only pointlike quarks, which is sufficient to study the large $Q^2$ behaviour, we have for the proton quark sector form factors
\begin{align}
\label{eq:FpUlargeQ2}
F_{ip}^u &~\stackrel{Q^2\to \infty}{\longrightarrow}~ f^{s,V}_{i\mathcal{Q}} + \tfrac{1}{3}f^{a,V}_{i\mathcal{Q}},\\[0.2ex]
F_{ip}^d &~\stackrel{Q^2\to \infty}{\longrightarrow}~ \hspace*{11mm} \tfrac{2}{3}f^{a,V}_{i\mathcal{Q}},
\end{align}
where $i=(1,\,2)$; c.f. Eqs.~\eqref{eq:f1U_V} and \eqref{eq:f1D_V}. Therefore the large $Q^2$ behaviour of $F_{1p}^d$ is governed by the nucleon body form factor $f^{a,V}_{1\mathcal{Q}}$ (see Fig.~\ref{fig:vector_body_form_factors}), which becomes negative at large $Q^2$ and therefore $F_{1p}^d$ has a zero-crossing. Note that the empirical parameterizations of Ref.~\cite{Kelly:2004hm} also have a zero in $F_{1p}^d$ at $Q^2 \simeq 7.9\,$GeV$^2$.
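Taking the ratio of the two limits above shows that the large-$Q^2$ flavour ratio is fixed entirely by the body form factors,
\begin{equation}
\frac{F_{ip}^d}{F_{ip}^u} ~\stackrel{Q^2\to \infty}{\longrightarrow}~
\frac{2\,f^{a,V}_{i\mathcal{Q}}}{3\,f^{s,V}_{i\mathcal{Q}} + f^{a,V}_{i\mathcal{Q}}},
\end{equation}
so that the zero-crossing of $F_{1p}^d$ simply tracks the sign change of $f^{a,V}_{1\mathcal{Q}}$.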
\begin{figure}[tbp]
\subfloat{\centering\includegraphics[width=0.7\columnwidth,clip=true,angle=0]{form_factor_diagrams_exchange.pdf}}
\caption{Exchange type diagrams that do not appear in our present form factor calculation because the static approximation is used for the quark exchange kernel.}
\label{fig:exchange_diagrams}
\end{figure}
Understanding the $Q^2$ dependence of the proton Pauli quark sector form factors
is more subtle within our model. Analogous to the Dirac form factor example, $F_{2p}^u$ receives a large contribution from the scalar quark diagram $f^{s,V}_{2\mathcal{Q}}$, however, many other contributions are negative. In contrast all diagrams add constructively to the $F_{2p}^d$ form factor, which also receives a significant contribution from the pion cloud.
Therefore at low to moderate $Q^2$ we find $F_{2p}^u/\kappa_u \sim F_{2p}^d/\kappa_d$, with reasonable agreement with the data. However, at larger $Q^2$ the two quark diagrams in Eq.~(\ref{eq:FpUlargeQ2}) partially cancel, giving $F_{2p}^u/\kappa_u < F_{2p}^d/\kappa_d$, which is opposite to the behaviour observed in the data. The suppression of $F_{2p}^d$ with respect to $F_{2p}^u$ at large $Q^2$ was found in Ref.~\cite{Cloet:2008re}; a major difference between that framework and the one used here is that we make the static approximation to the quark exchange kernel, so that exchange type diagrams, as illustrated in Fig.~\ref{fig:exchange_diagrams}, are absent from our form factor calculation. This is the likely reason for the discrepancy with experiment at large $Q^2$ observed in Fig.~\ref{fig:Q4F2_form_factors}.
Detailed results for the proton and neutron Sachs form factors are given in
Appendix~\ref{sec:sachs}. Of contemporary
interest is the proton Sachs form factor ratio, $G_{Ep}/G_{Mp}$, for which our result
is presented in Fig.~\ref{fig:proton_sachs_ratio}. We find that this ratio
decreases almost linearly with $Q^2$, but the slope we obtain is significantly
larger than that found in the polarization transfer experiments,
leading to a zero-crossing at $Q^2 \approx 3.7\,$GeV$^2$.
So far no such zero-crossing has been seen in the data but if it were to occur
it would have to be in the domain $Q^2 \gtrsim 8\,$GeV$^2$.
The zero in the $G_{Ep}/G_{Mp}$ ratio found here results from a zero in $G_{Ep}$
and, as we have already noted,
the cancellation between $F_1$ and $F_2$ in the linear combination needed for
$G_E$ means that even relatively small differences between the experimental and
theoretical values of the individual form factors can be magnified there.
We find that this zero actually arises from the $u$ quark sector,
as illustrated in the upper panel of Fig.~\ref{fig:sachs_u_sector}.
This zero has its origin in the quark diagram with the scalar diquark spectator,
which becomes negative at around $Q^2 \simeq 1.8\,$GeV$^2$ and dominates at large $Q^2$.
This can be seen in the upper panel of Fig.~\ref{fig:sachs_u_sector_diagrams}.
A possible reason for the discrepancy with data for the $G_{Ep}/G_{Mp}$ ratio is
the omission of exchange diagram contributions (illustrated in Fig.~\ref{fig:exchange_diagrams}),
which do not appear in the model described herein. The running of the quark mass function
in QCD may also play an important role~\cite{Cloet:2013gva}.
Results for the neutron Sachs form factor ratio, $G_{En}/G_{Mn}$, are presented in Fig.~\ref{fig:neutron_sachs_ratio}. For $Q^2 \lesssim 1.5\,$GeV$^2$ our results that include pion loop corrections agree well with data. However, at larger $Q^2$ our ratio continues to grow too rapidly to be consistent with data. Our result for $G_{En}/G_{Mn}$ does not possess a zero-crossing for any $Q^2$ value. This is in contrast to the results of Ref.~\cite{Cloet:2008re} which find a zero-crossing at $Q^2 \simeq 11\,$GeV$^2$.
\begin{figure}[tbp]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{GE_over_GM_proton_form_factor.pdf}
\caption{(Colour online) Proton Sachs form factor ratio. Data are from Refs.~\cite{Jones:1999rz,Gayou:2001qd,Gayou:2001qt,Puckett:2010ac,Puckett:2011xg}.}
\label{fig:proton_sachs_ratio}
\end{figure}
\begin{figure}[tbp]
\centering\includegraphics[width=\columnwidth,clip=true,angle=0]{GE_over_GM_neutron_form_factor.pdf}
\caption{(Colour online) Neutron Sachs form factor ratio.
Data are from Ref.~\cite{Riordan:2010id}.}
\label{fig:neutron_sachs_ratio}
\end{figure}
\section{Conclusion \label{sec:conclusion}}
We have presented calculations of the nucleon form factors using a covariant and
confining NJL model, which is a Poincar\'e covariant quantum field theory with many
of the properties of QCD at low to moderate energies. The model satisfies current
conservation exactly and because the framework is covariant the form factors
are determined without the need to specify a reference frame. Poincar\'e covariance
also demands non-zero quark orbital angular momentum in the proton wave function, and this
is reflected in our results by large contributions to the nucleon Pauli form factors
from quark-diquark components of the nucleon wave function that only carry charge
(see Fig.~\ref{fig:vector_body_form_factors} and related discussion).
A unique feature of these results is the parameter-free self-consistent inclusion of pion loop
effects, as a perturbation to the ``quark core'' results obtained from
the solution of a relativistic Faddeev equation. These pion cloud effects play
a vital role for $Q^2 \lesssim 1\,$GeV$^2$. For example, the pion cloud increases
the magnitude of the proton and neutron anomalous magnetic moments by 25\% and 45\%,
respectively, giving final results of $\kappa_p = 1.78$ and $\kappa_n = -1.81$, which
are in rather good agreement with the empirical values.
In the limit of equal current quark masses our model satisfies charge symmetry and
therefore the proton quark sector form factors can be unambiguously determined.
For the quark sector radii we find that $r^u_E$ is 16\% larger than $r^d_E$,
whereas for the magnetic radii $r^d_M$ is 18\% larger than $r^u_M$.
The quark sector magnetic radius result can be understood because pion loop effects
induce a $d$ quark sector anomalous magnetic moment for the dressed up quark
twelve times larger than the $u$ quark sector contribution.
For the quark sector form factors, pion cloud effects are largely concentrated in
the $d$ quark sector. For example, $r^d_M$ actually changes sign when pionic
effects are included and the value of $G_{Mp}^d(0)$ increases by a factor of ten
because of pion loop effects.
An area of particular interest which has been identified in our study is
the interplay between the respective roles of diquark
correlations and pion effects. This is most dramatically illustrated by the
comparison of Figs.~\ref{fig:Q4F1_form_factors} and \ref{fig:Q4F2_form_factors}.
In the first we see the crucial importance of the
dominance of scalar diquarks, when they can contribute,
for the behaviour of the Dirac form factor. The smaller role
of these scalar diquarks in the $d$-quark case naturally explains the
suppression of the $d$-quark sector at larger values of the momentum transfer.
On the other hand, in the case of the Pauli form factor the axialvector diquarks and pion make
significant contributions to the $d$-quark sector and this effectively counteracts
the effect of the scalar diquark correlations. These are subtle but crucial
aspects of the observed form factors.
Finally, looking to the future, an important near term goal must be to apply
the framework developed here to the study of nucleon
transition form factors, for example, nucleon to $\Delta$ and nucleon to
Roper transitions. This will elucidate the role of pion loop effects in
these transitions and help to expose the nature of diquark correlations in the
structure of baryons. The results presented herein and earlier work on nucleon
PDFs~\cite{Cloet:2005pp,Cloet:2007em} will also serve as a critical starting point
for forthcoming studies of generalized parton distribution functions~\cite{Diehl:2003ny,Belitsky:2005qn}.
In this supplementary material we provide details of the derivation of the main formulas in the text of the paper.
\subsection{Markovian evolution}
In this section we describe the mapping of Eq. (13) of the main text onto a Markovian evolution process. For the sake of consistency, we repeat here Eq. (13) without the incoming wave term:
\begin{eqnarray}
\nonumber &&
\left[\frac{1}{2\pi} \ln[(2-\epsilon)(1 + k^2+p^2)]-\lambda\delta_{\nu 2}\right]f_{\nu}({\bf k}, {\bf p})=\\
\nonumber &&
\int\frac{d^2{\bf Q}}{(2\pi)^2} \frac{2}{1+k^2+p^2+Q^2}
\left\{ f_{\nu}(-{\bf k}, {\bf Q})+
2\sum_{\mu=0,2} \mathcal{K}_{\nu\mu}\sum_{j=0,1} f_{\mu}\left(\frac{{\bf p}+(-1)^j{\bf Q}}{\sqrt{2}}, \frac{{\bf Q}-(-1)^j({\bf k}+{\bf p})}{\sqrt{2}}\right)
\right\} \\
\label{start}
\end{eqnarray}
with
\begin{equation}
\mathcal{K}=\left(\begin{array}{cc}
1/3 & 5/9 \\
1 & 1/6
\end{array}\right).
\label{MatrixKsup}
\end{equation}
Let us introduce functions $g_{\nu}({\bf k}, {\bf p})$ defined as
\begin{equation}
g_{\nu}({\bf k}, {\bf p})=f_{\nu}({\bf k}, {\bf p})\frac{\ln[(2-\epsilon)(1+k^2+p^2)]}{1+k^2+p^2}.
\label{gauge-transform}
\end{equation}
After the transformation Eq. (\ref{gauge-transform}), the equations for $g_{\nu}({\bf k}, {\bf p})$ are written in a form that allows their iterative solution:
\begin{equation}
g_{\nu,n+1}({\bf k}, {\bf p})=\sum_{\mu=0,2} \int d^2{\bf k}' d^2{\bf p}' P_{\nu\mu}({\bf k}, {\bf p}; {\bf k}', {\bf p}') g_{\mu,n}({\bf k}', {\bf p}').
\label{g_iterative}
\end{equation}
It is important to note that the gauge transformation Eq. (\ref{gauge-transform}) ensures that the integrals of the kernels $P_{\nu\mu}({\bf k}, {\bf p}; {\bf k}', {\bf p}')$ over the first coordinates ${\bf k}, {\bf p}$ are finite, which allows the interpretation of $P_{\nu\mu}({\bf k}, {\bf p}; {\bf k}', {\bf p}')$ as a transition rate from the state $|{\bf k}',{\bf p}', \mu\rangle$ to the state $|{\bf k}, {\bf p}, \nu\rangle$, and the rewriting of Eq. (\ref{g_iterative}) in the form of a master equation:
\begin{eqnarray}
\nonumber &&
g_{\nu,n+1}({\bf k}, {\bf p})-g_{\nu,n}({\bf k}, {\bf p})=\\
&&
\sum_{\mu=0,2} \int d^2{\bf k}' d^2{\bf p}' \left[P_{\nu\mu}({\bf k}, {\bf p}; {\bf k}', {\bf p}')-\delta_{\nu\mu}\delta({\bf k}-{\bf k}')\delta({\bf p}-{\bf p}')\right] g_{\mu,n}({\bf k}', {\bf p}').
\label{g_master}
\end{eqnarray}
The kernels $P_{\nu\mu}({\bf k}, {\bf p}; {\bf k}', {\bf p}')$ are obtained straightforwardly from Eq. (\ref{start}) and transformation Eq. (\ref{gauge-transform}).
The bound four-atomic state is realized as a stationary solution of Eq. (\ref{g_master}).
To realize the numerical implementation of Eq. (\ref{g_iterative}) as a stochastic Markovian evolution process, we need to interpret the transition rates in Eq. (\ref{g_master}) as {\em probabilities} for a jump of a particle. To this end, we divide the RHS of Eq. (\ref{g_master}) by the maximal value of the total escape rate
\begin{equation}
\Gamma_{\mu}({\bf k}', {\bf p}')=\sum_{\nu=0,2}\int d^2{\bf k} d^2{\bf p} P_{\nu\mu}({\bf k}, {\bf p}; {\bf k}', {\bf p}')
\end{equation}
from the state $|{\bf k}', {\bf p}', \mu\rangle$. This operation is equivalent to the rescaling of the (discrete) time in Eq. (\ref{g_master}), thus it does not change the stationary state we are interested in. The resulting equations read
\begin{eqnarray}
\nonumber &&
g_{\nu, n+1}({\bf k}, {\bf p})-g_{\nu, n}({\bf k}, {\bf p})=-\gamma_{\nu}({\bf k}, {\bf p}) g_{\nu,n}({\bf k}, {\bf p})+\\
&&
\sum_{\nu'}\int d^2{\bf k}' d^2{\bf p}'\left\{ W_{\nu\nu'}({\bf k}, {\bf p}; {\bf k}', {\bf p}') g_{\nu', n}({\bf k}', {\bf p}')
-W_{\nu'\nu}({\bf k}', {\bf p}'; {\bf k}, {\bf p})g_{\nu, n}({\bf k}, {\bf p})\right\},
\label{matrix_Master-Eq}
\end{eqnarray}
where
\begin{equation}
W_{\nu\nu'}({\bf k}, {\bf p}; {\bf k}', {\bf p}')=P_{\nu\nu'}({\bf k}, {\bf p}; {\bf k}', {\bf p}')/C,
\end{equation}
\begin{equation}
\gamma_{\nu}({\bf k}, {\bf p})=\frac{1}{C}\left(1-\Gamma_{\nu}({\bf k}, {\bf p})-\frac{\lambda}{\ln[(2-\epsilon)(1+k^2+p^2)]}\delta_{\nu 2}\right),
\label{gamma_k}
\end{equation}
and
\begin{equation}
C>\max_{\{{\bf k}, {\bf p}\}}\left\{\Gamma_{2}({\bf k}, {\bf p})+\frac{\lambda}{\ln[(2-\epsilon)(1+k^2+p^2)]}, \Gamma_{0}({\bf k}, {\bf p})\right\}.
\label{C}
\end{equation}
For practical calculations in the region $-1<\epsilon\leq 0$, $0<\lambda<2$, $C=20$ is the optimal choice.
The choice of the factor $C$ guarantees
\begin{equation}
\sum_{\nu=0,2}\int d^2{\bf k}d^2{\bf p} W_{\nu\nu'}({\bf k}, {\bf p}; {\bf k}', {\bf p}') <1,
\end{equation}
which allows the interpretation of $W_{\nu\nu'}({\bf k}, {\bf p}; {\bf k}', {\bf p}')$ as a probability density for the jump of the particle out of the state $|{\bf k}', {\bf p}', \nu'\rangle$ into the state $|{\bf k}, {\bf p}, \nu\rangle$.
Now we can formulate the Markovian stochastic process described by the master equation Eq. (\ref{matrix_Master-Eq}) as follows: Consider an ensemble of walkers that evolve in the four-dimensional space of points $({\bf k}, {\bf p})$ and carry an intrinsic flavor $\nu=0,2$. At each discrete time step a walker in the state $|{\bf k}, {\bf p},\nu\rangle$ is subject to the following elementary processes: (i) a jump to the state $|{\bf k'},{\bf p}',\nu'\rangle$ with the probability $W_{\nu'\nu}({\bf k'}, {\bf p}'; {\bf k}, {\bf p})$, changing the flavor to $\nu'$ (the flavor is kept if $\nu'=\nu$); (ii) destruction of the walker with the probability $\gamma_{\nu}({\bf k}, {\bf p})$. If $\gamma_{\nu}({\bf k}, {\bf p})<0$, another walker is instead created in the state $|{\bf k}, {\bf p},\nu\rangle$ with the probability $|\gamma_{\nu}({\bf k}, {\bf p})|$.
Implementing this algorithm numerically, the stationary solution is distinguished by a total number of walkers fluctuating around a stable mean value. Generically, due to the finite probabilities $\gamma_{\nu}({\bf k}, {\bf p})$, either all initially created walkers die out, or their number grows without bound. The final outcome of the evolution is crucially affected by the term $\lambda/\ln[(2-\epsilon)(1+k^2+p^2)]$ that governs the creation or annihilation of walkers in the $\nu=2$ channel. For a generic value of $\lambda$, the total number of walkers grows without bound for small $|\epsilon|$, and decays to zero once $|\epsilon|$ exceeds some critical value, which corresponds to the energy of the bound four-atomic state. In the numerical procedure, the value of $\epsilon$ is adjusted until the average number of walkers is stationary.
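As an illustration, a minimal sketch of this branching random walk is given below. It is a toy version only: the true kernels $W_{\nu'\nu}$ and rates $\gamma_{\nu}$ of Eq. (\ref{matrix_Master-Eq}) live in the four-dimensional $({\bf k},{\bf p})$ space, whereas here `W`, `gamma` and `jump` are placeholder inputs supplied by the user.

```python
import random

def evolve_step(walkers, W, gamma, jump):
    """One discrete time step of the branching walk (toy version).

    walkers : list of states (x, nu)
    W       : total jump probability (placeholder scalar)
    gamma   : gamma(x, nu) -> destruction probability; negative = creation
    jump    : jump(x, nu)  -> new state after a jump
    """
    survivors = []
    for (x, nu) in walkers:
        if random.random() < W:               # (i) jump, possibly changing flavour
            survivors.append(jump(x, nu))
        else:
            g = gamma(x, nu)
            if g >= 0:
                if random.random() >= g:      # (ii) destroyed with probability g
                    survivors.append((x, nu))
            else:
                survivors.append((x, nu))     # kept, and a copy is created
                if random.random() < -g:      # with probability |g|
                    survivors.append((x, nu))
    return survivors
```

In the actual calculation the jump destination is drawn from the kernel $W_{\nu'\nu}$ and $\epsilon$ is tuned until the mean population is stationary; the sketch only illustrates the bookkeeping of jumps, deaths and branchings.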
\subsection{Two particle scattering amplitude from the Bethe-Peierls boundary conditions in two dimensions}
Consider a collision of two particles with mass 1 in 2D. We are interested in s-wave scattering.
The wave function as a function of the relative coordinate satisfies the Schr\"odinger equation
\begin{equation}
\left\{\left(\frac{d^2}{dr^2}+\frac{1}{r}\frac{d}{dr}\right) + 2\mu(E-V(r))\right\}\psi(r)=0.
\label{Schroedinger_1}
\end{equation}
Here $\mu$ is the reduced mass, $\mu=1/2$. If $V(r)$ is a deep finite range potential, it can be emulated by the Bethe-Peierls boundary condition, which in 2D is formulated on a circle of radius $R_0$ that is small compared to the de Broglie wavelength. The Bethe-Peierls boundary condition reads
\begin{equation}
\frac{d\psi/dr}{\psi}\bigg\vert_{r=R_0}=-\frac{1}{a}.
\label{BP2Dsup}
\end{equation}
The solution of the scattering problem can be written in terms of the Green function describing a free motion of particles in 2D space as follows
\begin{equation}
\psi({\bf r}, t) =\psi_0({\bf r}, t)+\int d^2{\bf r'}\, G({\bf r}-{\bf r'})\,\delta(|{\bf r'}|-R_0)\, f({\bf r'}).
\label{genSolutionsup}
\end{equation}
The function $\psi_0$ describes the incoming plane wave. The Green function describes the scattered wave, and the integration goes over the region where the scattering takes place. The Green function satisfies the equation
\begin{equation}
\left(-\frac{1}{2\mu}\nabla^2 - E\right) G({\bf r}, {\bf r'}) =\delta({\bf r}-{\bf r'}).
\label{GF}
\end{equation}
Introduce the parameter $k=\sqrt{2\mu |E|}$ and the rescaled coordinate ${\bf x}=k{\bf r}$. Then for $E>0$ the scattered wave is described by the function $G(x)=C H_0^{(1)}(x)$ with $x=k|{\bf r}-{\bf r'}|$, where $H_0^{(1)}$ denotes the Hankel function. The constant $C$ is determined by substituting into Eq. (\ref{GF}) and integrating over the circle $|{\bf r}-{\bf r'}|=R_0$. Using the short range approximation $H_0^{(1)}(x)\approx\frac{2 i}{\pi} \ln x+1$, we get $C=\frac{i\mu}{2}$ and hence
\begin{equation}
G(x)=\frac{i\mu}{2} H_0^{(1)}(x).
\end{equation}
For $E<0$ the scattered wave is described by $G(x)= C K_0(x)$, where $K_0(x)$ is the modified Bessel function of the second kind. Using the short-range asymptote $K_0(x)\approx -\ln x$, we fix the constant $C=\mu/\pi$ and obtain (with $x=\sqrt{2\mu|E|}\, r$ and $E<0$)
\begin{equation}
G_E(x)=\frac{\mu}{\pi} K_0(x).
\label{GE_x}
\end{equation}
Now we fix the function $f({\bf r})$ by substituting the formal solution Eq. (\ref{genSolutionsup}) in the boundary condition Eq. (\ref{BP2Dsup}). Thereby the action of the scattering potential is replaced by a function $f({\bf r'})$ on the circle of small radius $R_0$. We obtain two equations, one for the wave function and one for its derivative:
\begin{eqnarray}
\psi({\bf r}, t) =\psi_0({\bf r}, t)+\oint_{|{\bf r}'|=R_0} d{\bf l'}\, G({\bf r}-{\bf r'}) f(R_0), \label{psi}\\
\partial_r\psi({\bf r}, t) =\partial_r\psi_0({\bf r}, t)+\oint_{|{\bf r}'|=R_0} d{\bf l'}\, \partial_r G({\bf r}-{\bf r'}) f(R_0). \label{dr_psi}
\end{eqnarray}
s-wave scattering implies the angular independence of $f(R_0)$, which allows us to take it outside the integral. Furthermore, at small distances (correspondingly, large wave vectors), one can neglect the energy in Eq. (\ref{GF}) for the Green function. In this way the equation for the Green function acquires the form of the equation for the Coulomb potential in 2D. Putting $\mu=1/2$, we get
\begin{equation}
\nabla^2 G({\bf r}, {\bf r'}) =-\delta({\bf r}-{\bf r'}).
\end{equation}
The condition for that approximation reads
\begin{equation}
\sqrt{2\mu |E|} R_0\ll 1,
\end{equation}
which determines the small parameter in the following derivations.
Furthermore, using the Gauss theorem, we can interpret the integral over the circle as the potential created by a homogeneous charge distribution on the circle, which in turn equals the potential of the total charge placed at the center of the circle. It follows that the result of the integration does not change if we replace the argument $R_0$ by zero in the Green function. Applying this line of argument we obtain
\begin{equation}
\oint_{|r'|=R_0} G_E({\bf r}-{\bf r'}) f(R_0) d{\bf l} = f(R_0)\oint_{|r'|=R_0} G_E({\bf r}-{\bf R_0}) d{\bf l}=
2\pi R_0 f(R_0) G_E(r).
\label{Coulomb}
\end{equation}
Now we can evaluate the integrals in Eqs. (\ref{psi}), (\ref{dr_psi}) using Eq. (\ref{Coulomb}). Furthermore, since the derivative of the incoming wave $\partial_r\psi_0$ is a smooth function at small $r$ whereas the Green function develops a singularity, one can neglect the term $\partial_r\psi_0$ in Eq. (\ref{dr_psi}). Then the boundary condition assumes the form
\begin{equation}
\frac{\partial_r\psi}{\psi}\vert_{r=R_0}=\frac{2\pi R_0 f(R_0)\partial_r G_E(R_0)}{\psi_0(R_0)+2\pi R_0 f(R_0) G_E(R_0)} =-\frac{1}{a}.
\label{BC1}
\end{equation}
Using the asymptote of the Green function at small $r$, and replacing $\psi_0(R_0)\approx 1$, we solve Eq. (\ref{BC1}) with respect to $f(R_0)$. The solution has the form
\begin{eqnarray}
&&
f(R_0)=\frac{1}{2\mu R_0\left[\ln\left(k R_0 e^{a/R_0}\right)-i\pi/2\right]}, \ \mbox{for} \ E>0, \label{f_R>sup}\\
&&
f(R_0)=\frac{1}{2\mu R_0\ln\left(k R_0 e^{a/R_0}\right)} , \ \mbox{for} \ E<0.
\label{f_R<}
\end{eqnarray}
Furthermore, in the case $E>0$ we can relate $f(R_0)$ to the scattering amplitude. Substituting $f(R_0)$ into the general solution Eq. (\ref{genSolution}), we obtain
\begin{equation}
\psi({\bf r}, t) =\psi_0({\bf r}, t)+i\pi \mu R_0 f(R_0) H_0^{(1)}(kr).
\label{scattered_wave1}
\end{equation}
Using the large distance asymptote
\begin{equation}
H_0^{(1)}(x)=\sqrt{\frac{2}{\pi x}}e^{ix}e^{-i\pi/4}
\end{equation}
we write down Eq. (\ref{scattered_wave1}) in the form
\begin{equation}
\psi({\bf r}, t) =\psi_0({\bf r}, t)+e^{i\frac{\pi}{4}} \sqrt{\frac{2\pi}{k}} \mu R_0 f(R_0) \frac{e^{ikr}}{\sqrt{r}},
\label{scattered_wave}
\end{equation}
from which we identify the scattering amplitude as
\begin{equation}
A=\sqrt{\frac{2\pi}{k}} \mu R_0 f(R_0)=\frac{\sqrt{\pi}}{\sqrt{2k}\left[\ln\left[k R_0 e^{\frac{a}{R_0}}\right]-i\pi/2\right]},
\label{scatt_amplitude_1}
\end{equation}
where we put $\mu=1/2$.
The continuation of the scattering amplitude to negative energies is obtained by using the expression (\ref{f_R<}) in Eq. (\ref{scatt_amplitude_1}), which results in
\begin{equation}
A=\frac{\sqrt{\pi}}{\sqrt{2k}\ln\left[k R_0 e^{\frac{a}{R_0}}\right]}.
\label{scatt_amplitude}
\end{equation}
For $a>0$, the scattering amplitude has a pole as a function of the energy ($k=\sqrt{|E|}$) at
\begin{equation}
E=-|E|=-\frac{1}{R_0^2}e^{-2\frac{a}{R_0}},
\end{equation}
which corresponds to the formation of a bound molecular state.
For $a<0$, the scattering amplitude remains negative for all energies without showing any resonant structure.
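The pole position can be checked numerically; the sketch below uses illustrative values of $R_0$ and $a$ (not tied to any physical system), in units with $2\mu=1$ so that $k=\sqrt{|E|}$, and confirms that the logarithm in the denominator of Eq. (\ref{scatt_amplitude}) vanishes at the bound-state energy.

```python
import math

R0, a = 0.1, 0.3   # illustrative short-range radius and parameter a > 0

def log_term(E):
    # the denominator ln(k R0 e^{a/R0}) of the scattering amplitude
    k = math.sqrt(abs(E))
    return math.log(k * R0) + a / R0

E_bound = -(1.0 / R0**2) * math.exp(-2.0 * a / R0)
print(log_term(E_bound))  # vanishes at the molecular bound state
```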
To relate the scattering parameter $a$ to the s-wave scattering length in three dimensions, we compare Eq. (\ref{scatt_amplitude}) with the expressions for the scattering amplitude in the presence of a confinement potential derived in Refs. \cite{Petrov2D,Idziaszek}. The comparison results in the following equation
\begin{equation}
\ln\left[k R_0 e^{\frac{a}{R_0}}\right]=-\sqrt{\frac{\pi}{2}}\frac{\ell_0}{a_{\mathrm{3D}}} +\ln\left(k\ell_0\sqrt{\frac{\pi}{2B}}\right).
\label{Comparison}
\end{equation}
Equating $k$-dependent and $k$-independent parts of Eq. (\ref{Comparison}), we obtain
\begin{equation}
\frac{a}{R_0}=-\sqrt{\frac{\pi}{2}}\frac{\ell_0}{a_{\mathrm{3D}}}, \, R_0=\ell_0\sqrt{\frac{\pi}{2B}},
\label{aBP_a3Dsup}
\end{equation}
which constitutes Eq. (3) of the main text of the paper.
\subsection{Structure of the wave function for scattering of two singlet pairs}
We consider scattering of two singlet bound states, which we also call molecules. The total spin of the four-particle state equals 0. We consider only elastic scattering events, so the out-state again consists of two singlet molecules. With this in mind, we introduce the basis $\mathcal{B}=\{\Phi_1, \Phi_2, \Phi_3\}$ in the spin-0 subspace of the spin Hilbert space of the four atoms as follows
\begin{equation}
\Phi_1= |1,4\rangle_s\otimes|2,3\rangle_s, \, \,
\Phi_2= |2,4\rangle_s\otimes|1,3\rangle_s, \, \,
\Phi_3= |3,4\rangle_s\otimes|1,2\rangle_s.
\label{BasisB}
\end{equation}
Here $|i,j\rangle_s$ denotes the singlet state formed by the atoms $(i,j)$; the index of the state $\Phi_i$ corresponds to the number of the atom that forms a singlet with atom 4.
The general two-molecule wave function can now be written as
\begin{equation}
\Psi({\bf r}_1, {\bf r}_2, {\bf r}_3, {\bf r}_4)= \chi_1({\bf r}_1, {\bf r}_2, {\bf r}_3, {\bf r}_4) \Phi_1 + \chi_2({\bf r}_1, {\bf r}_2, {\bf r}_3, {\bf r}_4) \Phi_2+ \chi_3({\bf r}_1, {\bf r}_2, {\bf r}_3, {\bf r}_4) \Phi_3.
\label{4Pwfsup}
\end{equation}
Here $\chi_i$ describes the spatial part of the wave function, $\Phi_i$ the spin part, and ${\bf r}_j$, $j=1,2,3,4$, is the coordinate of the $j$-th atom.
\subsection{Representation of permutation operators in the basis $\mathcal{B}$}
Direct calculation shows that in the basis $(\Phi_1, \Phi_2, \Phi_3)$ the permutation operators are represented by
\begin{equation}
\Pi_{12}=\Pi_{34}=\left(\begin{array}{ccc}
0 & 1 & 0 \\
1 & 0 & 0 \\
0 & 0 & 1
\end{array}\right),
\end{equation}
\begin{equation}
\Pi_{13}=\Pi_{24}=\left(\begin{array}{ccc}
0 & 0 & 1 \\
0 & 1 & 0 \\
1 & 0 & 0
\end{array}\right),
\end{equation}
\begin{equation}
\Pi_{14}=\Pi_{23}=\left(\begin{array}{ccc}
1 & 0 & 0 \\
0 & 0 & 1 \\
0 & 1 & 0
\end{array}\right).
\end{equation}
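These matrices can be verified mechanically. The short sketch below checks that each representation is an involution ($\Pi_{ij}^2$ is the identity) and that the $S_4$ conjugation relation $\Pi_{12}\Pi_{13}\Pi_{12}=\Pi_{23}$ $(=\Pi_{14})$ is reproduced in this representation.

```python
import numpy as np

# Matrix representations of the pair permutations in the basis B
P12 = np.array([[0, 1, 0], [1, 0, 0], [0, 0, 1]])  # Pi_12 = Pi_34
P13 = np.array([[0, 0, 1], [0, 1, 0], [1, 0, 0]])  # Pi_13 = Pi_24
P14 = np.array([[1, 0, 0], [0, 0, 1], [0, 1, 0]])  # Pi_14 = Pi_23
I = np.eye(3, dtype=int)

print(all(np.array_equal(P @ P, I) for P in (P12, P13, P14)))  # True
print(np.array_equal(P12 @ P13 @ P12, P14))                    # True
```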
Representations of other permutation operators are obtained according to the obvious relation $\Pi_{ij}=\Pi_{ji}$.
\subsection{Projectors on the $F=0$ and $F=2$ scattering channels in the basis $\mathcal{B}$}
We denote by $\hat{P}_{ij}^{(\nu)}$ the projection operator onto the subspace in which the atoms $i, j$ have total spin $\nu$, $(\nu=0,2)$. \\
The explicit form of the projector onto the $F=0$ state in terms of spin operators is
\begin{equation}
\hat{P}_{ij}^{(0)}=\frac{1}{12}[(\hat{\bf S}_i+\hat{\bf S}_j)^2-6][(\hat{\bf S}_i+\hat{\bf S}_j)^2-2].
\label{Pij_0}
\end{equation}
Using Eq. (\ref{Pij_0}) and the definition Eq. (\ref{BasisB}), we obtain the following matrix representation of the projectors in the basis $\mathcal{B}$
\begin{equation}
\hat{P}_{12}^{(0)}=\left(\begin{array}{ccc}
0 & 0 & 0 \\
0 & 0 & 0 \\
\frac{1}{3} & \frac{1}{3} & 1
\end{array}
\right), \,
\, \,
\hat{P}_{23}^{(0)}=\left(\begin{array}{ccc}
1 & \frac{1}{3} & \frac{1}{3} \\
0 & 0 & 0 \\
0 & 0 & 0
\end{array}
\right), \,
\, \,
\hat{P}_{31}^{(0)}=\left(\begin{array}{ccc}
0 & 0 & 0 \\
\frac{1}{3} & 1 & \frac{1}{3}\\
0 & 0 & 0
\end{array}
\right).
\label{P_0}
\end{equation}
According to the construction of the basis states, a singlet of atom 4 and atom $i$ implies a singlet of the two complementary atoms $j$ and $k$, where $j,k \neq i, 4$. Therefore, for the projectors involving atom 4, we have
\begin{equation}
\hat{P}_{14}^{(0)}= \hat{P}_{23}^{(0)}, \ \hat{P}_{24}^{(0)}= \hat{P}_{13}^{(0)}, \ \hat{P}_{34}^{(0)}= \hat{P}_{12}^{(0)}.
\end{equation}
The explicit form of the projector onto the $F=2$ state in terms of spin operators is
\begin{equation}
\hat{P}_{ij}^{(2)}=\frac{1}{24}(\hat{\bf S}_i+\hat{\bf S}_j)^2[(\hat{\bf S}_i+\hat{\bf S}_j)^2-2].
\label{Pij_2}
\end{equation}
The matrix representation of the projection operator $\hat{P}_{12}^{(2)}$ is given by
\begin{equation}
\hat{P}_{12}^{(2)}=\frac{1}{6}\left(\begin{array}{ccc}
3 & 3 & 0 \\
3 &3 & 0 \\
- 2 & - 2 & 0
\end{array}\right).
\label{P2-matrix}
\end{equation}
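Both projector formulas and the matrix representations can be verified directly. For two spin-1 atoms, $(\hat{\bf S}_i+\hat{\bf S}_j)^2$ takes the values $s=S(S+1)\in\{0,2,6\}$, and the polynomials in Eqs. (\ref{Pij_0}), (\ref{Pij_2}) equal $1$ on the intended channel and $0$ on the other two. The matrices (\ref{P_0}), (\ref{P2-matrix}) are idempotent and mutually orthogonal (they need not be symmetric, since the basis $\mathcal{B}$ is not orthogonal). The sketch below checks all of this.

```python
import numpy as np

# Eigenvalue check: for two spin-1 atoms (S_i + S_j)^2 takes the values
# s = S(S+1) in {0, 2, 6}; the polynomials project onto S=0 and S=2.
p0 = lambda s: (s - 6) * (s - 2) / 12
p2 = lambda s: s * (s - 2) / 24
print([p0(s) for s in (0, 2, 6)], [p2(s) for s in (0, 2, 6)])

# Matrix identities for the pair (1,2) in the (non-orthogonal) basis B
P0 = np.array([[0, 0, 0], [0, 0, 0], [1 / 3, 1 / 3, 1]])
P2 = np.array([[3.0, 3, 0], [3, 3, 0], [-2, -2, 0]]) / 6
print(np.allclose(P0 @ P0, P0), np.allclose(P2 @ P2, P2))  # True True
print(np.allclose(P0 @ P2, 0), np.allclose(P2 @ P0, 0))    # True True
```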
Other projection operators $P^{(2)}_{ij}$ are obtained by action of the permutation operator on $\hat{P}_{12}^{(2)}$ according to the rule
\begin{equation}
P^{(2)}_{ij}=\Pi_{1i}\Pi_{2j}P^{(2)}_{12}\Pi_{1i}\Pi_{2j}.
\end{equation}
\subsection{Derivation of STM equations}
Now the Bethe-Peierls boundary conditions can be formulated for each pair of particles $i,j$ as
\begin{equation}
\left(\partial_{r_{ij}}\hat{P}_{ij}^{(\nu)}\boldsymbol{\chi}+\frac{1}{a_{\nu}} \hat{P}_{ij}^{(\nu)}\boldsymbol{\chi}\right)\bigg|_{|{\bf r}_{ij}|=R_0}=0,
\label{BP_chi}
\end{equation}
where $\boldsymbol{\chi}=(\chi_1, \chi_2, \chi_3)^T$.
The boundary conditions Eq. (\ref{BP_chi}) are formulated on the circle $r_{ij}=R_0$.
The general solution in terms of Green's function can be written as (cf. Eq. (\ref{genSolution}))
\begin{equation}
\boldsymbol{\chi}=\boldsymbol{\chi}_0+\sum_{\langle i, j\rangle} \int_{|{\bf r}'_i-{\bf r}'_j|=R_0}G_E({\bf X}-{\bf X}') \boldsymbol{f}^{ij}({\bf X'}) d{\bf X'},
\label{genSol_3body}
\end{equation}
where
\begin{equation}
\left(\frac{1}{2}\nabla^2_{\bf X}+E \right)G_{E}({\bf X-X'})=- \delta({\bf X-X'}),
\label{GF4-eq}
\end{equation}
and we introduced 8-dimensional coordinate vectors
\begin{equation}
{\bf X}=({\bf r}_1, {\bf r}_2, {\bf r}_3, {\bf r}_4), \, \, \, {\bf X}'=({\bf r}'_1, {\bf r}'_2, {\bf r}'_3, {\bf r}'_4).
\label{X}
\end{equation}
In Eq. (\ref{genSol_3body}) we introduced a vector-valued function $\boldsymbol{f}^{ij}=(f^{ij}_1, f^{ij}_2, f^{ij}_3)^{T}$ for each pair of atoms $(ij)$, with its domain given by the cylinder $r'_{ij}=R_0$.
The symmetry of the wave function under permutations of atoms induces linear relationships between the functions
$\boldsymbol{f}^{ij}$.
\begin{eqnarray}
&&
\hat{\Pi}_{ij} \boldsymbol{f}^{ij}=\boldsymbol{f}^{ij}, \label{f12a}\\
&&
\hat{\Pi}_{ij} \boldsymbol{f}^{jk}=\boldsymbol{f}^{ik},
\label{f12b}
\end{eqnarray}
where we suppressed the spatial arguments.
Note that no summation over repeated indices is implied in Eqs. (\ref{f12a}), (\ref{f12b}).
For $\boldsymbol{f}^{12}$, Eq. (\ref{f12a}) implies
\begin{equation}
f^{12}_1=f^{12}_2,
\end{equation}
while $f^{12}_3$ remains unconstrained.
It follows that the function $\boldsymbol{f}^{12}({\bf X})$ can be parametrized by two independent functions $\alpha({\bf X})$ and $\beta({\bf X})$
\begin{equation}
\boldsymbol{f}^{12}=\alpha \left(
\begin{array}{c}
1 \\ 1\\ 0
\end{array}
\right)
+\beta \left(
\begin{array}{c}
0 \\ 0\\ 1
\end{array}
\right) .
\label{f12_alphabeta_next}
\end{equation}
Applying the relations Eqs. (\ref{f12b}) to Eq. (\ref{f12_alphabeta_next}), we obtain $\boldsymbol{f}^{34}=\boldsymbol{f}^{12}$, and
\begin{eqnarray}
&&
\boldsymbol{f}^{23}=\boldsymbol{f}^{14}=\alpha \left(
\begin{array}{c}
0 \\ 1\\ 1
\end{array}
\right)
+\beta \left(
\begin{array}{c}
1 \\ 0\\ 0
\end{array}
\right),
\label{f23_alphabeta_2} \\
&&
\boldsymbol{f}^{13}=\boldsymbol{f}^{24}=\alpha \left(
\begin{array}{c}
1 \\ 0 \\ 1
\end{array}
\right)
+\beta \left(
\begin{array}{c}
0 \\ 1 \\ 0
\end{array}
\right) .
\label{f12_alphabeta-next2}
\end{eqnarray}
In terms of the functions $\alpha({\bf X})$ and $\beta({\bf X})$, the general solution for the wave function acquires the form
\begin{eqnarray}
\nonumber &&
\left(\begin{array}{c}
\chi_1({\bf X}) \\ \chi_2({\bf X}) \\ \chi_3({\bf X})
\end{array}\right) =
\left(\begin{array}{c}
\chi_1^0({\bf X}) \\ \chi_2^0 ({\bf X})\\ \chi_3^0({\bf X})
\end{array}\right)
+\left\{ \left(\int_{|{\bf r}'_1-{\bf r}'_2|=R_0}+\int_{|{\bf r}'_3-{\bf r}'_4|=R_0}\right)
\left(\begin{array}{c} \alpha({\bf X}') \\ \alpha({\bf X}') \\ \beta({\bf X}') \end{array}\right) \right. \\
\nonumber &&
\left.
+\left(\int_{|{\bf r}'_1-{\bf r}'_3|=R_0}+\int_{|{\bf r}'_2-{\bf r}'_4|=R_0} \right)
\left(\begin{array}{c} \alpha({\bf X}') \\ \beta({\bf X}') \\ \alpha({\bf X}') \end{array}\right)
+\left(\int_{|{\bf r}'_1-{\bf r}'_4|=R_0} +\int_{|{\bf r}'_2-{\bf r}'_3|=R_0}\right)
\left(\begin{array}{c} \beta({\bf X}') \\ \alpha({\bf X}') \\ \alpha({\bf X}') \end{array}\right) \right\}
G_E({\bf X}-{\bf X}') d{\bf X}',\\
\label{genSol-alpha-beta}
\end{eqnarray}
Now we apply the boundary condition Eq. (\ref{BP_chi}) to the general form Eq. (\ref{genSol-alpha-beta}) and derive equations for the functions $\alpha$ and $\beta$. For instance, applying the boundary condition Eq. (\ref{BP_chi}) at $|{\bf r}_1-{\bf r}_2|=R_0$ in the $\nu=2$ channel leads to the equation
\begin{eqnarray}
\nonumber &&
\partial_{r_{12}}(\chi_1^0 + \chi_2^0)\bigg|_{|{\bf r}_{12}|=R_0} +
2\partial_{r_{12}} \left\{\left[\int_{|{\bf r}'_{12}|=R_0} + \int_{|{\bf r}'_{34}|=R_0}\right] G_E({\bf X}-{\bf X}') \alpha({\bf X}') d{\bf X}' + \right. \\
\nonumber &&
\left.
\left[\int_{|{\bf r}'_{13}|=R_0} + \int_{|{\bf r}'_{14}|=R_0}+\int_{|{\bf r}'_{23}|=R_0}+\int_{|{\bf r}'_{24}|=R_0}\right] G_E({\bf X}-{\bf X}') \left(\alpha({\bf X}') +\beta({\bf X}') \right) d{\bf X}' \right\}\bigg|_{|{\bf r}_{12}|=R_0} = \\
\nonumber &&
-\frac{1}{a_2} \left\{(\chi_1^0 + \chi_2^0)\bigg|_{|r_{12}|=R_0} +
2 \left[\int_{|{\bf r}'_{12}|=R_0} + \int_{|{\bf r}'_{34}|=R_0}\right] G_E({\bf X}-{\bf X}') \alpha({\bf X}') d{\bf X}' + \right. \\
\nonumber &&
\left.
\left[\int_{|{\bf r}'_{13}|=R_0} + \int_{|{\bf r}'_{14}|=R_0}+\int_{|{\bf r}'_{23}|=R_0}+\int_{|{\bf r}'_{24}|=R_0}\right]
G_E({\bf X}-{\bf X}') \left(\alpha({\bf X}') +\beta({\bf X}') \right) d{\bf X}'
\right\}\bigg|_{|{\bf r}_{12}|=R_0}.
\end{eqnarray}
On the left-hand side one can keep only the most singular term as $r_{12}\rightarrow R_0$, in which the derivative of the Green function is taken with respect to the variable $r_{12}$, normal to the scattering surface at which the boundary condition is imposed. We obtain
\begin{eqnarray}
\nonumber &&
\int_{|{\bf r}'_{12}|=R_0}\partial_{r_{12}} G_E({\bf X}-{\bf X}') \left(2 \alpha({\bf X}') \right) d{\bf X}' \bigg|_{|{\bf r}_{12}|=R_0} = \mathcal{I}_0^{(2)}-\frac{1}{a_2} \left\{
\left[\int_{|{\bf r}'_{12}|=R_0} + \int_{|{\bf r}'_{34}|=R_0}\right] G_E({\bf X}-{\bf X}') (2\alpha({\bf X}')) d{\bf X}' + \right. \\
&&
\left.
\left[\int_{|{\bf r}'_{13}|=R_0} + \int_{|{\bf r}'_{14}|=R_0}+\int_{|{\bf r}'_{23}|=R_0}+\int_{|{\bf r}'_{24}|=R_0}\right]
G_E({\bf X}-{\bf X}') \left(\alpha({\bf X}') +\beta({\bf X}') \right) d{\bf X}'
\right\}\bigg|_{|{\bf r}_{12}|=R_0}.
\label{BC2-alpha-beta}
\end{eqnarray}
Here
\begin{equation}
\mathcal{I}_0^{(2)}=-\left(\frac{1}{a_2}+\partial_{r_{12}}\right)(\chi_1^0 + \chi_2^0)\bigg|_{|{\bf r}_{12}|=R_0}
\label{Source2}
\end{equation}
denotes the source field, describing the incoming wave in the $F=2$ channel.
Analogously, in the $F=0$ channel, we obtain
\begin{eqnarray}
\nonumber &&
\int_{|{\bf r}'_{12}|=R_0}\partial_{r_{12}} G_E({\bf X}-{\bf X}') \left(\frac{2}{3}\alpha({\bf X}') +
\beta({\bf X}') \right) d{\bf X}'\bigg|_{|{\bf r}_{12}|=R_0} = \\
\nonumber &&
\mathcal{I}_0^{(0)} -\frac{1}{a_0} \left\{\left[\int_{|{\bf r}'_{12}|=R_0} + \int_{|{\bf r}'_{34}|=R_0}\right]G_E({\bf X}-{\bf X}') \left(\frac{2}{3}\alpha({\bf X}') +
\beta({\bf X}') \right) d{\bf X}' + \right. \\
&&
\left.
\left[\int_{|{\bf r}'_{13}|=R_0} + \int_{|{\bf r}'_{14}|=R_0}+\int_{|{\bf r}'_{23}|=R_0}+\int_{|{\bf r}'_{24}|=R_0}\right]
G_E({\bf X}-{\bf X}') \left(\frac{4}{3}\alpha({\bf X}') +\frac{1}{3}\beta({\bf X}') \right) d{\bf X}'
\right\}\bigg|_{|{\bf r}_{12}|=R_0},
\label{BC0-alpha-beta}
\end{eqnarray}
where
\begin{equation}
\mathcal{I}_0^{(0)}=-\left(\frac{1}{a_0}+\partial_{r_{12}}\right)\left[\frac{1}{3}(\chi_1^0 + \chi_2^0) + \chi_3^0\right]\bigg|_{|{\bf r}_{12}|=R_0}
\end{equation}
denotes the source field, describing the incoming wave in the $F=0$ channel.
Due to the permutation symmetry, the boundary conditions at other pairs of points $|{\bf r}_{ij}|=R_0$ do not lead to new independent equations.
\subsubsection{Transition to relative coordinates and separation of singularity}
The center of mass coordinate of four atoms is given by ${\bf R}=\frac{1}{4}({\bf r}_1+{\bf r}_2+{\bf r}_3+{\bf r}_4)$.
For the following calculations we introduce the set of relative (Jacobi) coordinates:
\begin{eqnarray}
{\bf z}=({\bf r}_3-{\bf r}_4), & {\bf y}=({\bf r}_1-{\bf r}_2), & {\bf x}=\frac{1}{\sqrt{2}}[({\bf r}_3+{\bf r}_4)-({\bf r}_1+{\bf r}_2)].
\label{relative_coordinatessup}
\end{eqnarray}
The boundary condition at $|{\bf r}_{12}|=R_0$ now acquires the form $|{\bf y}|=R_0$.
We go to the center-of-mass system by integrating Eqs. (\ref{BC2-alpha-beta}), (\ref{BC0-alpha-beta}) over the center of mass coordinate ${\bf R}$. The resulting equations depend only on the relative coordinates given by Eq. (\ref{relative_coordinatessup}).
The free four-particle Green function in relative coordinates satisfies the equation
\begin{equation}
(\nabla^2_{\bf x}+\nabla^2_{\bf y}+\nabla^2_{\bf z}+E)G_{E}({\bf x-x'} , {\bf y-y'}, {\bf z-z'})=- 2 \delta({\bf x-x'}) \delta({\bf y-y'}) \delta({\bf z-z'}),
\label{4PGF-eq}
\end{equation}
which is an equation for a Green function of a free particle in 6 dimensions. In the case of negative energy $E=-|E|<0$ the solution of Eq. (\ref{4PGF-eq}) can be written in the form
\begin{equation}
G_{ E } (\mathbf Z) = \vert E \vert^2 G_0(\sqrt{\vert E \vert } |\mathbf Z|),
\end{equation}
where
\begin{equation}
G_0(\xi) =\frac{K_2( \xi)}{4\pi^3 \xi^2 },
\end{equation}
and ${\bf Z}=({\bf x}, {\bf y}, {\bf z})$ is the 6-dimensional vector of relative coordinates. $K_2(\xi)$ denotes the modified Bessel function of the second kind.
The Fourier transformed Green function is given by
\begin{equation}
G_{0}({\bf K})=\frac{2}{\left|{\bf K}\right|^2+1},
\end{equation}
where ${\bf K}=({\bf k}_{\bf x}, {\bf k}_{\bf y}, {\bf k}_{\bf z})$ is the 6-dimensional wave vector.
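The radial form of the Green function can be verified numerically (a standard-library sketch; the constant prefactor $1/4\pi^3$ is immaterial here): away from the origin, $G_0(\xi)\propto K_2(\xi)/\xi^2$ must solve the homogeneous radial equation in six dimensions, $G_0''+\frac{5}{\xi}G_0'-G_0=0$.

```python
import math

def k2(z, tmax=12.0, n=24000):
    # K_2(z) = \int_0^\infty exp(-z cosh t) cosh(2t) dt  (trapezoid rule)
    dt = tmax / n
    s = 0.0
    for i in range(n + 1):
        t = i * dt
        w = 0.5 if i in (0, n) else 1.0
        s += w * math.exp(-z * math.cosh(t)) * math.cosh(2 * t)
    return s * dt

def g(xi):
    return k2(xi) / xi**2  # prefactor 1/(4 pi^3) dropped

# finite-difference check of G'' + (5/xi) G' - G = 0 at xi = 2
xi, h = 2.0, 1e-2
g0, gp, gm = g(xi), g(xi + h), g(xi - h)
d1 = (gp - gm) / (2 * h)
d2 = (gp - 2 * g0 + gm) / h**2
print(d2 + 5 * d1 / xi - g0)  # close to 0
```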
To simplify the form of the STM equations further, we introduce the source fields corresponding specifically to the $F=0$ and $F=2$ channels as follows
\begin{equation}
f_0({\bf Z})=2\pi R_0\left(\frac{2}{3}\alpha({\bf Z})+\beta({\bf Z})\right), \, \, f_2({\bf Z})=4\pi R_0\alpha({\bf Z}).
\label{def_f0f2}
\end{equation}
Eqs. (\ref{BC2-alpha-beta}), (\ref{BC0-alpha-beta}) develop singularities at the surfaces $|{\bf r}'_{12}|=|{\bf r}_{12}|=R_0$, which in relative coordinates transform to $|{\bf y}|=|{\bf y}'|=R_0$, i.e., when the collision surface coincides with the surface at which the boundary condition is imposed.
The singularities are dealt with by subtraction and addition of
$f_2({\bf x}, R_0, {\bf z})$ or $f_0({\bf x}, R_0, {\bf z})$ for Eqs. (\ref{BC2-alpha-beta}), (\ref{BC0-alpha-beta}) respectively, similarly to the procedure described in Ref. \cite{Petrov}.
Finally, we introduce dimensionless coordinates by re-scaling $\mathbf x \to \mathbf x /\sqrt{\vert E \vert}$. Then the STM equations acquire the following explicit form
\begin{eqnarray}
\nonumber &&
\frac{f_0({\bf x}, {\bf z})}{\gamma_0}=\frac{1}{3}(\chi_1^0({\bf x}, 0, {\bf z}) + \chi_2^0({\bf x}, 0, {\bf z})) + \chi_3^0({\bf x}, 0, {\bf z}) + \\
\nonumber &&
\int[f_0({\bf x'}, {\bf z}')-f_0({\bf x}, {\bf z})]
G_0({\bf x}-{\bf x'}, 0, {\bf z}-{\bf z}') d^2{\bf x}' d^2{\bf z'} + \\
\nonumber &&
\int G_0 \left(\sqrt{({\bf x}+{\bf x'})^2+{\bf y'}^2+{\bf z}^2}\right) f_0({\bf x'}, {\bf y'}) d{\bf x}' d{\bf y}'+ \\
\nonumber &&
2\int \left[G_0\left(\sqrt{{\bf z}^2+{\bf x}^2+\sqrt{2} {\bf z}\cdot{\bf x'} +(\sqrt{2}{\bf x}-{\bf z})\cdot{\bf z'}+{\bf x'}^2+{\bf z'}^2}\right) + \right. \\
\nonumber &&
\left.
G_0\left(\sqrt{{\bf z}^2+{\bf x}^2 - \sqrt{2} {\bf z}\cdot{\bf x'} - (\sqrt{2}{\bf x}+{\bf z})\cdot{\bf z'}+{\bf x'}^2+{\bf z'}^2}\right) \right] \left(\frac{1}{3}f_0({\bf x}', {\bf z'}) + \frac{5}{9}f_2({\bf x}', {\bf z'}) \right) d{\bf x}' d{\bf z'}, \\
\label{0-channel_fin}
\end{eqnarray}
\begin{eqnarray}
\nonumber &&
\frac{f_2({\bf x}, {\bf z})}{\gamma_2}=(\chi_1^0({\bf x}, 0, {\bf z}) + \chi_2^0({\bf x}, 0, {\bf z})) + \\
\nonumber &&
\int[f_2({\bf x'}, {\bf z}')-f_2({\bf x}, {\bf z})]
G_0({\bf x}-{\bf x'}, 0, {\bf z}-{\bf z}') d^2{\bf x}' d^2{\bf z'} + \\
\nonumber &&
\int G_0\left(\sqrt{({\bf x}+{\bf x'})^2+{\bf y'}^2+{\bf z}^2}\right) f_2({\bf x'}, {\bf y'}) d{\bf x}' d{\bf y}' + \\
\nonumber &&
2\int G_0\left(\sqrt{{\bf z}^2+{\bf x}^2+\sqrt{2} {\bf z}\cdot{\bf x'} +(\sqrt{2}{\bf x}-{\bf z})\cdot{\bf z'}+{\bf x'}^2+{\bf z'}^2}\right)
\left(f_0({\bf x}', {\bf z'}) +\frac{1}{6}f_2({\bf x}', {\bf z'}) \right) d{\bf x}' d{\bf z'} +\\
\nonumber &&
2 \int G_0\left(\sqrt{{\bf z}^2+{\bf x}^2 - \sqrt{2} {\bf z}\cdot{\bf x'} - (\sqrt{2}{\bf x}+{\bf z})\cdot{\bf z'}+{\bf x'}^2+{\bf z'}^2}\right)
\left(f_0({\bf x}', {\bf z'}) +\frac{1}{6}f_2({\bf x}', {\bf z'}) \right) d{\bf x}' d{\bf z}'. \\
\label{2-channel_fin}
\end{eqnarray}
All distances are measured in units of $1/\sqrt{\vert E \vert}$, and all microscopic scattering parameters enter only through the two constants $\gamma_0$ and $\gamma_2$, which are defined as follows
\begin{equation}
\gamma_0=\frac{\pi}{\ln\left(R_0\sqrt{|E|} e^{a_0/R_0}\right) }, \, \, \gamma_2=\frac{\pi}{\ln\left(R_0\sqrt{|E|}e^{a_2/R_0}\right)}.
\label{gamma_02}
\end{equation}
The Fourier transform of Eqs. (\ref{0-channel_fin}), (\ref{2-channel_fin}) is performed using the following Fourier representations of the Green functions
\begin{eqnarray}
\nonumber &&
G_0\left(\sqrt{({\bf x}+{\bf x'})^2+{\bf y'}^2+{\bf z}^2}\right)= \\
&&
\int\frac{d^2{\bf k}}{(2\pi)^2}\frac{d^2{\bf p}}{(2\pi)^2}\frac{d^2{\bf q}}{(2\pi)^2}\frac{2}{1+k^2+p^2+q^2} \exp\left[i\left\{{\bf k}\cdot {\bf z}+{\bf p}\cdot({\bf x}+{\bf x'})-{\bf q}\cdot{\bf y'}\right\}\right],
\end{eqnarray}
and
\begin{eqnarray}
\nonumber &&
G_0\left(\sqrt{{\bf z}^2+{\bf x}^2\pm\sqrt{2} {\bf z}\cdot{\bf x'}-({\bf z}\mp\sqrt{2}{\bf x})\cdot{\bf z'}+{\bf x'}^2+{\bf z'}^2}\right) = \\
&& \int\frac{d^2{\bf k}}{(2\pi)^2}\frac{d^2{\bf p}}{(2\pi)^2}\frac{d^2{\bf q}}{(2\pi)^2}\frac{2}{1+k^2+p^2+q^2}
\exp\left[i\left\{{\bf k}\cdot\frac{{\bf z}-{\bf z'}}{\sqrt{2}}+{\bf p}\cdot \left(\frac{{\bf z}}{\sqrt{2}}\pm{\bf x'}\right)+{\bf q}\cdot \left(\frac{{\bf z'}}{\sqrt{2}}\pm {\bf x}\right)\right\}\right].
\end{eqnarray}
Below we demonstrate the representation of two typical terms in Eqs. (\ref{0-channel_fin}), (\ref{2-channel_fin}) in terms of Fourier components, from which the Fourier transform becomes apparent
\begin{eqnarray}
\nonumber &&
\int G_0 \left(\sqrt{({\bf x}+{\bf x'})^2+{\bf y'}^2+{\bf z}^2}\right) f({\bf x'}, {\bf y'}) d{\bf x}' d{\bf y}'=\\
\nonumber &&
\int\frac{d^2{\bf k}}{(2\pi)^2}\frac{d^2{\bf p}}{(2\pi)^2}\frac{d^2{\bf q}}{(2\pi)^2}\frac{2}{1+k^2+p^2+q^2}
e^{i{\bf p}\cdot({\bf x}+{\bf x'})}e^{-i{\bf q}\cdot{\bf y'}}e^{i{\bf k}\cdot {\bf z}}f({\bf x'}, {\bf y'})
=\\
&&
\int\frac{d^2{\bf k}}{(2\pi)^2}\frac{d^2{\bf p}}{(2\pi)^2}\frac{d^2{\bf q}}{(2\pi)^2}\frac{2}{1+k^2+p^2+q^2}f(-{\bf p}, {\bf q})
e^{i({\bf p}\cdot{\bf x}+{\bf k}\cdot {\bf z})}.
\label{Fourier1}
\end{eqnarray}
This leads to the first term in the right hand side of Eq. (12) in the main text of the paper.
Furthermore
\begin{eqnarray}
\nonumber &&
\int G_0\left(\sqrt{{\bf z}^2+{\bf x}^2+\sqrt{2} {\bf z}\cdot{\bf x'} +(\sqrt{2}{\bf x}-{\bf z})\cdot{\bf z'}+{\bf x'}^2+{\bf z'}^2}\right) f({\bf x}', {\bf z'})\, d{\bf x}' d{\bf z}' = \\
\nonumber &&
\int\frac{d^2{\bf k}}{(2\pi)^2}\frac{d^2{\bf p}}{(2\pi)^2}\frac{d^2{\bf q}}{(2\pi)^2}\frac{2}{1+k^2+p^2+q^2}
\int\frac{d^2{\bf k'}}{(2\pi)^2}\frac{d^2{\bf q'}}{(2\pi)^2}\int d{\bf x}' d{\bf z}' f({\bf k'}, {\bf q'})
e^{i{\bf x'}\cdot({\bf k'}+{\bf p})} e^{i{\bf z'}\cdot\left({\bf q'}+\frac{{\bf q}-{\bf k}}{\sqrt{2}}\right)}
e^{i{\bf z}\cdot\left(\frac{{\bf k}+{\bf p}}{\sqrt{2}}\right)} e^{i{\bf x}\cdot{\bf q}}=\\
&&
\int\frac{d^2{\bf k}}{(2\pi)^2}\frac{d^2{\bf p}}{(2\pi)^2}\frac{d^2{\bf q}}{(2\pi)^2}\frac{2}{1+k^2+p^2+q^2}
f\left(-{\bf p}, \frac{{\bf k}-{\bf q}}{\sqrt{2}}\right) e^{i\frac{{\bf k}+{\bf p}}{\sqrt{2}}\cdot{\bf z}}
e^{i{\bf q}\cdot{\bf x}}.
\end{eqnarray}
Introducing new variables
\begin{equation}
{\bf P}=-\frac{{\bf q}+{\bf p}}{\sqrt{2}}, \, \, \, {\bf Q}=\frac{{\bf q}-{\bf p}}{\sqrt{2}},
\label{PQ}
\end{equation}
we obtain
\begin{equation}
-{\bf p}=\frac{1}{\sqrt{2}}({\bf P}+{\bf Q}), \, \, \, \frac{{\bf k}-{\bf q}}{\sqrt{2}}=\frac{1}{\sqrt{2}}({\bf k}+{\bf P}-{\bf Q}) ,
\label{pq}
\end{equation}
which leads to the last term in the right hand side of Eq. (12) in the main text of the paper with $s=-1$.
\end{document}
\section{Introduction} \label{introduction}
The properties of quantum electron on a fractal substrate and under the influence of a magnetic field were studied long ago in the physics literature~\cite{Alexanderetal,Alexander,Rammal,AO,Rammal2,Ghez} as part of a more general program involving quasiperiodic media~\cite{Simon,Bel}, but until recently there has been no mathematically rigorous model for even formulating a magnetic Schr\"{o}dinger equation on a self-similar fractal set. We remedy this in the special case of the Sierpinski Gasket with certain simple magnetic fields using mathematical developments from the study of diffusions and Laplacian-type operators on fractals using probability and functional analysis (see~\cite{BarlowPerkins,Kigamibook,Strichartzbook} and references therein) and the recent introduction of differential forms associated to this structure~\cite{CS,IRT,ACSY,CSetal,HT,hinzetal,HR}. These developments in analysis on fractals have benefited from and contributed to the understanding of quantum and statistical physics~\cite{ADT09,ADT10,ABDTV12,Dunne12,Akk}.
Our goal in this paper is to introduce a mathematically rigorous Schr\"{o}dinger equation for a magnetic operator on the Sierpinski Gasket (SG), following the methods of~\cite{IRT,HT,hinzetal,HR}, and study its spectrum, which by~\cite{HR} is discrete and accumulates only at $\infty$ (Theorem~\ref{thm:mainresultofHR}). For reasons of mathematical simplicity we consider a somewhat unphysical situation in which the magnetic field has non-zero flux through only finitely many of the ``holes'' in the gasket. In this situation we are able to prove that the magnetic operator may be approximated in an appropriate sense by a renormalized sequence of magnetic operators on approximating graphs (Theorems~\ref{thm:DFamconvergestoDFa} and~\ref{thm:convergeofMagm}). This approximation generalizes the well-known approximation of a Dirichlet form on SG by renormalized graph Dirichlet forms~\cite{Kigami89,Kigamibook}. The approximating magnetic operators provide a method for numerical study of the spectrum and some data of this type is in Section~\ref{sec:spectra}. Guided by the observations in this data and using the description of the Laplacian spectrum from the spectral decimation method~\cite{RammalToulouse,FukushimaShima,MT} we show that a field through only finitely many holes of SG modifies only those eigenvalues for which the eigenfunctions have support enclosing these holes (Theorems~\ref{thm:conjugateLapefns} and~\ref{thm:spectrumofMaga}), and conclude that the spectral asymptotics of the magnetic operator are the same as those of the Laplacian (Corollary~\ref{cor:spectasymptotics}). In principle, for any magnetic field of this type, one can use our methods to compute the bulk of the spectrum and the associated eigenfunctions by applying suitable gauge transformations to Laplacian eigenfunctions. 
For the small (asymptotically vanishing) portion of the spectrum that is not found by this method one can choose $\lambda$ and compute all eigenvalues of size less than $\lambda$ by solving finitely many linear algebra problems. We also give a description of the basic modification that a magnetic field makes to the Laplacian spectrum by examining periodic functions on a covering space (Section~\ref{sec:ladderfractafold}). In the case of SG the relevant covering space is a fractafold (as defined in~\cite{STR03}) called the Sierpinski Ladder~\cite{ST12}.
\section{Analysis and $1$-forms on SG}\label{sec:SG}
The Sierpinski Gasket (SG) is the attractor of the Iterated Function System $\{ F_j = \frac{1}{2} (x - p_j ) + p_j \}$, $j=0,1,2$, for $\{p_{j}\}$ the vertices of an equilateral triangle in $\mathbb{R}^2$. The image of SG under an $m$-fold composition of these maps is called an $m$-cell. We index these by words: let $w=w_{1}w_{2}\dotsc w_{m}\in\{0,1,2\}^{m}$ be a word of length $|w|=m$ and $F_{w}=F_{w_{1}}\circ \dotsm \circ F_{w_{m}}$. Then $F_{w}(\SG)$ is an $m$-cell. From the cellular structure of SG we obtain a sequence of graphs. Let $V_{0}=\{p_{0},p_{1},p_{2}\}$ and inductively $V_{m}=\cup_{j=0,1,2} F_{j}(V_{m-1})$. The $m^{\text{th}}$-level graph approximation of SG is the graph with vertices $V_{m}$ and edges between pairs of vertices that are contained in a common $m$-cell. We write $x\sim_{m}y$ to denote that there is an edge between $x,y\in V_{m}$ in the $m$-scale graph. The set $V_{\ast}=\cup_{m}V_{m}$ is dense in SG.
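To make the cell structure concrete, here is a short illustrative script (our own construction, not taken from the references) that builds $V_{m}$ and the edge relation $\sim_{m}$; since the combinatorics is independent of the embedding, it uses the right triangle $p_0=(0,0)$, $p_1=(1,0)$, $p_2=(0,1)$ with exact rational arithmetic. It confirms the standard counts $|V_{m}|=3(3^{m}+1)/2$ vertices and $3^{m+1}$ edges.

```python
from fractions import Fraction as Fr
from itertools import product

P = [(Fr(0), Fr(0)), (Fr(1), Fr(0)), (Fr(0), Fr(1))]  # p0, p1, p2

def F(j, x):
    # contraction F_j(x) = (x + p_j) / 2
    return ((x[0] + P[j][0]) / 2, (x[1] + P[j][1]) / 2)

def level(m):
    # vertices V_m and edges within the m-cells F_w(V_0), |w| = m
    verts, edges = set(), set()
    for word in product(range(3), repeat=m):
        cell = list(P)
        for j in reversed(word):       # F_w = F_{w_1} o ... o F_{w_m}
            cell = [F(j, x) for x in cell]
        verts.update(cell)
        for i in range(3):
            edges.add(frozenset({cell[i], cell[(i + 1) % 3]}))
    return verts, edges

for m in (1, 2, 3):
    v, e = level(m)
    print(m, len(v), len(e))  # |V_m| = 3(3^m+1)/2 and 3^{m+1} edges
```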
Analysis on SG is based on the existence of a Dirichlet form and an associated Laplacian. Of the available constructions~\cite{BarlowPerkins,Lindstrom,Kigamibook} we follow the method of Kigami~\cite{Kigamibook}, some features of which are as follows. Proofs of all of the results stated may be found in~\cite{Kigamibook,Strichartzbook}. We endow SG with the (unique) self-similar probability measure $\mu$ that is invariant under the symmetries of the triangle with vertices the points $p_j$.
\begin{enumerate}[{A}1]
\item\label{AselfsimilardecompofDF} There is a Dirichlet form $\mathcal{E}$ on SG with domain $\mathcal{F}\subset L^{2}(\mu)$ consisting of continuous functions. $\mathcal{E}$ may be localized to any $m$-cell and is self-similar with scaling factor $\frac{5}{3}$. Specifically, for a word $w$ with $|w|=m$ let $\mathcal{E}_{w}(f,g)=(\frac{5}{3}\bigr)^{m}\mathcal{E}(f\circ F_{w},g\circ F_{w})$ so $\mathcal{E}_{w}$ is a Dirichlet form on $F_{w}(\SG)$. Then $\mathcal{E}(f,g)=\sum_{|w|=m}\mathcal{E}_{w}(f,g)$.
\item\label{ADFislimitofgraphs} $\mathcal{E}$ may be obtained as a limit of forms on the graphs. For $f,g: V_{*} \to \mathbb{R}$, $m \in \mathbb{N}$, let $\mathcal{E}_{m}(f,g)=\sum_{x\sim_{m}y} (f(x)-f(y))(g(x)-g(y))$. Then $\bigl(\frac{5}{3}\bigr)^{m}\mathcal{E}_{m}(f,f)$ is non-decreasing and converges to $\mathcal{E}(f,f)$ if $f\in\mathcal{F}$.
\item From standard considerations there is a non-positive definite self-adjoint Laplacian associated to $\mathcal{E}$. We define $f\in\dom(\Delta)\subsetneq\mathcal{F}$ to mean there is a continuous $\Delta f$ such that $\mathcal{E}(f,g)=\langle -\Delta f,g\rangle_{L^{2}}$ for all $g\in\mathcal{F}_{0}$, where $\mathcal{F}_{0}\subset\mathcal{F}$ is the subspace of functions that vanish on $V_{0}$.
\item Let $df(p_{j}) = \lim_{m\to\infty} \bigl(\frac{5}{3}\bigr)^{m} \bigl(2f(p_{j}) -f(F_{j}^{\circ m} p_{(j+1)})-f(F_{j}^{\circ m} p_{(j+2)})\bigr)$, where the subscripts are taken modulo $3$. If $f\in\dom(\Delta)$ then this limit exists on $V_{0}$ and there is a Gauss-Green formula $\mathcal{E}(f,g)=\langle -\Delta f,g\rangle_{L^{2}}+\sum_{j=0}^{2} df(p_{j})g(p_{j})$; we call $df(p_{j})$ the normal derivative of $f$ at $p_{j}$. Both $df$ and the Gauss-Green formula may be localized to any $m$-cell.
\item\label{Delta_is_graph_limit} $\Delta$ may be obtained as a limit of graph Laplacians. For $f : V_{*} \to \mathbb{R}$, $m \in \mathbb{N}$, let $\Delta_m f(x) = \sum_{y \sim_{m} x} (f(y) - f(x))$. Then $\Delta f(x) = \frac{3}{2} \lim_{m\to\infty} 5^{m}\Delta_{m}f(x)$.
\item\label{Aharmonic} If $X\subset\SG$ is finite and $g:X\to\mathbb{R}$ then there is a unique $f\in\mathcal{F}$ such that $f|_{X}=g$ and $\mathcal{E}(f)$ is minimized; $f$ is called the harmonic extension of $g$ and satisfies $\Delta f(x)=0$ for $x\in\SG\setminus X$. If $X=V_{m}$ then also $\Delta_{n}f(x)=0$ for all $n>m$ and $x\in V_{n}\setminus V_{m}$, and $f$ is called $m$-harmonic.
\end{enumerate}
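Properties (A\ref{ADFislimitofgraphs}) and (A\ref{Aharmonic}) can be illustrated numerically at level $m=1$ (vertex labels below are our own: $0,1,2$ for $V_0$ and $3,4,5$ for the midpoints opposite $p_0,p_1,p_2$): the harmonic extension of boundary data $(1,0,0)$ reproduces the well-known $\frac{1}{5}$--$\frac{2}{5}$ rule, and the renormalized graph energy $\frac{5}{3}\mathcal{E}_{1}$ of the extension equals $\mathcal{E}_{0}$.

```python
import numpy as np

# level-1 graph of SG: 0,1,2 are boundary vertices; 3,4,5 are the
# midpoints opposite p0, p1, p2; three edges per 1-cell, nine in total
edges = [(0, 4), (0, 5), (4, 5), (5, 1), (5, 3), (1, 3),
         (4, 3), (4, 2), (3, 2)]

def energy(f):
    return sum((f[i] - f[j]) ** 2 for i, j in edges)

# harmonic extension of boundary data (1,0,0): each midpoint value
# equals the average of its four neighbours (Delta_1 f = 0 there)
A = np.zeros((3, 3))
b = np.zeros(3)
for i, j in edges:
    for x, y in ((i, j), (j, i)):
        if x >= 3:
            A[x - 3, x - 3] += 1
            if y >= 3:
                A[x - 3, y - 3] -= 1
            elif y == 0:
                b[x - 3] += 1.0   # boundary values are (1, 0, 0)
f = np.concatenate(([1.0, 0.0, 0.0], np.linalg.solve(A, b)))

print(f[3:])                # the 1/5--2/5 rule: values 1/5, 2/5, 2/5
print((5 / 3) * energy(f))  # equals the level-0 energy E_0 = 2
```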
Differential forms on certain spaces that include the Sierpinski Gasket have been studied in~\cite{CS,IRT,ACSY,CSetal,HT,hinzetal}. We follow the approach in~\cite{IRT}, which introduces $1$-forms as a Hilbert space $\mathcal{H}$ generated by tensor products $f\otimes g$ with $f,g\in\mathcal{F}$, and which is a module over $\mathcal{F}$. There is then a derivation $\partial:\mathcal{F}\to\mathcal{H}$ such that $\|\partial f\|_{\mathcal{H}}^{2}=\mathcal{E}(f,f)$ and the image of $\partial$ is the space of exact forms.
The key feature that we need from~\cite{IRT} is that the action of $\mathcal{F}$ on $\mathcal{H}$ by multiplication extends to permit multiplication by much more general functions. In particular, multiplication by the characteristic function $\mathds{1}_{w}$ of an $m$-cell $F_{w}(X)$ is well-defined. This permits a cellular decomposition of $\mathcal{H}$ akin to that described in~(A\ref{AselfsimilardecompofDF}) and a notion of graph approximation like that in~(A\ref{ADFislimitofgraphs}). Proofs of the following results are in~\cite{IRT}.
\begin{enumerate}[{F}1]
\item \label{Fselfsimilar} Let $\mathcal{H}_{w}$ be the space of $1$-forms constructed from $\bigl(\mathcal{E}_{w}, \mathcal{F}|_{F_{w}(\SG)}\bigr)$ in the same manner as $\mathcal{H}$ is constructed from $(\mathcal{E},\mathcal{F})$. If $h_{w}=f|_{F_{w}(\SG)}\otimes g|_{F_{w}(\SG)}$ then the map $h_{w}\mapsto (f\circ F_{w})\otimes (g\circ F_{w})=h$ takes the dense subspace of generators of $\mathcal{H}_{w}$ to those of $\mathcal{H}$ and has $\|h_{w}\|_{\mathcal{H}_{w}}^{2}=\bigl(\frac{5}{3}\bigr)^{m} \|h\|_{\mathcal{H}}^{2}$, so extends to an isomorphism of $\mathcal{H}_{w}$ to $\mathcal{H}$.
\item\label{Fcelldecomp} $\mathcal{H}_{w}$ is isometrically isomorphic to the subspace $\{a\mathds{1}_{w}:a\in\mathcal{H}\}$ via the continuous extension of the identification of $f|_{F_{w}(\SG)}\otimes g|_{F_{w}(\SG)}$ with $(f\otimes g)\mathds{1}_{w}$ and there is a direct sum decomposition $\mathcal{H}=\bigoplus_{|w|=m}\mathcal{H}_{w}$.
\item\label{FspacesHilm} Let $\mathcal{H}_{m}$ be the subspace of $\mathcal{H}$ generated by $\bigl\{f\otimes\mathds{1}_{w}: f\text{ is $m$-harmonic and } |w|=m \bigr\}$. Then $\mathcal{H}_{m}\subset\mathcal{H}_{m+1}$ for all $m$ and $\cup_{m}\mathcal{H}_{m}$ is dense in $\mathcal{H}$. The preceding results imply that $\mathcal{H}_{m}$ is isomorphic to a direct sum of copies of $\mathcal{H}_{0}$, with one copy of $\mathcal{H}_{0}$ for each $m$-cell. Moreover $\mathcal{H}_{0}$ is isomorphic to the harmonic functions modulo constants on SG, and is obtained from this space by applying the derivation $\partial$.
\end{enumerate}
Though it is not made explicit in~\cite{IRT}, the result in~(F\ref{FspacesHilm}) gives a connection to $1$-forms on graphs. Recall that a $1$-form on a graph is simply a function on the set of directed edges. Let $a\in\mathcal{H}_{m}$ and $e_{xy}$ denote the edge from $x$ to $y$ in the $m$-scale graph. Take $w$ with $|w|=m$ so $F_{w}(\SG)$ is the unique cell containing $e_{xy}$ and use~(F\ref{FspacesHilm}) to obtain a harmonic function modulo constants $A_{w}$ corresponding to $a\mathds{1}_{w}$. If we set $A(e_{xy})=A_{w}(y)-A_{w}(x)$ then $A$ is a well-defined function on directed edges satisfying $A(e_{yx})=-A(e_{xy})$, so is a $1$-form on the $m$-scale graph. Moreover it is exact at scale $m$ because on each $m$-cell $F_{w}(\SG)$ it is the derivative of $A_{w}$. The norm of $a\in\mathcal{H}_{m}$ is simply $\|a\|_{\mathcal{H}}^{2}=\sum_{|w|=m} \mathcal{E}_{w}(A_{w})=\bigl(\frac{5}{3}\bigr)^{m}\sum_{x\sim_{m}y}A(e_{xy})^{2}$.
This permits us to understand the space $\mathcal{H}$ as a generalization of $(\mathcal{E},\mathcal{F})$, because it exhibits the $\mathcal{H}$-norm as a renormalized limit of $\ell^{2}$-norms of edge functions. To make this connection more precise we need some definitions. Let $h_{j}$ denote the harmonic function on $\SG$ which has values $h_{j}(p_{j})=0$, $h_{j}(p_{j-1})=-1$ and $h_{j}(p_{j+1})=1$.
\begin{defn}\label{defn:trace}
For any two points joined by an edge in the $m$-scale graph there is $j\in\{0,1,2\}$ and a word $w$ with $|w|=m$ such that the points are $x=F_{w}(p_{j-1})$ and $y=F_{w}(p_{j+1})$ (subindices are taken modulo $3$). Let $h_{j}^{w}\in\mathcal{F}$ be any function with $h_{j}^{w}\circ F_{w}=h_{j}$. Define $\Tr_{m}:\mathcal{H}\to\mathcal{H}_{m}$ by setting the value on the edge $e_{xy}$ from $x$ to $y$ to be
\begin{equation*}
(\Tr_{m}a) (e_{xy}) = \frac{1}{3} \Bigl(\frac{5}{3}\Bigr)^{m} \bigl\langle a, (\partial h_{j}^{w})\mathds{1}_{w} \bigr\rangle_{\mathcal{H}};
\end{equation*}
the value does not depend on the choice of extension $h_{j}^{w}$, and for $m=0$ it reduces to $\frac{1}{3} \langle a, \partial h_{j} \rangle_{\mathcal{H}}$.
A sequence $\{a_{m}\}_{1}^{\infty}\subset\mathcal{H}$ is called {\em compatible} if $\Tr_{m}a_{m+1}=a_{m}$ for all $m$.
\end{defn}
The following theorem should be compared to the results in Section~4 of~\cite{ACSY}. It gives a full description of $1$-forms on SG as limits of $1$-forms on the approximating graphs.
\begin{thm}
The map $\Tr_{m}:\mathcal{H}\to\mathcal{H}_{m}$ is a projection. If $a\in\mathcal{H}$ then the sequence $\{a_{m}\}$ of projections onto $\mathcal{H}_{m}$ is compatible, $a_{m}\to a$ in $\mathcal{H}$ and $\|a_{m}\|_{\mathcal{H}}\uparrow\|a\|_{\mathcal{H}}$. Conversely, if $\{a_{m}\}$ is a compatible sequence then $a_{m}\in\mathcal{H}_{m}$ for all $m$; if we further assume that $\|a_{m}\|_{\mathcal{H}}$ is bounded then there is $a\in\mathcal{H}$ such that $a_{m}\to a$ and $a_{m}$ is the projection of $a$ to $\mathcal{H}_{m}$ for all $m$.
\end{thm}
\begin{proof}
The main thing we need to prove is that $\Tr_{m}$ is the projection onto $\mathcal{H}_{m}$. From~(F\ref{Fcelldecomp}) it is apparent that the projection can be taken one cell at a time, and the self-similarity in~(F\ref{Fselfsimilar}) implies that all cells are the same, so it suffices to show $\Tr_{0}$ is the projection onto $\mathcal{H}_{0}$. We recall that $\mathcal{H}_{0}$ is obtained from the $2$-dimensional space of harmonic functions on SG by applying the derivation.
Let $\tilde{h}_{j}$ be harmonic on SG with $\tilde{h}_{j}(p_{j})=1$, $\tilde{h}_{j}(p_{j+1})=\tilde{h}_{j}(p_{j-1})=0$. Symmetry shows that $h_{j}$ and $\tilde{h}_{j}$ are orthogonal, so $\partial h_{j}$ and $\partial\tilde{h}_{j}$ are an orthogonal basis for $\mathcal{H}_{0}$. Suppose we project $a\in\mathcal{H}$ onto $a_{0}\in\mathcal{H}_{0}$ and compute the corresponding harmonic function $A$ (modulo constants). Since $\tilde{h}_{j}(p_{j+1})=\tilde{h}_{j}(p_{j-1})$, the difference $A(p_{j+1})-A(p_{j-1})$ is determined by the component involving $h_{j}$. Precisely, since $\mathcal{E}(h_{j})=6$ and $h_{j}(p_{j+1})-h_{j}(p_{j-1})=2$, it is
\begin{equation*}
A(p_{j+1})-A(p_{j-1})
= \frac{1}{\mathcal{E}(h_{j})} \langle a,\partial h_{j}\rangle_{\mathcal{H}} \bigl(h_{j}(p_{j+1})-h_{j}(p_{j-1})\bigr)
= \frac{2}{6} \langle a,\partial h_{j}\rangle_{\mathcal{H}}
=\Tr_{0}a (e_{p_{j-1}p_{j+1}}).
\end{equation*}
Thus the trace assigns the same values to the edges as does the projection, and they must coincide. Note that, in particular, this means the values of $\Tr_{0}a$ on the three directed edges $e_{p_{j-1}p_{j+1}}$, $j=0,1,2$, must sum to zero, and indeed we find from the definition that they do: $\sum_{j}h_{j}$ is identically zero, so $\sum_{j}\Tr_{0}a(e_{p_{j-1}p_{j+1}})=\frac{1}{3}\bigl\langle a,\partial\sum_{j}h_{j}\bigr\rangle_{\mathcal{H}}=0$.
Having established that $\Tr_{m}$ is the projection onto $\mathcal{H}_{m}$ it is immediate that the sequence of projections $a_{m}$ of $a\in\mathcal{H}$ is compatible, and~(F\ref{FspacesHilm}) shows $a_{m}\to a$ in $\mathcal{H}$ and $\|a_{m}\|_{\mathcal{H}}\uparrow\|a\|_{\mathcal{H}}$. For the converse, if $a_{m}$ is a compatible sequence then the fact that $a_{m}=\Tr_{m}a_{m+1}$ implies $a_{m}\in\mathcal{H}_{m}$ for all $m$ and that $\|a_{m}\|_{\mathcal{H}}$ is an increasing sequence. If we suppose that $\|a_{m}\|_{\mathcal{H}}$ is bounded then using the Pythagorean decomposition $\|a_{n}\|_{\mathcal{H}}^{2}=\|a_{m}\|_{\mathcal{H}}^{2}+\|a_{n}-a_{m}\|_{\mathcal{H}}^{2}$, $n>m$, for projection in a Hilbert space we see $\|a_{n}-a_{m}\|_{\mathcal{H}}^{2}\leq \bigl(\sup_{n}\|a_{n}\|_{\mathcal{H}}^{2}\bigr)-\|a_{m}\|_{\mathcal{H}}^{2}\to0$ as $m,n\to\infty$, so the sequence is Cauchy with limit $a\in\mathcal{H}$. Finally, the composition $\Tr_{m}\circ\Tr_{m+1}\circ\dotsm\circ\Tr_{n}$ shows $a_{m}=\Tr_{m}a_{n}$ for all $n>m$ and, by taking the limit, $a_{m}=\Tr_{m}a$.
\end{proof}
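The constants appearing in this proof, $\mathcal{E}(h_{j})=6$ and the factor $\frac{1}{3}$ in Definition~\ref{defn:trace}, are easy to confirm numerically at scale zero: for a harmonic $f$, the trace of the exact form $\partial f$ should simply recover the edge differences of $f$. The sketch below is our own check in exact rational arithmetic (labels and helpers are not from the text); it evaluates $\mathcal{E}(f,h_{j})$ as the renormalized level-$1$ graph energy.

```python
from fractions import Fraction as Fr

# Level-1 graph of SG: corners p0,p1,p2 and midpoints q0,q1,q2 (qk on
# the side opposite pk); each triple below lists the edges of one 1-cell.
edges = [("p0","q1"), ("p0","q2"), ("q1","q2"),
         ("p1","q0"), ("p1","q2"), ("q0","q2"),
         ("p2","q0"), ("p2","q1"), ("q0","q1")]

def extend(a, b, c):
    # harmonic extension ('1/5-2/5' rule) of boundary values at p0,p1,p2
    return {"p0": a, "p1": b, "p2": c,
            "q0": (a + 2*b + 2*c) / 5,
            "q1": (2*a + b + 2*c) / 5,
            "q2": (2*a + 2*b + c) / 5}

def energy(u, v):
    # For harmonic u,v the energy E(u,v) equals the renormalized
    # level-1 graph energy (5/3) * E_1(u,v).
    return Fr(5, 3) * sum((u[x] - u[y]) * (v[x] - v[y]) for x, y in edges)

# h_j as in the text: h_j(p_j)=0, h_j(p_{j-1})=-1, h_j(p_{j+1})=1.
h = [extend(Fr(0), Fr(1), Fr(-1)),    # j = 0
     extend(Fr(-1), Fr(0), Fr(1)),    # j = 1
     extend(Fr(1), Fr(-1), Fr(0))]    # j = 2

# An arbitrary harmonic function; (1/3) E(f, h_j) should equal the
# oriented edge difference f(p_{j+1}) - f(p_{j-1}).
f = extend(Fr(3), Fr(-2), Fr(7))
```

Every identity here holds exactly, which is why `fractions.Fraction` rather than floating point is used.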
\section{Magnetic forms, Magnetic Laplacian and gauge transformations}\label{sec:magnetic}
Following~\cite{hinzetal, HR} a magnetic differential may be defined as a deformation of $\partial$. To do so we treat a real-valued $1$-form $a\in\mathcal{H}$ as an operator $\mathcal{F}\to\mathcal{H}$ via multiplication, so $f\mapsto fa$. Then $(\partial+ia):\mathcal{F}\to\mathcal{H}$ is the magnetic differential obtained by deforming $\partial$ via the form $a\in\mathcal{H}$. With this approach an essential result is the following theorem.
\begin{thm}[\protect{\cite{HR}}]\label{thm:mainresultofHR}
The quadratic form $\mathcal{E}^{a}(f)=\|(\partial+ia)f\|_{\mathcal{H}}^{2}$ with domain $\mathcal{F}$ is closed on $L^{2}(\mu)$. Thus there is an associated non-positive definite self-adjoint magnetic (Neumann) Laplacian $\mathcal{M}^{a}_{N}$ satisfying $$\mathcal{E}^{a}(f,g)=\langle -\mathcal{M}^{a}_{N}f,g\rangle_{L^{2}(\mu)}$$ for all $g\in\mathcal{F}$. Moreover $\mathcal{M}^{a}_{N}$ has compact resolvent, so the spectrum of $-\mathcal{M}^{a}_{N}$ is a sequence $0\leq\kappa_{1}\leq\kappa_{2}\leq\dotsm$ accumulating only at $\infty$.
\end{thm}
The same argument provides that the quadratic form $(\mathcal{E}^{a},\mathcal{F}_{0})$ is closed on the space $L^{2}(\SG\setminus V_{0},\mu)$ and defines a magnetic (Dirichlet) Laplacian $\mathcal{M}^{a}_{D}$ with compact resolvent and $\mathcal{E}^{a}(f,g)=\langle-\mathcal{M}^{a}_{D}f,g\rangle$ for all $g\in\mathcal{F}_{0}$. Henceforth we will just use the Dirichlet magnetic operator and will denote it $\mathcal{M}^{a}$. Much of our work transfers to the Neumann magnetic operator with minor changes.
\begin{rmk}
We are using the complexification of each of the spaces $L^2(\mu)$, $\mathcal{F}$, $\mathcal{H}$, $\dom(\Delta)$ as well as the subspaces $\mathcal{F}_0$, $\mathcal{H}_m$, etc. These are standard, but for the convenience of the reader we recall that one may complexify $\mathcal{F}$ by endowing $\mathcal{F}+i\mathcal{F}$ with the form
\begin{equation*}
\mathcal{E}(f,g) = \mathcal{E}(f_1,g_1) -i\mathcal{E}(f_1,g_2) + i\mathcal{E}(f_2,g_1) +\mathcal{E}(f_2,g_2)
\end{equation*}
where $f=f_1+if_2$ and $g=g_1+ig_2$. In this case the finite approximations in~(A\ref{ADFislimitofgraphs}) become $\mathcal{E}_m(f,g)=\sum_{x\sim_{m}y} (f(x)-f(y))(\overline{g(x)-g(y)})$. One may then construct $\mathcal{H}$ from the complexified version of $\mathcal{F}\otimes\mathcal{F}$ in the same manner as was done in the real case in~\cite{IRT} and discussed in Section~\ref{sec:SG}.
\end{rmk}
We wish to study the spectrum of $\mathcal{M}^{a}$ by making graph approximations. For this reason we introduce a graph magnetic form and a graph magnetic Laplacian. The connection between these and $\mathcal{E}^{a}$ and $\mathcal{M}^{a}$ is not immediately obvious but will rapidly become apparent.
\begin{defn}
Suppose $a\in\mathcal{H}$ is real-valued and for each $m\in\mathbb{N}$ let $a_{m}$ be the projection of $a$ to $\mathcal{H}_{m}$. For $f,g: V_{*} \to \mathbb{C}$ define
\begin{gather}
\mathcal{E}_{m}^{a_{m}}(f) =\sum_{x,y: x\sim_{m} y} \Bigl| f(x)- f(y)e^{i a_{m}(e_{xy})} \Bigr|^{2} \\
\mathcal{M}^{a_{m}}_{m} f(x) = - \sum_{y:y\sim_{m}x} \Bigl( f(x)- f(y)e^{i a_{m}(e_{xy})} \Bigr) \quad\text{ for }x\in V_{m}\setminus V_{0}.
\end{gather}
\end{defn}
We have the usual relation
\begin{equation} \label{eqn:DFmandMagfm}
\mathcal{E}_{m}^{a_{m}}(f,g) = \langle -\mathcal{M}^{a_{m}}_{m} f, g \rangle_{\ell^{2}(V_{m}\setminus V_{0})},
\end{equation}
when $g=0$ on $V_0$, as may be verified by direct computation:
\begin{align*}
\lefteqn{2 \sum_{x,y: x\sim_{m} y} \Bigl( f(x)- f(y)e^{i a_{m}(e_{xy})} \Bigr)\overline{\Bigl( g(x)- g(y)e^{i a_{m}(e_{xy})} \Bigr)} } \quad&\notag\\
&= \sum_{x\in V_{m}\setminus V_0} \overline{g(x)} \sum_{y\sim_{m}x} \Bigl( f(x)- f(y)e^{i a_{m}(e_{xy})} \Bigr)
- \sum_{y\in V_{m}\setminus V_0} \overline{g(y)} \sum_{ x\sim_{m}y} \Bigl( f(x) e^{-i a_{m}(e_{xy})} - f(y) \Bigr) \notag\\
&= \sum_{x\in V_{m}\setminus V_0} \overline{g(x)} \sum_{y\sim_{m}x} \Bigl( f(x)- f(y)e^{i a_{m}(e_{xy})} \Bigr)
+ \sum_{x\in V_{m}\setminus V_0} \overline{g(x)} \sum_{ y\sim_{m}x} \Bigl( f(x) - f(y)e^{i a_{m}(e_{xy})} \Bigr) \notag\\
&=2\sum_{x\in V_{m}\setminus V_0} \bigl( -\mathcal{M}_{m}^{a_{m}}f(x) \bigr) \overline{g(x)}.
\end{align*}
Note that we need not sum over $V_{0}$ because $g$ vanishes there; in passing from the second to the third line we relabeled $x\leftrightarrow y$ and used the antisymmetry $a_{m}(e_{yx})=-a_{m}(e_{xy})$. The equality holds for arbitrary $g$ if $ \sum_{y\sim_{m}x} \Bigl( f(x)- f(y)e^{i a_{m}(e_{xy})} \Bigr) =0$ for $x\in V_0$.
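Since~\eqref{eqn:DFmandMagfm} is a finite computation, it can also be confirmed numerically. The sketch below (our own labels and sample values, not from the text) does this on the level-$1$ graph with an arbitrary antisymmetric edge form, an arbitrary complex $f$, and a $g$ vanishing on $V_{0}$.

```python
import cmath

# Level-1 graph of SG: corners (= V_0) and midpoints.
V0 = ["p0", "p1", "p2"]
mids = ["q0", "q1", "q2"]
edges = [("p0","q1"), ("p0","q2"), ("q1","q2"),
         ("p1","q0"), ("p1","q2"), ("q0","q2"),
         ("p2","q0"), ("p2","q1"), ("q0","q1")]

# An antisymmetric edge form: pick a value on each edge and enforce
# a(e_yx) = -a(e_xy).
a = {}
for k, (x, y) in enumerate(edges):
    a[(x, y)] = 0.1 + 0.3 * k
    a[(y, x)] = -a[(x, y)]

def nbrs(x):
    return [v for u, v in edges if u == x] + [u for u, v in edges if v == x]

def neg_M(f, x):
    # -M^{a} f(x) = sum over neighbours of ( f(x) - f(y) e^{i a(e_xy)} )
    return sum(f[x] - f[y] * cmath.exp(1j * a[(x, y)]) for y in nbrs(x))

# Sample data: arbitrary complex f, and g vanishing on V_0.
f = {v: complex(k + 1, (-1) ** k) for k, v in enumerate(V0 + mids)}
g = {"p0": 0, "p1": 0, "p2": 0, "q0": 1 + 2j, "q1": -1j, "q2": 0.5}

# Magnetic energy form, summed once over each (undirected) edge; by the
# antisymmetry of a, each summand is independent of the orientation.
E_fg = sum((f[x] - f[y] * cmath.exp(1j * a[(x, y)]))
           * (g[x] - g[y] * cmath.exp(1j * a[(x, y)])).conjugate()
           for x, y in edges)

# Summation by parts: this equals <-M f, g> over the interior vertices.
rhs = sum(neg_M(f, x) * g[x].conjugate() for x in mids)
```

The two quantities agree to machine precision for any such choice of data, which is exactly the content of~\eqref{eqn:DFmandMagfm}.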
\begin{lem}
$\Bigl(\frac{5}{3}\Bigr)^{m}\mathcal{E}_{m}^{a_{m}}(f)$ is bounded in $m$ if and only if $f\in\mathcal{F}$.
\end{lem}
\begin{proof}
Observe from $\bigl| |f(x)|-|f(y)|\bigr|\leq \bigl| f(x)-f(y)e^{i a_{m}(e_{xy})} \bigr|$ that if $\Bigl(\frac{5}{3}\Bigr)^{m}\mathcal{E}_{m}^{a_{m}}(f)$ is bounded then so is $\Bigl(\frac{5}{3}\Bigr)^{m}\mathcal{E}_{m}(|f|)$, and therefore $|f|\in\mathcal{F}$. In particular $f$ is bounded. The converse assumption $f\in\mathcal{F}$ also ensures $f$ is bounded.
Using boundedness of $f$ we may estimate as follows
\begin{align*}
\Bigl| f(x) - f(y)e^{i a_{m}(e_{xy})} \Bigr|^{2}
&\leq \Bigl( \bigl| f(x) - f(y) \bigr| + |f(y)| \bigl| 1-e^{ia_{m}(e_{xy})} \bigr| \Bigr)^{2}\\
&\leq 2 | f(x) - f(y) |^{2} + 2\|f\|_{\infty}^{2} |a_{m}(e_{xy})|^{2}
\end{align*}
and similarly
\begin{equation*}
| f(x) - f(y) |^{2}
\leq 2\Bigl| f(x) - f(y)e^{i a_{m}(e_{xy})} \Bigr|^{2} + 2 \|f\|_{\infty}^{2} |a_{m}(e_{xy})|^{2}.
\end{equation*}
As previously discussed, $\Bigl(\frac{5}{3}\Bigr)^{m}\sum_{x\sim_{m}y}|a_{m}(e_{xy})|^{2}=\|a_{m}\|_{\mathcal{H}}^{2}\leq\|a\|_{\mathcal{H}}^{2}$, so
$\Bigl(\frac{5}{3}\Bigr)^{m}\mathcal{E}_{m}^{a_{m}}(f)$ is bounded if and only if $\Bigl(\frac{5}{3}\Bigr)^{m}\mathcal{E}_{m}(f)$ is bounded; since the latter is non-decreasing, this is equivalent to $f\in\mathcal{F}$.
\end{proof}
Of course one should expect that $\Bigl(\frac{5}{3}\Bigr)^{m}\mathcal{E}_{m}^{a_{m}}(f)$ converges to $\mathcal{E}^{a}(f)$, but we have only proved this under a condition akin to assuming $a\in\mathcal{H}$ is locally exact. Note that in the classical (Euclidean) setting all $1$-forms are locally exact because the space is locally topologically trivial, but this is not the case on fractals.
\begin{defn}\label{defn:exact}
A $1$-form $a\in\mathcal{H}$ is called exact if there is $A\in\mathcal{F}$ such that $\partial A=a$. It is locally exact if there is an open cover such that it is exact on each of the open sets. Equivalently, it is locally exact if there is a finite partition $\SG=\cup_{j} X_{w_{j}}$ of SG into cells $X_{w_{j}}=F_{w_{j}}(\SG)$ such that $a$ is exact on each cell, meaning there are $A_{w_{j}}\in\mathcal{F}$ so $a\mathds{1}_{w_{j}} =(\partial A_{w_{j}})\mathds{1}_{w_{j}}$ for all $j$. We say $a$ is exact at scale $m$ if $m$ is the smallest integer for which the partition can be chosen to consist of $m$-cells.
\end{defn}
It is proved in~\cite{HR} that when $a$ is real-valued and exact, say $a=\partial A$ with $A\in\mathcal{F}$ real-valued, there is a gauge transformation which conjugates $\mathcal{E}^{a}$ to $\mathcal{E}$ and $\mathcal{M}^{a}$ to $\Delta$. Specifically, one has from Corollary~5.6 of~\cite{HR}
\begin{gather}
\mathcal{E}^{a}(f) = \mathcal{E}(e^{iA}f) \label{eqn:gaugeforDF}\\
\mathcal{M}^{a}f = e^{-iA} \Delta (e^{iA} f)\label{eqn:gaugeforMag}
\end{gather}
In fact, rather more can be obtained from the discussion at the end of Section~5 of~\cite{HR}, using the notion of a Coulomb gauge.
\begin{defn}
Suppose $a\in\mathcal{H}$ is real-valued. We say $a$ admits a Coulomb gauge if there is $e^{iA}\in\mathcal{F}$ such that $e^{-iA}\partial (e^{iA})=ia$, and $a$ admits a local Coulomb gauge if this is true on the cells of a finite partition.
\end{defn}
\begin{rmk}
If $a$ admits a Coulomb gauge then it is locally exact, because $e^{iA}$ is uniformly continuous and thus has a logarithm in $\mathcal{F}$ on all sufficiently small cells. However, having a Coulomb gauge is weaker than (global) exactness because it is possible to have $e^{iA}\in\mathcal{F}$ with $A$ locally but not globally in $\mathcal{F}$. To see the distinction, suppose that $a$ is locally exact with $a=\partial A_{w_{j}}$ on cells $X_{w_{j}}$. The $A_{w_{j}}$ are defined up to additive constants, and $a$ is exact if and only if we can choose these constants so that $A=A_{w_{j}}$ on $X_{w_{j}}$ is continuous on SG. By contrast, $a$ has a Coulomb gauge if we can choose the constants so that $e^{iA}=e^{iA_{w_{j}}}$ on $X_{w_{j}}$ is continuous on SG, so in this latter case we may permit jump discontinuities that are integer multiples of $2\pi$ at intersection points of the cells.
\end{rmk}
From Theorem~5.9 of~\cite{HR} both~\eqref{eqn:gaugeforDF} and~\eqref{eqn:gaugeforMag} remain valid when $a$ admits a Coulomb gauge. Note that this result relies on the hypothesis that for connected open sets $U$, $(\partial f)\mathds{1}_{U}=0$ implies $f$ is constant on $U$. This is valid on SG because we can write $U$ as a connected union of cells, whence at any finite scale the cellular decomposition of $\|\cdot\|_{\mathcal{H}}$ allows us to assume the restriction of $\partial f$ to each cell is zero. Since each cell is self-similar to SG it suffices to note that if $\mathcal{E}(f)=\|\partial f\|_{\mathcal{H}}^{2}=0$ then $f$ is constant by the properties of resistance forms.
It should be noted that when there is a Coulomb gauge we may immediately write a gauge transformation of $\mathcal{E}_{m}^{a_{m}}$ and $\mathcal{M}_{m}^{a_{m}}$, because in this case the function $e^{iA}\in\mathcal{F}$ has an $m$-harmonic approximation (see~(A\ref{Aharmonic}) for the definition). The $m$-harmonic approximation has the same values as $e^{iA}$ at the points of $V_{m}$, so denoting it by $\bigl(e^{iA}\bigr)_{m}$ we can use it to write
\begin{equation*}
e^{ia_{m}(e_{xy})} = e^{iA(y)}\, e^{-iA(x)} = \bigl( e^{iA}\bigr)_{m}(y)\, \overline{\bigl( e^{iA}\bigr)_{m}(x)}
\end{equation*}
for all $x\sim_{m}y$ in $V_{m}$, and therefore
\begin{equation}\label{eqn:exactgaugetransfforDFm}
\mathcal{E}_{m}^{a_{m}}(f)
= \sum_{x,y: x\sim_{m} y} \Bigl| f(x) e^{iA(x)} - f(y) e^{iA(y)} \Bigr|^{2}
= \mathcal{E}_{m}\bigl(e^{iA}f\bigr),
\end{equation}
and similarly, for $x\in V_{m}\setminus V_{0}$,
\begin{align}
\mathcal{M}^{a_{m}}_{m} f(x)
&= - e^{-iA(x)} \sum_{y:y\sim_{m}x} \Bigl( f(x)e^{iA(x)}- f(y)e^{iA(y)} \Bigr)\\
&= e^{-iA} \Delta_{m}\bigl(e^{iA}f\bigr)(x).
\end{align}
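The discrete gauge transformation can be confirmed directly on the level-$1$ graph when the edge form comes from a potential, $a_{1}(e_{xy})=A(y)-A(x)$. The sketch below is our own illustration; the vertex labels and the particular values of $A$ and $f$ are arbitrary.

```python
import cmath

V0 = ["p0", "p1", "p2"]
mids = ["q0", "q1", "q2"]
edges = [("p0","q1"), ("p0","q2"), ("q1","q2"),
         ("p1","q0"), ("p1","q2"), ("q0","q2"),
         ("p2","q0"), ("p2","q1"), ("q0","q1")]

def nbrs(x):
    return [v for u, v in edges if u == x] + [u for u, v in edges if v == x]

# A real potential on the vertices and the induced antisymmetric
# edge form a(e_xy) = A(y) - A(x).
A = {"p0": 0.0, "p1": 0.7, "p2": -0.4, "q0": 1.3, "q1": 0.2, "q2": -2.1}
a = {}
for x, y in edges:
    a[(x, y)] = A[y] - A[x]
    a[(y, x)] = A[x] - A[y]

f = {v: complex(k, 1 - k) for k, v in enumerate(V0 + mids)}

# Magnetic energy and magnetic Laplacian at level 1 ...
E_a = sum(abs(f[x] - f[y] * cmath.exp(1j * a[(x, y)])) ** 2 for x, y in edges)

def M_a(x):
    return -sum(f[x] - f[y] * cmath.exp(1j * a[(x, y)]) for y in nbrs(x))

# ... and their gauge-transformed counterparts for e^{iA} f.
ef = {v: f[v] * cmath.exp(1j * A[v]) for v in f}
E_plain = sum(abs(ef[x] - ef[y]) ** 2 for x, y in edges)

def Delta(h, x):
    return sum(h[y] - h[x] for y in nbrs(x))
```

One finds $\mathcal{E}_{1}^{a_{1}}(f)=\mathcal{E}_{1}(e^{iA}f)$ and $\mathcal{M}_{1}^{a_{1}}f(x)=e^{-iA(x)}\Delta_{1}(e^{iA}f)(x)$ at the interior vertices, which is the graph version of the gauge transformation above.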
For forms that admit a local Coulomb gauge our graph magnetic energies converge to the magnetic energy on the fractal.
\begin{thm}\label{thm:DFamconvergestoDFa}
If $a\in\mathcal{H}$ is real-valued and has a local Coulomb gauge at scale $n$ then
\begin{equation*}
\Bigl(\frac{5}{3}\Bigr)^{m}\mathcal{E}_{m}^{a_{m}}(f)\to\mathcal{E}^{a}(f) \text{ as } m\to\infty.
\end{equation*}
\end{thm}
\begin{proof}
By hypothesis we may partition SG as $\cup_{|w|=n}F_{w}(\SG)$ and have functions $e^{iA_{w}}\in\mathcal{F}$ such that
\begin{equation*}
\mathcal{E}^{a}(f)=\sum_{|w|=n} \mathcal{E}_{X_{w}} \bigl(e^{iA_{w}} f|_{X_{w}} \bigr)
\end{equation*}
where $ \mathcal{E}_{X_{w}} $ is the Dirichlet form on the cell $X_{w}=F_{w}(\SG)$, so is just a rescaling of the global Dirichlet form.
On each cell the renormalized $m$-scale energy converges to the localized energy, so given $\epsilon>0$ we may take $m>n$ sufficiently large that, for every $w$ with $|w|=n$,
\begin{equation*}
\mathcal{E}_{X_{w},m} \bigl(e^{iA_{w}} f|_{X_{w}} \bigr) \leq \mathcal{E}_{X_{w}} \bigl(e^{iA_{w}} f|_{X_{w}} \bigr) \leq \frac{\epsilon}{N}+ \mathcal{E}_{X_{w}, m} \bigl(e^{iA_{w}} f|_{X_{w}} \bigr),
\end{equation*}
where $N$ is the number of $n$-cells and $\mathcal{E}_{X_{w},m}$ denotes the renormalized $m$-scale graph energy on $X_{w}$.
Now by~\eqref{eqn:exactgaugetransfforDFm} each of the $\mathcal{E}_{X_{w},m} \bigl(e^{iA_{w}} f|_{X_{w}} \bigr)$ is that part of the renormalized sum for $\bigl(\frac{5}{3}\bigr)^{m}\mathcal{E}_{m}^{a_{m}}(f)$ which corresponds to the edges in $X_{w}$, so summing over the $N$ cells shows that $\bigl(\frac{5}{3}\bigr)^{m}\mathcal{E}_{m}^{a_{m}}(f)$ is within $\epsilon$ of $\mathcal{E}^{a}(f)$.
\end{proof}
\begin{rmk}
We conjecture that Theorem~\ref{thm:DFamconvergestoDFa} holds without the restriction that $a$ admits a local Coulomb gauge. Note, however, that we will also need the Coulomb gauge restriction to prove our results on the spectrum of $\mathcal{M}^{a}$ in Section~\ref{sec:spectra}, so little is lost by making this assumption here too.
\end{rmk}
\begin{thm}\label{thm:convergeofMagm}
Suppose $a\in\mathcal{H}$ is real-valued and has a local Coulomb gauge at scale $n$. Then $f\in\dom(\mathcal{M}^{a})$ if and only if $\frac{3}{2} 5^{m} \mathcal{M}_{m}^{a_{m}} f $ converges uniformly on $V_{\ast}\setminus V_{0}$ to a continuous function $\Phi$. In this case the continuous extension of $\Phi$ to SG is $\mathcal{M}^{a}f$.
\end{thm}
\begin{proof}
First assume the uniform convergence to a continuous $\Phi$. For any $g\in\mathcal{F}$ that vanishes on $V_{0}$ define functions $h_{m}$ which are harmonic at scale $m$, vanish on $V_{0}$, and have values
\begin{equation*}
h_{m}(x) = \frac{3}{2} 5^{m} \bigl( \mathcal{M}_{m}^{a_{m}}f(x)\bigr) \overline{g(x)} \qquad\text{ for } x\in V_{m}\setminus V_0.
\end{equation*}
Obviously $h_{m}(x)$ converges uniformly on SG to the continuous extension of $\Phi(x)\overline{g(x)}$. What is more, the integral of the $m$-harmonic function which is $1$ at $x\in V_{m}\setminus V_{0}$ and zero at all other points of $V_{m}$ is $\frac{2}{3}3^{-m}$ so we may compute
\begin{equation*}
\int h_{m}(x)\,d\mu = \Bigl(\frac{5}{3}\Bigr)^{m} \sum_{x\in V_{m}}\bigl( \mathcal{M}_{m}^{a_{m}}f(x)\bigr) \overline{g(x)}
\end{equation*}
Then~\eqref{eqn:DFmandMagfm} says that
\begin{equation*}
\int h_{m}(x)\,d\mu = -\Bigl(\frac{5}{3}\Bigr)^{m} \mathcal{E}_{m}^{a_{m}} (f,g)
\end{equation*}
By Theorem~\ref{thm:DFamconvergestoDFa} and polarization the right side converges to $-\mathcal{E}^{a}(f,g)$, and since the left side converges to $\int \Phi\bar{g}\,d\mu=\langle \Phi,g\rangle_{L^{2}(\mu)}$ and $g\in\mathcal{F}_{0}$ is arbitrary it must be that $f\in\dom(\mathcal{M}^{a})$ with $\mathcal{M}^{a}f$ being the continuous extension of $\Phi$ to SG.
Conversely we have $\mathcal{E}^{a}(f,g)=-\langle \mathcal{M}^{a}f,g\rangle_{L^{2}(\mu)}$ for all $g\in\mathcal{F}_{0}$ and will make a careful choice of $g$. Fix $x\in V_{\ast}\setminus V_{0}$ and $m\geq n$. Since $a$ has a Coulomb gauge at scale $n$ we may find $e^{iA_{w}},e^{iA_{w'}}\in\mathcal{F}$ so that $ia\mathds{1}_{w}=e^{-iA_{w}}\partial(e^{iA_{w}})\mathds{1}_{w}$ and $ia\mathds{1}_{w'}=e^{-iA_{w'}}\partial(e^{i A_{w'}})\mathds{1}_{w'}$, where $F_{w}(\SG)$ and $F_{w'}(\SG)$ are the two $n$-cells that meet at $x$ (if $x$ is interior to a single $n$-cell the argument is the same but simpler, with only one gauge). However the $e^{iA_{w}}$ and $e^{iA_{w'}}$ are defined only up to multiplicative constants of modulus $1$, so we can arrange that they join continuously at $x$ and write both as $e^{iA}$. Now let $\phi_{m}$ be the $m$-harmonic function which is equal to $e^{iA(x)}$ at $x$ and zero at all other points of $V_{m}$ and define $\psi_{m}=e^{-iA}\phi_{m}$, so that $\psi_{m}(x)=1$. Note that both $\phi_{m}$ and $\psi_{m}$ are identically zero off $F_{w}(\SG)\cup F_{w'}(\SG)$, so the behavior of $A$ off this set does not affect $\psi_{m}$. Since $\psi_{m}$ is a product of elements of $\mathcal{F}$ and is zero at $V_{0}$ it is in $\mathcal{F}_{0}$.
Using this and the fact that $\psi_{m}$ is supported on the set where the gauge transformation is valid
\begin{equation*}
-\langle \mathcal{M}^{a}f,\psi_{m}\rangle_{L^{2}(\mu)}
=\mathcal{E}^{a}(f,\psi_{m})
=\mathcal{E}(e^{iA}f,e^{iA}\psi_{m})
= \mathcal{E}(e^{iA}f,\phi_{m})
\end{equation*}
but $\phi_{m}$ is $m$-harmonic, so
\begin{equation*}
\mathcal{E}(e^{iA}f,\phi_{m})
=\bigl(\frac{5}{3}\bigr)^{m}\mathcal{E}_{m}(e^{iA}f,\phi_{m})
= \bigl(\frac{5}{3}\bigr)^{m}\mathcal{E}_{m}^{a_{m}}(f,\psi_{m})
\end{equation*}
and inserting~\eqref{eqn:DFmandMagfm}, only the term at $x$ survives because $\psi_{m}$ vanishes at the other points of $V_{m}$ and $\psi_{m}(x)=1$, so
\begin{equation*}
3^{m} \langle \mathcal{M}^{a}f,\psi_{m}\rangle_{L^{2}(\mu)} = 5^{m} \mathcal{M}_{m}^{a_{m}}f(x).
\end{equation*}
We assumed $\mathcal{M}^{a}f$ was continuous, and it is obvious the support of the $\psi_{m}$ shrinks to $x$, so the proof will be complete if we show $3^{m}\int \psi_{m}\,d\mu\to\frac{2}{3}$. However $e^{iA}$ is continuous, so its restriction to the support of $\psi_{m}$ converges uniformly to $e^{iA(x)}$ as $m\to\infty$. If $\chi_{m}$ denotes the $m$-harmonic function which is $1$ at $x$ and zero at the other points of $V_{m}$ then $\psi_{m}=e^{i(A(x)-A)}\chi_{m}$, so $|\psi_{m}-\chi_{m}|$ is bounded by $\chi_{m}$ times a quantity going to zero, whence $3^{m}\int(\psi_{m}-\chi_{m})\,d\mu\to0$. Moreover $3^{m}\int \chi_{m}\,d\mu=\frac{2}{3}$ for all $m$ by elementary symmetry considerations, so the proof is complete.
\end{proof}
\begin{thm}
Suppose $a\in\mathcal{H}$ is real-valued and admits a local Coulomb gauge at scale $n$, and for $p\in V_{0}$ let $e^{iA_{p}}$ denote the local gauge on the $n$-cell containing $p$. If $f\in\dom(\mathcal{M}^{a})$, then the magnetic normal derivative
\begin{equation*}
d^{a}f(p) = \lim_{m\to\infty} \Bigl( \frac{5}{3}\Bigr)^{m} \sum_{x\sim_{m}p} \bigl( f(p)- e^{i(A_{p}(x)-A_{p}(p))} f(x) \bigr)
\end{equation*}
exists at each $p\in V_{0}$ and for $g\in\mathcal{F}$ we have the Gauss-Green formula
\begin{equation}\label{eqn:magGG}
\mathcal{E}^{a}(f,g) = -\langle \mathcal{M}^{a} f,g \rangle_{L^{2}(\mu)} + \sum_{p\in V_{0}} \bigl(d^{a}f(p) \bigr)\overline {g(p)}.
\end{equation}
If, in addition, there is $A_{p}\in\mathcal{F}$ such that $\partial A_{p}=a$ on a neighborhood of $p$, and the usual normal derivative $d A_{p}(p)$ exists, then $df(p)$ exists and
\begin{equation}\label{eqn:magnormalderivfromusual}
d^{a}f(p)= e^{-iA_{p}(p)}\,d \bigl( fe^{iA_{p}} \bigr)(p) = df(p) + if(p)\,dA_{p}(p).
\end{equation}
\end{thm}
\begin{proof}
Fix $g\in\mathcal{F}$. For each $m$ and each $p\in V_{0}$ use the construction of $\psi_{m}$ from the proof of Theorem~\ref{thm:convergeofMagm} to obtain a function $\psi_{m}^{p}$ which is $1$ at $p$, zero at all other points of $V_{m}$ and such that if $e^{iA_{p}}$ is the local Coulomb gauge at $p$ then $e^{iA_{p}}\psi_{m}^{p}$ is $m$-harmonic. Let $g_{m}=\sum_{p\in V_{0}} g(p)\psi_{m}^{p}$. Then $g-g_{m}\in\mathcal{F}_{0}$ and therefore $\mathcal{E}^{a}(f,g-g_{m})=-\langle \mathcal{M}^{a}f,g-g_{m}\rangle_{L^{2}(\mu)}$. Since $\mathcal{M}^{a}f$ is continuous and $g-g_{m}\to g$ in $L^{2}(\mu)$ we find that $\mathcal{E}^{a}(f,g_{m})$ converges. For large enough $m$ the gauge transform and the definition of $\psi_{m}^{p}$ imply
\begin{align*}
\mathcal{E}^{a}(f,g_{m})
&=\sum_{p}\overline{g(p)}\mathcal{E}\bigl(e^{iA_{p}}f,e^{iA_{p}}\psi_{m}^{p}\bigr) \\
&=\Bigl( \frac{5}{3}\bigr)^{m}\sum_{p}\overline{g(p)}\mathcal{E}_{m} \bigl( e^{iA_{p}}f,e^{iA_{p}}\psi_{m}^{p}\bigr)\\
&=\Bigl( \frac{5}{3}\bigr)^{m} \sum_{p}\overline{g(p)}\sum_{x\sim_{m}p} \bigl( f(p)- e^{i(A_{p}(x)-A_{p}(p))} f(x) \bigr)
\end{align*}
so that the magnetic normal derivative exists and~\eqref{eqn:magGG} holds.
When $dA_{p}(p)$ exists we have $A_{p}(x)-A_{p}(p) = -\frac{1}{2}\bigl(\frac{3}{5}\bigr)^{m}dA_{p}(p) + o \bigl(\bigl(\frac{3}{5}\bigr)^{m}\bigr)$ for each of the two points $x\sim_{m}p$, since the part of these differences that is antisymmetric in the two neighbors scales like $5^{-m}$. Thus we compute
\begin{align*}
df(p)
&= \lim_{m\to\infty} \Bigl( \frac{5}{3}\bigr)^{m} \sum_{x\sim_{m}p} \bigl( f(p) - f(x)\bigr) \\
&=\lim_{m\to\infty} \Bigl( \frac{5}{3}\bigr)^{m} \sum_{x\sim_{m}p} \Bigl(\bigl( f(p) - f(x)e^{i(A_{p}(x)-A_{p}(p))}\bigr) + f(x)\bigl( e^{i(A_{p}(x)-A_{p}(p))}-1\bigr)\Bigr) \\
&= d^{a}f(p) - if(p) dA_{p}(p)
\end{align*}
which gives the second conclusion of the theorem.
\end{proof}
It is apparent that we can localize the magnetic Gauss-Green formula to any cell. Doing so allows us to give necessary and sufficient conditions for defining a function in $\dom(\mathcal{M}^{a})$ piecewise.
\begin{thm}\label{thm:gluing}
Suppose $a\in\mathcal{H}$ is real-valued and admits a local Coulomb gauge. Let $X_{1}=F_{w_{1}}(\SG)$ and $X_{2}=F_{w_{2}}(\SG)$ be two cells with $X_{1}\cap X_{2}=\{x\}$ and assume we have functions $f_{j}$ and $u_{j}$ from $\mathcal{F}|_{X_{j}}$ such that $\mathcal{M}^{a}f_{j}=u_{j}$, $j=1,2$. In order that the piecewise functions $f=f_{j}$ on $X_{j}$ and $u=u_{j}$ on $X_{j}$, $j=1,2$, satisfy $\mathcal{M}^{a}f=u$ it is necessary and sufficient that both are continuous, meaning $f_{1}(x)=f_{2}(x)$ and $u_{1}(x)=u_{2}(x)$, and also that $d^{a} f_{1}(x)+d^{a}f_{2}(x)=0$.
\end{thm}
\begin{proof}
The role of the continuity assumption is elementary, so we focus on the condition on $d^{a}$.
By localizing~\eqref{eqn:magGG} to $X_{1}$ and $X_{2}$ we may write the hypothesis $\mathcal{M}^{a}f_{j}=u_{j}$ as
\begin{equation}\label{eqn:gluingstep}
\mathcal{E}^{a}_{X_{j}} (f_{j},g) = -\langle u_{j},g \rangle_{L^{2}(\mu,X_{j})} + \sum_{p\in V_{0}} \bigl(d^{a}f_{j}(F_{w_{j}}(p)) \bigr) \overline{g(F_{w_{j}}(p))}
\end{equation}
for $j=1,2$.
Similarly, $\mathcal{M}^{a}f=u$ on the union means that for functions $g$ which vanish on $\bigl(F_{w_{1}}(V_{0})\cup F_{w_{2}}(V_{0})\bigr)\setminus\{x\}$ we have
\begin{equation*}
\mathcal{E}^{a}_{X_{1}\cup X_{2}} (f,g)
= -\langle u,g \rangle_{L^{2}(\mu,X_{1}\cup X_{2})}.
\end{equation*}
Comparing this to the sum of~\eqref{eqn:gluingstep} for $j=1,2$ we see that they are the same if and only if all the terms from the sums over $V_0$ vanish. Our hypothesis on $g$ ensures these sums only contain the two terms at $x$, so the quantity which must vanish is $(d^{a}f_{1}(x)+d^{a}f_{2}(x))\overline{g(x)}$, and $g(x)$ can take any value.
\end{proof}
We conclude this section with a discussion of the structure of the subspace of exact forms on SG and its complementary subspace in $\mathcal{H}$. Recall that the exact forms are the image of the map $\partial:\mathcal{F}\to\mathcal{H}$. Since $\|\partial f\|_{\mathcal{H}}^{2}=\mathcal{E}(f)$ and $\mathcal{F}$ modulo constants is a Hilbert space, the exact $1$-forms are a complete, hence closed, subspace of $\mathcal{H}$. We write $P$ for the projection onto the exact forms and $P^{\perp}$ for the orthogonal projection onto their complement. It is proven in~\cite{IRT} that $P\mathcal{H}_{m}$ is the space obtained by applying $\partial$ to the $m$-harmonic functions, while $P^{\perp}\mathcal{H}_{m}$ is the space of $m$-harmonic $1$-forms. A $1$-form is $m$-harmonic if on each $m$-cell $X_{w}=F_{w}(\SG)$ it is $(\partial h_{w})\mathds{1}_{w}$ for some $m$-harmonic function $h_{w}$, and for any point $x\in V_{m}$ the sum of the normal derivatives $\sum_{w} dh_{w}(x)$ over the cells meeting at $x$ is zero.
The self-similarity of the space $\mathcal{H}_{m}$ ensures we may understand its structure by studying that of $\mathcal{H}_{1}$. This is generated by the harmonic functions modulo constants on the $1$-cells. It is convenient to incorporate the condition on constants by assuming the harmonic functions have mean zero, so the sum of the values at points $F_{j}(V_{0})$ is zero for each $j\in\{0,1,2\}$. One can then check that $P\mathcal{H}_{1}$ is $5$-dimensional. In fact, the $5$-dimensional space generated by $1$-harmonic functions that are mean-zero on SG can be made mean-zero on each $F_{j}(\SG)$ by subtracting an appropriate mean-zero function that is harmonic on all of SG, so this space decomposes into the $2$-dimensional space $\mathcal{H}_{0}=P\mathcal{H}_{0}$ and a $3$-dimensional complement. The remaining space, $P^{\perp}\mathcal{H}_{1}$, is $1$-dimensional and corresponds to a loop around the central hole. We let $b\in\mathcal{H}_{1}$ be the element with counterclockwise orientation shown in Figure~\ref{fig:loopelement}(a), multiplied by $1/\sqrt{30}$ so that $\|b\|_{\mathcal{H}}=1$. It is also convenient to choose harmonic functions on the $1$-cells as shown in Figure~\ref{fig:loopelement}(b) such that applying $\partial$ gives $\sqrt{30}b$. Although the latter is not a function on SG, it is a function $B$ on the disjoint union $\sqcup_{j=0,1,2}F_{j}(\SG)$.
\begin{figure}[htb]
\begin{picture}(105.6,90)(0,-3)
\setlength{\unitlength}{.23pt} \Spic{2}{32}{0}{0}
\put(72,-40){$-1$} \put(252,-40){$-1$}
\put(-13,70){$-1$} \put(152,70){$2$} \put(212,70){$2$} \put(345,70){$-1$}
\put(182,126){$2$}
\put(248,228){$-1$}
\put(82,228){$-1$}
\end{picture}
\ \ \ \ \ \ \ \
\begin{picture}(105.6,90)(0,-3)
\setlength{\unitlength}{.20pt}
\Separatedfirstlevel{32}
\put(-18,-40){$0$} \put(442,-40){$0$} \put(212,380){$0$}
\put(185,-40){$1$} \put(220 ,-40){$-1$}
\put(30,155){$-1$} \put(375,155){$1$}
\put(100,210){$1$} \put(320,210){$-1$}
\end{picture}
\caption{(a) The $1$-form $\sqrt{30}b$, with orientation clockwise around each $1$-cell, hence counterclockwise around the central hole, and
(b) The harmonic function $B$ on disjoint $1$-cells.}\label{fig:loopelement}
\end{figure}
It is apparent that the set $\{b\circ F_{w}\}$ of $1$-forms spans the space of harmonic forms $P^{\perp}\mathcal{H}$. If $b\circ F_{w}$ and $b\circ F_{w'}$ are from disjoint cells then the direct sum decomposition in~(F\ref{Fcelldecomp}) implies they are orthogonal, and by computing $\Tr_{0}b=0$ from the formula in Definition~\ref{defn:trace} we find $b\circ F_{w}$ and $b\circ F_{w'}$ are orthogonal if $|w|\neq|w'|$. Thus $\{b\circ F_{w}\}$ is an orthogonal basis for $P^{\perp}\mathcal{H}$, and for real values $\beta_{w}$
\begin{equation}\label{eqn:normofharmonicform}
\Bigl\| \sum_{m=1}^{\infty}\sum_{|w|=m} \beta_{w}b\circ F_{w} \Bigr\|_{\mathcal{H}}^{2} = \sum_{m=1}^{\infty}\Bigl( \frac{5}{3}\Bigr)^{m}\sum_{|w|=m} \beta_{w}^{2}
\end{equation}
if the latter series converges. Moreover, $a\in\mathcal{H}$ is locally exact if and only if $P^{\perp}a$ is a series of this type with only finitely many terms, it has Coulomb gauge if and only if all $\beta_{w}$ in the series for $P^{\perp}a$ are integer multiples of $2\pi$, and it has scale $n$ Coulomb gauge if and only if all $\beta_{w}$ in the series for $P^{\perp}a$ which have $|w|>n$ are integer multiples of $2\pi$. Note that the final point and~\eqref{eqn:normofharmonicform} give another proof that every form admitting a local Coulomb gauge is locally exact, though not necessarily at the same scale.
\section{Spectra of magnetic operators with local Coulomb gauge}\label{sec:spectra}
In this section we study the spectrum of Dirichlet magnetic operators $\mathcal{M}^{a}$, which we know from Theorem~\ref{thm:mainresultofHR} is pure point. Our approach relies heavily on the spectral decimation property of the Laplacian on SG~\cite{RammalToulouse,Shima,FukushimaShima} and associated properties of the eigenfunctions~\cite{DSV,Kig1998}. Spectral decimation says that if $f$ is an eigenfunction of $\Delta$ on SG then there is $m_{0}$ (called the generation of birth) and a sequence $\{\lambda_{m}\}_{m_{0}}^{\infty}$ such that $\Delta_{m}f=\lambda_{m}f$ for all $m\geq m_{0}$. The sequence $\{\lambda_{m}\}$ is related to the eigenvalue by $\lambda_{m}(5-\lambda_{m})=\lambda_{m-1}$ and $\frac{3}{2}\lim 5^{m}\lambda_{m}=\lambda$. One way to view this graph eigenfunction equation is as follows: if on each $m$-cell $F_{w}(\SG)$ we have $f_{w}$ such that $\Delta f_{w}=\lambda f_{w}$ then defining $f$ piecewise to be $f_{w}$ on $F_{w}(\SG)$ we have $\Delta f=\lambda f$ if and only if $f$ is continuous and $\Delta_{m}f=\lambda_{m}f$. Comparing this to the usual gluing property we see that the discrete eigenfunction equation encodes that the normal derivatives sum to zero at the points of $V_{m}$. The equivalence of these conditions may also be verified using the explicit formulas for the normal derivatives from~\cite{DRS}.
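The decimation recursion is easy to experiment with numerically. The following sketch (the helper names \texttt{phi\_minus} and \texttt{rescaled\_limit} are ours, and the rationalized form of the branch is a numerical convenience, not part of the original analysis) iterates the negative branch and checks that the rescaled iterates $5^{m}\lambda_{m}$ settle to a finite limit; the eigenvalue itself is $\frac{3}{2}$ times this limit.

```python
import math

def phi_minus(lam):
    # Negative branch of lam_m (5 - lam_m) = lam_{m-1}.  Written in the
    # rationalized form 2*lam / (5 + sqrt(25 - 4*lam)), which equals
    # (5 - sqrt(25 - 4*lam)) / 2 but avoids cancellation for tiny lam.
    return 2.0 * lam / (5.0 + math.sqrt(25.0 - 4.0 * lam))

def rescaled_limit(tau, iterations=40):
    # Approximates lim_m 5^m * Phi_-^{(m)}(tau), truncated after `iterations`.
    lam = tau
    for _ in range(iterations):
        lam = phi_minus(lam)
    return 5.0 ** iterations * lam

lam1 = phi_minus(2.0)   # the forward relation is recovered up to rounding:
assert abs(lam1 * (5.0 - lam1) - 2.0) < 1e-12
```

Finitely many initial choices of the positive root, as in the generation of fixation discussed below, only change the starting value $\tau$, not the convergence of the rescaled iterates.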
We wish to study the spectrum of $\mathcal{M}^{a}$ via the finite approximations $\mathcal{M}^{a_{m}}_{m}$, so in light of the results of the previous section it makes sense to only consider real-valued $a\in\mathcal{H}$ which admit a local Coulomb gauge at scale $n$. By the discussion following Definition~\ref{defn:exact} we may also assume that $Pa=0$, because we can gauge transform to remove this part of $a$. Doing so will not change the eigenvalues of $\mathcal{M}^{a}$ and will simply conjugate the eigenfunctions. Under these assumptions let $m\geq n$ and $e^{iA_{w}}$ be the gauge transform on the $m$-cell $F_{w}(\SG)$. Then $u_{w}$ satisfies $\mathcal{M}^{a}u_{w}=\lambda u_{w}$ on $F_{w}(\SG)$ if and only if $f_{w}=e^{iA_{w}}u_{w}$ satisfies $\Delta f_{w}=\lambda f_{w}$ on the cell. The condition for gluing the $u_{w}$ into a piecewise defined eigenfunction with $\mathcal{M}^{a}u=\lambda u$ is that they join continuously and $\sum_{w}d^{a}u_{w}(p)=0$, where the sum is over the cells meeting at $p\in V_{m}$. From $Pa=0$ we have $\sum_{w}dA_{w}(p)=0$ for all $p$, so by~\eqref{eqn:magnormalderivfromusual} our condition becomes $\sum_{w}e^{-iA_{w}(p)}df_{w}(p)=\sum_{w}du_{w}=\sum_{w}d^{a}u_{w}=0$. But $\Delta e^{-iA_{w}(p)}f_{w}(x)=\lambda e^{-iA_{w}(p)}f_{w}(x)$, so the normal derivatives sum to zero at $p$ if and only if $\Delta_{m}e^{-iA_{w}(p)}f_{w}(p)=\lambda_{m}e^{-iA_{w}(p)}f_{w}(p)$, which is precisely $\mathcal{M}_{m}^{a_{m}}u_{w}=\lambda_{m} u_{w}$. Thus we can study $\mathcal{M}^{a}u=\lambda u$ by examining $\mathcal{M}_{m}^{a_{m}}u=\lambda_{m}u$ for $m\geq n$.
As described at the end of the previous section, the assumptions we have on $a$ imply that there are real numbers $\beta_{w}$ with $\beta_{w}\in2\pi\mathbb{Z}$ for $|w|>n$, such that
\begin{gather}
a=\sum_{m=1}^{\infty}\sum_{|w|=m} \beta_{w}b\circ F_{w}, \label{eqn:nonexactmag}\\
\|a\|_{\mathcal{H}}^{2}=\sum_{m=1}^{\infty}\Bigl( \frac{5}{3}\Bigr)^{m}\sum_{|w|=m} \beta_{w}^{2}<\infty. \notag
\end{gather}
Since all terms in this expression are self-similar it is clear that a significant step is to understand the spectrum of $\mathcal{M}^{\beta b}$, in which case we can look at $\mathcal{M}_{1}^{\beta b}$.
\begin{figure}
[t]
\includegraphics[width=.65\textwidth]{MagneticGasketLvl4.pdf}
\\[12pt]
\includegraphics[width=.65\textwidth]{MagneticGasketLvl5.pdf}
\\[12pt]
\includegraphics[width=.65\textwidth]{MagneticGasketLvl6.pdf}
\caption{Eigenvalues less than $160$ and $0\leq \beta\leq 2$ for the (from top to bottom) 4th, 5th, and 6th level approximations to $\mathcal M^{\beta b}$.} \label{SpectrumGraphs}
\end{figure}
The results of some numerical investigations into the spectrum of $\mathcal{M}^{b}$ are shown in Figure~\ref{SpectrumGraphs}. One can see the structure of the spectrum inherited from the spectral decimation process, which copies and expands the spectrum with each level of approximation.
Of particular note is the existence of many eigenvalues that do not vary with $\beta$, and are therefore independent of the field. These can be seen in Figure \ref{SpectrumGraphs} as horizontal lines. This pattern persists for more complicated magnetic operators $\mathcal{M}^{a}$ with local Coulomb gauge: when $m$ is sufficiently large we find that $\mathcal{M}_{m}^{a_{m}}$ has a large number of eigenvalues that are the same as those of $\Delta_{m}$. This turns out to be a straightforward consequence of the structure of the eigenfunctions of the Laplacian.
\begin{thm}\label{thm:conjugateLapefns}
Suppose $f$ is an eigenfunction of $\Delta$ with eigenvalue $\lambda$ and the support of $f$ is a finite union of cells $\cup X_{k}$ on which $a$ has a Coulomb gauge, so there is $e^{iA}\in\mathcal{F}$ such that $e^{-iA}(\partial e^{iA})\mathds{1}_{\cup X_{k}}=a\mathds{1}_{\cup X_{k}}$. Then $fe^{-iA}$ is an eigenfunction of $\mathcal{M}^{a}$ with eigenvalue $\lambda$.
\end{thm}
\begin{proof}
This is a direct computation from the validity of the gauge transformation on $\cup X_{k}$, because for $g\in\mathcal{F}_{0}$
\begin{equation*}
\mathcal{E}^{a}(fe^{-iA},g)=\mathcal{E}(f,e^{iA}g)
=-\lambda\langle f, e^{iA}g\rangle
=-\lambda\langle fe^{-iA},g\rangle. \qedhere
\end{equation*}
\end{proof}
\begin{rmk}
This result can also be thought of in terms of the gluing result in Theorem~\ref{thm:gluing}. By construction $fe^{-iA}$ satisfies the eigenfunction equation for $\mathcal{M}^{a}$ on $\cup X_{k}$. From the fact that $f$ is identically zero outside $\cup X_{k}$ we see that $df$ must be zero at the boundary points of $\cup X_{k}$. Using~\eqref{eqn:magnormalderivfromusual} with $f=df=0$ on the boundary of $\cup X_{k}$ we have $fe^{-iA}=d^{a}(fe^{-iA})=0$ there also, so extending $fe^{-iA}$ by zero gives a smooth solution of the eigenfunction equation on SG.
\end{rmk}
In order to see why this result determines many eigenfunctions of $\mathcal{M}^{a}$ we need some more consequences of the spectral decimation method, particularly those from~\cite{DSV,Kig1998}. Our presentation of them follows the elementary exposition in~\cite{Strichartzbook}, except that our bases for the $5$-series eigenspaces are more like those in~\cite{DSV}. In order to describe these bases we define a chain of $m$-cells to be a sequence $X_{k}=F_{w_{k}}(\SG)$, $k=1,\dotsc, K$ such that $|w_{k}|=m$ for all $k$ and the intersections $X_{k}\cap X_{k+1}=\{x_{k}\}$, $k=1,\dotsc,K-1$, are distinct points of $V_{m}$. We say the chain is simple if $X_{k}\cap X_{k'}=\emptyset$ unless $|k-k'|\leq1$.
\begin{enumerate}[({S}1)]
\item\label{S:specdec} For a Dirichlet eigenvalue $\lambda$ of $\Delta$ on SG with eigenfunction $f$, let $m(\lambda)$ be its generation of birth and $\lambda_{m}$ be the spectral decimation sequence, so $\Delta_{m}f=\lambda_{m}f$, $\lambda_{m}(5-\lambda_{m})=\lambda_{m-1}$ and $\frac{3}{2}\lim 5^{m}\lambda_{m}=\lambda$. Then $\lambda_{m(\lambda)}\in\{2,5,6\}$ and $\lambda_{m}\not\in\{2,5,6\}$ for $m>m(\lambda)$. We let $\sigma_{s}=\{\lambda: \lambda_{m(\lambda)}=s\}$ for $s=2,5,6$, and call these the $2$, $5$, and $6$ series eigenvalues.
\item\label{S:fixation} From the preceding, $\lambda_{m}=\frac{1}{2}\bigl(5\pm\sqrt{25-4\lambda_{m-1}} \bigr)=\Phi_{\pm}(\lambda_{m-1})$. For convergence of $5^{m}\lambda_{m}$ the positive root can occur at most finitely often, so there is $m_{1}(\lambda)$ called the generation of fixation such that $\lambda_{m}=\Phi_{-}(\lambda_{m-1})$ for all $m>m_{1}$. Writing $\Phi_{-}^{\circ m}$ for the $m$-fold composition, the function $\mathcal{R}(\tau)=\lim_{m} 5^{m}\Phi_{-}^{\circ m}(\tau)$ is analytic, $\mathcal{R}(0)=0$ and $\mathcal{R}'(0)\neq0$. Knowing the generation of fixation the eigenvalue is $\lambda=5^{m_{1}}\mathcal{R}(\lambda_{m_{1}})$.
\item If $\lambda\in\sigma_{2}$ then $m(\lambda)=1$, its eigenspace is $1$-dimensional, and the eigenfunctions are fully symmetric under the dihedral symmetry group of the triangle.
\item If $\lambda\in\sigma_{5}$ then $m(\lambda)\geq1$. All eigenfunctions vanish on $V_{m(\lambda)-1}$ and the eigenspace has dimension $\frac{1}{2}\bigl(3^{m(\lambda)-1}+3\bigr)$. There is a basis for the $5$-series eigenfunctions in which each is supported on a simple chain of $(m(\lambda)-1)$-cells in which $X_{1}$ and $X_{K}$ contain distinct points of $V_{0}$.
\item If $\lambda\in\sigma_{6}$ then $m(\lambda)\geq2$. The eigenspace has dimension $\frac{1}{2}\bigl(3^{m(\lambda)}-3\bigr)$, and there is a basis in which each eigenfunction is supported on the union of two $(m(\lambda)-1)$-cells meeting at a point of $V_{m(\lambda)-1}\setminus V_{0}$.
\end{enumerate}
A small comment about the $5$-series basis is in order. With generation of birth $m+1$ there is a function supported on an $m$-cell with the following property: given an $m$-chain with both ends on $V_{0}$ there is an arrangement of copies of the function along the cells in the chain such that the
resulting function extends smoothly by $0$ to give a $5$-series eigenfunction on SG. This arrangement is unique up to multiplying the eigenfunction by a scalar. In~\cite{DSV} a basis is given in which each eigenfunction is supported on an $m$-cell chain from $p_{0}$ to either $p_{1}$ or $p_{2}$, but the chains given are not simple. In particular it follows from~\cite{DSV} that the number of $m$-cell chains between two points of $V_{0}$ is $\frac{1}{2}\bigl(3^{m-1}+1\bigr)$. Observe that each simple $m$-chain determines an $(m-1)$-chain by taking the parent cells of the $m$-cells in the chain. Conversely an $(m-1)$-chain determines a simple $m$-chain by taking, in each $(m-1)$-cell $X_{k}$, the two $m$-cells which form the shortest $m$-cell chain from $x_{k-1}$ to $x_{k}$. From this bijection between simple $m$-chains and $(m-1)$-chains we see that the number of simple $m$-cell chains between two specified points of $V_{0}$ is $\frac{1}{2}\bigl(3^{m-2}+1\bigr)$, and therefore the number of such chains joining pairs of points from $V_{0}$ is $\frac{1}{2}\bigl(3^{m-1}+3\bigr)$, which is the dimension of a $5$-series eigenspace with generation of birth $m$. Moreover it is easy to prove inductively that the eigenfunctions corresponding to these chains are linearly independent. When $m=2$ this can be done by hand (as was done in~\cite{DSV}). For the inductive step observe that if a linear combination of eigenfunctions corresponding to simple $m$-chains is zero then it is zero on each cell $F_{j}(\SG)$, $j=0,1,2$. Then precomposing the piece on $F_{j}(\SG)$ with $F_{j}^{-1}$ gives a vanishing linear combination of eigenfunctions corresponding to $(m-1)$-chains, and these are linearly independent by the inductive hypothesis.
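As a mechanical cross-check of this bookkeeping (our illustration, not part of~\cite{DSV}), one can confirm that three times the per-pair count of simple $m$-chains equals the dimension of the $5$-series eigenspace:

```python
# Three pairs of points in V_0, each joined by (3^(m-2) + 1) / 2 simple
# m-chains, should account for the 5-series dimension (3^(m-1) + 3) / 2.
for m in range(2, 15):
    per_pair = (3 ** (m - 2) + 1) // 2
    dimension = (3 ** (m - 1) + 3) // 2
    assert 3 * per_pair == dimension
```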
\begin{thm}\label{thm:spectrumofMaga}
If $a$ is a real-valued form with local Coulomb gauge at scale $n$ and $\lambda$ is a Laplacian eigenvalue with generation of birth $m(\lambda)>n$ then $\lambda$ is also an eigenvalue of $\mathcal{M}^{a}$, and the corresponding eigenfunction is obtained from the Laplacian eigenfunction by a gauge transformation.
\end{thm}
\begin{proof}
If $a$ is as described then on every $n$-cell $F_{w}(\SG)$ we have a gauge function $e^{iA_{w}}$, which is determined up to a multiplicative constant. For $\lambda$ as described the Laplacian eigenfunction is supported either on a simple chain of $(m(\lambda)-1)$-cells, with $m(\lambda)-1\geq n$, or on the union of two $(m(\lambda)-1)$-cells, which we denote $X_{k}=F_{w_{k}}(\SG)$. In either case simplicity of the chain ensures we may choose the multiplicative constants so that $e^{iA}$, defined to be $e^{iA_{w_{k}}}$ on $X_{k}$, is continuous at the points $x_{k}=X_{k}\cap X_{k+1}$, hence a Coulomb gauge on $\cup X_{k}$. The result then follows from Theorem~\ref{thm:conjugateLapefns}.
\end{proof}
\begin{cor}\label{cor:spectasymptotics}
If $a$ is a real-valued form with local Coulomb gauge at scale $n$ then $\mathcal{M}^{a}$ has the same spectral asymptotics as $\Delta$. Specifically, let $\rho^{a}(x)$ be the counting function of $\mathcal{M}^{a}$, so $\rho^{a}(x)=\#\{\lambda\in\sigma_{D}:\lambda\leq x\}$. There is a non-constant periodic function $\chi$ of period $\log 5$ such that
\begin{equation*}
\lim_{x\to\infty} \Bigl( \rho^{a}(x)x^{-\log3/\log5} - \chi(\log x) \Bigr) = 0
\end{equation*}
The function $\chi$ is independent of $a$, so is the same as that occurring for the Laplacian spectrum.
\end{cor}
\begin{proof}
For $a=0$ this is simply the spectral asymptotic for the Laplacian, and follows from a more general analysis in~\cite{KigLap}. When $a\neq0$ the result follows from the fact that eigenvalues with generation of birth less than $n$ make an asymptotically small contribution to the spectrum. To make this precise we reason as follows.
The eigenvalues and eigenfunctions of $\mathcal{M}^{a}$ obey spectral decimation for all sufficiently large $m$, so for each eigenvalue $\lambda$ there is a sequence $\lambda_{m}$, $m\geq m_{0}$ as in~(S\ref{S:specdec}) and the eigenvalue is determined at the generation of fixation as described in~(S\ref{S:fixation}). Following this line of reasoning, for a specified $x$ there is $m_{1}$ comparable to $\log x$ such that all eigenvalues $\lambda\leq x$ are of the form $5^{m_{1}}\mathcal{R}(\lambda_{m_{1}})$ for $\lambda_{m_{1}}$ an eigenvalue of $\mathcal{M}_{m_{1}}^{a_{m_{1}}}$. Hence it suffices to know what proportion of the eigenvalues of $\mathcal{M}_{m_{1}}^{a_{m_{1}}}$ have generation of birth $\leq n$. At each $m$ the number of newly born eigenvalues is comparable to $3^{m}$, and these split according to the positive and negative roots in the spectral decimation to give a multiple of $2^{m_{1}-m}3^{m}$ eigenvalues at the generation of fixation, so the number of eigenvalues born before $n$ but fixed at $m_{1}$ is comparable to $3^{n}$, while the total number fixed at $m_{1}$ is comparable to $3^{m_{1}}$. Thus the proportion of eigenvalues of $\mathcal{M}^{a}$ that differ from those of $\Delta$ and are less than $x$ is bounded by a multiple of $3^{n}/x$ for large $x$, and goes to zero as $x\to\infty$.
\end{proof}
Theorem~\ref{thm:spectrumofMaga} also gives all of the spectrum of $\mathcal{M}^{\beta b}$ except that born at generation $1$. We can get the rest by direct computation. If we label the points of $V_{1}\setminus V_{0}$ as $q_{j}$, $j=0,1,2$, then symmetry suggests we ought to have eigenfunctions $f_{k}$ of $\mathcal{M}_{1}^{\beta b}$ with values on $V_{1}$ given by $f_{k}(q_{j})=e^{ijk2\pi/3}$. Indeed
\begin{equation*}
\mathcal{M}_{1}^{\beta b}f_{k}=\Bigl( 4 - 2\cos \Bigl( \frac{2\pi k}{3} + \frac{2\beta}{\sqrt{30}} \Bigr) \Bigr)\, f_{k}
\end{equation*}
from which we can determine the corresponding eigenvalues of $\mathcal{M}^{\beta b}$ by applying the spectral decimation maps $\Phi_{\pm}$ and the function $\mathcal{R}$ from~(S\ref{S:fixation}). Ideally we would like to be able to use this information to compute the bottom of the spectrum for $\mathcal{M}^{a}$ in the case where $a$ is given by~\eqref{eqn:nonexactmag}, at least in some special cases, but unfortunately we do not know how to do this.
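The displayed formula can be verified directly. In the following sketch (our notation) we assume, as suggested by the symmetry of Figure~\ref{fig:loopelement}(a), that on the level-$1$ Dirichlet graph the field $\beta b$ contributes a phase $\alpha=2\beta/\sqrt{30}$ to each inner edge $q_{j}q_{j+1}$, while the Dirichlet condition removes the contribution of the neighbors in $V_{0}$:

```python
import cmath
import math

beta = 0.7                               # sample field strength (arbitrary)
alpha = 2.0 * beta / math.sqrt(30.0)     # assumed phase on each inner edge

def apply_M1(f):
    # (M f)(q_p) = 4 f(q_p) - e^{+i alpha} f(q_{p+1}) - e^{-i alpha} f(q_{p-1});
    # the two neighbors of q_p in V_0 contribute nothing (Dirichlet condition).
    return [4.0 * f[p]
            - cmath.exp(1j * alpha) * f[(p + 1) % 3]
            - cmath.exp(-1j * alpha) * f[(p - 1) % 3]
            for p in range(3)]

for k in range(3):
    f_k = [cmath.exp(1j * 2.0 * math.pi * p * k / 3.0) for p in range(3)]
    lam_k = 4.0 - 2.0 * math.cos(2.0 * math.pi * k / 3.0 + alpha)
    assert all(abs(Mf - lam_k * x) < 1e-12 for x, Mf in zip(f_k, apply_M1(f_k)))
```

Reversing the sign convention on $\alpha$ only permutes the $f_{k}$, so the spectrum as a set is unchanged.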
\section{Spectrum of $\mathcal{M}^{\beta b}$ via the ladder fractafold}\label{sec:ladderfractafold}
An alternative approach to the problem of determining the spectrum of $\mathcal{M}^{\beta b}$ is to lift the problem to a periodic version on a suitable covering space using a technique from~\cite{ST12}. To avoid merely repeating the results of the previous section we illustrate this method by computing the spectrum of the Neumann magnetic operator.
The space we use is called the Ladder Fractafold based on the Sierpinski gasket, and is denoted LF. A general method for analyzing the spectrum of a fractafold constructed by gluing copies of SG arranged according to a graph is given in~\cite{ST12}. For LF, let the vertices of a graph $\Gamma_0$ be three copies of $\mathbb{Z}$, labelled $\{x_{k+\frac{1}{2}}\}$, $\{w_k\}$, and $\{y_{k+\frac{1}{2}}\}$ and the edges be such that $w_{k}$, $x_{k-\frac{1}{2}}$, $x_{k+\frac{1}{2}}$ is a complete graph on $3$ vertices, and so is $w_{k}$, $y_{k-\frac{1}{2}}$, $y_{k+\frac{1}{2}}$. Then LF is obtained by replacing these complete $3$-graphs with copies of SG, see Figure~\ref{fig:LF}.
\begin{figure}
\includegraphics[width=.9\textwidth]{SierpinskiLadder.pdf}
\caption{The Ladder Fractafold}\label{fig:LF}
\end{figure}
According to the analysis in~\cite{ST12}, the spectrum of LF can be determined from the graph of the cells and their connectivity. If we label the cell with vertices $w_{k}$, $x_{k-\frac{1}{2}}$, $x_{k+\frac{1}{2}}$ by $a_{k}$ and that with vertices $w_{k}$, $y_{k-\frac{1}{2}}$, $y_{k+\frac{1}{2}}$ by $b_{k}$ and treat $\{a_{k}\}\cup\{b_{k}\}$ as vertices of a graph $\Gamma$ with edges when the corresponding cells intersect, then $\Gamma$ is a ladder as shown in Figure~\ref{fig:GammaGraphs}. If $-\Delta_{\Gamma}$ is the usual discrete Laplacian on $\Gamma$ it has absolutely continuous spectrum $[0,6]$. One can prove the resolvent is unbounded by considering two sets of functions that satisfy an eigenfunction equation but are not in $L^{2}$: $\{\phi_{\theta}\}$ such that $\phi_{\theta}(a_{k})=\phi_{\theta}(b_{k})=e^{ik\theta}$ with eigenvalue $2-2\cos\theta$ (these are even in the reflection exchanging $a_{k}$ and $b_{k}$), and $\{\psi_{\theta}\}$ such that $\psi_{\theta}(a_{k})=-\psi_{\theta}(b_{k})=e^{ik\theta}$ with eigenvalue $4-2\cos\theta$ (these are odd in the reflection exchanging $a_{k}$ and $b_{k}$). In both cases $0\leq\theta\leq\pi$. From Theorem~3.1 of~\cite{ST12} and their discussion in Example~5.2, this spectrum is the same as that of $ -\Delta_{\Gamma_{0}}$. Moreover they relate the spectrum $\sigma(-\Delta_{\Gamma_{0}})$ to that of the Laplacian on the fractafold as follows.
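For completeness, the eigenfunction equations for $\phi_{\theta}$ and $\psi_{\theta}$ can be checked mechanically (our sketch; on $\Gamma$ the vertex $a_{k}$ has neighbors $a_{k-1}$, $a_{k+1}$, and $b_{k}$):

```python
import cmath
import math

theta = 1.1  # arbitrary sample from [0, pi]

def minus_laplacian_at_a(fa, fb, k):
    # (-Delta_Gamma f)(a_k) = 3 f(a_k) - f(a_{k-1}) - f(a_{k+1}) - f(b_k).
    return 3 * fa(k) - fa(k - 1) - fa(k + 1) - fb(k)

phi = lambda k: cmath.exp(1j * k * theta)   # values e^{ik theta} along the ladder

for k in range(-3, 4):
    even = minus_laplacian_at_a(phi, phi, k)               # phi(b_k) = phi(a_k)
    odd = minus_laplacian_at_a(phi, lambda k: -phi(k), k)  # psi(b_k) = -psi(a_k)
    assert abs(even - (2 - 2 * math.cos(theta)) * phi(k)) < 1e-12
    assert abs(odd - (4 - 2 * math.cos(theta)) * phi(k)) < 1e-12

# As theta runs over [0, pi] the two families sweep the bands [0, 4] and
# [2, 6], whose union is the absolutely continuous spectrum [0, 6].
```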
\begin{figure}
\includegraphics[width=\textwidth]{GammaGraphs.pdf}
\caption{The graphs $\Gamma_0$ (unfilled vertices, dashed edges) and $\Gamma$ (filled vertices, solid edges)}\label{fig:GammaGraphs}
\end{figure}
\begin{thm}[\protect{Theorem~2.3 of~\cite{ST12}}] \label{BigTheorem}
Using the function $\mathcal{R}$ from~(S\ref{S:fixation}) let
\begin{gather*}
\Sigma_{\infty} = 5\biggl( \mathcal{R}\{2\} \cup \bigcup_{m=0}^{\infty} 5^{m}\mathcal{R}\{3,5\}\biggr),\\
\Sigma'_{\infty} = 5\biggl( \bigcup_{m=0}^{\infty} 5^{m} \mathcal{R}\{3,5\}\biggr)\subset\Sigma_{\infty}.
\end{gather*}
Then for $\Delta$ the Laplacian on the fractafold obtained by gluing according to $\Gamma_{0}$
\begin{equation*}
\mathcal{R}(\sigma (- \Delta_{\Gamma_0})) \cup \Sigma_{\infty}' \subset \sigma (- \Delta_{\text{LF}}) \subset\ \mathcal{R} (\sigma (- \Delta_{\Gamma_0})) \cup \Sigma_{\infty}.
\end{equation*}
\end{thm}
To connect this to the study of the magnetic operator $\mathcal{M}^{\beta b}$ we ``fold'' the ladder along the center-line parallel to its length, so the point $x_{k+\frac{1}{2}}$ is identified with $y_{k+\frac{1}{2}}$ for all $k$, and obtain the fractafold shown in Figure~\ref{fig:FLF}, which we call the folded ladder fractafold, or FLF. The FLF is a covering space for SG in which the loop around the central hole of the $V_1$ graph has been trivialized. The covering map takes each cell $a_{k}$ in the fractafold to a $1$-cell of SG in a $3$-periodic manner, identifying $a_{k}$ with the cell $F_{k\,\text{mod}\,3}(SG)$, $w_{k}$ with $p_{k\,\text{mod}\,3}\in V_{0}$ and mapping both $x_{k+\frac{1}{2}}$ and $y_{k+\frac{1}{2}}$ to the same point of $V_{1}\setminus V_{0}$. We arrange the map so that the line through the $x_{k+\frac{1}{2}}$ wraps in a counterclockwise direction around the central hole in the $V_{1}$ graph as $k$ increases.
\begin{lem}
There is a bijection taking each Neumann eigenfunction $f$ of $\Delta$ on SG with eigenvalue $\lambda$ to a solution $\tilde{f}$ of $\Delta_{\text{LF}}\tilde{f}=\lambda\tilde{f}$ which is symmetrical under the central line reflection and is $3$-periodic.
\end{lem}
\begin{proof}
For the definition and properties of the Laplacian on LF and FLF we refer to~\cite{ST12}. A function satisfying $\Delta_{\text{FLF}}\hat{f}=\lambda\hat{f}$ on FLF unfolds to give a function $\tilde{f}$ on LF. This function satisfies $\Delta_{\text{LF}}\tilde{f}=\lambda\tilde{f}$ if and only if its normal derivatives at each $w_{k}$ sum to zero; given the symmetry, this happens if and only if $d\hat{f}=0$ at all points $w_{k}$. At the same time, the period $3$ covering of SG by FLF ensures that $3$-periodic solutions of $\Delta_{\text{FLF}}\hat{f}=\lambda\hat{f}$ on FLF correspond to eigenfunctions on SG in such a way that the normal derivatives at points $w_{k}$ correspond to those on $V_{0}$.
\end{proof}
\begin{rmk}
We could do something similar for the Dirichlet eigenfunctions on SG by considering antisymmetry in the center line.
\end{rmk}
\begin{figure}
\includegraphics[width=\textwidth]{FoldedLadder.pdf}
\caption{The folded Sierpinski Ladder Fractafold}\label{fig:FLF}
\end{figure}
More importantly, the same thing happens for the magnetic operator $\mathcal{M}^{\beta b}$. The only modification required for the proof is that the symmetric unfolding of a solution of $\Delta_{\text{FLF}}\hat{f}=\lambda\hat{f}$ from FLF to LF gives a solution of $\Delta_{\text{LF}}\tilde{f}=\lambda\tilde{f}$ if and only if the normal derivatives $d^{\beta b}\hat{f}$ sum to zero at each $w_{k}$. However, \eqref{eqn:magnormalderivfromusual} and the fact that the normal derivatives of the gauge potential sum to zero at each $w_{k}$ ensure that the Neumann condition is still the correct one. To make this argument we need $1$-forms and magnetic forms on LF and FLF; their properties are substantially similar to those on SG, and we refer to~\cite{IRT,HR} for more details.
\begin{cor}
There is a bijective map which takes each Neumann eigenfunction of $\mathcal{M}^{\beta b}$ on SG with eigenvalue $\lambda$ to a solution $\tilde{f}$ of $\mathcal{M}^{\beta b}_{\text{LF}}\tilde{f}=\lambda\tilde{f}$ that is symmetrical under the central line reflection and $3$-periodic. Here $\mathcal{M}^{\beta b}_{\text{LF}}$ is the magnetic operator corresponding to the symmetric $3$-periodic lift of $\beta b$ to LF.
\end{cor}
The preceding result is significant because passing to FLF trivializes the loop where $b$ is not exact, so we might expect $\beta b$ to be exact on FLF. This is not literally true because the periodic extension of $\beta b$ to FLF will not have finite energy, simply because it is periodic. However our reasoning regarding the gauge transformation is still valid: we can define $e^{i\beta B}$, which is globally continuous and locally in the domain of the Dirichlet form on FLF, such that $\mathcal{M}^{\beta b}f=e^{-i\beta B}\Delta_{\text{FLF}}(e^{i\beta B}f)$ for any $f$ in the domain of $\mathcal{M}^{\beta b}$ with compact support, and take limits to extend this operation to $L^{2}$.
\begin{thm}
The spectrum of the Neumann magnetic operator $\mathcal{M}^{\beta b}$ on SG is
\begin{equation*}
\sigma (\mathcal{M}^{\beta b}) = \mathcal{R} \Bigl\{ 2 - 2\cos\bigl( \frac{2k\pi}{3} - \frac{2\beta}{\sqrt{30}} \bigr) \Bigr\}_{k=0}^{2} \cup \Sigma_{\infty}'
\end{equation*}
\end{thm}
\begin{proof}
The periodic extension of $\beta b$ to FLF has gauge $e^{iB}$ where $B$ is harmonic on each cell $a_{k}$ and has values
\begin{equation*}
B(x_{k+\frac{1}{2}})
= \frac{2\beta}{\sqrt{30}}\Bigl(k+\frac{1}{2}\Bigr)
\end{equation*}
and $B(w_{k})=0$ for all $k$. We use the same notation for the symmetric extension to LF. The gauge transformation is valid and reduces the problem to finding those elements of the spectrum of the Laplacian on LF for which the associated function is symmetric in the center line and, after application of the gauge transformation, is $3$-periodic. By Theorem~\ref{BigTheorem} and elementary arguments about the eigenfunctions associated to $\Sigma_{\infty}$ and $\Sigma'_{\infty}$ this includes all of $\Sigma'_{\infty}$ but not $5\mathcal{R}\{2\}$. The remaining values correspond to spectral values from the symmetric functions $\phi_{\theta}$ on $\Gamma$. According to~\cite{ST12} the corresponding functions on LF are equal to $e^{i(k+\frac{1}{2})\theta}$ at $x_{k+\frac{1}{2}}$ and $e^{ik\theta}$ at $w_{k}$. When multiplied by the gauge, these have
\begin{equation*}
e^{iB}\phi_{\theta}(x_{k+\frac{1}{2}})
=\exp i\Bigl( \bigl(k+\frac{1}{2}\bigr)\Bigl(\theta + \frac{2\beta}{\sqrt{30}}\Bigr)\Bigr)
\end{equation*}
which is periodic of period $3$ in $k$ if and only if $3\theta+\frac{6\beta}{\sqrt{30}}\equiv0\mod2\pi$. Solving this for $\theta$, using the fact that the eigenvalue on $\Gamma$ was $2-2\cos\theta$, and applying the reasoning surrounding Theorem~\ref{BigTheorem} completes the proof.
\end{proof}
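The period-$3$ condition in the last step of the proof can also be confirmed numerically. In the sketch below (ours) we track only the phase of $e^{iB}\phi_{\theta}$ at $x_{k+\frac{1}{2}}$, dropping the constant term in $B$ since a constant cannot affect periodicity in $k$:

```python
import cmath
import math

beta = 0.9  # arbitrary field strength

def gauged_phase(k, theta):
    # Phase of e^{iB} phi_theta at x_{k+1/2}, up to an irrelevant constant.
    return cmath.exp(1j * ((k + 0.5) * theta + 2.0 * beta * k / math.sqrt(30.0)))

# theta = 2*pi*k'/3 - 2*beta/sqrt(30) gives 3*theta + 6*beta/sqrt(30) = 2*pi*k',
# so the gauged sequence is 3-periodic exactly for these values of theta.
for kp in range(3):
    theta = 2.0 * math.pi * kp / 3.0 - 2.0 * beta / math.sqrt(30.0)
    assert all(abs(gauged_phase(k + 3, theta) - gauged_phase(k, theta)) < 1e-12
               for k in range(-5, 5))
```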
\section{Introduction}
\label{sec:intro}
Young stellar objects grow mainly by accreting mass from a circumstellar or protoplanetary disk. Planet formation theories predict that the composition of the disk should vary with time owing to the growth of dust grains into centimeter-sized grains, or ``pebbles'', that drift toward the central star, the formation of planetesimals and planets, and the presence of disk winds. In our previous studies, we provided constraints on the evolutionary models of protostellar and pre-main-sequence (pre-MS) stars, including accretion \citep{Kunitomo+17}, and also investigated the impact of the evolving composition of the accreting material on the composition of the stellar surface \citep{Kunitomo+18}. We showed that some of the anomalies observed in the chemical composition of stellar clusters, $\lambda$ Boo stars, and binary stars can be explained by planet formation processes. Moreover, our studies revealed that the large deficit in the Sun's refractory elements compared to that in solar twins, as found by \citet[][]{Melendez+09}, may be explained by the formation of giant planets in the Solar System with a very high rock-to-ice ratio \citep{Kunitomo+18} \citep[see][for an alternative explanation]{Booth+Owen20}.
Given the detailed information and constraints available regarding the composition and structure of the Sun, it would be interesting to investigate the imprint of planet formation processes in the Sun that may be detected by observations.
The main characteristics of the Sun, namely, its radius, luminosity, and age, are precisely known. However, the theoretical modeling of the Sun's atmospheric composition, which is determined using high-resolution spectroscopy, has significantly evolved in the past decade. This is owing to the replacement of old one-dimensional atmospheric models by more accurate three-dimensional models with new atomic data that account for convection and nonlocal thermodynamic equilibrium effects, which substantially improved the fit to the observed spectral lines \citep{Asplund+05,Asplund+09}.
As a consequence, the inferred present-day solar surface metallicity decreases significantly from $\Zs=0.018$ \citep[][hereafter \citetalias{GS98}]{GS98} to $\Zs=0.013$ \citep[][hereafter \citetalias{Asplund+09}]{Asplund+09}.
The best constraints on the internal structure of the Sun are obtained from helioseismology, which helps determine not only the location of the base of the convective zone (CZ) but also the sound speed from the radius $r \approx 0.1\, R_\sun $ to the solar surface, where $ R_\sun $ is the solar radius \citep[e.g.,][]{Basu16}. However, it has been shown that the old high-$Z$ solar models result in significantly better fits to the observed sound speed compared to the low-$Z$ solar models. This is the so-called solar abundance problem \citep[see reviews by][]{Asplund+09,Serenelli16}, which has been the focus of many studies. Potential solutions to the solar abundance problem include the consideration of increased opacities \citep{Bahcall+05,Christensen-Dalsgaard+09,Villante10,Ayukov+Baturin13, Bailey+15,Vinyoles+17}, increased efficiencies of diffusion \citep[so-called extra mixing;][]{Christensen-Dalsgaard+18,Buldgen+19}, diffusion due to the solar rotation \citep[so-called rotational mixing;][]{Yang19}, helium-poor accretion \citep{Zhang+19}, an updated solar composition \citep{Young18}, and revised nuclear reaction rates \citep{Ayukov+Baturin17}. Accretion models with a time-dependent composition, in particular, low-$Z$ accretion in the late pre-MS phase, have also been investigated but were found to be unsuccessful in solving the solar abundance problem \citep{Castro+07,Guzik+Mussack10,Serenelli+11}.
To investigate the effect of planet formation on the structure and composition of the Sun requires a quantitative analysis of: (1) various planet formation processes and (2) solar evolution models that consider the accretion of matter with a time-dependent composition to reproduce all observational constraints. In the present work, which is an extension of our previous works \citep{Kunitomo+17, Kunitomo+18}, we conducted a thorough search for solutions to the solar abundance problem by applying our accretion models with a time-dependent composition to optimized solar evolution models.
The remainder of this paper is organized as follows.
In Sect.\,\ref{sec:context}, we discuss how the growth of dust grains, the radial drift of pebbles, and planet formation affect the composition of the gas accreted by the proto-Sun during its protostellar and pre-MS evolution. In Sect.\,\ref{sec:method}, we describe the method used to compute our stellar evolution models and the procedure followed to optimize these models to reproduce the observational constraints.
In Sect.\,\ref{sec:results}, we present the results of different solar evolution models and compare them with observations; in addition, we show that an increased opacity appears to be the most likely solution for the solar abundance problem. Using these models, in Sect.\,\ref{sec:discussion-planets}, we examine the effect of planet formation on the composition and structure of the present-day Sun.
Our results are summarized in Sect.\,\ref{sec:conclusion}.
\section{Context}
\label{sec:context}
\subsection{Evolution of the composition of protoplanetary disks}
\label{sec:planets}
During the collapse of a molecular cloud core, a large fraction of the material first forms a circumstellar disk around the central star (or stars) \citep{Inutsuka12}. Next, the transport of angular momentum from the inner region of the disk to the outer region causes the matter to be accreted by the central star(s) \citep{Lynden-Bell+Pringle74} on a timescale of order 1--10\,Myr \citep[e.g.,][]{Hartmann+16}. During this time, initially micron-sized dust grains grow larger in size, as indicated by the dust emission at millimeter wavelengths \citep[e.g.,][]{Beckwith+90}. A comparison of the mass of heavy elements present in exoplanetary systems and that inferred to be present in protoplanetary disks indicates that these grains must grow further in size to form planetesimals and even planets during the early stages of disk evolution \citep{Manara+18,Tychoniec+20}.
Dust growth in the Class 0/I phases of young stellar objects has also been investigated in several theoretical studies \citep[e.g.,][]{Tsukamoto+17, Tanaka+Tsukamoto19}.
The formation of planetesimals and planets is likely to lead to a decrease in the metallicity of the gas accreted by the central star(s). \citet{Kunitomo+18} estimated that approximately 97 to $168\, M_\oplus$ of heavy elements in the Solar System were either used to form planets or were ejected from the system.
This is a small value compared to the total mass of heavy elements present in the Sun, which is estimated to be approximately $5000\, M_\oplus$; however, it can still cause the metallicity of the accreted gas to change, particularly during the later stages of stellar accretion. Moreover, it has been proposed that the peculiar composition of $\lambda$ Boo stars can be explained by the presence of giant planets in the surrounding protoplanetary disks \citep{Kama+15}.
It is also worth noting that as the grains grow in size, they drift rapidly inward in the protoplanetary disk \citep{Adachi+76, Weidenschilling77a}. This can create a wave of solids that may be lost to the central star in the absence of any other processes \citep{Stepinski+Valageas96, Garaud07}. Circumstellar disks have been proposed to be large (100\,au or more in size) and expanding \citep[e.g.,][]{Hueso+Guillot05}, which implies that the outer part of the disk acts as a reservoir in which the dust grains grow and from which they are gradually released. By assuming the standard theory of grain growth and adopting the $\alpha$-disk model \citep[][]{Shakura+Sunyaev73}, \citet{Garaud07} demonstrated that the timescale on which the solids are released from the outer disk is determined by a balance between the outward spreading of the disk and the growth of the dust grains.
The metallicity $\Zacc$ of the gas accreted by the proto-Sun can be calculated from the ratio between the mass fluxes of the dust $\dot{M}_{\rm d}$ and gas $\dot{M}\sub{acc}$, such that
\begin{equation}
\Zacc=\frac{\dot{M}_{\rm d}}{\dot{M}\sub{acc}}\,.
\end{equation}
The above equation assumes that all the heavy elements are in condensed form. Because most elements heavier than hydrogen and helium do condense in the protoplanetary disk, this assumption is justified in our case.
Figure~\ref{fig:Eacc} shows the evolution of $\Zacc$ with time and the mass of the proto-Sun, as obtained from various studies on the evolution of dust in protoplanetary disks. In the beginning, $\Zacc=\Zproto$, where $\Zproto$ is the primordial metallicity of the molecular cloud core. Subsequently, $\Zacc$ increases with time due to the incoming pebble wave before decreasing sharply owing to the removal of all or most of the solids from the disk.
\begin{figure}[tbp]
\begin{center}
\includegraphics[width=\hsize,keepaspectratio]{zacc.pdf}
\caption{\small{
Evolution of the metallicity $\Zacc$ of the accreted gas in units of the initial metallicity $\Zaccini$ as a function of time (left panel) and mass of the proto-Sun (right panel). Several evolutionary models of circumstellar disks that include both gas and dust are shown (see text for details). The protostar mass was calculated assuming some mass loss (e.g., due to partial photoevaporation of the circumstellar disk) except in two cases, which are represented by dashed lines.}}
\label{fig:Eacc}
\end{center}
\end{figure}
The first set of disk models shown in Fig.~\ref{fig:Eacc} corresponds to the analytical solutions obtained by \citet{Garaud07} for $\alpha=10^{-2}$ and $10^{-3}$ and $R_0=30$\,au and 1000\,au, where $\alpha$ is the dimensionless turbulent viscosity parameter and $R_0$ is the initial disk radius. The second model corresponds to the numerical solution obtained by \citet{Appelgren+20} for a standard one-dimensional disk model. In this case, the authors assumed a variable $\alpha$ such that its value increased in the presence of gravitational instabilities but had a fixed minimum value of $\alpha_{\rm min}=10^{-2}$. Finally, the third set of models corresponds to the solutions obtained by \citet{Elbakyan+20} for a two-dimensional disk model. The accretion is highly variable in this case, and hence, we plot only the mean value over a timescale of $2.5\times 10^4$ years.
The left panel of Fig.~\ref{fig:Eacc} shows that the pebble wave appears at less than 1 Myr and lasts until approximately 10 Myr. This timescale is extremely difficult to estimate because it depends on several factors, including the turbulent viscosity in the disk, initial angular momentum of the disk material, overall structure of the disk, and the model used to calculate the grain growth. The peak value of $\Zacc/\Zproto$ varies from 1.5, for models L and M of \citet{Elbakyan+20}, to approximately 5, for the model proposed by \citet{Appelgren+20}. It should be noted that \cite{Mousis+19} found an even higher metallicity peak in the range 10--20. The peak in metallicity is followed by a sudden drop to a value close to the initial metallicity in the models proposed by \citet{Elbakyan+20} and to zero in the remaining models; however, in reality, the final metallicity value should lie somewhere in between. This is because we know from observations that protoplanetary disks are never completely cleared of dust \citep[e.g.,][]{Dullemond+Dominik05}, indicating that the perfect depletion obtained by \cite{Garaud07} and \cite{Appelgren+20} is due to the simplifications adopted in their models to account for the grain size distribution. Conversely, these models do not consider planetesimal and planet formation processes, which can remove solids from the system and thus lead to an additional decrease in the metallicity of the accreted gas. In addition, we note that fully formed planets can efficiently filter out pebbles and prevent them from reaching the protostar \citep{Guillot+14}.
For the purpose of this study, it is also important to estimate how the metallicity of the accreted gas varies as a function of the mass of the protostar. The disk models above neglect the change in the mass of the protostar owing to disk accretion. Therefore, we estimated the mass of the protostar as a function of time, as follows:
\begin{equation}
M_{\star}(t)=M_{\rm final}+M_{XY,\mathrm{lost}}-{M}_{\rm g}(t),
\end{equation}
where $t$ is time, $M_{\rm g}(t)$ is the gas mass in the disk, $M_{\rm final}$ is the final mass of the star (note that $ M_{\rm final}=1\, M_\sun $ for the Sun), and $M_{XY,\mathrm{lost}}$ is the mass ejected from the disk \citep[e.g., via photoevaporation or magnetohydrodynamic (MHD) winds; see][]{Hollenbach+00, Suzuki+Inutsuka09,Alexander+14}.
We stop the calculation of $M_{\star}$ at a time $t_{\rm final}$ when $M_{\rm g}(t=t_{\rm final})=M_{XY,\mathrm{lost}}$.
Following \citet{Guillot+Hueso06}, we assumed that predominantly hydrogen and helium were lost from the disk \citep[also, see][]{Gorti+15,Miyake+16}. We note that photoevaporation is more effective than MHD disk winds in the selective removal of hydrogen and helium during the later stages of accretion \citep{Kunitomo+20}. Although the total mass of the disk lost via winds is not well determined, we expect $M_{XY,\mathrm{lost}}\la0.05\, M_\sun $.
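As an illustration of this bookkeeping, the following standalone Python sketch (with a hypothetical, exponentially decaying gas-mass history standing in for an actual disk-evolution model; all numbers are illustrative) recovers $M_{\star}(t)$ and the stopping time $t_{\rm final}$:

```python
# Illustrative sketch (not the authors' code): the protostellar mass is
# recovered from a disk gas-mass history M_g(t) via
#     M_star(t) = M_final + M_XY_lost - M_g(t),
# with the calculation stopped at t_final where M_g(t_final) = M_XY_lost.
# The exponentially decaying M_g(t) is a hypothetical stand-in for the
# output of a disk-evolution model.

import math

M_FINAL = 1.0      # final stellar mass [Msun]
M_XY_LOST = 0.05   # H/He mass ejected from the disk [Msun], upper estimate
M_G0 = 0.5         # initial disk gas mass [Msun] (illustrative)
TAU = 2.0          # disk dispersal timescale [Myr] (illustrative)

def gas_mass(t_myr):
    """Toy disk gas mass [Msun] decaying on timescale TAU."""
    return M_G0 * math.exp(-t_myr / TAU)

def protostar_mass(t_myr):
    """M_star(t) = M_final + M_XY_lost - M_g(t) [Msun]."""
    return M_FINAL + M_XY_LOST - gas_mass(t_myr)

# Accretion ends when the remaining gas equals the mass lost to winds:
t_final = TAU * math.log(M_G0 / M_XY_LOST)   # M_g(t_final) = M_XY_lost
```

By construction, $M_{\star}(t_{\rm final})=M_{\rm final}$, since the remaining disk gas at that time is exactly the mass removed by winds.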
The right panel of Fig.~\ref{fig:Eacc} shows that the peak in the metallicity of the accreted gas occurs after approximately 90\% of the mass has been accreted onto the protostar. For most of the models, the peak occurs at a high value of the protostellar mass, namely, between 95\% and 98\%. For the disk model proposed by \cite{Appelgren+20} with no mass loss, a pebble wave occurs when the mass of the protostar is approximately 85\% of the final mass of the star. This is because in their model the disk extends beyond 1000\,au. However, such a large disk is likely to be photoevaporated rapidly due to external radiation \citep[e.g.,][]{Guillot+Hueso06,Anderson+13}. Thus, for our simulations of the Sun, we considered a situation in which the metallicity starts to increase after $0.85\, M_\sun $ of the gas has been accreted, peaks between $0.9\, M_\sun $ and $0.98\, M_\sun $, and finally decreases to a very low value.
\subsection{Internal structure of the pre-main-sequence Sun}
\label{sec:interior}
\begin{figure*}[!tb]
\sidecaption
\includegraphics[width=12cm,keepaspectratio]{t-Mrad-kippenhahn_v4.pdf}
\caption{Early evolution of the solar interior. The total mass of the accreting proto-Sun is indicated by the dashed line. The surface CZ is depicted as a cloudy region \citep[see][]{Kippenhahn+Weigert90} delimited by a radiative zone that grows outward from the center of the Sun. Three evolution models are shown corresponding to different values of the accretion efficiency parameter $\xi$: a classical evolution model with $\xi=0.5$ (red line), a cold accretion model with $\xi=0.1$ (green line), corresponding to the lower limit of the $\xi$ values inferred from observations of young clusters \citep{Kunitomo+17}, and a model with $\xi=0$ (gray line) corresponding to a theoretical proto-Sun formed in the absence of any accretion heat. The right-hand $y$-axis provides, for the present-day Sun (4.567\,Gyr old), the radius corresponding to the mass coordinate on the left-hand $y$-axis. The bottom panel highlights the key physical processes that occur during the growth of the proto-Sun: the presence of a circumstellar gas disk (during the first million years); an increase in the metallicity $\Zacc$ of the accreted gas due to a pebble wave; the concomitant formation of planetesimals and planets; and a sudden decrease in $\Zacc$.
}
\label{fig:t-MCZ}
\end{figure*}
One may expect the metallicity of the accreted gas to have a significant impact on the composition of the Sun's atmosphere \citep[see, e.g.,][]{Chambers10}. However, this is not the case because of the internal structure of the proto-Sun. Studies of the evolution of the Sun that are in agreement with the evolution of stars of similar mass in young clusters \citep{Kunitomo+17} indicate that the Sun must have been almost fully convective for the first 1 to 2\,Myr. The CZ then recedes slowly as the Sun evolves toward the main sequence (MS; which takes approximately 40\,Myr), eventually containing only 2.5\% of the total mass of the present Sun.
Figure~\ref{fig:t-MCZ} shows the first 30\,Myr of the evolution of the solar CZ for three models adopted from \cite{Kunitomo+18}: a standard accretion model with $\xi=0.5$, which corresponds to the maximum accretion efficiency; a limiting model with $\xi=0.1$, which is in agreement with the observational data from young clusters but corresponds to an accretion efficiency of only 10\%; and a cold accretion model with $\xi=0$ for reference \citep[see][for a complete discussion]{Kunitomo+17}.
The $\xi$ value corresponds to the ratio of the accretion heat injected into the protostar to the liberated gravitational energy of accreting materials \citep{Kunitomo+17}.
In the present work, we adopted models equivalent to the standard model with $\xi=0.5$ (see Sect.\,\ref{sec:Mdot} for details).
The lifetime of protoplanetary disks is still uncertain but is thought to be less than 10\,Myr \citep[see, e.g.,][]{Haisch+01, Kennedy+Kenyon09, Fedele+10, Hartmann+16}. In fact, the magnetism observed in meteorites indicates that the lifetime of the protoplanetary disk in the Solar System was $\simeq4$\,Myr \citep[see, e.g.,][]{Wang+17,Weiss+21}. Thus, during the protoplanetary disk phase, the interior of the Sun was largely convective, with the CZ encompassing at least 50\% of the Sun's mass, whereas the typical disk mass is $\sim 0.01\, M_\sun $ \citep[see, e.g.,][]{Williams+Cieza11}. This implies that any anomalies in the solar composition due to the accretion of the protoplanetary disk gas with the pebble wave or with the depletion of heavy elements by the formation of planetesimals and planets were likely to be suppressed by the large CZ.
Thus, we can expect two consequences from the above discussion: (1) the signature left by both the high-$Z$ and low-$Z$ accretion is suppressed by roughly one to two orders of magnitude and (2) convective mixing leads to a uniform composition in a large fraction of the solar interior. As seen in Fig.~\ref{fig:t-MCZ}, a mass coordinate of $0.5\, M_\sun $ corresponds to approximately 27\% of the present-day solar radius, which indicates that the signatures of the planet formation processes are buried deep in the solar interior (mostly in its nuclear burning core).
\section{Computation method}
\label{sec:method}
In this section, we describe how we (i) simulate stellar evolution, (ii) compare our results with observations, and (iii) minimize the total $\chi^2$ value by changing the input parameters.
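Step (iii) can be sketched as follows (a standalone illustration, not our production pipeline: \texttt{run\_model} is a hypothetical analytic stand-in for a full stellar evolution calculation, and the observables, targets, uncertainties, and parameter sensitivities are all illustrative):

```python
# Schematic of the optimization loop: each trial evaluates a solar model and
# returns a total chi-squared against the observational constraints; a
# downhill-simplex (Nelder-Mead) search then updates the input parameters.
# run_model() is a hypothetical analytic stand-in for a stellar evolution
# run; all numbers below are illustrative, not the values used in this work.

from scipy.optimize import minimize

# (target, sigma) pairs for mock present-day observables
CONSTRAINTS = {"L": (1.0, 4e-4), "Teff": (5772.0, 65.0),
               "ZX_surf": (0.0181, 9e-4), "R_CZ": (0.713, 0.003)}

def run_model(params):
    """Mock observables as linear functions of the input parameters."""
    alpha_mlt, f_ov, z_proto = params
    return {"L": 0.92 + 0.04 * alpha_mlt,
            "Teff": 5640.0 + 66.0 * alpha_mlt,
            "ZX_surf": z_proto / 0.70,
            "R_CZ": 0.726 - 0.5 * f_ov}

def total_chi2(params):
    """Sum of squared, sigma-weighted residuals over all constraints."""
    model = run_model(params)
    return sum(((model[k] - target) / sigma) ** 2
               for k, (target, sigma) in CONSTRAINTS.items())

result = minimize(total_chi2, x0=[1.8, 0.01, 0.018], method="Nelder-Mead",
                  options={"xatol": 1e-10, "fatol": 1e-12,
                           "maxiter": 10000, "maxfev": 20000})
```

The simplex method is derivative-free, which is why it is well suited here: each objective evaluation is an entire stellar evolution run, and no analytic gradients are available.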
\subsection{Stellar evolution with accretion}
\label{sec:stellarevol}
We used the one-dimensional stellar evolution code \texttt{MESA} version 12115 \citep{Paxton+11,Paxton+13,Paxton+15,Paxton+18,Paxton+19}.
For details of the computational method used in this work, we refer the readers to \citet{Kunitomo+17}, \citet{Kunitomo+18}, and the series of papers by Paxton et al.
Below we briefly summarize the method and the various parameters used.
\subsubsection{Initial conditions}
\label{sec:init}
Stars are formed via the gravitational collapse of a molecular cloud core. A protostar (or second hydrostatic core) forms after the formation of a transient hydrostatic object \citep[the so-called first core; see][]{Larson69,Inutsuka12}. Radiation hydrodynamic simulations have suggested that the initial mass of a protostar is typically $\sim0.003\, M_\sun $ \citep{Masunaga+Inutsuka00,Vaytet+Haugbolle17}.
In this study, we used a stellar seed of mass $0.1\, M_\sun $, radius $4\, R_\sun $, and metallicity $Z=0.02$, to avoid numerical convergence issues at very low protostellar masses in the new version of the \texttt{MESA} code.
Recent studies have shown that the thermal evolution of the protostar depends on the entropy $s\sub{acc}$ of the accreted material \citep[e.g.,][]{Hartmann+97,BCG09,Hosokawa+11,Tognelli+15,Kunitomo+17,Kuffmeier+18}.
The initial seed corresponds to a protostar of mass $0.1\, M_\sun $ that has evolved via hot accretion from its birth mass of $\sim0.003\, M_\sun $\footnote{
We note that we chose a hot stellar seed because the accretion rate and $s\sub{acc}$ are expected to be high during the early stages of protostellar evolution \citep{Machida+10,Hosokawa+11,BVC12,Tomida+13}.
According to \citet[][see their Fig.\,D.1]{Kunitomo+17}, in hot accretion models, the stellar radius converges before the protostar reaches a mass of $0.1\, M_\sun $.
}.
For the non-accreting cases (see Sect.\,\ref{sec:noacc}), we used an initial stellar seed of mass $1\, M_\sun $ and a central temperature of $3\times10^5$\,K (i.e., corresponding to the top of the Hayashi track).
The seed has a uniform composition. The mass fractions of hydrogen, helium, and metals of the seed are $X\sub{proto}$, $Y\sub{proto}$, and $Z\sub{proto}$, respectively.
We note that the Kelvin-Helmholtz timescale at the top of the Hayashi track is short \citep[see, e.g.,][]{Stahler+Palla04}, and hence, the pre-MS evolution without accretion is not sensitive to the choice of the initial central temperature.
\subsubsection{Mass accretion}
\label{sec:Mdot}
We adopted the following mass accretion rate $\dot{M}\sub{acc}$ in the protostellar and pre-MS phases\footnote{In this study, we refer to the main accretion phase (i.e., the class 0/I phase) as the protostellar phase, which lasts until the protostellar mass becomes equal to the final mass of the star (i.e., $1\, M_\sun $). A pre-MS star, in contrast, has $M_{\star}\simeq 1\, M_\sun $ but has not yet reached the zero-age MS.
}:
\begin{align}
\dot{M}\sub{acc} =
\begin{cases}
10^{-5}\, M_\sun /{\rm{yr}} & {\rm{for}}\,\, t \leq t_1\,, \\
10^{-5}\, M_\sun /{\rm{yr}}\,\times (t/t_1)^{-1.5} & {\rm{for}}\,\, t>t_1\,
\end{cases}
\label{eq:Mdot}
\end{align}
\citep[see][for details about the exponent $-1.5$]{Hartmann+98}.
In our fiducial model, we set $t_1=31,160\,$yr so that $M_{\star}$ could reach $1\, M_\sun $ at the end of the accretion phase, that is, at $t\sub{acc}=10$\,Myr. The evolution of $M_{\star}$ and $\dot{M}\sub{acc}$ in our fiducial model is shown in Fig.\,\ref{fig:Mdot}.
In some of our simulations, we changed the accretion timescale $t\sub{acc}$ and $t_1$, but we always used the same exponent (i.e., $-1.5$) for $t>t_1$. As discussed in Sect.\,\ref{sec:interior}, accretion generally ends within several million years; therefore, $t\sub{acc}=10\,$Myr corresponds to the upper limit of the observational constraint.
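As a consistency check, Eq.\,\eqref{eq:Mdot} can be integrated in closed form; the short standalone sketch below (in Python, for illustration) verifies that $t_1=31,160\,$yr brings a $0.1\, M_\sun $ stellar seed to $1\, M_\sun $ at $t\sub{acc}=10$\,Myr:

```python
# Sanity check of the piecewise accretion law: dM/dt is constant up to t1
# and decays as (t/t1)^(-1.5) afterward.  Integrating analytically for
# t > t1 gives M(t) = M_seed + Mdot0*t1*[3 - 2*sqrt(t1/t)].

M_SEED = 0.1     # initial stellar seed mass [Msun] (Sect. on init. cond.)
MDOT0 = 1.0e-5   # plateau accretion rate [Msun/yr]
T1 = 31160.0     # turnover time [yr], fiducial value
T_ACC = 1.0e7    # end of the accretion phase [yr]

def mdot_acc(t):
    """Accretion rate [Msun/yr]: constant plateau, then t^-1.5 decline."""
    return MDOT0 if t <= T1 else MDOT0 * (t / T1) ** -1.5

def stellar_mass(t):
    """Time-integrated stellar mass [Msun] (analytic for t > t1)."""
    if t <= T1:
        return M_SEED + MDOT0 * t
    return M_SEED + MDOT0 * T1 * (3.0 - 2.0 * (T1 / t) ** 0.5)
```

Evaluating \texttt{stellar\_mass(1.0e7)} returns $\simeq 1.00\, M_\sun $, confirming the fiducial choice of $t_1$.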
We note that previous studies, such as \citet[][]{Serenelli+11} and \citet{Zhang+19}, considered accretion only in the pre-MS phase,
whereas, in this study, we considered accretion in the protostellar phase, which led to a larger mass of the accreted material.
Moreover, \citet[][see their section\,3.2.3]{Serenelli+11} and \citet[][see their section 2.4]{Zhang+19} introduced accretion in their models after a certain time\footnote{
\citet[][]{Serenelli+11} varied this time $\tau\sub{ac,i}$ between 5 and 30\,Myr, while \citet{Zhang+19} fixed it to 2\,Myr.} to allow the pre-MS Sun to develop a radiative core.
These studies needed a non-accreting phase because they adopted arbitrary initial conditions, whereas our model is based on the current understanding of star formation.
We also note that the non-accreting timescale in some of the models of \citet{Serenelli+11} exceeded the typical disk lifetime (i.e., several million years).
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\hsize,keepaspectratio]{t-M.pdf}
\caption{\small{
Evolution of stellar mass $M_{\star}$ (red solid line) and accretion rate $\dot{M}\sub{acc}$ (green dashed line) with time in the fiducial case.
The gray vertical lines indicate $t_1$ and $t\sub{acc}$ in the fiducial case.
The gray dotted lines indicate $M_{\star}$ and $\dot{M}\sub{acc}$ for $t\sub{acc}=3$ and 20\,Myr.
}}
\label{fig:Mdot}
\end{center}
\end{figure}
In the present work, we did not consider mass loss.
Vigorous stellar winds are known to flow out from young stars \citep{Wood+05,Suzuki+13}; moreover, \citet{Zhang+19} suggested that mass loss has the potential to alter the surface composition of the star. Although in this study we focused only on the effects of planet formation and opacity enhancement on the stellar composition, the effect of mass loss should be considered in future studies.
We used the same accretion model characterized by the parameter $s\sub{acc}$ as in \citet{Kunitomo+17,Kunitomo+18}, who assumed that the accretion heat injected into the protostar $L\sub{add}$ is a fraction of the gravitational energy liberated by accretion.
Consequently, we assumed that $L\sub{add}=\xi GM_{\star}\dot{M}\sub{acc} /R_{\star}$, where $\xi$ is a dimensionless parameter, $G$ is the gravitational constant, and $R_{\star}$ is the stellar radius. The accretion heat is assumed to be uniformly distributed throughout the star \citep[see related discussion in][]{Kunitomo+17}.
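For reference, the injected heat is straightforward to evaluate; the following helper (a sketch in cgs units, with illustrative inputs) implements $L\sub{add}=\xi GM_{\star}\dot{M}\sub{acc}/R_{\star}$:

```python
# One-line helper for the injected accretion heat
#     L_add = xi * G * M_star * Mdot_acc / R_star,
# evaluated in cgs units.  A sketch for illustration only; the inputs in
# any call below are illustrative, not values from the optimized models.

G_CGS = 6.674e-8   # gravitational constant [cm^3 g^-1 s^-2]
MSUN = 1.989e33    # solar mass [g]
RSUN = 6.957e10    # solar radius [cm]
YR = 3.156e7       # year [s]

def l_add(xi, m_star_msun, mdot_msun_yr, r_star_rsun):
    """Accretion heat injected into the protostar [erg/s]."""
    return (xi * G_CGS * (m_star_msun * MSUN)
            * (mdot_msun_yr * MSUN / YR)) / (r_star_rsun * RSUN)
```

For example, $\xi=0.5$, $M_{\star}=1\, M_\sun $, $\dot{M}\sub{acc}=10^{-5}\, M_\sun $/yr, and $R_{\star}=4\, R_\sun $ give $L\sub{add}\simeq1.5\times10^{35}$\,erg\,s$^{-1}$, i.e., several tens of solar luminosities, illustrating why the choice of $\xi$ matters for the early thermal evolution.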
Although we set $\xi=0.1$ \citep[see][]{Kunitomo+17} in our accretion models, we chose a high-entropy initial seed (see Sect.\,\ref{sec:init}). This implies that our thermal evolution calculations are equivalent to those of standard accretion models (\citealt{SST80I}; $\xi=0.5$ models in \citealt{Kunitomo+17}).
Thus, the evolution of the CZ in our models corresponds to the $\xi=0.5$ line shown in Fig.\,\ref{fig:t-MCZ} (also, see Appendix\,\ref{app:non-accreting}).
In this context, we can expect the calculations with $\xi=0.1$ and a low-entropy initial seed to capture a slightly larger effect due to planet formation processes. We leave this for future work.
\subsubsection{Convection}
\label{sec:conv}
We adopted the mixing-length theory for convection proposed by \citet{Cox+Giuli68}.
We assumed the ratio of the mixing length to the local pressure scale height ($\Hp$) to be $\amlt$.
We also considered the effect of the composition gradient on the convective stability (i.e., the Ledoux criterion).
Previous studies have suggested that additional mixing (e.g., convective overshooting and tachocline circulation), especially at the base of the CZ, plays an important role in solar sound-speed anomalies \citep[see, e.g.,][]{Christensen-Dalsgaard+18, Buldgen+19, Zhang+19}.
However, these processes are poorly understood and are not well constrained.
\citet{Christensen-Dalsgaard+18} showed that extending the Sun's CZ or adding extra mixing in the tachocline region leads to a better agreement between theoretical models and helioseismic constraints.
\citet{Zhang+19} included convective overshoot in their models by adopting an exponentially decreasing diffusion coefficient and considering a change in the luminosity due to kinetic energy transfer in the overshooting region. They showed that the radial location $\RCZ$ of the base of the CZ is sensitive to the underlying overshooting model.
In this study, we considered the conventional convective overshooting model proposed by \citet{Herwig00}, in which the diffusion coefficient decreases exponentially from the convective--radiative boundary.
We assumed the $e$-folding length of the diffusion coefficient to be $\fov\Hp$.
In addition, we adopted the same overshooting parameters below and above the CZ, irrespective of the presence of nuclear burning.
Although we also considered semiconvection in our models, we confirmed that it has little effect on the results.
In this study, $\amlt$ and $\fov$ are the two free parameters. Based on the results of \citet{Zhang+19} and given the fact that a detailed exploration of convective overshooting models is beyond the scope of the present work, we slightly relaxed the constraint on $\RCZ$ when seeking the best solutions (see Sect.\,\ref{sec:obs}).
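The overshooting prescription described above reduces to a simple profile; the sketch below (with illustrative values of $D_0$, $\fov$, and $\Hp$) implements a diffusion coefficient that decays exponentially from the convective--radiative boundary with $e$-folding length $\fov\Hp$:

```python
# Sketch of the exponential-overshoot diffusion coefficient used in this
# work: D decays from its value D0 at the convective boundary with
# e-folding length f_ov * H_p.  Values passed in any call are illustrative.

import math

def d_overshoot(z_cm, d0, f_ov, h_p_cm):
    """Diffusion coefficient [cm^2/s] at distance z from the boundary."""
    if f_ov == 0.0:
        return 0.0  # no overshooting
    return d0 * math.exp(-z_cm / (f_ov * h_p_cm))
```

By construction, $D$ drops to $D_0/e$ at a distance $\fov\Hp$ from the boundary, so $\fov$ directly controls how far mixing penetrates into the radiative zone.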
\subsubsection{Abundance tables}
\label{sec:abund}
We used the abundance table presented in \citetalias{Asplund+09} unless otherwise mentioned.
We note that recent studies have suggested some modifications to the tables in \citetalias{Asplund+09}\footnote{
\citet[][]{Steffen+15}, \citet{Young18}, and \citet{Lodders19} suggested modifications to the O, Ne, Th, and U abundances.
In addition, \citet{Caffau+11} and recently \citet{Asplund+21} suggested different abundance tables.
Although some studies \citep[e.g.,][]{Serenelli+09,Vinyoles+17} adopted the abundances of refractory elements from CI chondrites, we used the original table of solar photospheric abundances presented in \citetalias{Asplund+09}.
}.
However, these modifications are relatively minor considering the goals of the present study and are therefore not considered here \cite[see][for the effect of modified solar abundances]{Buldgen+19}.
We also performed simulations using the abundance table presented in \citetalias{GS98} for comparison with previous studies. We adopted the table in which the abundances of refractory elements were modified using meteorites.
\subsubsection{Opacity}
\label{sec:kap}
In stellar evolution calculations, the Rosseland-mean opacity $\kappa$ is determined by interpolating the opacities listed in standard opacity tables using local thermodynamic and compositional quantities, such that $\kappa=\kappa(\rho, T, X, Z)$, where $\rho$ is the density, $T$ is the temperature, $X$ is the hydrogen mass fraction, and $Z$ is the metallicity.
We adopted the OPAL opacity table \citep{Iglesias+Rogers96} for the cases using the \citetalias{Asplund+09} composition and the OP opacity table \citep{Seaton05} for the cases using the \citetalias{GS98} composition. The opacity table presented in \citet{Ferguson+05} was used to determine the opacities in the low-temperature regions for both cases.
We refer the readers to \citet{Paxton+11} for more details.
We confirmed that the simulations with the OP table yielded very similar results to those with the OPAL table for the \citetalias{Asplund+09} composition \citep[][also, see Appendix\,\ref{app:non-accreting}]{Buldgen+19}.
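In practice, this lookup amounts to multidimensional interpolation of tabulated values. The minimal sketch below illustrates the idea with a hypothetical smooth mock table (not the OPAL/OP data) and interpolation only in $T$ and $\rho$ at fixed composition:

```python
# Minimal sketch of table-based opacity lookup: log10(kappa) is
# interpolated linearly in (log10 T, log10 rho) at fixed composition,
# the usual practice in stellar evolution codes.  The table below is a
# hypothetical smooth stand-in, NOT real OPAL/OP data.

import numpy as np
from scipy.interpolate import RegularGridInterpolator

log_t = np.linspace(3.5, 8.0, 46)      # grid in log10(T/K)
log_rho = np.linspace(-10.0, 3.0, 66)  # grid in log10(rho / g cm^-3)
tt, rr = np.meshgrid(log_t, log_rho, indexing="ij")
# Mock log10(kappa): increases with density, peaks near log T = 6
log_kappa_table = 0.5 * rr - 0.35 * (tt - 6.0) ** 2

interp = RegularGridInterpolator((log_t, log_rho), log_kappa_table)

def kappa(temperature, rho):
    """Rosseland-mean opacity [cm^2/g] from the mock table."""
    return 10.0 ** interp([[np.log10(temperature), np.log10(rho)]])[0]
```

A production code interpolates in two further dimensions ($X$ and $Z$) and blends several tables across temperature regimes, but the principle is the same.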
As mentioned in Sect.\,\ref{sec:intro}, we aim to investigate the effect of opacity enhancement.
We modeled the opacity enhancement using a dimensionless factor $\fopa$, such that
\begin{align}\label{eq:kap}
\kappa' &= \kappa (1 + \fopa)\,.
\end{align}
Below, we describe our $\fopa$ model.
Laboratory experiments have not been able to reproduce the real conditions at the base of the solar CZ, which has led to uncertainties in the opacity.
\citet{Bailey+15} performed experiments under conditions close to those in the real Sun and showed that the wavelength-dependent opacity of iron was 30\%--400\% higher than previously thought, implying a $7\pm3$\% increase in the Rosseland-mean opacity \citep[see also][]{Nagayama+19}.
\citet[][see their Figure 2]{LePennec+15} obtained the contribution of each element to the opacity. Iron was shown to have three peaks at $\log (T/{\rm K})=5.66, 6.45$, and 7.18\footnote{In this study, $\log\equiv \log_{10}$.
}. We fitted the contribution of iron to the opacity with the sum of three Gaussian functions of $T$, and we used the same functional form to model $\fopa$, considering the possibility that further uncertainties remain in the iron opacity and that the iron abundance may be inhomogeneous in the solar interior.
Thus, we modeled the opacity increase as
\begin{align}\label{eq:delkap}
\fopa = \sum_{i=1}^3 A_i\,\exp \left[ -\frac{ \left( \log (T/{\rm K}) -b_i\right)^2 }{2c_i^2} \right]\,,
\end{align}
where $A_i$ is a free parameter, and $b_i$ and $c_i$ were derived by fitting Figure\,2 of \citet{LePennec+15}, leading to $(b_1, b_2, b_3)=(5.66, 6.45, 7.18)$ and
$(c_1, c_2, c_3)=(0.22, 0.18, 0.25)$.
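Equations \eqref{eq:kap} and \eqref{eq:delkap} translate directly into code; the following standalone sketch evaluates the enhancement for a given set of amplitudes $A_i$:

```python
# Illustrative implementation (a sketch, not the production code) of the
# opacity-enhancement factor: a sum of three Gaussians in log10(T/K)
# centered on the iron opacity peaks, applied as kappa' = kappa*(1 + f_opa).

import math

B = (5.66, 6.45, 7.18)   # Gaussian centers b_i in log10(T/K)
C = (0.22, 0.18, 0.25)   # Gaussian widths c_i

def f_opa(log_t, amps):
    """Fractional opacity increase at log10(T/K) = log_t, amplitudes A_i."""
    return sum(a * math.exp(-(log_t - b) ** 2 / (2.0 * c ** 2))
               for a, b, c in zip(amps, B, C))

def kappa_enhanced(kappa, log_t, amps):
    """kappa' = kappa * (1 + f_opa)."""
    return kappa * (1.0 + f_opa(log_t, amps))
```

For instance, with $A_1=A_3=0$ and $A_2=0.12$ (the K$_2$ setting), the enhancement peaks at 12\% at $\log(T/{\rm K})=6.45$, near the base of the solar CZ, and falls off smoothly on either side.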
We note that \citet{Villante10} suggested another $\fopa$ model, one that increases linearly with $T$. However, our opacity-increase model is based on actual data of the contribution to the opacity, whereas the model in \citet{Villante10} is more ad hoc in nature.
Although in this study we only present the results using Eq.\,\eqref{eq:delkap}, we confirmed that both opacity-increase formalisms improve the solar evolution model in a similar manner.
\subsubsection{Composition of accreted material}
\label{sec:Zacc}
For all the models with protostellar accretion, we fixed the mass fraction of deuterium to $X\sub{D}=28\,$ppm \citep[see][and references therein]{Kunitomo+17} and that of $\element[][3]{He}$ also to 28\,ppm \citep{Mahaffy+98}.
In the non-accreting cases, we set $X\sub{D}=0$, assuming that the deuterium was completely depleted in the protostellar phase.
For the models with accretion, we considered three models to account for the composition of the accreted material (i.e., $\element[][1]{H}$, $\element[][4]{He}$, and metals): homogeneous accretion, metal-poor accretion, and helium-poor accretion (see Table\,\ref{tab:chi2}).
In the first model, the helium mass fraction $\Yacc$ and the metallicity $\Zacc$ of the accreted material are constant in time; therefore, $\Yacc=\Yaccini$ and $\Zacc=\Zaccini$, where the subscript ``proto'' indicates the initial value.
\begin{table*}[!ht]
\begin{center}
\caption{Parameter settings of the chi-squared simulations.}
\label{tab:chi2}
\begin{tabular}{llllllll}
\hline
\hline
\noalign{\smallskip}
\# & Model name & Opacity$^a$ & $A_2$ & Abundance & $t\sub{acc}$$^b$ & Composition & $\fov$ \\
& & & & table & [Myr] & of accreted & \\
& & & & & & material$^c$ & \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{7}{l}{ [\textit{Optimized non-accreting models}] } &\\
\noalign{\smallskip}
1 & noacc & S & -- & \citetalias{Asplund+09} & -- & -- & variable\\
2 & noacc-GS98 & S & -- & \citetalias{GS98} & -- & --& variable \\
3 & noacc-noov & S & -- & \citetalias{Asplund+09} & -- & -- & 0 \\
4 & noacc-GS98-noov & S & -- & \citetalias{GS98} & -- & -- & 0 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{7}{l}{ [\textit{Optimization with time-dependent $\Yacc$ (He-poor accretion)}] } &\\
\noalign{\smallskip}
5 & He12Myr & S & -- & \citetalias{Asplund+09} & 12 & Y & variable\\
6 & He20Myr & S & -- & \citetalias{Asplund+09} & 20 & Y & variable \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{7}{l}{ [\textit{Optimization with time-dependent $\Zacc$ (pebble wave and planet formation)}] } &\\
\noalign{\smallskip}
7 & \FULL & S & -- & \citetalias{Asplund+09} & 10 & Z & variable \\
8 & \FULLnoov & S & -- & \citetalias{Asplund+09} & 10 & Z & 0 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{7}{l}{ [\textit{Optimization with opacity enhancement and homogeneous $\Zacc$ and $\Yacc$ }] } &\\
\noalign{\smallskip}
9 & \kapb & K$_2$ & [0, 0.22] & \citetalias{Asplund+09} & 10 & H & variable\\
10 & \kapa & K$_{23}$ & [0, 0.22] & \citetalias{Asplund+09} & 10 & H & variable \\
11 & \kapc & K$_{23}'$ & 0.10 & \citetalias{Asplund+09} & 10 & H & variable \\
12 & K2$'$ & K$_2$ & 0.12/0.15/0.18 & \citetalias{Asplund+09} & 3/5/10 & H & 0/0.01/0.025 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\multicolumn{7}{l}{ [\textit{Optimization with opacity enhancement and time-dependent $\Zacc$}] } &\\
\noalign{\smallskip}
13 & \kappla & K$_{2}$ & 0.12/0.15/0.18 & \citetalias{Asplund+09} & 10 & Z & variable \\
14 & \kapplb & K$_{23}$ & 0.12/0.15/0.18 & \citetalias{Asplund+09} & 10 & Z & variable \\
15 & K2-MZ & K$_2$ & 0.12 & \citetalias{Asplund+09} & 5 & Z$'$ & 0.01 \\
16 & K2-MZ1$'$/MZ8$'$ & K$_2$ & 0.12/0.15/0.18 & \citetalias{Asplund+09} & 3/5/10 & Z$'$ & 0/0.01/0.025 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\end{center}
\tablefoot{
($^a$) S: Standard (i.e., no modification to the opacity).
K$_2$: Enhanced opacity ($\kappa$) using Gaussian functions with $A_1=A_3=0$ and $A_2$ fixed to a value between 0 and 0.22 (see Sect.\,\ref{sec:kap}).
K$_{23}$: Same as K$_{2}$ but $A_3$ varies in the simplex method.
K$_{23}'$: $A_2=0.10$ (fixed) and $A_3$ is fixed to either $0.05$ or $-0.05$.
($^b$) Accretion timescale (see Sect.\,\ref{sec:Mdot}).
($^c$) H: homogeneous composition with time (i.e., $\Yacc=\Yaccini$ and $\Zacc=\Zaccini$).
Y: He-poor accretion.
Z: Time-dependent $\Zacc$ (see Sect.\,\ref{sec:Zacc}).
Z$'$: $M_1$, $M_2$, and $\Delta \Zacc$ are fixed (see Table\,\ref{tab:MZ}).
}
\end{table*}
In the metal-poor accretion model, $\Zacc$ is time-dependent because of the underlying planet formation processes.
We adopted a simple model for $\Zacc$ to capture the following two processes: pebble drift and planetesimal formation (see Sect.\,\ref{sec:planets}).
Figure\,\ref{fig:Zacc} shows the evolution model of $\Zacc$ used in this work.
In the early phase (Phase I; $M_{\star} \leq M_1$), $\Zacc=\Zaccini$ is constant.
Once the Stokes number of dust grains increases and the grains start to migrate inward, $\Zacc$ increases monotonically with time. We refer to this phase as Phase II or the pebble-accretion phase.
Planetesimals begin to form when $M_{\star}=M_2$. Subsequently, in Phase III (i.e., the metal-poor accretion phase), drifting dust grains are captured by the planetesimals and are not allowed to reach the proto-Sun, causing $\Zacc$ to abruptly decrease to 0.
When $M_{\star}$ reaches $1\, M_\sun $, the accretion stops abruptly (i.e., $\dot{M}\sub{acc}=0$). After the accretion phase, the proto-Sun evolves to become the Sun (Phase IV).
When $dZ\equiv\Zacc-\Zaccini\ne 0$ (i.e., $M_{\star}>M_1$),
we assumed that $\Xacc=\Xaccini -0.7dZ$ and $\Yacc=\Yaccini -0.3dZ$, where $\Xacc$ is the hydrogen mass fraction of accreted materials and $\Xaccini$ is the initial value of $\Xacc$.
We note that because $X\sub{D}$ and the $\element[][3]{He}$ abundance were kept constant, only the $\element[][1]{H}$ and $\element[][4]{He}$ abundances changed with time.
We point out that our assumption of $\Zacc=0$ in Phase III is a major simplification. In reality, grains are not completely filtered by planets and planetesimals: some of the grains avoid capture by the planetesimals and reach the proto-Sun, leading to a small but nonzero $\Zacc$, which corresponds to the green dotted line in Fig.\,\ref{fig:Zacc} (see also Sect.\,\ref{sec:planets}). However, because the mass of the solar CZ is large during the accretion phase (see Fig.\,\ref{fig:t-MCZ}), the difference between these two models has little impact on the results.
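The accreted-composition model described above can be summarized by the following standalone sketch (the phase boundaries $M_1$ and $M_2$, the peak enhancement, and the linear Phase-II rise are illustrative choices, not the optimized values):

```python
# Schematic of the metal-poor (time-dependent Z_acc) accretion model:
#   Phase I   (M <= M1):      Z_acc = Z_proto
#   Phase II  (M1 < M <= M2): Z_acc rises (pebble wave; linear rise assumed
#                             here purely for illustration)
#   Phase III (M > M2):       Z_acc = 0 (planetesimals filter the grains)
# Hydrogen and helium absorb the change dZ in a 0.7/0.3 ratio.
# All numbers below are illustrative.

X0, Y0, Z0 = 0.70, 0.28, 0.02   # initial mass fractions (illustrative)
M1, M2 = 0.90, 0.98             # phase boundaries [Msun] (illustrative)
DZ_PEAK = 0.02                  # peak enhancement of Z_acc (illustrative)

def accreted_composition(m_star):
    """Return (X_acc, Y_acc, Z_acc) for protostellar mass m_star [Msun]."""
    if m_star <= M1:                                  # Phase I
        z = Z0
    elif m_star <= M2:                                # Phase II: pebble wave
        z = Z0 + DZ_PEAK * (m_star - M1) / (M2 - M1)
    else:                                             # Phase III: metal-poor
        z = 0.0
    dz = z - Z0
    return X0 - 0.7 * dz, Y0 - 0.3 * dz, z
```

Note that the 0.7/0.3 split guarantees $\Xacc+\Yacc+\Zacc=1$ in every phase.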
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\hsize,keepaspectratio]{schematic3.pdf}
\caption{\small{
Sketch of the evolution of the metallicity $\Zacc$ of the accreted material with stellar mass $M_{\star}$.
In this work, we used the model depicted by the blue solid line and assumed that the difference between this model and the one depicted by the green dotted line has little impact on the internal structure of the Sun (see text for details).
}}
\label{fig:Zacc}
\end{center}
\end{figure}
In the He-poor accretion model, the helium abundance $\Yacc$ of the accreted material was assumed to vary with time, such that
\begin{align}
\Yacc =
\begin{cases}
\Yaccini & {\rm{for}}\,\, M_{\star}\leq M_1\,, \\
\Yaccmin & {\rm{for}}\,\, M_{\star} > M_1\,.
\end{cases}
\label{eq:Yacc}
\end{align}
The He-poor accretion was proposed by \citet{Zhang+19} as a solution to the solar sound-speed anomaly.
\citet{Zhang+19} considered a constant $\Yacc$ but had a larger initial stellar mass in their simulations than in the present study (see Sect.\,\ref{sec:Mdot}); thus, their simulations are equivalent to ours in the late phase (i.e., $M_{\star} > M_1$).
In the He-poor accretion models, we assumed that $\Zacc=\Zaccini$ and $\Xacc=\Xaccini -dY$, where $dY\equiv \Yacc - \Yaccini$.
\subsubsection{Other input physics}
\label{sec:other_inputs}
We adopted the OPAL equation-of-state tables updated in 2005 \citep{Rogers+Nayfonov02}.
We used the nuclear reaction rates in \citet{Caughlan+Fowler88} and \citet{Angulo+99} with certain modifications \citep[see][for details]{Paxton+11}.
The prescription for element diffusion presented in \citet{Thoul+94} was also used.
We note that the JINA reaction table is also available in \texttt{MESA}; however, we confirmed that the results are insensitive to the choice of the reaction table (see Appendix\,\ref{app:non-accreting}).
The outer boundary condition used in this study was given by the atmospheric tables \citep[see Sect.2.7 of ][]{Kunitomo+18}.
Although previous studies adopted different boundary conditions\footnote{
For example, \citet{Vinyoles+17} and \citet{Zhang+19} adopted the atmospheric model proposed by \citet{Krishna_Swamy66}.
}, we confirmed that the outer boundary conditions did not have a significant impact on the results.
\subsubsection{Summary of the input parameters}
\label{sec:summary_inputs}
\begin{table*}
\caption{Input parameters.}
\label{tab:inputs}
\centering
\begin{tabular}{lll}
\hline\hline
Parameter & Description & Reference \\
\hline
$\amlt$ & Mixing-length parameter & Sect.\,\ref{sec:conv} \\
$\fov$ & Overshooting parameter & Sect.\,\ref{sec:conv} \\
$A_1$, $A_2$, $A_3$ & Amplitudes of opacity enhancement & Sect.\,\ref{sec:kap} \\
$M_1$, $M_2$ & Stellar masses when $\Zacc$ starts to increase and when it becomes zero, respectively & Sect.\,\ref{sec:Zacc}, Fig.\,\ref{fig:Zacc} \\
$\Zaccini$ & Metallicity of accreted material when $M_{\star}\leq M_1$ & Sect.\,\ref{sec:Zacc}, Fig.\,\ref{fig:Zacc} \\
$\Zaccmax$ & Maximum metallicity when $M_{\star}=M_2$ & Sect.\,\ref{sec:Zacc}, Fig.\,\ref{fig:Zacc} \\
$\Yaccini$ & He abundance of accreted materials when $M_{\star}\leq M_1$ & Sect.\,\ref{sec:Zacc} \\
$\Yaccmin$ & He abundance of accreted materials when $M_{\star}>M_1$ in Runs 5 and 6 & Sect.\,\ref{sec:Zacc} \\
\hline
\end{tabular}
\tablefoot{For most of the cases, we set $A_1=A_3=0$.
}
\end{table*}
Table\,\ref{tab:inputs} summarizes the input parameters used in this work.
In the runs with opacity enhancement and $\Zacc$ evolution, the number of input parameters can be up to 10.
\subsection{Comparison with observations}
\label{sec:obs}
\begin{table*}
\caption{Target quantities.}
\label{tab:targets}
\centering
\begin{tabular}{lllll}
\hline\hline
Observed parameter & Description & Value & Uncertainty & References \\
\hline
$\ZXs$ & Abundance ratio of metals to hydrogen \tablefootmark{a} & 0.0181 & $10^{-3}$ & 1 \\
& & 0.02292 & $10^{-3}$ & 2 \\
$\Ys$ & Surface helium abundance & 0.2485 & 0.0035 & 3 \\
$\RCZ$ & Location of the convective--radiative boundary [$ R_\sun $] & 0.713 & 0.01 \tablefootmark{b} & \\
rms($\delcs$) & Root-mean-square sound speed & 0 & $10^{-3}$ \tablefootmark{c} & \\
$\log L_{\star}$ & Bolometric luminosity $[ L_\sun]$ & 0 & 0.01\,dex \tablefootmark{c} & \\

$T\sub{eff}$ & Effective temperature [K] & 5777 & 10 \tablefootmark{c} & \\
\hline
\end{tabular}
\tablefoot{
In this work, $ L_\sun=3.8418\times10^{33}\,\rm erg/s$ and $ R_\sun =6.9598\times10^{10}\,\rm cm$ \citep{Bahcall+05}.
\tablefoottext{a}{The $\ZXs$ value depends on the adopted abundance tables (see Sect.\,\ref{sec:abund}).}
\tablefoottext{b}{\citet{Bahcall+05} suggested $0.713\pm0.001\, R_\sun $; however, we used a larger value for convergence purposes (see text for details).}
\tablefoottext{c}{Arbitrarily small (nonzero) uncertainty for convergence purposes (see text for details).}
\tablebib{(1) \citetalias{Asplund+09}, (2) \citetalias{GS98}, (3) \citet{Basu+Antia04}.}
}
\end{table*}
When the elapsed time of the stellar evolution simulations described in Sect.\,\ref{sec:stellarevol} reached the solar age, we compared the simulation results with spectroscopic and helioseismic observations of:
the ratio of the surface metallicity to the surface hydrogen abundance $\ZXs$;
the surface helium abundance $\Ys$;
the location of the convective--radiative boundary $\RCZ$;
the root mean square (rms) of $\delcs$ (see Eq.\,\eqref{eq:delcs});
the bolometric luminosity $L_{\star}$;
and the effective temperature $T\sub{eff}$.
We define the observed minus calculated sound speed as
\begin{eqnarray} \label{eq:delcs}
\delcs\equiv (\csobs-\cs)/\csobs\,.
\end{eqnarray}
To obtain the rms$(\delcs)$, we compared our simulated results with the observed data provided in \citet[][see their Table 3]{Basu+09}.
We interpolated the $\cs$ profile obtained at the solar age (typically $\sim3000$ grids) to the locations of 37 points given in \citet{Basu+09} and thus calculated the rms value of $\delcs$ (see discussions in Sect.\,\ref{sec:solarproblem}).
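The rms computation can be sketched as follows. Linear interpolation is assumed here for illustration, and the observed radii and sound speeds would come from Table 3 of \citet{Basu+09}:

```python
import numpy as np

def rms_delta_cs(r_model, cs_model, r_obs, cs_obs):
    """Root mean square of delcs = (c_obs - c_model)/c_obs, Eq. (eq:delcs).

    The model profile (typically ~3000 grid points) is interpolated to
    the radii of the observed points (37 in Basu et al. 2009).
    """
    r_model = np.asarray(r_model)
    order = np.argsort(r_model)  # np.interp requires increasing abscissae
    cs_at_obs = np.interp(r_obs, r_model[order], np.asarray(cs_model)[order])
    delta = (np.asarray(cs_obs) - cs_at_obs) / np.asarray(cs_obs)
    return float(np.sqrt(np.mean(delta ** 2)))
```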
Table\,\ref{tab:targets} summarizes the observed parameters and their $1\sigma$ uncertainties with three exceptions.
First, although the uncertainty of $\RCZ$ in the helioseismic observations is $0.001\, R_\sun $ in \citet{Bahcall+05}, we relaxed this constraint by a factor of 10. This is because the $\RCZ$ value is likely to be affected by the uncertainties in the convective overshooting model, which are beyond the scope of this study \citep[see Section 3.1 of ][and Sect.\,\ref{sec:conv} in this work]{Zhang+19}.
Second, we chose the rms($\delcs$) uncertainty ($10^{-3}$) based on the results obtained with the \citetalias{GS98} composition presented in \citet{Serenelli+09} (see their Table 2), rather than from the helioseismic observations.
Finally, the uncertainties in $L_{\star}$ and $T\sub{eff}$ are arbitrary; however, we confirmed that our results are not sensitive to these values because all the models reproduce these quantities well ($<0.001$\,dex and 3\,K, respectively).
We assumed the solar age to be 4.567\,Gyr after the condensation of Ca-Al-rich inclusions (CAIs) \citep{Amelin+02}.
We note that there could be a time difference between the formation of CAIs and the time at which $M_{\star}$ reached $0.1\, M_\sun $.
Given that CAIs probably formed in the protosolar nebula, this time difference is at most several million years, and we therefore neglect it in the present study.
\subsection{Chi-squared tests}
\label{sec:chi2}
By comparing our simulation results with observations, we derived the $\chi^2$ value, which is given by
\begin{equation}\label{eq:chi2}
\chi^2 = \frac{\sum_{i=1}^N \left[ (q_i-q_{i,\rm target})/\sigma(q_i)\right]^2}{N}\,,
\end{equation}
where $q_i$ is the simulated value of a particular quantity, $q_{i,\rm target}$ is the corresponding observed (or target) value, $N=6$ is the total number of target quantities, and $\sigma$ is the uncertainty of each quantity listed in Table\,\ref{tab:targets}.
We aimed to search for a set of input parameters that minimized the $\chi^2$ value.
To do so, we used the downhill simplex method \citep{Nelder+Mead65}.
Typically, $\sim200$ simulations were needed for the minimization.
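A minimal sketch of the figure of merit in Eq.\,\eqref{eq:chi2} is given below (hypothetical variable names; in practice each evaluation of it requires a full stellar evolution run):

```python
import numpy as np

def reduced_chi2(q_sim, q_target, sigma):
    """Eq. (eq:chi2): mean squared normalized residual over the N targets."""
    q_sim, q_target, sigma = (np.asarray(a, dtype=float)
                              for a in (q_sim, q_target, sigma))
    return float(np.mean(((q_sim - q_target) / sigma) ** 2))
```

A readily available implementation of the downhill simplex algorithm of \citet{Nelder+Mead65} is, for example, `scipy.optimize.minimize(..., method='Nelder-Mead')`.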
We performed such simulations under a variety of settings.
In the simplex algorithm, we imposed constraints on the parameter space composed of $\Zaccmax$, $M_1$, and $M_2$. We did not consider any cases where the total metal mass of the accreted material was outside the range [0.5, 2] $\times \Zaccini (1\, M_\sun -0.1\, M_\sun )$.
It should be noted that in general standard solar models are optimized in the non-accreting case with three input parameters, namely $\amlt$, $Y\sub{proto}$, and $Z\sub{proto}$, and three target quantities, namely $L_{\star}$, either $T\sub{eff}$ or $R_{\star}$, and $\ZXs$ \citep[see, e.g.,][]{Vinyoles+17}. \citet[][see their Tables 1 and 2]{Farag+20} showed that, if the above three input and target quantities are used, then the optimized solutions using the \texttt{MESA} code are similar to those in \citet{Vinyoles+17}. We confirmed that, using these three target quantities, we could reproduce exactly the results of \citet{Farag+20}. However, we found that in some cases this can result in values of $\Ys$ outside the range of observed values \citep[see, e.g., Table~4 of ][]{Vinyoles+17}. To search for the solutions that explain all the observable quantities, we used as constraints the six quantities in Table~\ref{tab:targets}.
We also performed simulations in which the 37 points used to calculate the $\delcs$ values (see Sect.\,\ref{sec:obs}) were considered as independent and were added to the five other target values in Table~\ref{tab:targets} \citep[thus $N=42$; see, e.g., ][]{Villante+14}. We found that the models that poorly fit the available constraints were not improved and that our best models were not changed by this new approach.
We note that the solutions with the simplex method sometimes fall into local minima. Although we carefully chose the initial values of the input parameters to avoid this problem, future simulations using the Markov chain Monte Carlo method are encouraged.
\section{Solar models fitting the observational constraints}
\label{sec:results}
In this section, we present the results of the simulations optimized using the simplex method for different conditions.
First, in Sects.\,\ref{sec:noacc} and \ref{sec:Hepoor}, we show that our results are in agreement with those of previous studies.
Next, we present our results for the metal-poor accretion (Sect.\,\ref{sec:Zpoor}) and opacity increase (Sect.\,\ref{sec:kap-results}) models.
The parameter settings are summarized in Table\,\ref{tab:chi2}.
The detailed results are provided in Tables\,\ref{tab:chi2-results-input} and \ref{tab:chi2-results-output}.
\subsection{Non-accreting models}
\label{sec:noacc}
To date, most studies have performed solar evolution simulations without including accretion. In this work, we simulated such non-accreting models to ensure consistency with the previous studies.
We performed four sets of simulations (Runs 1--4 in Table\,\ref{tab:chi2}) adopting either the \citetalias{GS98} or \citetalias{Asplund+09} abundance tables, and with and without convective overshooting.
We started the simulations with a $1\, M_\sun $ star having a fully convective structure (see Sect.\,\ref{sec:init}).
The initial stellar composition, $\amlt$, and $\fov$ were optimized using the simplex method.
We found that the converged $\cs$ profiles are in good agreement with those of previous studies (see Fig.\,\ref{fig:non-accreting}), which validates our simulations with the \texttt{MESA} code.
The optimized input parameters and corresponding best results are shown in Fig.\,\ref{fig:other}.
The maximum values of $\delcs$ for the cases with the \citetalias{GS98} and \citetalias{Asplund+09} composition were approximately 0.004 and 0.009, respectively, irrespective of the inclusion of overshooting (see Fig.\,\ref{fig:non-accreting}).
The rms($\delcs$) value for the \citetalias{Asplund+09} composition was found to be in the range (3.2--3.4)$\times10^{-3}$, which is approximately twice as large as that for the \citetalias{GS98} composition, that is, (1.6--1.8)$\times10^{-3}$ (see Table\,\ref{tab:chi2-results-output}).
The other results obtained with the \citetalias{Asplund+09} composition (i.e., surface composition and $\RCZ$) were also found to be worse than those obtained with the \citetalias{GS98} composition, as indicated by the four times larger total $\chi^2$ value in the \citetalias{Asplund+09} case.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=\hsize,keepaspectratio]{other4.pdf}
\caption{\small{
Optimized input parameters (left) and results at the solar age (right).
The left panels show $\amlt$, $\fov$, $\Yaccini$, $\Zaccini$, and stellar mass ($M_1$ and $M_2$) for nine models from top to bottom.
The right panels show the $\chi^2$ value (see Eq.\,\eqref{eq:chi2}), rms$(\delcs)$, $\ZXs$, $\Ys$ and $\RCZ$ from top to bottom.
The five models shown in each panel are: non-accreting models with \citetalias{GS98} composition (Runs 2 and 4); non-accreting models with \citetalias{Asplund+09} composition (Runs 1 and 3); accreting models with time-dependent $\Zacc$ (Runs 7 and 8); models with He-poor accretion (Runs 5 and 6); and \kapb\ model (Run 9) with $A_2=0.12$, from left to right (see Table\,\ref{tab:chi2} for details).
Squares and crosses represent the cases with and without overshooting, respectively.
Diamonds and asterisks represent the cases with He-poor accretion for $t\sub{acc}=12$ and 20\,Myr, respectively.
The shaded regions in the right panels indicate $\chi^2< 1$ or the $1\sigma$ uncertainties (see Table\,\ref{tab:targets}).
}}
\label{fig:other}
\end{center}
\end{figure*}
\subsection{Helium-poor accretion}
\label{sec:Hepoor}
Next, we performed simulations with the He-poor accretion following \citet{Zhang+19}.
In this case, we started simulations with a $0.1\, M_\sun $ seed (see Sect.\,\ref{sec:init}) and simulated its evolution from the protostellar to the solar age.
The input parameters optimized using the simplex method were $\amlt, \fov$, initial composition of the accreted material, $M_1$, and $\Yaccmin$ (see Eq.\,\eqref{eq:Yacc}).
Considering that \citet{Zhang+19} required a longer accretion timescale of $t\sub{acc}\geq 12$\,Myr\footnote{
\citet[][see their Section 3.2]{Zhang+19} claimed the duration of their accretion phase was $t\sub{acc}\geq 10$\,Myr; however, they started accretion at 2\,Myr (see Sect.\,\ref{sec:Mdot} of this work). We note that $t=0$ in this study corresponds to the protostellar phase (i.e., $M_{\star}=0.1\, M_\sun $). }, we set $t\sub{acc}=12$ and 20\,Myr in this work. We note that our fiducial model corresponds to $t\sub{acc}=10$\,Myr (see Table\,\ref{tab:chi2}).
Figure\,\ref{fig:other} shows that He-poor accretion improves the rms($\delcs$) value, as claimed by \citet{Zhang+19}. The rms($\delcs$) value was found to be $2.8\times10^{-3}$ and $1.4\times10^{-3}$ for $t\sub{acc}=12$ and 20\,Myr, respectively. \citet{Zhang+19} obtained even better (but qualitatively similar) rms($\delcs$) values by considering mass loss by stellar winds and a more complex overshooting model.
Despite the above results, we regard the possibility of He-poor accretion to be unlikely. First, the giant planets in our Solar System, which also captured gas in the protoplanetary disk, do not show a large depletion in their helium abundance \citep[see, e.g.,][]{Guillot+Gautier2015}. Jupiter and Saturn's atmospheres are characterized by slightly lower-than-protosolar helium abundances, which is consistent with the helium settling in their interiors from an initial protosolar value \citep{Mankovich+Fortney2020}. In contrast, Uranus and Neptune appear to have a helium abundance compatible with the protosolar value \citep[][]{Guillot+Gautier2015}.
Second, \citet{Zhang+19} surmised that the high first ionization potential of helium might lead to the accumulation of helium at the inner edge of the disk followed by He-poor accretion onto the proto-Sun. However, this would lead to $\simeq0.015\, M_\sun $ of the helium remaining in the disk (see their Table 4).
It is difficult to understand why such a large amount of helium would not have eventually accreted onto the proto-Sun.
\subsection{Metal-poor accretion}
\label{sec:Zpoor}
The possibility that metal-poor accretion may affect the structure of the Sun was first suggested by \citet{Guzik+05} and then tested by \citet{Castro+07}, \citet{Guzik+Mussack10}, \citet{Serenelli+11}, and \citet{Hoppe+20}.
Although these studies observed some improvement due to the inclusion of metal-poor accretion, they failed to find a solution as good as the models with high-$Z$ abundances \citep[i.e.,][\citetalias{GS98}]{Grevesse+Noels93} with respect to $\delcs, \RCZ$, and $\Ys$.
In this work, we revisited metal-poor accretion in a larger parameter space and in the framework of recent planet formation theories (see Sect.\,\ref{sec:planets}).
While the aforementioned studies considered metal-poor accretion onto a pre-MS or MS Sun, we considered metal-poor accretion during the protostellar phase.
We assumed the composition of the accreted material to vary with time, as shown in Fig.\,\ref{fig:Zacc}. We ran simulations with model \FULL\ that had seven input parameters: $\amlt, \fov, \Yaccini, \Zaccini, \Zaccmax, M_1$, and $M_2$.
We also performed a series of calculations without including overshooting (model \FULLnoov).
Figure\,\ref{fig:other} shows the results obtained with models \FULL\ and \FULLnoov.
We found that the results are almost identical to those of the non-accreting models, with no significant improvement in the $\chi^2$ value. This indicates that metal-poor accretion cannot be a solution to the solar abundance problem. This is consistent with the findings of previous studies \citep[e.g.,][]{Serenelli+11} and can be easily explained. As shown in Fig.~\ref{fig:t-MCZ}, the proto-Sun has a nearly fully convective interior, implying that the effect of metal-poor accretion is heavily suppressed therein. This, in turn, limits the signatures of metal-poor accretion on the internal structure of the present-day Sun.
Although metal-poor accretion does not have an impact on the $\chi^2$ test, it affects the metallicity profile of the solar interior. We will revisit this issue in Sect.\,\ref{sec:discussion-planets}.
\subsection{Opacity increase}
\label{sec:kap-results}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=\hsize,keepaspectratio]{A2-3.pdf}
\caption{\small{
Similar to Fig.\,\ref{fig:other} but showing the dependence on $A_2$ in Runs 9, 10, 11, 13, and 14; in addition, the bottom left panel shows $A_3$.
The orange and blue colors indicate the cases with varying $A_3$ and $A_3=0$, respectively.
The circles and plus signs denote the cases with and without planet formation, respectively (i.e., $\Zacc$ either evolves with time or is a constant).
The gray triangles at $A_2=0.1$ denote the cases with $A_3=\pm0.05$.
The two gray vertical lines demarcate the opacity enhancement (i.e., $7\pm3\%$) suggested by \citet{Bailey+15}.
}}
\label{fig:A2}
\end{center}
\end{figure*}
Before the experiments conducted by \citet{Bailey+15} showed that iron opacities in stellar interiors were underestimated, solar models with a $\sim10$--30\% ad hoc increase in the opacities had already been examined and shown to partly solve the solar abundance problem \citep[e.g.,][]{Christensen-Dalsgaard+09,Serenelli+09,Christensen-Dalsgaard+Houdek10,Villante10}.
Guided by the results of \citet{Bailey+15}, we adopted a physically motivated opacity-increase factor $\fopa$ that depends on three adjustable parameters $A_1$, $A_2$, and $A_3$, as defined by Eqs.\,\eqref{eq:kap} and \eqref{eq:delkap}. Plausible values of these parameters were obtained by minimizing the $\chi^2$ value using the simplex method.
First, we obtained results that were insensitive to the value of the low-temperature parameter $A_1$. This is because this parameter modifies the opacities in the CZ, where the temperature stratification is nearly adiabatic and thus insensitive to the opacity increase. We therefore set $A_1=0$ in our models.
Next, we constructed models with different values of $A_2$ in the range 0 to 0.22 and five variable input parameters for the simplex method: $\amlt, \fov, \Yaccini$, $\Zaccini$, and $A_3$ (corresponding to model \kapa\ in Table\,\ref{tab:chi2}).
Another series of models was constructed by setting $A_3=0$ (model \kapb).
Figure\,\ref{fig:A2} shows the results as a function of $A_2$. When $A_2=A_3=0$, the results obtained with the \citetalias{Asplund+09} abundances were found to be clearly much worse than those obtained with the \citetalias{GS98} abundances. However, when $A_2$ was increased (in both models \kapa\ and \kapb), the quality of the fit to the observational constraints improved significantly, with the best results being obtained for $A_2\simeq0.12$--0.18. Modifying $A_3$ in addition to $A_2$ (as in model \kapa) was found to be helpful but not necessary. The $\chi^2$ and rms$(\delcs)$ values decreased in this case because of the additional degree of freedom. However, additional simulations with $A_3=\pm0.05$ and $A_2=0.10$ (model \kapc) confirmed that the effect of $A_3$ was marginal. Therefore, we did not explore the effect of $A_3$ further and simulated instead the \kapb\ models (with $A_3=0$).
The iron opacities measured in laboratory experiments at a temperature of $\sim 2\times 10^6$\,K should correspond to an increase of $7\pm3\%$ in the Rosseland-mean opacities over standard opacities \citep{Bailey+15}. This temperature is consistent with an increase in $A_2$ of similar magnitude \citep[see Sect.\,\ref{sec:kap} and][]{LePennec+15}. Based on this and the best-fit results from Fig.\,\ref{fig:A2}, we adopted $A_2=0.12$ for our fiducial model. We note that this value of $A_2$ is also consistent with that used in models with increased opacities in previous studies \citep[][]{Serenelli+09,Buldgen+19}.
We observed a clear correlation between $\Zaccini$ and $A_2$, such that $\Zaccini$ decreased monotonically with $A_2$. This is because both parameters have the same effect on the opacity in the solar interior. The correlations between $A_2$ and the other parameters (i.e., $\amlt$, $\fov$, and $\Yaccini$) are more complicated. This is because we used the simplex method to fit six observational constraints, which the input parameters affect in different ways \citep[see, e.g.,][]{Henyey+65,Kippenhahn+12}.
\subsection{Opacity increase and metal-poor accretion}
\label{sec:Zpoor-kap}
Finally, we performed simulations by including both opacity increase and metal-poor accretion (i.e., the $\Zacc$ evolution as shown in Fig.\,\ref{fig:Zacc}).
The blue and orange circles in Fig.\,\ref{fig:A2} show the results of models \kappla\ (with $A_3=0$) and \kapplb\ (with varying $A_3$), respectively, for $A_2=$ 0.12, 0.15, and 0.18.
We found that the minimized $\chi^2$ and rms$(\delcs)$ values were almost the same as those obtained using models \kapb\ and \kapa\ (i.e., with homogeneous $\Zacc$), as expected from the results presented in Sect.\,\ref{sec:Zpoor}.
This again points to the conclusion that planet formation processes do not affect the solar sound speed profile.
\subsection{The solar abundance problem}\label{sec:solarproblem}
\begin{figure}[!ht]
\begin{center}
\includegraphics[width=\hsize,keepaspectratio]{r-cs.pdf}
\caption{\small{
Radial profiles of observed minus calculated sound speed $\delcs$.
The blue solid line with circles shows the result obtained in this study using model \kapb\ with $A_2=0.12$.
The gray solid and dashed lines show the results of non-accreting models with \citetalias{GS98} and \citetalias{Asplund+09} compositions, respectively (see Fig.\,\ref{fig:non-accreting}).
The other solid lines show results from the literature: the orange, red, purple, and green lines show the results of \citet[][see their Figure\,7]{Christensen-Dalsgaard+18},
\citet[][see the magenta line in their Figure\,9]{Buldgen+19},
Model AGSSr2a of \citet[][]{Yang19},
and Model TWA of \citet[][]{Zhang+19}, respectively.
}}
\label{fig:r-cs}
\end{center}
\end{figure}
It is interesting to note that the \kapb\ model with $A_2=0.12$ fits the observational constraints, and in particular, rms$(\delcs)$, even better than the models using \citetalias{GS98} abundances (see Fig.\,\ref{fig:A2}). This is better demonstrated in Fig.\,\ref{fig:r-cs}, where we plot $\delcs$ as a function of radius. For our fiducial model (i.e., model \kapb\ with $A_2=0.12$), the peak value at the base of the CZ is $\delcs=0.003$, which is much smaller than the peak value obtained for standard models with the \citetalias{Asplund+09} composition ($\delcs=0.009$), and even better than that obtained for models with the \citetalias{GS98} composition.
Figure\,\ref{fig:r-cs} also compares our model with other models in the literature that include various physical processes. These models provide qualitatively equivalent fits to the sound speed constraint.
\citet{Zhang+19} emphasized the importance of helium-poor accretion (however, see the discussion in Sect.\,\ref{sec:Hepoor}) and of improved overshooting models (see Sect.\,\ref{sec:conv}).
In the models by \citet{Christensen-Dalsgaard+18} and \citet{Buldgen+19}, both an ad hoc opacity increase and extra mixing around the base of the CZ were considered.
\citet{Yang19} showed that rotational mixing is also promising. We note that in the present work, we did not investigate these possibilities, but Fig.\,\ref{fig:r-cs} shows that all the aforementioned models also fit the helioseismic constraints better than the models with the high-$Z$ \citetalias{GS98} abundances.
We note that the smoothness of our $\delcs$ profile differs from that of other models in the literature. We derived $\delcs$ simply by comparing our simulated $\cs$ profile with that given in \citet{Basu+09} (see Sect.\,\ref{sec:obs}). Conversely, in some of the studies, $\delcs$ was derived by comparing the oscillation modes from the calculated solar structure using an inversion method with the observed modes \citep[see, e.g.,][]{Buldgen+19}. We already confirmed that our $\delcs$ profiles for the non-accreting models are in good agreement with those of previous studies, and therefore, the difference in the derivation of $\delcs$ does not change the conclusions of this study.
\section{Consequences of planet formation}
\label{sec:discussion-planets}
Using our fiducial K2 model, shown in Sect.\,\ref{sec:results} to be our best fit to the observational constraints, we now examine the impact of an inhomogeneous $\Zacc$ on the following: present-day solar central metallicity $\Zc$ (see Sect.\,\ref{sec:discussion-Zc}), primordial Solar-System metallicity $Z\sub{proto}$ (see Sect.\,\ref{sec:discussion-Zini}), and primordial Solar-System helium abundance $Y\sub{proto}$ (see Sect.\,\ref{sec:discussion-Yini}).
Before discussing $\Zc$, $Z\sub{proto}$, and $Y\sub{proto}$, in Sect.\,\ref{sec:discussion-formation}, we first discuss the consequences of planet formation on the solar interior, as inferred from the results presented in Sect.\,\ref{sec:results}.
\subsection{Planet formation and the solar interior}
\label{sec:discussion-formation}
As discussed in Sect.\,\ref{sec:context}, planet formation is associated with both an increase in $\Zacc$ (i.e., the ``pebble wave'' phase) and a late metal-poor accretion phase during which planetesimals or planets are formed. The largely convective structure of the young Sun partly erases the signature of planet formation; however, a small trace remains and can be highlighted by comparing solar interior models calculated with and without planet formation mechanisms.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=0.48\hsize,keepaspectratio]{r-Z-noacc3.pdf}
\includegraphics[width=0.48\hsize,keepaspectratio]{t-Zsurf-noacc3.pdf}
\caption{\small{
Comparison between models \kapb\ (gray dashed line) and \kappla\ (blue solid line) with $A_2 = 0.12$.
\textit{Left panel}: Metallicity profile in the interior of present-day Sun.
\textit{Right panel}: Time evolution of the surface metallicity $\Zs$.
The gray vertical lines denote $M_1 (=0.904\, M_\sun )$, $M_2 (=0.962\, M_\sun )$, and the end of the accretion phase for model \kappla.
The target value of $\Zs$ at 4.567\,Gyr is $0.0134\pm0.0007$ (see Table\,\ref{tab:targets}).
}}
\label{fig:r-Z-2}
\end{center}
\end{figure*}
Figure\,\ref{fig:r-Z-2}(a) shows the metallicity profile in the interior of the present-day Sun for the models \kappla\ and \kapb\ (i.e., with and without a time-varying $\Zacc$, respectively) with $A_2=0.12$.
We observe that planet formation increases the central metallicity $\Zc$ by $9\times10^{-4}$ ($\simeq5\%$); however, in the outer part of the Sun ($r\ga0.2\, R_\sun $), the metallicity is almost the same.
This is because, as shown in Fig.\,\ref{fig:t-MCZ}, the proto-Sun at $\leq10\,$Myr has a small radiative core (i.e., a non-mixing region), and the signature of the initially high metallicity is preserved there until the present day.
Therefore, realistic protostellar and pre-MS evolution models are crucial for obtaining realistic $\Zc$ values.
Figure\,\ref{fig:r-Z-2}(b) shows the time evolution of $\Zs$.
For model \kappla\ with $A_2=0.12$, $\Zs$ varies considerably from 0.0147 to 0.0162 in the accretion phase because of planet formation; however, it decreases monotonically in the MS phase due to gravitational settling.
We note that $\Zs$ decreases slightly with time in the very early phase (e.g., $\Zs$ decreases from 0.0149 to 0.0147 during the time 0.1 to 0.7\,Myr). This is because the $0.1\, M_\sun $ initial stellar seed has $Z=0.02$, which is larger than $\Zaccini=0.0140$.
\begin{table*}[!t]
\begin{center}
\caption{Parameter settings of the K2-MZ simulations for different planet formation scenarios.}
\label{tab:MZ}
\begin{tabular}{l|lll|lll}
\hline
\hline
\noalign{\smallskip}
Model name & \multicolumn{3}{c}{Parameter settings} & \multicolumn{3}{c}{Results} \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
& $M_1$ & $M_2$ & $\DZacc$ & $\Zaccini$ & $M_{XY,\mathrm{lost}}$ & $M_{Z,\mathrm{planet}}$\\
& $[ M_\sun ]$ & $[ M_\sun ]$ & & & $[ M_\sun ]$ & $[ M_\oplus]$ \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
MZ1 & 0.90 & 0.92 & 0.06 & 0.0155 & 0 & 213 \\
MZ2 & 0.95 & 0.97 & 0.06 & 0.0144 & 0.04 & 150 \\
MZ3 & 0.88 & 0.92 & 0.03 & 0.0155 & 0 & 213 \\
MZ4 & 0.88 & 0.92 & 0.06 & 0.0148 & 0.03 & 150 \\
MZ5 & 0.9703\tablefootmark{*} & 0.9703\tablefootmark{*} & -- & 0.0152 & 0 & 150 \\
MZ6 & 0.86 & 0.92 & 0.02 & 0.0155 & 0 & 214 \\
MZ7 & 0.86 & 0.92 & 0.06 & 0.0140 & 0.08 & 150 \\
MZ8 & 0.91 & 0.97 & 0.06 & 0.0132 & 0.14 & 150 \\
MZ9 & 0.92 & 0.94 & 0.06 & 0.0151 & 0.01 & 150 \\
\noalign{\smallskip}
\hline
\noalign{\smallskip}
\end{tabular}
\end{center}
\tablefoot{
$\fov=0.01$, $A_2=0.12$, and $t\sub{acc}=5$\,Myr (see Table\,\ref{tab:chi2}).
The $M_{XY,\mathrm{lost}}$ and $M_{Z,\mathrm{planet}}$ values were calculated using the optimized $\Zaccini$ value (see text for details).
$\DZacc=\Zaccmax-\Zaccini$. See Fig.\,\ref{fig:Zacc} for the definitions of $M_1$ and $M_2$.
\tablefoottext{*}{In MZ5, we assumed that there is no pebble accretion phase (i.e., $M_1=M_2$), $M_{XY,\mathrm{lost}}=0$, and $M_{Z,\mathrm{planet}}=150\, M_\oplus$; thus, $M_1$ was determined by $\Zaccini$ (see Eq.\,\eqref{eq:Mlost}).}
}
\end{table*}
To evaluate the effects of planet formation on the properties of the solar interior, we examined models having different values of the three parameters, $M_1$, $M_2$, and $\DZacc\equiv \Zaccmax-\Zaccini$ (see Fig.\,\ref{fig:Zacc}), as listed in Table\,\ref{tab:MZ}. We refer to these new simulation models as K2-MZ.
We set the fiducial $t\sub{acc}$ for these models to be 5\,Myr because the typical protoplanetary disk lifetime (i.e., half-life period) is several million years (see Sect.\,\ref{sec:interior}). We set $t\sub{acc}=10\,$Myr in Sect.\,\ref{sec:results} to investigate the maximum impact of planet formation on the $\cs$ profile. In contrast, in this section, our aim is to explore the realistic extent of the influence of planet formation.
In Sects.\,\ref{sec:discussion-Zc}--\ref{sec:discussion-Yini} we evaluate the effects of planet formation on the $\Zc$, $Z\sub{proto}$, and $Y\sub{proto}$ values.
It is important to link the parameters $M_1$, $M_2$, and $\DZacc$ to $M_{Z,\mathrm{planet}}$, which is the mass retained by planets in the form of ``metals'' (i.e., excluding hydrogen and helium but including all the other elements) and to $ M_{XY,\mathrm{lost}}$, which is the mass in metal-poor gas (i.e., containing hydrogen and helium only) that is lost due to selective photoevaporation and/or MHD disk winds (see Sect.\,\ref{sec:planets}).
The conservation of the metal mass implies that
\begin{eqnarray}\label{eq:Mlost}
\Zaccini(1\, M_\sun +M_{XY,\mathrm{lost}}) \!&=&\! \Zaccini M_2+\frac12\DZacc (M_2-M_1) \nonumber \\
&& + M_{Z,\mathrm{planet}}\,.
\end{eqnarray}
We note that in the above equation, $M_{XY,\mathrm{lost}}$ and $M_{Z,\mathrm{planet}}$ are degenerate, and hence, we assumed $M_{Z,\mathrm{planet}}=150\, M_\oplus$ following \citet{Kunitomo+18}, unless otherwise noted. Table\,\ref{tab:MZ} also lists the values obtained for $M_{XY,\mathrm{lost}}$ using the optimized $\Zaccini$ and Eq.\,\eqref{eq:Mlost}. If we obtained $M_{XY,\mathrm{lost}}<0$, we set $M_{XY,\mathrm{lost}}=0$ and consequently increased $M_{Z,\mathrm{planet}}$. We note that the value of $M_{Z,\mathrm{planet}}$ in the Solar System is rather uncertain, but we estimate the upper limit to be $\simeq168\, M_\oplus$ \citep{Kunitomo+18}.
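The metal-mass budget of Eq.\,\eqref{eq:Mlost} can be solved directly for $M_{XY,\mathrm{lost}}$ once $\Zaccini$, $M_1$, $M_2$, $\DZacc$, and $M_{Z,\mathrm{planet}}$ are fixed. A minimal sketch (the function name and the approximate Earth-to-solar mass ratio are ours, not from the paper):

```python
M_EARTH_IN_MSUN = 3.003e-6  # approximate Earth mass in solar masses

def m_xy_lost(z_ini, m1, m2, dz_acc, m_z_planet_earth):
    """Solve Eq. (eq:Mlost) for M_XY,lost (in M_sun):

    Z_ini * (1 M_sun + M_XY,lost)
        = Z_ini * M2 + (1/2) * DZ_acc * (M2 - M1) + M_Z,planet
    """
    m_z_planet = m_z_planet_earth * M_EARTH_IN_MSUN  # convert to M_sun
    rhs = z_ini * m2 + 0.5 * dz_acc * (m2 - m1) + m_z_planet
    return rhs / z_ini - 1.0

# Model MZ2 of Table tab:MZ: Z_ini = 0.0144, M1 = 0.95, M2 = 0.97,
# DZ_acc = 0.06, M_Z,planet = 150 M_earth  ->  M_XY,lost ~ 0.04 M_sun
print(round(m_xy_lost(0.0144, 0.95, 0.97, 0.06, 150.0), 2))  # 0.04
```

A negative return value corresponds to the cases discussed above, in which $M_{XY,\mathrm{lost}}$ is set to zero and $M_{Z,\mathrm{planet}}$ is increased instead.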
\subsection{Impact on the central solar metallicity}
\label{sec:discussion-Zc}
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=\hsize,keepaspectratio]{multi-Zc4.pdf}
\caption{\small{
Dependence of $\Zc$ on (a) $A_2$, (b) $t\sub{acc}$, (c) $\fov$, and (d) planet formation models (corresponding to different values of $M_1$, $M_2$, and $\Zaccmax$; see Table\,\ref{tab:MZ}).
In panel (a), the results of models K2 and K2-MZvar are shown, in addition to those of models K2$'$, K2-MZ1$'$, and K2-MZ8$'$, with $\fov=0.01$ and $t\sub{acc}=10\,$Myr (see Table\,\ref{tab:chi2}).
In panels (b) and (c), the green and orange curves indicate models K2$'$ and K2-MZ1$'$, respectively. The orange dots in panel (d) are the K2-MZ simulation results.
The two horizontal gray dashed lines in all the panels denote the results of the non-accreting models without overshooting for the \citetalias{GS98} (top) and \citetalias{Asplund+09} (bottom) compositions. The horizontal green dashed line in panel (d) denotes the result of model K2$'$ with $A_2=0.12$, $t\sub{acc}=5\,$Myr, and $\fov=0.01$.
}}
\label{fig:Zcore}
\end{center}
\end{figure*}
To what extent can planet formation modify $\Zc$? To address this question, we focused on the results of the K2-MZ models (see Sect.\,\ref{sec:discussion-formation} and Table\,\ref{tab:MZ}) in addition to those of the K2 and K2-MZvar models presented in Sect.\,\ref{sec:results}. We explore in Fig.\,\ref{fig:Zcore} the dependence of $\Zc$ on the parameters $A_2$, $t\sub{acc}$, $\fov$, and planet formation scenarios.
Figure\,\ref{fig:Zcore}(a) shows that $\Zc$ is negatively correlated with $A_2$; moreover, $\Zc$ increases by $\simeq\!0.008$ because of planet formation processes, irrespective of the value of $A_2$. This negative correlation arises owing to the negative correlation between $\Zaccini$ and $A_2$ (see Sect.\,\ref{sec:kap-results}).
We note that, in the K2 and K2-MZvar models, $\fov$, $M_1$, $M_2$, and $\DZacc$ were allowed to vary in order to minimize $\chi^2$. We performed additional simulations (K2$'$, K2-MZ1$'$, and K2-MZ8$'$ with $t\sub{acc}=10\,$Myr; see Table\,\ref{tab:chi2}) by fixing these parameters.
The $M_1$, $M_2$, and $\DZacc$ values of K2-MZ1$'$ and K2-MZ8$'$ are the same as those of K2-MZ1 and K2-MZ8, respectively (see Table\,\ref{tab:MZ}).
The results of the different models were quite similar, confirming that the final $\Zc$ value is mostly determined by the opacity ($A_2$) of the solar interior and planet formation processes (in particular the metal-poor accretion).
Next, we investigated the dependence of $\Zc$ on $t\sub{acc}$ (see Sect.\,\ref{sec:interior}).
Figure\,\ref{fig:Zcore}(b) shows the results of models K2-MZ1$'$ and K2$'$ for $\fov=0.01$ and $A_2=0.12$ with different $t\sub{acc}$ values. We find that for model K2-MZ1$'$, $\Zc$ increases with $t\sub{acc}$.
This is because the size of the radiative core in the accretion phase increases with time (Sect.\,\ref{sec:interior}); thus, in the longer $t\sub{acc}$ case, the metal-poor accretion (see Fig.\,\ref{fig:Zacc}) has a larger effect and $\Zaccini$ (and therefore $\Zc$) can be higher.
However, for model K2$'$ with a homogeneous $\Zacc$, $\Zc$ is independent of $t\sub{acc}$.
Although \citet[][see their Figures \,4 and 7]{Serenelli+11} and \citet[][see their Figures 15 and 18]{Zhang+19} have already shown that $\Zacc$ has the potential to modify $\Zc$, the accretion history in these studies \citep[and $t\sub{acc}$ of][]{Serenelli+11} is not based on the standard model of star formation (see Sect.\,\ref{sec:Mdot}). Considering the strong dependence of $\Zc$ on $t\sub{acc}$, a realistic accretion history is crucial for accurately evaluating $\Zc$.
Figure\,\ref{fig:Zcore}(c) shows that $\Zc$ decreases with $\fov$ for the model with planet formation. This is because if a star has more vigorous overshooting, then the radiative core becomes smaller and develops at a slightly later phase.
However, for the cases with homogeneous $\Zacc$, $\Zc$ is insensitive to $\fov$ because the internal metallicity profile is homogeneous until gravitational settling sets in (i.e., in $\sim1$\,Gyr).
Although recent studies have attempted to constrain the efficiency of overshooting by hydrodynamic simulations \citep{Freytag+96,Korre+19,Higl+21} and observations \citep[e.g., ][]{Deheuvels+16}, it remains uncertain. Further studies to constrain the mixing in stellar interiors are necessary.
Finally, Fig.\,\ref{fig:Zcore}(d) shows $\Zc$ for the K2-MZ models. We find that $\Zc$ is not sensitive to planet formation scenarios. This is related to the process used to satisfy the constraints in this work. We fixed $M_1$, $M_2$, and $\DZacc$, and chose $\Zaccini$ such that it satisfied the constraints (in particular, $\ZXs$; see Fig.\,\ref{fig:Z-MZ}(b)) imposed by the simplex method.
Consequently, $\Zc$ does not have much variation with respect to different planet formation scenarios.
We note that in models with a higher $\Zc$, the central opacity is higher, leading to an increased radiative temperature gradient \citep[e.g.,][]{Kippenhahn+Weigert90}, and thus, to a higher central temperature. For models with $A_2 = [0.12, 0.18]$ and $A_3=0$, the central temperature ranges from $1.559\times10^7$ to $1.567\times10^7$\,K for values of $\Zc$ in the range $0.0155$--$0.0171$.
In summary, the value of $\Zc$ is larger for a lower $A_2$, longer $t\sub{acc}$, and lower $\fov$. For $A_2 =0.12$ and $t\sub{acc}=5\,$Myr, planet formation enhances $\Zc$ up to 0.01686.
The $\Zc$ values for models with $A_2 = [0.12, 0.18]$ (i.e., the models that reproduce the sound speed profile) and planet formation are $\simeq$5\% higher than those of homogeneous $\Zacc$ models (see Fig.\,\ref{fig:Zcore}(a)).
Interestingly, recent observations of solar neutrino fluxes reaching the Earth have also pointed to high values of the metallicity at the solar center \citep{Agostini+18,Borexino-Collaboration20}; however, their interpretation remains tentative at present. More accurate observations are required in the future.
\subsection{Impact on the constraints for the primordial metallicity}
\label{sec:discussion-Zini}
\begin{figure*}[htb]
\begin{center}
\includegraphics[width=\hsize,keepaspectratio]{multi-Zini3.pdf}
\caption{\small{
Same as Fig.\,\ref{fig:Zcore} except the vertical axis is the initial metallicity $\Zaccini$, which corresponds to the primordial metallicity of the protosolar molecular cloud core; and
the horizontal axis of (d) is $M_{XY,\mathrm{lost}}$, which was calculated using Eq.\,\eqref{eq:Mlost} and $M_{Z,\mathrm{planet}}=150\, M_\oplus$. We note that the unphysical $M_{XY,\mathrm{lost}}<0$ values are due to our assumption that $M_{Z,\mathrm{planet}}=150\, M_\oplus$. These solutions are equivalent to solutions with $M_{XY,\mathrm{lost}}=0$ and $M_{Z,\mathrm{planet}}>150\, M_\oplus$ (see Table\,\ref{tab:MZ}).
%
%
%
}}
\label{fig:Zini}
\end{center}
\end{figure*}
\citet{Vinyoles+17} obtain constraints on the primordial metallicity $Z\sub{proto}$ that mostly depend on the assumed abundance table from their non-accreting models, that is, $Z\sub{proto}=0.0149$ for \citetalias{Asplund+09} and $Z\sub{proto}=0.0187$ for \citetalias{GS98}. For non-accreting models, we obtain $Z\sub{proto}=0.0163$ and $Z\sub{proto}=0.0187$, respectively. We find that the difference in the \citetalias{Asplund+09} case arises from the different number of target values, $N$. As described in Sect.\,\ref{sec:chi2}, $N=3$ in \citet[][and most previous studies]{Vinyoles+17}, whereas $N=6$ in this study, leading to a much better fit of $\Ys$. We stress that these solutions, however, do not represent the best fits to the observational constraints.
Figure\,\ref{fig:Zini} shows the variation in the primordial metallicity of the protosolar molecular cloud core $\Zaccini$ (see Sect.\,\ref{sec:planets}) for the same models and parameters as in Fig.\,\ref{fig:Zcore}, including our best models.
Figure\,\ref{fig:Zini}(a) shows that $\Zaccini$ is negatively correlated with $A_2$, as described in Sect.\,\ref{sec:kap-results}. Although this behavior is the same as that of $\Zc$, Fig.\,\ref{fig:Zini}(a) shows some differences compared to Fig.\,\ref{fig:Zcore}(a). Most importantly, while planet formation processes always increase $\Zc$, they can either increase or decrease $\Zaccini$. As shown by Figs.\,\ref{fig:Zini}(b) and (c), this cannot be explained by varying $t\sub{acc}$ or $\fov$, which have little impact on $\Zaccini$.
Figure\,\ref{fig:Zini}(d) shows that $\Zaccini$ decreases with $M_{XY,\mathrm{lost}}$.
When the mass of the hydrogen and helium that is selectively removed from the disk is high, the primordial metallicity $\Zaccini$ must be low to compensate for the accretion of the disk gas that becomes relatively metal-rich and to account for the present-day observations.
Conversely, in models with a low value of $M_{XY,\mathrm{lost}}$, $Z\sub{proto}$ must be high to compensate for the metal-poor accretion.
We note that the retention of heavy elements by planet formation has the opposite effect (see Eq.\,\eqref{eq:Mlost}) so that a low value of $M_{XY,\mathrm{lost}}$ is equivalent to a high value of $M_{Z,\mathrm{planet}}$ (we set $M_{Z,\mathrm{planet}} = 150\, M_\oplus$ in Fig.\,\ref{fig:Zini}(d); see Sect.\,\ref{sec:discussion-formation}).
If $M_{XY,\mathrm{lost}} \simeq 0.03\, M_\sun $, then $\Zaccini M_{XY,\mathrm{lost}}$ compensates for $M_{Z,\mathrm{planet}} = 150\, M_\oplus$ and the value of $Z\sub{proto}$ is the same as that obtained for model K2$'$ (i.e., with homogeneous $\Zacc$).
The above behavior can also explain the non-monotonic relation between $\Zaccini$ and $A_2$ for the \kappla\ models in Fig.\,\ref{fig:Zini}(a). Indeed, the models with $A_2=0.12$, 0.15, and 0.18 correspond to $M_{XY,\mathrm{lost}}=0.13$, $0.18$, and $0.13\, M_\sun $, which explains the much lower $\Zaccini$ value for the \kappla\ model with $A_2=0.15$.
For our preferred models with $A_2=[0.12, 0.18]$, which successfully reproduce the observational constraints (see Sect.\,\ref{sec:kap-results}), $\Zaccini$ ranges from 0.0127 to 0.0157.
A slightly narrower range may be obtained by imposing tighter constraints on $M_{XY,\mathrm{lost}}$ and $M_{Z,\mathrm{planet}}$, namely, $M_{XY,\mathrm{lost}}\la 0.05\, M_\sun $ and $M_{Z,\mathrm{planet}}=97$--168\,$ M_\oplus$ (see Sect.\,\ref{sec:planets}). In this case, $\Zaccini$ lies approximately in the range 0.0140--0.0153.
\subsection{Impact on the constraints for the primordial helium abundance}
\label{sec:discussion-Yini}
Estimating the primordial helium abundance (i.e., the helium abundance in the protosolar molecular cloud core $Y\sub{proto}$) accurately is of particular interest, because it provides reliable constraints on the internal structure and composition of Jupiter and Saturn \citep{Guillot+18}. \citet{Serenelli+Basu10} obtained a primordial helium abundance of $0.273\pm0.006$ by using solar evolution models with turbulent mixing below the CZ. \citet{Vinyoles+17} obtained a lower value of $Y\sub{proto}=0.2613$ for the \citetalias{Asplund+09} composition, which resulted in a poor fit of $\Ys$ (see Sect.\,\ref{sec:chi2}).
Figure\,\ref{fig:Yini} shows $\Yaccini$ for the same models as in Figs.\,\ref{fig:Zcore} and \ref{fig:Zini}. Again, the most significant correlation is obtained between $\Yaccini$ and the opacity-enhancement parameter $A_2$. We observe that $\Yaccini$ decreases from $0.274$ for the standard model (model K2 with $A_2=0$) to $0.267$ for the highest value of $A_2=0.22$. Figures\,\ref{fig:Yini}(b)--(d) show that $\Yaccini$ has a weak dependence on $t\sub{acc}$, $\fov$, and $M_{XY,\mathrm{lost}}$ (for the models with planet formation).
However, Fig.\,\ref{fig:Yini}(a) shows that planet formation processes result in an increase in $\Yaccini$, independent of $M_{XY,\mathrm{lost}}$. This is similar to what is observed for $\Zc$ but different from what is observed for $\Zaccini$. The reason for such behavior is threefold.
First, hydrogen and helium are believed to have a common evolutionary history that differs from that of metals (see Sect.\,\ref{sec:Zacc}). This implies that planet formation processes (i.e., pebble wave, planetesimal formation, and disk winds) affect $\Zacc$ while conserving the mass ratio of helium to hydrogen ($Y/X$) in the accreted material.
Second, when planet formation results in a high $\Zc$, the central temperature $\Tc$ of the star increases because of the changes in the opacity, which in turn increases the radiative temperature gradient (see Sect.\,\ref{sec:discussion-Zc}).
Third, this increase in temperature affects the nuclear burning rate $r_{pp}$ because $r_{pp}\propto \Xc^2\Tc^4$ \citep{Kippenhahn+Weigert90}, where $\Xc\sim 1-\Yc$ is the hydrogen abundance in the Sun's nuclear burning core. Given the global constraints on the structure of the present-day Sun, to compensate for the higher $\Tc$, $\Xc$ needs to be lower on average on the MS, which in turn implies a lower $\Xaccini$ and therefore a higher $\Yaccini$. For these reasons, planet formation processes lead to higher values of $\Zc$, $\Tc$, and hence, $\Yaccini$.
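The compensation argument can be made quantitative: at a fixed burning rate, $r_{pp}\propto \Xc^2\Tc^4$ implies $\delta\Xc/\Xc\simeq-2\,\delta\Tc/\Tc$. A rough sketch using the central-temperature range quoted in Sect.\,\ref{sec:discussion-Zc} (an order-of-magnitude illustration of the scaling, not a replacement for the full evolution models):

```python
def delta_xc_over_xc(tc_low, tc_high):
    """Fractional change in X_c needed to keep r_pp ~ X_c**2 * T_c**4 fixed."""
    dtc_over_tc = (tc_high - tc_low) / tc_low
    return -2.0 * dtc_over_tc

# T_c range 1.559e7 -- 1.567e7 K quoted for A_2 in [0.12, 0.18]
print(f"{delta_xc_over_xc(1.559e7, 1.567e7):+.3%}")  # about -1.0%
```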
In addition, $\Yaccini$ is positively correlated with $A_3$. Therefore, if the high-temperature opacities are changed, then $\Yaccini$ would be modified by an absolute factor, which is estimated to be $\delta\Yaccini\sim A_3/20$.
Overall, our results for the models that fit the observational constraints (i.e., $A_2=[0.12, 0.18]$) imply that $\Yaccini$ ranges from 0.268 to 0.274.
This range is compatible with, but somewhat more tightly constrained than, the range obtained by \citet{Serenelli+Basu10}, which is 0.267--0.278.
The $\Yaccini/(1-\Zaccini)$ value was found to range from 0.272 to 0.278.
\begin{figure*}[!ht]
\begin{center}
\includegraphics[width=\hsize,keepaspectratio]{multi-Yini4.pdf}
\caption{\small{
Same as Fig.\,\ref{fig:Zini} except the vertical axis is the initial helium mass fraction $\Yaccini$, which corresponds to the primordial helium abundance of the protosolar molecular cloud core.
The gray open circles in panel (d) show the results of models K2, K2$'$, K2-MZvar, K2-MZ1$'$, and K2-MZ8$'$ with $A_2=[0.12, 0.18]$.
%
%
%
}}
\label{fig:Yini}
\end{center}
\end{figure*}
\section{Conclusion}
\label{sec:conclusion}
In this work, we studied how the formation of the Solar System affected the composition and internal structure of the Sun. The Sun was formed owing to the collapse of a molecular cloud core, and it grew via accretion of the gas in the circumsolar disk over several million years while planet formation processes were at play. According to protoplanetary disk evolution models, when the proto-Sun was approximately between 90\% and 98\% of its final mass (see Fig.\,\ref{fig:Eacc}), a pebble wave led to a phase of accretion of high-metallicity gas. This was followed by a phase of metal-poor accretion due to the formation of planetesimals and planets, while the hydrogen and helium may have been selectively lost from the disk atmosphere via photoevaporation and/or MHD disk winds. Therefore, the Sun grew via the accretion of gas with an evolving composition.
To study the evolution of the Sun and reproduce the present-day constraints on its structure and composition obtained from spectroscopy and helioseismology, we performed an extensive ensemble of simulations. Our simulations included accretion in the protostellar and pre-MS phases. The input parameters were adjusted using $\chi^2$ minimization to best reproduce the present-day constraints (i.e., after $4.567$\,Gyr of evolution) on the luminosity, effective temperature, surface composition ($\Ys$ and $\ZXs$), CZ radius ($\RCZ$), and sound speed profile of the Sun. The input parameters used were the mixing length parameter $\amlt$, overshooting parameter $\fov$, initial helium abundance $\Yaccini$, and initial metallicity $\Zaccini$. A second set of adjustable parameters were introduced to modify the opacity, namely, $A_1$, $A_2$, and $A_3$. A third set of parameters were introduced to modify the composition of the accreted material, $M_1$, $M_2$, $\Zaccmax$, and $\Yaccmin$.
Several scenarios were tested. Classical non-accreting models with old abundances (\citetalias{GS98}) are known to provide better fits to the helioseismic constraints than those using more recent abundances (\citetalias{Asplund+09}), leading to the so-called ``solar abundance problem.'' Models that consider the accretion of gas with an evolving composition (i.e., include planet formation processes) do not improve the $\chi^2$ fit significantly. This is because the proto-Sun has an almost fully convective interior in the accretion phase, implying that the accreted gas is heavily diluted and therefore the changes in the structure of the present-day Sun are quite limited. Models involving helium-poor accretion, as proposed by \citet{Zhang+19}, indeed improve the fit. However, such models are probably unlikely because the giant planets in our Solar System contain significant amounts of helium in their interiors.
We note that other possibilities, such as extra mixing, were not tested in this work \citep[see][]{Christensen-Dalsgaard+18,Buldgen+19,Yang19}.
Our best models were found to be those with the \citetalias{Asplund+09} composition and a 12\%--18\% opacity increase centered at $T = 10^{6.4}$\,K.
This is slightly higher but qualitatively in good agreement with the high iron opacities measured by \citet{Bailey+15} at this temperature range.
The models with an opacity increase in the range 12\%--18\% represent better fits to the observations than those using old abundances, and are therefore a promising solution to the solar abundance problem.
The impact of planet formation on the solar structure can be examined by using the aforementioned models to globally fit the observational constraints and by modifying the parameters $M_1$, $M_2$, and $\Zaccmax$.
We find that despite the negligibly small effect on the sound speed profile (and therefore on the helioseismic constraints), planet formation processes lead to a limited but real (up to $5\%$) increase in the metallicity $\Zc$ of the deep solar interior ($r\la0.2\, R_\sun $).
The increase in $\Zc$ is smaller for a larger overshooting parameter $\fov$ but larger for a longer disk-accretion timescale $t\sub{acc}$.
Qualitatively, this is in accordance with recent solar neutrino measurements that appear to favor high-$Z$ values in the deep solar interior \citep{Agostini+18, Borexino-Collaboration20}.
We will examine this issue in future work.
We also constrained the primordial composition of the molecular cloud core that gave birth to the Sun by using models that best reproduced all the present-day constraints. We found that the protosolar metallicity $Z\sub{proto}$ ranged from 0.0127 to 0.0157; moreover, the retention of heavy elements due to planet formation yielded higher values of $Z\sub{proto}$, while significant disk winds led to lower values. Similarly, the protosolar helium mass fraction $Y\sub{proto}$
was found to increase slightly because of planet formation processes, ranging from 0.268 to 0.274.
In conclusion, we expect that a combined investigation of the solar interior, solar evolution models (in particular, those that include improved opacities and a more sophisticated treatment for mixing), and observational constraints (from surface abundances, helioseismic observations, and neutrino fluxes) will help constrain the processes that led to the formation of the Solar System.
\begin{acknowledgements}
This paper is dedicated to the memories of Rudolf Kippenhahn, whose diagrams were a source of inspiration for this work, and Jean-Paul Zahn, who taught one of us how to read them (upside-down!).
We are grateful to Ga\"{e}l Buldgen, J{\o}rgen Christensen-Dalsgaard, Johan Appelgren, and Vardan Elbakyan for kindly providing their simulation results. We also thank
Lionel Bigot for fruitful discussions and comments.
This work was supported by the JSPS KAKENHI Grant (number 20K14542), the Astrobiology Center Program of National Institutes of Natural Sciences (NINS; grant number AB301023), and a long-term visitor JSPS fellowship to TG.
Numerical computations were carried out on the PC cluster at the Center for Computational Astrophysics, National Astronomical Observatory of Japan, and in part on the ``Mesocentre SIGAMM'' machine hosted by the Observatoire de la C\^ote d'Azur.
\textit{Software}: \texttt{MESA} \citep[version 12115; ][]{Paxton+11,Paxton+13,Paxton+15,Paxton+18, Paxton+19}; Numpy \citep{vanderWalt+11}; and Scipy \citep{Virtanen+20}.
\end{acknowledgements}
\bibliographystyle{aa}
\section{Introduction}
\label{sec:intro}
As is well known, string and M-theory are higher-dimensional theories that can yield phenomenologically relevant four-dimensional models through compactification. The vast majority of these constructions use a Calabi--Yau (CY) manifold as the internal, compact space. There are many reasons for this: historically, the first four-dimensional, maximally symmetric and supersymmetric vacua were found by compactification of heterotic string theory on CY threefolds \cite{Candelas:1985en}, and since then the collective efforts of mathematicians and physicists have led to a deep understanding of these spaces. However, CY manifolds are not the most general manifolds that lead to supersymmetric four-dimensional vacua through string compactifications. Furthermore, once background fluxes and sources are introduced in the construction, CY manifolds generically fail to solve the Killing spinor equations required for supersymmetric vacua.\footnote{Calabi--Yau compactifications inevitably have moduli, which lead to phenomenological problems in the associated four-dimensional theory. Background fluxes provide one way of stabilising these moduli. For recent reviews on flux compactifications, see \cite{Grana:2005jc,Douglas:2006es,Blumenhagen:2006ci,Koerber:2010bx}.} Thus, by focusing only on CY manifolds, phenomenologically interesting flux compactifications might be missed, and premature conclusions on the properties of generic string vacua will be drawn.
$SU(3)$ structure manifolds provide a natural generalisation of CY manifolds; both types of manifolds allow a globally defined spinor, that reduces their structure groups to $SU(3)$. On CY manifolds, the spinor is in addition covariantly constant (with respect to the Levi--Civita connection), thus reducing the holonomy to $SU(3)$. A well-defined spinor is certainly needed to construct supersymmetric four-dimensional vacua, but demanding that it is covariantly constant is not necessary. It can be shown that the covariant derivative of the spinor vanishes if and only if the intrinsic torsion of the $SU(3)$ structure manifold is zero. This statement can be reformulated in terms of differential forms: bilinears of the spinor define a real two-form $J$ and a complex decomposable (3,0)-form $\Omega$ that fulfil
\begin{equation}
\Omega\wedge J=0 \; , \;
\Omega\wedge\overline{\Omega}=-\frac{4i}{3}J^3\neq 0 \; ,
\end{equation}
and are closed if and only if the torsion vanishes. Loosely speaking, the intrinsic torsion, which can be decomposed into five torsion classes $\mathcal W_i$, thus measures how far the manifold is from being CY. The precise definition of the torsion classes can be found in section \ref{sec:su3constr}.
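As an illustrative consistency check of these algebraic relations (our own numerical sketch, not part of the construction in this paper): on flat $\mathbb{C}^3$, with the standard choices $J=\frac{i}{2}\sum_k {\rm d} z^k\wedge{\rm d}\bar z^k$ and $\Omega={\rm d} z^1\wedge{\rm d} z^2\wedge{\rm d} z^3$ assumed as conventions, both conditions can be verified with a small exterior-algebra routine:

```python
# Forms are dicts: strictly increasing index tuple -> complex coefficient.
# Basis one-forms: indices 0-2 are dz^1..dz^3, indices 3-5 are dzbar^1..dzbar^3.

def sort_sign(idx):
    """Sort an index tuple, returning (sorted tuple, permutation sign);
    None if an index repeats (the wedge product then vanishes)."""
    if len(set(idx)) < len(idx):
        return None
    idx, sign = list(idx), 1
    for i in range(len(idx)):          # bubble sort, counting swaps
        for j in range(len(idx) - 1 - i):
            if idx[j] > idx[j + 1]:
                idx[j], idx[j + 1] = idx[j + 1], idx[j]
                sign = -sign
    return tuple(idx), sign

def wedge(a, b):
    out = {}
    for ia, ca in a.items():
        for ib, cb in b.items():
            s = sort_sign(ia + ib)
            if s is not None:
                key, sgn = s
                out[key] = out.get(key, 0) + sgn * ca * cb
    return {k: v for k, v in out.items() if v != 0}

dz = [{(k,): 1} for k in range(3)]
dzb = [{(k + 3,): 1} for k in range(3)]

J = {}                                  # J = (i/2) sum_k dz^k ^ dzbar^k
for k in range(3):
    for key, v in wedge(dz[k], dzb[k]).items():
        J[key] = J.get(key, 0) + 0.5j * v

Omega = wedge(wedge(dz[0], dz[1]), dz[2])
Omega_bar = wedge(wedge(dzb[0], dzb[1]), dzb[2])
J3 = wedge(wedge(J, J), J)

print(wedge(Omega, J))                                    # {} : Omega ^ J = 0
print(wedge(Omega, Omega_bar) ==
      {k: -4j / 3 * v for k, v in J3.items()})            # True
```

On a generic $SU(3)$ structure manifold the coefficients become position-dependent, but the pointwise algebra is the same.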
The intrinsic torsion means that the properties of generic $SU(3)$ structure and CY manifolds differ radically. The first two torsion classes, $\mathcal W_1$ and $\mathcal W_2$, are associated with the Nijenhuis tensor of the manifold, and as long as either of them is non-zero the almost complex structure of the manifold fails to be integrable. This implies that generic $SU(3)$ structure manifolds cannot be analysed using algebraic geometry. As a consequence, it has proven quite difficult to construct explicit examples of $SU(3)$ structure manifolds, and this scarcity of examples has left important aspects of flux compactifications in obscurity. While four-dimensional vacua can be found, the effective field theories that describe fluctuations around these vacua are difficult to obtain. Indeed, some properties of the vacua, such as the existence of moduli, are often hard to determine. To shed more light on these constructions, a better understanding of the dimensional reduction on $SU(3)$ structure manifolds is required. Finding more examples is of essence to meet this goal.
As the non-integrability of the almost complex structure is an obstacle in the construction of examples, much would be gained if already known complex manifolds could be shown to admit a second, non-integrable almost complex structure, that is associated with an $SU(3)$ structure. This has indeed been demonstrated for twistor spaces and was used by Tomasiello to show that $\mathbb{CP}^3$ and $\mathbb{CP}^1 \hookrightarrow \mathbb{CP}^2$ allow a half-flat $SU(3)$ structure \cite{Tomasiello:2007eq} (see also \cite{Xu:2006math}). Since both $\mathbb{CP}^3$ and $\mathbb{CP}^1 \hookrightarrow \mathbb{CP}^2$ are smooth, compact toric varieties (SCTV), it was proposed by L\"ust, Tsimpis and the present author that other toric varieties may also admit two almost complex structures, and that $SU(3)$ structures may exist also on these varieties \cite{Larfors:2010wb}. Since there are infinitely many SCTVs, such a construction holds the promise of substantially expanding the set of $SU(3)$ structure examples.
The purpose of this paper is to extend the studies of \cite{Larfors:2010wb} in two respects. First, we discuss the topological constraint that a manifold must fulfil to allow an $SU(3)$ structure. In order to allow a nowhere vanishing spinor, the second Stiefel--Whitney class of the manifold must be trivial. This can be reformulated as a constraint on the first Chern class $c_1$: only manifolds with even $c_1$ allow $SU(3)$ structures. On a toric variety, $c_1$ is easily computed as a sum of divisors, rendering the check of this topological constraint almost trivial, as recently noted by Dabholkar \cite{Dabholkar:2013qz}. We will extend the analysis of \cite{Dabholkar:2013qz} to different classes of SCTVs studied by Oda \cite{oda}, and show that $SU(3)$ structures can be constructed on all toric $\mathbb{CP}^1$ fibrations. This is an infinite number of varieties.
Our second objective is to study the torsion classes of toric $SU(3)$ structures. We construct the defining forms $J$ and $\Omega$ following the method of \cite{Larfors:2010wb}. In order to decompose ${\rm d} J$ and ${\rm d} \Omega$ into torsion classes we need to compute contractions, which requires a manageable expression for the $SU(3)$ structure metric. This metric is in general different from the metric that the SCTV inherits from the ambient space $\mathbb{C}^n$. We will provide a compact expression for it in section \ref{sec:metric}, that will be used to explicitly compute all torsion classes for example varieties in section \ref{sec:Kuniq}.
In any string theory compactification, the torsion classes will be constrained by the supersymmetry variations, equations of motion and Bianchi identities. In particular, there is an interesting and useful connection between the supersymmetry variations and the geometry: imposing that the variations are zero leads to necessary conditions on the $SU(3)$ structure. A similar reasoning can be made for certain non-supersymmetric vacua. Consequently, once we have computed the torsion classes, we can see if the manifold is suitable for known string compactifications. In addition to investigating if $SU(3)$ structures meet such necessary constraints for example SCTVs, we will check whether there are choices for the parameters of the construction that imply that such conditions are met in general. We find that the metric plays an important role in this analysis, as additional parameter bounds arise from the requirement of metric positivity.
The rest of this paper is organised as follows. Section \ref{sec:survey} contains a brief survey on the role of $SU(3)$ structure manifolds in the construction of string vacua, that is provided for readers less familiar with this literature. In section \ref{sec:sctv} we present the smooth compact toric varieties, and in section \ref{sec:toricsu3} we discuss topological conditions for, and review the construction of, $SU(3)$ structures on these manifolds. The almost complex structure and metric of the $SU(3)$ structure are computed in section \ref{sec:metric}, and general properties of the torsion classes on SCTVs are discussed in section \ref{sec:torsion}. $SU(3)$ structures on specific example manifolds are discussed in section \ref{sec:Kuniq}, and the explicit torsion classes are presented for two examples. Section \ref{sec:conclusion} contains concluding remarks and ideas for future work. Appendix \ref{sec:Mori} reviews some aspects of K\"ahler and Mori cones that are used in the construction of $SU(3)$ structures. Our conventions regarding differential forms, wedge products and contractions follow \cite{Gray:2012md}, and are summarised in appendix A of that paper.
\section{Which $SU(3)$ structures are relevant for string vacua?}
\label{sec:survey}
Since the seminal work of Strominger \cite{Strominger:1986uh}, manifolds with torsionful $SU(3)$ structures have been used to construct string vacua. The literature on the subject is by now quite vast, and it can be difficult to keep track of the constraints that are relevant for different compactifications. To put our exploration of toric $SU(3)$ structures in context, we therefore recall some of these constructions. In these scenarios, the ten-dimensional Killing spinor equations and/or equations of motion, in combination with the Bianchi identities, lead to torsion constraints that can be effectively derived using the language of generalised geometry \cite{Hitchin:2000jd,h02,g04}. We review these constraints here. To keep the length of this section within reasonable boundaries, we limit our survey to maximally symmetric four-dimensional vacua, and do not discuss non-classical corrections to the solutions.\footnote{This brief review cannot do justice to all the work done on flux compactifications on $SU(3)$ structure manifolds, and leaves out constructions using $SU(2)$ and $SU(3)\times SU(3)$ structures. A more complete list of references can be found in \cite{Grana:2005jc,Douglas:2006es,Blumenhagen:2006ci,Koerber:2010bx}. $SU(3)$ structure manifolds also play a key role in heterotic domain wall compactifications, where the constraints on the torsion classes were recently derived in \cite{Lukas:2010mf,Gray:2012md}.}
\subsection{Maximally symmetric $\mathcal N=1$ vacua}
\begin{table}[tb]
\begin{tabular}{| l | l | l |}
\hline
String vacuum & Vanishing torsion classes & $SU(3)$ type \\
\hline
\hline
Heterotic, Type IIB ($\mathcal N=1$ Mkw) & $\mathcal{W}_1, \mathcal{W}_2$& Complex\\
\hline
Type IIA ($\mathcal N=1$ Mkw) & $\mathcal{W}_{1}, \mathcal{W}_3, \mathcal{W}_4$ & Symplectic \\
\hline
Type IIA ($\mathcal N=1$ AdS) & $\mathcal{W}_{3}, \mathcal{W}_4, \mathcal{W}_5$ & Restricted half-flat \\
\hline
\end{tabular}
\caption{\it The Killing spinor equations for four-dimensional $\mathcal N=1$ string vacua require that the $SU(3)$ torsion classes satisfy the constraints listed in this table. These torsion class constraints are necessary but not sufficient.} \label{tab:su3N1}
\end{table}
The $SU(3)$ structure manifolds that lead to maximally symmetric $\mathcal N=1$ vacua can be completely classified. The $\mathcal N=1$ Killing spinor equations are very constraining, and give necessary conditions for the torsion classes that are summarised in table \ref{tab:su3N1}. In addition, integrability statements can be made that show that the equations of motion are implied by the supersymmetry constraints and Bianchi identities \cite{Lust:2004ig,Gauntlett:2005ww,Koerber:2007hd}. Thus, it is enough to solve the latter to show that a string vacuum exists. Since supersymmetry guarantees stability, such vacua are non-tachyonic but may have flat directions.
The L\"ust and Tsimpis vacua of type IIA supergravity \cite{Lust:2004ig} are arguably the simplest string vacua on $SU(3)$ structure manifolds (see also \cite{Behrndt:2004km,Behrndt:2004mj}). These are four-dimensional, AdS vacua that preserve $\mathcal N=1$ supersymmetry.\footnote{Here ``four-dimensional'' primarily indicates that the metric is block-diagonal, as these solutions, and the $\mathcal N=0$ AdS vacua discussed in the next section, generically lack a separation of scales \cite{Tsimpis:2012tu,McOrist:2012yc}. } If the fluxes of type IIA supergravity are chosen accordingly, it can be shown that the Killing spinor equations are solved by $SU(3)$ structures that are restricted half-flat, i.~e.~$\mathcal W_3, \mathcal W_4$ and $\mathcal W_5$ are all zero. The Bianchi identities for the background fluxes further impose a differential constraint on $\mathcal W_2$. Using the integrability results discussed above, it can be shown that the constraints on the torsion classes are necessary and sufficient.
It is also possible to construct $\mathcal N=1$ Minkowski vacua on $SU(3)$ structure manifolds. The oldest vacua of this type are the Strominger solutions of heterotic string theory \cite{Strominger:1986uh}, which require $SU(3)$ structure manifolds that are complex, $\mathcal W_1 = \mathcal W_2 = 0$, and have exact $\mathcal W_4 = 2 \mathcal W_5$ \cite{Cardoso:2002hd}. If the third torsion class is non-zero, there must be a non-zero Neveu--Schwarz (NSNS) flux $H$, whose Bianchi identity yields an extra constraint on the geometry.
The first $\mathcal N=1$ Minkowski vacua of type II string theory compactified on $SU(3)$ structure manifolds were found by Giddings, Kachru and Polchinski (GKP) \cite{Giddings:2001yu}, and the full set of such vacua has been classified by Gra\~na and collaborators \cite{Grana:2004bg}. In addition to solutions of the Strominger type for both type II theories, type IIA allows $\mathcal N=1$ Minkowski vacua on symplectic manifolds, $\mathcal W_1 = \mathcal W_3 = \mathcal W_4 = 0$, if furthermore $\mathcal W_5$ is exact. To circumvent the Maldacena--Nunez no-go theorem \cite{Maldacena:2000mw}, orientifold six-planes (O6) have to be added to the construction. Type IIB allows $\mathcal N=1$ Minkowski vacua on complex manifolds, $\mathcal W_1 = \mathcal W_2 = 0$. With O3 planes, $\mathcal W_3$ must also be zero and $\mathcal W_4$ and $\mathcal W_5$ are proportional (this includes the GKP vacua). If instead O5 planes are used, $\mathcal W_3$ need not be zero. For all three type II $\mathcal N=1$ vacua, the Bianchi identities give differential constraints on the torsion classes, in addition to the necessary constraints just discussed.
Before we close this section, a few comments on orientifolds are in order. As already mentioned, there are no-go theorems in flux compactifications; a four-dimensional spacetime with non-negative cosmological constant is only possible when sources balance the charge and tension induced by the flux in the compact space
\cite{Maldacena:2000mw}. In type II string theory, O$p$ planes provide such sources, and are thus necessary ingredients in the Minkowski vacua just described. Moreover, approximating the O$p$ planes as smeared sources relaxes the differential conditions on the torsion classes that come from the Bianchi identities, so that solutions are easier to find. This approximation has been used to construct examples of both the supersymmetric solutions just discussed, and the non-supersymmetric ones we turn to in the next section. Whether such smeared orientifolds can be localised is, however, not always clear, and recent discussions of this issue can be found in \cite{Blaback:2010sj,Saracco:2012wc,McOrist:2012yc,Maxfield:2013wka}.
\subsection{Maximally symmetric $\mathcal N=0$ vacua}
A complete classification of the $SU(3)$ structures that are relevant for non-supersymmetric string vacua does not exist. These vacua are more difficult to analyse than their supersymmetric cousins; to find generic solutions one must solve the second-order ten-dimensional equations of motion, rather than the first-order Killing spinor equations. In addition to this increased complexity, there is no guarantee of the stability of generic vacua.
\begin{table}[tb]
\hspace{-0.5cm}
\begin{tabular}{| l | l | l |}
\hline
String vacuum & Constraints on torsion classes & $SU(3)$ type \\
\hline
\hline
Type IIB (O3) &$\mathcal W_1, \mathcal W_2, \mathcal W_3$ vanishing; $3\mathcal W_4=2\mathcal W_5$ exact & Conformal CY\\
\hline
Type IIB (O5) &$\mathcal W_2 = 2 \mathcal W_1(J_B-2J_F)$; $\mathcal W_4=0$; $\mathcal W_5$ exact & 2/4 split: $J = J_B+J_F$\\
\hline
Type IIA (O6) &$\mathcal W_3 = \frac{3}{2} \mathcal W_1(\mbox{Im}\Omega-4\mbox{Im}\Omega_B)$; $\mathcal W_4=0$; $\mathcal W_5$ exact& 3/3 split: $\Omega = \Omega_B+\Omega_F$\\
\hline
Heterotic & $\mathcal W_2=\mathcal W_1(J_B-2J_F)$; $\mathcal W_5=2\mathcal W_4$ exact & 2/4 split: $J = J_B+J_F$\\
\hline
\end{tabular}
\caption{\it Four-dimensional $\mathcal N=0$ Minkowski string vacua of no-scale type require calibrated $SU(3)$ structures of the type listed in this table. Some calibrations give the manifold a fibration structure that splits $J$ and $\Omega$ into components along the base and fibre. These torsion class constraints are necessary but not sufficient.}
\label{tab:su3N0}
\end{table}
On $SU(3)$ structure manifolds, however, one can construct classes of maximally symmetric $\mathcal N=0$ vacua that break supersymmetry in a controllable way. By only giving up a subset of the Killing spinor equations, one can obtain Minkowski vacua whose stability is guaranteed by a no-scale structure (\ie~the potential is positive semidefinite; relaxing this leads to weaker torsion class constraints). More specifically, these $\mathcal N=0$ vacua admit stable D or NS5 branes, a condition that can be rephrased in terms of calibrations. GKP constructed the first vacuum of this type by compactifying type IIB/F-theory with O3 planes on conformally CY manifolds, \ie~an $SU(3)$ structure manifold with $\mathcal W_1, \mathcal W_2, \mathcal W_3$ vanishing and $3\mathcal W_4=2\mathcal W_5$ exact \cite{Giddings:2001yu}. Additional type II vacua of this type were studied by Camara and Gra\~na \cite{Camara:2007cz}, and classified using calibrations by L\"ust and collaborators \cite{Lust:2008zd}. Similar solutions have been found in heterotic string theory \cite{Held:2010az}, and we summarise the calibration conditions for the torsion classes in table \ref{tab:su3N0}. For these non-supersymmetric vacua, the integrability results are weakened and not all equations of motion are implied by the Killing spinor equations and Bianchi identities. Consequently, both the Bianchi identities and one constraint from the equations of motion must be checked, in addition to the conditions in table \ref{tab:su3N0}. Once these constraints are satisfied, the stability of the vacua is guaranteed.
Calibrated $\mathcal N=0$ AdS vacua can also be found on $SU(3)$ structure manifolds. Romans constructed AdS vacua of massive type IIA supergravity using complex ($\mathcal W_1=0=\mathcal W_2$) or nearly K\"ahler (only $\mathcal W_1$ non-vanishing) $SU(3)$ structures \cite{Romans:1985tz}. An extensive study of source-free type IIA vacua on nearly K\"ahler manifolds can be found in \cite{Lust:2008zd}.
Finally, $\mathcal N=0$ maximally symmetric solutions can be of dS type. While being phenomenologically very interesting, these vacua are extremely difficult to control: they are necessarily non-supersymmetric and there is no guarantee of their perturbative stability.\footnote{Recall that dS vacua are at most metastable in theories that also allow Minkowski and AdS vacua.} Thus, for every putative vacuum, one must check if it has tachyonic directions. This analysis is model-dependent and four-dimensional, and does not result in torsion class constraints. Nevertheless, by focusing on moduli that are common to sets of models, generic no-go theorems can be derived \cite{Ihl:2007ah,hktt08,Caviezel:2008tf,Flauger:2008ad,Haque:2008jz,Shiu:2011zt}. For type IIA compactifications with O6 planes, it has been argued that Neveu--Schwarz and Ramond--Ramond fluxes (including a Romans mass) and a negative scalar curvature for the internal manifold are needed to avoid the no-go theorems \cite{Haque:2008jz}.\footnote{See \cite{Silverstein:2007ac} for an early discussion on how a negative scalar curvature helps in achieving four-dimensional dS solutions, and \cite{Douglas:2010rt} for a general discussion on compactifications on negatively curved manifolds.} On an $SU(3)$ structure manifold, the curvature is given by the torsion classes \cite{Bedulli2007}
\begin{equation}
2\mathcal{R} = 15 |\mathcal W_1|^2 - |\mathcal W_2|^2 - |\mathcal W_3|^2 +
8\langle \mathcal W_5, \mathcal W_4 \rangle - 2|\mathcal W_4|^2 + 4 {\rm d} * (\mathcal W_4 + \mathcal W_5) \; .
\end{equation}
$\mathcal{R}$ can be positive or negative: the nearly-K\"ahler case is an example of the former, and the symplectic case is often an instance of the latter.
Despite these caveats, proposals exist for type IIA dS vacua on $SU(3)$ structure manifolds. In a study by Andriot and collaborators, dS solutions were found on a symplectic solvmanifold with vanishing $\mathcal W_5$,\footnote{The stability of the solution is not demonstrated.} and it was noted that dS solutions might also be allowed on less constrained manifolds, which only require constant $\mathcal W_1$ and vanishing $\mathcal W_4$ \cite{Andriot:2010ju}. Another approach has been taken by Danielsson and collaborators, who have analysed dS solutions on manifolds with half-flat $SU(3)$ structures \cite{Danielsson:2009ff,Danielsson:2011au}. Examples of such solutions have been found on half-flat coset manifolds with vanishing $\mathcal W_2$, but all suffer from perturbative instabilities \cite{Caviezel:2008tf,Danielsson:2011au}.\footnote{Complementary four-dimensional studies demonstrate that these dS solutions are not included among the four-dimensional gauged supergravities that have stable dS vacua \cite{Danielsson:2012by}.}
\section{Smooth compact toric varieties} \label{sec:sctv}
In this section we summarise the construction of smooth compact toric varieties, largely following \cite{Larfors:2010wb} to which we refer for more details. Toric varieties are usually discussed in terms of fans (or polytopes).\footnote{A standard reference on toric geometry is \cite{fulton}, and for recent physicist-friendly reviews we refer the reader to \cite{Reffert:2007im,Denef:2008wq,Knapp:2011ip}, and section 2 in \cite{Chialva:2007sv}.} Alternatively, they can be described as the supersymmetric moduli space of a gauged linear sigma model (GLSM). If $\{z^i, ~i=1,\dots n\}$ are holomorphic coordinates of $\mathbb{C}^n$, let
\begin{equation} \label{u1action}
z^i\longrightarrow e^{i\varphi_aQ^a_i}z^i
\end{equation}
be a $U(1)^s$ action on $\mathbb{C}^n$. The symplectic quotient
\begin{equation} \label{kmoddef}
\mathcal{M}_{2d}=\{
z^i\in\mathbb{C}^n | \sum_{i=1}^nQ^a_i|z^i|^2=\xi^a
\}/U(1)^s~
\end{equation}
then defines a toric variety of real dimension $2d = 2(n-s)$. This is a complex variety, with local holomorphic coordinates given by $U(1)$ invariant combinations of $z^i$. The $U(1)$ charges $Q^a_i$, which completely determine the toric variety, are related to the fundamental generators $v_i$ of the fan associated to the toric variety by
\begin{equation} \label{connection}
\sum_{i=1}^nQ_i^{a} v_i=0~,
\end{equation}
for $a=1,\dots,s=n-d$. Using this relation, one can pass between the GLSM and fan descriptions of an SCTV. The fan description is useful when classifying toric varieties, as it translates features like smoothness and compactness into easily accessible properties of the fan.
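As a concrete illustration of the relation \eqref{connection} (a minimal numerical sketch, assuming NumPy; not part of the construction itself): the fan of $\mathbb{CP}^3$ has the four fundamental generators $e_1, e_2, e_3$ and $-e_1-e_2-e_3$, and the single $U(1)$ charge vector $Q=(1,1,1,1)$ annihilates them.

```python
import numpy as np

# Fan generators v_i of CP^3: the standard basis vectors of Z^3
# together with v_4 = -e_1 - e_2 - e_3.
v = np.array([[1, 0, 0],
              [0, 1, 0],
              [0, 0, 1],
              [-1, -1, -1]])

# U(1) charge matrix Q (a single U(1), so s = 1): Q = (1, 1, 1, 1).
Q = np.array([[1, 1, 1, 1]])

# The GLSM/fan relation sum_i Q^a_i v_i = 0 for each a.
relation = Q @ v   # shape (s, d) = (1, 3)
assert np.all(relation == 0)
```

The real dimension also checks out: $2d = 2(n - s) = 2(4 - 1) = 6$, as expected for $\mathbb{CP}^3$.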
The GLSM description, on the other hand, facilitates the discussion of SCTV $SU(3)$ structures since differential forms can easily be constructed. Any differential form $\Phi$ on $\mathbb{C}^n$ restricts to a well-defined form $\Phi|$ on $\mathcal{M}_{2d}$ if it is vertical
\begin{equation} \label{vertical}
\iota_{V^a}\Phi=\iota_{\bar{V}^a}\Phi=0 \; ,
\end{equation}
for $a=1,\dots, s$, and invariant
\begin{equation} \label{invariant}
\mathcal{L}_{\mathrm{Im}V^a}\Phi=0 \; .
\end{equation}
Here $\mathcal{L}_{V}$ is the Lie derivative with respect to the
vector $V$, and $V^a$ are the holomorphic vector fields that generate the $U(1)^s$ action
\begin{equation} \label{holvec}
V^a:=\sum_{i}Q^a_iz^i\partial_{z_i} \; .
\end{equation}
In particular, the toric variety inherits a K\"ahler form from the standard K\"ahler form of $\mathbb{C}^n$, by projecting to its vertical component
\begin{equation} \label{eq:Jtilde}
\widetilde{J}:= \frac{i}{2} P\left(\sum_{i=1}^n {\rm d} z^i \wedge {\rm d} \bar{z}^i \right) = \frac{i}{2} \sum_{i=1}^n \mathcal D z^i \wedge \mathcal D \bar{z}^i \; ,
\end{equation}
where $P$ is a projector, and the vertical component of ${\rm d} z^i$ is denoted by $\mathcal{D}z^i$. Explicitly, we have
\begin{equation} \label{Dz}
\mathcal{D}z^i=P_{ij}dz^j \; ,
\end{equation}
where
\begin{equation} \label{projector}
P_{ij}
=\delta_{ij}-Q^a_iQ^b_j\tilde{g}_{ab} z^i\bar{z}^j ~~~~~
\mbox{ (no sum on $i, j$)}
\end{equation}
and $\tilde{g}_{ab}$ is the inverse of the real symmetric matrix
\begin{equation} \label{gab}
g_{ab}=\sum_i Q^a_iQ^b_i|z^i|^2 \; .
\end{equation}
Although $\mathcal{D}z^i$ are not globally defined on the toric variety, it is straightforward to check that the combination $\widetilde{J}$ is both vertical and invariant. Furthermore, the restriction $\widetilde{J}|$ of $\widetilde{J}$ is closed, since any well-defined form satisfies
\begin{equation} \label{arm}
d(\Phi|)=P(d\Phi)|
~.
\end{equation}
A set of well-defined one-forms is given by $\bar{z}_i\mathcal{D}z^i$. Naturally, there can only be three linearly independent (1,0)-forms on a manifold of three complex dimensions, a fact that is ensured by the moment maps which lead to the constraints
\begin{equation}
\sum_{i=1}^nQ^a_i\bar{z}_i\mathcal{D}z^i = 0~;~~~a=1,\dots,n-d~.
\end{equation}
These constraints are imposed when restricting forms to the SCTV.
In summary, a smooth compact toric variety ${\cal M}$ is complex and K\"ahler, with K\"ahler form $\widetilde{J}|$ and metric $\tilde{G}$ inherited from the corresponding canonical structures on the ambient space. The Fayet--Iliopoulos parameters $\xi^a$ in \eqref{kmoddef} are the K\"ahler moduli of the variety, and so \eqref{kmoddef} really describes a family of toric varieties. Inside the K\"ahler cone in the moduli space, $\xi^a$ are larger than zero and the manifold is regular. Moreover, the Betti numbers of any $d$-dimensional toric variety are known: the odd ones are all zero and the even ones are given by
\begin{equation}
b_{2k} = \sum_{j=k}^d (-1)^{j-k} {j \choose k} d_{d-j} \; ,
\end{equation}
where $d_k$ is the number of $k$-dimensional cones in the fan (see section 4.5 in \cite{fulton} for a proof).
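As a quick cross-check of this Betti number formula (a sketch; the helper name is ours), the fan of $\mathbb{CP}^3$ has $d_0=1$, $d_1=4$, $d_2=6$ and $d_3=4$, which reproduces $b_0=b_2=b_4=b_6=1$:

```python
from math import comb

def betti(cone_counts):
    """Even Betti numbers b_{2k} of a d-dimensional toric variety,
    given the numbers d_k of k-dimensional cones in its fan."""
    d = len(cone_counts) - 1
    return [sum((-1) ** (j - k) * comb(j, k) * cone_counts[d - j]
                for j in range(k, d + 1))
            for k in range(d + 1)]

# The fan of CP^3: one 0-cone, four rays, six 2-cones, four 3-cones.
print(betti([1, 4, 6, 4]))   # -> [1, 1, 1, 1]
```

The odd Betti numbers vanish identically, so these four numbers give the full cohomology of $\mathbb{CP}^3$.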
The triplet $({\cal M}, \widetilde{G}, \widetilde{J}|)$ defines a $U(3)$ structure on ${\cal M}$. In the next section we will discuss when the structure group can be further reduced to $SU(3)$, but before we do so, we recall that three-dimensional smooth compact toric varieties with up to eight fundamental generators have been classified by Oda \cite{oda}. This classification is based on the weighted triangulation that is created when a two-sphere intersects the three-dimensional fan associated to an SCTV, and was reviewed in detail in \cite{Larfors:2010wb}. Here we focus on three types of three-dimensional SCTVs that are specified by their $U(1)$ charges
\begin{itemize}
\item $\mathbb{CP}^3$\\
\begin{equation} \label{eq:qcp3}
Q = \begin{pmatrix}1&1&1&1\end{pmatrix}\; ,
\end{equation}
\item $\mathbb{CP}^2$ bundles over $\mathbb{CP}^1$\\
\begin{equation} \label{eq:qab}
Q=\begin{pmatrix}
1&1&a&b&0\\
0&0&1&1&1
\end{pmatrix}\; ,
\end{equation}
where $a$, $b$ are integers specifying the
`twisting' of the $\mathbb{CP}^2$ bundle.
\item $\mathbb{CP}^1$ bundles over two-dimensional SCTVs\\
\begin{equation} \label{eq:qci}
Q=\begin{pmatrix}
q^1_1&\dots&q^1_{n-2}&n^1&0\\
\dots & \dots & \dots & \dots& \dots\\
q^{s-1}_1&\dots&q^{s-1}_{n-2}&n^{s-1}&0\\
0&\dots&0&1&1
\end{pmatrix}\; ,
\end{equation}
where $n^a\in\mathbb{Z}$, $a=1,\dots,s-1$, are integers specifying the `twisting' of the $\mathbb{CP}^1$ bundle. The charge components $q^a_i$ are the $U(1)$ charges of the two-dimensional SCTV.
\end{itemize}
Note that the last class of SCTVs is infinite, as there are infinitely many two-dimensional SCTVs. These two-dimensional varieties are completely classified, and can be constructed by blow-ups of $\mathbb{CP}^2$ or the Hirzebruch surface $\mathbb{F}_a$ ($a=0,1,2,...$) \cite{oda}.
\section{Constructing toric $SU(3)$ structures}
\label{sec:toricsu3}
In this section we review and extend the $SU(3)$ structure construction of \cite{Larfors:2010wb}. In addition, we specify the topological restrictions for the existence of toric $SU(3)$ structures. A recent discussion of some of these topological aspects can be found in \cite{Dabholkar:2013qz}.
\subsection{Topological constraints}\label{sec:su3top}
In the last section we found that all three-dimensional SCTVs admit a $U(3)$ structure, specified by the triplet $({\cal M}, \widetilde{G}, \widetilde{J}|)$. An $SU(3)$ structure is possible if there exists a pair of nowhere vanishing forms on ${\cal M}$ that satisfy
\begin{equation} \label{eq:su3cond}
\begin{split}
\Omega\wedge J&=0 \, , \\
\Omega\wedge\overline{\Omega}&=-\frac{4i}{3}J^3\neq 0 \; ,
\end{split}
\end{equation}
where $J$ is a real two-form and $\Omega$ is a complex decomposable three-form. The real two-form $\widetilde{J}|$ must thus be complemented with a nowhere-vanishing three-form if a further reduction of the structure group is to take place. This is a topological restriction on the manifold, which is usually formulated as the requirement that the manifold has vanishing first Chern class $c_1 \equiv c_1(T^{(1,0)} {\cal M})$.
However, $c_1$ is not quite a topological quantity, since it depends on the choice of holomorphic tangent bundle and consequently on the choice of almost complex structure. A topological condition that is independent of this choice exists: as long as $c_1$ is even in cohomology, so that the manifold is spin, the SCTV allows an $SU(3)$ structure.\footnote{This existence argument is not restricted to toric varieties (see \eg~\cite{bryant}): any oriented, spin six-manifold allows a reduction of the structure group to $SU(3)$, as can be seen by analysis of the spin bundle. The $SU(3)$ torsion is not specified by this construction. I thank Robert Bryant for explaining this point to me.} Indeed, as we will see below, by changing the almost complex structure, we can set $c_1=0$ as is necessary to allow a nowhere-vanishing three-form \cite{Tomasiello:2007eq,Larfors:2010wb}. This is only possible if $c_1$ is even to start with, a condition that is independent of the almost complex structure. For a toric variety, the total Chern class $c=1+c_1 + c_2 + ...+ c_d$, where $c_i \in \Omega^{2i}({\cal M})$, is determined by
\begin{equation}
c = \prod_{i=1}^n (1 + D_i) \; ,
\end{equation}
where $D_i$ are the Poincar\'e duals of the divisors $D_i: z^i = 0$. The first Chern class is thus given by the sum
\begin{equation}
c_1 = \sum_{i=1}^n D_i \; .
\end{equation}
A $d$-dimensional toric variety has $s$ linearly independent divisors, corresponding to the linearly independent columns in the $U(1)$ charge matrix $Q$. Consequently, $c_1$ can be expressed as a sum of the linearly independent divisors, and will be even if the coefficients of this sum are all even. Changing the almost complex structure can change the signs of these coefficients, so that they cancel rather than add up, but it cannot change whether they are even or odd.
For the SCTVs classified by Oda, we have, with $a,b,n^a$ as in \eqref{eq:qab}-\eqref{eq:qci}
\begin{itemize}
\item $\mathbb{CP}^3$:
\begin{equation}
c_1 = 4 D_1 \; .
\end{equation}
\item $\mathbb{CP}^2$ bundles over $\mathbb{CP}^1$\\
\begin{equation}
c_1 = (2+a+b) D_1 + 3 D_5 \; .
\end{equation}
\item $\mathbb{CP}^1$ bundles over two-dimensional SCTVs\\
\begin{equation}
c_1 = \sum_{a=1}^{n-2} (1+n^a) D_a + 2 D_n \; ,
\end{equation}
where the first sum will be simplified further once the charge components $q^a_i$ for the two-dimensional SCTV are given, since these give the linear relations between the first $n-2$ divisors.
\end{itemize}
From these values, it is immediately clear that $\mathbb{CP}^3$ has an even first Chern class, and that $\mathbb{CP}^2$ bundles over $\mathbb{CP}^1$ always have odd first Chern class. For $\mathbb{CP}^1$ bundles over two-dimensional SCTVs $c_1$ depends on the twisting parameters $n^a$. It is not difficult to see that one can always choose $n^a$ such that all coefficients are even: let $D_n, D_1, \ldots, D_{s-1}$ be a linearly independent basis of divisors. Then
\begin{equation}
c_1 = \sum_{a=1}^{n-2} (1+n^a) D_a + 2 D_n = \sum_{a=1}^{s-1} \left(1+\sum_{i=s}^{n-2} q_i^a + n^a \right) D_a + 2 D_n
\; ,
\end{equation}
and by choosing, say, $n^a = 1+\sum_{i=s}^{n-2} q_i^a$ all coefficients in the sum are even.
In conclusion, of these three types of SCTVs, only two allow $SU(3)$ structures: $\mathbb{CP}^3$ and $\mathbb{CP}^1$ bundles over two-dimensional SCTVs. In the latter case an $SU(3)$ structure is allowed when the twist parameters of the $\mathbb{CP}^1$ bundles are chosen to appropriate values. Such a choice is always possible.
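These parity statements can be checked mechanically. The sketch below (the function name is ours, and the $\mathbb{CP}^1$-bundle numbers assume a $\mathbb{CP}^2$ base with charges $q^1=(1,1,1)$, so $s=2$ and $n=5$) verifies the three cases just discussed:

```python
def c1_is_even(coeffs):
    """Parity check: c_1 is even in cohomology iff all of its
    coefficients in a basis of linearly independent divisors are even."""
    return all(c % 2 == 0 for c in coeffs)

# CP^3: c_1 = 4 D_1 is always even.
assert c1_is_even([4])

# CP^2 bundles over CP^1: c_1 = (2+a+b) D_1 + 3 D_5; the odd
# coefficient 3 cannot be removed by any choice of the twists a, b.
assert not any(c1_is_even([2 + a + b, 3])
               for a in range(-5, 6) for b in range(-5, 6))

# CP^1 bundle over CP^2: the basis coefficient is
# 1 + (q^1_2 + q^1_3) + n^1, and the twist choice
# n^1 = 1 + q^1_2 + q^1_3 from the text makes it even.
qsum = 1 + 1
n1 = 1 + qsum
assert c1_is_even([1 + qsum + n1, 2])
```

With this twist choice the coefficient equals $2(1+q^1_2+q^1_3)=6$, in line with the general argument above.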
\subsection{Construction of $J$ and $\Omega$}
\label{sec:su3constr}
If we want to use an SCTV for the purpose of string theory compactifications, it is not enough to know that it permits an $SU(3)$ structure. We also need information about its torsion classes, which are given by the exterior derivatives of $J$ and $\Omega$ \cite{chiossi,Cardoso:2002hd}
\bea
\label{eq:torsionclass}
d J&=-\frac{3}{2}\mbox{Im}(\mathcal{W}_1\overline{\Omega})+\mathcal{W}_4\wedge J+\mathcal{W}_3 \, , \\ \nonumber
d \Omega&= \mathcal{W}_1 J\wedge J+\mathcal{W}_2 \wedge J+\mathcal{W}_5 \wedge \Omega ~,
\eea
where $\mathcal{W}_1$ is a function, $\mathcal{W}_2$ is a primitive (1,1)-form and $\mathcal{W}_3$ is a real
primitive $(1,2)+(2,1)$-form. Here, primitivity means that the form contracts to zero with $J$. The Lie forms $\mathcal{W}_4$, $\mathcal{W}_5$ are both real one-forms.\footnote{It is only the (0,1) piece of $\mathcal W_5$ that contributes to \eqref{eq:torsionclass}, so an alternative definition as a complex (1,0)-form is common. Since a real one-form and a complex (1,0)-form carry the same number of degrees of freedom, the two definitions are exchangeable.} For a Calabi--Yau manifold, all torsion classes are zero.
To determine the torsion classes we thus need explicit expressions for $J$ and $\Omega$, which we construct following \cite{Tomasiello:2007eq,Gaiotto:2009yz,Larfors:2010wb}. As discussed in section \ref{sec:sctv}, we already have a candidate two-form: the inherited K\"ahler form $\widetilde{J}$. In addition, a (3,0)-form (with respect to the inherited complex structure) $\widetilde{\Omega}$ can be constructed on the toric variety by contraction of the holomorphic top form $\Omega_{\mathbb{C}}$ of the ambient space $\mathbb{C}^n$:
\begin{equation} \label{eq:omtilde}
\widetilde{\Omega}:= \left(\mathrm{det}g_{ab}\right)^{-1/2}\prod_{a=1}^{s}\iota_{V^{a}}
\Omega_{\mathbb{C}} \; .
\end{equation}
Here $V^a$ are the generators of the $U(1)$ action \eqref{holvec} and the factor containing the determinant of \eqref{gab} is needed for normalisation. $\widetilde{\Omega}$ is a vertical form, and its restriction $\widetilde{\Omega}|$ is a regular (without poles) form on the SCTV, with exterior derivative
\begin{equation}
{\rm d} \widetilde{\Omega}| = -\frac{1}{2} {\rm d} \ln \left(\mathrm{det}g_{ab}\right) \wedge \widetilde{\Omega}| \; .
\end{equation}
It is straightforward to show that the pair $(\widetilde{J}|, \widetilde{\Omega}|)$ satisfies the orthogonality and normalisation conditions \eqref{eq:su3cond} (see \cite{Larfors:2010wb} for details). Consequently, the two forms define a local $SU(3)$ structure.
However, $\widetilde{\Omega}$ is not invariant (it does not have zero $U(1)$ charge)
\begin{equation}
\mathcal{L}_{\mathrm{Im}V^a} \widetilde{\Omega} = \Big( \sum_{i=1}^n Q_i^a \Big) \widetilde{\Omega} \; ,
\end{equation}
and is thus only locally defined. The non-zero charge of $\widetilde{\Omega}$ is linked to the non-vanishing first Chern class, as a globally defined three-form is only allowed when $c_1 = 0$. As a consequence, the $SU(3)$ structure defined by $(\widetilde{J}|, \widetilde{\Omega}|)$ is only locally defined. To obtain a globally defined three-form we must ``twist'' the SCTV along some divisor so that $c_1$ vanishes. Clearly, this can be accomplished by constructing a twisted three-form with zero $U(1)$ charge, which is possible if there exists a one-form $K$ on $\mathbb{C}^n$ with the following properties:
\begin{enumerate}
\item It is (1,0) (with respect to the inherited complex structure) and vertical: $P(K)=K$.
\item It is an eigenform of $\mathcal{L}_{\mathrm{Im}V^a}$ (\ie{}
it has definite $Q^a$-charge):
\begin{equation}
\mathcal{L}_{\mathrm{Im}V^a}K=q^a K
~,
\end{equation}
where $q^a$ is half the $Q^a$-charge of ${\Omega}_{\mathbb{C}}$:
\begin{equation} \label{eq:chargecond}
q^a = \frac{1}{2} \sum_{i=1}^n Q_i^a
~.
\end{equation}
\item It is nowhere-vanishing, and hence can be normalised to:
\begin{equation}
K\cdot \bar{K}=2
~,
\end{equation}
where the dot on the left-hand side denotes contraction
of indices with respect to the inherited metric $\tilde{G}$.
\end{enumerate}
Just as $\widetilde{\Omega}$, $K$ is not invariant, and hence only locally defined; consequently it does not restrict the structure group or the topology of the three-fold. With its help we can construct a local $SU(2)$ structure. After normalising $K \cdot \bar{K}=2$, we define
\begin{equation} \label{eq:su2def}
\begin{split}
\omega&:= -\frac{i}{2}~\! \bar{K} \cdot \widetilde{\Omega}\vert\\
j&:=\widetilde{J}\vert-\frac{i}{2}K\wedge \bar{K}
\end{split}
\end{equation}
which form a local $SU(2)$ structure. In particular, the $SU(2)$ conditions
\begin{equation} \label{eq:su2cond}
\begin{split}
\omega\wedge\bar{\omega}&= 2j\wedge j\\
\omega\wedge j&=0
\end{split}
\end{equation}
can be shown to hold from \eqref{eq:su3cond} and \eqref{eq:su2def}. A property that follows from this construction is that $K$ and $\bar{K}$ contract to zero with $j$ and $\omega$. The local $SU(3)$ structure is then given by
\begin{equation} \label{eq:su3local}
\begin{split}
\widetilde{J}\vert&= j + \frac{i}{2}K\wedge \bar{K}\\
\widetilde{\Omega}\vert&=i K\wedge\omega \; .
\end{split}
\end{equation}
We now perform the ``twist'': a new $SU(3)$ structure is constructed by switching $K \leftrightarrow \bar{K}$ in \eqref{eq:su3local}:
\begin{equation} \label{eq:su3global}
\begin{split}
J&:=\alpha j- \frac{i\beta^2}{2}K\wedge \bar{K}\\
\Omega&:=e^{i\gamma}\alpha\beta \bar{K} \wedge\omega \; ,
\end{split}
\end{equation}
where the parameters $\alpha, \beta, \gamma$ are non-zero real functions. Using \eqref{eq:su2cond} it is straightforward to show that $(J,\Omega)$ satisfy the $SU(3)$ conditions \eqref{eq:su3cond}, and $\Omega$ can also be shown to be complex decomposable for all $\alpha, \beta, \gamma$ \cite{Larfors:2010wb,Larfors:2011zz}. Furthermore, the charges of $J$ and $\Omega$ are both zero by construction, since $Q(\bar{K})= -Q(K) = -Q(\omega)$. Thus, a global $SU(3)$ structure is constructed.
The real functions $\alpha, \beta, \gamma$ in the global $SU(3)$ structure \eqref{eq:su3global} are not constrained by \eqref{eq:su3cond} (but, in a string vacuum, they will be restricted by supersymmetry constraints and the equations of motion). Two limits in the parameter space are of particular interest, namely
\begin{equation}
\alpha = -\beta^2 \; \; , \; \; \beta=1\; \mbox{ , and }
\alpha = +\beta^2 \; \; , \; \; \beta=1 \; .
\end{equation}
In the first limit the real two-form $J =- \widetilde{J}|$ is closed, and the $SU(3)$ structure is symplectic. In the second limit, the metric defined by $(J,\Omega)$ equals the metric $\tilde{G}$ induced from the canonical metric on $\mathbb{C}^n$ \cite{Larfors:2010wb}.
\subsection{Existence of $K$}
\label{sec:Kreq}
At this point it should be clear that, given a one-form $K$, we can explicitly construct an $SU(3)$ structure. What remains is to show when such a one-form exists. We have seen above that an $SU(3)$ structure is allowed once the first Chern class $c_1$ is even in cohomology. We will now show that if this constraint is satisfied, the one-form $K$ can always be constructed.
As discussed in section \ref{sec:su3constr}, there are three conditions on $K$. First, it should be $(1,0)$ with respect to $\widetilde{\mathcal{I}}$ and vertical. These conditions are met by linear combinations of $\mathcal D z^i$ that are also eigenvectors with eigenvalue 1 of the projection matrix $P_{ij}$:
\begin{equation}
K_i P_{ij}= K_j~.
\end{equation}
$P_{ij}$ has rank three, so there are three such linearly independent eigenvectors. For $\mathbb{CP}^3$ they can be taken as
\begin{equation} \label{eq:evcp3}
K_1 = (-z^2,z^1,0,0); \quad
K_2 = (0,0,-z^4,z^3); \quad
K_3 = (-z^4,0,0,z^1) ~,
\end{equation}
where the first vector corresponds to the form $K_1 = -z^2 \mathcal D z^1 + z^1 \mathcal D z^2$ etc. For $\mathbb{CP}^1$ bundles, with coordinates $z^{n-1}, z^n$ along the $\mathbb{CP}^1$ fibre, the eigenvectors have a similar form, and we list them for bundles with up to six generators in table \ref{tab:1} in section \ref{sec:Kuniq}. Schematically, there are two eigenvectors with zero components along the $\mathbb{CP}^1$ fibre, and one with non-zero components:
\begin{equation} \label{eq:evcp1bdl}
K_1 = (*,\ldots,*,0,0); \quad
K_2 = (*,\ldots,*,0,0); \quad
K_3 = (*,\ldots,*,*) ~,
\end{equation}
where $*$ means that the entry is not necessarily zero. As long as the $\mathbb{CP}^1$ fibration is non-trivial (\ie~non-zero twist parameters $n^a$), there is no eigenvector of the form $(0,\ldots,0,*,*)$.
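The eigenvector property is easy to confirm numerically. For $\mathbb{CP}^3$, where $P_{ij}=\delta_{ij}-z^i\bar{z}^j/\sum_k|z^k|^2$, each form in \eqref{eq:evcp3} has vanishing contraction $K_i z^i = 0$, which immediately implies $K_i P_{ij} = K_j$; a sketch of the check (ours, assuming NumPy):

```python
import numpy as np

rng = np.random.default_rng(1)
z = rng.normal(size=4) + 1j * rng.normal(size=4)
g = np.sum(np.abs(z) ** 2)
P = np.eye(4) - np.outer(z, z.conj()) / g   # P_ij for CP^3

# Component vectors of the candidate (1,0)-forms from eq. (eq:evcp3).
K1 = np.array([-z[1], z[0], 0, 0])
K2 = np.array([0, 0, -z[3], z[2]])
K3 = np.array([-z[3], 0, 0, z[0]])

for K in (K1, K2, K3):
    assert np.isclose(K @ z, 0)        # K_i z^i = 0 ...
    assert np.allclose(K @ P, K)       # ... hence K_i P_ij = K_j
```

The same pattern, with the appropriate projector built from \eqref{projector}, applies to the eigenvectors of the $\mathbb{CP}^1$ bundles.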
Thus, the first condition on $K$ can always be fulfilled. The second condition is that $K$ should have half the $U(1)$ charge of $\widetilde{\Omega}$. It is easy to see that this can only be satisfied when the charge of $\widetilde{\Omega}$ is even (since no function of the $z^i$ has fractional charge). This will restrict the twist parameters $n^a$ in $Q$, just as the condition on the first Chern class did. In fact, the even charge condition on $\widetilde{\Omega}$ exactly corresponds to requiring that $c_1$ is even in cohomology, and so can be solved for $\mathbb{CP}^3$ and all $\mathbb{CP}^1$ fibrations. Concretely, once the $n^a$ are fixed, we read off the charge of $K_{i}$, and look for functions $\alpha_{i}$ so that
\begin{equation}
\hat{K} =\sum_{i=1}^3 \alpha_i K_i
\end{equation}
has the required charge. Such functions $\alpha_{i}$ can always be found, and several consistent choices may exist.
Thirdly, we must check that the norm of $\hat{K}$ is nowhere-vanishing, so that the twisted $SU(3)$ structure is well-defined. This is possible by choosing $\alpha_i$ so that $|\hat{K}|^2$ is bounded from below by a positive combination of the K\"ahler moduli (recall that these are positive for non-singular varieties). This step requires a bit more work than the charge condition; in particular, the K\"ahler and Mori cones of the variety need to be identified, as described in appendix \ref{sec:Mori}. Again, several consistent choices may exist, and we will come back to the question of uniqueness in the examples.
To conclude, a one-form $K$ fulfilling the three conditions can always be found if $c_1$ is even in cohomology. Consequently, an $SU(3)$ structure can be constructed and its torsion classes can be computed.
\section{Properties of toric $SU(3)$ structures}
In the last section, we showed that SCTVs with even first Chern class allow $SU(3)$ structures, and we also constructed the defining forms $J$ and $\Omega$. To investigate whether these structures are relevant for string vacua, we need to further analyse the associated metric and torsion classes. In this section we give explicit forms of the almost complex structure and metric defined by the $SU(3)$ structure, and use metric positivity to derive constraints on the parameters of the construction. We also compute generic properties of the torsion classes of toric varieties, focusing in particular on how they are affected by changes of the parameters.
\subsection{Almost complex structure and metric}
\label{sec:metric}
Any $SU(3)$ structure determines a metric, which is obtained from $(J,\Omega)$ as follows \cite{Hitchin:2000jd}. First, $\Omega$ specifies an almost complex structure by
\begin{equation}
\mathcal{I} = \frac{\widehat{\mathcal{I}}}{\sqrt{-\text{tr}\, \frac{1}{6}\,\widehat{\mathcal{I}}^2}}
\mbox{ , where }~
\widehat{\mathcal{I}}_k{}^l = \varepsilon^{lm_1\dots m_5} (\mbox{Re} \Omega)_{km_1m_2} (\mbox{Re} \Omega)_{m_3m_4m_5}
\end{equation}
and $\varepsilon^{m_1\dots m_6}=\pm1$ is the totally antisymmetric symbol in six dimensions. Complex decomposability of $\Omega$ guarantees that $\mathcal{I}^2=-1$. Using $\mathcal{I}$ and $J$, the metric is then given by
\begin{equation}
\label{eq:su3metric}
G_{mn}=-\mathcal{I}_m{}^lJ_{ln}~.
\end{equation}
The construction does not guarantee that the metric is positive definite.
For the local $SU(3)$ structure $(\widetilde{J},\widetilde{\Omega})$ we can thus compute, using \eqref{eq:su3local},
\begin{equation} \label{eq:ifs}
\widehat{\widetilde{\mathcal{I}}}_k{}^l = \frac{3}{2} \varepsilon^{lm_1\dots m_5} \mbox{Re} \left(
(K_k \omega_{m_1m_2} - 2 K_{m_1} \omega_{k m_2})
\bar{K}_{m_3} \bar{\omega}_{m_4 m_5}
\right)
\end{equation}
where further terms vanish due to index antisymmetrisation. It can be checked that this is just the inherited complex structure from $\mathbb{C}^n$, and the associated metric is the inherited metric $\widetilde{G}$ (this is also known as the Fubini--Study metric on $\mathbb{CP}^3$):
\begin{equation}
\widetilde{G}_{mn}=-\widetilde{\mathcal{I}}_m{}^l\widetilde{J}|_{ln}~.
\end{equation}
For the global $SU(3)$ structure a similar computation yields
\bea \label{eq:isu3}
\widehat{\mathcal{I}}_k{}^l &= \frac{3}{2}\alpha^2 \beta^2 \varepsilon^{lm_1\dots m_5} \mbox{Re} \left(
(-K_k \omega_{m_1m_2} - 2 K_{m_1} \omega_{k m_2})
\bar{K}_{m_3} \bar{\omega}_{m_4 m_5}
\right) \; .
\eea
Note that the phase of $\Omega$ (i.e. $\gamma$) does not affect $\mathcal{I}$, and the factors of $\alpha$ and $\beta$ will cancel in the normalised almost complex structure $\mathcal{I}$. Thus, the only difference between this almost complex structure and the inherited complex structure \eqref{eq:ifs} is a relative sign. This sign reflects the twisting of the toric variety, and matches the relative sign found in the almost complex structures of twistor spaces, see equations (3.5) and (3.6) in \cite{Tomasiello:2007eq}. Using \eqref{eq:ifs} and \eqref{eq:isu3} it is straightforward to show
\bea
\widetilde{\mathcal{I}}_k{}^l K_l &= i K_k = -\mathcal{I}_{ k}{}^l K_l \\ \nonumber
\widetilde{\mathcal{I}}_k{}^l \omega_{lm} &= i \omega_{km} = \mathcal{I}_{ k}{}^l \omega_{lm}
\; .
\eea
Consequently, $\widetilde{\Omega}$ and $\Omega$ are (3,0) with respect to their associated almost complex structures, as is required for the consistency of the construction.
The metric associated to the $SU(3)$ structure is given by inserting $J$ and \eqref{eq:isu3} in \eqref{eq:su3metric}. A short computation gives
\begin{equation} \label{eq:su3metric2}
G_{mn} =\alpha \left[ \widetilde{G}_{mn} + \left(\frac{\beta^2}{\alpha}-1\right) \mbox{Re} \left(K_m \bar{K}_{n}
\right) \right]
\; .
\end{equation}
This expression for the metric is another manifestation of the twisting of the toric variety by $K$. In the parameter limit $\alpha = \beta^2 = 1$ it simplifies to the induced metric. Thus, contractions are greatly simplified in this limit, which is helpful when computing the torsion classes.
Expressing the $SU(3)$ structure metric as in \eqref{eq:su3metric2} facilitates the check of metric positivity: $G$ is positive definite if for any non-zero vector $\mathbf{v}$
\begin{equation}
0 < \mathbf{v}^{T} G \mathbf{v} = \alpha \left[ \mathbf{v}^{T}\widetilde{G}\mathbf{v} + \left(\frac{\beta^2}{\alpha}-1\right) \mathbf{v}^{T} \mbox{Re} \left(K \bar{K} \right) \mathbf{v} \right] \; .
\end{equation}
$K$ is directed along a certain direction in the space of one-forms, and so $\mbox{Re} \left(K \bar{K}\right)$ will contribute to a block of $G$. As a consequence, some of the eigenvalues of $G$ will be proportional to those of $\widetilde{G}$, with proportionality coefficient $\alpha$. Since $\widetilde{G}$ is positive definite, the condition
\begin{equation} \label{eq:parpos1}
\alpha > 0 \; ,
\end{equation}
is thus necessary for metric positivity.\footnote{This argument can be rephrased using Sylvester's criterion \cite{gilbert}.} This is a severe restriction on the parameters, and it shows that the $SU(3)$ structure does not have a positive definite metric in the symplectic limit $\alpha = -\beta^2 =-1$. Further parametric constraints can be derived once the properties of $\mbox{Re} \left(K \bar{K}\right)$ are known. For example, if $\mbox{Re} \left(K \bar{K}\right)$ is positive semidefinite, $\beta^2 \ge \alpha$ is a sufficient (but not necessary) condition for metric positivity.
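This positivity argument is easy to illustrate numerically. The following sketch (our own illustration, not part of the construction; the matrix size, random seed and parameter values are arbitrary) builds a random positive definite $\widetilde{G}$ and a random complex one-form $K$, and checks that the metric \eqref{eq:su3metric2} is positive definite for $\alpha>0$, $\beta^2\ge\alpha$, while the symplectic limit $\alpha=-\beta^2=-1$ fails:

```python
import numpy as np

rng = np.random.default_rng(0)

# Random positive definite 6x6 "inherited" metric G_tilde.
A = rng.standard_normal((6, 6))
G_tilde = A @ A.T + 6 * np.eye(6)

# Random complex one-form K; Re(K Kbar) is real, symmetric and
# positive semidefinite of rank at most two.
K = rng.standard_normal(6) + 1j * rng.standard_normal(6)
M = np.real(np.outer(K, np.conj(K)))

def su3_metric(alpha, beta2):
    # G = alpha * [G_tilde + (beta^2/alpha - 1) Re(K Kbar)]
    return alpha * G_tilde + (beta2 - alpha) * M

# alpha > 0 and beta^2 >= alpha: positive definite (sufficient condition).
assert np.linalg.eigvalsh(su3_metric(0.5, 2.0)).min() > 0

# Symplectic limit alpha = -beta^2 = -1: not positive definite, since
# M has rank <= 2 and -G_tilde dominates on the kernel of M.
assert np.linalg.eigvalsh(su3_metric(-1.0, 1.0)).min() < 0
```

The negative eigenvalues in the symplectic limit arise exactly as argued above: $\mbox{Re}(K\bar K)$ has rank at most two, so on its kernel the quadratic form reduces to $\alpha\,\mathbf{v}^T\widetilde{G}\mathbf{v}$, which is negative for $\alpha<0$.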
\subsection{Torsion classes, choices of $K$ and parameters}
\label{sec:torsion}
The torsion classes of a toric $SU(3)$ structure are determined by the exterior derivatives of $K$, $\omega$ and the parameters $\alpha, \beta, \gamma$. In this section we discuss their general properties. Let us first note that the parametric freedom given by $\alpha, \beta, \gamma$ is a great help when computing the torsion classes. Contractions are needed in order to decompose ${\rm d} J$ and ${\rm d} \Omega$ in $SU(3)$ representations, and since the metric \eqref{eq:su3metric2} tends to be complicated for generic choices of $K$, these are computationally expensive. It is therefore very useful that parameter choices exist where either the metric simplifies, or some torsion classes are set to zero.
For constant parameters $\alpha, \beta, \gamma$, the torsion classes are uniquely determined by the exterior derivatives of $K$ and $\omega$. Decomposing $J$ as
\begin{equation} \label{eq:Jdec}
J = \alpha \widetilde{J}| + \frac{i}{2} (\alpha + \beta^2) K \wedge \bar{K}
\end{equation}
shows that
\begin{equation}
{\rm d} J |_{{\rm d}\alpha={\rm d}\beta=0} = -(\alpha + \beta^2) \mbox{Im} ({\rm d} K \wedge \bar{K}) \; .
\end{equation}
Consequently, up to contributions from ${\rm d} \alpha$ and ${\rm d}\beta$, the torsion classes $\mathcal W_1$, $\mathcal W_3$ and $\mathcal W_4$ are completely determined by ${\rm d} K$. If, as we will see in some examples,
\begin{equation} \label{eq:dKgen}
{\rm d} K = \delta \omega + \Psi \wedge K
\end{equation}
where $\delta$ is a function and $\Psi$ a one-form, we find that $\mathcal W_1 \propto \delta$ and
\begin{equation}
\begin{split}
\mathcal W_4 \wedge J + \mathcal W_3 = i (\alpha + \beta^2) \mbox{Re} (\Psi) \wedge K \wedge \bar{K}
\; ,
\end{split}
\end{equation}
since $K \wedge \bar{K}$ is imaginary. Evidently, if $\mbox{Re} (\Psi)$ is zero, so are $\mathcal W_3$ and $\mathcal W_4$.
As is clear from \eqref{eq:Jdec}, $J$ is a closed form in the limit $\alpha = -\beta^2, \beta=1$. Thus the only non-zero torsion classes in this limit are $\mathcal W_2$ and $\mathcal W_5$, and the $SU(3)$ structure is symplectic. Moving away from this limit generically switches on all torsion classes. As an example, the primitivity condition on $\mathcal W_2$
\begin{equation}
\mathcal W_2 \wedge J \wedge J = 0
\end{equation}
depends on $\alpha$ and $\beta$. Thus, $\mathcal W_2$ computed in one parameter limit will give contributions to both $\mathcal W_1$ and $\mathcal W_2$ in a different region of parameter space. Another relevant observation is that the phase of $\mathcal W_1$ and $\mathcal W_2$ is completely determined by $\gamma$.
Finally, it was shown in \cite{Gray:2012md} that non-constant $\alpha, \beta, \gamma$ contribute additional terms to $\mathcal W_3, \mathcal W_4$ and $\mathcal W_5$. In summary, we have
\begin{equation}
\begin{split}
\label{sctvtorsions}
&\mathcal W_1= (\alpha+\beta^2) e^{i \gamma} \mathcal W_1^0 \\
&\mathcal W_2 =e^{i\gamma} \mathcal W_2^0 \\
&\mathcal W_3 = (\alpha+\beta^2) \mathcal W_3^0
+ \left(\chi - \frac{1}{4} (J \lrcorner \chi )\wedge J \right)
\\
&\mathcal W_4 = (\alpha+\beta^2) \mathcal W_4^0 + \frac{1}{4} J \lrcorner \chi
\\
&\mathcal W_{5}=\mathcal W_5^0 + {\rm d} \ln (\alpha \beta) + \mathcal{I} {\rm d} \gamma ~,
\end{split}
\end{equation}
where $\lrcorner$ denotes contraction, $\mathcal W_i^0$ denotes a reference value for the torsion class (computed with constant $\alpha, \beta, \gamma$), $\mathcal{I} {\rm d} = i (\partial - \bar{\partial})$, and we recall that $\mathcal W_5$ is real in our conventions. The three-form $\chi$ that contributes to $\mathcal W_3$ and $\mathcal W_4$ is given by
\begin{equation}
\chi ={\rm d} \ln \alpha \wedge J + i \frac{\beta^2}{2} {\rm d} (\ln \alpha -2 \ln \beta) \wedge K \wedge \bar{K} \ .
\end{equation}
When $\alpha \propto \beta^2$, with constant proportionality coefficient, we have $\chi = {\rm d} \ln \alpha \wedge J$. This lacks a primitive piece, and so does not contribute to $\mathcal W_3$, but adds an exact term to $\mathcal W_4$.
From the above reasoning, it is clear that exact contributions to $\mathcal W_4$ and $\mathcal W_5$ can be compensated by parameter choices. This phenomenon is related to an observation by Chiossi and Salamon \cite{chiossi}: it follows from the second $SU(3)$ condition in \eqref{eq:su3cond} that under conformal transformations $g \to e^{f} g$, where $f$ is any real function, the torsion classes $\mathcal W_4$ and $\mathcal W_5$ both transform by the addition of exact pieces. Thus, if $\mathcal W_4$ and $\mathcal W_5$ are exact and $3\mathcal W_5-2\mathcal W_4= 0$ we can make a conformal transformation to an $SU(3)$ structure with vanishing Lie forms.
Since $\alpha$ and $\beta$ are two real functions, the parametric freedom is a bit larger than conformal transformations: as long as $\mathcal W_4^0$ and $\mathcal W_5^0$ are exact and proportional, with constant coefficient of proportionality, they can be set to zero. Clearly, if for some function $p$ and constant $A$
\begin{equation} \label{eq:dpcontr}
\mathcal W_4^0 =\frac{1}{\beta^2} {\rm d} \ln p \; , \;
\mathcal W_5^0 = A {\rm d} \ln p \; ,
\end{equation}
we can choose $\alpha = \frac{2A-3}{3}\beta^2$ and $\beta=p^{-A/3}$ to set $\mathcal W_4 = 0 = \mathcal W_5$. If $\mathcal W_{4,5}^0$ are exact but their ratio is non-constant, only one linear combination of them can be set to zero.
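To see the cancellation in $\mathcal W_5$ explicitly, note that for this choice $\ln (\alpha\beta) = \ln \frac{2A-3}{3} + 3 \ln \beta = \ln \frac{2A-3}{3} - A \ln p$, so that
\begin{equation}
{\rm d} \ln (\alpha \beta) = - A\, {\rm d} \ln p = - \mathcal W_5^0 \; ,
\end{equation}
and \eqref{sctvtorsions} gives $\mathcal W_5 = 0$ for constant $\gamma$; the vanishing of $\mathcal W_4$ follows in the same way once the contribution of $\chi$ is taken into account. Note also that metric positivity, $\alpha>0$, requires $A > 3/2$ for this choice.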
In a given example, there may be additional parameter limits that set other sets of torsion classes to zero. In general, care is needed when distinguishing the $SU(3)$ structures that are obtained through the construction in section \ref{sec:su3constr}, as it is possible that different choices of $K$ lead to $SU(3)$ structures that are equivalent up to changes in the parameters $\alpha, \beta$ and $\gamma$. We will discuss this phenomenon in explicit examples in the following section, and it would certainly be interesting to study this question in more depth in the future.
\section{Examples} \label{sec:Kuniq}
In section \ref{sec:toricsu3}, we argued that whenever $c_1$ is even in cohomology, there exists a one-form $K$ fulfilling the three requirements discussed in section \ref{sec:su3constr}. The choice of $K$ is not unique, as was first pointed out in \cite{Larfors:2010wb}, which leads to the possibility of having multiple $SU(3)$ structures on a single toric variety. In this section, we construct $K$ for $\mathbb{CP}^3$ and toric $\mathbb{CP}^1$ bundles over two-dimensional SCTVs with up to six generators. For $\mathbb{CP}^3$ and $\mathbb{CP}^1 \hookrightarrow \mathbb{F}_0$, we show that simple changes to $K$ lead to parametrically inequivalent $SU(3)$ structures on example manifolds.\footnote{The symbolic computer program \cite{bonanos} has been used for all explicit computations of torsion classes.} In addition, we check whether the $SU(3)$ structures thus obtained lead to vacua of the type discussed in section \ref{sec:survey}.
\subsection{$\mathbb{CP}^3$}
Our first example, $\mathbb{CP}^3$, has been studied at length in the literature, ranging from the classical papers \cite{Nilsson:1984bj,Sorokin:1985ap} to more modern studies \cite{Behrndt:2004km,Behrndt:2004mj,Lust:2004ig,Tomasiello:2007eq,Koerber:2008rx}. In the symplectic quotient description, this manifold can be constructed as a subspace of $\mathbb{C}^4$ using \eqref{kmoddef} and the single charge
\begin{equation}
\label{eq:cha}
Q^1=\begin{pmatrix} 1&1&1&1 \end{pmatrix} ~.
\end{equation}
The local $SU(3)$ structure is given by
\begin{equation}
\widetilde{J} = \sum_{i=1}^4 \mathcal D z^i \wedge \mathcal D \bar{z}^i \; , \;
\widetilde{\Omega} =
\frac{1}{\sqrt{\mbox{det}g_{ab}}}
\left( z^1 \mathcal D z^{234} - z^2 \mathcal D z^{134} + z^3 \mathcal D z^{124} -z^4 \mathcal D z^{123} \right) \; ,
\end{equation}
where the prefactor is a positive constant (see \eqref{gab})
\begin{equation} \label{eq:p3xi}
\mbox{det}g_{ab} = \sum_{i=1}^4 |z^i|^2 = \xi > 0 \; .
\end{equation}
Here $\xi$ is the (coordinate independent) K\"ahler modulus of $\mathbb{CP}^3$, which is strictly positive in the K\"ahler cone. Consequently, $\widetilde{\Omega}$ is a closed form.
The vertical one-form $\hat{K} = \sum_{i=1}^3 \alpha_i K_i$ is a linear combination of the $P_{ij}$ eigenvectors \eqref{eq:evcp3}.
Since $Q(K_i)=2$ is already half the charge of $\widetilde{\Omega}$, we must choose $\alpha_i$ with charge 0. The choice of $\alpha_i$ completely determines the $SU(3)$ structure, and different choices will lead to different torsion classes.
\subsubsection*{Half-flat $SU(3)$ structure}
An interesting choice for $\hat{K}$ is
\begin{equation} \label{eq:therightK}
\hat{K}=(-z^2,z^1,-z^4,z^3) \; ,
\end{equation}
to be read as a vector in the $\mathcal D z^i$ basis. With respect to the Fubini--Study metric, this has constant norm $|\hat{K}|^2 =\mbox{det}g_{ab}=\xi \neq 0$, and so we can define
\begin{equation}
K = \frac{1}{\sqrt{\mbox{det}g_{ab}}} (-z^2,z^1,-z^4,z^3) \; ,
\end{equation}
which has norm 2 and can be used to construct the global $SU(3)$ structure \eqref{eq:su3global}.
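Both the constancy of the norm and the $U(1)$ charge of $\hat{K}$ can be cross-checked with a few lines of numerics. The sketch below is our own illustration and uses the flat ambient norm on the level set $\sum_i |z^i|^2 = \xi$, which reproduces the quoted value $|\hat{K}|^2 = \xi$; the value of $\xi$ and the random seed are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(1)
xi = 2.7  # Kaehler modulus (any positive value)

# Random point on the level set sum |z^i|^2 = xi in C^4.
z = rng.standard_normal(4) + 1j * rng.standard_normal(4)
z *= np.sqrt(xi) / np.linalg.norm(z)

def K_hat(z):
    # Components of K_hat = (-z^2, z^1, -z^4, z^3) in the Dz^i basis.
    return np.array([-z[1], z[0], -z[3], z[2]])

# |K_hat|^2 = sum_i |z^i|^2 = xi everywhere on the level set.
assert np.isclose(np.sum(np.abs(K_hat(z))**2), xi)

# Under z -> e^{it} z the coefficients scale with charge 1; together
# with the charge-1 forms Dz^i, K_hat carries U(1) charge 2 = Q(Omega)/2.
t = 0.37
assert np.allclose(K_hat(np.exp(1j*t) * z), np.exp(1j*t) * K_hat(z))
```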
It can be checked that this $K$ gives a positive semidefinite contribution to the $SU(3)$ metric \eqref{eq:su3metric2}, and so a sufficient condition for positive definiteness of the latter is $\alpha>0$ and $\beta^2>\alpha$. On closer inspection, it can be shown that the last of these conditions is superfluous, and that metric positivity is guaranteed by only imposing the first constraint.
The torsion classes are straightforward to compute. First, we note that ${\rm d} K$ is proportional to $\omega$:
\begin{equation}
{\rm d} K = \frac{2}{\sqrt{\mbox{det}g_{ab}}} \omega \; ,
\end{equation}
and that we can fix $\gamma$ so that ${\rm d} \Omega$ is real (or imaginary). The first assertion sets $\mathcal W_3 = 0 = \mathcal W_4$, while the second implies that $\mathcal W_5=0$. $\mathcal W_2$ is non-zero and can be computed by contracting $J$ with ${\rm d} \Omega$ in the limit $\alpha=\beta^2$, and then using the result as an ansatz for general parameters. The result, for constant $\alpha, \beta, \gamma$, is
\begin{equation}
\begin{split}
\mathcal W_1 &= \frac{ 4 e^{i \gamma} (\alpha+\beta^2)}{3 \alpha \beta \sqrt{\mbox{det}g_{ab}}} \\
\mathcal W_2 &= \mathcal W_1 \frac{2\beta^2-\alpha}{\alpha+\beta^2} \left(
J +\frac{3i \beta^2}{2} K \wedge \bar{K}
\right) \\
\mathcal W_3 &= 0 \; ,
\mathcal W_4 = 0 \; ,
\mathcal W_5 = 0\; .
\end{split}
\end{equation}
Since only $\mathcal W_1$ and $\mathcal W_2$ are non-zero, this is an example of a restricted half-flat $SU(3)$ structure. In fact, we have reproduced the $SU(3)$ structure found from a twistor analysis in \cite{Tomasiello:2007eq} and a coset perspective in \cite{Koerber:2008rx}. Comparing with section \ref{sec:survey}, it is straightforward to check that this structure satisfies necessary requirements for several string vacua, such as the type IIA $\mathcal N=0,1$ AdS vacua and the calibrated $\mathcal N=0$ vacua of either type IIB (with O5 planes) or heterotic string theory. A full investigation of all the constraints for a particular vacuum is beyond the scope of this paper, and we refer the reader to \eg~\cite{Tomasiello:2007eq,Koerber:2008rx,Caviezel:2008tf} for a more detailed discussion.
\subsubsection*{Modified $SU(3)$ structure}
Let us now investigate whether there are different choices of $\alpha_{1,2}$ such that the new $K$ still fulfils the verticality, charge and norm conditions. We focus on the last condition, which is the most constraining. On $\mathbb{CP}^3$, there is only one $U(1)$ charge, whose moment map states that \eqref{eq:p3xi} is non-zero. However, adding any non-negative combination of $|z^i|^2$ to $\xi$ also gives a nowhere-vanishing expression. Since $K_1$ and $K_2$ are orthogonal vectors with non-negative norms, we can thus change $\alpha_{1,2}$ and get a new $K$ with nowhere-vanishing norm. For simplicity, we take $\alpha_{1,2}$ to be real constants; we will comment on non-constant functions below.
For $\alpha_{1} \neq \alpha_2$, the norm of the new form $\hat{K}_{new}$ is non-constant:
\begin{equation}
p = |\hat{K}_{new}|^2=\alpha_1 \mbox{det}g_{ab} + (\alpha_2-\alpha_1) |K_2|^2 \; .
\end{equation}
The one-form $K_{new} = \hat{K}_{new}/\sqrt{p}$ gives a positive semidefinite contribution to the $SU(3)$ metric \eqref{eq:su3metric2}. Again, it can be shown that metric positivity only requires $\alpha>0$.
The exterior derivative of $K$ is no longer proportional to $\omega$, and in particular gives a term $-\frac{1}{2} {\rm d} \ln p \wedge K$ which will contribute to the Lie forms. Thus, after a straightforward computation, we find
\begin{equation}
\begin{split}
\mathcal W_1^0 &= \frac{ 4 \alpha_1 \alpha_2 \sqrt{\mbox{det}g_{ab}}}{3 \alpha \beta p } \\
\mathcal W_2^0 &= (2\beta^2-\alpha)\mathcal W_1^0 \left(
J +\frac{3i \beta^2}{2} K \wedge \bar{K}
\right) \\
\mathcal W_3^0 &= - \mathcal W_4^0 \wedge \left( J + i \beta^2 K \wedge \bar{K} \right) \; , \\
\mathcal W_4^0 &= \frac{1}{2 \beta^2} {\rm d} \ln p \; , \\
\mathcal W_5^0 &= 2 {\rm d} \ln p \; ,
\end{split}
\end{equation}
which should be inserted into \eqref{sctvtorsions} to get the torsion classes for general parameters. As is clear from these equations, the effect of choosing non-trivial $\alpha_{1,2}$ is that $\mathcal W_1$ and $\mathcal W_2$ are rescaled, $\mathcal W_3, \mathcal W_4$ and $\mathcal W_5$ are all non-zero, and $\mathcal W_4$ and $\mathcal W_5$ are exact. Even though these changes can largely be compensated by a change in the parameters $\alpha, \beta, \gamma$, no choice of these parameters takes us back to the restricted half-flat $SU(3)$ structure studied in the previous subsection. There are, however, several parametric limits where some of the torsion classes are zero. In fact, with different parametric choices, we can turn on or off all torsion classes but $\mathcal W_1$ (this is only zero in the symplectic limit $\alpha = -\beta^2$, which is excluded by metric positivity). Comparing with tables \ref{tab:su3N1} and \ref{tab:su3N0}, we note that no $\mathcal N=1$ vacuum can be constructed using this $SU(3)$ structure, but that $\mathcal N=0$ vacua of type IIB (with O5 planes) and heterotic string theory may be allowed. Again, we leave a detailed investigation to the future.
We thus conclude that in the toric formulation, $\mathbb{CP}^3$ allows a more general $SU(3)$ structure than has been found through twistor space and coset studies. To further stress this point we can allow the $\alpha_i$ to be non-constant. This does not change the metric, but will in general change all the torsion classes. Most importantly, $\mathcal W_4$ and $\mathcal W_5$ are no longer exact, and so one of the necessary constraints for string vacua cannot be met. The connection between $\mathcal W_3$ and $\mathcal W_4$ is also lost. All this can be understood at the level of ${\rm d} K$: for generic non-constant $\alpha_i$, the relation \eqref{eq:dKgen} fails since ${\rm d} \alpha_1$ and ${\rm d} \alpha_2$ need not be equal.
\begin{table}
\hspace{-0.5cm}
\begin{tabular}[htb]{| l | l | p{11.5cm} |}
\hline
$N$ & $q$ & $K_i$\\
\hline
\hline
3
& $\begin{pmatrix}
1&1&1
\end{pmatrix}$
&
$\begin{matrix}
&K_1= (&-z^3,& 0,& z^1,& 0,& 0 ) \\
&K_2= (&0,& -z^3,& z^2,& 0,& 0 ) \\
&K_3= (&c^1z^{45},& 0,& 0,& -z^{14},& z^{15} )
\end{matrix}$ \\
\hline
\hline
4 &
$\begin{pmatrix}
0&1 & 0 &1\\
1&a & 1 &0
\end{pmatrix}$
&
$\begin{matrix}
&K_1=&(-z^3\;, \; 0 \; \;, \; z^1&, &0 \;, &0 \;, &0 \;)& &\\
&K_2=&(a z^{24},\; -z^{14} \;, \; 0 \;&, &z^{12}, &0 \;, &0 \;)& &\\
&K_3=&(\;[c^2-a c^1]z^{256}&, &c^1 z^{156}, &0, &0, &-z^{126}, &z^{125})
\end{matrix}$
\\
\hline
\noalign{\smallskip}
\hline
5
&
$\begin{pmatrix}
1&a & 1 &0 &0\\
0&1 & 0 & 0 &1\\
1&a+1 & 0 &1 &0
\end{pmatrix}$
&
$\begin{matrix}
&K_1=& (-z^{34},\;0 \;, \; z^{14} \; ,& z^{13},\; 0,\; 0,\; 0) \;\; & \; & \\
&K_2=& (0,-z^{345}, \; a z^{245},& [1+a]z^{235},\; z^{234}, 0, 0)& \; &\\
&K_3=& ([c_1+c_3]z^{4567},\; 0,& 0, \; -c_1 z^{1567}, c_2 z^{1467},& \; -z^{1457},& \;z^{1456})
\end{matrix}$\\
\hline
\hline
6
& $\begin{pmatrix}
-1&1 & -1 &0&0 &0\\
1&0&a & 1 & 0 &0\\
0&0&1 & 0 &0 &1\\
1&0&a+1 & 0 &1 &0\end{pmatrix}$
&
$\begin{matrix}
&\,K_1= (z^{245},& z^{145},& 0,& -z^{125},& -z^{124},& 0,&0,0) \end{matrix}$
$\begin{matrix}
&K_2= (0,& z^{3456},& z^{2456},& -a z^{2356},& -[a+1]z^{2346},& -z^{2345},&0,0)
\end{matrix}$
\\
\hline
\hline
6
& $\begin{pmatrix}
1&a & 1 &0&0 &0\\
2&2a+1&0 & 1 & 0 &0\\
1&a+1&0 & 0 &1 &0\\
0&1&0 & 0 &0 &1
\end{pmatrix}$
& $\begin{matrix}
&\,K_1= (z^{345},& 0,& -z^{145},& - 2z^{135},& - z^{134},& 0,&0,0) \end{matrix}$
$\begin{matrix}
&K_2= (0,& z^{3456},& -az^{2456},& -[2a+1] z^{2356},& -[a+1]z^{2346},& -z^{2345},&0,0)
\end{matrix}$ \\
\hline
\hline
6
& $\begin{pmatrix}
1&a & 1 &0&0 &0\\
1&a+1&0 & 1 & 0 &0\\
1&a+2&0 & 0 &1 &0\\
0&1&0 & 0 &0 &1
\end{pmatrix}$
& $\begin{matrix}
&\,K_1= (z^{345},& 0,& -z^{145},& - z^{135},& - z^{134},& 0,&0,0) \end{matrix}$
$\begin{matrix}
&K_2= (0,& z^{3456},& -az^{2456},& -[a+1] z^{2356},& -[a+2]z^{2346},& -z^{2345},&0,0)
\end{matrix}$ \\
\hline
\hline
\end{tabular}
\caption{\it Charges and vertical one-forms for $\mathbb{CP}^1$ bundles over two-dimensional SCTVs with $N$ generators. $q$ is the charge matrix for the two-dimensional SCTVs, and the charge matrix $Q$ for the $\mathbb{CP}^1$ bundle is given by \eqref{eq:qci}. The one-forms $K_i = K_{i,m} \mathcal D z^m$ are eigenvectors of $P_{ij}$, and the abbreviation $z^{ij..}=z^i z^j..$ is used. For $N=6$, only two of the three linearly independent eigenvectors have been computed. \label{tab:1}}
\end{table}
\subsection{$\mathbb{CP}^1$ bundles over two-dimensional SCTVs}
Toric $\mathbb{CP}^1$ bundles differ from $\mathbb{CP}^3$ in two important respects. First, the determinant of the symmetric matrix $g_{ab}$ \eqref{gab} is no longer constant. Consequently, the local three-form $\widetilde{\Omega}$ is no longer closed. Second, these varieties are specified by more than one moment map, which can all be used to build up a nowhere-vanishing norm of $K$. This leads to more freedom in the construction, and it is not expected that $K$ should be unique.
In this section, we first discuss $\mathbb{CP}^1$ bundles over the Hirzebruch surface $\mathbb{F}_0 = \mathbb{CP}^1 \times \mathbb{CP}^1$. This example was first studied in \cite{Larfors:2010wb} where an $SU(3)$ structure was constructed and some of the torsion classes were computed (see also \cite{Gray:2012md}). Here we compute all torsion classes and also discuss how they are affected by changes in the choice of $K$. Secondly, we present valid choices for $K$ on $\mathbb{CP}^1$ bundles over two-dimensional SCTVs with 3, 5, and 6 generators. This includes the flag manifold $\mathbb{CP}^1$ over $\mathbb{CP}^2$, which is known to allow the same type of half-flat $SU(3)$ structure that $\mathbb{CP}^3$ does \cite{Tomasiello:2007eq,Koerber:2008rx}.
\subsubsection{$\mathbb{CP}^1$ bundles over $\mathbb{F}_0$}
The charges for a $\mathbb{CP}^1$ fibration over $\mathbb{F}_a$ are
\begin{equation}
\begin{split}
\label{eq:Qn4}
Q^1&=(0,1,0,1,n^1,0) \\
Q^2&=(1,a,1,0,0,-n^2) \\
Q^3&=(0,0,0,0,1,1) ~.
\end{split}
\end{equation}
As discussed in section \ref{sec:su3top}, $\mathbb{CP}^1$ fibrations only allow $SU(3)$ structures for certain values of the parameters $n^a$. In this example, $n^1$ and $a-n^2$ must be even to obtain an even first Chern class, or equivalently an even $U(1)$ charge of $\widetilde{\Omega}$:
\begin{equation}
Q (\widetilde{\Omega}) = \left(2 + n^1 , 2+a-n^2, 2 \right) \; . \;
\end{equation}
For concreteness, we set $a=0$ from now on, referring to \cite{Larfors:2010wb} for a discussion of non-zero $a$.
The choice of basis for the generators of the $U(1)^3$ group is connected to the value of the parameters, and for the choice \eqref{eq:Qn4} the K\"ahler cone is given by $\widetilde{\xi}^a >0$ only for negative $n^1$ and positive $n^2$. Here $\widetilde{\xi}^a$ are the K\"ahler moduli that enter the moment maps
\begin{equation} \label{eq:mmaps}
\begin{split}
|z^2|^2+|z^4|^2+n^1|z^5|^2&=\widetilde{\xi}^1\\
|z^1|^2+|z^3|^2-n^2|z^6|^2&=\widetilde{\xi}^2\\
|z^5|^2+|z^6|^2&=\widetilde{\xi}^3 \ .
\end{split}
\end{equation}
We expand upon this issue in appendix \ref{sec:Mori} (see also \cite{Larfors:2010wb} and \cite{Denef:2008wq}).
We choose $\hat{K}$ as a linear combination of the $P_{ij}$ eigenvectors $K_i$ listed in the second row of table \ref{tab:1}. These one-forms have different $U(1)$ charges, none of which is half of that of $\widetilde{\Omega}$:
\begin{equation}
Q (K_1) = \left(0, 2, 0 \right) \; ,\;
Q (K_2) = \left(2, 0, 0 \right)\; ,\;
Q (K_3) = \left(1, 1+a, 2 \right) \; .
\end{equation}
Noting that the parameters $n^a$ can be used to tune the first two components of $Q (\widetilde{\Omega})$, but not the last, we restrict our ansatz to
\begin{equation} \label{eq:hatK}
\hat{K} = \alpha_1 K_1 + \alpha_2 K_2 \; .
\end{equation}
For $a=0$, $K_1$ and $K_2$ are orthogonal and (since $n^1 \le 0$ and $n^2 \ge 0$)
\begin{equation}
|K_1|^2 = |z^1|^2+|z^3|^2 \ge \widetilde{\xi}^2 > 0 \; , \;
|K_2|^2 = |z^2|^2+|z^4|^2\ge \widetilde{\xi}^1 > 0\; .
\end{equation}
Hence $\hat{K}$ has nowhere vanishing norm if we pick $\alpha_{1,2}$ that cannot be simultaneously zero.
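As a cross-check of these bounds (our own numerical illustration; the moduli values and twist parameters are arbitrary, subject to the stated sign and parity conditions), one can sample solutions of the moment maps \eqref{eq:mmaps} with $a=0$ and verify that $|K_1|^2$ and $|K_2|^2$ stay bounded away from zero whenever $n^1 \le 0 \le n^2$:

```python
import numpy as np

rng = np.random.default_rng(2)
n1, n2 = -2, 2                  # twist parameters (n1 <= 0 <= n2, both even)
xi1, xi2, xi3 = 1.3, 0.8, 2.1   # Kaehler moduli, all positive

for _ in range(100):
    # Sample |z^5|^2, |z^6|^2 on the fibre moment map |z5|^2 + |z6|^2 = xi3.
    s5 = xi3 * rng.uniform()
    s6 = xi3 - s5
    # The remaining moment maps then fix the base norms:
    #   |K_2|^2 = |z^2|^2 + |z^4|^2 = xi1 - n1 |z^5|^2
    #   |K_1|^2 = |z^1|^2 + |z^3|^2 = xi2 + n2 |z^6|^2
    K2_sq = xi1 - n1 * s5
    K1_sq = xi2 + n2 * s6
    assert K1_sq >= xi2 > 0 and K2_sq >= xi1 > 0
```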
Turning to the charge condition, we find that $K$ has half the charge of $\widetilde{\Omega}$ if
\begin{equation}
Q(\alpha_1) = \frac{1}{2}(2+n^1, -2-n^2, 2) \; , \;
Q(\alpha_2) = \frac{1}{2}(-2+n^1, 2-n^2, 2) \; .
\end{equation}
The simplest solution to these constraints is $\alpha_1 = z^6 \; , \; \alpha_2=z^5$,
which satisfies the charge condition if we impose $n^1 = -2 \; , \; n^2 = 2$. This choice of $\alpha_i$ was studied in \cite{Gray:2012md,Larfors:2010wb}, where all torsion classes but $\mathcal W_2$ were computed. Using our improved understanding of the $SU(3)$ metric we can now compute this torsion class. In addition, we generalise the choice of $K$ to
\begin{equation} \label{eq:choice1}
\alpha_1 = B_1 z^6 \; , \; \alpha_2=B_2 z^5
\end{equation}
where $B_i$ are real and constant. Non-constant $B_i$ lead to the same changes of the torsion classes as for $\mathbb{CP}^3$; they are all non-zero and the Lie forms are not exact.
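The charge bookkeeping behind this choice is easy to verify mechanically. The sketch below (our own illustrative check, using the charge matrix \eqref{eq:Qn4} with $a=0$ and the $K_i$ charges as stated above) confirms that $\alpha_1 K_1$ and $\alpha_2 K_2$ both carry half the charge of $\widetilde{\Omega}$ once $n^1=-2$, $n^2=2$:

```python
n1, n2, a = -2, 2, 0

# U(1)^3 charges of z^1..z^6; rows are Q^1, Q^2, Q^3 of eq. (Qn4).
Q = [[0, 1, 0, 1, n1, 0],
     [1, a, 1, 0, 0, -n2],
     [0, 0, 0, 0, 1, 1]]

def charge(monomial):
    # monomial: list of 1-based coordinate indices, e.g. z^6 -> [6]
    return tuple(sum(q[i - 1] for i in monomial) for q in Q)

Q_Omega = (2 + n1, 2 + a - n2, 2)     # stated charge of Omega-tilde
# Q(Omega) equals the sum of all coordinate charges:
assert Q_Omega == tuple(sum(row) for row in Q)

Q_K1, Q_K2 = (0, 2, 0), (2, 0, 0)     # stated charges of K_1, K_2
half = tuple(c // 2 for c in Q_Omega)

# alpha_1 = z^6 multiplies K_1, alpha_2 = z^5 multiplies K_2:
assert tuple(x + y for x, y in zip(charge([6]), Q_K1)) == half
assert tuple(x + y for x, y in zip(charge([5]), Q_K2)) == half
```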
We now insert $K = \hat{K}/\sqrt{p}$, where $p=|\hat{K}|^2$, in \eqref{eq:su3global} to get the $SU(3)$ structure. The contribution of this $K$ to the $SU(3)$ metric is positive semidefinite; in addition to four zero eigenvalues, the matrix $\mbox{Re}(K \bar{K})$ has two equal positive eigenvalues $E_{K\bar{K}}$, that in the patch $z^1, z^4, z^6 \neq 0$ are given by
\begin{equation}
\begin{split}
E_{K\bar{K}} = \frac{1}{|z^{14}|^2 p} \Big[
&B_2^2 |z^{15}|^2 |K_2|^2 +
B_1^2 |z^{46}|^2 |K_1|^2 +\\
&4 |z^5|^2 \big(B_2^2 |z^{125}|^2 + B_1^2 |z^{346}|^2 + 2 B_1 B_2 \mbox{Re} (\bar{z}^{136} z^{245})\big)
\Big] \\
\ge \frac{1}{|z^{14}|^2 p} \Big[
&B_2^2 |z^{15}|^2 |K_2|^2 +
B_1^2 |z^{46}|^2 |K_1|^2 +
4 |z^5|^2 ( B_2 |z^{125}| - B_1 |z^{346}|)^2 \Big]
\; ,
\end{split}
\end{equation}
where the shorthand $z^{ij..} = z^i z^j...$ is used. Thus, with $\alpha>0$ and $\beta^2\ge\alpha$ we are guaranteed a positive definite metric. To study the bounds of $\beta^2$ in more detail, we must analyse the eigenvalues of $G$, which is computationally expensive in this example.
${\rm d} K$ is of the form \eqref{eq:dKgen}, and thus contributes to the first, third and fourth torsion classes. In addition, we find that $\gamma$ does not set the phase of ${\rm d} \Omega$, which shows that $\mathcal W_5$ is non-zero. All in all, the torsion classes are given by \eqref{sctvtorsions}, where in the patch $z^1, z^4, z^6 \neq 0$
\bea
\nonumber
&\mathcal W_1^0 = -i \frac{2 B_1B_2\sqrt{\mbox{det} g_{ab}}}{3 \alpha \beta p}\\ \nonumber
&\mathcal W_2^0= \mathcal W_1^0 \left\{
(2\beta^2 - \alpha)\left(
J +\frac{3i \beta^2}{2} K \wedge \bar{K}
\right) + \alpha^2 \tilde{\xi}^3 \left(-3 \frac{|K_1|^2|K_2|^2}{\mbox{det}g_{ab}} j + \frac{i}{|z^6|^2} \mathcal D z^5 \wedge \mathcal D \bar{z}^5 \right)
\right\}
\\
&\mathcal W_3^0 = -\mathcal W_4^0 \wedge \left(J + i \beta^2 K \wedge \bar{K} \right)
\\ \nonumber
&\mathcal W_4^0 = \frac{1}{4\beta^2} \left\{{\rm d} \ln p + \frac{2 (B_1^2 \tilde{\xi}^2 -B_2^2 \tilde{\xi}^1)}{p} \mathrm{Re}(\bar{z}^5 \mathcal{D}z^5)\right\} \\ \nonumber
&\mathcal W_5^0 = 2 \beta^2 \mathcal W_4^0 + {\rm d} \ln p - \frac{1}{2} {\rm d} \ln \mbox{det} g_{ab} \; .
\eea
For any constant $B_1, B_2$, $\mathcal W_4^0$ and $\mathcal W_5^0$ are closed, and hence exact since $b^1=0$. Primitivity of $\mathcal W_2^0$ and $\mathcal W_3^0$ is readily checked.
Since both $\mathcal W_4^0$ and $\mathcal W_5^0$ are exact, we can choose $\alpha$, $\beta$ to put a linear combination of them to zero. However, since the Lie forms are not proportional, there is no parametric choice that gives a half-flat $SU(3)$ structure. Similarly, complex and symplectic $SU(3)$ structures cannot be reached. Comparing with the maximally symmetric string vacua of section \ref{sec:survey}, we thus see that this $SU(3)$ structure does not allow $\mathcal N=1$ vacua. Calibrated $\mathcal N=0$ vacua, however, may be allowed. Particularly, by choosing $\alpha, \beta$ and $\gamma$, we can set ${\rm d} \mbox{Re}(e^{-2 \phi} \Omega) = 0 = {\rm d} (e^{-2\phi} J \wedge J)$ as required for calibrated vacua of type IIB (with O5 planes) or heterotic string theory \cite{Lust:2008zd,Held:2010az}. These constraints can be met without violating the positivity of the $SU(3)$ metric. Moreover, although $\mathcal W_2^0$ is slightly more complicated than for $\mathbb{CP}^3$, its form is similar to that required for calibrated vacua of no-scale type listed in table \ref{tab:su3N0}. We leave a more detailed investigation of this question, as well as the other constraints for calibrated vacua, to the future.
\subsubsection{Additional examples of $\mathbb{CP}^1$ bundles}
Finally, let us present some data on $SU(3)$ structures on $\mathbb{CP}^1$ bundles over two-dimensional SCTVs with three, five and six generators. For each example, we present a choice of $K$ that meets the three conditions specified in section \ref{sec:su3constr}, thus confirming that such a form can be found on toric $\mathbb{CP}^1$ bundles. In contrast to the previous examples, we have not been able to find a $K$ whose exterior derivative is of the form \eqref{eq:dKgen}, nor one that leads to an $SU(3)$ structure with exact Lie forms. In all other respects, the analysis parallels the previous section, so we will only present the results of our study.
\paragraph{$\mathbb{CP}^1$ over $\mathbb{CP}^2$:}
When viewed as a twistor space or a coset, $\mathbb{CP}^1$ over $\mathbb{CP}^2$ allows a half-flat $SU(3)$ structure \cite{Tomasiello:2007eq,Koerber:2008rx}. Consequently, one would expect there to be an equally simple choice for $K$ as there is on $\mathbb{CP}^3$. Curiously, such a simple $K$ has not been found. The reason is that none of the $P_{ij}$ eigenvectors presented in table \ref{tab:1} has nowhere-vanishing norm, and so a rather involved linear combination of the $K_i$ is needed to construct $K$.
One possibility is
\begin{equation}
\hat{K} = \alpha_2 \sqrt{\xi_2} K_2 + \alpha_3 K_3 \; .
\end{equation}
If $\alpha_2$, $\alpha_3$ are pure phases then the norm of this form is $|\hat{K}|^2 = \mbox{det}g_{ab} \neq 0$. We note that the exact contributions that the non-constant $\mbox{det}g_{ab}$ give to the Lie forms can be compensated by choices of $\alpha$ and $\beta$ as in \eqref{eq:dpcontr}.
To satisfy also the charge requirement on $K$, one possible choice is to take
\begin{equation} \label{eq:aphase}
\alpha_2 = z^4/|z^4| \; \; , \; \; \alpha_3 = |z^4|/z^4 \; .
\end{equation}
This choice of $K$ is valid for any odd value of the twist parameter $n^1$. Computing the torsion classes for undetermined $\alpha_i$ is a daunting task, as ${\rm d} K$ is not of the simple form \eqref{eq:dKgen}. Specialising to $n^1=1$ and using \eqref{eq:aphase}, we find they are all non-vanishing, and $\mathcal W_{4,5}$ are not closed. The expressions for the torsion classes are not particularly illuminating, so we do not reproduce them here.
\paragraph{Toric $\mathbb{CP}^1$ bundle, $N=5$: }
This example is quite similar to the $\mathbb{CP}^1$ bundle over $\mathbb{F}_0$. $K$ can be constructed using the orthogonal forms $K_1$ and
\begin{equation} \label{eq:k2perp}
K_2^{\perp} = K_2-\frac{\bar{K_1}\cdot K_2}{|K_1|^2} K_1 \; ,
\end{equation}
where $K_{1,2}$ can be found in table \ref{tab:1}. $\widetilde{\Omega}$ has even charge if and only if $n^1, n^3+a$ are odd and $n^2$ is even. The (1,0)-form
\begin{equation}
\hat{K}= z^7 K_1 + z^6 K_2^{\perp}
\end{equation}
then has the right charge $Q (z^7 K_1) = Q (z^6 K_2) = \frac{1}{2} Q (\widetilde{\Omega})$ if we impose $n^1 = -3$, $n^2= 2$, and $n^3=a-1$. With these parameter values, we identify the
basis of the Mori cone and the corresponding charge basis $\widetilde{Q}^1=Q^1-Q^3$, $\widetilde{Q}^2=Q^2-n^2Q^4$, $\widetilde{Q}^{3,4}=Q^{3,4}$ (see appendix \ref{sec:Mori}), and use the result to show that the norms $|K_{1,2}|$ are non-zero whenever $a\le-1$. Since $z^6$ and $z^7$ cannot be zero simultaneously, we have then constructed a $K$ that has the required properties.
\paragraph{Toric $\mathbb{CP}^1$ bundle, $N=6$: }
There are three two-dimensional SCTVs with six generators, and hence we get three different three-dimensional $\mathbb{CP}^1$ fibrations. Here we show how $K$ can be chosen for the first of these. The analysis for the two other examples is completely analogous.
We construct $K$ using $K_{1,2}$ from table \ref{tab:1}. Since these are not orthogonal, we first define $K_2^{\perp}$ as in \eqref{eq:k2perp}. Inspired by the previous examples, we take
\begin{equation}
\hat K = z^7 K_1 + z^8 K_2^{\perp}~,
\end{equation}
and impose the charge constraint $Q (z^7 K_1) = Q (z^8 K_2) = \frac{1}{2} Q (\widetilde{\Omega})$. This equation has the solution
\begin{equation} \label{eq:6charge}
n^1 = -1; \quad n^2= a-2; \quad n^3=2; \quad n^4 = a-1
~.
\end{equation}
To show that the norm of $\hat K$ is non-vanishing, we note that $z^7$ and $z^8$ cannot be zero simultaneously. Moreover, the norms of $K_1$ and $K_2^{\perp}$ are bounded from below by the K\"ahler moduli $\widetilde{\xi}^a$, where the $U(1)$ charge basis associated with the parameter values \eqref{eq:6charge} is $\widetilde{Q}^2=Q^2-Q^4$, $\widetilde{Q}^3=Q^3 - n^3 Q^5$, $\widetilde{Q}^{1,4,5}=Q^{1,4,5}$.
\section{Discussion}
\label{sec:conclusion}
Six-dimensional $SU(3)$ structure manifolds have a long history in string theory compactifications, and have been used to construct a variety of four-dimensional vacua where supersymmetry is either preserved or spontaneously broken. Since $SU(3)$ structure manifolds can accommodate fluxes, these vacua are believed to have fewer moduli than vacua arising from compactifications on CY manifolds. However, confirming this assertion is difficult, since in contrast to the great number of CY manifolds, comparably few explicit examples of $SU(3)$ structure manifolds exist. One obstacle in the construction of example manifolds is the lack of integrable complex structures which hinders the use of algebraic geometry. In this paper, we have used the fact that toric varieties allow both integrable and non-integrable almost complex structures to construct new examples of $SU(3)$ structure manifolds. In doing so we show that the construction of \cite{Larfors:2010wb} extends to an infinite class of toric varieties, which is an important step to a more systematic study of toric $SU(3)$ structures.
We have shown that $\mathbb{CP}^3$ and all toric $\mathbb{CP}^1$ fibrations allow $SU(3)$ structures, since they have even first Chern class.\footnote{For toric $\mathbb{CP}^1$ fibrations, this is true if the parameters of the associated fan are chosen accordingly.} In contrast, toric $\mathbb{CP}^2$ fibrations do not allow $SU(3)$ structures. The $SU(3)$ structures can be constructed using the method of \cite{Larfors:2010wb}, which is based on a local one-form $K$, and we argue that this form exists as long as $c_1$ is even. Indeed, we have constructed $K$ explicitly for $\mathbb{CP}^3$, and $\mathbb{CP}^1$ bundles over two-dimensional SCTVs with up to six generators. These $K$ are not claimed to be unique, and in two of the examples we investigate how simple modifications of $K$ lead to changes in the torsion classes. In general, we have found that the torsion classes simplify if $d K$ satisfies the relation \eqref{eq:dKgen}. A better understanding of the relation between the choice of $K$ and the resulting torsion classes would certainly be desirable, and we hope to return to this in the future. It would also be interesting to investigate if alternative methods of constructing $SU(3)$ structures can be used to derive global constraints on the torsion classes. In this respect, it is interesting to note that $\mathbb{CP}^1$ fibrations over four-dimensional Riemannian spaces are twistor spaces, so it is possible twistor techniques can be used in such studies.
Since the method we use has a parametric freedom, specified by three real functions $\alpha, \beta$ and $\gamma$, it is possible to tune toric $SU(3)$ structures to some degree. In accordance with \cite{Larfors:2010wb}, we show that the exterior derivative of $J$ is proportional to $(\alpha+\beta^2)$ when $\alpha$ and $\beta$ are constant. Moreover, we find that the phase of $\mathcal W_1$ and $\mathcal W_2$ is set by $\gamma$ and that exact contributions to the Lie forms $\mathcal W_4$ and $\mathcal W_5$ can to some extent be compensated by $\alpha$ and $\beta$; one exact Lie form can always be set to zero by a judicious choice of parameters, and if in addition the quotient $\mathcal W_4/\mathcal W_5$ is constant, both $\mathcal W_4$ and $\mathcal W_5$ can be removed.\footnote{Since SCTVs have vanishing odd Betti numbers, it is enough to prove that the Lie forms are closed to ascertain that they are exact.} However, an important constraint on the parametric freedom of $SU(3)$ structures comes from positivity of the associated metric. We have shown that the $SU(3)$ metric is related to the metric inherited from $\mathbb{C}^n$ by
\begin{equation} \nonumber
G_{mn} =\alpha \left[ \widetilde{G}_{mn} + \left(\frac{\beta^2}{\alpha}-1\right) \mbox{Re} \left(K_m \bar{K}_{n}
\right) \right]
\; ,
\end{equation}
so that metric positivity requires $\alpha > 0$. The parameter limit $\alpha = - \beta^2 < 0$ is thus not attainable, and the toric $SU(3)$ structures do not have a generic symplectic limit, contrary to what the expression for the torsion classes would suggest (as shown in figure \ref{fig:parbound}).
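As a quick consistency check of this statement, suppose $K$ is normalised so that, in an orthonormal frame for $\widetilde{G}_{mn}$, the matrix $\mathrm{Re}(K_m \bar{K}_{n})$ is the orthogonal projector onto the two-plane spanned by $\mathrm{Re}\,K$ and $\mathrm{Im}\,K$. The eigenvalues of $G_{mn}$ are then
\begin{equation*}
G \sim \mathrm{diag}\left( \beta^2, \beta^2, \alpha, \alpha, \alpha, \alpha \right) ,
\end{equation*}
with $\beta^2$ along the plane defined by $K$ and $\alpha$ on the four transverse directions. Positivity of the $SU(3)$ metric is therefore equivalent to $\alpha>0$ (together with $\beta \neq 0$), in agreement with the shaded region of figure \ref{fig:parbound}.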
\begin{figure}
\centering
\includegraphics[width=9cm]{parameterbounds3}
\caption{\it Toric $SU(3)$ structures are parameterised by the real functions $\alpha$ and $\beta$. Metric positivity always restricts the parameters to the shaded area $\alpha>0$, and sometimes further to $\beta^2\ge\alpha$. The dash-dotted blue line is the symplectic limit $J = \alpha \tilde{J}|$, which is clearly excluded by metric positivity.}
\label{fig:parbound}
\end{figure}
To complement our general analysis, we compute the torsion classes in full for three examples, and find that all are in general non-vanishing. The toric $SU(3)$ structure we construct on $\mathbb{CP}^1 \hookrightarrow \mathbb{CP}^2$ has non-exact Lie forms and so does not agree with the half-flat $SU(3)$ structure found in previous studies. In contrast, on $\mathbb{CP}^3$ and $\mathbb{CP}^1 \hookrightarrow \mathbb{F}_0$, we show that $K$ can be chosen so that the Lie forms are exact, that $|\mathcal W_2| \propto |\mathcal W_1|$, and that, for constant $\alpha, \beta$, $\mathcal W_3 = -\mathcal W_4 \wedge (J + i \beta^2 K \wedge \overline{K})$. On $\mathbb{CP}^3$, $K$ can be simplified further, leading to a restricted half-flat $SU(3)$ structure, in accordance with previous studies. The $SU(3)$ structure on $\mathbb{CP}^1 \hookrightarrow \mathbb{F}_0$ is less adaptable, and always retains non-zero $\mathcal W_1$ and $\mathcal W_2$, in addition to at least one of the Lie forms.
The existence of toric $SU(3)$ structures opens up many applications, even though, contrary to the CY case, an $SU(3)$ structure is not enough to prove that string compactification results in a four-dimensional vacuum. In many cases, the equations that define string vacua can be translated to necessary constraints on the torsion classes of the $SU(3)$ structure, and our example manifolds can be compared with these constraints. In particular, it is a well-known fact that the restricted half-flat $SU(3)$ structures on $\mathbb{CP}^1 \hookrightarrow \mathbb{CP}^2$ and $\mathbb{CP}^3$ match the requirements for several vacua, including supersymmetric ones. For $\mathbb{CP}^3$, the less constrained choices for $K$ mentioned above do not lead to different types of string vacua.
We have not found any new SCTVs that match the conditions for supersymmetric string vacua. However, we have found that $\mathbb{CP}^1 \hookrightarrow \mathbb{F}_0$ matches at least some of the necessary constraints for calibrated $\mathcal N=0$ vacua, if the parameters $\alpha, \beta, \gamma$ are chosen accordingly. A more complete study is needed to see if all Bianchi identities and equations of motion for such vacua are satisfied, and we hope to come back to this in the future. It would also be interesting to investigate if other non-supersymmetric string vacua can be constructed on this manifold. Of particular interest are dS vacua, which are notoriously difficult to find in string theory. Such vacua require negative scalar curvature of the internal space, so it is interesting to note that $\alpha, \beta$ can be chosen so that the contribution from the Lie forms to the scalar curvature of $\mathbb{CP}^1 \hookrightarrow \mathbb{F}_0$ is negative definite. On the other hand, this $SU(3)$ structure is neither half-flat nor symplectic, as has been assumed for known dS solutions. Consequently, a new take on such constructions would be required to investigate whether this toric $SU(3)$ structure could be of relevance for dS vacua in string theory.
\subsection*{Acknowledgements}
This research is supported by the Swedish Research Council (VR) under the contract 623-2011-7205. It is a pleasure to thank R.~Bryant, P.~Candelas, X.~de~la~Ossa, R.~Davies, E.~Sharpe and D.~Tsimpis for illuminating discussions at various stages of this project. Additionally, I am grateful to D.~Andriot, J.~Bl{\aa}b\"ack, U.~Danielsson and G.~Dibitetto for interesting remarks regarding dS vacua on $SU(3)$ structure manifolds.
\newpage
\begin{appendix}
\section{Mori and K\"ahler cones}\label{sec:Mori}
For the constructions of $K$, it is crucial to know that the K\"ahler moduli $\xi^a$ are strictly positive. This condition is satisfied within the K\"ahler cone for any non-singular manifold. However, the identification of K\"ahler moduli depends on the choice of $U(1)$ charges $Q^a$, as we now explain. To do so, we need to introduce some concepts from complex geometry. Most of the material in this appendix, which is included for the reader's convenience, can be found in the pedagogical review \cite{Denef:2008wq}.
A divisor $D$ is a formal sum of holomorphic hypersurfaces $S^I$, that are defined locally in a coordinate patch $U^{\alpha}$ by a holomorphic equation $f^I_{\alpha}=0$, such that $f^I_{\alpha}/f^I_{\beta}$ has no zeros or poles on the intersection $U^{\alpha} \cap U^{\beta}$. For a toric variety, each $z^i$ defines a divisor:
\begin{equation}
D_i: z^i=0,~~~~ i=1,\dots ,6~.
\end{equation}
Divisors are linearly equivalent, $D_i = D_j$, if the quotient of their defining equations is a globally defined rational function. Using this, the $U(1)$ gauge invariances can be used to show that there are only $s$ linearly independent divisors $D_i$ on an SCTV, where $s$ is the number of $U(1)$ actions.
Transversal intersections of divisors, $D_i D_j$, are holomorphic curves. The integral of the K\"ahler form over a holomorphic curve measures the area of the curve, and is therefore non-negative:
\begin{equation}
\int_C J \ge 0,~~~~ C\mbox{ holomorphic curve}.
\end{equation}
It can be shown that the intersections $D_i D_j$ generate the full set of two-cycle classes with holomorphic representatives, which is known as the Mori cone. However, since not all divisors need be linearly independent, not all $D_i D_j$ are linearly independent either. One can define a basis $C^a$, $a=1,2,\dots ,s$, so that all $D_i D_j$ can be expanded in $C^a$ with non-negative coefficients:
\begin{equation} \label{eq:didj}
D_i D_j = \sum_{a=1}^s b_{ij}^a C^a
\quad \mbox{where} \quad
b_{ij}^a \ge 0
~.
\end{equation}
The $C^a$ constitute a basis for the Mori cone, and we can use them to define K\"ahler moduli
\begin{equation} \label{eq:kahmod}
\xi^a = \int_{C^a} J \ge 0~.
\end{equation}
Here we have used that the $C^a$'s are holomorphic curves to infer that the $\xi^a$ are non-negative.
Given a basis $C^a$ for the Mori cone, one can always change the charge basis so that
\begin{equation} \label{eq:Ca}
D_i C^a=Q^a_i
~.
\end{equation}
It is important to note that it is only when we express the moment maps in terms of this charge basis that we can conclude that the parameters $\xi^a$ match \eqref{eq:kahmod}. If there are parameters in the charges $Q^a$ ($a, n^a$ in the examples above), their sign must be determined before it can be concluded that the expansion coefficients $b_{ij}^a$ in \eqref{eq:didj} are non-negative. In other words, which charge basis is associated with $C^a$ depends on whether the parameters $a, n^a$ are positive or negative.
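As a simple illustration of these statements, consider $\mathbb{CP}^{2}$, with homogeneous coordinates $z^{1}, z^{2}, z^{3}$ of charge $Q_{i}=1$ under the single $U(1)$ action. All three divisors $D_{i}$ are linearly equivalent, and any transversal intersection $D_{i}D_{j}$ is a line $C$, which generates the Mori cone. The standard charge basis already satisfies \eqref{eq:Ca}, since
\begin{equation*}
D_{i}C = 1 = Q_{i}
~,
\end{equation*}
and the single K\"ahler modulus $\xi =\int_{C}J$ measures the area of the line. Here there are no undetermined parameters in the charges, so no change of basis is required.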
\end{appendix}
\newpage
\bibliographystyle{JHEP}
\providecommand{\href}[2]{#2}\begingroup\raggedright

\section{Introduction}
We consider the following Cauchy problem
\begin{equation}
\left\{
\begin{array}{c}
^{C}D_{0|t}^{\gamma _{1}}u-\Delta u=f(v(t,.)),\;\;t>0,\;x\in \mathbb{R}^{N}, \\[5pt]
^{C}D_{0|t}^{\gamma _{2}}v-\Delta v=g(u(t,.)),\;\;t>0,\;x\in \mathbb{R}^{N},
\end{array}
\right. \label{sys1}
\end{equation}
subject to the initial conditions
\begin{equation}
\left\{
\begin{array}{l}
u(0,x)=u_{0}(x),\;u_{t}(0,x)=u_{1}\left( x\right) ,\;\;x\in \mathbb{R}^{N}, \\[5pt]
v(0,x)=v_{0}(x),\;v_{t}(0,x)=v_{1}(x),\;\;x\in \mathbb{R}^{N},
\end{array}
\right. \label{initdat}
\end{equation}
where $^{C}D_{0|t}^{\alpha }u$ denotes the Caputo derivative, defined for a
function $u$ of class $C^{2}$, as
\begin{equation*}
\left( ^{C}D_{0|t}^{\alpha }u\right) \left( t\right) :=\frac{1}{\Gamma
\left( 2-\alpha \right) }\displaystyle\int_{0}^{t}\frac{u_{tt}\left(
s,.\right) }{\left( t-s\right) ^{\alpha -1}}\,ds,\text{ }1<\alpha <2\;\text{(see, e.g. \cite{SKM}),}
\end{equation*}
$\Delta $ is the Laplacian, $f(v)=\pm \left\vert v\right\vert ^{p-1}v$ or
$\pm \left\vert v\right\vert ^{p}$, $g(u)=\pm \left\vert u\right\vert ^{q-1}u$
or $\pm \left\vert u\right\vert ^{q}$, $p,q\geq 1$, and $u_{0}$, $v_{0}$,
$u_{1}$, $v_{1}$ are given initial data.\newline
Observe that system (\ref{sys1}) interpolates between the reaction-diffusion
system ($\gamma _{1}=\gamma _{2}=1$) and the hyperbolic system ($\gamma
_{1}=\gamma _{2}=2$).\newline
Before we present our results and comment on them, let us dwell on some
related existing results.\newline
Escobedo and Herrero \cite{Escobedo} studied the global existence and
blowing-up solutions of the system
\begin{equation} \label{EscoHerr}
\left\{
\begin{array}{c}
u_{t}-\Delta u=v^{p}, \quad t>0, x\in \mathbb{R}^{N}, \bigskip \\
v_{t}-\Delta v=u^{q}, \quad t>0, x\in \mathbb{R}^{N}
\end{array}
\right.
\end{equation}
In particular, for
\begin{equation*}
pq >1, \quad \frac{N}{2} \leq \frac{\max\lbrace p, q \rbrace +1}{pq - 1},
\end{equation*}
they have shown that every nontrivial solution of \eqref{EscoHerr} blows-up
in a finite time $T^{*}=T^{*}(u, v)$, and
\begin{equation*}
\limsup_{t \rightarrow T^{*}}\Vert u(t)\Vert_{\infty}= \limsup_{t
\rightarrow T^{*}}\Vert v(t)\Vert_{\infty}= + \infty.
\end{equation*}
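To illustrate the strength of this condition, consider the symmetric case $p=q>1$. Since then
\begin{equation*}
\frac{\max \lbrace p, q \rbrace +1}{pq - 1} = \frac{p+1}{p^{2}-1} = \frac{1}{p-1},
\end{equation*}
the blow-up condition $\frac{N}{2} \leq \frac{1}{p-1}$ reduces to $1 < p \leq 1+\frac{2}{N}$, which is the classical Fujita exponent for the single semilinear heat equation $u_{t}-\Delta u = u^{p}$.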
Some related results concerning global existence or blowing-up solutions can
be found in \cite{FilaUda}, \cite{Samarski}, \cite{PangSunWang}, \cite{Pozio},
\cite{Redlinger}, etc. In particular, see the review papers \cite{Denglevine},
\cite{Bandle} and the authoritative paper \cite{Pohozaev}.\newline
Blowing-up solutions and global solutions for time-fractional differential
systems have been studied, for example, in \cite{KLT}, \cite{ZhangQuan},
\cite{HakemBer}, \cite{Zacher}, \cite{EidelKoch}, \cite{AlmeidaEJDE},
\cite{AlmeidaJMAA}, \cite{AlmeidaDIE}, \cite{YF1}, \cite{HirataMiao}.\newline
Concerning the system of wave equations
\begin{equation}
\left\{
\begin{array}{c}
u_{tt}-\Delta u=|v|^{p},\quad 0<t<T,\;x\in \mathbb{R}^{N}, \\
v_{tt}-\Delta v=|u|^{q},\quad 0<t<T,\;x\in \mathbb{R}^{N},
\end{array}
\right. \label{SystWave}
\end{equation}
subject to initial data
\begin{equation}
\left\{
\begin{array}{c}
u(0,x)=f(x),\quad u_{t}(0,x)=g(x),\;\;x\in \mathbb{R}^{N}, \\
v(0,x)=h(x),\quad v_{t}(0,x)=k(x),\;\;x\in \mathbb{R}^{N},
\end{array}
\right. \label{incd}
\end{equation}
where $f,g,h,k\in C_{0}^{\infty }(\mathbb{R}^{N})$, we may mention the works
\cite{KDeng}, \cite{Deng} and \cite{DelSantoMitidieri}. For $N=3$, the
following optimal results were obtained in \cite{DelSantoMitidieri}:\newline
$\triangleright$ If $p, q>1$ and
\begin{equation*}
\max \left\lbrace \frac{p+2+q^{-1}}{pq-1}, \frac{q+2+p^{-1}}{pq-1}
\right\rbrace >1,
\end{equation*}
then the classical solution to \eqref{SystWave}-\eqref{incd} blows-up in a
finite time.\newline
$\triangleright$ If $p, q>1$ and
\begin{equation*}
\max \left\lbrace \frac{p+2+q^{-1}}{pq-1}, \frac{q+2+p^{-1}}{pq-1}
\right\rbrace <1,
\end{equation*}
then there exists a global classical solution to \eqref{SystWave}-\eqref{incd}
for sufficiently ``small'' initial data.\newline
Our interest in \eqref{sys1} stems from the fact that it interpolates
different situations; for example, reaction-diffusion systems with
fractional derivatives can model chemical reactions taking place in porous
media. In this case, fractional (nonlocal) terms with order in $(0, 1)$
account for the anomalous diffusion \cite{Magin}, \cite{Metzler}.
Experimental results show that several complex systems exhibit nonlocal
dynamics. \newline
On the other hand, equations/systems of fractional differential equations
with order in $(1, 2)$ have been studied in \cite{Mainardi}, \cite{Tarasov},
\cite{ChenHolm}, etc. Examples include mechanical, acoustical, biological
phenomena, marine sediments, etc. \cite{Straka}, \cite{Kappeler}. \newline
In the present paper, we consider the problem \eqref{sys1}-\eqref{initdat}
and present conditions, relating the space dimension $N$ with the parameters
$\gamma _{1}$, $\gamma _{2}$, $p$, and $q$, for which the solution of
\eqref{sys1}-\eqref{initdat} exists globally in time and satisfies
$L^{\infty }$-decay estimates. We also investigate solutions that blow up in
finite time when the initial data have positive average. Our study of global
existence employs the mild formulation of the solution via the Mittag-Leffler
function, while we use the test function approach due to Mitidieri and
Pohozaev \cite{Pohozaev} for the case of blowing-up solutions. The test
function approach has been used by several authors; see, for instance,
\cite{KLT}, \cite{KirFin}, \cite{PangSunWang}, \cite{ZhangQuan},
\cite{Bermdemp}. To the best of our knowledge, there do not exist global
existence and large time behavior results for the time-fractional diffusion
system with two different fractional powers. Thus our results are new and
contribute significantly to the existing literature on the topic. \newline
The rest of this paper is organized as follows. In the next section, we present
some preliminary lemmas, basic facts and useful tools, such as the
time-fractional derivative and $L^{p}$-$L^{q}$ estimates of the fundamental
solution of the linear time-fractional wave equation. Section 3 contains the
main results of the paper. Finally, sections 4 and 5 are devoted to the
proofs of small data global existence and blow-up in finite time of the
solution of problem (\ref{sys1})-(\ref{initdat}).\newline
In the sequel, $C$ will be a positive constant which may have different
values from line to line. The space $L^{p}\left( \mathbb{R}^{N}\right) $
$\left( 1\leq p<\infty \right) $ will be equipped with the norm
\begin{equation*}
\left\Vert u\right\Vert _{L^{p}\left( \mathbb{R}^{N}\right) }^{p}=
\displaystyle\int_{\mathbb{R}^{N}}\left\vert u\left( t,x\right)
\right\vert ^{p}dx.
\end{equation*}
\section{Preliminaries}
The Riemann-Liouville fractional integral of order $0<\alpha <1$ of $f(t)
\in L^{1}\left( 0,T\right)$ is defined as
\begin{equation*}
\left( J_{0|t}^{\alpha }f\right) (t) =\frac{1}{\Gamma (\alpha ) }
\int_{0}^{t}\left( t-\tau \right) ^{\alpha -1}f( \tau) \, d\tau,
\end{equation*}
where $\Gamma $ stands for the usual Euler gamma function.\newline
The left-sided Riemann-Liouville derivative $D_{0|t}^{\alpha }f$ (see \cite{SKM}), for $f\in C^{m-1}(0,T)$, of order $\alpha $ is defined as follows:
\begin{equation*}
\left( D_{0|t}^{\alpha }f\right) (t)=\frac{d^{m}}{dt^{m}}\left(
J_{0|t}^{m-\alpha }f\right) (t),\text{ }t>0,\text{ }m-1<\alpha <m,\text{ }m\in \mathbb{N}.
\end{equation*}
The Caputo fractional derivative of a function $f\in C^{m}(0,T)$ is defined
as
\begin{equation*}
\left( ^{C}D_{0|t}^{\alpha }f\right) (t)=J_{0|t}^{m-\alpha }f^{(m)}(t),\text{ }t>0,\text{ }m-1<\alpha <m,\text{ }m\in \mathbb{N}.
\end{equation*}
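For instance, for a power function $f(t)=t^{\mu }$ with $\mu >m-1$, the definition above gives
\begin{equation*}
\left( ^{C}D_{0|t}^{\alpha }f\right) (t)=\frac{\Gamma \left( \mu +1\right) }{\Gamma \left( \mu +1-\alpha \right) }t^{\mu -\alpha },\text{ }m-1<\alpha <m,
\end{equation*}
while every polynomial of degree at most $m-1$ is annihilated; in particular, constants have vanishing Caputo derivative, in contrast with the Riemann-Liouville derivative.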
For $0<\alpha <1$ and $f$ of class $C^{1}$, we have
\begin{equation*}
\left( D_{0|t}^{\alpha }f\right) \left( t\right) =\frac{1}{\Gamma \left(
1-\alpha \right) }\left[ \frac{f\left( 0\right) }{t^{\alpha }}+\int_{0}^{t}
\frac{f^{\prime }\left( \sigma \right) }{\left( t-\sigma \right) ^{\alpha }}
d\sigma \right] ,
\end{equation*}
and
\begin{equation}
\left( D_{t|T}^{\alpha }f\right) \left( t\right) =\frac{1}{\Gamma \left(
1-\alpha \right) }\left[ \frac{f\left( T\right) }{\left( T-t\right) ^{\alpha
}}-\int_{t}^{T}\frac{f^{\prime }\left( \sigma \right) }{\left( \sigma
-t\right) ^{\alpha }}d\sigma \right] . \label{eq:3}
\end{equation}
The Caputo derivative is related to the Riemann-Liouville derivative for
$f\in AC\left[ 0,T\right] $ (the space of absolutely continuous functions
defined on $\left[ 0,T\right] $) by
\begin{equation*}
\left( ^{C}D_{0|t}^{\alpha }f\right) (t)=D_{0|t}^{\alpha }\left(
f(t)-f(0)\right) .
\end{equation*}
Assume that $0<\alpha <1$, $f\in C^{1}([0,T])$ and $g\in C(0,T)$. Then the
formula of integration by parts is
\begin{equation*}
\int_{0}^{T}f(t)(D_{0|t}^{\alpha
}g)(t)\,dt=\int_{0}^{T}g(t)(^{C}D_{t|T}^{\alpha }f)(t)\,dt+f(t)(J_{0\mid
t}^{1-\alpha }g)(t)\Big\vert_{t=0}^{t=T}.
\end{equation*}
The Mittag-Leffler function is defined (see \cite{SKM}) by
\begin{equation*}
E_{\alpha ,\beta }\left( z\right) =\sum_{k=0}^{\infty }\frac{z^{k}}{\Gamma
\left( \alpha k+\beta \right) }\text{, }\alpha \text{, }\beta \in \mathbb{C}
\text{, }\Re \left( \alpha \right) >0\text{, }z\in \mathbb{C};
\end{equation*}
its Riemann-Liouville fractional integral satisfies
\begin{equation*}
J_{0|t}^{1-\alpha }\left( t^{\alpha -1}E_{\alpha ,\alpha }\left( \lambda
t^{\alpha }\right) \right) =E_{\alpha ,1}\left( \lambda t^{\alpha }\right)
\text{ for }\lambda \in \mathbb{C},\;0<\alpha <1.
\end{equation*}
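Recall that the Mittag-Leffler function interpolates between classical kernels: $E_{1,1}(z)=e^{z}$, while
\begin{equation*}
E_{2,1}\left( -z^{2}\right) =\cos z,\qquad E_{2,2}\left( -z^{2}\right) =\frac{\sin z}{z}.
\end{equation*}
Accordingly, the operators $\tilde{E}_{\alpha ,1}$, $\tilde{E}_{\alpha ,2}$ and $\tilde{E}_{\alpha ,\alpha }$ introduced below reduce, on the Fourier side, to the heat semigroup for $\alpha =1$ and to the cosine and sine wave propagators for $\alpha =2$, which reflects the interpolation property of system \eqref{sys1}.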
For later use, let
\begin{equation*}
\varphi \left( t\right) =\left( 1-\frac{t}{T}\right) _{+}^{l}\text{, }l\geq
2;
\end{equation*}
then
\begin{equation*}
^{C}D_{t|T}^{\alpha }\varphi \left( t\right) =\frac{\Gamma \left( l+1\right)
}{\Gamma \left( l+1-\alpha \right) }T^{-\alpha }\left( 1-\frac{t}{T}\right)
_{+}^{l-\alpha }\text{, }t\leq T,
\end{equation*}
(see for example \cite{KLT}).
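For $0<\alpha <1$, this formula can be checked directly from \eqref{eq:3}: since $\varphi (T)=0$ and $\varphi ^{\prime }(\sigma )=-\frac{l}{T}\left( 1-\frac{\sigma }{T}\right) ^{l-1}$, the substitution $\sigma =t+s(T-t)$ turns the remaining integral into a Beta function, so that
\begin{equation*}
D_{t|T}^{\alpha }\varphi (t)=\frac{l}{T\,\Gamma \left( 1-\alpha \right) }\left( 1-\frac{t}{T}\right) ^{l-1}\left( T-t\right) ^{1-\alpha }B\left( 1-\alpha ,l\right) =\frac{\Gamma \left( l+1\right) }{\Gamma \left( l+1-\alpha \right) }T^{-\alpha }\left( 1-\frac{t}{T}\right) ^{l-\alpha };
\end{equation*}
the Caputo and Riemann-Liouville right-sided derivatives coincide here because $\varphi (T)=0$.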
\subsection{Linear estimates}
In this section, we present fundamental estimates which will be used to
prove Theorem \ref{GELT}.\newline
For $1<\alpha <2$, we define the operators $\tilde{E}_{\alpha ,1}\left(
t,x\right) $, $\tilde{E}_{\alpha ,2}\left( t,x\right) $ and $\tilde{E}_{\alpha ,\alpha }\left( t,x\right) $ as follows:
\begin{equation}
\tilde{E}_{\alpha ,1}(t,x)=\left( 2\pi \right) ^{-N/2}\mathcal{F}^{-1}\left(
E_{\alpha ,1}(-4\pi ^{2}t^{\alpha }|\xi |^{2})\right) ,\;x\in \mathbb{R}^{N},
\text{ }t>0, \label{Mit-Lef}
\end{equation}
\begin{equation}
\tilde{E}_{\alpha ,2}(t,x)=\left( 2\pi \right) ^{-N/2}\mathcal{F}^{-1}\left(
E_{\alpha ,2}(-4\pi ^{2}t^{\alpha }|\xi |^{2})\right) ,\;x\in \mathbb{R}^{N},
\text{ }t>0, \label{Mitleft}
\end{equation}
\begin{equation}
\tilde{E}_{\alpha ,\alpha }\left( t,x\right) =\left( 2\pi \right) ^{-N/2}
\mathcal{F}^{-1}\left( E_{\alpha ,\alpha }\left( -\left\vert \xi \right\vert
^{2}t^{\alpha }\right) \right) ,\;x\in \mathbb{R}^{N},\text{ }t>0.
\label{salpha}
\end{equation}
Consider the following linear inhomogeneous time fractional equation with
initial data:
\begin{equation}
\left\{
\begin{array}{l}
^{C}D_{0|t}^{\alpha }u-\Delta u=f(t,x),\text{ }t>0,\text{ }x\in \mathbb{R}^{N},\text{ }1<\alpha <2, \\
u(0,x)=u_{0}\left( x\right) ,\text{ }u_{t}(0,x)=u_{1}(x),\text{ }x\in
\mathbb{R}^{N}.
\end{array}
\right. \label{hlfde}
\end{equation}
If $u_{0}\in \mathcal{S}(\mathbb{R}^{N})$ (the Schwartz space), $u_{1}\in
\mathcal{S}(\mathbb{R}^{N})$ and $f\in L^{1}\left( \left( 0,+\infty \right) ;
\mathcal{S}(\mathbb{R}^{N})\right) $, then by \cite{HirataMiao} (see also
\cite{AlmeidaDIE}) problem (\ref{hlfde}) admits a solution $u\in C^{\alpha
}\left( \left[ 0,+\infty \right) ;\mathcal{S}(\mathbb{R}^{N})\right)$, which
satisfies
\begin{equation*}
u(t,x)=\tilde{E}_{\alpha ,1}(t,x)u_{0}(x)+t\tilde{E}_{\alpha
,2}(t,x)u_{1}(x)+\displaystyle\int_{0}^{t}\left( t-s\right) ^{\alpha -1}
\tilde{E}_{\alpha ,\alpha }\left( t-s\right) \ast f\left( s,x\right) ds.
\end{equation*}
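Note that this mild formulation is formally consistent with the initial data: since
\begin{equation*}
E_{\alpha ,1}(0)=\frac{1}{\Gamma (1)}=1,\qquad E_{\alpha ,2}(0)=\frac{1}{\Gamma (2)}=1,
\end{equation*}
the first two terms reduce to $u_{0}(x)$ at $t=0$ while the Duhamel term vanishes, so $u(0,x)=u_{0}(x)$. Similarly, for $1<\alpha <2$ the time derivatives of the first and third terms are $O(t^{\alpha -1})$ as $t\rightarrow 0$, so only the second term contributes and $u_{t}(0,x)=u_{1}(x)$.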
The following lemmas contain the so-called smoothing effect of the
Mittag-Leffler operator families $\left\{ \tilde{E}_{\alpha ,1}(t)\right\}
_{t\geq 0}$ and $\left\{ \tilde{E}_{\alpha ,\alpha }(t)\right\} _{t\geq 0}$
in Lebesgue spaces and play an important role in obtaining the first result
of this paper; they appear in \cite[Lemma 5.1]{HirataMiao}, \cite[Lemma 5.1]{AlmeidaEJDE}. Their proofs are based on the Fourier multiplier theorem
combined with a scaling argument (see \cite[Lemma 3.1-(i)]{AlmeidaJMAA},
\cite[Proposition 4.2 and Proposition 4.3]{AlmeidaEJDE}).
\begin{lemma}[{\protect\cite[Lemma 5.1]{AlmeidaEJDE}}]
\label{galpha} Let $1<p_{1}\leq p_{2}<\infty $, $1<\alpha <2$ and $\lambda =
\frac{N}{p_{1}}-\frac{N}{p_{2}}$. Then there is a constant $C>0$ such that
\begin{align}
\Vert \tilde{E}_{\alpha ,1}(t)f\Vert _{L^{p_{2}}}& \leq Ct^{-\frac{\alpha }{2}\lambda }\Vert f\Vert _{L^{p_{1}}},\;\;\;\;\text{ if }\;\lambda <2\text{,}
\label{item-i} \\
\Vert t\tilde{E}_{\alpha ,2}(t)f\Vert _{L^{p_{2}}}& \leq Ct^{1-\frac{\alpha }{2}\lambda }\,\Vert f\Vert _{L^{p_{1}}},\;\;\text{ if }\;\frac{2}{\alpha }<\lambda <2, \label{item-ii} \\
\Vert t\tilde{E}_{\alpha ,2}(t)f\Vert _{L^{p_{2}}}& \leq Ct^{-\frac{\alpha }{2}\lambda }\,\Vert f\Vert _{\mathcal{\dot{H}}_{p_{1}}^{-\frac{2}{\alpha }}}\text{,}\;\text{ if }\;\frac{2}{\alpha }<\lambda <2,
\label{item-iii} \\
\Vert \tilde{E}_{\alpha ,\alpha }(t)\ast f\Vert _{L^{p_{2}}}& \leq Ct^{-\frac{\alpha }{2}\lambda }\Vert f\Vert _{L^{p_{1}}},\;\;\;\;\text{ if }\;\left( 2-\frac{2}{\alpha }\right) <\lambda <2, \label{item-iv}
\end{align}
for all $f\in \mathcal{S}^{\prime }(\mathbb{R}^{N})$, where $\mathcal{\dot{H}}_{p_{1}}^{-\frac{2}{\alpha }}$ is the homogeneous Sobolev space of
negative order $-\frac{2}{\alpha }$.
\end{lemma}
\begin{lemma}
\label{Linfty}The families of operators $\left\{ \tilde{E}_{\alpha ,1}\left(
t\right) \right\} _{t>0}$, $\left\{ \tilde{E}_{\alpha ,2}\left( t\right)
\right\} _{t>0}$ and $\left\{ \tilde{E}_{\alpha ,\alpha }\left( t\right)
\right\} _{t>0}$ enjoy the following $L^{p_{1}}$-$L^{p_{1}}$ and
$L^{p_{1}}$-$L^{\infty }$ estimates:
\begin{itemize}
\item[(i)] \textit{If }$h\in L^{p_{1}}\left( \mathbb{R}^{N}\right) $ ($1\leq
p_{1}\leq +\infty $),\textit{\ then }$\tilde{E}_{\alpha ,\beta }\left(
t\right) h\in L^{p_{1}}\left( \mathbb{R}^{N}\right) $\textit{\ and}
\begin{equation*}
\left\Vert \tilde{E}_{\alpha ,\beta }\left( t\right) h\right\Vert
_{L^{p_{1}}\left( \mathbb{R}^{N}\right) }\leq C\left\Vert h\right\Vert
_{L^{p_{1}}\left( \mathbb{R}^{N}\right) },\text{ }t>0\text{, for }\beta =1,2,\alpha ,
\end{equation*}
for some positive constant $C>0$.
\item[(ii)] Let $p_{1}>N/2$. If $h\in L^{p_{1}}\left( \mathbb{R}^{N}\right) $, then $\tilde{E}_{\alpha ,\beta }\left( t\right) h\in L^{\infty }\left(
\mathbb{R}^{N}\right) $ and we have
\begin{equation*}
\left\Vert \tilde{E}_{\alpha ,\beta }\left( t\right) h\right\Vert
_{L^{\infty }\left( \mathbb{R}^{N}\right) }\leq Ct^{-\frac{\alpha }{2}\frac{N}{p_{1}}}\left\Vert h\right\Vert _{L^{p_{1}}\left( \mathbb{R}^{N}\right) }\text{, }t>0\text{, for }\beta =1,2,\alpha .
\end{equation*}
\end{itemize}
\end{lemma}
\begin{proof}
We use the following pointwise estimates that are shown in \cite[Theorem 5.1]{Kim}:
\begin{equation*}
\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert \leq
\left\vert x\right\vert ^{-N}\exp \left\{ -c\left( t^{-\alpha }\left\vert
x\right\vert ^{2}\right) ^{\frac{1}{2-\alpha }}\right\} ,\text{ \ \ if }R:=\left\vert x\right\vert ^{2}t^{-\alpha }\geq 1,
\end{equation*}
and if $R:=\left\vert x\right\vert ^{2}t^{-\alpha }<1$, then we have
\begin{equation*}
\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert \leq
\left\{
\begin{array}{ll}
t^{-\frac{\alpha N}{2}}, & N<2, \\
t^{-\alpha }\left\vert x\right\vert ^{-N+2}\left( 1+\left\vert \ln \left(
\left\vert x\right\vert ^{2}t^{-\alpha }\right) \right\vert \right) , & N=2, \\
\left\vert x\right\vert ^{-N+2}t^{-\alpha }, & N>2.
\end{array}
\right.
\end{equation*}
Concerning the operator $t\tilde{E}_{\alpha ,2}(t)$, we have the pointwise estimates
\begin{equation*}
\left\vert t\tilde{E}_{\alpha ,2}\left( t,x\right) \right\vert \leq C\left\vert x\right\vert ^{-N}t\exp \left\{ -c\left( t^{-\alpha }\left\vert x\right\vert ^{2}\right) ^{\frac{1}{2-\alpha }}\right\} ,\quad \text{if }R:=\left\vert x\right\vert ^{2}t^{-\alpha }\geq 1,
\end{equation*}
and if $R:=\left\vert x\right\vert ^{2}t^{-\alpha }<1$, then
\begin{equation*}
\left\vert t\tilde{E}_{\alpha ,2}\left( t,x\right) \right\vert \leq \left\{
\begin{array}{ll}
t^{1-\frac{\alpha N}{2}}, & N<2, \\
\left\vert x\right\vert ^{-N+2}t^{1-\alpha }\left( 1+\left\vert \ln \left( \left\vert x\right\vert ^{2}t^{-\alpha }\right) \right\vert \right) , & N=2, \\
\left\vert x\right\vert ^{-N+2}t^{1-\alpha }, & N>2.
\end{array}
\right.
\end{equation*}
Arguing as in Zacher \textit{et al.} \cite{Zacher}, the kernels $\tilde{E}_{\alpha ,1}\left( t,.\right) $, $\tilde{E}_{\alpha ,2}\left( t,.\right) $ and $\tilde{E}_{\alpha ,\alpha }\left( t,.\right) $ are Lebesgue integrable. In fact, we have
\begin{equation*}
\int_{\mathbb{R}^{N}}\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert dx=\int_{\left\{ R\geq 1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert dx+\int_{\left\{ R<1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert \,dx.
\end{equation*}
Using the first pointwise estimate and passing to polar coordinates, we get
\begin{eqnarray*}
\int_{\left\{ R\geq 1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert dx &\leq &\int_{\left\{ R\geq 1\right\} }\left\vert x\right\vert ^{-N}\exp \left\{ -c\left( t^{-\alpha }\left\vert x\right\vert ^{2}\right) ^{\frac{1}{2-\alpha }}\right\} dx \\
&=&C\int_{t^{\frac{\alpha }{2}}}^{+\infty }r^{-N}\exp \left\{ -c\left( t^{-\alpha }r^{2}\right) ^{\frac{1}{2-\alpha }}\right\} r^{N-1}dr \\
&=&C\int_{t^{\frac{\alpha }{2}}}^{+\infty }r^{-1}\exp \left\{ -c\left( t^{-\alpha }r^{2}\right) ^{\frac{1}{2-\alpha }}\right\} dr\text{, set }z=t^{-\frac{\alpha }{2}}r, \\
&=&C\int_{1}^{+\infty }z^{-1}\exp \left\{ -c\left( z^{2}\right) ^{\frac{1}{2-\alpha }}\right\} dz\leq C\text{.}
\end{eqnarray*}
On the other hand, if $N<2,$ we have
\begin{equation*}
\int_{\left\{ R<1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert dx\leq \int_{\left\{ R<1\right\} }t^{-\frac{\alpha N}{2}}dx=Ct^{-\frac{\alpha N}{2}}\int_{0}^{t^{\frac{\alpha }{2}}}r^{N-1}dr=Ct^{-\frac{\alpha N}{2}}\frac{t^{\frac{\alpha N}{2}}}{N}=C.
\end{equation*}
For $N=2,$ we have
\begin{eqnarray*}
\int_{\left\{ R<1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert dx &\leq &\int_{\left\{ R<1\right\} }\left\vert x\right\vert ^{-N+2}t^{-\alpha }\left( 1+\left\vert \ln \left( \left\vert x\right\vert ^{2}t^{-\alpha }\right) \right\vert \right) \,dx \\
&=&Ct^{-\alpha }\int_{0}^{t^{\frac{\alpha }{2}}}\left( 1+\left\vert \ln \left( r^{2}t^{-\alpha }\right) \right\vert \right) rdr \\
&=&Ct^{-\alpha }t^{\alpha }\int_{0}^{1}\left( 1+\left\vert \ln \left( z^{2}\right) \right\vert \right) zdz=C.
\end{eqnarray*}
When $N>2,$ we have
\begin{eqnarray*}
\int_{\left\{ R<1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert dx &\leq &\int_{\left\{ R<1\right\} }\left\vert x\right\vert ^{-N+2}t^{-\alpha }dx \\
&=&Ct^{-\alpha }\int_{0}^{t^{\frac{\alpha }{2}}}r^{-N+2}r^{N-1}dr=Ct^{-\alpha }\int_{0}^{t^{\frac{\alpha }{2}}}rdr,
\end{eqnarray*}
so,
\begin{equation*}
\int_{\left\{ R<1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert dx\leq \frac{C}{2}t^{-\alpha }t^{\alpha }=\frac{C}{2}\text{.}
\end{equation*}
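The scale invariance exploited above (the substitution $z=t^{-\alpha /2}r$) can be sanity-checked numerically: the radial integral $\int_{t^{\alpha /2}}^{+\infty }r^{-1}\exp \{-c(t^{-\alpha }r^{2})^{1/(2-\alpha )}\}\,dr$ takes the same value for every $t>0$. A minimal sketch; the values $\alpha =1.5$, $c=1$ and the integration grid are illustrative assumptions:

```python
import math

def tail_integral(t, alpha=1.5, c=1.0, n=100000, upper=30.0):
    """Trapezoidal approximation of
    int_{t^{alpha/2}}^{upper * t^{alpha/2}} r^{-1} exp(-c (t^{-alpha} r^2)^(1/(2-alpha))) dr;
    the integrand is negligible beyond the truncation point."""
    lo = t ** (alpha / 2.0)
    hi = upper * lo
    h = (hi - lo) / n
    f = lambda r: (1.0 / r) * math.exp(-c * (t ** (-alpha) * r * r) ** (1.0 / (2.0 - alpha)))
    s = 0.5 * (f(lo) + f(hi)) + sum(f(lo + i * h) for i in range(1, n))
    return h * s

# After the substitution z = t^{-alpha/2} r the integral no longer depends on t:
I1, I2 = tail_integral(0.5), tail_integral(4.0)
```

The two values agree to rounding error, reflecting that the $L^{1}$ bound on $\tilde{E}_{\alpha ,\alpha }(t,.)$ is uniform in $t$.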
The first result (i) follows from Young's convolution inequality:
\begin{eqnarray*}
\left\Vert \tilde{E}_{\alpha ,\alpha }\left( t\right) h\right\Vert _{L^{p_{1}}\left( \mathbb{R}^{N}\right) } &=&\left\Vert \tilde{E}_{\alpha ,\alpha }\left( t,.\right) \ast h\right\Vert _{L^{p_{1}}\left( \mathbb{R}^{N}\right) }\leq \left\Vert \tilde{E}_{\alpha ,\alpha }\left( t,.\right) \right\Vert _{L^{1}\left( \mathbb{R}^{N}\right) }\left\Vert h\right\Vert _{L^{p_{1}}\left( \mathbb{R}^{N}\right) } \\
&\leq &C\left\Vert h\right\Vert _{L^{p_{1}}\left( \mathbb{R}^{N}\right) }.
\end{eqnarray*}
In a similar manner, it can be shown that the operators $\tilde{E}_{\alpha ,1}\left( t\right) $ and $\tilde{E}_{\alpha ,2}\left( t\right) $ are bounded on $L^{p_{1}}\left( \mathbb{R}^{N}\right) $.
In order to show statement (ii), we need to prove that $\tilde{E}_{\alpha ,\alpha }\left( t,.\right) $ belongs to $L^{p_{2}}\left( \mathbb{R}^{N}\right) $ for $1\leq p_{2}<N/\left( N-2\right) $ (with the convention $N/(N-2):=+\infty $ if $N\leq 2$). Indeed,
\begin{eqnarray*}
\int_{\left\{ R\geq 1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert ^{p_{2}}dx &\leq &\int_{\left\{ R\geq 1\right\} }\left\vert x\right\vert ^{-Np_{2}}\exp \left\{ -c\left( t^{-\alpha }\left\vert x\right\vert ^{2}\right) ^{\frac{1}{2-\alpha }}\right\} dx \\
&=&C\int_{t^{\frac{\alpha }{2}}}^{+\infty }r^{-Np_{2}}\exp \left\{ -c\left( t^{-\alpha }r^{2}\right) ^{\frac{1}{2-\alpha }}\right\} r^{N-1}dr \\
&=&C\int_{t^{\frac{\alpha }{2}}}^{+\infty }r^{-Np_{2}+N-1}\exp \left\{ -c\left( t^{-\alpha }r^{2}\right) ^{\frac{1}{2-\alpha }}\right\} dr\text{, set }z=t^{-\frac{\alpha }{2}}r, \\
&=&Ct^{-\frac{\alpha }{2}N\left( p_{2}-1\right) }\int_{1}^{+\infty }z^{-Np_{2}+N-1}\exp \left\{ -c\left( z^{2}\right) ^{\frac{1}{2-\alpha }}\right\} dz\leq Ct^{-\frac{\alpha }{2}N\left( p_{2}-1\right) }\text{.}
\end{eqnarray*}
On the other hand, if $N=1,$ we have
\begin{equation*}
\int_{\left\{ R<1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert ^{p_{2}}dx\leq \int_{\left\{ R<1\right\} }t^{-\frac{\alpha N}{2}p_{2}}dx=Ct^{-\frac{\alpha N}{2}p_{2}}\int_{0}^{t^{\frac{\alpha }{2}}}r^{N-1}dr=Ct^{-\frac{\alpha N}{2}p_{2}+\frac{\alpha N}{2}}=Ct^{-\frac{\alpha }{2}N\left( p_{2}-1\right) }.
\end{equation*}
For $N=2$, we have
\begin{eqnarray*}
\int_{\left\{ R<1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert ^{p_{2}}dx &\leq &\int_{\left\{ R<1\right\} }t^{-\alpha p_{2}}\left( 1+\left\vert \ln \left( \left\vert x\right\vert ^{2}t^{-\alpha }\right) \right\vert \right) ^{p_{2}}dx \\
&=&Ct^{-\alpha p_{2}}\int_{0}^{t^{\frac{\alpha }{2}}}\left( 1+\left\vert \ln \left( r^{2}t^{-\alpha }\right) \right\vert \right) ^{p_{2}}rdr \\
&=&Ct^{-\alpha \left( p_{2}-1\right) }\int_{0}^{1}\left( 1+\left\vert \ln \left( z^{2}\right) \right\vert \right) ^{p_{2}}zdz=Ct^{-\alpha \left( p_{2}-1\right) }.
\end{eqnarray*}
When $N>2,$ we have
\begin{eqnarray*}
\int_{\left\{ R<1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert ^{p_{2}}dx &\leq &\int_{\left\{ R<1\right\} }\left\vert x\right\vert ^{-\left( N-2\right) p_{2}}t^{-\alpha p_{2}}dx \\
&=&Ct^{-\alpha p_{2}}\int_{0}^{t^{\frac{\alpha }{2}}}r^{-\left( N-2\right) p_{2}}r^{N-1}dr=Ct^{-\alpha p_{2}}\int_{0}^{t^{\frac{\alpha }{2}}}r^{-\left( N-2\right) p_{2}+N-1}dr,
\end{eqnarray*}
where the last integral converges provided $N>(N-2)p_{2}$. So,
\begin{equation*}
\int_{\left\{ R<1\right\} }\left\vert \tilde{E}_{\alpha ,\alpha }\left( t,x\right) \right\vert ^{p_{2}}dx\leq Ct^{-\alpha p_{2}-\frac{\alpha }{2}\left( N-2\right) p_{2}+\frac{\alpha }{2}N}=Ct^{-\frac{\alpha }{2}N\left( p_{2}-1\right) }\text{.}
\end{equation*}
Hence $\left\Vert \tilde{E}_{\alpha ,\alpha }\left( t,.\right) \right\Vert _{p_{2}}\leq Ct^{-\frac{\alpha }{2}N\left( 1-\frac{1}{p_{2}}\right) }$, for $p_{2}<N/\left( N-2\right) $.
Now (ii) follows from Young's convolution inequality and the last estimate with $p_{2}=p_{1}^{\prime }$:
\begin{equation*}
\Vert \tilde{E}_{\alpha ,\alpha }(t)\ast f\Vert _{L^{\infty }}\leq \Vert \tilde{E}_{\alpha ,\alpha }(t)\Vert _{L^{p_{1}^{\prime }}}\Vert f\Vert _{L^{p_{1}}}\leq Ct^{-\frac{\alpha }{2}\frac{N}{p_{1}}}\Vert f\Vert _{L^{p_{1}}}\text{, for }p_{1}>\frac{N}{2},
\end{equation*}
where $p_{1}^{\prime }$ is the conjugate exponent of $p_{1}$ ($1/p_{1}+1/p_{1}^{\prime }=1$); note that $p_{1}>N/2$ ensures $p_{1}^{\prime }<N/(N-2)$. Arguing in a similar way, we obtain the $L^{p_{1}}-L^{\infty }$ estimates for the operators $\tilde{E}_{\alpha ,\beta }\left( t\right) $, $\beta =1,2.$
\end{proof}
\begin{lemma}
\label{poldec}Let $l\geq 1,$ and let the function $f\left( t,x\right) $ satisfy
\begin{equation*}
\left\Vert f(t,.)\right\Vert _{l}\leq C_{1},\;0\leq t\leq 1,\qquad \left\Vert f(t,.)\right\Vert _{l}\leq C_{2}t^{-\alpha },\;t>0,
\end{equation*}
for some positive constants $C_{1},C_{2}$ and $\alpha $. Then
\begin{equation*}
\left\Vert f\left( t,.\right) \right\Vert _{l}\leq 2^{\beta }\max \left\{ C_{1},C_{2}\right\} \left( 1+t\right) ^{-\beta },\quad \text{for all }0<\beta \leq \alpha \text{ and }t\geq 0\text{.}
\end{equation*}
\end{lemma}
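This interpolation can be checked numerically: a function obeying both hypotheses, in the worst case $f(t)=\min \{C_{1},C_{2}t^{-\alpha }\}$ for $t>0$, stays below $2^{\beta }\max \{C_{1},C_{2}\}(1+t)^{-\beta }$ for every $0<\beta \leq \alpha $ (for $t\geq 1$ one uses $t\geq (1+t)/2$). A small sketch with the illustrative values $C_{1}=2$, $C_{2}=1$, $\alpha =0.8$:

```python
C1, C2, alpha = 2.0, 1.0, 0.8

def envelope(t):
    """Largest value allowed by the two hypotheses of the lemma."""
    if t == 0.0:
        return C1
    bound = C2 * t ** (-alpha)
    # on (0, 1] both hypotheses apply, beyond 1 only the decay hypothesis
    return min(C1, bound) if t <= 1.0 else bound

# check the claimed bound on a grid of [0, 20] for several 0 < beta <= alpha
for beta in (0.2, 0.5, alpha):
    M = (2.0 ** beta) * max(C1, C2)
    for k in range(2001):
        t = 0.01 * k
        assert envelope(t) <= M * (1.0 + t) ** (-beta) + 1e-12
```

The check also shows why the factor $2^{\beta }$ is needed: at $t=1$ the right-hand side equals $\max \{C_{1},C_{2}\}$ exactly.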
\section{Main results}
In this section, we state our main results. Let us begin with the definition
of a mild solution of problem (\ref{sys1})-(\ref{initdat}).
\begin{definition}
Let $u_{0},v_{0},u_{1},v_{1}\in \mathbb{X}$ (where $\mathbb{X}:=L^{1}(\mathbb{R}^{N})\cap L^{\infty }\left( \mathbb{R}^{N}\right) $), $1<\gamma _{1},\gamma _{2}<2$, $f,g\in L^{1}\left( \left( 0,T\right) ,\mathcal{S}(\mathbb{R}^{N})\right) $ and $T>0$. We call $\left( u,v\right) \in C\left( \left[ 0,T\right] ;\mathbb{X}\right) \times C\left( \left[ 0,T\right] ;\mathbb{X}\right) $ a mild solution of system (\ref{sys1})-(\ref{initdat}) if $\left( u,v\right) $ satisfies the following integral equations:
\begin{eqnarray}
u\left( t,x\right) &=&\tilde{E}_{\gamma _{1},1}(t,x)u_{0}(x)+t\tilde{E}_{\gamma _{1},2}(t,x)u_{1}(x) \notag \\
&&+\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1}\tilde{E}_{\gamma _{1},\gamma _{1}}\left( t-\tau ,x\right) f(v\left( \tau ,x\right) )\,d\tau , \label{ms1}
\end{eqnarray}
\begin{eqnarray}
v\left( t,x\right) &=&\tilde{E}_{\gamma _{2},1}(t,x)v_{0}(x)+t\tilde{E}_{\gamma _{2},2}(t,x)v_{1}(x) \notag \\
&&+\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1}\tilde{E}_{\gamma _{2},\gamma _{2}}\left( t-\tau ,x\right) g(u\left( \tau ,x\right) )\,d\tau . \label{ms2}
\end{eqnarray}
\end{definition}
The existence and uniqueness of a local solution of (\ref{sys1}) can be established by using the Banach fixed point theorem and Gronwall's inequality.
\begin{proposition}[Local existence of a mild solution]
\label{FD} Let $u_{0}, v_{0}, u_{1}, v_{1}\in \mathbb{X} $, $1<\gamma _{1},\gamma _{2}<2$, $p$, $q\geq 1$ such that $pq>1$. Then there exist a maximal time $T_{\max }>0$ and a unique mild solution to problem (\ref{sys1})-(\ref{initdat}) on $[0,T_{\max })$, such that either
\begin{itemize}
\item[(i)] $T_{\max }=\infty $ (the solution is global), or
\item[(ii)] $T_{\max }<\infty $ and $\lim\limits_{t\rightarrow T_{\max }}\left( \left\Vert u(t)\right\Vert _{\infty }+\left\Vert v(t)\right\Vert _{\infty }\right) =\infty $ (the solution blows up in finite time).
\end{itemize}
Moreover, for any $s_{1},s_{2}\in \left( 1,+\infty \right) $ and any $T<T_{\max }$, $\left( u,v\right) \in C\left( \left[ 0,T\right] ;L^{s_{1}}\left( \mathbb{R}^{N}\right) \times L^{s_{2}}\left( \mathbb{R}^{N}\right) \right) .$
\end{proposition}
Now, we are in a position to state the first main result of this section, concerning the global existence and large time behavior of solutions of (\ref{sys1})-(\ref{initdat}).
\begin{theorem}[Global existence of a mild solution]
\label{GELT} Let $N\geq 2,$ $q\geq p\geq 1,$ $pq>1$, $1<\gamma _{1}\leq \gamma _{2}<2.$ If
\begin{equation}
\frac{N}{2}\geq \max \left\{ \frac{1}{\gamma _{1}}+\frac{q+1}{pq-1},\frac{1}{\gamma _{1}}+\frac{p\gamma _{2}+\gamma _{1}}{\gamma _{1}\left( pq-1\right) }\right\} , \label{critdimension}
\end{equation}
and the initial data satisfy
\begin{equation*}
\left\Vert u_{0}\right\Vert _{\mathbb{X}}+\left\Vert u_{1}\right\Vert _{\mathbb{X}}+\left\Vert v_{0}\right\Vert _{\mathbb{X}}+\left\Vert v_{1}\right\Vert _{\mathbb{X}}\leq \varepsilon _{0},
\end{equation*}
for some sufficiently small $\varepsilon _{0}>0$, then problem (\ref{sys1})-(\ref{initdat}) admits a global mild solution such that
\begin{eqnarray*}
u &\in &L^{\infty }\left( \left[ 0,\infty \right) ,L^{\infty }\left( \mathbb{R}^{N}\right) \right) \cap L^{\infty }\left( \left[ 0,\infty \right) ,L^{s_{1}}\left( \mathbb{R}^{N}\right) \right) \text{,} \\
v &\in &L^{\infty }\left( \left[ 0,\infty \right) ,L^{\infty }\left( \mathbb{R}^{N}\right) \right) \cap L^{\infty }\left( \left[ 0,\infty \right) ,L^{s_{2}}\left( \mathbb{R}^{N}\right) \right) \text{,}
\end{eqnarray*}
where $s_{1}>q$ and $s_{2}>p.$ \newline
Furthermore, for any $\delta $ satisfying $1-\frac{1+q}{(p+1)q\gamma _{2}}<\delta <\min \left\{ 1,\frac{N\left( pq-1\right) }{2q(p+1)}\right\} ,$
\begin{equation*}
\left\Vert u\left( t\right) \right\Vert _{s_{1}}\leq C\left( t+1\right) ^{-\frac{\left( 1-\delta \right) \left( \gamma _{1}+p\gamma _{2}\right) }{pq-1}},\;t\geq 0,
\end{equation*}
\begin{equation*}
\left\Vert v\left( t\right) \right\Vert _{s_{2}}\leq C\left( t+1\right) ^{-\frac{\left( 1-\delta \right) \left( \gamma _{2}+q\gamma _{1}\right) }{pq-1}},\;t\geq 0.
\end{equation*}
If, in addition,
\begin{equation*}
\frac{pN}{2s_{2}}<1\;\text{ and }\;\frac{qN}{2s_{1}}<1,
\end{equation*}
or
\begin{equation*}
N>2,\;pN/(2s_{2})<1\;\text{ and }\;qN/(2s_{1})\geq 1,
\end{equation*}
or
\begin{equation*}
N>2,\;qN/(2s_{1})\geq 1,\;pN/(2s_{2})\geq 1\;\text{ and }\;q\geq p>1\;\text{ with }\;\sqrt{\frac{\left( p+1\right) q\gamma _{1}}{\left( q+1\right) p}}<\gamma _{1}\leq \gamma _{2}<2,
\end{equation*}
then
\begin{eqnarray*}
u,v &\in &L^{\infty }\left( \left[ 0,\infty \right) ,L^{\infty }\left( \mathbb{R}^{N}\right) \right) , \\
\left\Vert u\left( t\right) \right\Vert _{\infty } &\leq &C\left( t+1\right) ^{-\tilde{\sigma}},\text{ }\left\Vert v\left( t\right) \right\Vert _{\infty }\leq C\left( t+1\right) ^{-\hat{\sigma}},\;\text{for all }t\geq 0,
\end{eqnarray*}
for some positive constants $\tilde{\sigma}$ and $\hat{\sigma}$.
\end{theorem}
\begin{definition}[Weak solution]
\label{Weaks} Let $u_{0},v_{0},u_{1},v_{1}\in L_{loc}^{\infty }\left( \mathbb{R}^{N}\right) $ and $T>0$. We say that $\left( u,v\right) \in L^{q}\left( (0,T),L_{loc}^{\infty }\left( \mathbb{R}^{N}\right) \right) \times L^{p}\left( (0,T),L_{loc}^{\infty }\left( \mathbb{R}^{N}\right) \right) $ is a weak solution of (\ref{sys1}) if
\begin{equation*}
\begin{array}{l}
\displaystyle\int_{0}^{T}\displaystyle\int_{\mathbb{R}^{N}}uD_{t|T}^{\gamma _{1}}\varphi \left( t,x\right) dxdt-\displaystyle\int_{0}^{T}\displaystyle\int_{\mathbb{R}^{N}}u\Delta \varphi \left( t,x\right) dxdt=\displaystyle\int_{\mathbb{R}^{N}}u_{0}\left( x\right) \left( D_{t|T}^{\gamma _{1}-1}\varphi \right) \left( 0,x\right) dx \\
+\displaystyle\int_{0}^{T}\displaystyle\int_{\mathbb{R}^{N}}u_{1}D_{t|T}^{\gamma _{1}-1}\varphi \left( t,x\right) dxdt+\displaystyle\int_{0}^{T}\displaystyle\int_{\mathbb{R}^{N}}f(v\left( t,x\right) )\varphi \left( t,x\right) dxdt\text{,} \\
\displaystyle\int_{0}^{T}\displaystyle\int_{\mathbb{R}^{N}}vD_{t|T}^{\gamma _{2}}\varphi \left( t,x\right) dxdt-\displaystyle\int_{0}^{T}\displaystyle\int_{\mathbb{R}^{N}}v\Delta \varphi \left( t,x\right) dxdt=\displaystyle\int_{\mathbb{R}^{N}}v_{0}\left( x\right) \left( D_{t|T}^{\gamma _{2}-1}\varphi \right) \left( 0,x\right) dx \\
+\displaystyle\int_{0}^{T}\displaystyle\int_{\mathbb{R}^{N}}v_{1}D_{t|T}^{\gamma _{2}-1}\varphi \left( t,x\right) dxdt+\displaystyle\int_{0}^{T}\displaystyle\int_{\mathbb{R}^{N}}g(u\left( t,x\right) )\varphi \left( t,x\right) dxdt\text{,}
\end{array}
\end{equation*}
for every function $\varphi \in C_{t,x}^{1,2}\left( [0,T]\times \mathbb{R}^{N}\right) $ such that $\varphi \left( T,.\right) =0$.
\end{definition}
Similar to the proof in \cite{KirFin}, we can obtain the following lemma, asserting that every mild solution is a weak solution.
\begin{lemma}
Assume that $\left( u_{0},v_{0}\right) ,\left( u_{1},v_{1}\right) \in \mathcal{S}\left( \mathbb{R}^{N}\right) \times \mathcal{S}\left( \mathbb{R}^{N}\right) $ and let $\left( u,v\right) \in C^{\gamma _{1}}\left( [0,T],\mathcal{S}\left( \mathbb{R}^{N}\right) \right) \times C^{\gamma _{2}}\left( [0,T],\mathcal{S}\left( \mathbb{R}^{N}\right) \right) $ be a mild solution of (\ref{sys1})-(\ref{initdat}). Then $\left( u,v\right) $ is also a weak solution of (\ref{sys1})-(\ref{initdat}).
\end{lemma}
\begin{proof}
As $\left( u,v\right)$ is a mild solution, we have
\begin{equation*}
u(t,x)=\tilde{E}_{\gamma _{1},1}(t,x)u_{0}(x)+t\tilde{E}_{\gamma _{1},2}(t,x)u_{1}(x)+\displaystyle\int_{0}^{t}\left( t-s\right) ^{\gamma _{1}-1}\tilde{E}_{\gamma _{1},\gamma _{1}}\left( t-s\right) \ast f\left( v(s,x)\right) ds.
\end{equation*}
Differentiating with respect to $t$ and noting that $1<\gamma _{1}<2$, we get
\begin{eqnarray}
u_{t}(t,x)-u_{1}(x) &=&\partial _{t}\tilde{E}_{\gamma _{1},1}(t,x)u_{0}(x)+\partial _{t}\left( t\tilde{E}_{\gamma _{1},2}(t,x)\right) u_{1}(x)-u_{1}(x) \notag \\
&&+\displaystyle\int_{0}^{t}\left( t-s\right) ^{\gamma _{1}-2}\tilde{E}_{\gamma _{1},\gamma _{1}-1}\left( t-s\right) \ast f\left( v(s,x)\right) ds\text{,} \label{deriv}
\end{eqnarray}
where we have used the formula
\begin{equation*}
\frac{d^{m}}{dz^{m}}\left[ z^{\beta -1}E_{\alpha ,\beta }\left( z^{\alpha }\right) \right] =z^{\beta -m-1}E_{\alpha ,\beta -m}\left( z^{\alpha }\right) ,\text{ }\Re \left( \beta -m\right) >0,\text{ }m=0,1,\ldots
\end{equation*}
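The differentiation formula can be verified numerically by truncating the Mittag-Leffler series $E_{\alpha ,\beta }(w)=\sum_{k\geq 0}w^{k}/\Gamma (\alpha k+\beta )$ and comparing a central finite difference of $z^{\beta -1}E_{\alpha ,\beta }(z^{\alpha })$ with $z^{\beta -2}E_{\alpha ,\beta -1}(z^{\alpha })$, i.e. the case $m=1$; the sample values $\alpha =1.5$, $\beta =2$, $z=0.7$ and the truncation order are illustrative:

```python
import math

def ml(alpha, beta, w, K=80):
    """Truncated Mittag-Leffler series E_{alpha,beta}(w) = sum_k w^k / Gamma(alpha*k + beta)."""
    return sum(w ** k / math.gamma(alpha * k + beta) for k in range(K))

alpha, beta, z = 1.5, 2.0, 0.7
g = lambda s: s ** (beta - 1.0) * ml(alpha, beta, s ** alpha)

h = 1e-5
lhs = (g(z + h) - g(z - h)) / (2.0 * h)                      # numerical d/dz [z^{beta-1} E_{alpha,beta}(z^alpha)]
rhs = z ** (beta - 2.0) * ml(alpha, beta - 1.0, z ** alpha)  # the formula with m = 1
assert abs(lhs - rhs) < 1e-7
```

The identity reduces, termwise, to $(\alpha k+\beta -1)/\Gamma (\alpha k+\beta )=1/\Gamma (\alpha k+\beta -1)$, which is the functional equation of the gamma function.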
Applying $J_{0|t}^{2-\gamma _{1}}$ to both sides of (\ref{deriv}), we obtain
\begin{equation*}
\begin{array}{c}
J_{0|t}^{2-\gamma _{1}}\left( u_{t}-u_{1}\right) =J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\tilde{E}_{\gamma _{1},1}(t,x)\right) u_{0}(x)+J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)-u_{1}(x)\right) \\
+J_{0|t}^{2-\gamma _{1}}\left( \displaystyle\int_{0}^{t}\left( t-s\right) ^{\gamma _{1}-2}\tilde{E}_{\gamma _{1},\gamma _{1}-1}\left( t-s,.\right) \ast f\left( v(s,x)\right) ds\right) .
\end{array}
\end{equation*}
On the other hand, we have
\begin{equation}
\begin{array}{l}
J_{0|t}^{2-\gamma _{1}}\left( \displaystyle\int_{0}^{t}\left( t-s\right) ^{\gamma _{1}-2}E_{\gamma _{1},\gamma _{1}-1}\left( -\left\vert \xi \right\vert ^{2}\left( t-s\right) ^{\gamma _{1}}\right) \hat{f}\left( s,\xi \right) ds\right) \\
=\frac{1}{\Gamma \left( 2-\gamma _{1}\right) }\displaystyle\int_{0}^{t}\left( t-s\right) ^{1-\gamma _{1}}\displaystyle\int_{0}^{s}\left( s-\tau \right) ^{\gamma _{1}-2}E_{\gamma _{1},\gamma _{1}-1}\left( -\left\vert \xi \right\vert ^{2}\left( s-\tau \right) ^{\gamma _{1}}\right) \hat{f}\left( \tau ,\xi \right) d\tau ds \\
=\sum\limits_{k=0}^{+\infty }\frac{\left( -1\right) ^{k}\left\vert \xi \right\vert ^{2k}}{\Gamma \left( 2-\gamma _{1}\right) \Gamma \left( \gamma _{1}k+\gamma _{1}-1\right) }\displaystyle\int_{0}^{t}\left( t-s\right) ^{1-\gamma _{1}}\displaystyle\int_{0}^{s}\left( s-\tau \right) ^{\gamma _{1}-2+\gamma _{1}k}\hat{f}\left( \tau ,\xi \right) d\tau ds \\
=\sum\limits_{k=0}^{+\infty }\frac{\left( -1\right) ^{k}\left\vert \xi \right\vert ^{2k}}{\Gamma \left( 2-\gamma _{1}\right) \Gamma \left( \gamma _{1}k+\gamma _{1}-1\right) }\displaystyle\int_{0}^{t}\displaystyle\int_{\tau }^{t}\left( t-s\right) ^{1-\gamma _{1}}\left( s-\tau \right) ^{\gamma _{1}-2+\gamma _{1}k}ds\,\hat{f}\left( \tau ,\xi \right) d\tau \\
=\sum\limits_{k=0}^{+\infty }\frac{\left( -1\right) ^{k}\left\vert \xi \right\vert ^{2k}}{\Gamma \left( 2-\gamma _{1}\right) \Gamma \left( \gamma _{1}k+\gamma _{1}-1\right) }\mathbf{B}\left( 2-\gamma _{1},\gamma _{1}k+\gamma _{1}-1\right) \displaystyle\int_{0}^{t}\left( t-s\right) ^{\gamma _{1}k}\hat{f}\left( s,\xi \right) ds \\
=\displaystyle\int_{0}^{t}E_{\gamma _{1},1}\left( -\left\vert \xi \right\vert ^{2}\left( t-s\right) ^{\gamma _{1}}\right) \hat{f}\left( s,\xi \right) \,ds.
\end{array}
\label{ftmlf}
\end{equation}
Here $\mathbf{B}$ denotes the beta function.\newline
Applying the inverse Fourier transform to both sides of (\ref{ftmlf}) yields
\begin{equation*}
J_{0|t}^{2-\gamma _{1}}\left( \displaystyle\int_{0}^{t}\left( t-s\right) ^{\gamma _{1}-2}\tilde{E}_{\gamma _{1},\gamma _{1}-1}\left( t-s,.\right) \ast f\left( v(s,x)\right) ds\right)
\end{equation*}
\begin{equation*}
=\displaystyle\int_{0}^{t}\tilde{E}_{\gamma _{1},1}\left( t-s,.\right) \ast f\left( v(s,x)\right) ds\text{.}
\end{equation*}
Then, for every test function $\varphi \in C_{t,x}^{1,2}\left( [0,T]\times \mathbb{R}^{N}\right) $ with $\mathrm{supp}\,\varphi \subset \subset \mathbb{R}^{N}\times \left[ 0,T\right] $ and $\varphi \left( T,x\right) =0$, we have
\begin{equation*}
\begin{array}{l}
\displaystyle\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( u_{t}-u_{1}\right) \varphi dx=\displaystyle\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\tilde{E}_{\gamma _{1},1}(t,x)\right) u_{0}(x)\varphi \,dx \\
\quad \quad \quad \quad \quad \quad +\displaystyle\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)-u_{1}(x)\right) \varphi \,dx \\
\quad \quad \quad \quad \quad \quad +\displaystyle\int_{\mathbb{R}^{N}}\displaystyle\int_{0}^{t}\tilde{E}_{\gamma _{1},1}\left( t-s\right) \ast f\left( v(s,x)\right) ds\,\varphi \,dx.
\end{array}
\end{equation*}
Setting
\begin{equation*}
I:=\displaystyle\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( u_{t}-u_{1}\right) \varphi \,dx\text{,}
\end{equation*}
we get
\begin{equation*}
\begin{array}{l}
\frac{\partial }{\partial t}I=\displaystyle\int_{\mathbb{R}^{N}}\frac{\partial }{\partial t}\left[ J_{0|t}^{2-\gamma _{1}}\left( u_{t}-u_{1}\right) \varphi \right] dx \\
\quad \quad =\displaystyle\int_{\mathbb{R}^{N}}\frac{\partial }{\partial t}\left[ J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\tilde{E}_{\gamma _{1},1}(t,x)\right) u_{0}(x)\varphi \right] dx \\
\quad \quad +\displaystyle\int_{\mathbb{R}^{N}}\frac{\partial }{\partial t}\left[ J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)-u_{1}(x)\right) \varphi \right] dx \\
\quad \quad +\displaystyle\int_{\mathbb{R}^{N}}\frac{\partial }{\partial t}\left( \displaystyle\int_{0}^{t}\tilde{E}_{\gamma _{1},1}\left( t-s\right) \ast f\left( v(s,x)\right) ds\,\varphi \right) dx.
\end{array}
\end{equation*}
On the other hand, using the relations
\begin{equation*}
D_{0|t}^{\gamma _{1}}\tilde{E}_{\gamma _{1},1}\left( t,.\right) u_{0}\left( x\right) =\Delta \tilde{E}_{\gamma _{1},1}\left( t,.\right) u_{0}\left( x\right) ,
\end{equation*}
\begin{equation*}
D_{0|t}^{\gamma _{1}}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)=\Delta \left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x),
\end{equation*}
we obtain
\begin{eqnarray*}
&&\int_{\mathbb{R}^{N}}\frac{\partial }{\partial t}\left[ J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)-u_{1}(x)\right) \varphi \right] dx \\
&=&\int_{\mathbb{R}^{N}}D_{0|t}^{\gamma _{1}}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)\varphi \left( t,x\right) dx \\
&&+\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)-u_{1}(x)\right) \varphi _{t}\left( t,x\right) dx \\
&=&\int_{\mathbb{R}^{N}}t\tilde{E}_{\gamma _{1},2}(t,.)u_{1}(x)\Delta \varphi \left( t,x\right) dx \\
&&+\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)-u_{1}(x)\right) \varphi _{t}\left( t,x\right) dx\text{,}
\end{eqnarray*}
and
\begin{equation*}
\int_{\mathbb{R}^{N}}\frac{\partial }{\partial t}\left[ J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\tilde{E}_{\gamma _{1},1}(t,x)\right) u_{0}(x)\varphi \right] dx=\int_{\mathbb{R}^{N}}\tilde{E}_{\gamma _{1},1}(t,x)u_{0}(x)\Delta \varphi \left( t,x\right) dx
\end{equation*}
\begin{equation*}
\quad \quad \quad \quad \quad \quad \quad \quad \quad \quad +\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\tilde{E}_{\gamma _{1},1}(t,x)\right) u_{0}(x)\varphi _{t}\left( t,x\right) dx\text{.}
\end{equation*}
Using the Leibniz formula, we get
\begin{equation*}
\frac{\partial }{\partial t}\displaystyle\int_{0}^{t}\tilde{E}_{\gamma _{1},1}\left( t-s\right) \ast f\left( v(s,x)\right) ds
\end{equation*}
\begin{equation*}
\quad \quad \quad \quad \quad \quad \quad \quad =\tilde{E}_{\gamma _{1},1}\left( 0\right) \ast f\left( v(t,x)\right) +\displaystyle\int_{0}^{t}\partial _{t}\tilde{E}_{\gamma _{1},1}\left( t-s\right) \ast f\left( v(s,x)\right) \,ds
\end{equation*}
\begin{equation*}
\quad \quad \quad \quad =f\left( v(t,x)\right) +\displaystyle\int_{0}^{t}\partial _{t}\tilde{E}_{\gamma _{1},1}\left( t-s\right) \ast f\left( v(s,x)\right) \,ds,
\end{equation*}
since $\tilde{E}_{\gamma _{1},1}\left( 0,.\right) $ acts as the identity.
So,
\begin{equation*}
\begin{array}{l}
\frac{\partial }{\partial t}I=\displaystyle\int_{\mathbb{R}^{N}}\tilde{E}_{\gamma _{1},1}(t,x)u_{0}(x)\Delta \varphi dx+\displaystyle\int_{\mathbb{R}^{N}}t\tilde{E}_{\gamma _{1},2}(t,.)u_{1}(x)\Delta \varphi \,dx \\
+\displaystyle\int_{\mathbb{R}^{N}}f\left( v(t,x)\right) \varphi \,dx+\displaystyle\int_{\mathbb{R}^{N}}\displaystyle\int_{0}^{t}\left( t-s\right) ^{\gamma _{1}-1}\tilde{E}_{\gamma _{1},\gamma _{1}}\left( t-s\right) \ast f\left( v(s,x)\right) \Delta \varphi \,dsdx \\
+\displaystyle\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\tilde{E}_{\gamma _{1},1}(t,x)\right) u_{0}(x)\varphi _{t}\,dx+\displaystyle\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)-u_{1}(x)\right) \varphi _{t}\,dx \\
+\displaystyle\int_{\mathbb{R}^{N}}\displaystyle\int_{0}^{t}\tilde{E}_{\gamma _{1},1}\left( t-s\right) \ast f\left( v(s,x)\right) ds\,\varphi _{t}\,dx.
\end{array}
\end{equation*}
Using the fact that $u$ is a mild solution, we obtain
\begin{equation}
\begin{array}{c}
\frac{\partial }{\partial t}I=\displaystyle\int_{\mathbb{R}^{N}}u\Delta \varphi dx+\displaystyle\int_{\mathbb{R}^{N}}f\left( v(t,x)\right) \varphi dx+\displaystyle\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\tilde{E}_{\gamma _{1},1}(t,x)\right) u_{0}(x)\varphi _{t}\,dx \\
+\displaystyle\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( \partial _{t}\left( t\tilde{E}_{\gamma _{1},2}(t,.)\right) u_{1}(x)-u_{1}(x)\right) \varphi _{t}\,dx \\
+\displaystyle\int_{\mathbb{R}^{N}}\displaystyle\int_{0}^{t}\tilde{E}_{\gamma _{1},1}\left( t-s\right) \ast f\left( v(s,x)\right) ds\,\varphi _{t}\,dx \\
=\displaystyle\int_{\mathbb{R}^{N}}u\Delta \varphi \,dx+\displaystyle\int_{\mathbb{R}^{N}}f\left( v(t,x)\right) \varphi \,dx+\displaystyle\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( u_{t}-u_{1}\right) \varphi _{t}\,dx\text{.}
\end{array}
\label{i1}
\end{equation}
On the other hand, we have
\begin{equation}
\frac{\partial }{\partial t}I=\int_{\mathbb{R}^{N}}\frac{\partial }{\partial
t}\left[ J_{0|t}^{2-\gamma _{1}}\left( u_{t}-u_{1}\right) \right] \varphi
\,dx+\int_{\mathbb{R}^{N}}J_{0|t}^{2-\gamma _{1}}\left( u_{t}-u_{1}\right)
\varphi _{t}\,dx. \label{i2}
\end{equation}
Integrating both sides of (\ref{i1}) and (\ref{i2}) over $[0,T]$, and then identifying the terms, we get
\begin{equation*}
\int_{0}^{T}\int_{\mathbb{R}^{N}}\frac{\partial }{\partial t}J_{0|t}^{2-\gamma _{1}}\left( u_{t}-u_{1}\right) \varphi \,dxdt=\int_{0}^{T}\int_{\mathbb{R}^{N}}u\Delta \varphi \,dxdt+\int_{0}^{T}\int_{\mathbb{R}^{N}}f\left( v(t,x)\right) \varphi \,dxdt\text{.}
\end{equation*}
The integration by parts formula allows us to write
\begin{equation*}
\int_{0}^{T}\int_{\mathbb{R}^{N}}\left( u_{t}-u_{1}\right) D_{t|T}^{\gamma _{1}}\varphi \,dxdt=\int_{0}^{T}\int_{\mathbb{R}^{N}}u\Delta \varphi \,dxdt+\int_{0}^{T}\int_{\mathbb{R}^{N}}f\left( v(t,x)\right) \varphi \,dxdt.
\end{equation*}
By an analogous calculation, we can show that
\begin{equation*}
\int_{0}^{T}\int_{\mathbb{R}^{N}}\left( v_{t}-v_{1}\right) D_{t|T}^{\gamma _{2}}\varphi \,dxdt=\int_{0}^{T}\int_{\mathbb{R}^{N}}v\Delta \varphi \,dxdt+\int_{0}^{T}\int_{\mathbb{R}^{N}}g\left( u(t,x)\right) \varphi \,dxdt.
\end{equation*}
This completes the proof.
\end{proof}
Our next result concerns the blow-up of solutions of (\ref{sys1}).
\begin{theorem}[Blow-up of mild solutions]
\label{NEG}Let $N\geq 1,$ $p>1,$ $q>1,$ $u_{0},v_{0},u_{1},v_{1}\in L_{loc}^{p}\left( \mathbb{R}^{N}\right) ,$ $1<\gamma _{1},\gamma _{2}<2$, be such that $\int_{\mathbb{R}^{N}}u_{1}\left( x\right) dx>0,$ $\int_{\mathbb{R}^{N}}v_{1}\left( x\right) dx>0.$ If
\begin{equation*}
\frac{N}{2}<\min \left\{ \frac{1}{\gamma _{1}}+\frac{\gamma _{1}+p\gamma _{2}}{\gamma _{1}\left( pq-1\right) },\frac{1}{\gamma _{1}}+\frac{1+p}{\left( pq-1\right) },\frac{1}{\gamma _{1}}+\frac{\gamma _{2}+q\gamma _{1}}{\gamma _{1}\left( pq-1\right) },\frac{1-\gamma _{2}}{\gamma _{1}}+\frac{q\left( p+1\right) }{\left( pq-1\right) }\right\} ,
\end{equation*}
or
\begin{equation*}
\frac{N}{2}<\min \left\{ \frac{1}{\gamma _{2}}+\frac{\gamma _{1}+\gamma _{2}p}{\gamma _{2}\left( pq-1\right) },\frac{1-\gamma _{1}}{\gamma _{2}}+\frac{p\left( q+1\right) }{\left( pq-1\right) },\frac{1}{\gamma _{2}}+\frac{\gamma _{2}+\gamma _{1}q}{\gamma _{2}\left( pq-1\right) },\frac{1}{\gamma _{2}}+\frac{1+q}{\left( pq-1\right) }\right\} ,
\end{equation*}
then the mild solution $(u,v)$ of (\ref{sys1})-(\ref{initdat}) blows up in finite time.
\end{theorem}
\section{Global Existence and Decay Estimates}
\textbf{Proof of Theorem \ref{GELT}.} The proof proceeds in three steps. Without loss of generality, we assume that $1<\gamma _{1}\leq \gamma _{2}<2$ and $q\geq p\geq 1$ with $pq>1$.\newline
\noindent \textbf{First step: Global existence of }$\left( u,v\right) $\textbf{ in }$L^{s_{1}}\left( \mathbb{R}^{N}\right) \times L^{s_{2}}\left( \mathbb{R}^{N}\right) .$ \newline
Since $pq>1$, from (\ref{critdimension}) we have for $N\geq 2$ that
\begin{equation*}
\frac{N}{2}\geq \max \left\{ \frac{1}{\gamma _{1}}+\frac{q+1}{pq-1},\frac{1}{\gamma _{1}}+\frac{p\gamma _{2}+\gamma _{1}}{\gamma _{1}\left( pq-1\right) }\right\} .
\end{equation*}
If $\max \left\{ \frac{1}{\gamma _{1}}+\frac{q+1}{pq-1},\frac{1}{\gamma _{1}}+\frac{p\gamma _{2}+\gamma _{1}}{\gamma _{1}\left( pq-1\right) }\right\} =\frac{1}{\gamma _{1}}+\frac{q+1}{pq-1},$ then $\frac{N}{2}\geq \frac{1}{\gamma _{1}}+\frac{q+1}{pq-1}$, which gives
\begin{equation*}
1-\frac{pq-1}{q(p+1)\gamma _{2}}<1-\frac{pq-1}{2q(p+1)}<\frac{pq-1+q\gamma _{1}+\gamma _{1}}{\gamma _{1}q(p+1)}\leq \frac{N\left( pq-1\right) }{2q(p+1)}.
\end{equation*}
If $\max \left\{ \frac{1}{\gamma _{1}}+\frac{q+1}{pq-1},\frac{1}{\gamma _{1}}+\frac{p\gamma _{2}+\gamma _{1}}{\gamma _{1}\left( pq-1\right) }\right\} =\frac{1}{\gamma _{1}}+\frac{p\gamma _{2}+\gamma _{1}}{\gamma _{1}\left( pq-1\right) }$, that is, $\frac{1}{\gamma _{1}}+\frac{q+1}{pq-1}\leq \frac{1}{\gamma _{1}}+\frac{p\gamma _{2}+\gamma _{1}}{\gamma _{1}\left( pq-1\right) },$ then
\begin{equation*}
\frac{N}{2}\geq \frac{1}{\gamma _{1}}+\frac{p\gamma _{2}+\gamma _{1}}{\gamma _{1}\left( pq-1\right) }\geq \frac{1}{\gamma _{1}}+\frac{q+1}{pq-1},
\end{equation*}
which gives again $\frac{N\left( pq-1\right) }{2q(p+1)}>1-\frac{pq-1}{q(p+1)\gamma _{2}}$. Since $1-\frac{pq-1}{q(p+1)\gamma _{2}}<1$, we can choose $\delta >0$ such that
\begin{equation}
1-\frac{pq-1}{q(p+1)\gamma _{2}}<\delta <\min \left\{ 1,\frac{N\left( pq-1\right) }{2q(p+1)}\right\} . \label{delta}
\end{equation}
We set
\begin{equation*}
r_{1}=\frac{N\gamma _{1}\left( pq-1\right) }{2\left[ \gamma _{1}\left(
1+\delta p\right) +\gamma _{2}p\left( 1-\delta \right) \right] }\text{,
\qquad r_{2}=\frac{N\gamma _{2}\left( pq-1\right) }{2\left[ \gamma
_{2}\left( 1+\delta q\right) +\gamma _{1}q\left( 1-\delta \right) \right]
\text{,}
\end{equation*
\begin{equation}
\frac{1}{s_{1}}=\frac{2\delta }{N}\frac{p+1}{pq-1},\text{ }\qquad \frac{1}
s_{2}}=\frac{2\delta }{N}\frac{q+1}{pq-1}, \label{sonerone}
\end{equation
\begin{equation*}
\sigma _{1}=\frac{\left( 1-\delta \right) \left( \gamma _{1}+\gamma
_{2}p\right) }{pq-1}\text{, }\qquad \sigma _{2}=\frac{\left( 1-\delta
\right) \left( \gamma _{2}+\gamma _{1}q\right) }{pq-1}.
\end{equation*
Clearly, we have
\begin{equation*}
\frac{1}{r_{1}}=\frac{2}{N\gamma _{1}}\frac{\left( 1-\delta \right) \left(
\gamma _{1}+\gamma _{2}p\right) }{pq-1}+\frac{2\delta }{N}\frac{\left(
p+1\right) }{pq-1},
\end{equation*}
\begin{equation*}
\frac{1}{r_{2}}=\frac{2}{N\gamma _{2}}\frac{\left( 1-\delta \right) \left(
\gamma _{2}+\gamma _{1}q\right) }{pq-1}+\frac{2\delta }{N}\frac{\left(
q+1\right) }{pq-1}\text{.}
\end{equation*}
The choice of $\delta $ gives
\begin{equation*}
\delta >1-\frac{pq-1}{\left( \gamma _{2}+\gamma _{1}q\right) p}\;\,\text{implies}\;\,p\sigma _{2}=\frac{\left( 1-\delta \right) \left( \gamma _{2}+\gamma _{1}q\right) }{pq-1}p<1,
\end{equation*}
and
\begin{equation*}
\delta >1-\frac{pq-1}{\left( \gamma _{1}+\gamma _{2}p\right) q}\;\,\text{implies}\;\,q\sigma _{1}=\frac{\left( 1-\delta \right) \left( \gamma _{1}+\gamma _{2}p\right) }{pq-1}q<1\text{.}
\end{equation*}
It is easy to check that
\begin{equation*}
s_{1}>q\text{, \ }\;s_{2}>p\text{, \ }\;ps_{1}>s_{2}\text{, }\;qs_{2}>s_{1}\text{, \ }\;s_{1}>r_{1}>1\text{, \ }\;s_{2}>r_{2}>1,
\end{equation*}
\begin{equation*}
\frac{N}{2}\gamma _{1}\left( \frac{1}{r_{1}}-\frac{1}{s_{1}}\right) q<1,\qquad \frac{N}{2}\gamma _{2}\left( \frac{1}{r_{2}}-\frac{1}{s_{2}}\right) p<1\text{,}
\end{equation*}
and
\begin{equation*}
\frac{N}{2}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) =\delta =\frac{N}{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) .
\end{equation*}
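These identities can be checked directly from (\ref{sonerone}); for instance,
\begin{equation*}
\frac{N}{2}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) =\delta \,\frac{p\left( q+1\right) -\left( p+1\right) }{pq-1}=\delta \,\frac{pq-1}{pq-1}=\delta ,
\end{equation*}
and the second identity follows in the same way, since $q\left( p+1\right) -\left( q+1\right) =pq-1$.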
One can easily verify that
\begin{equation*}
\delta >\frac{pq\left( \gamma _{1}-1\right) +1+p\gamma _{2}}{\left[ \gamma _{1}q+\gamma _{2}\right] p}\;\;\Longleftrightarrow \;\;\left( \gamma _{2}-\frac{N}{2}\gamma _{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) -q\sigma _{1}\right) p>-1.
\end{equation*}
Let $\left( u_{0},v_{0}\right) \in L^{r_{1}}\left( \mathbb{R}^{N}\right) \times L^{r_{2}}\left( \mathbb{R}^{N}\right) $. Let $u\in C\left( \left[ 0,T_{\max }\right) ;L^{s_{1}}\left( \mathbb{R}^{N}\right) \right) $ and $v\in C\left( \left[ 0,T_{\max }\right) ;L^{s_{2}}\left( \mathbb{R}^{N}\right) \right) $. For $t\in \lbrack 0,T_{\max })$, from (\ref{sys1}), we have
\begin{eqnarray}
\left\Vert u(t,.)\right\Vert _{s_{1}} &\leq &\left\Vert \tilde{E}_{\gamma _{1},1}(t)u_{0}\right\Vert _{s_{1}}+\left\Vert t\tilde{E}_{\gamma _{1},2}(t,.)\right\Vert _{s_{1}} \notag \\
&&+\int_{0}^{t}(t-\tau )^{\gamma _{1}-1}\left\Vert \tilde{E}_{\gamma _{1},\gamma _{1}}(t-\tau )\left\vert v(\tau ,.)\right\vert ^{p}\right\Vert _{s_{1}}d\tau , \label{lso1}
\end{eqnarray}
\begin{eqnarray}
\left\Vert v\left( t,.\right) \right\Vert _{s_{2}} &\leq &\left\Vert \tilde{E}_{\gamma _{2},1}(t)v_{0}\right\Vert _{s_{2}}+\left\Vert t\tilde{E}_{\gamma _{2},2}(t,.)\right\Vert _{s_{2}} \notag \\
&&+\int_{0}^{t}(t-\tau )^{\gamma _{2}-1}\left\Vert \tilde{E}_{\gamma _{2},\gamma _{2}}(t-\tau )\left\vert u(\tau ,.)\right\vert ^{q}\right\Vert _{s_{2}}\,d\tau . \label{lso2}
\end{eqnarray}
Applying Lemmas \ref{galpha} and \ref{Linfty}, we get
\begin{eqnarray}
\left\Vert u\left( t,.\right) \right\Vert _{s_{1}} &\leq &t^{-\sigma _{1}}\left\Vert u_{0}\right\Vert _{r_{1}}+t^{-\sigma _{1}}\Vert u_{1}\Vert _{\mathcal{\dot{H}}_{r_{1}}^{-\frac{2}{\gamma _{1}}}} \notag \\
&&+\,C\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1}\left( t-\tau \right) ^{-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) }\left\Vert v\left( \tau ,.\right) \right\Vert _{s_{2}}^{p}d\tau \text{,} \label{lso3}
\end{eqnarray}
\begin{eqnarray}
\left\Vert v\left( t,.\right) \right\Vert _{s_{2}} &\leq &t^{-\sigma _{2}}\left\Vert v_{0}\right\Vert _{r_{2}}+t^{-\sigma _{2}}\Vert v_{1}\Vert _{\mathcal{\dot{H}}_{r_{2}}^{-\frac{2}{\gamma _{2}}}} \notag \\
&&+\,C\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1}\left( t-\tau \right) ^{-\frac{N}{2}\gamma _{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) }\left\Vert u\left( \tau ,.\right) \right\Vert _{s_{1}}^{q}d\tau \text{.} \label{lso4}
\end{eqnarray}
Substituting (\ref{lso4}) into (\ref{lso3}), we obtain
\begin{eqnarray*}
&&\left\Vert u\left( t,.\right) \right\Vert _{s_{1}}\leq \left( \left\Vert u_{0}\right\Vert _{r_{1}}+\Vert u_{1}\Vert _{\mathcal{\dot{H}}_{r_{1}}^{-\frac{2}{\gamma _{1}}}}\right) t^{-\sigma _{1}}+\,C\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1}\left( t-\tau \right) ^{-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) } \\
&&\qquad \times \left( \left( \left\Vert v_{0}\right\Vert _{r_{2}}+\Vert v_{1}\Vert _{\mathcal{\dot{H}}_{r_{2}}^{-\frac{2}{\gamma _{2}}}}\right) \tau ^{-\sigma _{2}}+\,C\int_{0}^{\tau }\left( \tau -s\right) ^{\gamma _{2}-1}\left( \tau -s\right) ^{-\frac{N}{2}\gamma _{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) }\left\Vert u\left( s,.\right) \right\Vert _{s_{1}}^{q}ds\right) ^{p}d\tau ,
\end{eqnarray*}
provided that $1-\frac{1}{\gamma _{1}}<\frac{N}{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) <1$ and $1-\frac{1}{\gamma _{2}}<\frac{N}{2}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) <1$, which are indeed satisfied.
Hence
\begin{eqnarray}
\left\Vert u(t,.)\right\Vert _{s_{1}} &\leq &\left( \left\Vert u_{0}\right\Vert _{r_{1}}+\Vert u_{1}\Vert _{\mathcal{\dot{H}}_{r_{1}}^{-\frac{2}{\gamma _{1}}}}\right) t^{-\sigma _{1}} \notag \\
&&+\,C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) }\tau ^{-p\sigma _{2}}d\tau \left( \left\Vert v_{0}\right\Vert _{r_{2}}+\Vert v_{1}\Vert _{\mathcal{\dot{H}}_{r_{2}}^{-\frac{2}{\gamma _{2}}}}\right) ^{p} \label{lsom} \\
&&+\,C\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) }\tau ^{\left( \gamma _{2}-\frac{N}{2}\gamma _{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) -q\sigma _{1}\right) p}\left( \tau ^{\sigma _{1}}\left\Vert u\left( \tau ,.\right) \right\Vert _{s_{1}}\right) ^{pq}\,d\tau . \notag
\end{eqnarray}
Multiplying both sides of (\ref{lsom}) by $t^{\sigma _{1}}$ with $\sigma _{1}=\frac{\left( 1-\delta \right) \left( \gamma _{1}+\gamma _{2}p\right) }{pq-1},$ we get
\begin{eqnarray}
t^{\sigma _{1}}\left\Vert u\left( t,.\right) \right\Vert _{s_{1}} &\leq &\left\Vert u_{0}\right\Vert _{r_{1}}+\Vert u_{1}\Vert _{\mathcal{\dot{H}}_{r_{1}}^{-\frac{2}{\gamma _{1}}}} \notag \\
&&+\,Ct^{\sigma _{1}}\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) }\tau ^{-p\sigma _{2}}d\tau \left( \left\Vert v_{0}\right\Vert _{r_{2}}+\Vert v_{1}\Vert _{\mathcal{\dot{H}}_{r_{2}}^{-\frac{2}{\gamma _{2}}}}\right) ^{p} \label{lso5} \\
&&+\,Ct^{\sigma _{1}}\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) }\tau ^{\left( \gamma _{2}-\frac{N}{2}\gamma _{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) -q\sigma _{1}\right) p}\left( \tau ^{\sigma _{1}}\left\Vert u\left( \tau ,.\right) \right\Vert _{s_{1}}\right) ^{pq}d\tau . \notag
\end{eqnarray}
Since $\gamma _{1}-1-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) >-1$ and $\left( \gamma _{2}-\frac{N}{2}\gamma _{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) -q\sigma _{1}\right) p>-1,$ we have
\begin{eqnarray}
t^{\sigma _{1}}\left\Vert u\left( t,.\right) \right\Vert _{s_{1}} &\leq &\left\Vert u_{0}\right\Vert _{r_{1}}+\Vert u_{1}\Vert _{\mathcal{\dot{H}}_{r_{1}}^{-\frac{2}{\gamma _{1}}}}+Ct^{\sigma _{1}+\gamma _{1}-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) -p\sigma _{2}}\left( \left\Vert v_{0}\right\Vert _{r_{2}}^{p}+\Vert v_{1}\Vert _{\mathcal{\dot{H}}_{r_{2}}^{-\frac{2}{\gamma _{2}}}}^{p}\right) \notag \\
&&+Ct^{\sigma _{1}+\gamma _{1}-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) +\left( \gamma _{2}-\frac{N}{2}\gamma _{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) -q\sigma _{1}\right) p}\left( \sup_{0\leq \tau \leq t}\tau ^{\sigma _{1}}\left\Vert u\left( \tau ,.\right) \right\Vert _{s_{1}}\right) ^{pq}. \label{lso6}
\end{eqnarray}
Note that
\begin{equation*}
\sigma _{1}=\frac{N}{2}\gamma _{1}\left( \frac{1}{r_{1}}-\frac{1}{s_{1}}\right) \text{,}
\end{equation*}
\begin{equation*}
\sigma _{1}+\gamma _{1}-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) -p\sigma _{2}=0\text{,}
\end{equation*}
\begin{equation*}
\sigma _{1}+\gamma _{1}-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) +\left( \gamma _{2}-\frac{N}{2}\gamma _{2}\left( \frac{q}{s_{1}}-\frac{1}{s_{2}}\right) -q\sigma _{1}\right) p=0,
\end{equation*}
\begin{equation*}
\sigma _{1}+\gamma _{1}-\gamma _{1}\delta +\left( \gamma _{2}-\gamma
_{2}\delta -q\sigma _{1}\right) p=0\text{.}
\end{equation*}
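For completeness, we indicate how the second identity is checked; since $\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{2}}-\frac{1}{s_{1}}\right) =\gamma _{1}\delta $, it reads
\begin{equation*}
\sigma _{1}+\gamma _{1}\left( 1-\delta \right) -p\sigma _{2}=\frac{\left( 1-\delta \right) \left[ \gamma _{1}+\gamma _{2}p+\gamma _{1}\left( pq-1\right) -p\left( \gamma _{2}+\gamma _{1}q\right) \right] }{pq-1}=0\text{,}
\end{equation*}
and the third identity is verified in the same way.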
Define $f(t)=\sup\limits_{0\leq \tau \leq t}\tau ^{\sigma _{1}}\left\Vert u\left( \tau ,.\right) \right\Vert _{s_{1}},$ $t\in \left[ 0,T_{\max }\right) $. So we deduce from (\ref{lso6}) that
\begin{equation}
f(t)\leq C\left( \left\Vert u_{0}\right\Vert _{r_{1}}+\Vert u_{1}\Vert _{\mathcal{\dot{H}}_{r_{1}}^{-\frac{2}{\gamma _{1}}}}+\left\Vert v_{0}\right\Vert _{r_{2}}^{p}+\Vert v_{1}\Vert _{\mathcal{\dot{H}}_{r_{2}}^{-\frac{2}{\gamma _{2}}}}^{p}+f(t)^{pq}\right) , \label{lso7}
\end{equation}
for all $t\in \left( 0,T_{\max }\right) $.\newline
Set
\begin{equation*}
A=\left\Vert u_{0}\right\Vert _{r_{1}}+\Vert u_{1}\Vert _{\mathcal{\dot{H}}_{r_{1}}^{-\frac{2}{\gamma _{1}}}}+\left\Vert v_{0}\right\Vert _{r_{2}}^{p}+\Vert v_{1}\Vert _{\mathcal{\dot{H}}_{r_{2}}^{-\frac{2}{\gamma _{2}}}}^{p}\text{.}
\end{equation*}
Now if we take $A$ small enough so that $A<\left( 2C\right) ^{\frac{pq}{1-pq}}$, then a continuity argument shows that (\ref{lso7}) implies
\begin{equation}
f(t)\leq 2CA\text{, for all }t\in \left[ 0,T_{\max }\right) .
\label{epsimpli}
\end{equation}
Indeed, suppose that (\ref{epsimpli}) fails, that is, $f(t_{0})>2CA$ for some $t_{0}\in \left( 0,T_{\max }\right) $. Since $f$ is continuous, non-decreasing and $f\left( 0\right) =0$, by the intermediate value theorem there exists $t_{1}\in \left( 0,t_{0}\right) $ such that $f(t_{1})=2CA$. From (\ref{lso7}), we get
\begin{equation}
2CA=f(t_{1})\leq C\left( A+f(t_{1})^{pq}\right) , \label{nineq}
\end{equation}
which yields
\begin{equation*}
2CA\leq C\left( A+\left( 2CA\right) ^{pq}\right) \text{,}
\end{equation*}
which is equivalent to
\begin{equation*}
A\geq \left( 2C\right) ^{\frac{pq}{1-pq}}\text{.}
\end{equation*}
This is a contradiction. Therefore, it follows that
\begin{equation}
f(t)\leq 2CA\text{, for any }t\in \left[ 0,T_{\max }\right) \text{.}
\label{inv}
\end{equation}
Thus
\begin{equation}
t^{\sigma _{1}}\left\Vert u\left( t,.\right) \right\Vert _{s_{1}}\leq C\text{, for any }t\in \left[ 0,T_{\max }\right) \text{.} \label{lso8}
\end{equation}
Similarly, we obtain
\begin{equation}
t^{\sigma _{2}}\left\Vert v\left( t,.\right) \right\Vert _{s_{2}}\leq C\text{, for any }t\in \left[ 0,T_{\max }\right) \text{.} \label{lso9}
\end{equation}
Now, from (\ref{lso1}), (\ref{lso2}) and Lemma \ref{Linfty}, we can easily see that
\begin{equation}
\left\Vert u\left( t,.\right) \right\Vert _{\infty },\text{ }\left\Vert
v\left( t,.\right) \right\Vert _{\infty }\leq C\text{, for any }t\in \left[
0,1\right] \text{.} \label{lso10}
\end{equation}
On the other hand, since $s_{1}$ and $s_{2}$ satisfy
\begin{equation*}
\frac{\left( 1-\delta \right) \left( p+1\right) s_{1}}{\left( pq-1\right)
s_{2}}\gamma _{2}<1,\text{ \ }\quad \frac{\left( 1-\delta \right) \left(
q+1\right) s_{2}}{\left( pq-1\right) s_{1}}\gamma _{2}<1,
\end{equation*}
it follows from (\ref{lso1}), (\ref{lso2}), Lemma \ref{galpha} and Lemma \ref{Linfty} that
\begin{eqnarray}
&&\left\Vert u\left( t,.\right) \right\Vert _{s_{1}}\leq \left\Vert \tilde{E}_{\gamma _{1},1}\left( t\right) u_{0}\right\Vert _{s_{1}}+t\left\Vert \tilde{E}_{\gamma _{1},2}\left( t\right) u_{1}\right\Vert _{s_{1}} \notag \\
&&\qquad \qquad \;\;+\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1}\left\Vert \tilde{E}_{\gamma _{1},\gamma _{1}}\left( t-\tau \right) \left\vert v\left( \tau ,.\right) \right\vert ^{p}\right\Vert _{s_{1}}d\tau \notag \\
&&\qquad \qquad \;\;\leq C\left\Vert u_{0}\right\Vert _{s_{1}}+t\left\Vert u_{1}\right\Vert _{s_{1}}+C\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1}\left\Vert \left\vert v\left( \tau ,.\right) \right\vert ^{p}\right\Vert _{s_{1}}d\tau \notag \\
&&\qquad \qquad \;\;\leq C\left\Vert u_{0}\right\Vert _{s_{1}}+\left\Vert u_{1}\right\Vert _{s_{1}}+C\sup_{\tau \in \left( 0,t\right) }\left\Vert v\left( \tau \right) \right\Vert _{\infty }^{p-\frac{s_{2}}{s_{1}}}\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1}\left\Vert v\left( \tau ,.\right) \right\Vert _{s_{2}}^{\frac{s_{2}}{s_{1}}}d\tau \notag \\
&&\qquad \qquad \;\;\leq C\left\Vert u_{0}\right\Vert _{s_{1}}+\left\Vert u_{1}\right\Vert _{s_{1}}+C\sup_{\tau \in \left( 0,t\right) }\left\Vert v\left( \tau \right) \right\Vert _{\infty }^{p-\frac{s_{2}}{s_{1}}}\displaystyle\int_{0}^{t}\left\Vert v\left( \tau ,.\right) \right\Vert _{s_{2}}^{\frac{s_{2}}{s_{1}}}d\tau \label{lso11}
\end{eqnarray}
for all $t\in \left[ 0,1\right] $. Hence $\left\Vert u\left( t,.\right)
\right\Vert _{s_{1}}\leq C,$ for any $t\in \left[ 0,1\right] $.\newline
Analogously,
\begin{equation}
\left\Vert v\left( t,.\right) \right\Vert _{s_{2}}\leq C, \;\, \text{for all} \;\, t\in \left[ 0,1\right] . \label{lso12}
\end{equation}
From (\ref{lso8}), (\ref{lso9}), (\ref{lso11}), (\ref{lso12}) and Lemma \ref{poldec}, we conclude that
\begin{equation}
\left\{
\begin{array}{l}
\left\Vert u\left( t,.\right) \right\Vert _{s_{1}}\leq C\left( t+1\right) ^{-\frac{\left( 1-\delta \right) \left( \gamma _{1}+p\gamma _{2}\right) }{pq-1}},\bigskip \\
\left\Vert v\left( t,.\right) \right\Vert _{s_{2}}\leq C\left( t+1\right) ^{-\frac{\left( 1-\delta \right) \left( \gamma _{2}+q\gamma _{1}\right) }{pq-1}}
\end{array}
\right. \label{lso13}
\end{equation}
for all $t\in \left[ 0,T_{\max }\right) $.
\noindent \textbf{Second step. }$L^{\infty }$\textbf{-global existence estimates of }$\left( u,v\right) $ in $L^{\infty }\left( \mathbb{R}^{N}\right) \times L^{\infty }\left( \mathbb{R}^{N}\right) $.\newline
Let $s_{1},$ $s_{2}$ be as in (\ref{sonerone}). Since $p\leq q,$ we have
\begin{equation*}
\frac{Np}{2s_{2}}\leq \frac{Nq}{2s_{1}}\text{.}
\end{equation*}
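Indeed, by (\ref{sonerone}),
\begin{equation*}
\frac{Nq}{2s_{1}}=\delta \,\frac{q\left( p+1\right) }{pq-1},\qquad \frac{Np}{2s_{2}}=\delta \,\frac{p\left( q+1\right) }{pq-1},
\end{equation*}
and $q\left( p+1\right) -p\left( q+1\right) =q-p\geq 0$.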
We further assume, for some $\xi >p$ and $w>q,$ that $u(t)\in L^{w}\left( \mathbb{R}^{N}\right) ,$ $v(t)\in L^{\xi }\left( \mathbb{R}^{N}\right) ,$ and
\begin{equation}
\left\{
\begin{array}{l}
\left\Vert u\left( t,.\right) \right\Vert _{w}\leq C\left(
1+t^{k_{1}}\right) ,\;\,t\in \left[ 0,T_{\max }\right) ,\bigskip \\
\left\Vert v\left( t,.\right) \right\Vert _{\xi }\leq C\left(
1+t^{k_{2}}\right) ,\;\,t\in \left[ 0,T_{\max }\right)
\end{array}
\right. \label{lso14}
\end{equation}
holds true for some positive constants $k_{1}$ and $k_{2}$. Then, by (\ref{lso1}), (\ref{lso2}) and Lemma \ref{Linfty}, we have
\begin{equation*}
\qquad \left\Vert u\left( t,.\right) \right\Vert _{\infty }\leq \left\Vert \tilde{E}_{\gamma _{1},1}\left( t\right) u_{0}\right\Vert _{\infty }+t\left\Vert \tilde{E}_{\gamma _{1},2}\left( t\right) u_{1}\right\Vert _{\infty }
\end{equation*}
\begin{equation}
\quad \quad \quad \quad \quad \quad \quad \quad +\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\frac{N\gamma _{1}p}{2\xi }}\left\Vert v\left( \tau ,.\right) \right\Vert _{\xi }^{p}d\tau , \label{lso15}
\end{equation}
\begin{equation*}
\qquad \left\Vert v\left( t,.\right) \right\Vert _{\infty }\leq \left\Vert \tilde{E}_{\gamma _{2},1}\left( t\right) v_{0}\right\Vert _{\infty }+t\left\Vert \tilde{E}_{\gamma _{2},2}\left( t\right) v_{1}\right\Vert _{\infty }
\end{equation*}
\begin{equation}
\quad \quad \quad \quad \quad \quad \quad \quad +\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1-\frac{N\gamma _{2}q}{2w}}\left\Vert u\left( \tau ,.\right) \right\Vert _{w}^{q}d\tau , \label{lso16}
\end{equation}
for all $t\in \left[ 0,T_{\max }\right) $. If one can find $\xi $ and $w$
such that
\begin{equation}
\frac{Np}{2\xi }<1\qquad \text{or }\qquad \frac{Nq}{2w}<1, \label{test}
\end{equation}
then the $L^{\infty }$-estimates of $\left( u,v\right) $ are obtained. In fact, if $\frac{Np}{2\xi }<1,$ in view of (\ref{lso14}), it follows from (\ref{lso15}) that
\begin{eqnarray}
\left\Vert u\left( t,.\right) \right\Vert _{\infty } &\leq &\left\Vert
\tilde{E}_{\gamma _{1},1}\left( t\right) u_{0}\right\Vert _{\infty
}+C\max_{\tau \in \left[ 0,t\right] }\left\Vert v\left( \tau ,.\right)
\right\Vert _{\xi }^{p}t^{\left( 1-\frac{Np}{2\xi }\right) \gamma _{1}}
\notag \\
&\leq &C\left( 1+t^{\left( 1-\frac{Np}{2\xi }\right) \gamma
_{1}+pk_{2}}\right) , \label{lso17}
\end{eqnarray}
and by taking $w=\infty $ in (\ref{lso16}), we get
\begin{eqnarray}
\left\Vert v\left( t,.\right) \right\Vert _{\infty } &\leq &\left\Vert \tilde{E}_{\gamma _{2},1}\left( t\right) v_{0}\right\Vert _{\infty }+t\left\Vert \tilde{E}_{\gamma _{2},2}\left( t\right) v_{1}\right\Vert _{\infty }+\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1}\left\Vert u\left( \tau ,.\right) \right\Vert _{\infty }^{q}d\tau \notag \\
&\leq &\left\Vert \tilde{E}_{\gamma _{2},1}\left( t\right) v_{0}\right\Vert _{\infty }+t\left\Vert \tilde{E}_{\gamma _{2},2}\left( t\right) v_{1}\right\Vert _{\infty }+\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1}\left( 1+\tau ^{\left( 1-\frac{Np}{2\xi }\right) \gamma _{1}+pk_{2}}\right) ^{q}d\tau \notag \\
&\leq &C\left( 1+t^{\gamma _{2}+\left[ \left( 1-\frac{Np}{2\xi }\right) \gamma _{1}+pk_{2}\right] q}\right) \text{.} \label{lso18}
\end{eqnarray}
These estimates show that $T_{\max }=\infty $, and
\begin{equation}
u,v\in L_{loc}^{\infty }\left( \left[ 0,\infty \right) ;L^{\infty }\left(
\mathbb{R}^{N}\right) \right) . \label{test1}
\end{equation}
In a similar manner, we can treat the case $\frac{Nq}{2w}<1$. To find appropriate $\xi $ and $w,$ we note that (\ref{lso17}) and (\ref{lso18}) hold by taking $w=s_{1}$ or $\xi =s_{2}$ if $\frac{Nq}{2s_{1}}<1$ or $\frac{Np}{2s_{2}}<1$; this is certainly the case when $N\leq 2$ with $s_{1}>q$ and $s_{2}>p$.
Thus it remains to deal with the case $N>2$, $\frac{Nq}{2s_{1}}\geq 1$ and $\frac{Np}{2s_{2}}\geq 1$. We do this via an iterative process. Define $s_{1}^{\prime }=s_{1}$ and $s_{1}^{\prime \prime }=s_{2}$. Since $s_{1}^{\prime }>q$ and $s_{1}^{\prime \prime }>p,$ using the H\"{o}lder inequality and Lemmas \ref{galpha} and \ref{Linfty}, we get from (\ref{lso1}) and (\ref{lso2}) that
\begin{eqnarray*}
&&\qquad \left\Vert u\left( t,.\right) \right\Vert _{s_{2}^{\prime }}\leq \left\Vert \tilde{E}_{\gamma _{1},1}\left( t\right) u_{0}\right\Vert _{s_{2}^{\prime }}+t\left\Vert \tilde{E}_{\gamma _{1},2}\left( t\right) u_{1}\right\Vert _{s_{2}^{\prime }} \\
&&\quad \quad \quad \quad \quad \quad \;\;+\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\frac{N\gamma _{1}}{2}\left( \frac{p}{s_{1}^{\prime \prime }}-\frac{1}{s_{2}^{\prime }}\right) }\left\Vert v\left( \tau ,.\right) \right\Vert _{s_{1}^{\prime \prime }}^{p}d\tau ,
\end{eqnarray*}
\begin{eqnarray*}
&&\qquad \left\Vert v\left( t,.\right) \right\Vert _{s_{2}^{\prime \prime }}\leq \left\Vert \tilde{E}_{\gamma _{2},1}\left( t\right) v_{0}\right\Vert _{s_{2}^{\prime \prime }}+t\left\Vert \tilde{E}_{\gamma _{2},2}\left( t\right) v_{1}\right\Vert _{s_{2}^{\prime \prime }} \\
&&\quad \quad \quad \quad \quad \quad \;\;+\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1-\frac{N\gamma _{2}}{2}\left( \frac{q}{s_{1}^{\prime }}-\frac{1}{s_{2}^{\prime \prime }}\right) }\left\Vert u\left( \tau ,.\right) \right\Vert _{s_{1}^{\prime }}^{q}d\tau ,
\end{eqnarray*}
where $s_{2}^{\prime }$ and $s_{2}^{\prime \prime }$ are such that
\begin{equation*}
\frac{N}{2}\left( \frac{p}{s_{1}^{\prime \prime }}-\frac{1}{s_{2}^{\prime }}\right) <1,\text{ }\qquad \frac{N}{2}\left( \frac{q}{s_{1}^{\prime }}-\frac{1}{s_{2}^{\prime \prime }}\right) <1.
\end{equation*}
This can be achieved by taking
\begin{equation*}
\frac{1}{s_{2}^{\prime }}=\frac{p}{s_{1}^{\prime \prime }}-\frac{2}{N}+\eta ,\qquad \frac{1}{s_{2}^{\prime \prime }}=\frac{q}{s_{1}^{\prime }}-\frac{2}{N}+\eta ,
\end{equation*}
where $0<\eta <\frac{2\left( 1-\delta \right) }{N}$ with $\delta >1-\frac{1}{\gamma _{1}}$. Namely,
\begin{equation*}
\frac{N}{2}\left( \frac{p}{s_{1}^{\prime \prime }}-\frac{1}{s_{2}^{\prime }}\right) =\frac{N}{2}\left( \frac{q}{s_{1}^{\prime }}-\frac{1}{s_{2}^{\prime \prime }}\right) =1-\frac{N}{2}\eta >1-\frac{1}{\gamma _{1}}\text{.}
\end{equation*}
Observe that, since $\delta >1-\frac{pq-1}{q(p+1)\gamma _{2}}>1-\frac{1}{\gamma _{2}}$, we have
\begin{equation*}
1-\frac{1}{\gamma _{1}}<\frac{N}{2}\left( \frac{p}{s_{1}^{\prime \prime }}-\frac{1}{s_{2}^{\prime }}\right) <1,\qquad 1-\frac{1}{\gamma _{2}}<\frac{N}{2}\left( \frac{q}{s_{1}^{\prime }}-\frac{1}{s_{2}^{\prime \prime }}\right) <1\text{,}
\end{equation*}
\begin{equation}
\frac{1}{s_{1}^{\prime }}-\frac{1}{s_{2}^{\prime }}=\frac{2}{N}\left( 1-\delta \right) -\eta >0,\qquad \frac{1}{s_{1}^{\prime \prime }}-\frac{1}{s_{2}^{\prime \prime }}=\frac{2}{N}\left( 1-\delta \right) -\eta >0\text{,} \label{lso19}
\end{equation}
and hence $s_{2}^{\prime }>s_{1}^{\prime }>q$ and $s_{2}^{\prime \prime
}>s_{1}^{\prime \prime }>p$.\newline
Next, define the sequences $\left\{ s_{i}^{\prime }\right\} _{i\geq 1}$ and $\left\{ s_{i}^{\prime \prime }\right\} _{i\geq 1}$, iteratively, as follows
\begin{equation}
\frac{1}{s_{i}^{\prime }}=\frac{p}{s_{i-1}^{\prime \prime }}-\frac{2}{N}+\eta ,\text{ }\qquad \frac{1}{s_{i}^{\prime \prime }}=\frac{q}{s_{i-1}^{\prime }}-\frac{2}{N}+\eta ,\text{ }i\geq 3. \label{lso20}
\end{equation}
Then
\begin{equation*}
\frac{1}{s_{i}^{\prime }}-\frac{1}{s_{i+1}^{\prime }}=p\left( \frac{1}{s_{i-1}^{\prime \prime }}-\frac{1}{s_{i}^{\prime \prime }}\right) =pq\left( \frac{1}{s_{i-2}^{\prime }}-\frac{1}{s_{i-1}^{\prime }}\right) ,
\end{equation*}
\begin{equation*}
\frac{1}{s_{i}^{\prime \prime }}-\frac{1}{s_{i+1}^{\prime \prime }}=q\left( \frac{1}{s_{i-1}^{\prime }}-\frac{1}{s_{i}^{\prime }}\right) =pq\left( \frac{1}{s_{i-2}^{\prime \prime }}-\frac{1}{s_{i-1}^{\prime \prime }}\right) .
\end{equation*}
Since $pq>1$, in view of (\ref{lso19}), we get
\begin{equation}
\frac{1}{s_{i}^{\prime }}>\frac{1}{s_{i+1}^{\prime }},\text{ }\qquad \frac{1}{s_{i}^{\prime \prime }}>\frac{1}{s_{i+1}^{\prime \prime }}\text{, }i\geq 1\text{,} \label{lso21}
\end{equation}
and
\begin{equation}
\lim_{i\rightarrow +\infty }\left( \frac{1}{s_{i}^{\prime }}-\frac{1}{s_{i+1}^{\prime }}\right) =\lim_{i\rightarrow +\infty }\left( \frac{1}{s_{i}^{\prime \prime }}-\frac{1}{s_{i+1}^{\prime \prime }}\right) =+\infty \text{.} \label{lso22}
\end{equation}
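Indeed, iterating the telescoping identities above gives, for $i$ odd,
\begin{equation*}
\frac{1}{s_{i}^{\prime }}-\frac{1}{s_{i+1}^{\prime }}=\left( pq\right) ^{\frac{i-1}{2}}\left( \frac{1}{s_{1}^{\prime }}-\frac{1}{s_{2}^{\prime }}\right) ,
\end{equation*}
and similarly for $i$ even and for the sequence $\left\{ s_{i}^{\prime \prime }\right\} $; since the initial differences are positive by (\ref{lso19}) and $pq>1$, both limits are $+\infty $.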
Now, we claim that there exists $i_{0}$ such that
\begin{equation}
\frac{p}{s_{i_{0}}^{\prime \prime }}<\frac{2}{N}\text{ }\qquad \text{or}\qquad \frac{q}{s_{i_{0}}^{\prime }}<\frac{2}{N}. \label{lso23}
\end{equation}
Suppose, on the contrary, that $\frac{p}{s_{i}^{\prime \prime }}\geq \frac{2}{N}$ and $\frac{q}{s_{i}^{\prime }}\geq \frac{2}{N}$ for all $i\geq 1.$ Then, by (\ref{lso20}), we see that $s_{i}^{\prime }>0,$ $s_{i}^{\prime \prime }>0$ for all $i\geq 1$ and hence, by (\ref{lso21}),
\begin{equation*}
q<s_{1}^{\prime }<...<s_{i}^{\prime }<...,\text{ }p<s_{1}^{\prime \prime }<...<s_{i}^{\prime \prime }<...,
\end{equation*}
which contradicts \eqref{lso22}, since then the differences $\frac{1}{s_{i}^{\prime }}-\frac{1}{s_{i+1}^{\prime }}$ would remain bounded.
Let $i_{0}$ be the smallest number satisfying (\ref{lso23}). Notice that $i_{0}\geq 2$. Without loss of generality, we assume that
\begin{equation}
\frac{p}{s_{i_{0}}^{\prime \prime }}<\frac{2}{N},\text{ \ }\frac{p}{s_{i}^{\prime \prime }}\geq \frac{2}{N}\text{ for any }1\leq i\leq i_{0}-1,\qquad \frac{q}{s_{i}^{\prime }}\geq \frac{2}{N}\text{ for any }1\leq i\leq i_{0}\text{.} \label{lso24}
\end{equation}
It then follows from \eqref{lso20} that
\begin{equation*}
s_{i}^{\prime }>0\text{ for any }1\leq i\leq i_{0},\qquad s_{i}^{\prime
\prime }>0\text{ for any }1\leq i\leq i_{0}+1\text{,}
\end{equation*}
which together with (\ref{lso21}) leads to
\begin{equation*}
q<...<s_{i_{0}-1}^{\prime }<s_{i_{0}}^{\prime },\qquad
p<...<s_{i_{0}}^{\prime \prime }<s_{i_{0}+1}^{\prime \prime }\text{.}
\end{equation*}
Now, from \eqref{lso20}, we have, for all $i\geq 2$,
\begin{equation*}
\frac{N}{2}\left( \frac{p}{s_{i-1}^{\prime \prime }}-\frac{1}{s_{i}^{\prime }}\right) =1-\frac{N}{2}\eta =\frac{N}{2}\left( \frac{q}{s_{i-1}^{\prime }}-\frac{1}{s_{i}^{\prime \prime }}\right) \text{.}
\end{equation*}
Now, let us deal with the boundedness of $\left( u(t,.),v(t,.)\right) $ in $L^{s_{i}^{\prime }}\left( \mathbb{R}^{N}\right) \times L^{s_{i}^{\prime \prime }}\left( \mathbb{R}^{N}\right) $. Using the H\"{o}lder inequality and Lemmas \ref{galpha}, \ref{Linfty}, it follows from \eqref{ms1}-\eqref{ms2}, inductively, that
\begin{eqnarray}
\left\Vert u\left( t,.\right) \right\Vert _{s_{i}^{\prime }} &\leq &\left\Vert \tilde{E}_{\gamma _{1},1}\left( t\right) u_{0}\right\Vert _{s_{i}^{\prime }}+t\left\Vert \tilde{E}_{\gamma _{1},2}\left( t\right) u_{1}\right\Vert _{s_{i}^{\prime }} \notag \\
&+&C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\frac{N}{2}\gamma _{1}\left( \frac{p}{s_{i-1}^{\prime \prime }}-\frac{1}{s_{i}^{\prime }}\right) }\left\Vert v\left( \tau ,.\right) \right\Vert _{s_{i-1}^{\prime \prime }}^{p}d\tau \notag \\
&\leq &C\left\Vert u_{0}\right\Vert _{s_{i}^{\prime }}+t\left\Vert u_{1}\right\Vert _{s_{i}^{\prime }}+C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\gamma _{1}\left( 1-\frac{N}{2}\eta \right) }\left\Vert v\left( \tau ,.\right) \right\Vert _{s_{i-1}^{\prime \prime }}^{p}d\tau ,\text{ } \label{lso25}
\end{eqnarray}
for any $2\leq i\leq i_{0},$ $t\in \left( 0,T_{\max }\right) $ and
\begin{eqnarray}
\left\Vert v\left( t,.\right) \right\Vert _{s_{i}^{\prime \prime }} &\leq &\left\Vert \tilde{E}_{\gamma _{2},1}\left( t\right) v_{0}\right\Vert _{s_{i}^{\prime \prime }}+t\left\Vert \tilde{E}_{\gamma _{2},2}\left( t\right) v_{1}\right\Vert _{s_{i}^{\prime \prime }} \notag \\
&+&C\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1-\frac{N}{2}\gamma _{2}\left( \frac{q}{s_{i-1}^{\prime }}-\frac{1}{s_{i}^{\prime \prime }}\right) }\left\Vert u\left( \tau ,.\right) \right\Vert _{s_{i-1}^{\prime }}^{q}d\tau \notag \\
&\leq &C\left\Vert v_{0}\right\Vert _{s_{i}^{\prime \prime }}+C\left\Vert v_{1}\right\Vert _{s_{i}^{\prime \prime }}+C\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1-\gamma _{2}\left( 1-\frac{N\eta }{2}\right) }\left\Vert u\left( \tau ,.\right) \right\Vert _{s_{i-1}^{\prime }}^{q}d\tau ,\text{ } \label{lso26}
\end{eqnarray}
for any $t\in \left( 0,T_{\max }\right) $ and for any $2\leq i\leq i_{0}+1$
\newline
It clearly follows from \eqref{lso25} and \eqref{lso26}, inductively, that $u\left( t\right) \in L^{s_{i}^{\prime }}\left( \mathbb{R}^{N}\right) $ and $v\left( t\right) \in L^{s_{i}^{\prime \prime }}\left( \mathbb{R}^{N}\right) $, with
\begin{equation}
\left\{
\begin{array}{l}
u\left( t,.\right) \in L^{s_{i}^{\prime }}\left( \mathbb{R}^{N}\right) ,\text{ }\left\Vert u\left( t,.\right) \right\Vert _{s_{i}^{\prime }}\leq C\left( 1+t^{a_{i}}\right) ,\;1\leq i\leq i_{0},\text{ }t\in \left( 0,T_{\max }\right) , \\[5pt]
v\left( t,.\right) \in L^{s_{i}^{\prime \prime }}\left( \mathbb{R}^{N}\right) ,\text{ }\left\Vert v\left( t,.\right) \right\Vert _{s_{i}^{\prime \prime }}\leq C\left( 1+t^{b_{i}}\right) ,\;1\leq i\leq i_{0}+1,\text{ }t\in \left( 0,T_{\max }\right)
\end{array}
\right. \label{estlprim}
\end{equation}
for some positive constants $a_{i},$ $b_{i}.$ Since $\frac{Np}{2s_{i_{0}}^{\prime \prime }}<1,$ taking $\xi =s_{i_{0}}^{\prime \prime },$ (\ref{test}) holds. In consequence, we get $T_{\max }=+\infty $ and that (\ref{test1}) holds.
\noindent \textbf{Third step. }$L^{\infty }$\textbf{-decay estimates.}
Let
\begin{equation*}
\sigma _{1}=\frac{\left( 1-\delta \right) \left( p\gamma _{2}+\gamma
_{1}\right) }{\left( pq-1\right) }\text{, \ \ \ }\sigma _{2}=\frac{\left(
1-\delta \right) \left( q\gamma _{1}+\gamma _{2}\right) }{\left( pq-1\right)
}.
\end{equation*}
If $\frac{pN}{2s_{2}}<1$, by taking $\xi =s_{2}$ in \eqref{lso15} and using (\ref{lso13}), we get
\begin{equation}
\left\Vert u\left( t,.\right) \right\Vert _{\infty }\leq Ct^{-\frac{N\gamma _{1}}{2r_{1}}}\left\Vert u_{0}\right\Vert _{r_{1}}+Ct^{1-\frac{N\gamma _{1}}{2m}}\left\Vert u_{1}\right\Vert _{m}+C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\frac{N\gamma _{1}}{2}\frac{p}{s_{2}}}\tau ^{-p\sigma _{2}}d\tau \text{.} \label{lso27}
\end{equation}
From (\ref{critdimension}) with $pq>q+2$, we get $2\left( 1+p\right) -\left( pq-1\right) <\frac{N\left( pq-1\right) }{2q}$ which implies that $\frac{N}{2r_{1}}<1$, and for any $m$ depending on $N$ such that $\frac{N}{2}<m<\frac{N\gamma _{1}}{2},$ $N\geq 2$, we infer that
\begin{equation*}
1-\frac{N\gamma _{1}}{2m}<0\text{ \ \ and \ }\frac{N}{2m}<1.
\end{equation*}
On the other hand, since
\begin{equation*}
p\sigma _{2}<1,\text{ }\quad \gamma _{1}-\frac{N\gamma _{1}}{2}\frac{p}{s_{2}}-p\sigma _{2}=-\frac{\left[ \gamma _{1}+\gamma _{1}p\delta +\left( 1-\delta \right) p\gamma _{2}\right] }{pq-1},
\end{equation*}
and
\begin{equation}
\frac{\gamma _{1}+\gamma _{1}p\delta +p\gamma _{2}\left( 1-\delta \right) }{pq-1}=\frac{N\gamma _{1}}{2r_{1}}, \label{comp}
\end{equation}
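For clarity, we note that \eqref{comp} is immediate from the definition of $r_{1}$:
\begin{equation*}
\frac{N\gamma _{1}}{2r_{1}}=\frac{\gamma _{1}\left( 1+\delta p\right) +\gamma _{2}p\left( 1-\delta \right) }{pq-1}=\frac{\gamma _{1}+\gamma _{1}p\delta +p\gamma _{2}\left( 1-\delta \right) }{pq-1}\text{.}
\end{equation*}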
it follows from \eqref{lso27} and \eqref{comp} that
\begin{equation}
\left\Vert u\left( t,.\right) \right\Vert _{\infty }\leq Ct^{-\frac{N}{2r_{1}}\gamma _{1}}+Ct^{1-\frac{N}{2m}\gamma _{1}}+Ct^{-\frac{\left[ \gamma _{1}+\gamma _{1}p\delta +\left( 1-\delta \right) p\gamma _{2}\right] }{pq-1}}. \label{lso28}
\end{equation}
Therefore, we have from \eqref{estlprim}, \eqref{lso28} and Lemma \ref{poldec} that
\begin{equation*}
\left\Vert u\left( t,.\right) \right\Vert _{\infty }\leq C\left( 1+t\right)
^{-\min \left\{ \frac{N}{2r_{1}}\gamma _{1},\frac{N}{2m}\gamma
_{1}-1\right\} }\text{, for any }t\geq 0.
\end{equation*}
Similarly, for $\frac{qN}{2s_{1}}<1$ we find that
\begin{equation}
\left\Vert v\left( t,.\right) \right\Vert _{\infty }\leq C\left( 1+t\right) ^{-\min \left\{ \frac{N}{2r_{2}}\gamma _{2},\frac{N}{2m}\gamma _{2}-1\right\} }\text{, for any }t\geq 0\text{.} \label{lso29}
\end{equation}
Also, (\ref{lso28}) holds as $pN/\left( 2s_{2}\right) \leq qN/\left(
2s_{1}\right) .$
In particular, if $pq>\gamma _{2}\left( q+1\right) +1$, we can choose $\delta >1-\frac{pq-1}{q(p+1)\gamma _{2}}$ and $\delta \approx 1-\frac{pq-1}{q(p+1)\gamma _{2}}$ such that $qN/\left( 2s_{1}\right) <1$. Therefore, the estimates (\ref{lso28}) and (\ref{lso29}) hold. It is useful to note that $N\leq 2$ implies $qN/\left( 2s_{1}\right) <1$ and $qN/\left( 2s_{1}\right) <1$ implies $pq>\gamma _{2}\left( q+1\right) +1$.
It remains to consider the following two cases:
$\triangleright$ $N>2,$ $\frac{Np}{2s_{2}}<1$ and $\frac{Nq}{2s_{1}}\geq 1.$
Let
\begin{equation*}
\sigma ^{\prime }=\frac{\gamma _{1}+\gamma _{1}p\delta +\left( 1-\delta
\right) p\gamma _{2}}{pq-1}.
\end{equation*}
Let $\mu >0$ be such that $\mu <\min \left\{ \sigma ^{\prime },\sigma _{1}\right\} $ and $q\mu <1$. Since $N>2$ and $q>1,$ we can choose $k>0$ such that $k>\frac{qN}{2}$ and $q\mu +\frac{qN\gamma _{2}}{2k}>\gamma _{2}.$ Since $s_{1}\leq qN/2,$ we have $k>s_{1}$. By the interpolation inequality,
\begin{equation*}
\left\Vert u\left( t\right) \right\Vert _{k}\leq \left\Vert u\left( t\right)
\right\Vert _{\infty }^{\left( k-s_{1}\right) /k}\left\Vert u\left( t\right)
\right\Vert _{s_{1}}^{s_{1}/k}\leq Ct^{-\sigma ^{\prime }\left(
k-s_{1}\right) /k}t^{-\sigma _{1}s_{1}/k},\text{ for any }t>0.
\end{equation*}
Therefore, by \eqref{lso8}, \eqref{lso28}, we have
\begin{equation*}
\left\Vert u\left( t\right) \right\Vert _{k}\leq Ct^{-\mu },\;\;\text{for
all }\,t>0.
\end{equation*}
Consequently, for any $t>0$,
\begin{equation}
\begin{array}{l}
\left\Vert v\left( t\right) \right\Vert _{\infty }\leq \left\Vert \tilde{E}_{\gamma _{2},1}\left( t\right) v_{0}\right\Vert _{\infty }+t\left\Vert \tilde{E}_{\gamma _{2},2}\left( t\right) v_{1}\right\Vert _{\infty }+\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1-\frac{Nq}{2k}\gamma _{2}}\left\Vert u\left( \tau \right) \right\Vert _{k}^{q}d\tau \\
\qquad \qquad \leq Ct^{-\frac{N}{2r_{2}}\gamma _{2}}\left\Vert v_{0}\right\Vert _{r_{2}}+Ct^{1-\frac{N}{2r_{2}}\gamma _{2}}\left\Vert v_{1}\right\Vert _{r_{2}}+C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{2}-1-\frac{N\gamma _{2}q}{2k}}\tau ^{-q\mu }d\tau \\
\qquad \qquad \leq C\left( t^{-\frac{N}{2r_{2}}\gamma _{2}}+t^{1-\frac{N}{2r_{2}}\gamma _{2}}+t^{\gamma _{2}-\frac{N\gamma _{2}q}{2k}-q\mu }\right) \\
\qquad \qquad \leq Ct^{-\alpha }
\end{array}
\label{lso30}
\end{equation}
where $\alpha =\min \left\{ \frac{N}{2r_{2}}\gamma _{2}-1,-\gamma _{2}+\frac{N\gamma _{2}q}{2k}+q\mu \right\} >0,$\newline
\begin{center}
$k>s_{1}, \quad q\mu <1, \quad k>q, \quad \gamma _{2}-\frac{Nq\gamma _{2}}{2k}>0, \;\; \gamma _{2}-\frac{Nq\gamma _{2}}{2k}-q\mu <0.$
\end{center}
\vskip.3cm From (\ref{lso10}) and (\ref{lso30}), we infer that
\begin{equation*}
\left\Vert v\left( t\right) \right\Vert _{\infty }\leq C\left( 1+t\right)
^{-\alpha }, \; \; \text{for all}\; \; t\geq 0.
\end{equation*}
In case $p=1$ and $q^{2}>1+4q,$ we can choose \newline
$\delta >(1+3q)/(p+1)q\gamma _{2}=(1+3q)/\left( 2\gamma _{2}q\right) $ and
$\delta \approx (1+3q)/\left( 2\gamma _{2}q\right) $ such that $N/(2s_{2})<1.$
Thus we obtain the estimate (\ref{lso28}).
$\triangleright $ \textbf{The case: }$N>2$\textbf{$,$ }$qN/(2s_{1})\geq 1$\textbf{$,$ }$pN/(2s_{2})\geq 1$\textbf{$,$ }$q\geq p>1$\textbf{\ }and
$\gamma _{1}\leq \gamma _{2}$.\newline
This case needs careful handling and we need to restrict further the
choice of $\delta $. As $\sqrt{\frac{\left( p+1\right) q\gamma _{1}}{\left(
q+1\right) p}}<\gamma _{1}\leq \gamma _{2}<2$, $pq>1$, it follows that $1-\frac{pq-1}{q(p+1)\gamma _{2}}<1-\frac{\left( pq-1\right) }{p\left(
q+1\right) \gamma _{1}^{2}}$. We can select $\delta $ such that
\begin{equation*}
1-\frac{pq-1}{q(p+1)\gamma _{2}}<\delta <\min \left\{ \frac{N\left(
pq-1\right) }{2\left( p+1\right) q},1-\frac{pq-1}{p\left( q+1\right) \gamma
_{1}^{2}}\right\} .
\end{equation*}
Then we get immediately that $p\sigma _{2}>1/\gamma _{1}>1/q\gamma _{1}$ and
$q\sigma _{1}>1/\gamma _{1}>1/p\gamma _{2}$.\newline
Further, we notice that there exist $\varepsilon \in \left( 0,1\right) $ and
$\beta <1$ close to $1$ such that
\begin{equation}
p\sigma _{2}-\varepsilon >1/\gamma _{1}>1/q\gamma _{1}\text{, \ }q\sigma
_{1}-\varepsilon >1/\gamma _{2}>1/p\gamma _{2}\text{, }\;\text{ and }\;\;1/\gamma _{1}<\beta -\varepsilon . \label{epsilonbeta}
\end{equation}
Letting $\eta =2\varepsilon \left( 1-\delta \right) /N$, we find the integer
$i_{0}$ as in Step 2, and, without loss of generality, assume that (\ref{lso24}) holds. We choose $\beta $ in addition to (\ref{epsilonbeta})
satisfying
\begin{equation*}
\gamma _{1}<\gamma _{1}\frac{pN}{2s_{i_{0}}^{\prime \prime }}+\beta \text{,}\;\,\text{ \ since }1-\frac{1}{\gamma _{1}}<\frac{pN}{2s_{i_{0}}^{\prime
\prime }}\text{.}
\end{equation*}
As
\begin{equation*}
\delta <\frac{N\left( pq-1\right) }{2\left( p+1\right) q}\leq \frac{N\left(
pq-1\right) }{2\left( q+1\right) p},\;\;\text{and}\;\;\beta <1,
\end{equation*}
we have
\begin{equation}
\beta +\frac{\left( p+1\right) q\delta }{\left( pq-1\right) }<1+\frac{N}{2}\text{,}\;\;\text{ }\beta +\frac{\left( q+1\right) p\delta }{\left(
pq-1\right) }<1+\frac{N}{2}\text{.} \label{betadelt}
\end{equation}
For $2\leq i\leq i_{0}-1,$ define $r_{i+1}^{\prime }$ and $r_{i+1}^{\prime
\prime }$, inductively, as follows:
\begin{eqnarray*}
\frac{1}{r_{2}^{\prime }} &=&\frac{1}{s_{2}^{\prime }}+\frac{2}{N}\left(
p\sigma _{2}-\varepsilon \left( 1-\delta \right) \right) \text{,}\quad \quad
\frac{1}{r_{2}^{\prime \prime }}=\frac{1}{s_{2}^{\prime \prime }}+\frac{2}{N}\left( q\sigma _{1}-\varepsilon \left( 1-\delta \right) \right) \text{,} \\
\frac{1}{r_{i+1}^{\prime }} &=&\frac{1}{s_{i+1}^{\prime }}+\frac{2}{N}\left(
\beta -\varepsilon \left( 1-\delta \right) \right) ,\quad \quad \frac{1}{r_{i+1}^{\prime \prime }}=\frac{1}{s_{i+1}^{\prime \prime }}+\frac{2}{N}\left( \beta -\varepsilon \left( 1-\delta \right) \right) .
\end{eqnarray*}
It is clear that $r_{i}^{\prime },$ $r_{i}^{\prime \prime }>0$ and
$r_{i}^{\prime }<s_{i}^{\prime },$ $r_{i}^{\prime \prime }<s_{i}^{\prime
\prime }$ for all $2\leq i\leq i_{0}.$ A simple calculation shows that
$r_{i}^{\prime },$ $r_{i}^{\prime \prime }>1$.
As $s_{i}^{\prime }$ and $s_{i}^{\prime \prime }$ are increasing in $i$ for
1\leq i\leq i_{0},$ we have
\begin{eqnarray*}
\frac{1}{r_{i+1}^{\prime }} &<&\frac{1}{s_{2}^{\prime }}+\frac{2}{N}\left(
\beta -\varepsilon \left( 1-\delta \right) \right) \\
&=&\frac{p}{s_{1}^{\prime \prime }}-\frac{2}{N}+\frac{2}{N}\varepsilon
\left( 1-\delta \right) +\frac{2}{N}\left( \beta -\varepsilon \left(
1-\delta \right) \right) \qquad \qquad \\
&=&\frac{2}{N}\left( \frac{p\left( q+1\right) \delta }{pq-1}+\beta -1\right)
<1,
\end{eqnarray*}
from (\ref{betadelt}), i.e. $r_{i+1}^{\prime }>1.$
Similarly, we can find that $r_{i+1}^{\prime \prime }>1$.
From (\ref{test1}) and (\ref{estlprim}), we infer that there exists a
positive constant $C$ such that, for any $0\leq t\leq 1$,
\begin{equation}
\qquad \qquad \left\Vert u(t)\right\Vert _{\infty },\text{ }\left\Vert
v(t)\right\Vert _{\infty },\text{ }\left\Vert u(t)\right\Vert _{k_{1}},\text{
}\left\Vert v(t)\right\Vert _{k_{2}}\leq C,\text{ }s_{1}^{\prime }\leq
k_{1}\leq s_{i_{0}}^{\prime },\text{ }s_{1}^{\prime \prime }\leq k_{2}\leq
s_{i_{0}}^{\prime \prime }\text{. } \notag
\end{equation}
Further, since $1-\eta N/2=1-\varepsilon \left( 1-\delta \right) $ and
$p\sigma _{2}<1,$ using (\ref{lso25}), (\ref{lso26}), (\ref{lso8}) and (\ref{lso9}), we arrive at the estimate
\begin{eqnarray*}
\left\Vert u\left( t,.\right) \right\Vert _{s_{2}^{\prime }} &\leq
&\left\Vert \tilde{E}_{\gamma _{1},1}\left( t\right) u_{0}\right\Vert
_{s_{2}^{\prime }}+t\left\Vert \tilde{E}_{\gamma _{1},2}\left( t\right)
u_{1}\right\Vert _{s_{2}^{\prime }} \\
&+&C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\gamma
_{1}\left( 1-\varepsilon \left( 1-\delta \right) \right) }\left\Vert u\left(
\tau ,.\right) \right\Vert _{s_{1}^{\prime \prime }}^{p}d\tau \text{,}
\end{eqnarray*}
from which we get
\begin{eqnarray*}
\left\Vert u\left( t,.\right) \right\Vert _{s_{2}^{\prime }} &\leq &Ct^{-\frac{N}{2}\gamma _{1}\left( \frac{1}{r_{2}^{\prime }}-\frac{1}{s_{2}^{\prime }}\right) }\left\Vert u_{0}\right\Vert _{r_{2}^{\prime }}+t^{-\frac{N}{2}\gamma _{1}\left( \frac{1}{r_{2}^{\prime }}-\frac{1}{s_{2}^{\prime }}\right) }\left\Vert u_{1}\right\Vert _{\mathcal{\dot{H}}_{r_{2}^{\prime }}^{-\frac{2}{\gamma _{1}}}} \\
&+&C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\gamma
_{1}\left( 1-\varepsilon \left( 1-\delta \right) \right) }\tau ^{-p\sigma
_{2}}d\tau .\qquad \qquad
\end{eqnarray*}
Therefore,
\begin{eqnarray*}
\left\Vert u\left( t,.\right) \right\Vert _{s_{2}^{\prime }} &\leq
&Ct^{-\gamma _{1}\left( p\sigma _{2}-\varepsilon \left( 1-\delta \right)
\right) }\left\Vert u_{0}\right\Vert _{r_{2}^{\prime }}+t^{-\gamma
_{1}\left( p\sigma _{2}-\varepsilon \left( 1-\delta \right) \right)
}\left\Vert u_{1}\right\Vert _{\mathcal{\dot{H}}_{r_{2}^{\prime }}^{-\frac{2}{\gamma _{1}}}} \\
&+&C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\gamma
_{1}\left( 1-\varepsilon \left( 1-\delta \right) \right) }\tau ^{-p\sigma
_{2}}d\tau \\
&\leq &Ct^{-\gamma _{1}\left( p\sigma _{2}-\varepsilon \left( 1-\delta
\right) \right) },\text{ for any }t>0\text{.}
\end{eqnarray*}
Similarly,
\begin{equation*}
\left\Vert v\left( t,.\right) \right\Vert _{s_{2}^{\prime \prime }}\leq
Ct^{-\gamma _{2}\left( q\sigma _{1}-\varepsilon \left( 1-\delta \right)
\right) },\text{ for any }t>0.
\end{equation*}
In view of (\ref{epsilonbeta}) and $\beta <1,$ thanks to Lemma \ref{poldec},
for any $t>0,$ we conclude that
\begin{equation}
\left\Vert u\left( t,.\right) \right\Vert _{s_{2}^{\prime }}\leq Ct^{-\gamma
_{1}\beta /q}\text{ \ \ and \ \ }\left\Vert v\left( t,.\right) \right\Vert
_{s_{2}^{\prime \prime }}\leq Ct^{-\gamma _{2}\beta /p}\text{.}
\label{betgamma}
\end{equation}
An iterative argument leads to
\begin{equation*}
\left\Vert u\left( t,.\right) \right\Vert _{s_{i_{0}}^{\prime }}\leq
Ct^{-\gamma _{1}\left( \beta -\varepsilon \left( 1-\delta \right) \right)
}\leq Ct^{-\beta /q}\text{, }\left\Vert v\left( t,.\right) \right\Vert
_{s_{i_{0}}^{\prime \prime }}\leq Ct^{-\gamma _{2}\left( \beta -\varepsilon
\left( 1-\delta \right) \right) }\leq Ct^{-\beta /p},
\end{equation*}
for any $t\geq 1$. Therefore, by \eqref{lso15} and \eqref{lso16}, we have
\begin{eqnarray*}
\left\Vert u\left( t,.\right) \right\Vert _{\infty } &\leq &Ct^{-\frac{N}{2r_{1}}\gamma _{1}}\left\Vert u_{0}\right\Vert _{r_{1}}+Ct^{1-\frac{N}{2m}\gamma _{1}}\left\Vert u_{1}\right\Vert _{m}+C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\gamma _{1}\frac{pN}{2s_{i_{0}}^{\prime \prime }}}\left\Vert v\left( \tau ,.\right) \right\Vert
_{s_{i_{0}}^{\prime \prime }}^{p}d\tau \\
&\leq &Ct^{-\frac{N}{2r_{1}}\gamma _{1}}\left\Vert u_{0}\right\Vert
_{r_{1}}+Ct^{1-\frac{N}{2m}\gamma _{1}}\left\Vert u_{1}\right\Vert _{m}+C\displaystyle\int_{0}^{t}\left( t-\tau \right) ^{\gamma _{1}-1-\gamma _{1}\frac{pN}{2s_{i_{0}}^{\prime \prime }}}\tau ^{-\beta }d\tau .
\end{eqnarray*}
So,
\begin{eqnarray*}
\qquad \left\Vert u\left( t,.\right) \right\Vert _{\infty } &\leq &C\left(
t^{-\frac{N}{2r_{1}}\gamma _{1}}+t^{1-\frac{N}{2m}\gamma _{1}}+t^{\gamma
_{1}-\gamma _{1}\frac{pN}{2s_{i_{0}}^{\prime \prime }}-\beta }\right) \qquad
\\
&\leq &Ct^{-\tilde{\sigma}}\text{, }
\end{eqnarray*}
where $\tilde{\sigma}=\min \left\{ \frac{N}{2r_{1}}\gamma _{1},\frac{N}{2m}\gamma _{1}-1,\gamma _{1}\frac{pN}{2s_{i_{0}}^{\prime \prime }}-\gamma
_{1}+\beta \right\} >0$ from (\ref{betgamma}).\newline
In view of the fact that $\frac{Nq}{2s_{1}}\geq 1$, we can make use of the
arguments similar to the ones employed for the case $\frac{Np}{2s_{2}}<1$
and $\frac{Nq}{2s_{1}}\geq 1$ to obtain $\left\Vert v\left( t,.\right)
\right\Vert _{\infty }\leq Ct^{-\hat{\sigma}}$ for some $\hat{\sigma}>0$ and
for every $t>0$. This completes the proof.\newline
\begin{remark}
In the particular case: $N>2$, $qN/(2s_{1})\geq 1$, $pN/(2s_{2})\geq 1,$
$q>p=1$ and $q\leq 3$, using the above method, we obtain
\begin{equation*}
\left\Vert u\left( t,.\right) \right\Vert _{\infty }\leq Ct^{-\tilde{\sigma}}\text{, \ for any }t>0\text{,}
\end{equation*}
where $\tilde{\sigma}=\min \left\{ \frac{N}{2}\gamma _{1},\frac{N}{2m}\gamma
_{1}-1,\frac{pN}{2s_{i_{0}}^{\prime \prime }}\gamma _{1}-\gamma _{1}+\gamma
_{2}\left( \beta -\varepsilon \left( 1-\delta \right) \right) \right\} $.
Here, $\varepsilon >0$ can be arbitrarily small, and $\beta $ can be
arbitrarily close to $1$. However, since $s_{i_{0}}^{\prime \prime }$
depends on $\varepsilon $ and $s_{i_{0}}^{\prime \prime }$ is decreasing in
$\varepsilon $, it is not clear that $\tilde{\sigma}$ is positive.
\end{remark}
\vskip.2cm
\begin{proof}[Proof of Theorem \protect\ref{NEG}]
The proof proceeds by contradiction. Suppose that $(u,v)$ is a mild solution
of (\ref{sys1}) which exists globally in time. Set
\begin{equation*}
\varphi \left( t, x\right) =\varphi _{1}\left( x\right) \varphi _{2}\left(
t\right) ,\text{ }
\end{equation*}
where $\varphi _{1}\left( x\right) =\Phi ^{l}\left( \frac{\left\vert
x\right\vert }{T^{\lambda }}\right)$ with $\Phi \in C_{0}^{\infty }\left(
\mathbb{R}\right) $, $0\leq \Phi \left( z\right) \leq 1$, that satisfies
\begin{equation*}
\Phi \left( z\right) =\left\{
\begin{array}{c}
1\text{ if }\left\vert z\right\vert \leq 1, \\[4pt]
0\text{ if }\left\vert z\right\vert >2
\end{array}
\right.
\end{equation*}
and
\begin{equation*}
\varphi _{2}\left( t\right) =\left\{
\begin{array}{l}
\left( 1-\frac{t}{T}\right) ^{l} \ \ \ \ \ \ \text{ if } \, t\leq T, \\[4pt]
\text{ \ \ \ \ \ \ }0\text{\ \ \ \ \ \ \ \ \ \ if } \, t>T
\end{array}
\right.
\end{equation*}
where $l>\max \left\{ 1,\frac{q}{q-1}\gamma _{1}-1,\frac{p}{p-1}\gamma
_{2}-1\right\} $ and $\lambda >0$ to be determined later. \newline
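For later reference, we recall the classical formula for the right-sided
Riemann--Liouville derivative of the temporal weight (valid for $l>\gamma -1$; see any standard text on fractional calculus): for $0\leq t\leq T$,
\begin{equation*}
D_{t|T}^{\gamma }\varphi _{2}\left( t\right) =\frac{\Gamma \left( l+1\right)
}{\Gamma \left( l+1-\gamma \right) }T^{-\gamma }\left( 1-\frac{t}{T}\right)
^{l-\gamma }.
\end{equation*}
In particular, each factor $D_{t|T}^{\gamma _{i}}\varphi _{2}$ contributes a
power $T^{-\gamma _{i}}$ in the estimates that follow, and the lower bound
imposed on $l$ guarantees that $\left\vert D_{t|T}^{\gamma _{1}}\varphi
_{2}\right\vert ^{q^{\prime }}\left\vert \varphi _{2}\right\vert
^{-q^{\prime }/q}$ (and its analogue with $p$) is integrable on $\left[ 0,T\right] $.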
We set $Q_{T}:=\mathbb{R}^{N}\times \left[ 0,T \right] .$\newline
From Definition \ref{Weaks} of the weak solution (\ref{lso16}), we have
\begin{eqnarray}
&&\displaystyle\int_{Q_{T}}uD_{t|T}^{\gamma _{1}}\varphi \left( t,x\right)
dxdt-\displaystyle\int_{Q_{T}}u\Delta \varphi \left( t,x\right) \, dxdt =
\displaystyle\int_{\mathbb{R}^{N}}u_{0}\left( x\right) \left(
D_{t|T}^{\gamma _{1}-1}\varphi \right) \left( 0,.\right) dx \notag \\
&&+\displaystyle\int_{Q_{T}}u_{1}\left( x\right) D_{t|T}^{\gamma
_{1}-1}\varphi \left( t,x\right) dxdt+\displaystyle\int_{Q_{T}}\left\vert
v\left( t,x\right) \right\vert ^{p}\varphi \left( t,x\right) dxdt,
\label{formu1}
\end{eqnarray}
\begin{eqnarray}
&&\displaystyle\int_{Q_{T}}vD_{t|T}^{\gamma _{2}}\varphi \left( t,x\right)
dxdt-\displaystyle\int_{Q_{T}}v\Delta \varphi \left( t,x\right) dxdt =
\displaystyle\int_{\mathbb{R}^{N}}v_{0}\left( x\right) \left(
D_{t|T}^{\gamma _{2}-1}\varphi \right) \left( 0,.\right) dx \notag \\
&&+\displaystyle\int_{Q_{T}}v_{1}\left( x\right) D_{t|T}^{\gamma
_{2}-1}\varphi \left( t,x\right) dxdt+\displaystyle\int_{Q_{T}}\left\vert
u\left( t,x\right) \right\vert ^{q}\varphi \left( t,x\right) dxdt.
\label{formut}
\end{eqnarray}
On the other hand, we have from the definition of $\varphi $ that
\begin{eqnarray}
&&\displaystyle\int_{Q_{T}}u\varphi _{1}\left( x\right) D_{t|T}^{\gamma
_{1}}\varphi _{2}\left( t\right) dxdt-\displaystyle\int_{Q_{T}}u\varphi
_{2}\left( t\right) \Delta \varphi _{1}\left( x\right) \, dxdt \notag \\
&&=\displaystyle\int_{\mathbb{R}^{N}}u_{0}\left( x\right) \varphi _{1}\left(
x\right) \left( D_{t|T}^{\gamma _{1}-1}\varphi _{2}\right) \left( 0,.\right)
dx \notag \\
&&+\displaystyle\int_{Q_{T}}u_{1}\varphi _{1}\left( x\right) D_{t|T}^{\gamma
_{1}-1}\varphi _{2}\left( t\right) dxdt+\displaystyle\int_{Q_{T}}\left\vert
v\left( t,x\right) \right\vert ^{p}\varphi _{1}\left( x\right) \varphi
_{2}\left( t\right) dxdt\text{,}
\end{eqnarray}
and
\begin{eqnarray}
&&\displaystyle\int_{Q_{T}}v\varphi _{1}\left( x\right) D_{t|T}^{\gamma
_{2}}\varphi _{2}\left( t\right) dxdt-\displaystyle\int_{Q_{T}}v\varphi
_{2}\left( t\right) \Delta \varphi _{1}\left( x\right) \, dxdt \notag \\
&&=\displaystyle\int_{\mathbb{R}^{N}}v_{0}\left( x\right) \varphi _{1}\left(
x\right) \left( D_{t|T}^{\gamma _{2}-1}\varphi _{2}\right) \left( 0,.\right)
dx \notag \\
&&+\displaystyle\int_{Q_{T}}v_{1}\left( x\right) \varphi _{1}\left( x\right)
D_{t|T}^{\gamma _{2}-1}\varphi _{2}\left( t\right) dxdt+\displaystyle
\int_{Q_{T}}\left\vert u\left( t,x\right) \right\vert ^{q}\varphi _{1}\left(
x\right) \varphi _{2}\left( t\right) dxdt\text{.}
\end{eqnarray}
Applying H\"{o}lder's inequality with exponents $q$ and $q^{\prime }=\frac{q}{q-1}$ to the right-hand side of (\ref{formu1}), we get
\begin{eqnarray*}
\displaystyle\int_{Q_{T}}u\varphi _{1}\left( x\right) D_{t|T}^{\gamma
_{1}}\varphi _{2}\left( t\right) dxdt &=&\displaystyle\int_{Q_{T}}
u\left\vert \varphi _{2}\left( t\right) \right\vert ^{\frac{1}{q}}\left\vert
\varphi _{1}\left( x\right) \right\vert ^{1-\frac{1}{q}+\frac{1}{q}}\left\vert \varphi _{2}\left( t\right) \right\vert ^{-\frac{1}{q}}D_{t|T}^{\gamma _{1}}\varphi _{2}\left( t\right) dxdt \\
&\leq &\mathcal{I}^{\frac{1}{q}} \mathcal{\tilde A},
\end{eqnarray*}
where we have set
\begin{equation*}
\mathcal{I} := \int_{Q_{T}}\left\vert u\right\vert ^{q}\varphi _{1}\left(
x\right) \varphi _{2} \, dxdt
\end{equation*}
and
\begin{equation*}
\mathcal{\tilde A}:= \left( \displaystyle \int_{Q_{T}}\left\vert
D_{t|T}^{\gamma _{1}}\varphi _{2}\left( t\right) \right\vert ^{q^{\prime
}}\left\vert \varphi _{2}\left( t\right) \right\vert ^{-\frac{q^{\prime }}{q}}\left\vert \varphi _{1}\left( x\right) \right\vert ^{\left( 1-\frac{1}{q}\right) q^{\prime }}dxdt\right) ^{\frac{1}{q^{\prime }}},
\end{equation*}
Similarly,
\begin{eqnarray*}
\displaystyle\int_{Q_{T}}u\Delta \varphi _{1}\left( x\right) \varphi
_{2}\left( t\right) \, dxdt &\leq &\mathcal{I}^{\frac{1}{q}}\left(
\int_{Q_{T}}\left\vert \Delta \varphi _{1}\left( x\right) \right\vert
^{q^{\prime }}\left\vert \varphi _{1}\left( x\right) \right\vert ^{-\frac{q^{\prime }}{q}}\left\vert \varphi _{2}\left( t\right) \right\vert ^{\left(
1-\frac{1}{q}\right) q^{\prime }}dxdt\right) ^{\frac{1}{q^{\prime }}} \\
&\leq &C\mathcal{I}^{\frac{1}{q}}\left( \int_{\text{supp}\left( \Delta
\varphi _{1}\right) }\varphi _{1}^{1-q^{\prime }}\left( x\right) \left\vert
\Delta \varphi _{1}\left( x\right) \right\vert ^{q^{\prime }}\left\vert
\varphi _{1}^{l}\left( x\right) \right\vert ^{-\frac{q^{\prime }}{q}}dx\int_{0}^{T}\left\vert \varphi _{2}\left( t\right) \right\vert ^{\left( 1-\frac{1}{q}\right) q^{\prime }} \, dt\right) ^{\frac{1}{q^{\prime }}}\text{.}
\end{eqnarray*}
Collecting the above estimates, we obtain
\begin{eqnarray}
&&CT^{\left( 1-\gamma _{1}\right) }\displaystyle\int_{\mathbb{R}^{N}}u_{0}\left( x\right) \varphi _{1}\left( x\right) dx+CT^{2-\gamma _{1}}\displaystyle\int_{\mathbb{R}^{N}}u_{1}\left( x\right) \varphi _{1}\left(
x\right) dx+\mathcal{J} \notag \\
&\leq &\mathcal{I} ^{\frac{1}{q}} \mathcal{\tilde A} \notag \\
&&+\mathcal{I} ^{\frac{1}{q}}\left( \int_{\mathbb{R}^{N}}\left\vert \Delta
\varphi _{1}\left( x\right) \right\vert ^{q^{\prime }}\left\vert \varphi
_{1}\left( x\right) \right\vert ^{-\frac{q^{\prime }}{q}}dx\int_{0}^{T}\left\vert \varphi _{2}\left( t\right) \right\vert ^{\left( 1-\frac{1}{q}\right)
q^{\prime }}dt\right) ^{\frac{1}{q^{\prime }}}, \label{in1}
\end{eqnarray}
where we have set
\begin{equation*}
\mathcal{J}:=\int_{Q_{T}}\left\vert v\right\vert ^{p}\varphi _{1}\left(
x\right) \varphi _{2}\left( t\right) dxdt.
\end{equation*}
Similarly, we obtain
\begin{eqnarray}
&&\mathcal{I}+T^{\left( 1-\gamma _{2}\right) }\int_{\mathbb{R}^{N}}v_{0}\varphi _{1}\left( x\right) dx + CT^{2-\gamma _{2}}\displaystyle\int_{\mathbb{R}^{N}}v_{1}\left( x\right) \varphi _{1}\left( x\right) dx
\notag \\
&\leq &\mathcal{J} ^{\frac{1}{p}}\left( \displaystyle\int_{Q_{T}}\left\vert
D_{t|T}^{\gamma _{2}}\varphi _{2}\left( t\right) \right\vert ^{p^{\prime
}}\left\vert \varphi _{2}\left( t\right) \right\vert ^{-\frac{p^{\prime }}{p}}\left\vert \varphi _{1}\left( x\right) \right\vert ^{\left( 1-\frac{1}{p}\right) p^{\prime }}dxdt\right) ^{\frac{1}{p^{\prime }}} \notag \\
&&+\mathcal{J} ^{\frac{1}{p}}\left( \int_{\mathbb{R}^{N}}\left\vert \Delta
\varphi _{1}\left( x\right) \right\vert ^{p^{\prime }}\left\vert \varphi
_{1}\left( x\right) \right\vert ^{-\frac{p^{\prime }}{p}}dx\int_{0}^{T}\left\vert \varphi _{2}\left( t\right) \right\vert ^{\left( 1-\frac{1}{p}\right)
p^{\prime }}dt\right) ^{\frac{1}{p^{\prime }}}, \label{in2}
\end{eqnarray}
where $pp^{\prime }=p+p^{\prime }$.\newline
Consequently,
\begin{eqnarray*}
\mathcal{J}+CT^{\left( 1-\gamma _{1}\right) }\displaystyle\int_{\mathbb{R}^{N}}u_{0}\varphi _{1}^{l}\left( x\right) dx+CT^{2-\gamma _{1}}\displaystyle\int_{\mathbb{R}^{N}}u_{1}\left( x\right) \varphi _{1}\left( x\right) \, dx
\leq \mathcal{A}\mathcal{I}^{\frac{1}{q}},
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{I}+CT^{\left( 1-\gamma _{2}\right) }\int_{\mathbb{R}^{N}}v_{0}\Phi
\left( x\right) dx+CT^{2-\gamma _{2}}\displaystyle\int_{\mathbb{R}^{N}}v_{1}\left( x\right) \varphi _{1}\left( x\right) \, dx \leq \mathcal{B}\mathcal{J} ^{\frac{1}{p}},
\end{eqnarray*}
with
\begin{eqnarray*}
\mathcal{A} &=&\left( \displaystyle\int_{Q_{T}}\left\vert D_{t|T}^{\gamma
_{1}}\varphi _{2}\left( t\right) \right\vert ^{q^{\prime }}\left\vert
\varphi _{2}\left( t\right) \right\vert ^{-\frac{q^{\prime }}{q}}\left\vert
\varphi _{1}\left( x\right) \right\vert ^{\left( 1-\frac{1}{q}\right)
q^{\prime }}dxdt\right) ^{\frac{1}{q^{\prime }}} \\
&&+\left( \int_{Q_{T}}\left\vert \Delta \varphi _{1}\left( x\right)
\right\vert ^{q^{\prime }}\left\vert \varphi _{1}\left( x\right) \right\vert
^{-\frac{q^{\prime }}{q}}\left\vert \varphi _{2}\left( t\right) \right\vert
^{\left( 1-\frac{1}{q}\right) q^{\prime }}dxdt\right) ^{\frac{1}{q^{\prime }}} \\
&\leq &CT^{\left( -q^{\prime }\gamma _{1}+1+N\lambda \right) \frac{1}{q^{\prime }}}+CT^{\left( -2\lambda q^{\prime }+1+N\lambda \right) \frac{1}{q^{\prime }}}\text{,}
\end{eqnarray*}
and
\begin{eqnarray*}
\mathcal{B} &=&\left( \displaystyle\int_{Q_{T}}\left\vert D_{t|T}^{\gamma
_{2}}\varphi _{2}\left( t\right) \right\vert ^{p^{\prime }}\left\vert
\varphi _{2}\left( t\right) \right\vert ^{-\frac{p^{\prime }}{p}}\left\vert
\varphi _{1}\left( x\right) \right\vert ^{\left( 1-\frac{1}{p}\right)
p^{\prime }}dxdt\right) ^{\frac{1}{p^{\prime }}} \\
&&+\left( \displaystyle\int_{Q_{T}}\left\vert \Delta \varphi _{1}\left(
x\right) \right\vert ^{p^{\prime }}\left\vert \varphi _{1}^{l}\left(
x\right) \right\vert ^{-\frac{p^{\prime }}{p}}\left\vert \varphi _{2}\left(
t\right) \right\vert ^{\left( 1-\frac{1}{p}\right) p^{\prime }}dxdt\right) ^{\frac{1}{p^{\prime }}} \\
&\leq &CT^{\left( -\gamma _{2}p^{\prime }+1+N\lambda \right) \frac{1}{p^{\prime }}}+CT^{\left( -2\lambda p^{\prime }+1+N\lambda \right) \frac{1}{p^{\prime }}}.
\end{eqnarray*}
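The powers of $T$ in the last two bounds follow from the change of variables
$x=T^{\lambda }y$, $t=T\tau $ (so that $dx\,dt=T^{1+N\lambda }\,dy\,d\tau $),
together with the scalings $D_{t|T}^{\gamma _{1}}\varphi _{2}\sim T^{-\gamma
_{1}}$ and $\Delta \varphi _{1}\sim T^{-2\lambda }$. For instance, for the
first term of $\mathcal{A}$,
\begin{equation*}
\left( \displaystyle\int_{Q_{T}}\left\vert D_{t|T}^{\gamma _{1}}\varphi
_{2}\right\vert ^{q^{\prime }}\left\vert \varphi _{2}\right\vert ^{-\frac{q^{\prime }}{q}}\left\vert \varphi _{1}\right\vert ^{\left( 1-\frac{1}{q}\right) q^{\prime }}dxdt\right) ^{\frac{1}{q^{\prime }}}\leq CT^{\left(
-q^{\prime }\gamma _{1}+1+N\lambda \right) \frac{1}{q^{\prime }}},
\end{equation*}
the remaining integral in $\left( y,\tau \right) $ being a finite constant
by the choice of $l$ and the compact support of $\Phi $.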
Using inequalities (\ref{in1}) and (\ref{in2}), we can write
\begin{equation*}
\mathcal{J}+CT^{2-\gamma _{1}}\displaystyle\int_{\mathbb{R}^{N}}u_{1}\left(
x\right) \varphi _{1}\left( x\right) dx\leq \mathcal{A} \, \mathcal{B}^{\frac{1}{q}}\mathcal{J} ^{\frac{1}{pq}},
\end{equation*}
and
\begin{eqnarray*}
\mathcal{I}+CT^{2-\gamma _{2}}\displaystyle\int_{\mathbb{R}^{N}}v_{1}\left(
x\right) \varphi _{1}^{l}\left( x\right) \, dx \leq \mathcal{B}\, \mathcal{A}^{\frac{1}{p}}\mathcal{I} ^{\frac{1}{pq}}.
\end{eqnarray*}
Now, applying Young's inequality to the right-hand side of the above
estimates, we get
\begin{equation*}
\left( pq-1\right) \mathcal{J}+CpqT^{2-\gamma _{1}}\displaystyle\int_{\mathbb{R}^{N}}u_{1}\left( x\right) \varphi _{1}\left( x\right) \, dx \leq
\left( pq-1\right) \left( \mathcal{A}\,\mathcal{B}^{\frac{1}{q}}\right) ^{\frac{pq}{pq-1}},
\end{equation*}
and
\begin{equation*}
\left( pq -1\right) \mathcal{I}+Cpq T^{2-\gamma _{2}}\displaystyle\int_{\mathbb{R}^{N}}v_{1}\left( x\right) \varphi _{1}\left( x\right) dx
\leq \left( pq-1\right) \left( \mathcal{B}\,\mathcal{A}^{\frac{1}{p}}\right)
^{\frac{pq}{pq-1}}.
\end{equation*}
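Here Young's inequality $ab\leq \frac{a^{m}}{m}+\frac{b^{m^{\prime }}}{m^{\prime }}$ is applied with $m=pq$, $m^{\prime }=\frac{pq}{pq-1}$, $a=\mathcal{J}^{\frac{1}{pq}}$ and $b=\mathcal{A}\,\mathcal{B}^{\frac{1}{q}}$, namely
\begin{equation*}
\mathcal{A}\,\mathcal{B}^{\frac{1}{q}}\mathcal{J}^{\frac{1}{pq}}\leq \frac{1}{pq}\mathcal{J}+\frac{pq-1}{pq}\left( \mathcal{A}\,\mathcal{B}^{\frac{1}{q}}\right) ^{\frac{pq}{pq-1}};
\end{equation*}
absorbing the term $\frac{1}{pq}\mathcal{J}$ into the left-hand side and
multiplying by $pq$ yields the first inequality, and the second one is
obtained in the same way with $\mathcal{I}$ and $\mathcal{B}\,\mathcal{A}^{\frac{1}{p}}$.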
At this stage, we set $x=T^{\lambda }y$, $t=T\tau $, with $\lambda >0$ to be
chosen later. Then we have
\begin{equation*}
\mathcal{A} \, \mathcal{B}^{\frac{1}{q}}\leq CT^{\left[ \left( -q^{\prime
}\gamma _{1}+1+\lambda N\right) \frac{1}{q^{\prime }}+\left( -p^{\prime
}\gamma _{2}+1+N\lambda \right) \frac{1}{qp^{\prime }}\right] \frac{pq}{pq-1}}+T^{\left[ \left( -q^{\prime }\gamma _{1}+1+\lambda N\right) \frac{1}{q^{\prime }}+\left( -2\lambda p^{\prime }+1+N\lambda \right) \frac{1}{qp^{\prime }}\right] \frac{pq}{pq-1}}\text{,}
\end{equation*}
and
\begin{equation*}
\mathcal{B}\, \mathcal{A}^{\frac{1}{p}}\leq C\left( T^{\left( -\gamma
_{2}p^{\prime }+1+N\lambda \right) \frac{1}{p^{\prime }}}+T^{\left(
-2\lambda p^{\prime }+1+N\lambda \right) \frac{1}{p^{\prime }}}\right)
\left( T^{\left( -q^{\prime }\gamma _{1}+1+N\lambda \right) \frac{1}{q^{\prime }}}+T^{\left( -2\lambda q^{\prime }+1+N\lambda \right) \frac{1}{q^{\prime }}}\right) ^{\frac{1}{p}}\text{.}
\end{equation*}
We choose $\lambda =\frac{\gamma _{1}}{2}$ so that $\left( -q^{\prime
}\gamma _{1}+1+N\lambda \right) \frac{1}{q^{\prime }}=\left( -2\lambda
q^{\prime }+1+N\lambda \right) \frac{1}{q^{\prime }}$.
Therefore, we have
\begin{equation}
\displaystyle\int_{\mathbb{R}^{N}}u_{1}\left( x\right) \varphi _{1}\left(
x\right) dx\leq CT^{\delta _{1}}\text{,} \label{cigm1}
\end{equation}
and
\begin{equation}
\int_{\mathbb{R}^{N}}v_{1}\left( x\right) \varphi _{1}\left( x\right) dx\leq
CT^{\delta _{2}}\text{,} \label{cigm2}
\end{equation}
where
\begin{eqnarray*}
&&\delta _{1}=\max \left\{ \left[ \left( -q^{\prime }\gamma _{1}+1+\frac{\gamma _{1}}{2}N\right) \frac{1}{q^{\prime }}+\left( -p^{\prime }\gamma
_{2}+1+N\frac{\gamma _{1}}{2}\right) \frac{1}{qp^{\prime }}\right] \frac{pq}{pq-1}+\gamma _{1}-2,\right. \\
&&\left. \left[ \left( -q^{\prime }\gamma _{1}+1+\frac{\gamma _{1}}{2}N\right) \frac{1}{q^{\prime }}+\left( -2\frac{\gamma _{1}}{2}p^{\prime }+1+N\frac{\gamma _{1}}{2}\right) \frac{1}{qp^{\prime }}\right] \frac{pq}{pq-1}+\gamma _{1}-2\right\} ,
\end{eqnarray*}
and
\begin{eqnarray*}
&&\delta _{2}=\max \left\{ \left[ \left( -\gamma _{2}p^{\prime }+1+N\frac{\gamma _{1}}{2}\right) \frac{1}{p^{\prime }}+\left( -q^{\prime }\gamma
_{1}+1+N\frac{\gamma _{1}}{2}\right) \frac{1}{pq^{\prime }}\right] \frac{pq}{pq-1}+\gamma _{2}-2,\right. \\
&&\left. \left[ \left( -2\frac{\gamma _{1}}{2}p^{\prime }+1+N\frac{\gamma
_{1}}{2}\right) \frac{1}{p^{\prime }}+\left( -q^{\prime }\gamma _{1}+1+N\frac{\gamma _{1}}{2}\right) \frac{1}{pq^{\prime }}\right] \frac{pq}{pq-1}+\gamma _{2}-2\right\} .
\end{eqnarray*}
The condition (\ref{critdimension}) leads to either $\delta _{1}<0$ or
$\delta _{2}<0$. Then, as $T\rightarrow \infty $, the right-hand side of
(\ref{cigm1}) (resp. (\ref{cigm2})) tends to zero and the left-hand side converges
to $\displaystyle\int_{\mathbb{R}^{N}}u_{1}\left( x\right) dx>0$ (resp.
$\int_{\mathbb{R}^{N}}v_{1}\left( x\right) dx>0$), which is a contradiction.
We repeat the same argument with $\lambda =\frac{\gamma _{2}}{2}$ to
conclude the proof of Theorem \ref{NEG}.
\end{proof}
\begin{remark}
\textrm{When }$\gamma _{1}=\gamma _{2}=\gamma $\textrm{, we recover the case
studied in }\cite{AlmeidaEJDE}.
\end{remark}
\section{Introduction}
The power spectra of fluctuations in the matter and (more observably)
galaxy fields carry important cosmological information. On large
scales and early epochs, where the fluctuations are small and
Gaussian, this information is preserved from early epochs, each
Fourier mode having evolved linearly. On smaller scales, when the
amplitudes of fluctuations grow to $\gtrsim 1$, the linear
approximation breaks down. Modes of the overdensity $\delta$ become
coupled, and their evolution becomes much harder to model.
Inconveniently, we need high-order perturbation theory and numerical
simulations to model the expectation value of the power spectrum
accurately. A more fundamental problem is that the cosmic
(co)variance in the power spectrum acquires a dominant non-Gaussian
component on surprisingly large, ``translinear'' scales
\citep{mw,szh,coorayhu}. This has unpleasant consequences for
cosmological parameter estimation, quantified by a ``translinear
plateau'' in cumulative Fisher information content
\citep{rh05,rh06,ns06,ns07,leepen,takahashi}.
Recently, we found that performing a logarithmic transform on the
matter overdensity $\delta$, i.e.\ using $\ln(1+\delta)$ instead of
$\delta$ as a density variable, drastically reduces the nonlinearities
on translinear scales in the power spectrum \citep[][Paper I]{nss09}.
The logarithmic transform pushes the translinear plateau to scales
about 2-3 times smaller, revealing about 10 times more Fisher
information. It also gives a power-spectrum shape intriguingly close
to the linear-theory prediction. The density field 1-point PDF
(probability density function) is approximately lognormal
\citep{colesjones}; in fact, we found that an exact Gaussianization of
the PDF (described below) performs even better.
PDF Gaussianization in large-scale structure was first proposed by
\citet{weinberg}, although not explicitly to increase power-spectrum
information content, but to reconstruct the initial density field.
\citet{croft98} used PDF Gaussianization in processing Lyman-$\alpha$
forest data from quasar spectra, but this turns out not to be an
essential step in estimating small-scale power spectra from this
data, because radiative transfer already maps the overdensity into a
narrow range of flux \citep{croft99}.
Although PDF Gaussianization impressively recovers the shape of the
initial power spectrum, the transformation is less successful in
reconstructing initial mode-by-mode phases and amplitudes
\citep{nssprep}. This is because of bulk motions of matter on $\sim
10$\,$h^{-1}$\,Mpc\ scales, and formation of the cosmic web. For example, the
initial and final PDF-Gaussianized fields shown in Fig.\ 1 in Paper I
look by eye quite different on small scales. Precise reconstruction
of the phases and amplitudes of translinear Fourier modes appears to
require the accurate estimation and subtraction of the Lagrangian
displacement field \citep[e.g., ][]{mak03,recon,lavaux08,noh}. With a
Lagrangian reconstruction, it is obvious that the shape of the linear
power spectrum should be reconstructed on translinear scales, but the
methods are much more complicated and computationally intensive than a
simple density PDF transform.
It does make sense intuitively that PDF Gaussianization should help
the power spectrum to describe a field. While the cosmologically
useful information in a Gaussian field is entirely in Fourier
amplitudes, the information in a non-Gaussian field is partly in phase
correlations, which are necessary to describe features such as sharp
density peaks. Thus, flattening peaks restores information to the
Fourier amplitudes from the phases. Phase correlations affect
higher-order statistics, not the power spectrum, so Gaussianization
can be seen as pulling information from higher-order statistics into
the power spectrum.
In the approximation that a non-Gaussian field is a non-linear
transformation of a Gaussian field, PDF Gaussianization will produce a
Gaussian field, vanquishing all higher-order correlations.
Conversely, subjecting a Gaussian field to a non-linear transformation
produces higher-order correlations \citep{Szalay88}. In particular,
over length scales where the two-point correlation function is
positive, a monotonic transformation will generally produce a positive
four-point function, which indicates a positive non-Gaussian
contribution to the covariance through the trispectrum. Thus it is
plausible that much of the covariance on small scales is purely from
the non-Gaussianity of the PDF. Perhaps a related statistic to the
power spectrum of the Gaussianized field is the copula \citep{copula},
which is similarly immune to monotonic transformations on the field it
is applied to.
Despite the promising results, there remain issues to be resolved
before PDF Gaussianization can be used in practice. In this paper, we
investigate discreteness noise. For a logarithmic transform, the
problem becomes obvious when there are cells with zero galaxies, which
would transform to $-\infty$. We first investigate the ideal case of
Poisson noise in the matter power spectrum, and then the galaxy power
spectrum. We also make a start at exploring the effect of redshift
distortions.
\section{Gaussianizing transformations}
\label{sec:transforms}
There are many possible meanings of ``Gaussianizing.'' For example,
\citet{zhang10} split a density field into Gaussianized and
non-Gaussianized components based on distributions of wavelet
coefficients, and showed that the Gaussianized component of the matter
density field carries somewhat more Fisher information than the full
field. In the present paper, by ``Gaussianization'' we mean PDF
Gaussianization, i.e.\ a function applied equally to each pixel that
reduces the higher-order moments of the one-point distribution of the
field.
We use a simple approach, first estimating the density using simple
Nearest-Grid-Point (NGP) mass assignment, and then Gaussianizing.
Perhaps some gains in information on small scales could come from
using a higher-order mass assignment scheme, or an interpolation
naturally suited to discrete data, for example the DTFE
\citep[Delaunay Tessellation Field Estimator, ][]{vdws}.
Sophisticated techniques have even been developed to estimate the
$\ln(1+\delta)$ field directly \citep[e.g., ][]{kitaura10,weig}. Here we
choose NGP for its simplicity, and for the simple, constant form of
its shot noise, at least for ideal Poisson data. More-sophisticated
techniques could perform (or inform) even better.
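As an illustration of the NGP rule, here is a minimal one-dimensional sketch (the function name and interface are ours, and the paper's fields are of course three-dimensional):

```python
def ngp_density(positions, n_cells, box_size):
    """Nearest-Grid-Point overdensity, delta = n/nbar - 1, on a 1-D periodic grid.

    NGP assigns each particle wholly to the single cell containing it;
    in 3-D the same rule is applied independently along each axis.
    """
    counts = [0] * n_cells
    for x in positions:
        i = int(x / box_size * n_cells) % n_cells  # cell containing x (periodic box)
        counts[i] += 1
    nbar = len(positions) / n_cells  # mean count per cell
    return [c / nbar - 1.0 for c in counts]
```

For ideal Poisson-sampled data, each NGP cell count is then an independent Poisson draw, which is what makes the shot-noise contribution to the resulting power spectrum take the simple, constant form mentioned above.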
The two transformations we consider are ``exact'' Gaussianization,
$G(\delta)$, and a modified logarithmic transform, ${\log_{+ \hspace{-1pt}}}(\delta)$.
\citet{seo} have dealt with the problem of log-transforming zero cells
by introducing a density floor, i.e.\ adding an arbitrary small,
positive parameter to the argument of the logarithm. This alternative
modified logarithmic transform did succeed in boosting the Fisher
information in the lensing-convergence power spectrum. There are many
possible alternative Gaussianizing transforms. One commonly used
transform used for producing a Gaussian distribution from a Poisson
distribution is the \citet{anscombe} transform, but this transform is
not ideal in our case. While the separate density PDFs for each
pixel, over different realizations, should be Poisson in our case, the
global pixel density PDF will generally not be Poisson.
Our first transformation, $G(\delta)$, is the density one would expect
from an exactly Gaussian PDF with the same ranking of cell densities
as $\delta$. Explicitly,
\begin{equation}
G(\delta) = \sqrt{2}\sigma\,{\rm erf}^{-1}\left(2f_{<\delta}-1+1/N\right),
\label{eqn:g}
\end{equation}
where $f_{<\delta}$ is the fraction of cells less-dense than $\delta$
in the density field, $\sigma$ is the standard deviation of the
Gaussian that $\delta$ is mapped onto, and $N$ is the number of cells.
If there are multiple cells with the same $\delta$, as usually occurs
in Poisson-sampled $\delta$'s, then there will be some range of
$G(\delta)$ that is mapped to cells with the same $\delta$. In this
case, the actual $G(\delta)$ that we assign to these cells is an
average of $G(\delta)$ over this range.
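In code, Eq.\ (\ref{eqn:g}) together with the tie-averaging rule amounts to a rank transform. The following Python sketch is illustrative only (the function name and the SciPy dependency are our choices, not from the paper):

```python
import numpy as np
from scipy.special import erfinv

def gaussianize(delta, sigma=1.0):
    """Exact PDF Gaussianization, Eq. (g): map each cell onto the density
    expected from a Gaussian PDF with the same cell ranking.
    Cells with equal delta receive the average of G over their rank range."""
    flat = np.asarray(delta, dtype=float).ravel()
    N = flat.size
    order = np.argsort(flat, kind='stable')
    # f_{<delta} = i/N for the i-th least-dense cell (ignoring ties)
    arg = 2.0 * np.arange(N) / N - 1.0 + 1.0 / N
    g_sorted = np.sqrt(2.0) * sigma * erfinv(arg)
    out = np.empty(N)
    vals = flat[order]
    # boundaries of runs of tied delta values in the sorted field
    edges = np.flatnonzero(np.r_[True, np.diff(vals) != 0, True])
    for lo, hi in zip(edges[:-1], edges[1:]):
        out[order[lo:hi]] = g_sorted[lo:hi].mean()
    return out.reshape(np.shape(delta))
```

For a Poisson-sampled grid most cells share a handful of integer counts, so the tie-averaging loop visits only a few groups.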
On the other hand, a drawback of $G(\delta)$ is that it is globally
defined, nontrivially depending on the entire $\delta$ field. Also,
the implicitly defined $G^{-1}$ function need not be well-behaved,
complicating attempts at predicting statistics of $G(\delta)$
analytically. So, we also investigate a modified logarithmic transform,
which only depends globally on $\delta$ through the mean density. We
define
\begin{equation}
{\log_{+ \hspace{-1pt}}}(\delta) = \left\{
\begin{array}{lc}
\ln(1+\delta),
& \delta > 0 \\
\delta, & \text{otherwise}
\end{array}
\right . .
\label{eqn:log+}
\end{equation}
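In NumPy the piecewise map is essentially a one-liner; this hedged sketch (our own naming) clamps the argument of the logarithm only to avoid spurious warnings in the unused branch:

```python
import numpy as np

def log_plus(delta):
    """Modified logarithmic transform, Eq. (log+): ln(1+delta) for
    overdense cells, the identity for delta <= 0 (so empty cells,
    delta = -1, stay finite)."""
    return np.where(delta > 0, np.log1p(np.maximum(delta, 0.0)), delta)
```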
\section{Poisson-sampled matter density fields}
First we investigate the simple case of exact Poisson discreteness
noise, in the matter field investigated in Paper I. We Poisson-sample
the density field of the Millennium Simulation \citep[MS,\ ][]{mill}
on a publicly available $256^3$ density grid at $z=0$, at various
sampling levels, from $n_{\rm cell} = 1/64$ to $n_{\rm cell}=64$ particles per
(2\,$h^{-1}$\,Mpc)$^3$ cell. (The full sampling level of the MS is
$n_{\rm cell}\approx 600$.) To Poisson-sample, we simply set the number of
particles in a cell to a random Poisson number of mean equal to the
full-sampling density.
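The sampling step can be sketched as below; normalizing the full-sampling density so that the grid mean is $n_{\rm cell}$ particles per cell is our reading of the procedure:

```python
import numpy as np

def poisson_sample(density, n_cell, rng=None):
    """Poisson-sample a density grid: each cell receives a Poisson number
    of particles whose mean is proportional to the full-sampling density,
    normalized to n_cell particles per cell on average.
    Returns the sampled overdensity delta = n/<n> - 1."""
    rng = np.random.default_rng(rng)
    mean_counts = n_cell * density / density.mean()
    counts = rng.poisson(mean_counts)
    return counts / n_cell - 1.0
```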
\subsection{Effects on the mean}
It is well-known \citep[e.g., ][]{peebles} that particle discreteness
produces a white-noise $1/n$ shot noise in the power spectrum, where
$n$ is the number density of particles.
\begin{figure}
\begin{center}
\includegraphics[scale=0.43]{figs/pgauss_shotnoise.pdf}
\end{center}
\caption{Poisson shot noise in the power spectra of $\delta$ (black)
and the Gaussianized $G(\delta)$ (green), using the MS matter
density field. Dashed and dotted curves show $P_\delta$ and $P_G$
at $n_{\rm cell}=1/8$ and full sampling, respectively; solid black and
green curves show their differences. At low $k$, the absolute
values of these differences are shown with light dotted lines.
The solid red curve is the initial-conditions (linear) power
spectrum, multiplied by a factor to line up with $P_{\rm \delta}$
in the lowest-$k$ bin. The dash-dotted line is the shot-noise
estimate in Eq.\ (\ref{eqn:neff}). (Some curves described here do
not appear in the figure legend.) }
\label{fig:pgauss_shotnoise}
\end{figure}
Fig.\ \ref{fig:pgauss_shotnoise} shows shot noises of $\delta$ and
$G(\delta)$, estimated as the difference between power spectra of the
density fields with and without the added discreteness noise, measured
from the MS matter density field on a $256^3$ grid. When $P_G$ and
$P_{\log\hspace{-1pt}+\hspace{-1pt}}$ (the power spectra of $G(\delta)$ and ${\log_{+ \hspace{-1pt}}}(\delta)$) are
plotted in this paper, we multiply them by constants to line them up
with $P_\delta$ in the lowest-$k$ bin. For $P_G$, this is equivalent
to setting the $\sigma$ used in Eq.\ (\ref{eqn:g}).
For $P_\delta$, as expected, the simple $1/n$ estimate works quite
well. The shot noise in $P_G$, on the other hand, carries some slight
scale dependence, and is generally greater than the $P_\delta$ shot
noise. Intuitively, Gaussianization increases the shot noise because
it increases the contrast between low-density cells.
The green dot-dashed curve in Fig.\ \ref{fig:pgauss_shotnoise} shows
an estimate of this shot noise. It was calculated from a histogram of
$G(\delta)$, using the empirical expression
\begin{equation}
1/n_{\rm eff}= V_{\rm cell}\sum_i f(\delta_i) (\delta_{i+1}-\delta_{i}),
\label{eqn:neff}
\end{equation}
substituting $G(\delta)_i$ for $\delta_i$. Here, $V_{\rm cell}$ is
the volume of a cell, and $f(\delta_i)$ is the fraction of cells with
$\delta=\delta_i$, for density bins $i$.
Eq.\ (\ref{eqn:neff}) is motivated by the low-density tail of the
density distribution, where the $G$ function stretches the contrast.
For example, in a density field with cells of only 0 or 1 particle,
$1/n_{\rm eff}=V_{\rm cell}(\delta_1-\delta_0)=V_{\rm cell}(1/N_{\rm
particles}-0/N_{\rm particles})=1/n$. When this density field is
transformed by $G$, the shot noise increases, proportionally with
$[G(\delta)_1-G(\delta)_0]/(\delta_1-\delta_0)$. A simpler
approximation than Eq.\ (\ref{eqn:neff}) would be to use only $i=0,1$
in the sum (as in the preceding example), but we found that
Eq.\ (\ref{eqn:neff}) works a bit better.
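Eq.\ (\ref{eqn:neff}) evaluated on a histogram of the transformed field is a short sum; a sketch with an assumed interface (the distinct sorted values and their cell fractions):

```python
import numpy as np

def shot_noise_estimate(g_values, fractions, v_cell):
    """Empirical shot-noise estimate, Eq. (neff):
    1/n_eff = V_cell * sum_i f(delta_i) * (delta_{i+1} - delta_i),
    evaluated on the transformed values G(delta)_i.
    g_values: distinct (sorted) transformed densities;
    fractions: fraction of cells taking each value."""
    g = np.asarray(g_values, dtype=float)
    f = np.asarray(fractions, dtype=float)
    return v_cell * np.sum(f[:-1] * np.diff(g))
```

Keeping only the $i=0$ term reduces this to the two-level example in the text.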
\begin{figure}
\begin{center}
\includegraphics[scale=0.42]{figs/manyshots.pdf}
\end{center}
\caption{\vspace{-2pt} Shot noise in the power spectra of
$G(\delta)$, and ${\log_{+ \hspace{-1pt}}}(\delta)$, with varying $n_{\rm cell}$, the mean
number of particles per cell on the 256$^3$ grid. From bottom
(magenta) to top (blue), the number of particles per
(2\,$h^{-1}$\,Mpc)$^3$ cell varies from 64 to $1/64$, in multiples of
8. Solid curves are power spectra of the transformed
Poisson-sampled fields, and the black dotted curves are power
spectra of $G(\delta)$ and ${\log_{+ \hspace{-1pt}}}(\delta)$ with the full MS
particle sampling. Solid curves are multiplied by factors to line
up with the dotted curves in the lowest-$k$ bin. Dashed curves
show the differences between the solid and dotted curves, and the
dot-dashed lines show the shot noise estimated from
Eq.\ (\ref{eqn:neff}). }
\label{fig:manyshots}
\end{figure}
Fig.\ \ref{fig:manyshots} explores the shot noise in $P_G$ and
$P_{\log\hspace{-1pt}+\hspace{-1pt}}$ with varying sampling levels on a $256^3$ grid. The shot
noise in $P_{\log\hspace{-1pt}+\hspace{-1pt}}$ at high sampling and high $k$ is generally smaller
than in $P_G$; however, the shape of $P_{\log\hspace{-1pt}+\hspace{-1pt}}$'s shot-noise curve is
less consistent than $P_G$'s. This hints at the higher (co)variance
in $P_{\log\hspace{-1pt}+\hspace{-1pt}}$ than in $P_G$, which will be discussed further in the next
subsection. The estimate in Eq.\ (\ref{eqn:neff}) works well for low
sampling, but overestimates the shot noise if the sampling is $n_{\rm
cell} \gtrsim 1$, especially in the $P_G$ case.
\begin{figure}
\begin{center}
\includegraphics[scale=0.43]{figs/manyres.pdf}
\includegraphics[scale=0.43]{figs/ratlin.pdf}
\end{center}
\caption[1]{ {\it Top}. Shot noise in the power spectra of
$G(\delta)$, and ${\log_{+ \hspace{-1pt}}}(\delta)$, with varying grid resolution,
for a matter density field with a fixed Poisson sampling of
$n_{\rm cell} = 1/8$ particles per (2\,$h^{-1}$\,Mpc)$^3$ cell. The power
spectra are shown at grid resolutions of $32^3$ (black), $64^3$
(yellow), $128^3$ (blue), and $256^3$ (red). Both the power
spectra of the Poisson-sampled field and their differences from
the full-resolution power spectra are shown as solid curves; the
full-resolution power spectra appear as dashed curves. The shot
noise is rather consistent at different resolutions, especially in
the $G(\delta)$ case.
{\it Bottom}. Ratios of $P_{\delta}$ and $P_G$ to the initial
power spectrum, normalized to 1 in the lowest-$k$ bin. $P_G$ and
$P_\delta$ are at full sampling (as in Paper I), and ``$P_G$, + shot
noise'' is the raw power spectrum of the density field sampled at
$n_{\rm cell}=1$ on the 256$^3$ grid. Ratios at two resolutions
are shown: 128$^3$ and 256$^3$.
\label{fig:manyres}
}
\end{figure}
Fig.\ \ref{fig:manyres} shows how the shot noise varies with
resolution, at a fixed sampling. Generally, especially for $P_G$, the
shot noise is rather consistent for different resolutions. In fact,
the approximation that the shot noise is constant over different
resolutions seems to be a better approximation than the one in
Eq.\ (\ref{eqn:neff}), so we will use it when we deal with galaxies
(with no easily measurable ``no-shot-noise'' power spectrum).
The bottom panel of Fig.\ \ref{fig:manyres} shows the ``nonlinear
transfer function'' $P(k)/P_{\rm init}(k)$ for $P_G$, raw and after
subtracting shot noise, and for $P_\delta$ (after subtracting shot
noise).
\subsection{Effects on Information Content}
As in Paper I, we use a Fisher information \citep{fisher, tth}
formalism to quantify the information in the power spectrum. The
cumulative Fisher information in the power spectrum about parameters
$\alpha$ and $\beta$ over a range of power-spectrum bin indices
$i\in\mathcal{R}$ is estimated as
\begin{equation}
F_{\alpha\beta}(\mathcal{R}) =
\sum_{i,j\in \mathcal{R}} \frac{\partial\ln P^{\rm \hspace{1pt} \mhyphen sn}_i}{\partial\alpha}(\textbf{\textsf{C}}_\mathcal{R}^{-1})_{ij}
\frac{\partial\ln P^{\rm \hspace{1pt} \mhyphen sn}_j}{\partial\beta},
\label{inforange}
\end{equation}
where $\textbf{\textsf{C}}_\mathcal{R}$ is the square submatrix of $\textbf{\textsf{C}}$ with both indices
ranging over $\mathcal{R}$. $\textbf{\textsf{C}}_\mathcal{R}$ is the covariance matrix of the power
spectrum in bins,
$C_{ij}=\avg{\Delta P^{\rm \hspace{1pt} \mhyphen sn}_i \Delta P^{\rm \hspace{1pt} \mhyphen sn}_j}/(P^{\rm \hspace{1pt} \mhyphen sn}_i P^{\rm \hspace{1pt} \mhyphen sn}_j) =
\avg{\Delta\ln P^{\rm \hspace{1pt} \mhyphen sn}_i\Delta\ln P^{\rm \hspace{1pt} \mhyphen sn}_j}$.
In Paper I, we considered the signal-to-noise ratio S/N, the
information in the power spectrum about the power spectrum itself.
(S/N)$^2$ (called simply S/N in Paper I) is the Fisher information
about a (possibly hypothetical) parameter that depends on each mode of
the power spectrum equally. Thus, the derivative terms above were set
to unity. For $P_\delta$, the linear-power-spectrum amplitude $A$
\citep[e.g., investigated in][]{rh05,ns06} is a parameter such that
$\partial \ln P^{\rm \hspace{1pt} \mhyphen sn}_i/\partial \ln A = 1$ on linear scales, reaching
$\approx 2$ on translinear scales. The situation is more subtle in
the case of power spectra of nonlinearly-transformed fields, since
there is generally a large-scale bias (Paper I). But this does not
affect parameters that depend on the power spectrum's shape. And for
parameters that depend on the amplitude, the large-scale bias can be
constrained by measuring both $P_\delta$ and $P_G$ (or $P_{\log\hspace{-1pt}+\hspace{-1pt}}$) in
the linear regime. In this paper, though, we simply investigate the
S/N in $P_G$ (and $P_{\log\hspace{-1pt}+\hspace{-1pt}}$) themselves. This would be entirely
appropriate when comparing data to a mock catalog, for example.
Shot noise further complicates the situation. The statistically
stable shot noise component $S_i$ of the power spectrum that appears
on small scales actually reduces the covariance in $P^{\rm \hspace{1pt} \mhyphen sn}_i+S_i$ (the
power spectrum including shot noise), mimicking a gain in clustering
information, when really all that's being accurately measured is the
shot noise. The correct thing to investigate is the information in
$(P^{\rm \hspace{1pt} \mhyphen sn}_i+S_i)$ about the power spectrum without shot noise, $P^{\rm \hspace{1pt} \mhyphen sn}_i$. Thus
we investigate
\begin{eqnarray}
({\rm S/N})^2 & = & \sum_{i,j\in \mathcal{R}} \frac{\partial\ln(P^{\rm \hspace{1pt} \mhyphen sn}_i+S_i)}{\partial\ln P^{\rm \hspace{1pt} \mhyphen sn}_i}(\textbf{\textsf{C}}_\mathcal{R}^{-1})_{ij} \frac{\partial\ln(P^{\rm \hspace{1pt} \mhyphen sn}_j+S_j)}{\partial\ln P^{\rm \hspace{1pt} \mhyphen sn}_j}\nonumber\\
& = & \sum_{i,j\in \mathcal{R}} \frac{P^{\rm \hspace{1pt} \mhyphen sn}_i}{P^{\rm \hspace{1pt} \mhyphen sn}_i+S_i}(\textbf{\textsf{C}}_\mathcal{R}^{-1})_{ij} \frac{P^{\rm \hspace{1pt} \mhyphen sn}_j}{P^{\rm \hspace{1pt} \mhyphen sn}_j+S_j}.
\label{eqn:infosn}
\end{eqnarray}
For the last line, we assume that the shot noise is independent of the
clustering fluctuations. In fact, this result is roughly what one
would get by subtracting the mean shot noise from the power spectra
before measuring the covariance matrix. For example, if the
covariance matrix is diagonal, the Fisher-matrix entries for the power
spectrum including shot noise will be $(\textbf{\textsf{C}}_R^{-1})_{ii} =
C_{ii}^{-1} = (P^{\rm \hspace{1pt} \mhyphen sn}_i+S_i)^2/(\Delta P^{\rm \hspace{1pt} \mhyphen sn}_i)^2$. Plugging this into
Eq.\ (\ref{eqn:infosn}) causes the fractions to cancel, giving the
Fisher-matrix entries for the power spectrum without shot noise.
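The second line of Eq.\ (\ref{eqn:infosn}) is a small quadratic form; a hedged sketch, assuming the covariance matrix of $\ln(P^{\rm \hspace{1pt} \mhyphen sn}+S)$ over the chosen bin range has already been estimated:

```python
import numpy as np

def sn_squared(cov, p_clust, shot):
    """(S/N)^2 of Eq. (infosn): information in the measured spectrum
    (clustering + shot noise) about the shot-noise-free spectrum.
    cov: covariance matrix of ln(P+S) over the bin range;
    p_clust, shot: per-bin clustering power P_i and shot noise S_i."""
    frac = p_clust / (p_clust + shot)   # d ln(P_i + S_i) / d ln P_i
    cinv = np.linalg.inv(cov)
    return float(frac @ cinv @ frac)
```

With a diagonal covariance $C_{ii} = (\Delta P_i)^2/(P_i+S_i)^2$, the fractions cancel exactly as described in the text.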
\begin{figure}
\begin{center}
\includegraphics[scale=0.43]{figs/gausspoissinfo.pdf}
\end{center}
\caption[1]{ Information (S/N)$^2$\ curves in the presence of
discreteness effects, for $P_\delta$ (solid curves), and $P_G$,
and $P_{\log\hspace{-1pt}+\hspace{-1pt}}$ (dashed curves), using Eq.\ (\ref{eqn:infosn}).
The plethora of curves show the results for different
combinations of samplings and resolutions. The three samplings
shown are $n_{\rm cell}=1/64$, 1, and 8 particles per (2\,$h^{-1}$\,Mpc)$^3$
cell, in blue, green and red, respectively. These $256^3$
density grids are degraded in resolution by powers of two,
giving $32^3$, $64^3$, and $128^3$ grids. Information curves
are measured for each case, and symbols are placed at their ends
(at the Nyquist frequency). All of these differences have
little effect on the $P_\delta$ information, but change $P_G$,
and $P_{\log\hspace{-1pt}+\hspace{-1pt}}$ significantly.
\label{fig:gausspoissinfo}
}
\end{figure}
Fig.\ \ref{fig:gausspoissinfo} shows a comparison of the (S/N)$^2$\ in
$P_\delta$, $P_G$, and $P_{\log\hspace{-1pt}+\hspace{-1pt}}$, for Poisson-sampled density fields
of various resolutions and samplings of the MS. As in Paper I,
covariance matrices are estimated from power spectra measured after
applying 248 sinusoidal weightings \citep{hrs} to the density field.
$P_G$ (dashed) out-informs $P_\delta$ (solid) in all cases, although
the gains are modest at a sampling of $n_{\rm cell}=1/64$.
Interestingly, especially for $P_G$, there appears to be a resolution
at which the gains in information from Gaussianization (i.e.\ the
vertical distance between the dashed and solid curves) peak. This is
not surprising: in the low-resolution limit, the field is already
Gaussian, so Gaussianization has no effect. In the high-resolution
limit, even the highest peaks can only be sampled with one particle,
giving a field of only 0's and 1's, which Gaussianization will only
cause to be multiplied by a constant. Mathematically, the information
without the $P^{\rm \hspace{1pt} \mhyphen sn}_i/(P^{\rm \hspace{1pt} \mhyphen sn}_i+S_i)$ fractions keeps rising, but these
fractions can cause it to turn over, producing a peak. As we find
below in Section \ref{sec:galinfo}, the peak is generally at a
resolution a few times coarser than that where
$P^{\rm \hspace{1pt} \mhyphen sn}(k_{\rm Nyq})\approx S(k_{\rm Nyq})$. And particularly if one is interested
in the power spectrum over the range of scales just smaller than the
linear regime ($0.1 \lesssim k/$(\,$h$\,Mpc$^{-1}$) $\lesssim 0.3$), and not in
scraping information from smaller scales, it is wise to Gaussianize at
this peak resolution or coarser.
\section{Galaxy density fields}
In this section, we address the impacts of discreteness and
redshift-space distortions on the observationally relevant case of a
galaxy density field, in both real and redshift space. Again we use
the MS, using the publicly available galaxy catalog as modelled by
\citet{delucia}. We use three galaxy samples, with R-band
absolute-magnitude cuts $R<-17$, $R<-20$, and $R<-22$; these have
respective mean galaxy number densities of about $n=0.1$, 0.02, and
0.003 ($h^{-1}$\,Mpc)$^{-3}$. We generate galaxy-density grids using NGP
density assignment from these galaxy samples. A tiny fraction of
galaxies exactly overlapped other galaxies in position; in this case,
we keep only the brightest one. In the redshift-space case, before
gridding the galaxies we first displace them along the $x$-axis by
$v_x/H_0$.
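The distant-observer mapping used here is a single displacement with periodic wrapping; the unit conventions (positions in $h^{-1}$\,Mpc, velocities in km\,s$^{-1}$, $H_0 = 100\,h$\,km\,s$^{-1}$\,Mpc$^{-1}$) are our assumption:

```python
import numpy as np

def to_redshift_space(x, vx, box_size, H0=100.0):
    """Displace galaxies along the x-axis by v_x / H0 (plane-parallel,
    distant-observer approximation), wrapping periodically so positions
    stay inside the simulation box."""
    return np.mod(x + vx / H0, box_size)
```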
\subsection{Effects on the mean}
\label{sec:galmean}
\begin{figure}
\begin{center}
\includegraphics[scale=0.43]{figs/galpg.pdf}
\end{center}
\caption{In bold, real- (black) and redshift-space (red) power
spectra of Gaussianized galaxy-density fields, using MS galaxy
samples satisfying $R<-17$ (solid), $R<-20$ (dashed), and $R<-22$
(dotted). Shot noise, estimated to be $P_G(k_{\rm max})$ (see
text), has been subtracted from them; this constant shot noise for
each case is shown in faint lines.
}
\label{fig:galpg}
\end{figure}
Fig.\ \ref{fig:galpg} shows $P_G$ for galaxy density fields using the
$R<-17$, $R<-20$, and $R<-22$ samples, in real and redshift space. In
each case, we have subtracted a shot noise, constant in $k$, and shown
with faint lines. In this subsection we do not show $P_{\log\hspace{-1pt}+\hspace{-1pt}}$ because
its shape in all cases is nearly identical to $P_G$. As in the matter
power spectrum section, in showing $G(\delta)$, we set the variance of
the Gaussians onto which the $\delta$'s are mapped so that the power
spectra line up in the lowest $k$ bin.
In the matter case, the shot noise could be directly measured by
comparing to a very well-sampled matter density field, which we lack
in the galaxy case. We estimate the shot noise using a
prescription motivated by the rough agreement of the shot noise across
different resolutions in Fig.\ \ref{fig:manyshots}. We measure the
power spectrum on a rather high-resolution ($512^3$) grid, and assume
that for all resolutions, the shot noise is a constant with $k$, at an
amplitude of $P_G(k_{\rm max})$. (Here, $k_{\rm max}$ is the highest
$k$ measured in the high-resolution grid. In the cubic box, $k_{\rm
max} = \sqrt{3} k_{\rm Nyquist}$, although we only plot the power
spectra to $k_{\rm Nyquist}$.) This probably overestimates the shot
noise somewhat, but perhaps this is appropriate, at least for $P_G$,
given the slight rise in the shot-noise power at slightly lower $k$
than $k_{\rm max}$ (e.g., at $k\approx 0.8$ \,$h$\,Mpc$^{-1}$\ in
Fig.\ \ref{fig:manyshots}).
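This prescription is straightforward to apply; a minimal sketch (our own interface):

```python
import numpy as np

def subtract_shot(k, p_g):
    """Constant-in-k shot-noise estimate: take the power in the highest
    measured k-bin of a high-resolution grid as the shot amplitude, and
    subtract it from every bin. Returns (corrected spectrum, shot)."""
    shot = p_g[np.argmax(k)]
    return p_g - shot, shot
```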
\begin{figure}
\begin{center}
\includegraphics[scale=0.45]{figs/galpgratio.pdf}
\end{center}
\caption{Nonlinear transfer functions $P_G(k)/P_{\rm linear}$ and
$P_\delta(k)/P_{\rm linear}(k)$ for three MS galaxy samples, in
real and redshift space. The power spectra are measured on
128$^3$ (green) and 256$^3$ (black) grids. $P_G(k)/P_{\rm
linear}$ is shown, both including (dotted), and having
subtracted (solid), a shot-noise estimate (see text). Thin and
bold dashed lines show the same for $P_\delta(k)/P_{\rm
linear}(k)$. }
\label{fig:galpgratio}
\end{figure}
Fig.\ \ref{fig:galpgratio} shows ratios of galaxy power spectra to the
linear power spectrum $P_{\rm init}$, in both real and redshift space.
These could be thought of as transfer functions between $P_{\rm init}$ and $z=0$ galaxy power spectra.
In real space, in the limit of low sampling (in the $R<-22$ sample),
$P_G$ and $P_\delta$ look nearly identical before shot noise is
subtracted. This is not surprising, as $G(\delta)$ differs not much,
after removing a linear scaling, from $\delta$ in this limit. But
after shot noise is subtracted, even at this sampling, $P_G$ seems to
track $P_{\rm init}$ a bit better than $P_\delta$. As the sampling
increases (in the $R<-17$ sample), $P_G$ comes to track $P_{\rm init}$
significantly better, even before shot noise is subtracted.
In redshift space, the story is not as clear. A full analysis of the
effects of redshift-space distortions on $P_G$ is beyond the scope of
this paper. One obvious piece of analysis that is lacking is that
here we merely analyze the angle-averaged redshift-space power
spectrum. But in general, Gaussianization modifies the shape
of galaxy power spectra less in redshift space than in real space.
This is likely because the galaxy-density PDF is already somewhat
Gaussianized because peaks are smeared by fingers of God.
\subsection{Effects on Information Content}
\label{sec:galinfo}
Our method in investigating the galaxy power spectrum Fisher
information, (S/N)$^2$, is essentially the same as in the matter case. The
difference is that we do not have a meaningful measurement of the
exact shot noise, and so we estimate it as in Section
\ref{sec:galmean}. This estimate is conservative (i.e.\ likely an
overestimate) from the point of view of information estimation, so the
information curves for $P_G$ and $P_{\log\hspace{-1pt}+\hspace{-1pt}}$ appearing below could be
considered conservative at high $k$ (i.e.\ perhaps a slight
underestimate). For $P_\delta$, we use the simple $1/n$ factor.
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{figs/infogalres.pdf}
\end{center}
\caption{ Total cumulative signal-to-noise (information), up to
$k_{\rm Nyq}$, for the galaxy real-space $P_G$, $P_{\log\hspace{-1pt}+\hspace{-1pt}}$, and
$P_\delta$, computed at different resolutions and galaxy samples.
Each point comprising the curves is analogous to a (S/N)$^2$ curve
endpoint-symbol in Fig.\ \ref{fig:gausspoissinfo}. The dashed
curves show (S/N)$^2$ in the raw power spectra, while for the solid
curves, the shot-noise effect has been taken into account, as in
Eq.\ (\ref{eqn:infosn}). The vertical dotted lines are at the
resolutions where, at $k_{\rm Nyq}$, the shot-noise and clustering
components of the power spectrum are closest. }
\label{fig:infogalres}
\end{figure}
Fig.\ \ref{fig:infogalres} shows (S/N)$^2$($k_{\rm Nyq}$) curves for the
real-space power spectra $P_\delta$, $P_G$, and $P_{\log\hspace{-1pt}+\hspace{-1pt}}$, for the
three MS galaxy samples, measured on grid sizes varying from $16^3$ to
$256^3$. To reduce clutter, we do not show each cumulative (S/N)$^2$$(k)$
curve, but just the total cumulative (S/N)$^2$($k_{\rm Nyq}$) up to the Nyquist
frequency $k_{\rm Nyq}$. In the matter case
(Fig.\ \ref{fig:gausspoissinfo}), these appear as symbols at
information-curve endpoints. The dashed lines show (S/N)$^2$\ without
taking into account shot noise [i.e.\ without the $P^{\rm \hspace{1pt} \mhyphen sn}_i/(P^{\rm \hspace{1pt} \mhyphen sn}_i + S_i)$
fractions in Eq.\ (\ref{eqn:infosn})]. The solid lines, for which
these fractions are included, are more meaningful.
Typically, as suggested in the matter case in
Fig.\ \ref{fig:gausspoissinfo}, there appears to be a peak in the gain
in (S/N)$^2$($k_{\rm Nyq}$ provided by the Gaussianization transform. The dotted
lines are at the resolution where the shot noise is most comparable to
the clustering signal; here, there is typically a trough in the
cumulative (S/N)$^2$. It appears that the resolution that optimizes the
gain from Gaussianizing is typically a factor of 2--4 coarser than
that. But in all cases, the Gaussianized power spectra out-inform the
standard power spectra.
\begin{figure}
\begin{center}
\includegraphics[scale=0.35]{figs/z_infogalres.pdf}
\end{center}
\caption{The redshift-space version of Fig.\ \ref{fig:infogalres}.}
\label{fig:z_infogalres}
\end{figure}
Fig.\ \ref{fig:z_infogalres} shows the same for redshift-space power
spectra. Here, $P_\delta$ fares better, likely because in redshift
space, fingers of God smear out density peaks and already make the PDF
of $\delta$ more Gaussian. Still, again the power spectra of the
Gaussianized fields out-inform $P_\delta$ in all cases.
\section{Conclusion}
In this paper, we extend our previous analysis of the rejuvenating
effects that PDF Gaussianization has on the matter power spectrum
(Paper I). We include discreteness effects, and look at the
observationally relevant case of the galaxy density field, both in
real and redshift space. As in Paper I, we analyze the Millennium
Simulation.
We find that the conclusions of Paper I remain unchanged in the
presence of discreteness noise, as long as one is looking at scales
where the shot noise (which Gaussianization does increase somewhat) is
negligible. In real space, Gaussianizing the galaxy and discretized
matter density fields does seem to extend the range over which their
power spectra trace the linear power spectrum, well into the nonlinear
regime, until the shot noise becomes comparable to the clustering
signal. In redshift space, Gaussianization also reduces the
small-scale rise in the galaxy power spectrum.
Gaussianization removes or reduces the small-scale rise that one sees
in power spectra relative to the linear power spectrum. In the
context of the halo model \citep[e.g., ][]{cooraysheth}, this rise is
associated with a one-halo term. It is perhaps not surprising that
Gaussianizing would reduce this one-halo term, since haloes are the
most non-linear, non-Gaussian structures in the Universe. However,
the removal of this rise in the galaxy power spectrum without direct
mention of haloes also suggests that explaining galaxy bias with a
non-linear transformation of a Gaussian field \citep[e.g.,
][]{politzerwise,Szalay88}, which has somewhat gone out of fashion,
may be a fruitful area for further study.
In all cases, Gaussianization also improves the inherent Fisher
information, (S/N)$^2$, of the power spectrum. But the degree of help it
provides depends on the resolution of the grid over which the PDF is
Gaussianized. It seems that the grid size providing the most
cumulative added (S/N)$^2$\ is a factor of 2--4 coarser than the resolution
where, at the grid's Nyquist frequency, the shot noise and clustering
components of the power spectrum are of comparable magnitude. That
is, to reap the information gains of Gaussianization on translinear
scales, one should be careful not to use grid cells that are too
small. In redshift space, the gains in (S/N)$^2$\ for galaxy density
fields are somewhat smaller than in real space, if the galaxy sampling
is high enough to resolve fingers of God. This is because fingers of
God smear high density peaks, producing an already more-Gaussian PDF.
While discreteness effects are an essential issue to investigate in
the study of power spectra of Gaussianized fields, a few issues still
remain. With a variable survey selection function, it may be
necessary to apply a Gaussianization transform separately in different
redshift shells. We have made a start at analyzing the effects of
redshift distortions, but much more work can be done in this area. It
also remains to be investigated precisely how faithfully, and to what
scales, the power spectrum of the Gaussianized matter and galaxy
density fields traces the linear power spectrum, for arbitrary
cosmologies. Put more practically, the Fisher-matrix analysis needs
to be extended to particular cosmological parameters. Also, our
assertion that Gaussianization pulls information from higher-point
statistics could do with further quantitative elucidation.
\acknowledgments We thank Andrew Hamilton for useful discussions, in
particular for the ${\log_{+ \hspace{-1pt}}}$ transform idea. The Millennium Simulation
databases used in this paper and the web application providing online
access to them were constructed as part of the activities of the
German Astrophysical Virtual Observatory. MN and AS are grateful for
support from the W.M.\ Keck and the Gordon and Betty Moore
Foundations, and IS from NASA grants NNG06GE71G and NNX10AD53G.
\bibliographystyle{hapj}
\section{Introduction}
We shall use the 1+3 frame formalism \cite{EU,WE} to write down the
evolution equations for spherically symmetric models as a
well-posed system of first order PDEs in 2 variables.
The formalism is particularly well-suited for studying
perfect fluid spherically symmetric
models \cite{SSSS}, and especially
for numerical and qualitative analysis, and is useful in
various applications, such as
structure formation in the spherically symmetric dust
Lema\^{\i}tre-Tolman-Bondi model. This preprint is intended as a
resource paper for researchers working in this field.
\section{Spherically symmetric models}
The metric is:%
\footnote{We use $x$ instead of $r$ because $r$ is used to denote the spatial derivative of $H$.}
\begin{equation}
\label{metric}
ds^2 = - N^2 dt^2 + (e_1{}^1)^{-2} dx^2
+ (e_2{}^2)^{-2} (d\vartheta^2 + \sin^2 \vartheta\, d\varphi^2).
\end{equation}
The Killing vector fields (KVF) are given by \cite{kramer}:
\begin{equation}
\partial_\varphi,\quad
\cos \varphi \ \partial_\vartheta - \sin \varphi \cot \vartheta \ \partial_\varphi,\quad
\sin \varphi \ \partial_\vartheta + \cos \varphi \cot \vartheta \ \partial_\varphi.
\end{equation}
The frame vectors in coordinate form are:
\begin{equation}
\mathbf{e}_0 = N^{-1} \partial_t
,\quad
\mathbf{e}_1 = e_1{}^1 \partial_x
,\quad
\mathbf{e}_2 = e_2{}^2 \partial_\vartheta
,\quad
\mathbf{e}_3 = e_3{}^3 \partial_\varphi,
\end{equation}
where $e_3{}^3 = e_2{}^2 / \sin \vartheta$. $N$, $e_1{}^1$ and $e_2{}^2$ are functions of $t$
and $x$.%
\footnote{Note that the frame vectors $\mathbf{e}_2$ and $\mathbf{e}_3$
tangent to the spheres are not group-invariant -- the commutators
$[\mathbf{e}_2, \partial_\varphi]$ and $[\mathbf{e}_3, \partial_\varphi]$
are zero, but not with the other two Killing vectors.
The frame vectors $\mathbf{e}_0$ and $\mathbf{e}_1$
orthogonal to the spheres are group-invariant.}
This leads to the following restrictions on the kinematic variables:
\begin{equation}
\sigma_{\alpha\beta} = \text{diag}(-2\sigma_+,\sigma_+,\sigma_+),\quad
\omega_{\alpha\beta} =0,\quad
\dot{u}_\alpha =(\dot{u}_1,0,0),
\end{equation}
where
\begin{equation}
\dot{u}_1 = \mathbf{e}_1 \ln N;
\end{equation}
on the spatial commutation functions:
\begin{equation}
a_\alpha = (a_1, a_2, 0),\quad
n_{\alpha\beta} = \left( \begin{array}{ccc}
0 & 0 & n_{13} \\
0 & 0 & 0 \\
n_{13} & 0 & 0 \end{array} \right),
\end{equation}
where%
\footnote{The dependence of $a_2$ and $n_{13}$ on $\vartheta$ is due to the fact
that the chosen orthonormal frame is not group-invariant. However, this
is not a concern, since the $\vartheta$ dependence will be hidden.}
\begin{equation}
a_1 = \mathbf{e}_1 \ln e_2{}^2,\quad
a_2 = n_{13} = - \frac12 e_2{}^2 \cot \vartheta;
\end{equation}
and on the matter components:
\begin{equation}
q_\alpha = (q_1,0,0),\quad
\pi_{\alpha\beta} = \text{diag}(-2\pi_+,\pi_+,\pi_+).
\end{equation}
The frame rotation $\Omega_{\alpha\beta}$ is also zero.
Furthermore, $n_{13}$ only appears in the equations together with
$\mathbf{e}_2 n_{13}$ in the form of the Gauss curvature of the spheres
\begin{equation}
{}^2\!K := 2(\mathbf{e}_2 - 2 n_{13}) n_{13},
\end{equation}
which simplifies to
\begin{equation}
{}^2\!K = (e_2{}^2)^2.
\end{equation}
Thus the dependence on $\vartheta$ is hidden in the equations. We will
also use ${}^2\!K$ in place of $e_2{}^2$.
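The reduction of ${}^2\!K$ to $(e_2{}^2)^2$ can be verified symbolically; a SymPy sketch, treating $e_2{}^2$ as independent of $\vartheta$ as stated above:

```python
import sympy as sp

theta = sp.symbols('vartheta')
e22 = sp.symbols('e22', positive=True)   # e_2{}^2, a function of t and x only

# a_2 = n_13 = -(1/2) e_2{}^2 cot(vartheta)
n13 = -sp.Rational(1, 2) * e22 * sp.cot(theta)
# on scalars, e_2 acts as e_2{}^2 d/dvartheta
e2_n13 = e22 * sp.diff(n13, theta)
# {}^2K := 2 (e_2 - 2 n_13) n_13 = 2 (e_2 n_13 - 2 n_13^2)
K2 = sp.simplify(2 * (e2_n13 - 2 * n13**2))
assert sp.simplify(K2 - e22**2) == 0   # {}^2K = (e_2{}^2)^2
```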
The spatial curvatures also simplify to:
\begin{equation}
{}^3\!S_{\alpha\beta} = \text{diag}(-2{}^3\!S_+,{}^3\!S_+,{}^3\!S_+),
\end{equation}
with ${}^3\!R$ and ${}^3\!S_+$ given by:
\begin{align}
{}^3\!R &= 4 \mathbf{e}_1 a_1 - 6 a_1^2 + 2 {}^2\!K
\\
{}^3\!S_+ &= - \tfrac13 \mathbf{e}_1 a_1 + \tfrac13 {}^2\!K.
\end{align}
The Weyl curvature components simplify to:
\begin{equation}
E_{\alpha\beta} = \text{diag}(-2E_+,E_+,E_+),\quad
H_{\alpha\beta} = 0,
\end{equation}
with $E_+$ given by
\begin{equation}
E_+ = H\sigma_+ + \sigma_+^2 + {}^3\!S_+ - \tfrac12 \pi_+.
\end{equation}
To simplify notation, we will write
\[
{}^2\!K,\ \dot{u}_1,\ a_1
\]
as
\[
K,\ \dot{u},\ a.
\]
To summarize, the essential variables are
\begin{equation}
N,\ e_1{}^1,\ K,\ H,\ \sigma_+,\ a,\ \mu,\ q_1,\ p,\ \pi_+,
\end{equation}
and the auxiliary variables are
\begin{equation}
{}^3\!R,\ {}^3\!S_+,\ \dot{u}.
\end{equation}
So far, there are no evolution equations for $N$, $p$ and $\pi_+$, and
they need to be specified by a temporal gauge (for $N$), and by a fluid
model (for $p$ and $\pi_+$).
The evolution equations are now:%
\footnote{We include a non-negative $\Lambda$.}
\begin{align}
\mathbf{e}_0 e_1{}^1 &= (-H+2\sigma_+) e_1{}^1
\\
\mathbf{e}_0 K &= -2(H+\sigma_+)K
\\
\mathbf{e}_0 H &= - H^2 - 2 \sigma_+^2 + \tfrac13 (\mathbf{e}_1 + \dot{u} - 2 a)\dot{u}
- \tfrac16(\mu+3p) + \tfrac13 \Lambda
\\
\mathbf{e}_0 \sigma_+ &= -3H \sigma_+ - \tfrac13(\mathbf{e}_1 + \dot{u} + a)\dot{u}
- {}^3\!S_+ + \pi_+
\\
\mathbf{e}_0 a &= (-H+2\sigma_+) a - (\mathbf{e}_1 + \dot{u})(H+\sigma_+)
\\
\mathbf{e}_0 \mu &= -3H(\mu+p) - (\mathbf{e}_1+2\dot{u}-2a)q_1 - 6\sigma_+\pi_+
\\
\mathbf{e}_0 q_1 &= (-4H+2\sigma_+)q_1 - \mathbf{e}_1 p -(\mu+p)\dot{u}
+ 2(\mathbf{e}_1+\dot{u}-3a)\pi_+.
\end{align}
The constraint equations are the Gauss and Codazzi constraints, and the
definition of $a$:
\begin{align}
0 &= 3H^2 + \tfrac12 {}^3\!R - 3 \sigma_+^2 - \mu - \Lambda
\\
0 &= -2 \mathbf{e}_1(H+\sigma_+) + 6 a \sigma_+ + q_1
\\
0 &= (\mathbf{e}_1 - 2 a) K,
\end{align}
where the spatial curvatures are given by
\begin{align}
{}^3\!R &= 4 \mathbf{e}_1 a - 6 a^2 + 2 K
\\
{}^3\!S_+ &= - \tfrac13 \mathbf{e}_1 a + \tfrac13 K.
\end{align}
\section{The matter and gauge}
There are various choices for the matter.
\subsection{Perfect fluid}
A perfect fluid is defined by \cite{WE}
\begin{equation}
T_{ab} = \hat{\mu} u_a u_b + \hat{p} ( g_{ab} + u_a u_b),
\end{equation}
with $\hat{p}$ to be specified.
In general, the 4-velocity vector $\mathbf{u}$ of the perfect fluid is not
aligned with the vector $\mathbf{e}_0$ of a chosen temporal gauge. In spherically
symmetric models, $\mathbf{u}$ is allowed to be of the form
\begin{equation}
\mathbf{u} = \Gamma(\mathbf{e}_0 + v \mathbf{e}_1),\quad
\Gamma = (1-v^2)^{-\frac12}.
\end{equation}
We choose a linear equation of state for the perfect fluid:
\begin{equation}
\label{linear_eos}
\hat{p} = (\gamma-1) \hat{\mu},
\end{equation}
where $\gamma$ is a constant satisfying $1 \leq \gamma < 2$.
Then we obtain for the tilted fluid:
\begin{align}
\mu &= \frac{G_+}{1-v^2} \hat{\mu}
\\
p &= \frac{(\gamma-1)(1-v^2) + \frac13\gamma v^2}{G_+} \mu
\\
q_1 &= \frac{\gamma \mu}{G_+} v
\\
\pi_+ &= - \frac13 \frac{\gamma \mu}{G_+} v^2,
\end{align}
where $G_\pm = 1 \pm (\gamma-1)v^2$.
Thus $p$, $q_1$ and $\pi_+$ are given in terms of $\mu$ and $v$.
These are then substituted into the evolution and constraint equations.
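As an illustrative sketch (ours, not from the paper), the algebraic relations above for $p$, $q_1$ and $\pi_+$ in terms of $\mu$ and $v$ can be transcribed directly; note also the identity $\pi_+ = -\tfrac13 q_1 v$, which follows immediately from the formulas:

```python
def tilted_fluid(mu, v, gamma):
    """Tilted perfect-fluid source terms p, q_1, pi_+ in terms of mu and v
    (a direct transcription of the relations above; illustrative only)."""
    Gp = 1 + (gamma - 1) * v**2
    p = ((gamma - 1) * (1 - v**2) + gamma * v**2 / 3) / Gp * mu
    q1 = gamma * mu * v / Gp
    pi_plus = -gamma * mu * v**2 / (3 * Gp)
    return p, q1, pi_plus
```

For $v=0$ these reduce to the comoving perfect fluid, $p=(\gamma-1)\mu$ and $q_1=\pi_+=0$, as expected.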
The evolution equations for $\mu$ and $q_1$ now give (in terms of $\mu$
and $v$)
\begin{align}
\mathbf{e}_0 \mu &= - \frac{\gamma v}{G_+} \mathbf{e}_1 \mu
- \frac{\gamma G_-}{G_+^2} \mu \mathbf{e}_1 v
- \frac{\gamma}{G_+} \mu \left[
(3+v^2)H + 2v(\dot{u}-a) - 2 v^2 \sigma_+ \right]
\\
\label{dv}
\mathbf{e}_0 v &= - \frac{(\gamma-1)(1-v^2)^2}{\gamma G_- \mu} \mathbf{e}_1 \mu
+ \frac{[(3\gamma-4)-(\gamma-1)(4-\gamma)v^2]v}{G_+ G_-} \mathbf{e}_1 v
\notag\\
&\qquad
- \frac{(1-v^2)}{G_-} \left[
-(3\gamma-4)v H - 2 v \sigma_+ + G_- \dot{u} +2(\gamma-1)v^2 a
\right].
\end{align}
\subsection{Scalar fields and anisotropic fluid}
The total energy-momentum tensor of a non-interacting scalar field
$\phi$ with a self-interaction potential $V(\phi)$ is
\begin{equation}
T^{sf}_{ab}=\phi_{;a}\phi_{;b}-g_{ab}(\tfrac12\phi_{;c}\phi^{;c}+V(\phi)),
\end{equation}
where $\phi=\phi(t,x)$.
In particular, exponential potentials have been the subject of much interest and
arise naturally from theories of gravity such as scalar-tensor
theories or string theory \cite{exponential}. Spherically symmetric scalar field
models have been studied in
\cite{Genly}.
A spherically symmetric model can also admit an anisotropic fluid matter source,
in which the energy momentum tensor
has energy density $\mu$, a pressure $p_{||}$ parallel to the
radial unit normal and a perpendicular pressure $p_\perp$.
Fluids with an anisotropic pressure have been studied in the
cosmological context for a number of reasons \cite{Kras:1996}. An
energy-momentum tensor of this form formally arises if the source
consists of two perfect fluids with distinct four-velocities, a
heat conducting viscous fluid and a perfect fluid and a magnetic field;
in addition, a cosmic
string and a global monopole are of the form of an anisotropic fluid.
Most importantly, perhaps, a contribution in the form of an
anisotropic fluid arises when averaging the Einstein equation to
obtain the averaged field equations in spherically symmetric
geometries \cite{ColPel}.
\subsection{Temporal gauge}
The common temporal gauges used in spherically symmetric cosmological models
are the {\em synchronous gauge} and the {\em separable
area gauge}. The synchronous gauge is useful when used with a dust perfect
fluid ($\gamma=1$) because the dust perfect fluid has zero acceleration
($\dot{u}=0$). This gives the Lema\^{\i}tre-Tolman-Bondi models.
It may also be useful when used with a non-dust tilted perfect
fluid. The un-normalized system is well-posed
when the Gauss and Codazzi constraints are used to eliminate the spatial
derivatives. $H$-normalization preserves well-posedness.
The separable area gauge has a special case ($a=0$), called the
{\em timelike area gauge}. The un-normalized system is well-posed when
$\mathbf{e}_0(H+\sigma_+)$ is used, and the Gauss constraint is solved for $H-\sigma_+$.
$(H+\sigma_+)$-normalization preserves well-posedness, while $H$-normalization
does not.
\section{Special cases with extra Killing vectors}
Spherically symmetric models with more than 3 KVF are either
spatially homogeneous or static. Let us discuss the spatially
homogeneous cosmological models.
Spatially homogeneous spherically symmetric models consist of two
disjoint sets of models: the Kantowski-Sachs models and the
Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) models.
Static and self-similar spherically symmetric models have been studied in
\cite{Genly,static,SSSS}.
\subsection{The Kantowski-Sachs models}
The spatially homogeneous spherically symmetric models (which have 4
Killing vectors, the fourth being $\partial_x$) are the so-called
Kantowski-Sachs models \cite{kramer}.
The metric (\ref{metric}) simplifies to
\begin{equation}
ds^2 = - N(t)^2 dt^2 + (e_1{}^1(t))^{-2} dx^2
+ (e_2{}^2(t))^{-2} (d\vartheta^2 + \sin^2 \vartheta\, d\varphi^2);
\end{equation}
i.e., $N$, $e_1{}^1$ and $e_2{}^2$ are now independent of $x$.
The spatial derivative terms $\mathbf{e}_1(\ )$ vanish and as a
result $a=0=\dot{u}$. Since $\dot{u}=0$, the temporal gauge is
synchronous and we can set $N$ to any positive function of $t$.
The Codazzi constraint restricts the source by
\begin{equation}
q_1 = 0.
\end{equation}
$p$ and $\pi_+$ are still unspecified.
The evolution equations for Kantowski-Sachs models with unspecified
source are:
\begin{align}
\mathbf{e}_0 e_1{}^1 &= (-H+2\sigma_+) e_1{}^1
\\
\mathbf{e}_0 K &= -2(H+\sigma_+)K
\\
\mathbf{e}_0 H &= - H^2 - 2 \sigma_+^2
- \tfrac16(\mu+3p) + \tfrac13 \Lambda
\\
\mathbf{e}_0 \sigma_+ &= -3H \sigma_+ - \tfrac13 K + \pi_+
\\
\mathbf{e}_0 \mu &= -3H(\mu+p) - 6\sigma_+\pi_+
\end{align}
The remaining constraint equation is the Gauss constraint:
\begin{equation}
0 = 3H^2 + K - 3 \sigma_+^2 - \mu - \Lambda.
\end{equation}
The spatial curvatures are given by
\begin{align}
{}^3\!R &= 2 K
\\
{}^3\!S_+ &= \tfrac13 K.
\end{align}
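Since the Kantowski--Sachs equations above contain no spatial derivatives, they form a closed ODE system. As an illustrative cross-check (our sketch, not part of the original text), a short RK4 integration with a comoving dust source ($p=\pi_+=0$) should preserve the Gauss constraint, since it propagates as $\mathbf{e}_0 C = -2HC$ for $C := 3H^2 + K - 3\sigma_+^2 - \mu - \Lambda$:

```python
import numpy as np

def ks_rhs(y, Lam=0.0):
    # State [e11, K, H, sigma_plus, mu]; comoving dust source
    # (p = 0, pi_+ = 0, q_1 = 0), lapse N = 1.
    e11, K, H, sp, mu = y
    return np.array([
        (-H + 2 * sp) * e11,
        -2 * (H + sp) * K,
        -H**2 - 2 * sp**2 - mu / 6 + Lam / 3,
        -3 * H * sp - K / 3,
        -3 * H * mu,
    ])

def rk4_step(f, y, dt):
    k1 = f(y); k2 = f(y + dt / 2 * k1)
    k3 = f(y + dt / 2 * k2); k4 = f(y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

def gauss_constraint(y, Lam=0.0):
    e11, K, H, sp, mu = y
    return 3 * H**2 + K - 3 * sp**2 - mu - Lam

# Initial data chosen to satisfy the Gauss constraint exactly.
H0, sp0, K0 = 0.5, 0.1, 0.2
mu0 = 3 * H0**2 + K0 - 3 * sp0**2
y = np.array([1.0, K0, H0, sp0, mu0])
for _ in range(1000):
    y = rk4_step(ks_rhs, y, 1e-3)
```

The constraint residual stays at the level of the integrator's truncation error, consistent with the exact propagation law.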
\subsection{The FLRW models}
Spatially homogeneous spherically symmetric models, that are not
Kantowski-Sachs, are the
Friedmann-Lema\^{\i}tre-Robertson-Walker (FLRW) models (with or without
$\Lambda$).
The source must be of the form of a comoving perfect fluid (or vacuum).
The metric has the form
\begin{equation}
ds^2 = - N(t)^2 dt^2 + \ell^2(t) dx^2
+ \ell^2(t) f^2(x) (d\vartheta^2 + \sin^2 \vartheta\, d\varphi^2),
\end{equation}
with
\begin{equation}
\label{fx_FLRW}
f(x) = \sin x,\ x,\ \sinh x,
\end{equation}
for closed, flat, and open FLRW models respectively.
The frame coefficients are given by $e_1{}^1 = \ell^{-1}(t)$ and $e_2{}^2 =
\ell^{-1}(t) f^{-1}(x)$. Then $\sigma_+ = \frac13\mathbf{e}_0 \ln(e_1{}^1/e_2{}^2)$
vanishes.
$N=N(t)$ implies that $\dot{u}=0$; i.e.,
the temporal gauge is synchronous, and we can set $N$ to any positive
function of $t$.
The Hubble scalar $H = \mathbf{e}_0 \ln \ell(t)$ is also a function of
$t$.
\footnote{We shall not list the KVFs as they are complicated in spherically
symmetric coordinates and not needed here.}
For the spatial curvatures, ${}^3\!S_+$ does vanish because (\ref{fx_FLRW})
implies $\mathbf{e}_1 a = K$,%
\footnote{That $\mathbf{e}_1 a$ does not vanish is consistent with
the frame vector $\mathbf{e}_1$ not being group-invariant.}
while ${}^3\!R$ simplifies to
\begin{equation}
{}^3\!R = \frac{6k}{\ell^2},\quad
k = 1,0,-1,
\end{equation}
for closed, flat, and open FLRW respectively.
The evolution equation for $\sigma_+$ and the Codazzi constraint then imply
that $\pi_+ =0= q_1$; i.e., the source is a comoving perfect fluid, with
unspecified pressure $p$.
The evolution equations simplify to:
\begin{align}
\mathbf{e}_0 \ell &= H \ell
\\
\mathbf{e}_0 H &= - H^2 - \tfrac16(\mu+3p) + \tfrac13 \Lambda
\\
\mathbf{e}_0 \mu &= -3H(\mu+p)
\end{align}
The Gauss constraint simplifies to
\begin{equation}
0 = 3H^2 + \frac{3k}{\ell^2} - \mu - \Lambda,\quad k=1,0,-1.
\end{equation}
Note that $\mu$ and $p$ also depend on $t$ only, and that $p$ is not
specified yet.
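As a sanity check (ours, not part of the original text), the flat dust case with $N=1$ and $\Lambda=0$ has the exact solution $H=2/(3t)$, $\mu=4/(3t^2)$, $\ell\propto t^{2/3}$; integrating the system numerically from exact initial data at $t=1$ reproduces it and preserves the Gauss constraint $3H^2=\mu$:

```python
import numpy as np

def flrw_rhs(y):
    # State [ell, H, mu]; flat (k = 0) dust (p = 0) FLRW, N = 1, Lambda = 0.
    ell, H, mu = y
    return np.array([H * ell, -H**2 - mu / 6, -3 * H * mu])

def rk4_step(f, y, dt):
    k1 = f(y); k2 = f(y + dt / 2 * k1)
    k3 = f(y + dt / 2 * k2); k4 = f(y + dt * k3)
    return y + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

# Start on the exact solution at t = 1 and integrate to t = 2.
y = np.array([1.0, 2 / 3, 4 / 3])
for _ in range(1000):
    y = rk4_step(flrw_rhs, y, 1e-3)
```

At $t=2$ the numerical solution gives $H\approx 1/3$ and $\mu\approx 1/3$, matching the exact solution to integrator accuracy.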
The vacuum cases are the de Sitter model ($\Lambda>0$, $k=0$),
the model with $\Lambda>0$, $k=1$,
the model with $\Lambda>0$, $k=-1$,
the Milne model ($\Lambda=0$, $k=-1$),
and the Minkowski spacetime ($\Lambda=0$, $k=0$), which is also static.
The model with $\Lambda>0$, $k=1$ is past asymptotic to the de Sitter
model with negative $H$ and is future asymptotic to the de Sitter model
with positive $H$.
The model with $\Lambda>0$, $k=-1$ (and positive $H$) is past asymptotic
to the Milne model and is future asymptotic to the de Sitter model with
positive $H$.
\section{Synchronous gauge, tilted perfect fluid}
We shall investigate perfect fluid models with linear equation of state
using the synchronous gauge.
We shall simplify the equations step-by-step, by choosing the synchronous
gauge, eliminating spatial derivatives, and specifying the perfect fluid.
The equations in synchronous gauge ($\dot{u}=0$) are:
\begin{align}
\mathbf{e}_0 e_1{}^1 &= (-H+2\sigma_+) e_1{}^1
\\
\mathbf{e}_0 K &= -2(H+\sigma_+)K
\\
\mathbf{e}_0 H &= - H^2 - 2 \sigma_+^2 - \tfrac16(\mu+3p) + \tfrac13 \Lambda
\\
\label{dsp_synch}
\mathbf{e}_0 \sigma_+ &= -3H \sigma_+ - {}^3\!S_+ + \pi_+
\\
\label{da_synch}
\mathbf{e}_0 a &= (-H+2\sigma_+) a - \mathbf{e}_1(H+\sigma_+)
\\
\mathbf{e}_0 \mu &= -3H(\mu+p) - (\mathbf{e}_1-2a)q_1 - 6\sigma_+\pi_+
\\
\mathbf{e}_0 q_1 &= (-4H+2\sigma_+)q_1 - \mathbf{e}_1 p + 2(\mathbf{e}_1-3a)\pi_+.
\end{align}
The constraint equations are:
\begin{align}
\label{CG_synch}
0 &= 3H^2 + \tfrac12 {}^3\!R - 3 \sigma_+^2 - \mu - \Lambda
\\
\label{CC_synch}
0 &= -2 \mathbf{e}_1(H+\sigma_+) + 6 a \sigma_+ + q_1
\\
0 &= (\mathbf{e}_1 - 2 a) K.
\end{align}
where the spatial curvatures are given by
\begin{align}
{}^3\!R &= 4 \mathbf{e}_1 a - 6 a^2 + 2 K
\\
{}^3\!S_+ &= - \tfrac13 \mathbf{e}_1 a + \tfrac13 K.
\end{align}
The evolution equations (\ref{dsp_synch}) and (\ref{da_synch}) contain
spatial derivative terms, but these can be replaced using the
constraints (\ref{CG_synch}) and (\ref{CC_synch}):
\begin{align}
\mathbf{e}_1 a &= - \tfrac32 H^2 + \tfrac32 a^2 - \tfrac12 K
+ \tfrac32 \sigma_+^2 + \tfrac12 \mu + \tfrac12 \Lambda
\\
\mathbf{e}_1 (H+\sigma_+) &= 3a\sigma_+ + \tfrac12 q_1.
\end{align}
As a result, equations (\ref{dsp_synch}) and (\ref{da_synch}) now read:
\begin{align}
\mathbf{e}_0 \sigma_+ &= -3H \sigma_+ - \tfrac12 H^2 + \tfrac12 a^2 - \tfrac12 K
+ \tfrac12 \sigma_+^2 + \tfrac16 \mu + \tfrac16 \Lambda
+ \pi_+
\\
\mathbf{e}_0 a &= -(H+\sigma_+) a - \tfrac12 q_1.
\end{align}
The benefit here is that the evolution equations for the geometric part
are now free of spatial derivative terms.
The spatial curvatures are given by
\begin{align}
{}^3\!R &= - 6 H^2 + 6 \sigma_+^2 + 2 \mu + 2 \Lambda
\\
{}^3\!S_+ &= \tfrac12 H^2 - \tfrac12 a^2 + \tfrac12 K
- \tfrac12 \sigma_+^2 - \tfrac16 \mu - \tfrac16 \Lambda.
\end{align}
Lastly, we specify the perfect fluid with linear equation of state. From
equations (\ref{linear_eos})--(\ref{dv}) and the above equations, the
final form of the system is:
\begin{align}
\label{dex_pfs}
\mathbf{e}_0 e_1{}^1 &= (-H+2\sigma_+) e_1{}^1
\\
\mathbf{e}_0 K &= -2(H+\sigma_+)K
\\
\mathbf{e}_0 H &= - H^2 - 2 \sigma_+^2
- \frac{(3\gamma-2+(2-\gamma)v^2)}{6G_+} \mu
+ \tfrac13 \Lambda
\\
\mathbf{e}_0 \sigma_+ &= -3H \sigma_+ - \tfrac12 H^2 + \tfrac12 a^2 - \tfrac12 K
+ \tfrac12 \sigma_+^2
+ \frac{(1-(\gamma+1)v^2)}{6G_+} \mu
+ \tfrac16 \Lambda
\\
\mathbf{e}_0 a &= -(H+\sigma_+) a - \frac{\gamma v}{2G_+} \mu
\\
\mathbf{e}_0 \mu &+ \frac{\gamma v}{G_+} \mathbf{e}_1 \mu
+ \frac{\gamma G_-}{G_+^2} \mu \mathbf{e}_1 v
\notag\\
&=
- \frac{\gamma}{G_+} \mu \left[
(3+v^2)H - 2va - 2 v^2 \sigma_+ \right]
\\
\label{dv_pfs}
\mathbf{e}_0 v &+ \frac{(\gamma-1)(1-v^2)^2}{\gamma G_- \mu} \mathbf{e}_1 \mu
- \frac{[(3\gamma-4)-(\gamma-1)(4-\gamma)v^2]v}{G_+ G_-} \mathbf{e}_1 v
\notag\\
&=
- \frac{(1-v^2)}{G_-} \left[
-(3\gamma-4)H - 2 \sigma_+ +2(\gamma-1)v a
\right] v,
\end{align}
where $G_\pm = 1 \pm (\gamma-1)v^2$.
The constraints are:
\begin{align}
\mathbf{e}_1 a &= - \tfrac32 H^2 + \tfrac32 a^2 - \tfrac12 K
+ \tfrac32 \sigma_+^2 + \tfrac12 \mu + \tfrac12 \Lambda
\\
\mathbf{e}_1 (H+\sigma_+) &= 3a\sigma_+ + \frac{\gamma v}{2G_+} \mu
\\
\mathbf{e}_1 K &= 2 a K.
\end{align}
The spatial curvatures are given by
\begin{align}
{}^3\!R &= - 6 H^2 + 6 \sigma_+^2 + 2 \mu + 2 \Lambda
\\
{}^3\!S_+ &= \tfrac12 H^2 - \tfrac12 a^2 + \tfrac12 K
- \tfrac12 \sigma_+^2 - \tfrac16 \mu - \tfrac16 \Lambda.
\end{align}
\subsection{Well-posedness}
We now show that the system is well-posed for $\gamma > 1$ (the dust case
$\gamma = 1$ is treated in the next section).
The coefficient matrix for the spatial derivative terms is:
\footnote{Strictly speaking, we should also include the factor $e_1{}^1/N$ in the
matrix, but the result on well-posedness is the same.}
\begin{equation}
\left(
\begin{array}{cc}
\dfrac{\gamma v}{G_+} & \dfrac{\gamma G_-}{G_+^2} \mu \\
\dfrac{(\gamma-1)(1-v^2)^2}{\gamma G_- \mu} &
- \dfrac{[(3\gamma-4)-(\gamma-1)(4-\gamma)v^2]v}{G_+ G_-}
\end{array}
\right).
\end{equation}
Its eigenvalues are
\begin{equation}
\frac{(2-\gamma)v \pm \sqrt{\gamma-1}(1-v^2)}{G_-},
\end{equation}
with corresponding eigenvectors (for example)
\begin{equation}
\left(
\begin{array}{c}
1 \\
\dfrac{\sqrt{\gamma-1}(1-v^2)G_+}{\mu\gamma(1+\sqrt{\gamma-1}v)^2}
\end{array}
\right)
,\quad
\left(
\begin{array}{c}
1 \\
-\dfrac{\sqrt{\gamma-1}(1-v^2)G_+}{\mu\gamma(1-\sqrt{\gamma-1}v)^2}
\end{array}
\right).
\end{equation}
The matrix is diagonalizable for $\gamma > 1$, with
$c_s = \sqrt{\gamma-1}$ being the speed of sound in the perfect fluid.
The system (\ref{dex_pfs})--(\ref{dv_pfs}) is thus well-posed for $\gamma
> 1$. For $\gamma < 1$ the system is elliptic and not well-posed.
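These closed-form eigenvalues can be checked numerically (an illustrative verification, ours, not from the original text): build the coefficient matrix at sample values of $(\gamma, v, \mu)$ and compare its spectrum with the expression above:

```python
import numpy as np

def characteristic_matrix(gamma, v, mu):
    # Coefficient matrix of the e_1 derivative terms in the (mu, v) system.
    Gp = 1 + (gamma - 1) * v**2
    Gm = 1 - (gamma - 1) * v**2
    return np.array([
        [gamma * v / Gp, gamma * Gm * mu / Gp**2],
        [(gamma - 1) * (1 - v**2)**2 / (gamma * Gm * mu),
         -((3 * gamma - 4) - (gamma - 1) * (4 - gamma) * v**2) * v / (Gp * Gm)],
    ])

def predicted_eigenvalues(gamma, v):
    # ((2 - gamma) v +/- sqrt(gamma - 1) (1 - v^2)) / G_-
    Gm = 1 - (gamma - 1) * v**2
    cs = np.sqrt(gamma - 1)  # speed of sound
    return np.array([((2 - gamma) * v - cs * (1 - v**2)) / Gm,
                     ((2 - gamma) * v + cs * (1 - v**2)) / Gm])

gamma, v, mu = 1.5, 0.3, 2.0
eigs = np.sort(np.linalg.eigvals(characteristic_matrix(gamma, v, mu)).real)
pred = np.sort(predicted_eigenvalues(gamma, v))
```

For $\gamma > 1$ and $|v| < 1$ the two eigenvalues are real and distinct, confirming hyperbolicity at the sampled point.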
\section{Irrotational dust (Lema\^{\i}tre-Tolman-Bondi model)}
The Lema\^{\i}tre-Tolman-Bondi (LTB) model
\cite{lemaitre,Kras:1996} is the spherically symmetric dust
solution of the Einstein equations which can be regarded as a
generalization of the FLRW universe. LTB metrics with dust source
and a comoving and geodesic 4-velocity constitute a well known
class of exact solutions of Einstein's field equations
\cite{kramer, Kras:1996}.
For the dust case $\gamma=1$ with zero vorticity, we can use the
freedom within the synchronous gauge to set $v=0$, so that the
synchronous frame is comoving with the perfect fluid and we obtain:
\begin{align}
\mathbf{e}_0 e_1{}^1 &= (-H+2\sigma_+) e_1{}^1
\\
\mathbf{e}_0 K &= -2(H+\sigma_+)K
\\
\mathbf{e}_0 H &= - H^2 - 2 \sigma_+^2
- \tfrac16 \mu
+ \tfrac13 \Lambda
\\
\mathbf{e}_0 \sigma_+ &= -3H \sigma_+ - \tfrac12 H^2 + \tfrac12 a^2 - \tfrac12 K
+ \tfrac12 \sigma_+^2
+ \tfrac16 \mu
+ \tfrac16 \Lambda
\\
\mathbf{e}_0 a &= -(H+\sigma_+) a
\\
\mathbf{e}_0 \mu &= - 3H\mu.
\end{align}
Notice that the system is completely free of spatial derivatives, and is
thus well-posed.
The constraints are:
\begin{align}
\mathbf{e}_1 a &= - \tfrac32 H^2 + \tfrac32 a^2 - \tfrac12 K
+ \tfrac32 \sigma_+^2 + \tfrac12 \mu + \tfrac12 \Lambda
\\
\mathbf{e}_1 (H+\sigma_+) &= 3a\sigma_+
\\
\mathbf{e}_1 K &= 2 a K.
\end{align}
The spatial curvatures are given by
\begin{align}
{}^3\!R &= - 6 H^2 + 6 \sigma_+^2 + 2 \mu + 2 \Lambda
\\
{}^3\!S_+ &= \tfrac12 H^2 - \tfrac12 a^2 + \tfrac12 K
- \tfrac12 \sigma_+^2 - \tfrac16 \mu - \tfrac16 \Lambda.
\end{align}
Further simplifications with $a$ and $K$ are possible.
A suitable normalization factor is $\beta=H+\sigma_+$,
introduced for $G_2$ models in \cite{WE,EU}, and used for the LTB model in \cite{Sussman}.
With this normalization, it can be shown that at late times
the LTB solutions that are ever-expanding will tend to the isotropic and homogeneous Milne solution, with the following rates:
\begin{gather}
\beta \sim e^{-\tau},\qquad \frac{a^2-K}{\beta^2} - 1 \sim e^{-\tau},
\\
\frac{\sigma_+}{\beta} \sim \tau e^{-\tau},\qquad \frac{\mu}{3\beta^2} \sim e^{-\tau}.
\end{gather}
That is, the rates are the same for all dust observers,
although the multiplicative ``constants'' depend on the radius.
This dependency reveals itself in the leading order of ratios of variables such as the density contrast.
\subsection{Structure formation}
Structure formation in the LTB model has been studied in \cite{KH}.
More recently, the LTB inhomogeneous dust solutions have been examined
numerically and qualitatively as a 3--dimensional dynamical
system, in terms of an average
density parameter, $\langle\Omega\rangle$ (which behaves dynamically like
the usual
$\Omega$ in FLRW dust spacetimes), and a
shear parameter and a density contrast function which convey the
effects of inhomogeneities \cite{Sussman}. The evolution equations for the
averaged variables are formally identical to those of an
equivalent FLRW cosmology, and are an alternative set of evolution equations
to those presented above. In particular, the phase-space evolution of the
structure formation scenario was examined in \cite{Sussman}.
\section{Introduction}
Over the past decade, memes have become a ubiquitous phenomenon on the internet. Memes come in several formats, such as images and videos, and can take a combined form of text and image. Due to their vast popularity, different people perceive memes distinctively. Recent studies have highlighted the usage of memes as a mode of communication across social media platforms. The presence of text in images makes it harder to decode the sentiment or any other characteristic \cite{avvaru-vobilisetty-2020-bert}. Regardless of the type of the meme, they may be changed and recreated over social media networks, and tend to be used in contexts involving sensitive topics such as politics and casteism to add a sarcastic perspective \cite{8354676,Nave2018TalkingIP}.
\begin{figure*}[!h]
\centering
\includegraphics[width=\textwidth,height=8cm]{vit_bert.pdf}
\caption{System Architecture \cite{dosovitskiy2021an,devlin-etal-2019-bert}} \label{fig1}
\end{figure*}
Due to their multimodality, conscientious analysis of memes can shed light on societal factors, their implications on culture, and the values promoted by them \cite{Milner2013FCJ156HT}. In addition, analyzing the intended emotion of a meme could help us recognize fake news and offensive content that is being propagated using internet memes as a medium, thus helping in eradicating the spread of misinformation and hatred to the large user base of social media \cite{chakravarthi-etal-2020-corpus,chakravarthi-etal-2020-sentiment}. It is plausible that memes will become an integral part of online communication for most people, as they are already used to understand racial and gender discourse on social media platforms such as Reddit \cite{Milner2013FCJ156HT,nikhilhope,nikhiloffen}. One approach to moderating such content is manually monitoring user-generated content. But due to the amount of data being generated on the internet every day, it would be ideal to develop automated systems to moderate it \cite{kumar-etal-2018-benchmarking,adeepoffensive,adeephope,10.1145/3441501.3441515,10.1145/3441501.3441517}.
Consider countries with huge populations such as India, where several memes are directed towards targeted communities. To address the issue of identifying whether a given meme trolls a person's sentiments, a dataset of memes suspected to troll a particular community was created. We participate in the shared task on meme classification based on the troll classification of Tamil Memes \cite{suryawanshi-etal-2020-dataset}. Tamil (ISO 639-3: tam) is a language spoken in South Asia \cite{chakravarthi2020leveraging}. The earliest inscription in India, dated from 580 BCE, was the Tamil inscription in pottery, followed by the Asoka inscriptions in Prakrit, Greek, and Aramaic dating from 260 BCE. The earliest known inscriptions in Sanskrit are from the 1st century BCE. Tamil is the official language of Tamil Nadu, India,
as well as of Singapore and Sri Lanka \cite{chakravarthi-etal-2018-improving,chakravarthi-etal-2019-wordnet}. The task primarily consists of identifying whether a meme is a \emph{troll} or a \emph{non-troll} \cite{dravidiantrollmeme-eacl}. We use the images and the captions that are provided to achieve the most efficient model to classify the memes. We use a combination of the Vision Transformer (ViT) \cite{dosovitskiy2021an} and mBERT \cite{pires-etal-2019-multilingual} over other pretrained models used for image classification, as described in \cite{Venkatesh2020TransferLB,10.3844/jcssp.2021.44.54}.
\section{Related Work}
\label{related work}
Internet memes have been a subject of interest for both Computer Vision and Natural Language Processing researchers. The types of memes being used illustrate the context of discussions on social media platforms. People use memes to express themselves and, in the making, showcase their stance on a certain social issue, be it in acknowledgment or rejection of the issue \cite{8354676,boinepelli-etal-2020-sis,Gal2016ItGB}. There exist several reasons for the spread of memes, including novelty, simplicity, and coherence, as well as emotional attachment and the ability to carry different meanings depending on how a person perceives them \cite{Nave2018TalkingIP,stephens2018ryan,Chielens2002OperationalizationOM}. \citeauthor{Hu_2018} developed a multimodal sentiment analysis system, a deep neural network that combines both visual analysis and text analysis to predict the emotional state of the user, using Tumblr posts.
\section{Data}
\label{data}
We use the Troll Classification dataset of Tamil Memes \cite{suryawanshi-etal-2020-dataset}. It consists of 2,699 memes, of which most of the images have text embedded within them. We are also provided with captions for all images. The distribution is shown in Table \ref{tab1}.
\begin{table}[!h]
\begin{center}
\renewcommand{\tabcolsep}{3mm}
\begin{tabular}{|l|r|r|r|}
\hline
Class & Train & Validation & Test\\
\hline
Troll & 1,154 & 128 & 395\\
Non-Troll & 917 & 101 & 272\\
\hline
Total & 2,071 & 229 & 667 \\
\hline
\end{tabular}
\end{center}
\caption{Dataset Distribution}\label{tab1}
\end{table}
\section{System Description}
\label{system description}
Multimodal deep learning is a robust and efficient way of addressing the main goals of artificial intelligence: it integrates and combines multiple communicative modalities to obtain results which usually improve on those of the single models trained separately. As deep learning models tend to extract features on their own, this objective can be readily achieved with the help of neural networks.
\begin{table*}[!h]
\begin{center}
\renewcommand{\tabcolsep}{3mm}
\begin{tabular}{l|r|r|r|r}
\hline
& Precision & Recall & F1-Score & Support\\
\hline
Non-Troll & 0.96 & 0.95 & 0.96 & 101\\
Troll & 0.96 & 0.97 & 0.96 & 128\\
\hline
Accuracy & & & 0.96 & 229\\
Macro Avg & 0.96 & 0.96 & 0.96 & 229 \\
Weighted Avg & 0.96 & 0.96 & 0.96 & 229\\
\hline
\end{tabular}
\end{center}
\caption{Classification report of ViT to images of validation set}\label{tab2}
\end{table*}
\begin{table*}[!h]
\begin{center}
\renewcommand{\tabcolsep}{3mm}
\begin{tabular}{l|r|r|r|r}
\hline
& Precision & Recall & F1-Score & Support\\
\hline
Non-Troll & 0.87 & 0.99 & 0.93 & 101\\
Troll & 0.99 & 0.88 & 0.93 & 128\\
\hline
Accuracy & & & 0.93 & 229\\
Macro Avg & 0.93 & 0.94 & 0.93 & 229 \\
Weighted Avg & 0.94 & 0.93 & 0.93 & 229\\
\hline
\end{tabular}
\end{center}
\caption{Classification report when memes are classified based on captions on validation set}\label{tab3}
\end{table*}
Given the images of Tamil memes, along with the text embedded on the images, scrutinizing images and texts independently and then picking out relevant information for further processing plays a critical role in our system. At the end of training, the model has to output a single value stating whether the given meme is Troll or Non-Troll. A specialty of our model is that it uses neither Convolutional Neural Networks (CNN) nor Recurrent Neural Networks (RNN). As the title of the paper points out, the model tries to gain more attention towards the salient portions of text and images, and the proposed solution makes an effort to convey the importance of attention and its relation to the performance of the model. The model put forward for classification combines the \textbf{Vision Transformer} \cite{dosovitskiy2021an} for images and \textbf{Bidirectional Encoder Representations from Transformers (BERT)} \cite{devlin-etal-2019-bert} for the captions of memes. This corresponds to a \emph{transformer-transformer} architecture, as shown in Fig \ref{fig1}.
\subsection{Vision Transformer (ViT)}
\label{ViT}
The architecture of ViT is analogous to the transformer used for Natural Language Processing (NLP) tasks. NLP transformers use self-attention, which is a highly cost-inefficient approach when applied directly to pixels; here, attention is computed globally over image patches. Keeping the analogy with sentences, instead of 1D token embeddings as input, ViT receives a sequence of flattened 2D patches. If $H$ and $W$ are the height and width of the image and $(P, P)$ is the resolution of each patch, \(N=HW/P^2\) is the effective sequence length for the transformer \cite{dosovitskiy2021an}. The patches are projected linearly and multiplied with an embedding matrix to form patch embeddings, which, along with position embeddings, are sent through the transformer. Similar to BERT's [CLS] token, a classification token is prepended to the patch embeddings. The transformer consists of an encoder block with alternating layers of multiheaded self-attention, which generate attention for specific regions of the images. Layer normalization and residual connections are applied as in the original NLP transformer.
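To make the sequence-length formula concrete, here is a small sketch (ours, in numpy; the actual model uses PyTorch) extracting non-overlapping $16 \times 16$ patches from a $224 \times 224$ RGB image, giving $N = HW/P^2 = 196$ patches of dimension $P^2 \cdot 3 = 768$:

```python
import numpy as np

H = W = 224   # image resolution after preprocessing
P = 16        # patch resolution
img = np.random.rand(H, W, 3)

# Split into an (H/P) x (W/P) grid of P x P patches, then flatten each patch.
patches = (img.reshape(H // P, P, W // P, P, 3)
              .transpose(0, 2, 1, 3, 4)
              .reshape(-1, P * P * 3))
```

Each of the 196 rows is then mapped by a learned linear projection to the transformer's embedding dimension.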
\begin{table*}[!h]
\centering
\begin{tabular}{l|r|r|r|r}
\hline
& Precision & Recall & F1-Score & Support\\
\hline
Non-Troll & 0.60 & 0.03 & 0.06 & 272\\
Troll & 0.60 & 0.98 & 0.74 & 395\\
\hline
Accuracy & & & 0.60 & 667\\
Macro Avg & 0.60 & 0.51 & 0.40 & 667 \\
Weighted Avg & 0.60 & 0.60 & 0.47 & 667\\
\hline
\end{tabular}
\caption{Classification report of our system on the test set}\label{tab4}
\end{table*}
\subsection{BERT}
\label{bert}
The success of fine-tuning a pretrained model in computer vision prompted researchers to do the same in Natural Language Processing, with the objective of developing a model which can be fine-tuned for NLP-related tasks. \textbf{B}idirectional \textbf{E}ncoder \textbf{R}epresentations from \textbf{T}ransformers \textbf{(BERT)} \cite{devlin-etal-2019-bert} is a language representation model which was trained on the Wikipedia corpus. The training phase had two tasks. The first was Masked Language Modelling (MLM), where the sentence had random masks in it and the model had to predict the masked words. The second was Next Sentence Prediction (NSP), where the model had to predict whether the second sentence is the continuation of the first one.
The input to the transformer is the sum of the token, segmentation, and positional embeddings. As the name suggests, the model is jointly conditioned on both left and right contexts to extract meaning. BERT is comparable to the transformer encoder block of \cite{NIPS2017_3f5ee243}. The NSP task matches our classification objective: during NSP, two sentences separated by a [SEP] token and preceded by a [CLS] token are fed in, and the output at the [CLS] token is used to determine the required class. Here, the input is only a single sentence with tokens, and the model is fine-tuned as necessary. For this system, \textit{bert-base-multilingual-cased} (L=12, H=768, A=12, Total Parameters=179M) was used. This model is pretrained on the largest available Wikipedia dumps of the top 104 languages with the MLM objective, and it is case sensitive \cite{pires-etal-2019-multilingual}.
\section{Experiments}
\label{experiments}
All models were implemented using PyTorch version 1.5.0 in a Google Colaboratory environment. The early stages of this model include the preprocessing of images. The dataset had pictures of various resolutions, which had to be made uniform. The images were resized to 256 $\times$ 256 pixels. Most of the images had text on the top and bottom; text in the images was considered noise for classification, so a center crop was performed on all images: the borders were removed and images of size 224 $\times$ 224 were produced. Finally, the images were made ready as input to the transformer by normalizing the RGB channels with means 0.485, 0.456, 0.406 and standard deviations 0.229, 0.224, 0.225 respectively. No augmentations were made, in order to preserve the meaning of the images. The transformer was originally trained on the ImageNet dataset, where it had achieved remarkable results, and the trained weights are transferred to this downstream task. The base version of ViT is fine-tuned with its default hyperparameters: a patch size of 16, an embedding dimension of 768, 12 layers, 12 attention heads, and a dropout rate of 0.1. The head of the Vision Transformer, which outputs 1000 classes, is replaced by a linear layer of 128 neurons.
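The crop-and-normalize steps above can be sketched in numpy (an illustrative version of the pipeline description; the actual implementation uses PyTorch transforms, and the interpolation-based resize to 256 is omitted here):

```python
import numpy as np

MEAN = np.array([0.485, 0.456, 0.406])
STD = np.array([0.229, 0.224, 0.225])

def center_crop(img, size=224):
    """Crop the central size x size window, discarding the borders
    (where the meme text usually sits)."""
    h, w = img.shape[:2]
    top, left = (h - size) // 2, (w - size) // 2
    return img[top:top + size, left:left + size]

def normalize(img):
    """Channel-wise normalization with the ImageNet statistics above."""
    return (img - MEAN) / STD

img = np.random.rand(256, 256, 3)   # a resized image with values in [0, 1]
out = normalize(center_crop(img))
```

Cropping a 256-pixel image to 224 removes a 16-pixel border on each side.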
\captionsetup[sub]{labelformat=simple}
\renewcommand{\thesubfigure}{(\alph{subfigure})}
\begin{figure}[!hbt]
\centering
\begin{subfigure}[b]{0.9\linewidth}
\centering
\includegraphics[width=\textwidth]{val_cm.pdf}
\caption{Validation set}
\label{fig:1-1}
\end{subfigure}
\\
\begin{subfigure}[b]{0.9\linewidth}
\centering
\includegraphics[width=\textwidth]{test_cm.pdf}
\caption{ Test set}
\label{fig:1-2}
\end{subfigure}
\caption{Confusion Matrix}
\label{fig:1}
\end{figure}
The texts were also preprocessed by removing stopwords, special characters, and punctuation. Texts need to be tokenized before being fed into the BERT configuration. The resulting pooled output from the multilingual BERT model is likewise passed through a linear layer of 128 neurons.
The two layers obtained from the transformers are merged to form a single layer with 256 neurons. This is passed through the ReLU activation function and a dropout layer to obtain one final neuron, which determines the class as Troll or Non-Troll. A learning rate of \(2 \times 10^{-5}\) was used with a batch size of 16. The maximum length of the captions was truncated to 128 tokens, as memes usually do not contain very long sentences. Training was done for 4 epochs with a linear schedule with warmup. To our surprise, the model learned very rapidly and made good progress on the validation set, which mimicked the train set. It was also observed that merging the outputs of two models from different domains did not harm training; moreover, it helped in getting better results.
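The fusion described above can be sketched as a single forward pass (an illustrative numpy version, ours, with random weights purely for demonstration; the real head is trained in PyTorch, and dropout is active only during training):

```python
import numpy as np

rng = np.random.default_rng(0)

batch = 4
vit_feat = rng.standard_normal((batch, 128))    # 128-d projection of ViT output
bert_feat = rng.standard_normal((batch, 128))   # 128-d projection of mBERT pooled output

W_out = 0.01 * rng.standard_normal((256, 1))    # final single-neuron layer
b_out = np.zeros(1)

fused = np.concatenate([vit_feat, bert_feat], axis=1)   # (batch, 256)
hidden = np.maximum(fused, 0.0)                         # ReLU
logits = hidden @ W_out + b_out                         # (batch, 1)
probs = 1.0 / (1.0 + np.exp(-logits))                   # P(Troll)
```

A sigmoid on the single output neuron yields the Troll/Non-Troll decision via thresholding at 0.5.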
\section{Results}
\label{results}
We achieve an overall F1-score of 0.96 when classifying using images alone with ViT, as shown in Table \ref{tab2}. Notably, using mBERT to classify memes solely from the captions achieves an F1-score of 0.93, as shown in Table \ref{tab3}. While these results already compare favorably with the baseline score of 0.59 reported in the dataset paper, we expected the model to learn better if the representations of ViT and mBERT were concatenated and fed into a linear layer. Indeed, the combined model achieves a perfect 1.00 weighted F1-score on the validation set. We believe the preprocessing of the images was a major factor in this validation performance. This argument is supported by our system's poor performance on the test set, whose images were not coherent with the training data in terms of the positioning of texts, as shown in Table \ref{tab4}. The confusion matrices on the validation and test sets are shown in Figures \ref{fig:1-1} and \ref{fig:1-2}, respectively.
\section{Conclusions}
\label{conclusion}
The proposed solution performs very well on the validation set during the training phase, since the validation set mimics the train set: the memes were split according to the class distribution. The dataset is very small, and the algorithm undoubtedly over-fits the train set; augmentation would not help reach optimal results while preserving the meaning of the memes. The poor test performance is caused by the change in distribution: many memes in the test set contained multiple images, from which the ViT struggled to capture features. The model scored an F1-score of 0.46 on the test set against 1.0 on the validation set, a vast difference that reflects this distribution shift. In this paper, we proposed a transformer-transformer architecture that combines a visual and a textual transformer for this task. In the future, we plan to explore running more transformers in parallel and syncing their representations, which we believe can make an immense difference in this era of deep learning.
\section{Introduction}
Recently, the ability to embed representations of images and text in a joint space has been studied thoroughly for many tasks, among which are image annotation and search~\cite{klein2015associating, chen2017amc}, zero-shot recognition~\cite{socher2013zero, mukherjee2016gaussian, frome2013devise, reed2016learning}, robust image classification~\cite{frome2013devise}, image description generation~\cite{karpathy2015deep}, visual question-answering~\cite{nam2017dual}, and more.
Vector arithmetic properties have been demonstrated lately as a surprising artifact of learning semantic embedding spaces. Mikolov et al.~\cite{mikolov2013linguistic} showed that a learned \emph{word2vec} embedding space can capture semantic vector arithmetics, such as: ``Paris'' - ``France'' +``Italy'' = ``Rome''. Kiros et al.~\cite{uniVSE} demonstrated a similar phenomenon in multimodal visual-semantic embedding spaces, in which, with linear encoders, the learned embedding space captures multimodal regularities. For instance, given $ f_I $, a representing vector of an image of a blue car, $ f_I $ - ``blue'' + ``red'' yields a representing vector of a red car image.
This paper refers to the specific, fine-grained, task of visual-textual multimodal search in the fashion domain. Example queries and their retrieved results can be seen in Figure~\ref{fig:MMS-examples}.
We believe this type of application can greatly impact the customer shopping experience, by enabling intuitive and interactive search refinements.
With this technology, browsing large fashion catalogs and finding specific products can become easier and less frustrating.
We consider training a visual-textual joint embedding model in an end-to-end manner, based on images and textual metadata of catalog products. We propose a training objective which we refer to as Mini-Batch Match Retrieval (MBMR). Each mini-batch consists of matching and non-matching image-text pairs. We compute the cosine similarity of each pair and maximize the similarities of matching samples with a cross-entropy loss, as done in~\cite{wojke2018deep}. However, unlike~\cite{wojke2018deep}, which assigns an embedding vector per category, in our retrieval task the notion of category does not exist. Instead, we learn an embedding for each item (image and text) and try to classify the correct pair out of the mini-batch reference set.
We demonstrate the superiority of this approach over the commonly used triplet loss.
In addition, we explore the task of visual fashion attribute extraction, utilizing the noisy catalog data alone, without additional annotation effort. A pool of possible fashion attributes is extracted from frequent words in the catalog metadata, and a multi-label classifier is trained to extract the correct ones given a product image. We demonstrate that, although the catalog-based labels are noisy, attribute extraction produces satisfying results.
We propose and evaluate several approaches for multimodal search. The first approach leverages the query-arithmetic phenomenon of visual-textual joint embeddings. A second approach utilizes our learned attribute extraction module, for soft textual filtering, alongside visual search, based on our visual-semantic embedding space. Finally, we propose a combined approach, which leverages both the joint embedding based query-arithmetic property and soft attribute filtering. This approach yields a considerable performance improvement over the other methods.
\begin{figure}[t]
\centering
\includegraphics[width=0.75\textwidth]{example_queries_obscured.pdf}
\caption{Examples of typical multimodal search queries and their top retrieved results.}
\label{fig:MMS-examples}
\vspace*{-10pt}
\end{figure}
\section{Related Work}
Image recognition classifiers treat labels as disconnected and unrelated, resulting in visual recognition systems that cannot transfer semantic information about learned labels to unseen words or phrases. Early visual-textual joint embedding works addressed this problem by mapping image-word pairs, where the words corresponded to image labels or attributes. Weston et al.~\cite{weston2010large} trained a joint embedding model of both images and their labels, by employing an online learning-to-rank algorithm. Frome et al.~\cite{frome2013devise} leveraged textual data to learn semantic relationships between labels by explicitly mapping images into a common visual-semantic embedding space. They showed this approach leads to more semantically reasonable errors and significantly improved zero-shot predictions.
More recent works attempted to map images and their textual descriptions into a common embedding space.
Klein et al.~\cite{klein2015associating} employed Canonical Correlation Analysis (CCA)~\cite{cca} to learn projections of precomputed image and caption features, onto a joint embedding space, for cross-modal retrieval tasks. Kiros et al.~\cite{uniVSE} employed a triplet-based ranking loss in order to learn a similar embedding space for images and text, for caption generation and ranking tasks.
Karpathy et al.~\cite{karpathy2014deep} worked at a finer level, embedding fragments of images and sentences jointly with a max-margin based objective. In the fashion domain, Han et al.~\cite{han2017learning} learned a similar visual-semantic embedding for product images and their corresponding textual descriptions.
They combined this joint embedding in their outfit recommendation engine, so that it is agnostic to the input type (image, text or a combination of both).
Several works considered the task of manipulating attributes for fashion search.
Zhao et al.~\cite{zhao2017memory} trained a network to jointly optimize attribute classification loss and triplet ranking loss, over image triplets, for facilitating precise attribute manipulation and image retrieving. The network learned, in a supervised manner, to modify the intermediate image representation based on the desired manipulation.
Kenan et al.~\cite{ak2018fashionsearchnet} proposed learning attribute specific representations by leveraging weakly-supervised localization, in order to manipulate attributes during fashion search.
M. G{\"u}nel et al.~\cite{gunel2018language} proposed a GAN-based solution for language guided image manipulation, where the generator performs feature-wise linear modulation between visual features and desired natural language descriptions.
Zhao et al. ~\cite{zhao2018multi} proposed a Multi-Task Learning (MTL) system to jointly train an image captioning and attribute extraction model. They demonstrated how the auxiliary attribute extraction task resulted in better image representation and improved performance in the original captioning task.
\section{Data}
The data used for training the joint embedding model consists of 0.5M fashion products from a retail website. Each product item has an associated image and textual metadata. The catalog covers a very diverse range of products and includes rich and relatively accurate metadata. The actual search, and its evaluation, are performed on a larger set of 1.5M catalog items (only tops, bottoms and dresses) from a different retail website.
Although the textual metadata of our training catalog is relatively clean and accurate compared to other catalogs, there still exists noise and variability in the textual metadata. Similar items can have very different textual descriptions, while non-similar items may have relatively similar descriptions.
Moreover, the textual metadata is frequently lacking in detail.
\begin{figure}[b]
\centering
\includegraphics[width=1.0\textwidth]{model_architecture_full.pdf}
\caption{ \textbf{Joint Embedding:} A ResNet-18 CNN extracts visual features from the image with an additional fully connected (FC) layer which projects these features to the joint space. The textual encoder sums the word embeddings of all relevant words in the textual metadata. \textbf{Attribute Extraction:} An additional network branch extracts attribute probabilities from the image representation. It utilizes the catalog textual metadata as ground-truth attribute labels. }
\label{fig:model}
\vspace*{-10pt}
\end{figure}
\section{Training}
\label{sec:training}
Figure~\ref{fig:model} illustrates the architecture of our model.
The basic joint embedding model consists of two main branches, an image encoder and a text encoder.
Image encoding is based on a ResNet-18~\cite{he2016deep} deep convolutional neural network (CNN), followed by an additional fully connected layer which projects the visual feature vector to the same space as the textual encoding.
Text encoding is done by summing the word embeddings of all input words.
The text is treated as a bag-of-words, rather than an ordered sequence of words, since it is accumulated from several metadata fields, and may contain a mixture of sentences and individual keywords.
For attribute extraction, we add a third branch to this joint embedding architecture. The branch consists of a fully connected layer, followed by a sigmoid activation function for multi-label classification. The input to this branch is the image feature vector, $f_I$, and the output is a vector of attribute probabilities, $p_{\bm w}(I)$.
The size of the attribute probability vector is determined by the vocabulary size, $|V|$.
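The three branches can be sketched as follows. This is an assumption-laden reconstruction rather than the authors' code: the ResNet-18 trunk is abstracted as a 512-dimensional feature input, and the joint embedding dimension of 256 is illustrative.

```python
# Sketch of the three-branch model: a ResNet-18 feature vector is projected
# by a fully connected layer into the joint space, the text is encoded as
# the sum of its word embeddings (bag-of-words), and an attribute head maps
# image features to |V| sigmoid probabilities.
import torch
import torch.nn as nn

class JointEmbedding(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, img_feat_dim=512):
        super().__init__()
        self.word_emb = nn.Embedding(vocab_size, embed_dim)  # word2vec-initialised
        self.img_fc = nn.Linear(img_feat_dim, embed_dim)     # projects ResNet features
        self.attr_fc = nn.Linear(img_feat_dim, vocab_size)   # attribute branch

    def encode_text(self, word_ids):                # sum of word embeddings
        return self.word_emb(word_ids).sum(dim=1)   # (B, embed_dim)

    def encode_image(self, resnet_feats):
        return self.img_fc(resnet_feats)            # (B, embed_dim)

    def attribute_probs(self, resnet_feats):
        return torch.sigmoid(self.attr_fc(resnet_feats))  # (B, |V|)

model = JointEmbedding(vocab_size=1000)
f_T = model.encode_text(torch.randint(0, 1000, (2, 7)))  # 7 words per item
f_I = model.encode_image(torch.randn(2, 512))
p = model.attribute_probs(torch.randn(2, 512))
print(f_T.shape, f_I.shape, p.shape)
```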
The model is trained end-to-end. That is, both encoder branches are trained jointly.
The ResNet weights are initialized from a pre-trained ImageNet~\cite{deng2009imagenet} model. The word embeddings are based on \emph{word2vec} and are trained on product titles; embeddings of words that do not appear in this set are initialized randomly. The fully connected layer parameters are initialized with PCA over the extracted ImageNet features. We also freeze the ResNet weights at the beginning of training, and unfreeze only the top two residual blocks after two epochs. We use the Adam~\cite{kingma2014adam} optimizer with an exponentially decaying learning rate schedule. We have found that all of these settings help improve convergence and reduce overfitting.
Our training objective is composed of two loss terms.
A Mini-Batch Match Retrieval (MBMR) loss, $\mathcal{L}_{MBMR}$, for the task of learning a joint embedding space, and a multi-label cross-entropy loss, $\mathcal{L}_a$, for attribute extraction. The final objective is a weighted sum of both loss terms.
\subsection{Textual Metadata Preprocessing}
In order to clean and normalize the textual metadata we use several preprocessing steps when building our vocabulary. (1) Tokenization – divide the raw description text into a set of tokens.
(2) Stemming – normalize words to their base form, in order to avoid multiple word variations with the same visual meaning.
(3) Part-Of-Speech (POS) based filtering – identify noun and adjective tokens, which are more likely to have visual significance, and ignore the rest.
(4) Word frequency thresholding – words that appear fewer times in the dataset than a hard cut-off threshold are removed, thus reducing noise and avoiding an unnecessarily large vocabulary. We set our threshold to $500$.
These preprocessing steps determine the vocabulary, $V$, of our model. Its size, $|V|$, also affects the number of parameters in the word embeddings and attribute extraction fully connected layer.
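A dependency-free sketch of steps (1) and (4) of this pipeline is given below. Stemming and POS filtering (steps 2 and 3) would require an NLP library such as NLTK and are omitted here, and the cut-off of 3 is illustrative (the paper uses 500).

```python
# Build a vocabulary by tokenizing descriptions and dropping rare words.
from collections import Counter
import re

def build_vocab(descriptions, min_count=3):
    counts = Counter()
    for text in descriptions:
        counts.update(re.findall(r"[a-z]+", text.lower()))   # (1) tokenize
    return {w for w, c in counts.items() if c >= min_count}  # (4) threshold

docs = ["Red floral dress", "red maxi dress", "Red summer dress in cotton"]
vocab = build_vocab(docs, min_count=3)
print(sorted(vocab))  # ['dress', 'red']
```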
\subsection{Mini-Batch Match Retrieval Objective}
\label{mbmr}
The objective of the joint-embedding training procedure should encourage matching (non-matching) image-text pairs to be as close (distant) as possible to (from) each other, in the common embedding space. To achieve this, we propose the following Mini-Batch Match Retrieval (MBMR) objective.
In our training setting, each mini-batch consists of $N$ product items, $ \left\{I_i, T_i \right\}_{i=1}^{N} $, where $ I_i $ is an image, and $ T_i $ is its corresponding textual metadata.
For each image embedding in the batch, $ f_I $, and text embedding in the batch, $ f_T $, we compute their cosine similarity,
\begin{equation}
S_{I,T} = \dfrac {f_I \cdot f_T} {\norm{f_I} \norm{f_T}}.
\end{equation}
We then define the probability of image $ I_i $ to match description $ T_j $ as,
\begin{equation}
P(T_j \: | \: I_i) = \dfrac{\exp\left\{S_{I_i,T_j} / \tau\right\}}{\sum_k{\exp\left\{S_{I_i,T_k} / \tau \right\}}},
\end{equation}
where $ \tau $ is a temperature parameter. The probability of $ T_i $ to match image $ I_j $, is calculated similarly,
\begin{equation}
P(I_j \: | \: T_i) = \dfrac{\exp\left\{S_{I_j, T_i} / \tau\right\}}{\sum_k{\exp\left\{S_{I_k, T_i } / \tau \right\}}}.
\end{equation}
The final objective is obtained by applying cross-entropy for every query image and text in the batch,
\begin{equation}
\mathcal{L}_{MBMR} = - \sum_i{\log P(T_i \: | \: I_i)} - \sum_i{\log P(I_i \: | \: T_i)}.
\end{equation}
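The cosine-softmax-cross-entropy objective defined above can be written compactly in pure Python; the 3-dimensional embeddings below are toy values, and a real implementation would operate on batched tensors for efficiency.

```python
# Mini-Batch Match Retrieval loss: temperature-scaled softmax over pairwise
# cosine similarities, cross-entropy of the true pairings in both the
# image-to-text and text-to-image directions.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def mbmr_loss(img_feats, txt_feats, tau=0.025):
    n = len(img_feats)
    S = [[cosine(fi, ft) for ft in txt_feats] for fi in img_feats]
    loss = 0.0
    for i in range(n):
        # image -> text retrieval: -log P(T_i | I_i)
        row = [math.exp(S[i][j] / tau) for j in range(n)]
        loss -= math.log(row[i] / sum(row))
        # text -> image retrieval: -log P(I_i | T_i)
        col = [math.exp(S[j][i] / tau) for j in range(n)]
        loss -= math.log(col[i] / sum(col))
    return loss

imgs = [[1.0, 0.0, 0.1], [0.0, 1.0, 0.0]]
txts = [[0.9, 0.1, 0.0], [0.1, 0.9, 0.2]]
# Low loss when matching pairs are the most similar ones in the batch:
print(mbmr_loss(imgs, txts))
```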
\vspace{-20pt}
\subsection{Attributes Extraction}
Since our model learns to bridge the gap between images and text, it is natural to expect it to be able to provide out-of-the-box attribute extraction just by computing cosine-similarities between images and words. In practice, however, this leads to noisy results, due to the following reasons.
First, not all words are equally visually grounded. Some words are very visually dominant, while others may have very little (if any) visual significance, and may exist only due to imperfect textual preprocessing.
Second, word frequencies vary significantly. Some attributes appear in almost every item description in the catalog, like garment types, while others appear very rarely. This data behavior can be considered as noisy labels for our attribute extractor.
In order to create a more robust attribute extraction model, we add another branch to the model which consists of a fully connected layer that projects image embeddings to the vocabulary size, $|V|$, followed by a sigmoid function. The outputs of this branch, $ \left\{\hat p_w(I)\right\} $, are approximations of the probabilities for each word $ w $ in the vocabulary to belong to image $ I $. The ground-truth labels are determined by the existence of words in the product textual metadata. An additional loss term is added for this multi-label classification task.
During inference, we take the following additional steps in order to obtain reliable attribute extraction. We compute a per-word threshold, by optimizing the F-score on the validation set. This threshold, $thr_w$, is used to define a classification score,
\begin{equation}
\tilde{p}_w(I) = {\rm sigmoid}\left( \dfrac{\hat p_w (I)-{\rm thr}_w}{{\rm thr}_w }\right).
\end{equation}
Additionally, we compute a cosine-similarity score between word and image features,
\begin{equation}
S_{w, I} = \dfrac {f_w \cdot f_I} {\norm{f_w} \norm{f_I}},
\end{equation}
where $f_w$ and $f_I$ are the word and image embeddings, in the joint space, respectively.
Finally, we average the classification score, $\tilde{p}_w(I)$, and the clipped cosine-similarity score, $S_{w, I}$, in order to obtain the final probability that word $ w $ is a characteristic of image $ I $,
\begin{equation}
p_w(I) = \dfrac{\tilde{p}_w(I) + \max{(S_{w, I},0)}}{2}.
\end{equation}
In order to approximate the probability of a desired and undesired attribute set $\bm w= \left\{\bm w^+, \bm w^-\right\}$, in the multimodal search scenario, we follow Bayes rule, under the independence assumption,
\begin{equation}
p_{\bm w}(I) = \prod_{w^+ \in \bm w^+}{p_{w^+}(I)} \prod_{w^- \in \bm w^-}{(1-p_{w^-}(I))}.
\end{equation}
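The per-word score and the independence-based combination above can be illustrated with toy numbers; the classification probabilities, thresholds, and cosine values below are made up for the example.

```python
# Per-word attribute probability: sigmoid of the threshold-recentred
# classification score, averaged with the clipped word-image cosine
# similarity; query probability: naive-Bayes product over the desired and
# undesired word sets.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def word_prob(p_hat, thr, cos_wi):
    p_tilde = sigmoid((p_hat - thr) / thr)      # threshold-recentred score
    return (p_tilde + max(cos_wi, 0.0)) / 2.0   # average with clipped cosine

def query_prob(desired, undesired):
    """desired/undesired: lists of (p_hat, thr, cos) triples per word."""
    p = 1.0
    for w in desired:
        p *= word_prob(*w)                      # desired attributes present
    for w in undesired:
        p *= 1.0 - word_prob(*w)                # undesired attributes absent
    return p

# e.g. query "+ red - floral" on one reference image:
p = query_prob(desired=[(0.8, 0.4, 0.7)], undesired=[(0.1, 0.3, -0.2)])
print(round(p, 3))  # 0.594
```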
\begin{figure}[t]
\centering
\includegraphics[width=0.65\textwidth]{attribute_examples_obscured.pdf}
\caption{Attribute extraction examples. We list the top 6 extracted attributes for each catalog image, according to their probabilities $p_w(I)$.}
\label{fig:attribute-examples}
\end{figure}
\section{Multimodal Refinement Search}
\subsection{Query Arithmetic Approach}
\label{sec:query_arithemtic}
During inference, the text and image encoders can yield image and textual query feature vectors which lay in a common embedding space. These feature vectors can be used to search for products, with similar visual or textual properties, in a dedicated catalog. The similarity metric used for matching the query and catalog items is, as in the training phase, cosine similarity. The catalog image and textual features can be precomputed offline once.
Ideally speaking, the fact that visual and textual modalities share the same embedding space, combined with the linear nature of the text encoder, enables performing arithmetic operations (as in \emph{word2vec}) in order to manipulate the desired search query. This enables searching for visually similar products with some different properties, defined textually, by simply adding (subtracting) desired (undesired) textual features to (from) the product visual feature vector. That is, for a given query image, $I$, and a desired and undesired attribute set, $\bm w= \left\{\bm w^+, \bm w^-\right\}$, the new mutlimodal query $ q $ can be defined by,
\begin{equation}
q = f_I + f_T,
\end{equation}
\begin{equation}
f_T = \sum_{w^+ \in \bm w^+}{f_{w^+}} - \sum_{w^- \in \bm w^-}{f_{w^-}},
\end{equation}
where $ f_I $ is the image embedding, and $ f_T $ is the linear combination of desired and undesired word embeddings.
The similarity score, $ S $, between the query and reference catalog items, is defined as the cosine similarity between $q$ and the reference visual features $f_{I_r}$.
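A toy illustration of this query-arithmetic search follows; the 2-dimensional "embeddings" and their axis semantics are invented purely for the example.

```python
# Query arithmetic: add desired word vectors to (and subtract undesired ones
# from) the image embedding, then rank references by cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def multimodal_query(f_img, desired=(), undesired=()):
    q = list(f_img)
    for w in desired:
        q = [qi + wi for qi, wi in zip(q, w)]
    for w in undesired:
        q = [qi - wi for qi, wi in zip(q, w)]
    return q

# axis 0 ~ "dress-ness", axis 1 ~ "red-ness" (toy semantics)
f_blue_dress = [1.0, 0.0]
f_red = [0.0, 1.0]
refs = {"blue dress": [1.0, 0.0], "red dress": [0.7, 0.7], "red shoe": [0.0, 1.0]}
q = multimodal_query(f_blue_dress, desired=[f_red])
best = max(refs, key=lambda r: cosine(q, refs[r]))
print(best)  # red dress
```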
\subsection{Attribute Filtering Approach}
\label{sec:attribute_filtering}
An alternative approach for multimodal search is filtering out all catalog items which are not consistent with the textual query. Then, the search score can be calculated based on visual similarity alone. This approach can be formulated as follows.
\begin{equation}
S = \dfrac {q \cdot f_{I_r}} {\norm{q} \norm{f_{I_r}}} \cdot \mathbbm{1}{( w \in T_r \;\;\; \forall w \in \bm w^+)} \cdot \mathbbm{1}{( w \notin T_r \;\;\; \forall w \in \bm w^- )},
\end{equation}
where $q = f_I $, $ T_r $ is the set of words in the reference textual metadata, and $ \bm w^+ $ ($ \bm w^-$ ) is the set of desired (undesired) properties.
This approach should work well given an ideal catalog, with complete and error-free textual metadata.
However, this is not the case in most catalogs. Hence, we derive a soft filtering method based on attribute extraction probabilities,
\begin{equation}
S = \dfrac {q \cdot f_{I_r}} {\norm{q} \norm{f_{I_r}}} \cdot p_{\bm w} (I_r),
\end{equation}
where $q = f_I$ and $p_{\bm w} (I_r)$ is the probability of the textual desired and undesired properties in the reference image $I_r$.
\subsection{Combined Approach}
We attempt to combine both previously described methods into a single robust one. We do so by using the soft attribute filtering along with the query arithmetic based search.
The motivation for incorporating attribute filtering is to better meet the textual manipulation criteria. Since attribute filtering is soft and noisy, it is not enough to use it with visual search alone (as in Section~\ref{sec:attribute_filtering}), as that encourages retrieval of visually similar items without considering the textual manipulation.
The exact formulation is as follows.
\begin{equation}
S =\dfrac {q \cdot f_{I_r}} {\norm{q} \norm{f_{I_r}}} \cdot p_{\bm w}(I_r),
\end{equation}
where $q = f_I + f_T$, as in Section~\ref{sec:query_arithemtic}.
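The combined score can be illustrated the same way; the two reference items below, each with a made-up embedding and attribute probability, show why multiplying the two factors prefers items that satisfy both the visual and the textual criteria.

```python
# Combined score: query-arithmetic cosine similarity multiplied by the soft
# attribute probability of the reference item.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def combined_score(q, ref_feat, attr_prob):
    return cosine(q, ref_feat) * attr_prob

q = [1.0, 1.0]  # f_I + f_T from query arithmetic (toy values)
refs = {
    "visually close, wrong attributes": ([1.0, 0.9], 0.05),
    "visually close, right attributes": ([0.9, 1.0], 0.90),
}
ranked = sorted(refs, key=lambda r: combined_score(q, *refs[r]), reverse=True)
print(ranked[0])  # visually close, right attributes
```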
\section{Evaluation}
\label{eval}
\begin{figure}[b]
\includegraphics[width=0.7\textwidth]{mbr_vs_triplets.pdf}
\centering
\caption{ Comparison of top-$K$ validation accuracy convergence, during training, between triplet loss, MBMR loss and a multi-task objective composed of MBMR loss and multi-label cross-entropy loss for attribute extraction.
}
\label{fig:mbmr_exp}
\end{figure}
For evaluation purposes we automatically constructed a benchmark of multimodal queries. Query product images (of tops, bottoms and dresses) were randomly sampled, and assigned with desired and undesired textual requirements out of a pool of common fashion attributes. The pool consisted of $110$ fashion attributes from $5$ major categories: color, pattern, neckline, style and garment type. Textual requirements can specify either adding, removing or replacing specific properties to or from the query image. The final benchmark consists of 1500 queries (300 for each attribute category), after manual verification and query filtering.
A commonly used metric in information retrieval tasks is the normalized Discounted Cumulative Gain (nDCG) \cite{jarvelin2002cumulated}. The DCG metric measures ranking quality, which cumulates the relevance of the top-$K$ retrieved items per query, while penalizing them differently based on their rank.
\begin{equation}
{\rm DCG}_K = \sum_{i=1}^{K} \dfrac{rel_i}{\log_2{(i+1)}},
\end{equation}
where $ rel_i $ is the relevance of the reference item ranked in place $ i $ by the model. The relevance is given by some oracle.
The nDCG normalizes the DCG metric by the Ideal-DCG value (IDCG), which is calculated similarly to the DCG, over an ideally sorted reference list. For IDCG, we used an upper-bound approximation which assumes our reference corpus contains $K$ items with $rel = 1$.
In order to evaluate multimodal search performance, two aspects need to be accounted for: the visual and the textual. A perfect result would meet the textual criteria while still being as visually similar as possible to the query image. We develop two nDCG metrics, with relevance scores based on a visual oracle and a textual oracle. Our final, multimodal, metric is a simple geometric mean of both nDCG scores.
\begin{itemize}
\itemsep-0.16em
\item \textbf{Visual nDCG (V-nDCG)}: Based on visual relevance, which is extracted from a baseline visual search model. This purely visual model was trained with triplet loss on catalog images, where for each query image a different image of the same item was considered as a positive sample and images of different items were considered as negative samples. The relevance is the cosine similarity between reference and query visual features, extracted from this baseline model.
\item \textbf{Textual nDCG (T-nDCG)}: Based on presence (absence) of desired (undesired) query words in the reference textual metadata. The relevance is defined by the rate of criteria that are met. A desired word criterion is considered as met if the reference metadata includes the word. An undesired word criterion is met if the reference metadata does not include the word.
\item \textbf{Multimodal (MM)}:
\mbox{MM $\triangleq \sqrt{\text{V-nDCG} \cdot \text{T-nDCG}}$}.
\end{itemize}
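These metrics can be computed with a few lines of Python; the relevance values below are illustrative, and the ideal DCG follows the upper-bound assumption of $K$ perfectly relevant references described above.

```python
# DCG over the top-K relevances, nDCG against the upper-bound IDCG, and the
# geometric-mean multimodal score MM.
import math

def dcg(relevances):
    return sum(rel / math.log2(i + 2) for i, rel in enumerate(relevances))

def ndcg(relevances, k):
    ideal = dcg([1.0] * k)           # upper-bound IDCG: K items with rel = 1
    return dcg(relevances[:k]) / ideal

v_rel = [0.9, 0.8, 0.7]              # visual relevances of the top-3 results
t_rel = [1.0, 0.5, 1.0]              # rate of textual criteria met
v_ndcg, t_ndcg = ndcg(v_rel, 3), ndcg(t_rel, 3)
mm = math.sqrt(v_ndcg * t_ndcg)      # MM = sqrt(V-nDCG * T-nDCG)
print(round(v_ndcg, 3), round(t_ndcg, 3), round(mm, 3))
```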
These metrics are somewhat noisy, and may be inaccurate in specific cases, such as incomplete and inaccurate metadata or inaccuracies caused by the baseline visual search model. However, on the corpus level we observe that they are stable and reliable enough to serve as evaluation metrics, and help us compare different methods.
\section{Experimental Results}
\label{sec:exp}
We compare our described Mini-Batch Match Retrieval (MBMR) objective with a triplet loss, as utilized in~\cite{uniVSE}. Figure \ref{fig:mbmr_exp} shows the convergence of top-5 and top-20 validation accuracy during the joint embedding training procedure. The top-$ K $ accuracy metric measures the rate of images and text descriptions for which the actual matching pair was ranked, based on cosine similarity, within the top $K$ references out of the entire validation set, which consists of 23.5K items. In our experiments, the mini-batch size was set to $160$, the MBMR temperature $\tau$ to 0.025 and the triplet loss margin to 0.2. We believe that top-$ K $ accuracy is a good metric for this task, as in retrieval tasks we usually mostly care about the top retrieved results. It can be seen that the MBMR objective leads to faster and superior convergence over triplet loss. Additionally, it can be seen that multi-task training, with the additional attribute extraction branch and corresponding loss, slightly increases performance.
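The top-$K$ accuracy metric described above can be sketched as follows, with toy 2-dimensional embeddings in which the third image-text pair is deliberately mismatched.

```python
# Top-K retrieval accuracy: for each query, the matching counterpart must be
# ranked within the top K references by cosine similarity.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

def top_k_accuracy(queries, references, k):
    hits = 0
    for i, q in enumerate(queries):               # ground truth: pair (i, i)
        order = sorted(range(len(references)),
                       key=lambda j: cosine(q, references[j]), reverse=True)
        hits += i in order[:k]
    return hits / len(queries)

imgs = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
txts = [[0.9, 0.1], [0.2, 0.8], [-1.0, 1.0]]      # third pair is a mismatch
print(top_k_accuracy(imgs, txts, k=1))
```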
We follow our evaluation protocol for multimodal search, as described in Section~\ref{eval}, and compare the following methods: Soft Attribute Filtering (SAF), Query Arithmetic (QA) and their combination (QA+SAF).
It can be seen in Table \ref{table:metric-eval} that although there is a clear trade-off between the visual and textual metrics (V-nDCG and T-nDCG), on the overall multimodal (MM) metric, the combined approach (QA+SAF) outperforms all others significantly.
These conclusions are further reinforced by our qualitative visualization and analysis of the results, as can be seen in Figure \ref{fig:results-examples}.
\begin{table}[t]
\begin{center}
\bgroup
\tiny
\def\arraystretch{1.2}
\caption{Evaluation results: we report V-nDCG, T-nDCG and MM metrics, and compare the Soft Attribute Filtering (SAF), the naive linear Query-Arithmetic (QA) and the combined (QA+SAF) methods.
Queries are split by the type of textual criteria.}
\label{table:metric-eval}
\begin{tabular}{|l| |c|c|c|c|c|c|c|c|c|}
\hline
\multirow{2}{*}{} & \multicolumn{3}{|c|}{\textbf{SAF}} & \multicolumn{3}{|c|}{\textbf{QA}} & \multicolumn{3}{|c|}{\textbf{QA+SAF}} \\
\cline{2-10}
& V-nDCG &T-nDCG & MM & V-nDCG &T-nDCG & MM & V-nDCG &T-nDCG & MM \\
\hline
\hline
\cline{1-10}
\textbf{Color} &0.726 &0.407 &0.543 & 0.8 &0.413 &0.574 &0.621 &0.591 &\textbf{0.605} \\
\cline{1-10}
\textbf{Pattern} &0.769 &0.407 &0.559 &0.818 &0.426 &0.59 &0.672 &0.543 &\textbf{0.604} \\
\cline{1-10}
\textbf{Neckline} &0.77 &0.572 &\textbf{0.663} &0.806 &0.527 &0.651 &0.68 &0.628 &0.653 \\
\cline{1-10}
\textbf{Style} &0.761 &0.464 &0.594 &0.815 &0.401 &0.572 &0.676 &0.563 &\textbf{0.617} \\
\cline{1-10}
\textbf{Garment} &0.785 &0.27 &0.46 &0.828 &0.221 &0.427 &0.696 &0.486 &\textbf{0.581} \\
\cline{1-10}
\hline
\hline
\textbf{Overall} &0.76 &0.419 &0.564 &0.813 &0.397 &0.568 &0.669 &0.561 &\textbf{0.612} \\
\hline
\end{tabular}
\egroup
\end{center}
\end{table}
\section{Conclusions}
\label{sec:conclusion}
In this paper, we explored the task of multimodal fashion search.
We proposed utilizing a visual-textual joint embedding model for this task, suggested an alternative training objective and demonstrated its effectiveness.
We explored and evaluated several approaches to leverage this joint-embedding model for the multimodal search task.
Unlike previous works, our method does not require direct supervised data of images before and after the textual manipulation.
Moreover, our training and evaluation methods are all performed over noisy, not well structured, catalog data.
\begin{figure}[h]
\centering
\includegraphics[width=1\textwidth]{results_obscured.pdf}
\caption{Qualitative results of top 3 retrieved items for example queries with Soft Attribute Filtering (SAF), Query Arithmetics (QA) and combined approach (QA+SAF).}
\label{fig:results-examples}
\end{figure}
\newpage
\medskip
\small
\bibliographystyle{plain}
\section{Introduction}
Thermal energy storage (TES) \cite{TES1,TES2} technology has attracted wide attention due to its ability to resolve the temporal and spatial imbalance between energy supply and demand. Among the various TES methods, latent heat thermal energy storage (LHTES) \cite{LHTES,LHTES2}, which stores thermal energy as latent heat through the solid-liquid phase change of a phase change material (PCM), has been extensively developed in recent years, mainly because it offers high storage density, a stable storage temperature, and low cost. Unfortunately, most phase change materials used in LHTES have a low thermal conductivity \cite{low thermal conductivity}, which seriously limits the heat storage efficiency.
In order to improve the performance of LHTES, scholars have proposed many heat transfer enhancement techniques, including adding highly conductive particles \cite{particles1,particles2}, adding highly conductive metallic fins \cite{fin1,fin2,fin3}, using multiple-PCM methods \cite{multiple PCM1,multiple PCM2}, and embedding PCMs in highly conductive porous media \cite{chenIJHMT2014,yangAE2016,zhaoIJHMT2016,taoATE2016,zhuATE2016,yaoIJTS2018,yangICHMT2021}. Among these methods, highly conductive porous media, with their high thermal conductivity, heat penetration, porosity, and specific surface area, have shown particular promise.
Metal foam is a typical porous medium with high thermal conductivity. It retains the excellent properties of the base metal, such as high stability, light weight, high thermal conductivity, and ductility, while improving heat transfer characteristics, and is therefore a widely used thermal conductivity enhancer. Several studies have explored solid-liquid phase change heat transfer in PCM-filled metal foam \cite{chenIJHMT2014,yangAE2016,zhaoIJHMT2016,taoATE2016,zhuATE2016,yaoIJTS2018,yangICHMT2021}. Chen et al. \cite{chenIJHMT2014} used an infrared camera and a microscope to experimentally study the heat transfer of a PCM in metal foam, and carried out a numerical simulation with a double-distribution lattice Boltzmann method; the numerical results agreed with the experiments, and the metal foam significantly enhanced the melting of the PCM. Yang et al. \cite{yangAE2016} experimentally studied the effect of metal foam on melting and found that completely melting the PCM in metal foam takes over 1/3 less time than melting pure paraffin under the same conditions. Zhao et al. \cite{zhaoIJHMT2016} numerically studied the melting behavior of paraffin in metal foam and noted that the Rayleigh number, porosity, and pore density have a significant influence on the melting and solidification process. Tao et al. \cite{taoATE2016} used the lattice Boltzmann method to study the heat storage performance of paraffin/metal-foam composites, examined the influence of porosity and pore density on the melting rate, and proposed an optimal metal foam structure with a porosity of 0.94 and a pore density of 45 PPI. Zhu et al. \cite{zhuATE2016} used the finite volume method to explore the influence of three strengthening measures, namely foam metal porosity, cold wall shape, and discrete heat sources, on the thermal performance of foam metal/phase change material composites. Yao et al.
\cite{yaoIJTS2018} conducted a visualization experiment on the melting of paraffin in high-porosity copper foam at the pore scale. They concluded that copper foam with a high porosity of 0.974 effectively extends the phase change interface and improves the heat storage of paraffin, while the reduction in latent heat capacity is only 2.6\%. Yang et al. \cite{yangICHMT2021} numerically and experimentally explored the influence of the inclination angle of an inclined cavity containing metal foam, and of the cavity aspect ratio, on the melting of PCM. They found that the tilt angle has little effect for a given aspect ratio, and that smaller aspect ratios perform better than larger ones when the aspect ratio is varied.
All the studies mentioned above consider metal foams with fixed pore parameters. Recently, many investigations have found that gradient metal foams can further enhance melting heat transfer under the same conditions. Yang et al. \cite{yangIJHMT2015} numerically investigated the melting of sodium nitrate inside copper foam with linearly varying porosity. The results show that a porosity increasing linearly from bottom to top improves the heat transfer performance and shortens the complete melting time compared with constant porosity, by enhancing natural convection. Yang et al. \cite{yangIJHMT2016} numerically studied the solidification of saturated distilled water in open-cell metal foam and compared it with ungraded foam; positive gradients of porosity and negative gradients of pore density yield faster solidification than non-graded foams. Zhu et al. \cite{zhuATE2017} proposed an improved structure composed of metal foam and finned metal foam with gradient pores, and used the finite volume method to analyze the influence of the structural parameters on the energy storage performance; this structure shortens the melting time by changing the melting sequence of the PCM. Zhang et al. \cite{zhangATE2017} numerically studied melting in gradient metal foam consisting of three slices of different homogeneous porosity, and the results show that the gradient porosity structure can overcome the corner phenomenon at the bottom and thus increase the heat storage rate. Yang et al. \cite{yangAE2020} experimentally and numerically studied the effect of gradient porosity and gradient pore density in a tube-type latent heat thermal energy storage unit. 
The results indicate that a positive porosity gradient can significantly reduce the melting time of the PCM filling the pore space while also yielding better temperature uniformity. Ghahremannezhad et al. \cite{ghahremannezhadATE2020} used a finite volume approach to simulate the melting of PCMs in gradient metal foam under different heating modes and found that the directions of the porosity and PPI gradients affect the heat transfer rate. Hu et al. \cite{huATE2020} used a three-dimensional model to simulate melting in a gradient metal foam saturated with PCM and quantitatively explored the effect of the gradient size and gradient difference on the heat storage characteristics; gradient metal foam effectively improves and accelerates heat storage, and there is an optimal gradient difference for a fixed average porosity. Marri et al. \cite{marriIJHMT2021} experimentally and numerically studied the thermal performance of cylindrical metal foam/PCM composite heat sinks with gradients of porosity and pore density. The three-level and two-level gradients have comparable thermal performance, being 4.4 and 4 times better than that of the uniform structure, respectively.
From the above experimental and numerical studies, it can be seen that gradient pore structures have a significant impact on the heat transfer characteristics of PCM and are an effective means of enhancing phase change heat transfer. However, most research on how gradient metal foam influences the phase change process of the PCM is based on representative elementary volume (REV) models, and pore-scale models are seldom used. Compared with the REV approach, a pore-scale model provides detailed local information on the fluid flow and heat transfer inside the gradient metal foam. As far as the authors know, the effect of key parameters such as the Rayleigh number on the solid-liquid phase transition in gradient metal foam has not been systematically studied. Therefore, we study the melting behavior of PCM in gradient porosity structures at the pore scale.
In this work, a pore-scale LB model is used to study the melting of metal foam/PCM composites with a porosity gradient. The lattice Boltzmann method (LBM) has been developed into an effective numerical method for solid-liquid phase change problems \cite{boltzmann1,boltzmann2,boltzmann3}, including the phase change of PCMs in porous media \cite{chenIJHMT2014,taoATE2016,LBM_media1,LBM_media2}. The remainder of this paper is structured as follows. The physical problem and governing equations are presented in Section 2, followed by the LBM for the fluid flow and for the temperature field in Sections 3.1 and 3.2. The boundary treatment and the validation of the melting model are given in Sections 3.3 and 3.4, respectively. Numerical results are presented and discussed in Section 4, and finally, the main conclusions are summarized in Section 5.
\section{Problem statement and governing equations}
\label{section2}
In this work, we consider the numerical simulation of solid-liquid phase change in a square cavity filled with a gradient porous medium. The schematic of the two-dimensional physical model considered in this study is depicted in Fig.\ref{fig1}. A square cavity with side length $ \delta $ encloses the gradient metal foam, which is filled with solid PCM at the melting temperature $ T_{m} $. During the solid-liquid phase change, the left wall is raised to a constant temperature $ T_{h} $, higher than $ T_{m} $, while the temperature of the right wall is kept at the
\begin{figure} [H]
\centering
\subfigure[Case A: horizontal porosity gradient]{
\includegraphics[width=0.4\linewidth]{problem_x-eps-converted-to.pdf}}
\subfigure[Case B: vertical porosity gradient]{
\includegraphics[width=0.4\linewidth]{problem_y-eps-converted-to.pdf}}
\caption{Schematic of phase change in gradient structure.}
\label{fig1}
\end{figure}
\noindent melting temperature $ T_{m} $, and the other two walls are assumed to be adiabatic. Additionally, a detailed description of the microstructure of the porous medium is a necessary prerequisite for the pore-scale model. In the present work, the stochastic simulation method proposed by Chen et al. \cite{zhouCMS2014} is employed to generate the porous structure. Since a gradient porosity structure built from tri-layer media has almost the same performance as a bi-layer one \cite{marriIJHMT2021}, we adopt a two-layer structure for simplicity. Three porous structures with the same average porosity are defined as follows: a positive gradient structure (porosity increasing from 0.8 to 0.95), a negative gradient structure (porosity decreasing from 0.95 to 0.8), and a uniform structure (porosity kept at 0.875). A positive porosity gradient means the porosity increases along the positive x-axis or y-axis direction, while a negative one means the porosity decreases along that direction. Case A and Case B study horizontal and vertical porosity gradients, respectively. Furthermore, it is worth noting that the particle diameter is 7.5 $ l.u. $ (lattice units) and the volume of PCM filling each gradient structure is the same, which lays the foundation for comparing melting behaviors during the phase change process.
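For reproducibility, the two-layer random structure described above can be sketched in a few lines. This is an illustrative stand-in for the stochastic generation method, not the exact algorithm of \cite{zhouCMS2014}; the grid size, random seed and function name are our own choices.

```python
import numpy as np

def two_layer_structure(nx, ny, d, eps_left, eps_right, seed=0):
    """Randomly drop solid disks of diameter d (in lattice units) into each
    half of an nx-by-ny grid until that half reaches its target porosity eps.
    Returns a boolean array: True = metal ligament, False = pore (PCM)."""
    rng = np.random.default_rng(seed)
    solid = np.zeros((nx, ny), dtype=bool)
    X, Y = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    for cols, eps in [(slice(0, nx // 2), eps_left),
                      (slice(nx // 2, nx), eps_right)]:
        target = (1.0 - eps) * solid[cols].size      # solid cells wanted
        while solid[cols].sum() < target:
            cx = rng.uniform(cols.start, cols.stop)  # disk centre in this half
            cy = rng.uniform(0, ny)
            solid |= (X - cx) ** 2 + (Y - cy) ** 2 <= (d / 2.0) ** 2
    return solid

solid = two_layer_structure(200, 200, d=7.5, eps_left=0.95, eps_right=0.80)
porosity_left = 1.0 - solid[:100].mean()    # close to 0.95
porosity_right = 1.0 - solid[100:].mean()   # close to 0.80
```

Disks near the internal boundary may spill slightly into the neighboring half, so the realized porosities deviate from the targets by a fraction of a percent, which is acceptable for an illustrative structure.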
Moreover, the following simplifying assumptions are made to establish the mathematical model: (1) the fluid in the cavity is incompressible and the Boussinesq approximation applies; (2) the volume change during the phase transition is ignored; (3) all materials are homogeneous and isotropic; (4) the physical properties of the PCM differ from those of the porous medium; (5) the physical properties of the PCM and the porous medium are constant. Based on these assumptions, the mathematical model of the solid-liquid phase transition at the pore scale can be expressed by the following equations.
\begin{equation}
\nabla \cdot \mathbf{u}=0,
\end{equation}
\begin{equation}
\frac{\partial \mathbf{u}}{\partial t}+\nabla \cdot \mathbf{u} \mathbf{u}=-\frac{1}{\rho_{f}} \nabla p+\nabla \cdot\left[v_{f}\left(\nabla \mathbf{u}+(\nabla \mathbf{u})^{\mathrm{T}}\right)\right]-\mathbf{g}\left[1-\beta_{f}\left(T-T_{m}\right)\right],
\end{equation}
\begin{equation}
\frac{\partial\left[\left(\rho C_{p}\right)_{f} T\right]}{\partial t}+\nabla \cdot\left[\left(\rho C_{p}\right)_{f} \mathbf{u} T\right]=\nabla \cdot \lambda_{f} \nabla T
\end{equation}
and the energy equation for the solid phase is:
\begin{equation}
\frac{\partial\left[\left(\rho C_{p}\right)_{s} T\right]}{\partial t}=\nabla \cdot \lambda_{s} \nabla T
\end{equation}
where the subscripts $f$ and $s$ denote the fluid and solid phases, respectively. $ \mathbf{g}, \mathbf{u}, T, p $ are the gravitational acceleration vector, velocity, temperature, and pressure, respectively. $ \beta_{f} $ is the thermal volumetric expansion coefficient of the fluid, and $ \rho, C_{p}, \lambda $ represent the density, heat capacity, and thermal conductivity, respectively.
Moreover, the thermal conductivity of the PCM is:
\begin{equation}
\lambda_{PCM}=\left(1-f_{l}\right) \lambda_{s}+f_{l} \lambda_{l}
\end{equation}
In the two-phase region, the enthalpy $ H $ can be solved by the enthalpy-based method and is given by
\begin{equation}
H=C_{p} T+f_{l} L
\end{equation}
where $ f_{l} $ denotes the liquid fraction, taking values between 0 and 1, and $ L $ represents the latent heat of the PCM.
This problem can be characterized by the following three main dimensionless parameters, i.e. Rayleigh number Ra, Prandtl number Pr, Stefan number Ste, which are defined by
\begin{equation}
Ra=\frac{|\mathbf{g}| \beta (T_{h}-T_{m}) \delta^{3}}{\nu \alpha}, \quad \operatorname{Pr}=\frac{\nu}{\alpha}, \quad \text { Ste }=\frac{C_{p} (T_{h}-T_{m})}{L},
\label{equation_Ra}
\end{equation}
where $ \nu $ and $ \alpha $ are the kinematic viscosity and thermal diffusivity, respectively.
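For concreteness, the three dimensionless groups can be evaluated as follows. The property values below are illustrative, paraffin-like numbers chosen here, not the parameters of the present simulations.

```python
# Illustrative, paraffin-like property values (not the paper's parameters).
g     = 9.81     # gravitational acceleration, m/s^2
beta  = 9.1e-4   # thermal expansion coefficient, 1/K
dT    = 10.0     # wall superheat T_h - T_m, K
delta = 0.05     # cavity side length (characteristic length), m
nu    = 5.0e-6   # kinematic viscosity, m^2/s
alpha = 1.0e-7   # thermal diffusivity, m^2/s
cp    = 2.0e3    # specific heat, J/(kg K)
L     = 2.0e5    # latent heat, J/kg

Ra  = g * beta * dT * delta**3 / (nu * alpha)   # buoyancy vs. diffusion
Pr  = nu / alpha                                # momentum vs. heat diffusion
Ste = cp * dT / L                               # sensible vs. latent heat
```

With these values, $Pr = 50$ and $Ste = 0.1$; Ra scales with the cube of the cavity size, which is why even moderate enclosures reach strongly convective regimes.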
\section{The lattice Boltzmann model and model validation}
\label{section3}
\subsection{Lattice Boltzmann method for velocity field}
\label{section3_1}
In the present study, the incompressible lattice Bhatnagar--Gross--Krook (LBGK) model proposed by Guo et al. \cite{LBGK} is used to simulate the fluid flow. The evolution equation for the velocity distribution function is
\begin{equation}
{f_i}\left( {{\bf{x}} + {{\bf{c}}_i}\Delta t,t + \Delta t} \right) -
{f_i}\left( {{\bf{x}},t} \right) = - \frac{1}{{{\tau _f}}}\left[
{{f_i}\left( {{\bf{x}},t} \right) - f_i^{(eq)}\left( {{\bf{x}},t}
\right)} \right] + \Delta t{F_i},
\label{F}
\end{equation}
where $ f_{i}(\mathbf{x},t) $ is the density distribution function with discrete velocity $ \mathbf{c}_{i} $ at position $ \mathbf{x} $ and time $ t $, and $ \Delta t $ is the time increment. $ \tau _f $ is the dimensionless relaxation time, determined by the kinematic viscosity through $\nu=c_{s}^{2}\left(\tau_{f}-0.5\right) \Delta t $. In addition, $f_i^{(eq)}$ is the equilibrium distribution function given by \cite{LBGK}
\begin{equation}
f_{i}^{(e q)}(\mathbf{x}, t)=\eta_{i} p+\omega_{i}\left[\frac{\mathbf{c}_{i} \cdot \mathbf{u}}{c_{s}^{2}}+\frac{\mathbf{u u}:\left(\mathbf{c}_{i} \mathbf{c}_{i}-c_{s}^{2} \mathbf{I}\right)}{2 c_{s}^{4}}\right],
\end{equation}
where $ \eta_{i} $ are model parameters satisfying $\eta_{0}=\left(\omega_{0}-1\right) / c_{s}^{2}+\rho_{0}, \eta_{i}=\omega_{i} / c_{s}^{2}(i \neq 0)$, with the constant $ \rho_{0} $ being the average fluid density. $ \omega_{i} $ is the weight coefficient, given by
\begin{equation}
\omega_{i}=\left\{\begin{array}{ll}
4 / 9, & i=0 \\
1 / 9, & i=1,2,3,4 \\
1 / 36, & i=5,6,7,8
\end{array}\right.
\end{equation}
The discrete velocities $ \mathbf{c}_{i} $ of the model are given by
\begin{equation}
\mathbf{c}_{i}=\left\{\begin{array}{ll}
(0,0) & i=0 \\
c(\cos [(i-1) \pi / 2], \sin [(i-1) \pi / 2]) & i=1,2,3,4 \\
\sqrt{2} \mathrm{c}(\cos [(2 i-9) \pi / 4], \sin [(2 i-9) \pi / 4]) & i=5,6,7,8
\end{array}\right.
\end{equation}
where $ i $ denotes the velocity direction and $ c $ is the lattice speed satisfying $ c=\Delta x/\Delta t $, with $ \Delta x $ the lattice spacing. The body force term $ F_{i} $ is defined as \cite{Guo_F}
\begin{equation}
F_{i}(\mathbf{x}, t)=\omega_{i}\left(1-\frac{1}{2 \tau_{f}}\right)\left[\frac{\mathbf{c}_{i} \cdot \mathbf{F}}{c_{s}^{2}}+\frac{(\mathbf{F u}+\mathbf{u F}):\left(\mathbf{c}_{i} \mathbf{c}_{i}-c_{s}^{2} \mathbf{I}\right)}{2 c_{s}^{4}}\right],
\end{equation}
where $ c_{s} = c/ \sqrt{3} $ is the sound speed of the model, $ \mathbf{F} $ is the buoyancy force, and can be calculated according to Boussinesq assumption:
\begin{equation}
\mathbf{F}=\rho \mathbf{g} \beta\left(T-T_{ref}\right)
\end{equation}
After a series of iterative computations, the macroscopic velocity and pressure can be obtained by
\begin{equation}
\mathbf{u}=\sum_{i} \mathbf{c}_{i} f_{i}+\frac{\Delta t}{2} \mathbf{F},
\end{equation}
\begin{equation}
p=\frac{c_{s}^{2}}{1-\omega_{0}}\left(\sum_{i \neq 0} f_{i}-\omega_{0} \frac{|\mathbf{u}|^{2}}{2 c_{s}^{2}}\right).
\end{equation}
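As a compact illustration of the moment relations above, the following sketch builds the D2Q9 lattice and recovers the macroscopic velocity and pressure from a rest-state equilibrium population. The function and variable names are our own.

```python
import numpy as np

# D2Q9 lattice: discrete velocities, weights, and sound speed (c = 1).
c = np.array([[0, 0], [1, 0], [0, 1], [-1, 0], [0, -1],
              [1, 1], [-1, 1], [-1, -1], [1, -1]])
w = np.array([4/9] + [1/9] * 4 + [1/36] * 4)
cs2 = 1.0 / 3.0                       # c_s^2 = c^2 / 3

def macroscopic(f, F, dt=1.0):
    """u = sum_i c_i f_i + (dt/2) F, and the incompressible-model pressure
    p = cs2/(1 - w0) * (sum_{i!=0} f_i - w0 |u|^2 / (2 cs2))."""
    u = np.einsum("i,ia->a", f, c) + 0.5 * dt * F
    p = cs2 / (1.0 - w[0]) * (f[1:].sum() - w[0] * (u @ u) / (2.0 * cs2))
    return u, p

# Rest-state equilibrium f_i = eta_i * p with eta_0 = (w0 - 1)/cs2 + rho0 and
# eta_i = w_i/cs2 (i != 0): the moments should give u = 0 and the imposed p.
rho0, p0 = 1.0, 0.01
eta = np.concatenate(([(w[0] - 1.0) / cs2 + rho0], w[1:] / cs2))
u, p = macroscopic(eta * p0, F=np.zeros(2))
```

The check at the end confirms that a quiescent equilibrium population returns zero velocity and reproduces the imposed pressure, which is a quick sanity test for any implementation of this model.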
\subsection{Lattice Boltzmann method for temperature field}
\label{section3_2}
For the heat transfer in the fluid and solid domains, the double-relaxation-time LB model proposed by Lu et al. \cite{luIJTS2019} is adopted to simulate the temperature field; it effectively suppresses the non-physical diffusion at the solid-liquid interface during phase transition simulations. The evolution equation of the temperature distribution function can be described as
\begin{equation}
\begin{aligned}
g_{i}\left(\mathbf{x}+\mathbf{c}_{i} \Delta t, t+\Delta t\right)= g_{i}(\mathbf{x}, t)
-\frac{1}{\tau_{g}^{s}}\left[g_{i}^{s}(\mathbf{x}, t)-g_{i}^{seq}(\mathbf{x}, t)\right]
-\frac{1}{\tau_{g}^{a}}\left[g_{i}^{a}(\mathbf{x}, t)-g_{i}^{aeq}(\mathbf{x}, t)\right],
\end{aligned}
\end{equation}
where the superscripts $ s $ and $ a $ denote the symmetric and anti-symmetric parts of the distribution function, and $ {\tau_{g}}^s $ and $ {\tau_{g}}^a $ are the corresponding symmetric and anti-symmetric relaxation times. The
expressions of $ g_{i}^{s}(\mathbf{x}, t), g_{i}^{a}(\mathbf{x}, t), g_{i}^{seq}(\mathbf{x}, t), g_{i}^{aeq}(\mathbf{x}, t) $ are defined by
\begin{equation}
g_{i}^{s}=\frac{g_{i}+{g}_{\bar{i}}}{2} , g_{i}^{a}=\frac{g_{i}-{g}_{\bar{i}}}{2}, g_{i}^{seq}=\frac{g_{i}^{eq}+{g}_ {\bar{i}}^{\mathrm{eq}}}{2}, g_{i}^{\mathrm{aeq}}=\frac{g_{i}^{\mathrm{eq}}-{g}_{\bar{i}}^{\mathrm{eq}}}{2}.
\end{equation}
in which $ \bar{i} $ represents the direction opposite to $ i $. The equilibrium distribution function can be expressed as
\begin{equation}
g_{i}^{e q}=\left\{\begin{array}{ll}
H-C_{p} \theta+\omega_{i} C_{p} \theta\left(1-\frac{|\mathbf{u}|^{2}}{2 c_{s}^{2}}\right), & i=0 \\
\omega_{i} C_{p} \theta\left[1+\frac{\mathbf{c}_{i} \cdot \mathbf{u}}{c_{s}^{2}}+\frac{\left(\mathbf{c}_{i} \cdot \mathbf{u}\right)^{2}}{2 c_{s}^{4}}-\frac{|\mathbf{u}|^{2}}{2 c_{s}^{2}}\right], & i \neq 0
\end{array}\right.
\end{equation}
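The symmetric/anti-symmetric decomposition defined above is easy to verify numerically; the direction ordering and names below are our own convention.

```python
import numpy as np

# Opposite-direction index bar(i) for the D2Q9 ordering
# (0,0),(1,0),(0,1),(-1,0),(0,-1),(1,1),(-1,1),(-1,-1),(1,-1).
opp = np.array([0, 3, 4, 1, 2, 7, 8, 5, 6])

def split(g):
    """Return the symmetric and anti-symmetric parts g^s, g^a of a population."""
    return 0.5 * (g + g[opp]), 0.5 * (g - g[opp])

g = np.arange(9.0)
gs, ga = split(g)
# The split is exact: g = g^s + g^a, with g^s even and g^a odd under i -> bar(i).
```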
The dimensionless relaxation time $ \tau_{g}^{s} $ and $ \tau_{g}^{a} $ can be obtained by
\begin{equation}
\frac{\lambda}{\rho_{0} c_{p}}=c_{s}^{2}\left(\tau_{g}^{a}-0.5\right) \Delta t ,
\end{equation}
\begin{equation}
\frac{1}{\tau_{g}^{s}}+\frac{1}{\tau_{g}^{a}}=2.
\end{equation}
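In practice, both relaxation times follow directly from the lattice thermal diffusivity via the two relations above. The sketch below uses our own function name, with the diffusivity given in lattice units.

```python
def thermal_relaxation_times(alpha, cs2=1.0/3.0, dt=1.0):
    """tau_g^a from alpha = cs2 * (tau_g^a - 0.5) * dt, then tau_g^s from
    the constraint 1/tau_g^s + 1/tau_g^a = 2."""
    tau_a = alpha / (cs2 * dt) + 0.5
    tau_s = 1.0 / (2.0 - 1.0 / tau_a)
    return tau_s, tau_a

tau_s, tau_a = thermal_relaxation_times(alpha=0.05)
```

Note that $\tau_g^a > 0.5$ for any positive diffusivity, so the constraint always yields a positive $\tau_g^s$.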
And the total enthalpy can be obtained as follows
\begin{equation}
H=\sum_{i} g_{i}.
\end{equation}
Finally, the liquid fraction $ f_{l} $ and the temperature $ T $ can be calculated from total enthalpy,
\begin{equation}
f_{l}=\left\{\begin{array}{ll}
0 & H \leq H_{s} \\
\frac{H-H_{s}}{H_{l}-H_{s}} & H_{s}<H<H_{l} \\
1 & H \geq H_{l}
\end{array}\right.
\end{equation}
\begin{equation}
T=\left\{\begin{array}{ll}
\frac{H}{C_{p}} & H \leq H_{s} \\
\frac{H_{l}-H}{H_{l}-H_{s}} T_{s}+\frac{H-H_{s}}{H_{l}-H_{s}} T_{l} & H_{s}<H<H_{l} \\
T_{l}+\frac{H-H_{l}}{C_{p}} & H \geq H_{l}
\end{array}\right.
\end{equation}
in which $ H_{s}=C_{p}T_{s} $ and $ H_{l}=C_{p}T_{l}+L $ are the total enthalpies of the solid and liquid phases, respectively, and $ T_{s} $ and $ T_{l} $ denote the solidus and liquidus temperatures, respectively.
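The piecewise enthalpy-to-state conversion above maps directly to code. This is a sketch with our own function name, taking $H_{l}=C_{p}T_{l}+L$ for a mushy zone between $T_{s}$ and $T_{l}$ (which reduces to a single melting point when $T_{s}=T_{l}=T_{m}$).

```python
import numpy as np

def enthalpy_to_state(H, cp, L, Ts, Tl):
    """Map total enthalpy to (liquid fraction, temperature) piecewise,
    with Hs = cp*Ts and Hl = cp*Tl + L bounding the mushy zone."""
    Hs, Hl = cp * Ts, cp * Tl + L
    fl = np.clip((H - Hs) / (Hl - Hs), 0.0, 1.0)
    T = np.where(H <= Hs, H / cp,                       # fully solid
        np.where(H >= Hl, Tl + (H - Hl) / cp,          # fully liquid
                 ((Hl - H) * Ts + (H - Hs) * Tl) / (Hl - Hs)))  # mushy
    return fl, T

# One node below Hs, one in the mushy zone, one above Hl:
fl, T = enthalpy_to_state(np.array([-1.0, 0.5, 2.0]),
                          cp=1.0, L=1.0, Ts=0.0, Tl=0.0)
```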
\subsection{LBM boundary conditions}
In this context, a volumetric LB scheme proposed by Huang et al. is adopted to handle the solid-phase nodes, which can be expressed as
\begin{equation}
f_{i}=f_{l} f_{i}^{*}+\left(1-f_{l}\right) f_{i}^{\mathrm{eq}}\left(\rho, \mathbf{u}_{s}\right)
\end{equation}
where $ f_{i}^{*}=f_{i}\left(\mathbf{x}+\mathbf{c}_{i} \Delta t, t+\Delta t\right) $ is given by Eq. (\ref{F}) and $\mathbf{u}_{s}=\mathbf{0}$ is the velocity of the solid phase. Moreover, the nonequilibrium
extrapolation scheme (NEES) proposed by Guo et al. \cite{GuoPf2002} is applied to all wall boundaries for its second-order accuracy.
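The volumetric treatment amounts to a linear blend of the streamed populations with the solid-node rest equilibrium, weighted by the local liquid fraction. The sketch below uses our own names and placeholder values.

```python
import numpy as np

def blend_with_solid(f_star, feq_rest, fl):
    """f_i = fl * f_i^* + (1 - fl) * f_i^eq(rho, u_s = 0): fully solid nodes
    (fl = 0) are pinned at the rest equilibrium, fully liquid nodes (fl = 1)
    keep the streamed populations, and mushy nodes interpolate linearly."""
    return fl * f_star + (1.0 - fl) * feq_rest

f_star = np.linspace(0.0, 1.0, 9)   # streamed post-collision populations
feq    = np.full(9, 0.5)            # rest-state equilibrium (placeholder)
```

This blending enforces zero velocity in the metal ligaments without any explicit bounce-back bookkeeping at the pore walls.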
\subsection{Validation of phase change model}
\label{section3_3}
\begin{figure}[h]
\centering
\subfigure[]{ \label{fig3}
\includegraphics[width=0.475\textwidth]{validation_1-eps-converted-to.pdf}}
\subfigure[]{ \label{fig4}
\includegraphics[width=0.475\textwidth]{validation_2-eps-converted-to.pdf}}
\caption{Comparison between the present results and those of Huang et al. \cite{huangIJHMT2013}: (a) $ Nu_{ave} $ and $ f_{l} $, and (b) phase interface positions at $ Fo=4, 10, 20 $.}
\end{figure}
\noindent In order to demonstrate the ability of the present code to simulate fluid flow with melting heat transfer, we simulated the melting of a pure PCM in a square cavity at $ Ra=25000 $, $ Ste = 0.01 $, and $ Pr= 0.02 $. A grid resolution of $512 \times 512$ is employed, which is fine enough to give a grid-independent solution. We compare the average Nusselt number of the left wall, the total liquid fraction and the melting interface with the results of Huang et al. \cite{huangIJHMT2013}. Fig.\ref{fig3} shows $ Nu_{ave} $ and $ f_{l} $ at different Fourier numbers ($ Fo $); the heat transfer behavior matches Huang's results well over the whole range of $ Fo $. Moreover, the comparison of the solid-liquid interface positions at $ Fo=4$, $Fo=10 $ and $Fo=20 $ in Fig.\ref{fig4} also shows good agreement. These results indicate that the present LBM model and simulation code are accurate and reliable.
\section{Results and discussion}
\label{section4}
In this section, numerical simulations of the solid-liquid phase transition in gradient porous media are carried out via the LBM. We mainly investigate the influence of the geometric structure and arrangement of the gradient porous media on the melting of PCM, measured by the nondimensional Fourier number ($ Fo $) and the liquid fraction ($ f_{l} $). Unless otherwise stated, the Stefan number and the Prandtl number are set to 0.1 and 0.2, as in previous studies [38]. In order to highlight the physical difference between the porous medium and the PCM, the thermal conductivity of the porous medium is set to 500 times that of the PCM.
We first examine the effect of the gradient porosity on the melting behavior at $ Ra=10^{6} $. Fig. \ref{fig7} illustrates the liquid fraction $ f_{l} $ as a function of dimensionless time $ Fo $ for Case A and Case B with different gradient structures: uniform, positive and negative. Based on the difference in the slope of the melting curve, the melting process can be divided into three stages. At the early stage, heat is mainly transferred through conduction, so the curves of the three gradient porosity structures completely overlap. As time elapses, $ f_{l} $ increases with $ Fo $; the trend is first steep and then gradually weakens, which indicates a degradation of the heat transfer toward the melting interface. Specifically, the liquid fraction of the negative gradient in Case A is higher than for the other two structures, and this trend is maintained until complete melting, which may be due to the
\begin{figure}[H]
\centering
\subfigure[Case A]{ \label{fig5}
\includegraphics[width=0.4\textwidth]{1000000_fo_fl-eps-converted-to.pdf}}
\subfigure[Case B]{ \label{fig6}
\includegraphics[width=0.4\textwidth]{y1000000_fo_fl-eps-converted-to.pdf}}
\caption{Effects of gradient porosity on the total liquid fraction at $ Ra=10^{6} $ for (a) Case A and (b) Case B.}
\label{fig7}
\end{figure}
\noindent fact that the left part of the negative gradient structure, with its relatively high porosity, allows a growth in natural convection strength. However, the situation in Case B is a little more complicated: a turning point is observed at which the liquid fraction of the negative gradient increases at a higher rate while the melting rates of the other cases decrease. Eventually, as the main heat transfer mechanism changes from natural convection back to heat conduction, the melting rate of all three gradient structures is reduced. The negative gradient completes melting in the shortest time in both Case A and Case B, followed by the uniform and the positive cases.
In order to intuitively understand the flow transition process in the different gradient structures, Fig. \ref{fig8} shows the streamline and liquid fraction distributions for the different gradient structures at $ Ra=10^{6} $, where
\begin{figure}[H]
\centering
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Case A: negative gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_1000000_09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_1000000_09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_1000000_09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Case A: positive gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_1000000_08095-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_1000000_08095-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_1000000_08095-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Uniform gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_1000000_0875-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_1000000_0875-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_1000000_0875-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Case B: negative gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_1000000_y09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_1000000_y09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_1000000_y09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Case B: positive gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_1000000_y08095-eps-converted-to.pdf}
\caption*{$ Fo=0.2 $}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_1000000_y08095-eps-converted-to.pdf}
\caption*{$ Fo=0.9 $}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_1000000_y08095-eps-converted-to.pdf}
\caption*{$ Fo=3.0 $}
\end{minipage}
\caption{The effect of gradient porosity on the total liquid fraction at $ Ra=10^{6} $.}
\label{fig8}
\end{figure}
\noindent the liquid and solid phase zones are represented by the white and blue regions, and the metal foam is denoted by the black regions. The phenomena in Fig. \ref{fig8} are caused by the combined effect of natural convection and conduction during the phase change; the final performance depends on which of the two is dominant during the melting process. In the initial stage of melting, there is not enough liquid PCM to support natural convection, so natural convection has little effect on heat transfer and conduction is the main heat transfer mechanism. As a consequence, the solid-liquid interface remains roughly perpendicular to the upper and lower walls for all three gradient structures. As $ Fo $ increases ($ Fo=0.9 $), the growing amount of melted PCM provides a larger carrier for natural convection, and the main heat transfer mechanism gradually changes from conduction to convection. As the results show, the velocity fields are consistent with the phase-change interface. The molten PCM, carrying energy, flows upward driven by the buoyancy force; when it reaches the top wall, it changes direction under the influence of the temperature gradient and transfers energy to the vicinity of the solid-liquid interface. Consequently, the phase change interface moves forward and gradually tilts, the melting zone expands, and the liquid zone gradually thickens until the top of the interface reaches the right side. In the final stage of the phase transition, the driving force of natural convection is reduced, and the PCM at the bottom right melts slowly. This phenomenon is called the ``bottom corner phenomenon'' \cite{guai}; it worsens the conduction heat transfer in the bottom region, reduces the energy storage efficiency, and should be avoided. 
Furthermore, the PCM near the metal foam always melts preferentially because the conductivity of the metal foam is higher than that of the PCM, which eventually leads to tilting and fluctuation of the solid-liquid interface. However, at the bottom of the PCM, natural convection is weaker owing to weaker buoyancy and increasing thermal resistance; therefore, heat conduction plays the major role in the heat transfer there.
On the other hand, the difference in gradient structure is also an influencing factor. An increase of porosity reduces the amount of metal available for conduction but also reduces the flow resistance that suppresses natural convection of the molten PCM. Compared with the uniform case, the negative gradient in Case A provides the lowest thermal conductivity and the lowest convection resistance in the left area because it has the highest porosity there. As a result, stronger natural convection forms in the left area, bringing more heat to the solid-liquid interface, so the negative gradient has the fastest melting speed in the mid-term. The negative gradient also has the lowest porosity and the highest thermal conductivity on the right side; therefore, heat is transferred more effectively to the right side of the cavity during the final melting stage, resulting in a shorter total melting time. The positive gradient has the opposite porosity distribution, which provides the highest resistance to natural convection in the left area and a lower thermal conductivity at the bottom right section of the enclosure; therefore, this structure has a lower melting rate. Moreover, although the higher thermal conductivity on the left side of the positive gradient enlarges the melting area in the early stage, this advantage does not show up in the melting curves above because the amount of PCM in that region is small. For Case B, the negative gradient provides higher conductivity at the top and lower suppression of circulation at the bottom. As a result, the two main heat transfer mechanisms are both effective near the heat source, and this structure further strengthens the scouring action of the local natural convection at the bottom. 
The advantage of lower suppression of circulation at the bottom becomes more effective in the final melting stages, especially in the bottom right section of the unit, where the other porous structures have slower melting rates. Additionally, the combined effect of natural convection and heat conduction leads to differences in local energy storage, which appear in Fig. \ref{fig8} as local protrusions of the phase interface. Compared with Case A, the negative gradient of Case B has a shorter melting time mainly because this structure eliminates the bottom corner phenomenon through the continuous scouring action of the local natural convection at the bottom. This shows the importance of weak suppression of circulation at the bottom for faster melting of PCM in energy storage units.
In addition, the numerical results show that the relative performance of the different gradients changes at low Rayleigh numbers. Fig. \ref{fig11} illustrates the $ Fo $-dependent $ f_{l} $ for the three gradient porosity conditions at $ Ra=10^{4} $. For Case A, the variation of the liquid fraction is very similar to that at high Rayleigh number. The main difference is that the order in which the positive and negative gradients complete melting is reversed: the positive gradient structure melts first, with a higher liquid fraction throughout. It can be observed from the liquid fraction distributions in Fig. \ref{fig12} that the positive gradient structure relies on the high thermal conductivity brought by the lower porosity on the left to melt faster in the left area, and then completes melting first on the right owing to the strong natural convection permitted by the lower flow resistance there. For Case B, there is little difference between the melting curves of the three gradients over the whole melting process. It can be seen from Fig. \ref{fig12} that regions with higher porosity melt more quickly; for this reason, the last melted area is always the corner with lower porosity. There are obvious steps in the solid-liquid interface, located exactly at the interface where the porosity changes. Compared with Case B, the positive gradient of Case A melts first owing to the continuous scouring action of the local natural convection. The evolution of the melting front further illustrates the superiority of natural convection in eliminating corner regions.
\begin{figure}[H]
\centering
\subfigure[Case A]{ \label{fig9}
\includegraphics[width=0.4\textwidth]{10000_fo_fl-eps-converted-to.pdf}}
\subfigure[Case B]{ \label{fig10}
\includegraphics[width=0.4\textwidth]{y10000_fo_fl-eps-converted-to.pdf}}
\caption{Effects of gradient porosity on the total liquid fraction at $ Ra=10^{4} $ for (a) Case A and (b) Case B.}
\label{fig11}
\end{figure}
The simulation results change dramatically with the Rayleigh number, so we next investigate the influence of $ Ra $ on melting. The Rayleigh number is an important parameter reflecting the intensity of natural convection. It can be seen from Eq. \ref{equation_Ra} that the buoyancy force, the driving force of natural convection, gradually increases as $ Ra $ increases; consequently, the intensity of natural convection grows with $ Ra $. Fig. \ref{fig15} depicts the influence of the Rayleigh number on the positive and negative gradients
\begin{figure}[H]
\centering
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Case A: negative gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_10000_09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_10000_09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_10000_09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Case A: positive gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_10000_08095-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_10000_08095-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_10000_08095-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Uniform gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_10000_0875-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_10000_0875-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_10000_0875-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Case B: negative gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_10000_y09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_10000_y09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_10000_y09508-eps-converted-to.pdf}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\centering
\caption*{Case B: positive gradient}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.2_10000_y08095-eps-converted-to.pdf}
\caption*{$ Fo=0.2 $}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{0.9_10000_y08095-eps-converted-to.pdf}
\caption*{$ Fo=0.9 $}
\end{minipage}
\begin{minipage}[c]{0.2\textwidth}
\includegraphics[width=\textwidth]{3_10000_y08095-eps-converted-to.pdf}
\caption*{$ Fo=3.0 $}
\end{minipage}
\caption{Liquid fraction distributions for the different gradient porosities at $ Ra=10^{4} $.}
\label{fig12}
\end{figure}
\noindent of Case A and Case B. As shown in this figure, a critical point near $ Ra=10^{5} $ is observed: the positive gradient structure is superior to the negative gradient structure when $ Ra $ is below this critical point, while the negative gradient becomes more advantageous once $ Ra $ exceeds it. The positive gradients exhibit similar trends in the two cases. Moreover, one can also find that the melting time of the positive gradient structure increases almost to a steady value as $ Ra $ increases. For further insight into the overall melting behaviour of the positive gradient under different $ Ra $, Fig. \ref{fig18} illustrates the dimensionless-time dependence of the liquid fraction for different $ Ra $ conditions. It can be clearly seen that the Rayleigh number has little effect in the early stage, since heat conduction is then the main heat transfer mechanism, whereas a high Rayleigh number brings stronger natural convection as the liquid-phase PCM fraction increases. To further confirm this observation, we also present the temperature and vertical velocity distributions at the mid-height of the domain for the positive gradient in Fig. \ref{fig21}. It is clearly observed that the overall temperature decline is gentler than under the $Ra=10^{6}$ condition owing to the reduced Rayleigh number, and that the vertical velocity decreases while the velocity boundary layer thickens accordingly as the Rayleigh number decreases. As the Rayleigh number increases, the natural convection effect is strengthened, causing intense flow in the liquid PCM, which leads to greater temperature changes near the cavity wall. However, it is worth noting that a turning point is observed, at which the slope of the liquid fraction curve changes dramatically, when the time approaches $ Fo = 3 $.
Subsequently, the liquid fraction at low Rayleigh number increases at a higher rate than in the other cases, while the melting rate decreases at high Rayleigh number. Fig. \ref{fig22} shows the temperature distributions, liquid fraction distributions and streamlines of the positive gradient at the turning point for different $ Ra $ conditions. As can be seen, the remaining solid gradually concentrates towards the lower right corner and heat accumulates in the upper part as $ Ra $ increases. For the negative gradient, the enhancement of natural convection shortens the total melting time for Case A, while the trend for Case B first rises for the reasons given above and then declines owing to the enhancement of natural convection.
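The scaling argument above, that the buoyancy driving force grows with $Ra$, can be sketched numerically. The snippet below assumes the classical definition $Ra = g\beta\Delta T H^{3}/(\nu\alpha)$; the exact nondimensionalization in Eq. \ref{equation_Ra} of the paper may differ, and all property values here are hypothetical sample numbers.

```python
# Illustrative only: classical Rayleigh number for a cavity of height H.
# Ra compares the buoyancy driving force to viscous and thermal damping.

def rayleigh(gravity, beta, d_temp, height, nu, alpha):
    """Ra = g * beta * dT * H^3 / (nu * alpha)."""
    return gravity * beta * d_temp * height ** 3 / (nu * alpha)

base = rayleigh(9.81, 2.0e-4, 10.0, 0.1, 1.0e-6, 1.0e-7)  # sample values
# Doubling the temperature difference doubles Ra; doubling the cavity
# height multiplies Ra by 8 -- stronger buoyancy, stronger convection.
```

This makes explicit why the liquid-fraction curves separate only once convection sets in: in the conduction-dominated stage the buoyancy term plays no role, whatever the value of $Ra$.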
\begin{figure}[H]
\centering
\subfigure[Case A]{ \label{fig13}
\includegraphics[width=0.4\textwidth]{Case_A_Ra_fo-eps-converted-to.pdf}}
\subfigure[Case B]{ \label{fig14}
\includegraphics[width=0.4\textwidth]{Case_B_Ra_fo-eps-converted-to.pdf}}
\caption{Effects of $ Ra $ on (a) Case A and (b) Case B.}
\label{fig15}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[Case A]{ \label{fig16}
\includegraphics[width=0.4\textwidth]{x_positive-eps-converted-to.pdf}}
\subfigure[Case B]{ \label{fig17}
\includegraphics[width=0.4\textwidth]{y_positive-eps-converted-to.pdf}}
\caption{The effect of Rayleigh number on melting time for (a) Case A and (b) Case B.}
\label{fig18}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[]{ \label{fig19}
\includegraphics[width=0.4\textwidth]{x09508_0.5Ytem-eps-converted-to.pdf}}
\subfigure[]{ \label{fig20}
\includegraphics[width=0.4\textwidth]{x09508_0.5YV-eps-converted-to.pdf}}
\caption{(a) Temperature distributions and (b) vertical velocity distributions along the $Y=0.5$ cross-section (fully melted) for Case A: positive gradient.}
\label{fig21}
\end{figure}
\begin{figure*}[htbp]
\centering
\subfigure{
\begin{minipage}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{Fo=3_25000_x08095_1-eps-converted-to.pdf}
\includegraphics[width=\textwidth]{Fo=3_25000_x08095_2-eps-converted-to.pdf}
\caption*{$ Ra=2.5 \times 10^{4} $}
\end{minipage}
}
\subfigure{
\begin{minipage}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{Fo=3_100000_x08095_1-eps-converted-to.pdf}
\includegraphics[width=\textwidth]{Fo=3_100000_x08095_2-eps-converted-to.pdf}
\caption*{$ Ra=1.0 \times 10^{5} $}
\end{minipage}
}
\subfigure{
\begin{minipage}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{Fo=3_200000_x08095_1-eps-converted-to.pdf}
\includegraphics[width=\textwidth]{Fo=3_200000_x08095_2-eps-converted-to.pdf}
\caption*{$ Ra=2.0 \times 10^{5} $}
\end{minipage}
}
\subfigure{
\begin{minipage}[b]{0.2\textwidth}
\includegraphics[width=\textwidth]{Fo=3_300000_x08095_1-eps-converted-to.pdf}
\includegraphics[width=\textwidth]{Fo=3_300000_x08095_2-eps-converted-to.pdf}
\caption*{$ Ra=3.0 \times 10^{5} $}
\end{minipage}
}
\caption{Temperature, streamline and liquid fraction distributions at the turning point for the positive gradient of Case A.}
\label{fig22}
\end{figure*}
Finally, the influence of the particle diameter is explored, and the corresponding numerical results are presented in Fig. \ref{fig26} and Fig. \ref{fig29}. It can be seen from Fig. \ref{fig24} that the liquid fraction curves cannot be distinguished at the beginning of the melting process, which indicates that the effect of the particle diameter on conduction is not significant. After about $ Fo=3.5 $, the influence of the particle diameter appears gradually, because a decrease in particle size leads to a growth in the internal surface available for heat transfer in the porous medium. However, as the Rayleigh number increases further, the situation begins to change: one can observe from Fig. \ref{fig25} and Fig. \ref{fig28} that the total melting time of the PCM decreases as the particle size increases, which is caused by the enhancement of convection. The reason for this phenomenon is that smaller particle diameters do enhance heat transfer by providing larger heat transfer surfaces, but as the Rayleigh number increases, natural convection plays an increasingly strong role, and smaller particle diameters lead to lower permeability, which suppresses natural convection and thus weakens the overall heat transfer. Further, one can deduce that the variation of the total melting time depends only weakly on the particle diameter, since the
\begin{figure}[H]
\centering
\subfigure[$Ra=10^{4}$]{ \label{fig24}
\includegraphics[width=0.4\textwidth]{x10000_08095_R-eps-converted-to.pdf}
\quad
\includegraphics[width=0.4\textwidth]{x10000_09508_R-eps-converted-to.pdf}}
\subfigure[$Ra=10^{6}$]{ \label{fig25}
\includegraphics[width=0.4\textwidth]{x1000000_08095_R-eps-converted-to.pdf}
\quad
\includegraphics[width=0.4\textwidth]{x1000000_09508_R-eps-converted-to.pdf}}
\caption{Effect of particle diameters on the melting evolution of the PCM for Case A.}
\label{fig26}
\end{figure}
\begin{figure}[H]
\centering
\subfigure[$Ra=10^{4}$]{ \label{fig27}
\includegraphics[width=0.4\textwidth]{y10000_08095_R-eps-converted-to.pdf}
\includegraphics[width=0.4\textwidth]{y10000_09508_R-eps-converted-to.pdf}}
\subfigure[$Ra=10^{6}$]{ \label{fig28}
\includegraphics[width=0.4\textwidth]{y1000000_08095_R-eps-converted-to.pdf}
\includegraphics[width=0.4\textwidth]{y1000000_09508_R-eps-converted-to.pdf}}
\caption{Effect of particle diameters on the melting evolution of the PCM for Case B.}
\label{fig29}
\end{figure}
\noindent curves in Fig. \ref{fig26} show almost no regular ordering.
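The competition just described, larger internal surface versus lower permeability, can be sketched quantitatively. The snippet assumes the widely used Kozeny-Carman permeability and the specific surface of a packed bed of spheres; the drag model actually used in the paper may differ, and the porosity value is only representative.

```python
# Why smaller particles help conduction but hinder convection (sketch).

def permeability(eps, dp):
    """Kozeny-Carman permeability: K = eps^3 dp^2 / (175 (1-eps)^2)."""
    return eps ** 3 * dp ** 2 / (175.0 * (1.0 - eps) ** 2)

def specific_surface(eps, dp):
    """Heat-exchange surface per unit volume for spherical particles."""
    return 6.0 * (1.0 - eps) / dp

eps = 0.875  # representative porosity (hypothetical)
# Halving dp doubles the internal surface (better conduction coupling)
# but divides the permeability by 4, throttling natural convection.
```

This is consistent with the trend reported above: at low $Ra$ the extra surface of small particles wins, while at high $Ra$ the permeability penalty dominates.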
\section{Conclusion}
\label{my_section5}
In this paper, the total enthalpy-based lattice Boltzmann model is adopted to simulate solid-liquid phase change in a square cavity filled with a gradient porous structure. To improve computational efficiency, the proposed algorithm is parallelized using CUDA. The influences of the gradient porosity, Stefan number, gradient direction and Rayleigh number on heat transfer in the composite are investigated in detail.
According to the present numerical results, porous media with gradient porosity affect the solid-liquid phase change process in different ways. The positive gradient porosity yields a further reduction of the melting time when the Rayleigh number is small. As the Rayleigh number continues to increase and exceeds a certain critical value, the negative gradient provides further enhancement of the melting heat transfer, while the positive gradient deteriorates the heat transfer performance. The melting time of the positive gradient increases with the Rayleigh number, while the total melting time of the negative gradient increases with the Rayleigh number when the Rayleigh number is greater than $1.0 \times 10^{5} $. Furthermore, for both the positive and the negative gradient, decreasing the particle size increases the effective thermal conductivity while creating an obstacle to natural convection.
\section*{Conflict of interest}
We declare that we have no financial or personal relationships with other people or organizations that could inappropriately influence our work, and no professional or other personal interest of any nature or kind in any product, service and/or company that could be construed as influencing the position presented in, or the review of, this manuscript.
\section*{Acknowledgements}
This work is financially supported by the National Natural Science Foundation of China (Grant No. 12002320), and the Fundamental Research Funds for the Central Universities (Grant Nos. CUG180618 and CUGGC05). We would like to thank Yin Jiang for his program to generate porous media which greatly helped the work.
\section{Introduction}
The quantum Yang-Baxter equation is an equation in the field of mathematical physics and it lies in the foundation of the theory of quantum groups.
Let $R:V \otimes V \rightarrow V \otimes V$ be a linear operator, where $V$ is a vector space. The quantum Yang-Baxter equation is the equality $R^{12}R^{13}R^{23}=R^{23}R^{13}R^{12}$ of linear transformations on $V \otimes V \otimes V$, where $R^{ij}$ means $R$ acting on the $i$th and $j$th components.\\
A set-theoretical solution of this equation is a solution for which $V$ is a vector space spanned by a set $X$ and $R$ is the linear operator induced by a mapping $X \times X \rightarrow X \times X$. The study of these was suggested by Drinfeld \cite{drinf}. Etingof, Soloviev and Schedler study set-theoretical solutions $(X,S)$ of the quantum Yang-Baxter equation that are non-degenerate, involutive and braided \cite{etingof}. To each such solution, they associate a group called the structure group and they show that this group satisfies some properties.
They also give a classification of such solutions $(X,S)$ up to isomorphism, when the cardinality of $X$ is up to eight. As an example, there are 23 solutions for $X$ with four elements, 595 solutions for $X$ with six elements and 34528 solutions for $X$ with eight elements. In this paper, we establish a one-to-one correspondence between non-degenerate, involutive and braided set-theoretical solutions of the quantum Yang-Baxter equation (up to isomorphism) and Garside presentations which satisfy some additional conditions up to t-isomorphism (a notion that will be defined below). The main result is as follows.
\begin{thm_A}
$(i)$ Let $X$ be a finite set, and $(X,S)$ be a non-degenerate, involutive and braided set-theoretical solution of the quantum Yang-Baxter equation. Then the structure group of $(X,S)$ is a Garside group.\\ $(ii)$ Conversely, assume that $\operatorname{Mon} \langle X\mid R \rangle$ is a Garside monoid such that:\\
- the cardinality of $R$ is $n(n-1)/2$, where $n$ is the cardinality of $X$ and each side of a relation in $R$ has length 2 and \\
- if the word $x_{i}x_{j}$ appears in $R$, then it appears only once.\\
Then there exists a function $S: X \times X \rightarrow X \times X$ such that $(X,S)$ is a non-degenerate, involutive and braided set-theoretical solution and $\operatorname{Gp} \langle X\mid R \rangle$ is its structure group.
\end{thm_A}
The main idea of the proof is to express the right and left complement on the generators in terms of the functions that define $(X,S)$. Moreover, we prove that the structure group of a set-theoretical solution satisfies some specific constraints. Picantin defines the notion of $\Delta$-pure Garside group in \cite{picantin} and he shows that the center of a $\Delta$-pure Garside group is a cyclic subgroup that is generated by some exponent of its Garside element.
\begin{thm_A}
Let $X$ be a finite set, and $(X,S)$ be a non-degenerate, involutive and braided set-theoretical solution of the quantum Yang-Baxter equation. Let $G$ be the structure group of $(X,S)$ and $M$ the monoid with the same presentation. Then \\
$(i)$ The right least common multiple of the elements in $X$ is a Garside element in $M$.\\
$(ii)$ The (co)homological dimension of $G$ is equal to the cardinality of $X$.\\
$(iii)$ The group $G$ is $\Delta$-pure Garside if and only if $(X,S)$ is indecomposable.
\end{thm_A}
Point \emph{$(i)$} above means that $G$ is Garside in the restricted sense of \cite{deh_Paris}.
Let us observe that, independently, Gateva-Ivanova and Van den Bergh define in \cite{gateva+van} monoids and groups of left and right I-type and they show that they yield solutions to the quantum Yang-Baxter equation. They show also that a monoid of left I-type is cancellative and has a group of fractions that is torsion-free and Abelian-by-finite. Jespers and Okninski extend their results in \cite{jespers+okninski}, and establish a correspondence between groups of I-type and the structure group of a non-degenerate, involutive and braided set-theoretical solution. Using our result, this makes a correspondence between groups of I-type and the class of Garside groups studied in this paper.
They also remark that the defining presentation of a monoid of I-type satisfies the right cube condition, as defined by Dehornoy in \cite[Prop.4.4]{deh_complte}. So, the necessity of being Garside can be derived from the combination of the results from \cite{jespers+okninski,gateva+van}. Our methods in this paper are different as we use the tools of reversing and complement developed in the theory of Garside monoids and groups and our techniques of proof are uniform throughout the paper. It can be observed that our results imply some earlier results by Gateva-Ivanova. Indeed, she shows in \cite{gateva} that the monoid corresponding to a special case of non-degenerate, involutive and braided set-theoretical solution (square-free) has a structure of distributive lattice with respect to left and right divisibility and that the left least common multiple of the generators is equal to their right least common multiple and she calls this element the principal monomial.\\
The paper is organized as follows.
In section $2$, we give some preliminaries on Garside monoids.
In section $3$, we give the definition of the structure group of a non-degenerate, involutive and braided set-theoretical solution and we show that it is Garside, using the criteria developed by Dehornoy in \cite{deh_francais}.
This implies that this group is torsion-free from \cite{deh_torsion} and biautomatic from \cite{deh_francais}.
In section $4$, we show that the right least common multiple of the generators is a Garside element and that the (co)homological dimension of the structure group of a non-degenerate, involutive and braided set-theoretical solution is equal to the cardinality of $X$. In section $5$, we give the definition of a $\Delta$-pure Garside group and we show that the structure group of $(X,S)$ is $\Delta$-pure Garside if and only if $(X,S)$ is indecomposable.
In section $6$, we establish a converse to the results of section $3$, namely that a Garside monoid satisfying some additional conditions defines a non-degenerate, involutive and braided set-theoretical solution of the quantum Yang-Baxter equation.
Finally, in section $7$, we address the case of non-involutive solutions. There, we consider the special case of permutation solutions that are not involutive and we show that their structure group is Garside. We could not extend this result to general solutions, although we conjecture this should be true. At the end of the section, we give the form of a Garside element in the case of permutation solutions.\\
\begin{acknowledgment}
This work is a part of the author's PhD research, done at the Technion under
the supervision of Professor Arye Juhasz. I am very grateful to
Professor Arye Juhasz, for his patience, his encouragement and his many helpful remarks.
I am also grateful to Professor Patrick Dehornoy for his comments on this paper.
\end{acknowledgment}
\section{ Garside monoids and groups}
All the definitions and results in this section are from
\cite{deh_francais} and \cite{deh_livre}. In this paper, if the element $x$ of $M$ is in the equivalence class of the word $w$, we say that \emph{$w$ represents $x$}.
\subsection{Garside monoids}
Let $M$ be a monoid and let $x,y,z$ be elements in $M$. The element $x$ is \emph{a left divisor} of $z$ if there is an element $t$ such that $z=xt$ in $M$ and $z$ is
\emph{a right least common multiple (right lcm)} of $x$ and $y$ if $x$ and $y$ are left divisors of $z$ and
additionally if there is an element $w$ such that $x$ and $y$ are left divisors of $w$, then
$z$ is left divisor of $w$. We denote it by $z=x\vee y$.
\emph{The complement at right of
$y$ on $x$} is defined to be an element $c\in M$ such that $x\vee y= xc$, whenever $x\vee y$ exists. We denote it by
$c=x \setminus y$ and by definition, $x\vee y=x(x \setminus y)$.
Dehornoy shows that if $M$ is left cancellative and $1$ is the unique invertible element in $M$, then the right lcm and the right complement of two elements are unique, whenever they exist \cite{deh_francais}. We refer the reader to \cite{deh_livre,deh_francais} for the definitions of the left lcm and the left and right gcd of two elements. An element $x$ in $M$ is \emph{an atom} if $x \neq 1$ and $x=yz$ implies $y=1$ or $z=1$. The \emph{norm} $\parallel x \parallel$ of $x$ is defined to be the supremum of the lengths of the
decompositions of $x$ as a product of atoms. The monoid $M$ is
\emph{atomic} if $M$ is generated by its atoms and for every $x$
in $M$ the norm of $x$ is finite. It holds that if all the relations in $M$
are length preserving, then $M$ is atomic, since each element $x$ of $M$ has a finite norm as all the words which represent $x$ have the same length.
A monoid $M$ is \emph{Gaussian} if $M$ is
atomic, left and right cancellative, and if any two elements in
$M$ have a left and right gcd and lcm. If $\Delta$ is an element in $M$, then $\Delta$ is a \emph{Garside element} if the left divisors of $\Delta$ are the same as the right divisors, there is a finite
number of them and they generate $M$. A monoid $M$ is \emph{Garside} if $M$ is Gaussian and it contains a Garside element. A group $G$ is a \emph{Gaussian group} (respectively
a \emph{Garside group}) if there exists a Gaussian monoid $M$
(respectively a Garside monoid) such that $G$ is the fraction
group of $M$. A Gaussian monoid satisfies both left and right Ore's conditions, so it embeds in its group of fractions (see \cite{Clifford}).
As an example, braid groups and Artin groups of finite type \cite{garside}, torus knot groups \cite{picantin_torus} are Garside groups.
\begin{defn}\cite[Defn.1.6]{deh_francais}\label{def_conditions}
Let $M$ be a monoid. $M$ satisfies:\\
- $(C_{0})$ if $1$ is the unique invertible element in $M$.\\
- $(C_{1})$ if $M$ is left cancellative.\\
- $(\tilde{C_{1}})$ if $M$ is right cancellative.\\
- $(C_{2})$ if any two elements in $M$ with a right common multiple
admit
a right lcm.\\
- $(C_{3})$ if $M$ has a finite generating set $P$ closed under
$\setminus$, i.e if $x,y \in P$ then $x \setminus y \in P$.
\end{defn}
\begin{thm}\cite[Prop. 2.1]{deh_francais} \label{gars_critere}
A monoid $M$ is a Garside monoid if and only if $M$ satisfies the
conditions $(C_{0})$, $(C_{1})$, $(\tilde{C_{1}})$, $(C_{2})$,
and $(C_{3})$.
\end{thm}
\subsection{Recognizing Garside monoids}
Let $X$ be an alphabet and denote by $\epsilon$ the empty word in $X^{*}$. A partial function $f$ from $X\times X$ into $X^{*}$ is
a \emph{complement} on $X$ if $f(x,x)=\epsilon$ holds for every
$x$ in $X$, and $f(y,x)$ exists whenever $f(x,y)$ does.
The congruence on $X^{*}$ generated by the pairs $(xf(x,y),yf(y,x))$ with $(x,y)$ in the domain of $f$ is denoted by ``$\equiv^{+}$''. The monoid \emph{associated with $f$} is $X^{*}/ \equiv^{+}$ or in other words the monoid $\operatorname{Mon}\langle X\mid xf(x,y)=yf(y,x)\rangle$ (with $(x,y)$ in the domain of $f$). The complement mapping considered so far is defined on letters
only. Its extension on words is called \emph{word reversing} (see \cite{deh_livre}).
\begin{ex}
\label{example_struct_gp} Let $M$ be the monoid generated by $X=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}$ and defined by the following $10$ relations.\\
$\begin{array}{ccccc}
x^{2}_{1}=x^{2}_{2} & x_{2}x_{5}=x_{5}x_{2} &x_{1}x_{2}=x_{3}x_{4} & x_{1}x_{5}=x_{5}x_{1}&
x_{1}x_{3}=x_{4}x_{2}\\ x^{2}_{3}=x^{2}_{4} &
x_{2}x_{4}=x_{3}x_{1} & x_{3}x_{5}=x_{5}x_{3}&
x_{2}x_{1}=x_{4}x_{3} &x_{4}x_{5}=x_{5}x_{4}
\end{array}$\\
Then the complement $f$ is defined totally on $X \times X$ and the monoid associated to $f$,
$X^{*} /\equiv^{+}$, is $M$.
As an example, $f(x_{1},x_{2})=x_{1}$ and
$f(x_{2},x_{1})=x_{2}$ are obtained from the relation
$x^{2}_{1}=x^{2}_{2}$, since it holds that $f(x_{1},x_{2})=x_{1}\setminus x_{2}$.
\end{ex}
Let $f$ be a complement on $X$. For $u,v,w \in X^{*}$, $f$ is \emph{coherent} at $(u,v,w)$ if either
$( (u
\setminus v ) \setminus (u \setminus w))
\setminus((v \setminus u) \setminus(v \setminus
w))\equiv^{+} \epsilon$ \ holds, or neither of the words $( (u
\setminus v )\setminus (u\setminus w)) , ((v
\setminus u)\setminus(v \setminus w))$ exists. The complement
\emph{$f$ is coherent} if it is coherent at every triple $(u,v,w)$ with $u,v,w \in X^{*}$.
Dehornoy shows that if the monoid is atomic then it is enough to show the coherence of $f$ on its set of atoms. Moreover, he shows that if $M$ is a monoid associated with a coherent complement, then
$M$ satisfies $C_{1}$ and $C_{2}$ (see \cite[p.55]{deh_livre}).
\begin{prop} \cite[Prop.6.1]{deh_francais}\label{atomic_coh}
Let $M$ be a monoid associated with a complement $f$ and assume
that $M$ is atomic. Then $f$ is coherent if and only if $f$ is
coherent on $X$.
\end{prop}
\begin{ex} In example \ref{example_struct_gp}, we check if
$( (x_{1} \setminus
x_{2} )\setminus (x_{1}\setminus x_{3}))
\setminus((x_{2} \setminus x_{1})\setminus(x_{2}
\setminus x_{3}))= \epsilon$ holds in $M$. We have $x_{1}\setminus x_{2}=x_{1}$
and $x_{1}\setminus x_{3}=x_{2}$, so $(x_{1}\setminus x_{2})\setminus(x_{1}\setminus x_{3})=x_{1}\setminus x_{2}=x_{1}$. Additionally, $x_{2}\setminus x_{1}=x_{2}$ and $x_{2}\setminus x_{3}=x_{4}$, so $(x_{2}\setminus x_{1})\setminus(x_{2}\setminus x_{3})=
x_{2}\setminus x_{4}=x_{1}$. At last,
$((x_{1}\setminus x_{2})\setminus(x_{1}\setminus x_{3}))
\setminus
((x_{2}\setminus x_{1})\setminus(x_{2}\setminus x_{3}))=
x_{1}\setminus x_{1}=\epsilon $.
\end{ex}
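The same verification can be carried out for every triple of distinct generators. The sketch below reads the right complement on generators directly off the ten defining relations of Example \ref{example_struct_gp} (from $x_{i}x_{a}=x_{j}x_{b}$ one gets $x_{i}\setminus x_{j}=x_{a}$ and $x_{j}\setminus x_{i}=x_{b}$) and checks the coherence identity exhaustively:

```python
# Brute-force coherence check on X for the monoid of the example above.
# Each relation is stored as ((i, a), (j, b)) meaning x_i x_a = x_j x_b.

relations = [((1, 1), (2, 2)), ((2, 5), (5, 2)), ((1, 2), (3, 4)),
             ((1, 5), (5, 1)), ((1, 3), (4, 2)), ((3, 3), (4, 4)),
             ((2, 4), (3, 1)), ((3, 5), (5, 3)), ((2, 1), (4, 3)),
             ((4, 5), (5, 4))]

comp = {}                          # comp[(i, j)] = index a with x_i \ x_j = x_a
for (i, a), (j, b) in relations:
    comp[(i, j)] = a
    comp[(j, i)] = b

def c(u, v):
    return comp[(u, v)]

# Coherence on letters: (u\v)\(u\w) == (v\u)\(v\w) for distinct u, v, w.
coherent = all(c(c(u, v), c(u, w)) == c(c(v, u), c(v, w))
               for u in range(1, 6) for v in range(1, 6) for w in range(1, 6)
               if len({u, v, w}) == 3)
```

For the triple $(x_{1},x_{2},x_{3})$ this reproduces the hand computation above: both sides evaluate to the letter $x_{1}$, so their complement is $\epsilon$.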
\section{Structure groups are Garside}
\subsection{The structure group of a set-theoretical solution}
All the definitions and results in this subsection are from \cite{etingof}.
A set-theoretical solution of the quantum Yang-Baxter equation is a pair $(X,S)$, where $X$ is a non-empty set and $S:X^{2}\rightarrow X^{2}$ is a bijection. Let $S_{1}$ and $S_{2}$ denote the components of $S$, that is $S(x,y)=(S_{1}(x,y),S_{2}(x,y))$.
A pair $(X,S)$ is \emph{nondegenerate} if the maps $X \rightarrow X$ defined by $x \mapsto S_{2}(x,y)$ and $x \mapsto S_{1}(z,x)$ are bijections for any fixed $y,z \in X$. A pair $(X,S)$ is \emph{braided} if $S$ satisfies the braid relation $S^{12}S^{23}S^{12}=S^{23}S^{12}S^{23}$, where the map $S^{ii+1}: X^{n} \rightarrow X^{n}$ is defined by $S^{ii+1}=id_{X^{i-1}} \times S\times id_{X^{n-i-1}} $, $i<n $.
A pair $(X,S)$ is \emph{involutive} if $S^{2}=id_{X^{2}}$, that is $S^{2}(x,y)=(x,y)$ for all $x,y \in X$.\\
Let $\alpha:X \times X \rightarrow X\times X$ be the permutation map, that is $\alpha(x,y)=(y,x)$, and let $R=\alpha \circ S$. The map $R$ is called the \emph{$R-$matrix corresponding to $S$}.
Etingof, Soloviev and Schedler show in \cite{etingof} that $(X,S)$ is a braided set if and only
if $R$ satisfies the quantum Yang-Baxter equation $R^{12}R^{13}R^{23}=R^{23}R^{13}R^{12}$, where
$R^{ij}$ means acting on the $i$th and $j$th components and that $(X,S)$ is a symmetric set if and only if in addition $R$ satisfies the unitary condition $R^{21}R=1$. They define the \emph{structure group $G$ of $(X,S)$} to be the group generated by the elements of $X$ and with defining relations $xy=tz$ when $S(x,y)=(t,z)$. They show that if $(X,S)$ is non-degenerate and braided then the assignment $x \rightarrow f_{x}$ is a right action of $G$ on $X$.\\
We use the notation of \cite{etingof}, that is if $X$ is a finite set, then $S$ is defined by $S(x,y)=(g_{x}(y),f_{y}(x))$, $x,y$ in $X$.
Here, if $X=\{x_{1},...,x_{n}\}$ is a finite set and $y=x_{i}$ for some $1\leq i\leq n$, then
we write $f_{i},g_{i}$ instead of $f_{y},g_{y}$ and $S(i,j)=(g_{i}(j),f_{j}(i))$. The following claim from \cite{etingof} translates the properties of a solution $(X,S)$ in terms of the functions $f_{i},g_{i}$ and it will be very useful in this paper.
\begin{claim}\label{debut_form}
(i) $S$ is non-degenerate $\Leftrightarrow$ $f_{i}$, $g_{i}$ are bijective, $1 \leq i \leq n$.\\
$(ii)$ $S$ is involutive $\Leftrightarrow$ $g_{g_{i}(j)}f_{j}(i)=i$ and $f_{f_{j}(i)}g_{i}(j)=j$,
$1 \leq i,j \leq n$.\\
$(iii)$ $S$ is braided $\Leftrightarrow$ $g_{i}g_{j}=g_{g_{i}(j)}g_{f_{j}(i)}$, $f_{j}f_{i}=f_{f_{j}(i)}f_{g_{i}(j)}$,\\ and $f_{g_{f_{j}(i)}(k)}g_{i}(j)= g_{f_{g_{j}(k)}(i)}f_{k}(j)$, $1 \leq i,j,k \leq n$.
\end{claim}
\begin{ex} Let $X=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}$ and
$S(i,j)=(g_{i}(j),f_{j}(i))$. Assume \\
$\begin{array}{ccc}
f_{1}=g_{1}=(1,2,3,4)(5) & f_{2}=g_{2}=(1,4,3,2)(5)\\
f_{3}=g_{3}=(1,2,3,4)(5) & f_{4}=g_{4}=(1,4,3,2)(5)
\end{array}$\\
Assume also that the functions $f_{5}$ and $g_{5}$ are the identity on $X$.
Then a case by case analysis shows that $(X,S)$ is a non-degenerate, involutive and braided solution. Its structure group is generated by $X$ and defined by the $10$ relations described in example \ref{example_struct_gp}.
\end{ex}
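The case-by-case analysis mentioned above can be automated. The sketch below encodes the permutations $f_{i}=g_{i}$ as dictionaries and brute-forces the characterizations of Claim \ref{debut_form}, then derives the relations $x_{i}x_{j}=x_{g_{i}(j)}x_{f_{j}(i)}$ of the structure group:

```python
# Brute-force check of the five-generator example via Claim 1.
# X = {1,...,5}, S(i,j) = (g_i(j), f_j(i)), with f_i = g_i as listed.

A = {1: 2, 2: 3, 3: 4, 4: 1, 5: 5}        # the cycle (1,2,3,4)(5)
B = {1: 4, 2: 1, 3: 2, 4: 3, 5: 5}        # the cycle (1,4,3,2)(5)
f = g = {1: A, 2: B, 3: A, 4: B, 5: {k: k for k in range(1, 6)}}
X = range(1, 6)

def S(i, j):
    return (g[i][j], f[j][i])

# (i) non-degeneracy: each f_i = g_i is a bijection of X.
nondegenerate = all(sorted(g[i].values()) == list(X) for i in X)

# (ii) involutivity: g_{g_i(j)}(f_j(i)) = i and f_{f_j(i)}(g_i(j)) = j.
involutive = all(g[g[i][j]][f[j][i]] == i and f[f[j][i]][g[i][j]] == j
                 for i in X for j in X)

# (iii) braiding: the three functional equations of Claim 1.
braided = all(
    g[i][g[j][k]] == g[g[i][j]][g[f[j][i]][k]]
    and f[j][f[i][k]] == f[f[j][i]][f[g[i][j]][k]]
    and f[g[f[j][i]][k]][g[i][j]] == g[f[g[j][k]][i]][f[k][j]]
    for i in X for j in X for k in X)

# Defining relations x_i x_j = x_{g_i(j)} x_{f_j(i)}: every non-trivial
# relation is produced twice, once from each of its two sides.
pairs = {(i, j): S(i, j) for i in X for j in X if S(i, j) != (i, j)}
num_relations = len(pairs) // 2
```

The derived relations are exactly the $10 = n(n-1)/2$ relations of Example \ref{example_struct_gp}; for instance $S(1,2)=(3,4)$ gives $x_{1}x_{2}=x_{3}x_{4}$.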
\subsection{Structure groups are Garside}
In this subsection, we prove the following result.
\begin{thm}\label{theo:garside}
The structure group $G$ of a non-degenerate, braided and involutive
set-theoretical solution of the quantum Yang-Baxter equation is a Garside group.
\end{thm}
In order to prove that the group $G$ is a Garside group, we
show that \emph{the monoid $M$ with the same presentation} is
a Garside monoid. For that, we use the Garsidity criterion
given in Theorem \ref{gars_critere}, that is we show that
$M$ satisfies the conditions $(C_{0})$ , $(C_{1})$ , $(C_{2})$,
$(C_{3})$, and $(\tilde{C_{1}})$. We refer the reader to \cite[Lemma 4.1]{gateva+van} for the proof that $M$ is right cancellative ($M$ satisfies $(\tilde{C_{1}})$).
We first show that $M$ satisfies condition $(C_{0})$. To do so, we describe the defining relations in $M$; as they are length-preserving, $M$ is atomic.
\begin{claim}\label{cl_compl}
Assume $(X,S)$ is non-degenerate.
Let $x_{i}$ and $x_{j}$ be different elements in $X$. Then
there is exactly one defining relation $x_{i} a= x_{j} b$, where
$a,b$ are in $X$.
If in addition, $(X,S)$ is involutive then $a$ and $b$ are different.
\end{claim}
For a proof of this result, see \cite[Thm. 1.1]{gateva+van}.
Using the same arguments, if $(X,S)$ is non-degenerate and involutive there are no relations of the form $ax_{i}=ax_{j}$, where $i \neq j$.
We have the following direct result from claim \ref{cl_compl}.
\begin{prop} \label{complement_f_defined}
Assume $(X,S)$ is non-degenerate and involutive. Then the complement $f$ is totally defined on $X \times X$, its range is $X$ and the monoid associated to $f$ is $M$. Moreover, $M$ is atomic.
\end{prop}
Now, we show that $M$ satisfies the conditions $(C_{1})$, $(C_{2})$ and $(C_{3})$.
From Proposition \ref{complement_f_defined}, we have that there is a one-to-one correspondence between the complement $f$ and the monoid $M$ with the same presentation as the structure group, so we say that $M$ is coherent (by abuse of notation). In order to show that the monoid $M$ satisfies the conditions
$(C_{1})$ and $(C_{2})$, we show that $M$ is coherent (see \cite[p.55]{deh_livre}).
Since $M$ is atomic, it is enough to check its coherence on $X$
(from Proposition \ref{atomic_coh}).
We show that any triple of generators $(a,b,c)$ satisfies the following equation:
$ (a \setminus b )\setminus (a\setminus c)=
(b \setminus a)\setminus(b \setminus c)$, where the equality is in the free monoid $X^{*}$,
since the range of $f$ is $X$. In the following lemma, we establish a correspondence between the right complement of generators and the functions $g_{i}$ that define $(X,S)$.
\begin{lem}\label{form_compl}
Assume $(X,S)$ is non-degenerate. Let $x_{i}, x_{j}$ be different elements in $X$. Then
$x_{i}\setminus x_{j}= g^{-1}_{i}(j)$.
\end{lem}
\begin{proof}
If $S(i,a)=(j,b)$, then $x_{i}\setminus x_{j}=a$.
But by definition of $S$, we have that $S(i,a)=(g_{i}(a),f_{a}(i))$,
so $g_{i}(a)=j$
which gives $a=g_{i}^{-1}(j)$.
\end{proof}
\begin{lem}\label{formule}
Assume $(X,S)$ is non-degenerate and involutive. Let $x_{i}, x_{k}$ be elements in $X$. Then
$g^{-1}_{k}(i)=f_{g^{-1}_{i}(k)}(i)$.
\end{lem}
\begin{proof}
Since $S$ is involutive, we have from claim \ref{debut_form} that for every $x_{i}, x_{j} \in X$, $g_{g_{i}(j)}f_{j}(i)=i$. We replace in this formula $j$ by $g^{-1}_{i}(k)$ for some $1 \leq k \leq n$, then we obtain $i= g_{g_{i}(g^{-1}_{i}(k))}f_{g^{-1}_{i}(k)}(i)=g_{k}f_{g^{-1}_{i}(k)}(i)$.
So, we have $g^{-1}_{k}(i)=f_{g^{-1}_{i}(k)}(i)$.
\end{proof}
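The identities of lemmas \ref{form_compl} and \ref{formule} can be checked mechanically on a small example. The following sketch (ours, for illustration only) uses the hypothetical cyclic solution $S(i,j)=(f(j),f^{-1}(i))$ on a $3$-element set with generators indexed $0,1,2$, for which $g_{i}=f$ and $f_{j}=f^{-1}$.

```python
# Check of the identity g_k^{-1}(i) = f_{g_i^{-1}(k)}(i) on the
# hypothetical cyclic solution S(i, j) = (f(j), f^{-1}(i)) over a
# 3-element set (generators indexed 0..2), where g_i = f and f_j = f^{-1}.
n = 3
f = {0: 1, 1: 2, 2: 0}                  # the 3-cycle
f_inv = {v: k for k, v in f.items()}

def g(i, j):        # g_i(j) = f(j), independent of i for this solution
    return f[j]

def f_map(j, i):    # f_j(i) = f^{-1}(i), independent of j
    return f_inv[i]

def g_inv(k, i):    # g_k^{-1}(i)
    return next(j for j in range(n) if g(k, j) == i)

# the solution is involutive: S(S(i, j)) = (i, j)
S = {(i, j): (g(i, j), f_map(j, i)) for i in range(n) for j in range(n)}
assert all(S[S[p]] == p for p in S)

for i in range(n):
    for k in range(n):
        assert g_inv(k, i) == f_map(g_inv(i, k), i)
print("identity holds on all pairs (i, k)")
```

For this particular solution both sides reduce to $f^{-1}(i)$, so the check is a sanity test of the formula rather than a proof.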
\begin{prop}\label{M_c1&c2}
Assume $(X,S)$ is non-degenerate, involutive and braided. Every triple $(x_{i},x_{k},x_{m})$ of generators satisfies the following equation: $ (x_{i} \setminus x_{k} )\setminus (x_{i}\setminus x_{m})
=(x_{k} \setminus x_{i})\setminus(x_{k} \setminus
x_{m})$. Furthermore, $M$ is coherent and satisfies the conditions $(C_{1})$ and $(C_{2})$.
\end{prop}
\begin{proof}
If $x_{i}=x_{k}$ or $x_{i}=x_{m}$ or $x_{k}=x_{m}$, then the equality holds trivially. So, assume that $(x_{i},x_{k},x_{m})$ is a triple of different generators. This implies that $g_{i}^{-1} (k) \neq g_{i}^{-1} (m)$ and $g^{-1}_{k}(i) \neq g^{-1}_{k}(m)$, since the functions $g_{i}$ are bijective. Using the formulas for all different $1 \leq i,k,m \leq n$ from lemma \ref{form_compl}, we have: $ (x_{i} \setminus x_{k} )\setminus (x_{i}\setminus x_{m})=
g^{-1}_{x_{i} \setminus x_{k}} (x_{i}\setminus x_{m})=
g^{-1}_{g_{i}^{-1}(k)} g_{i}^{-1} (m)$ and $ (x_{k} \setminus x_{i} )\setminus (x_{k}\setminus
x_{m})=
g^{-1}_{x_{k} \setminus x_{i}} (x_{k}\setminus x_{m})=
g^{-1}_{g_{k}^{-1}(i)} g_{k}^{-1} (m)$. We prove that $g^{-1}_{g_{i}^{-1}(k)} g_{i}^{-1} (m)=g^{-1}_{g_{k}^{-1}(i)} g_{k}^{-1} (m)$ by showing that
$g_{i}g_{g_{i}^{-1}(k)}= g_{k}g_{g_{k}^{-1}(i)}$ for all $1 \leq i,k \leq n$.
Since $S$ is braided, we have from claim \ref{debut_form}, that
$g_{i}g_{g_{i}^{-1}(k)}=g_{g_{i}(g_{i}^{-1}(k))}g_{f_{g_{i}^{-1}(k)}(i)}
=$ $ g_{k}g_{f_{g_{i}^{-1}(k)}(i)}$.
But, from lemma \ref{formule}, $f_{g^{-1}_{i}(k)}(i)=g^{-1}_{k}(i)$, so
$g_{i}g_{g_{i}^{-1}(k)}= g_{k}g_{g^{-1}_{k}(i)}$.
The monoid $M$ is then coherent at $X$ but since $M$ is atomic, $M$ is
coherent. So, $M$ satisfies the conditions $(C_{1})$ and $(C_{2})$.
\end{proof}
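Equation (B) in the proof above can also be verified exhaustively on small sets. The following Python sketch (our illustration, not part of the proof) enumerates all pairs of permutation tuples $(g_{i})$, $(f_{j})$ on a hypothetical $3$-element set that yield an involutive braided solution, and checks $g_{i}g_{g_{i}^{-1}(k)}=g_{k}g_{g_{k}^{-1}(i)}$ for each of them.

```python
# Brute-force check of equation (B) on a 3-element set: enumerate all
# non-degenerate involutive braided solutions given by permutation tuples
# (g_i), (f_j), and verify g_i g_{g_i^{-1}(k)} = g_k g_{g_k^{-1}(i)}.
from itertools import permutations, product

n = 3
perms = [dict(enumerate(p)) for p in permutations(range(n))]

def inv(p):
    return {v: k for k, v in p.items()}

def is_braided(S):
    def S12(t): a, b = S[(t[0], t[1])]; return (a, b, t[2])
    def S23(t): a, b = S[(t[1], t[2])]; return (t[0], a, b)
    return all(S12(S23(S12(t))) == S23(S12(S23(t)))
               for t in product(range(n), repeat=3))

found = 0
for gs in product(perms, repeat=n):
    for fs in product(perms, repeat=n):
        S = {(i, j): (gs[i][j], fs[j][i]) for i in range(n) for j in range(n)}
        if any(S[S[p]] != p for p in S) or not is_braided(S):
            continue                      # keep involutive braided solutions
        found += 1
        gi = [inv(g) for g in gs]
        for i, k, m in product(range(n), repeat=3):
            assert gs[i][gs[gi[i][k]][m]] == gs[k][gs[gi[k][i]][m]]
print(found, "solutions checked; equation (B) holds for each")
```

Non-degeneracy is built in, since each $g_{i}$ and $f_{j}$ ranges over permutations only.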
Now, using the fact that $M$ satisfies $(C_{1})$ and $(C_{2})$, we show that it also satisfies $(C_{3})$.
\begin{prop}
Assume $(X,S)$ is non-degenerate, involutive and braided.
Then there is a finite generating set that is closed under
$\setminus$, that is $M$ satisfies the condition $(C_{3})$.
\end{prop}
\begin{proof}
From
claim \ref{cl_compl}, for any pair of generators $x_{i}, x_{j} $ there are unique $a,b \in X$ such that $x_{i} a= x_{j} b$, that is any pair of generators $x_{i}, x_{j} $ has
a right common multiple. Since from Proposition
\ref{M_c1&c2}, $M$ satisfies the condition $(C_{2})$, we have
that $x_{i}$ and $x_{j}$ have a right lcm and the word $x_{i} a$ (or
$ x_{j} b$) represents the element $x_{i} \vee x_{j}$, since this is a common multiple of $x_{i}$ and
$x_{j}$ of least length. So,
it holds that $x_{i} \setminus x_{j}=a$ and $ x_{j}
\setminus x_{i} = b$, where $a,b \in X$. So, $X \cup \{\epsilon\}$ is
closed under $\setminus$.
\end{proof}
\section{Additional properties of the structure group}
\subsection{The right lcm of the generators is a Garside element}
The braid groups and the Artin groups of finite type are Garside groups in which the right lcm of the set of atoms is a Garside element. Dehornoy and Paris included this additional condition in the definition of Garside groups in \cite{deh_Paris}; in \cite{deh_francais} it was removed from the definition.
Indeed, Dehornoy gives an example of a Garside monoid in which the right lcm of the atoms is not a Garside element \cite{deh_francais}. He shows that the right lcm of the simple elements of a Garside monoid is a Garside element, where an element is \emph{simple} if it belongs to the closure of the set of atoms under right complement and right lcm \cite{deh_francais}.
We prove the following result:
\begin{thm}\label{thm_delta_lcm_atoms}
Let $G$ be the structure group of a non-degenerate, involutive and braided solution $(X,S)$ and let $M$ be the monoid with the same presentation.
Then the right lcm of the atoms (the elements of $X$) is a Garside element in $M$.
\end{thm}
In order to prove this, we show that the set of simple elements $\chi$, that is, the closure of $X$ under right complement and right lcm, is equal to the closure of $X$ under right lcm (denoted by $\overline{X}^{\vee}$) with the empty word $\epsilon$ added. This implies that $\Delta$, the right lcm of the simple elements, is the right lcm of the elements of $X$.
We use the word reversing method developed by Dehornoy and the diagrams for word reversing. We illustrate the construction of the diagram in the example below and refer the reader to \cite{deh_francais} and \cite{deh_livre} for more details. Here, reversing the word $u^{-1}v$ using the diagram amounts to computing a right lcm of the elements represented by $u$ and $v$ (see \cite[p.65]{deh_livre}).
\begin{ex} Let us consider the monoid $M$ defined in example \ref{example_struct_gp}. We illustrate the construction of the reversing diagram.\\
(a) The reversing diagram of the word $x_{3}^{-1}x_{1}$ is constructed in figure \ref{des1} in the following way. First, we begin with the left diagram and then using the defining relation $x_{1}x_{2}=x_{3}x_{4}$ in $M$, we complete it in the right diagram. We have $x_{1}\setminus x_{3}=x_{2}$ and $x_{3}\setminus x_{1}=x_{4}$.
\begin{figure}[h]
\includegraphics[scale=0.95]{draw1.pdf}
\includegraphics[scale=0.95]{draw2.pdf}
\caption{Reversing diagram of $x_{3}^{-1}x_{1}$}\label{des1}
\end{figure}
(b) The reversing diagram of the word $x_{4}^{-2}x_{1}^{2}$ is described in figure \ref{des2}: we begin with the left diagram and then we complete it using the defining relations in the right diagram.
\begin{figure}[h]
\includegraphics[scale=0.95]{draw3.pdf}
\includegraphics[scale=0.95]{draw4_new.pdf}
\caption{Reversing diagram of $x_{4}^{-2}x_{1}^{2}$}\label{des2}
\end{figure}
So, we have $x_{1}^{2}x_{2}^{2}=x_{4}^{2}x_{3}^{2}$ in $M$ and since $x_{1}^{2}=x_{2}^{2}$ and $x_{3}^{2}=x_{4}^{2}$, it holds that $x_{1}^{4}=x_{2}^{4}=x_{3}^{4}=x_{4}^{4}$ in $M$.
So, a word representing the right lcm of $x_{1}^{2}$ and $x_{4}^{2}$ is the word $x_{1}^{4}$ or the word $x_{4}^{4}$. We obtain from the diagram that the word $x_{2}^{2}$ represents the element $x_{1}^{2}\setminus x_{4}^{2}$ and the word $x_{3}^{2}$ represents the element $x_{4}^{2}\setminus x_{1}^{2}$.
\end{ex}
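Word reversing as in the diagrams above is easy to mechanise once the complement is known on generators. The following Python sketch (our illustration) implements the rules $(uv)\setminus w = v\setminus(u\setminus w)$ and $u\setminus(vw)=(u\setminus v)\,((v\setminus u)\setminus w)$; since the full complement table of the monoid of example \ref{example_struct_gp} is not listed here, it runs on a hypothetical $3$-generator monoid with relations $x_{1}^{2}=x_{2}x_{3}$, $x_{2}^{2}=x_{3}x_{1}$, $x_{3}^{2}=x_{1}x_{2}$, where $x_{i}\setminus x_{k}=x_{f^{-1}(k)}$ for the $3$-cycle $f=(1\,2\,3)$.

```python
# Word reversing from a complement table on generators.  Words are tuples
# of generator indices; () is the empty word.  The table below is for the
# hypothetical monoid with relations x1^2 = x2 x3, x2^2 = x3 x1,
# x3^2 = x1 x2, where x_i \ x_k = f^{-1}(k) for the 3-cycle f = (1 2 3).
f_inv = {2: 1, 3: 2, 1: 3}

def cg(a, b):
    """Complement x_a \\ x_b on generators, as a word."""
    return () if a == b else (f_inv[b],)

def comp(u, v):
    """Right complement u \\ v of words, by the reversing grid rules."""
    if not u:
        return v
    if not v:
        return ()
    if len(u) == 1:
        ab, ba = cg(u[0], v[0]), cg(v[0], u[0])
        return ab + comp(ba, v[1:])        # u\(vw) = (u\v)((v\u)\w)
    return comp(u[1:], comp(u[:1], v))     # (uv)\w = v\(u\w)

# lcm via u.(u\v): here the grid gives x1^2 v x2^2 = x1^3
assert comp((1, 1), (2, 2)) == (1,)        # x1^2 \ x2^2 = x1
assert comp((2, 2), (1, 1)) == (2,)        # x2^2 \ x1^2 = x2
print("lcm(x1^2, x2^2) =", (1, 1) + comp((1, 1), (2, 2)))
```

Each call to `comp` traces one reversing diagram: the recursion on the first letter of $u$ corresponds to completing one row of squares in the grid.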
In order to prove that $\chi=\overline{X}^{\vee}\cup \{\epsilon\}$, we show that every complement of simple elements is the right lcm of some generators.
The following technical lemma is the basis of induction for the proof of Theorem \ref{thm_delta_lcm_atoms}.
\begin{lem}\label{lem_MXin X}
$(i)$ It holds that $M\setminus X \subseteq X \cup \{\epsilon\}$.\\
$(ii)$ It holds that $M\setminus (\vee_{j=1}^{j=k} x_{i_{j}}) \subseteq \overline{X}^{\vee}\cup \{\epsilon\}$, where $x_{i_{j}} \in X$, $ 1\leq j \leq k$.
\end{lem}
\begin{proof}
It holds that $S(X \times X) \subseteq X \times X$, so $X\setminus X \subseteq X\cup \{\epsilon\}$ and this implies inductively \emph{(i)} $M \setminus X \subseteq X \cup \{\epsilon\}$ (see the reversing diagram).\\
Let $u \in M$. Using the following rule of calculation on complements from \cite[Lemma 1.7]{deh_francais}: for every $u,v,w \in M$, $u \setminus (v \vee w) = (u \setminus v) \vee (u \setminus w)$, we have inductively that $u \setminus (\vee_{j=1}^{j=k} x_{i_{j}}) = \vee_{j=1}^{j=k} (u \setminus x_{i_{j}})$.
From \emph{$(i)$}, $u \setminus x_{i_{j}}$ belongs to $X \cup \{\epsilon\}$, so
$\vee_{j=1}^{j=k} (u \setminus x_{i_{j}})$ is in $\overline{X}^{\vee}\cup \{\epsilon\}$. That is, $(ii)$ holds.
\end{proof}
Since the monoid $M$ is Garside, the set of simples $\chi$ is finite and its construction is done in a finite number of steps in the following way:\\
At the $0$-th step, $\chi_{0}=X$.\\
At the first step, $\chi_{1}=X \cup \{x_{i} \vee x_{j};$ $x_{i},x_{j} \in X \}\cup
\{x_{i} \setminus x_{j};$ $x_{i},x_{j} \in X \}$.\\
At the second step, $\chi_{2}=\chi_{1} \cup \{u \vee v;$ $u,v \in \chi_{1} \} \cup \{u \setminus v;$ $u,v \in \chi_{1} \}$.\\
We go on inductively, and after a finite number of steps $k$ we have $\chi_{k}=\chi$.
\begin{prop}\label{prop_Simples}
It holds that $\chi=\overline{X}^{\vee}\cup \{\epsilon\}$.
\end{prop}
\begin{proof}
The proof is by induction on the number of steps $k$ in the construction of $\chi$. We show that each complement of simple elements is the right lcm of some generators. At the first step, we have that $\{x_{i} \setminus x_{j};$ $x_{i},x_{j} \in X \}= X\cup \{\epsilon\}$. At the following steps, we do not consider the complements of the form $\ldots\setminus x_{i}$, since these belong to $X$ (see lemma \ref{lem_MXin X}). At the second step, the complements have the form
$x_{i} \setminus (x_{l} \vee x_{m})$ or $(x_{i} \vee x_{j}) \setminus (x_{l} \vee x_{m})$, and these belong to $\overline{X}^{\vee}\cup \{\epsilon\}$ by lemma \ref{lem_MXin X}. Assume that at the $k$-th step all the complements obtained belong to
$\overline{X}^{\vee}\cup \{\epsilon\}$, that is, all the elements of $\chi_{k}$ are right lcms of generators. At the $(k+1)$-th step, the complements have the form $u \setminus v$, where $u,v \in \chi_{k}$. By the induction assumption, $v$ is a right lcm of generators, so by lemma \ref{lem_MXin X}, $u \setminus v$ belongs to $\overline{X}^{\vee}\cup \{\epsilon\}$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_delta_lcm_atoms}]
The right lcm of the set of simples $\chi$ is a Garside element and since from Proposition \ref{prop_Simples}, $\chi=\overline{X}^{\vee}\cup \{\epsilon\}$, we have that the right lcm of $X$ is a Garside element.
\end{proof}
We now show that the length of a Garside element $\Delta$ is equal to $n$, the cardinality of the set $X$. To do so, we prove below that the right lcm of $k$ different generators has length $k$.
\begin{rem}\label{rem_interpret_compl_g}
When $w\setminus x$ is not equal to the empty word, then we can interpret $w\setminus x$ in terms of the functions $g^{-1}_{*}$ using the reversing diagram corresponding to the words $w=h_{1}h_{2}..h_{k}$ and $x$, where $h_{i},x \in X$ and for brevity of notation we write $g^{-1}_{i}(x)$ for $g^{-1}_{h_{i}}(x)$:\\
\includegraphics{draw5.pdf}
That is, $h_{1}h_{2}..h_{k}\setminus x = g^{-1}_{k}..g^{-1}_{2}g^{-1}_{1}(x)$ and this is equal to $g^{-1}_{w}(x)$, since the action on $X$ is a right action.
Having a glance at the reversing diagram, we remark that if $w\setminus x$ is not equal to the empty word, then none of the complements $h_{1}\setminus x$, $h_{1}h_{2}\setminus x$,.., $h_{1}h_{2}..h_{k-1}\setminus x$ can be equal to the empty word.
\end{rem}
\begin{lem}\label{lem_compl_egal}
Let $h_{i}, x$ be all different elements in $X$, for $1 \leq i \leq k$.
Then $(\vee_{i=1}^{i=k}h_{i}) \setminus x$ is not equal to the empty word $\epsilon$.
\end{lem}
\begin{proof}
By induction on $k$. If $k=1$, then $h_{1}\setminus x \neq \epsilon$, as $h_{1} \neq x$.\\
Now assume $(\vee_{i=1}^{i=k-1}h_{i}) \setminus x \neq \epsilon$ and assume by contradiction that
$(\vee_{i=1}^{i=k}h_{i}) \setminus x= \epsilon$. Using the following rule of computation on the complement from \cite[Lemma 1.7]{deh_francais}: $(u \vee v)\setminus w=(u\setminus v)\setminus(u \setminus w)$, we have $(\vee_{i=1}^{i=k}h_{i}) \setminus x= (\vee_{i=1}^{i=k-1}h_{i} \vee h_{k}) \setminus x=
((\vee_{i=1}^{i=k-1}h_{i}) \setminus h_{k})\setminus ((\vee_{i=1}^{i=k-1}h_{i}) \setminus x)$.
From the induction assumption, $(\vee_{i=1}^{i=k-1}h_{i}) \setminus x \neq \epsilon$ and $(\vee_{i=1}^{i=k-1}h_{i}) \setminus h_{k} \neq \epsilon$, so both complements are atoms by lemma \ref{lem_MXin X}. Thus $(\vee_{i=1}^{i=k}h_{i}) \setminus x= \epsilon$ implies that $(\vee_{i=1}^{i=k-1}h_{i}) \setminus x$ left-divides $(\vee_{i=1}^{i=k-1}h_{i}) \setminus h_{k}$, hence
$(\vee_{i=1}^{i=k-1}h_{i}) \setminus h_{k} = (\vee_{i=1}^{i=k-1}h_{i}) \setminus x$.
Let $w$ be a word representing the element $\vee_{i=1}^{i=k-1}h_{i}$; then from remark \ref{rem_interpret_compl_g}, $w \setminus h_{k}$ can be interpreted as $g^{-1}_{w}(h_{k})$ and $w \setminus x$ as $g^{-1}_{w}(x)$. The function $g^{-1}_{w}$ is bijective, as it is a composition of bijective functions, so $g^{-1}_{w}(h_{k})=g^{-1}_{w}(x)$ implies that $h_{k}=x$, a contradiction.
So, $(\vee_{i=1}^{i=k}h_{i}) \setminus x \neq \epsilon$.
\end{proof}
\begin{thm}\label{thm_Garside_length_n}
Let $G$ be the structure group of a non-degenerate, braided and involutive solution $(X,S)$, where $X=\{x_{1},..,x_{n}\}$ and let $M$ be the monoid with the same presentation.
Let $\Delta$ be a Garside element in $M$.
Then the length of $\Delta$ is $n$.
\end{thm}
\begin{proof}
From theorem \ref{thm_delta_lcm_atoms}, $\Delta$ represents the right lcm of the elements in $X$, that is
$\Delta= x_{1} \vee x_{2} \vee ...\vee x_{n}$ in $M$.
We show by induction that a word representing the right lcm $x_{i_{1}}\vee x_{i_{2}} \vee..\vee x_{i_{k}}$ has length $k$, where $x_{i_{j}} \neq x_{i_{l}}$ for $j \neq l$. If $k=2$, then there are different generators $a,b$ such that $S(x_{i_{1}},a)=(x_{i_{2}},b)$, so $x_{i_{1}}a=x_{i_{2}}b$
is a relation in $M$ and the right lcm of $x_{i_{1}},x_{i_{2}}$ has length $2$.
Assume that the right lcm $x_{i_{1}}\vee x_{i_{2}} \vee..\vee x_{i_{k-1}}$ has length $k-1$. Then the right lcm $x_{i_{1}}\vee x_{i_{2}} \vee..\vee x_{i_{k-1}} \vee x_{i_{k}}$ is obtained from the reversing diagram corresponding to the words $x_{i_{1}}\vee x_{i_{2}} \vee..\vee x_{i_{k-1}}$ and $x_{i_{k}}$.
From lemma \ref{lem_compl_egal}, $(x_{i_{1}}\vee x_{i_{2}} \vee..\vee x_{i_{k-1}})\setminus x_{i_{k}}$ is not equal to the empty word, so from lemma \ref{lem_MXin X} it has length $1$.
So, the right lcm $x_{i_{1}}\vee x_{i_{2}} \vee..\vee x_{i_{k}}$ has length $k$ and this implies that $x_{1}\vee x_{2} \vee..\vee x_{n}$ has length $n$.
\end{proof}
\subsection{Homological dimension}
Dehornoy and Laffont construct a resolution of $\Bbb Z$ (as trivial ${\Bbb Z}M$-module) by free
${\Bbb Z}M$-modules, when $M$ satisfies some conditions \cite{deh_homologie}. Moreover, they show that every Garside group is of type $FL$, that is with a finite resolution \cite[Prop. 2.9-2.10]{deh_homologie}. Charney, Meier, and Whittlesey show in \cite{charney} that Garside groups have finite homological dimension, using another approach. In \cite{deh_homologie}, Dehornoy and Laffont show that whenever $M$ is a Garside monoid then the resolution defined in \cite{charney} is isomorphic to the resolution they define.
We use the following result from \cite{deh_homologie} in
order to show that the homological dimension of the structure
group corresponding to a set-theoretical solution $(X,S)$ of the
quantum Yang-Baxter equation is equal to the number of
generators in $X$.
\begin{prop}\cite[Cor.3.6]{deh_homologie}\label{limiter_dimension}
Assume that $M$ is a locally Gaussian monoid admitting a
generating set $\chi$ such that $\chi \cup \{\epsilon\}$ is closed under left and right complement and
lcm and such that the norm of every element in $\chi$ is bounded
above by $n$. Then the (co)homological dimension of $M$ is at most $n$.
\end{prop}
Using Proposition \ref{limiter_dimension}, we prove the following result:
\begin{thm}
Let $(X,S)$ be a set-theoretical solution of the quantum
Yang-Baxter equation, where $X=\{x_{1},..,x_{n}\}$ and $(X,S)$ is non-degenerate, braided and involutive. Let $G$ be the structure group corresponding to $(X,S)$.
Then the (co)homological dimension of $G$ is equal to $n$, the number of generators in $X$.
\end{thm}
\begin{proof}
The set of simples $\chi$ satisfies the conditions of Proposition \ref{limiter_dimension} and the norm of every element in $\chi$ is bounded by $n$, since this is the length of the right lcm of $\chi$ (from Theorems \ref{thm_delta_lcm_atoms} and \ref{thm_Garside_length_n}).
So, the (co)homological dimension of $G$ is equal to $n$.
\end{proof}
\begin{rem}
It was pointed out to us by P.~Etingof that this result can also be proved differently: by showing that the classifying space of the structure group $G$ is a compact manifold of dimension $n$ (as there is a free action of $G$ on ${\Bbb R}^{X}$).
\end{rem}
\section{Structure groups and indecomposable solutions}
Picantin defines the notion of $\Delta$-pure Garside monoid $M$ in \cite{picantin} and he shows there that the center of $M$ is the infinite cyclic submonoid generated by some power of $\Delta$.
We find in this section conditions under which a monoid is $\Delta$-pure Garside in terms of set-theoretical solutions.
\subsection{$\Delta$-pure Garside monoids}
Let $\chi$ be the set of simples and $\Delta$ a Garside element in $M$. The \emph{exponent} of $M$ is the order of the automorphism $\phi$, where $\phi$ is the extension of the function $x \rightarrow (x\setminus \Delta)\setminus \Delta$ from $\chi$ into itself.
\begin{defn}\cite{picantin}
The monoid $M$ is $\Delta$-\emph{pure} if for every $x,y$ in $X$, it holds that $\Delta_{x} = \Delta_{y}$, where $\Delta_{x} = \vee \{b \setminus x ; b \in M\}$ and $\vee$ denotes the right lcm.
\end{defn}
Picantin shows that if $M$ is a $\Delta$-pure Garside monoid with exponent $e$ and group of fractions $G$, then the center of $M$ (resp. of $G$) is the infinite cyclic submonoid (resp. subgroup) generated by $\Delta^{e}$.
Let us consider the following example to illustrate these definitions.
\begin{ex}\label{example_deltapure}
Let $X=\{x_{1},x_{2},x_{3}\}$ and let $S(x_{i},x_{j})=(f(j),f^{-1}(i))$, where $f=(1,2,3)$, be a non-degenerate, braided and involutive set-theoretical solution. Let $M$ be the monoid with the same presentation as the structure group of $(X,S)$, the defining relations in $M$ are then: $x_{1}^{2}=x_{2}x_{3}$, $x_{2}^{2}=x_{3}x_{1}$ and $x_{3}^{2}=x_{1}x_{2}$.
So, $X \setminus x_{1}=\{x_{3}\}$, $X \setminus x_{2}=\{x_{1}\}$ and $X \setminus x_{3}=\{x_{2}\}$.
Using the reversing diagram, we obtain inductively that $M \setminus x_{i}=X\cup \{\epsilon\}$ for $1 \leq i \leq 3$, that is, $M$ is $\Delta$-pure Garside, since $\Delta_{x_{1}}=\Delta_{x_{2}}=\Delta_{x_{3}}$.
As an example, $x_{2}\setminus x_{1}=x_{3}$, so $x_{2}x_{1}\setminus x_{1}=x_{2}$ and so $x_{2}x_{1}x_{3}\setminus x_{1}=x_{1}$ that is $X \cup \{\epsilon\} \subseteq M \setminus x_{1}$ and since $M \setminus x_{1}\subseteq X\cup \{\epsilon\}$ (see lemma \ref{lem_MXin X}) we have the equality.
Each word $x_{i}^{3}$ for $i=1,2,3$ represents a Garside element, denoted by $\Delta$. The set of simples is $\chi=\{\epsilon, x_{1},x_{2},x_{3},x_{1}^{2},x_{2}^{2},x_{3}^{2},\Delta\}$. The exponent of $M$ is equal to 1, since the function $x \rightarrow (x\setminus \Delta)\setminus \Delta$ from $\chi$ to itself is the identity.
As an example, the image of $x_{1}$ is $(x_{1}\setminus \Delta)\setminus \Delta=x_{1}^{2}\setminus \Delta=x_{1}$,
the image of $x_{2}^{2}$ is $(x_{2}^{2}\setminus \Delta)\setminus \Delta=x_{2}\setminus \Delta=x_{2}^{2}$ and so on.
So, the center of the structure group of $(X,S)$ is cyclic and generated by $\Delta$, using the result of Picantin.
\end{ex}
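The computations of this example can be replayed mechanically. The following Python sketch (ours, for illustration) encodes the complement on generators, which for this cyclic solution is $x_{i}\setminus x_{k}=x_{f^{-1}(k)}$ for $i\neq k$, folds the rule $(uv)\setminus w=v\setminus(u\setminus w)$ over words, and checks that the complements $M\setminus x_{k}$ reach all of $X$ for every $k$, as claimed.

```python
# Computations of the example above replayed mechanically.  For the
# cyclic solution S(x_i, x_j) = (f(j), f^{-1}(i)) with f = (1 2 3),
# the right complement on generators is x_i \ x_k = f^{-1}(k) for i != k.
f = {1: 2, 2: 3, 3: 1}
f_inv = {v: k for k, v in f.items()}

def comp_gen(i, k):
    """Right complement x_i \\ x_k; None encodes the empty word."""
    return None if i == k else f_inv[k]

def comp_word(word, k):
    """(h_1 .. h_m) \\ x_k, folding (uv) \\ w = v \\ (u \\ w)."""
    for h in word:
        if k is None:
            return None
        k = comp_gen(h, k)
    return k

# the computation of the example:
assert comp_word([2], 1) == 3          # x_2 \ x_1 = x_3
assert comp_word([2, 1], 1) == 2       # x_2 x_1 \ x_1 = x_2
assert comp_word([2, 1, 3], 1) == 1    # x_2 x_1 x_3 \ x_1 = x_1

# closure: the complements M \ x_k reach all of X, so the elements
# Delta_{x_k} coincide and M is Delta-pure.
for k in (1, 2, 3):
    seen, frontier = {k}, [k]
    while frontier:
        t = frontier.pop()
        for h in (1, 2, 3):
            c = comp_gen(h, t)
            if c is not None and c not in seen:
                seen.add(c)
                frontier.append(c)
    assert seen == {1, 2, 3}
print("M \\ x_k = X u {eps} for every k")
```

The orbit computation is exactly the inductive use of the reversing diagram described in the example.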
\subsection{Structure groups and indecomposable solutions}
A non-degenerate, braided and involutive set-theoretical solution $(X,S)$ is said to be \emph{decomposable} if $X$ is a union of two nonempty disjoint non-degenerate invariant subsets, where an \emph{invariant} subset $Y$ is a set that satisfies $S(Y \times Y)\subseteq Y \times Y$. Otherwise, $(X,S)$ is said to be \emph{indecomposable}.\\
Etingof et al.\ give a classification of non-degenerate, braided and involutive solutions with $X$ of up to $8$ elements, considering their decomposability and other properties \cite{etingof}.
Rump proves Gateva-Ivanova's conjecture that every square-free, non-degenerate, involutive and braided solution $(X,S)$ is decomposable whenever $X$ is finite; moreover, he shows that the extension to infinite $X$ is false \cite{rump}.
We find a criterion for decomposability of the solution involving the Garside structure of the structure group (monoid), that is we prove the following result.
\begin{thm}\label{thm_deltapure_indecomp}
Let $G$ be the structure group of a non-degenerate, braided and involutive solution $(X,S)$ and let $M$ be the monoid with the same presentation.
Then $M$ is $\Delta$-pure Garside if and only if $(X,S)$ is indecomposable.
\end{thm}
In what follows, we use the notation from \cite{picantin}: for $X,Y \subseteq M$, $Y \setminus X$ denotes the set of elements $b \setminus a$ for $a \in X$ and $b \in Y$. We write $Y \setminus a$ for $Y \setminus \{a\}$ and $b \setminus X$ for $\{b\} \setminus X$. We need the following lemma for the proof of Theorem \ref{thm_deltapure_indecomp}.
\begin{lem}\label{lem_YYimpliqMY}
Let $(X,S)$ be the union of non-degenerate invariant subsets $Y$ and $Z$. Then $M\setminus Y \subseteq Y \cup \{\epsilon\}$ and $M\setminus Z \subseteq Z\cup \{\epsilon\}$.
\end{lem}
\begin{proof}
If $Y$ is an invariant subset of $X$, then $Y\setminus Y \subseteq Y\cup \{\epsilon\}$, since $S(Y \times Y) \subseteq Y \times Y$.
From \cite[Proposition 2.15]{etingof}, the map $S$ defines bijections $Y \times Z \rightarrow Z \times Y$ and $Z \times Y \rightarrow Y \times Z$. So, $S(Y \times Z) \subseteq Z \times Y$ and $S(Z \times Y) \subseteq Y \times Z$, and this implies $Z\setminus Y \subseteq Y$.
That is, we have that $Y\setminus Y \subseteq Y\cup \{\epsilon\}$ and $Z\setminus Y \subseteq Y$, so $X\setminus Y \subseteq Y\cup \{\epsilon\}$ and this implies inductively that $M\setminus Y \subseteq Y\cup \{\epsilon\}$ (see the reversing diagram). The same holds for $Z$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm_deltapure_indecomp}]
Assume $(X,S)$ is decomposable, that is $(X,S)$ is the union of non-degenerate invariant subsets $Y$ and $Z$.
From lemma \ref{lem_YYimpliqMY}, we have $M\setminus Y \subseteq Y\cup \{\epsilon\}$ and $M\setminus Z \subseteq Z\cup \{\epsilon\}$.
Let $y\in Y$ and $z\in Z$. Then $\Delta_{y}= \vee (M\setminus y)$ cannot be equal to $\Delta_{z}= \vee (M\setminus z)$: indeed, $\Delta_{y}$ (resp. $\Delta_{z}$) is the right lcm of the $k_{1}$ distinct generators in $M\setminus y \subseteq Y\cup \{\epsilon\}$ (resp. the $k_{2}$ distinct generators in $M\setminus z \subseteq Z\cup \{\epsilon\}$), so it has length $k_{1}$ (resp. $k_{2}$) by the proof of Theorem \ref{thm_Garside_length_n}; if $\Delta_{y}=\Delta_{z}$, their common value would be a common right multiple of $k_{1}+k_{2}$ distinct generators and hence have length at least $k_{1}+k_{2}$, a contradiction. So, $M$ is not $\Delta$-pure Garside.\\
Assume $M$ is not $\Delta$-pure Garside. For $x_{k} \in X$, denote by $Y_{k}$ the set $M\setminus x_{k}$ with the empty word $\epsilon$ removed. So, $\Delta_{x_{k}}= \vee\, Y_{k}$, where $Y_{k}$ is a subset of $X$ by Lemma \ref{lem_MXin X}. Let $x_{i},x_{j}$ be in $X$; then from \cite{picantin}, either $\Delta_{x_{i}}=\Delta_{x_{j}}$ or the left gcd of $\Delta_{x_{i}}$ and $\Delta_{x_{j}}$ is $\epsilon$. If $\Delta_{x_{i}}=\Delta_{x_{j}}$, then $Y_{i}=Y_{j}$, and if the left gcd of $\Delta_{x_{i}}$ and $\Delta_{x_{j}}$ is $\epsilon$, then $Y_{i}$ and $Y_{j}$ are disjoint subsets of $X$. Since $M$ is not $\Delta$-pure Garside, there exist $x_{i_{1}}$, $x_{i_{2}}$, .., $x_{i_{m}}$ in $X$, with $m \geq 2$, such that $Y_{i_{1}}$, $Y_{i_{2}}$,.., $Y_{i_{m}}$ are pairwise disjoint subsets of $X$. Moreover,
$X =Y_{i_{1}} \cup Y_{i_{2}} \cup..\cup Y_{i_{m}}$, since each $x \in X$ is equal to an element $x_{k} \setminus x_{i}$ for some $x_{k},x_{i} \in X$ (from the existence of left lcms). We show that $Y_{i_{j}}$ is an invariant subset of $X$, that is, $S(Y_{i_{j}} \times Y_{i_{j}}) \subseteq Y_{i_{j}} \times Y_{i_{j}}$. Let $x \in X$ and $y \in Y_{i_{j}}$; then $x \setminus y = x \setminus (w \setminus x_{i_{j}})$ for some $w \in M$. Using the following rule of computation on the complement from \cite[Lemma 1.7]{deh_francais}: $x \setminus (u \setminus v) =(ux) \setminus v$, we have $x \setminus y = (wx) \setminus x_{i_{j}}$, that is, $x \setminus y$ belongs to $Y_{i_{j}}$. In particular, if $x \in Y_{i_{j}}$, then $S(x,y')=(y,y'')$, where $y',y'' \in Y_{i_{j}}$. So, $Y_{i_{j}}$ is an invariant subset of $X$ for $1 \leq j \leq m$, and this implies that $(X,S)$ is decomposable.
\end{proof}
\section{From Garside groups to structure groups}
We establish the converse implication in the one-to-one correspondence between the Garside groups and the structure groups, that is we prove the following:
\begin{thm}\label{garside_struct_group}
Let $\operatorname{Mon} \langle X\mid R \rangle$ be a Garside monoid such that:\\
$(i)$ There are $n(n-1)/2$ relations in $R$, where $n$ is the cardinality of $X$, and each side of a relation in $R$ has length 2.\\
$(ii)$ If the word $x_{i}x_{j}$ appears in $R$, then it appears only once.\\
Then there exists a function $S: X \times X \rightarrow X \times X$ such that $(X,S)$ is a non-degenerate, involutive and braided set-theoretical solution and $\operatorname{Gp} \langle X\mid R \rangle$ is its structure group.\\
If additionally: $(iii)$ There is no word $x_{i}^{2}$ in $R$, then $(X,S)$ is square-free.
\end{thm}
\subsection{Proof of Theorem \ref{garside_struct_group}}
In order to prove Theorem \ref{garside_struct_group}, we use the concepts of left lcm and left coherence
from \cite{deh_francais} and \cite{deh_homologie}, but we do not use exactly the same notations.
We denote the \emph{left lcm} of $x$ and $y$ by $z=x\widetilde{\vee} y$ and the \emph{left complement} of $y$ in $x$ by $x \widetilde{\setminus} y$, so that $x\widetilde{\vee} y=(x \widetilde{\setminus} y)y$.
\begin{defn}\cite{deh_homologie}
Let $M$ be a monoid. The \emph{left coherence condition} on $M$ is satisfied if it holds for any $x,y,z \in M$: $((x\widetilde{\setminus} y)\widetilde{\setminus} (z\widetilde{\setminus} y))\widetilde{\setminus} ((x\widetilde{\setminus} z)\widetilde{\setminus} (y\widetilde{\setminus} z))\equiv^{+} \epsilon$.
\end{defn}
This property is also called the \emph{left cube condition}.
We show that if $(X,S)$ is a non-degenerate and involutive set-theoretical solution, then $(X,S)$ is braided if and only if $X$ is coherent and left coherent.
The left coherence of $X$ is satisfied if the following condition holds for all $x_{i},x_{j},x_{k}$ in $X$: $(x_{i}\widetilde{\setminus} x_{j})\widetilde{\setminus} (x_{k}\widetilde{\setminus} x_{j})= (x_{i}\widetilde{\setminus} x_{k})\widetilde{\setminus} (x_{j}\widetilde{\setminus} x_{k})$, where the equality is in the free monoid, since the left complement is totally defined and its range is $X$. Note that, as in the proof of coherence, left coherence on $X$ implies left coherence on $M$, since the monoid $M$ is atomic. Clearly, the following implication is derived from Theorem \ref{theo:garside}:
\begin{lem}\label{braided_implies_leftcoherent}
Assume $(X,S)$ is non-degenerate and involutive. If $(X,S)$ is braided, then $X$ is coherent and left coherent.
\end{lem}
The proof of the converse implication is less trivial and requires lengthy computations. Before we proceed, we first express the left complement in terms of the functions $f_{i}$ that define $(X,S)$. As the proofs are symmetric to those done in Section 3.2 for the right complement, we omit them.
\begin{lem}\label{compl_gauche}
Assume $(X,S)$ is non-degenerate. Let $x_{i},x_{j}$ be different elements in $X$.
Then $x_{j}\widetilde{\setminus} x_{i}=f_{i}^{-1}(j)$.
\end{lem}
\begin{lem}\label{formule_gauche}
Assume $(X,S)$ is non-degenerate and involutive. Let $x_{i}, x_{k}$ be elements in $X$.
Then $f^{-1}_{k}(i)=g_{f^{-1}_{i}(k)}(i)$.
\end{lem}
\begin{lem}\label{cor_Xcoherent_equations}
Assume $(X,S)$ is non-degenerate. If $X$ is coherent and left coherent, then for every $i,j,k$ the following equations hold:\\
(A) $f_{j}f_{f^{-1}_{j}(k)}=f_{k}f_{f_{k}^{-1}(j)}$\\
(B) $ g_{i}g_{g_{i}^{-1}(k)}=g_{k}g_{g_{k}^{-1}(i)}$
\end{lem}
\begin{proof}
From lemma \ref{compl_gauche}, we have for all different $1 \leq i,j,k \leq n$ that $(x_{i}\widetilde{\setminus} x_{j})\allowbreak \widetilde{\setminus} (x_{k}\widetilde{\setminus} x_{j})=
f^{-1}_{f^{-1}_{j}(k)} f_{j}^{-1} (i)$ and $(x_{i}\widetilde{\setminus} x_{k})\widetilde{\setminus} (x_{j}\widetilde{\setminus} x_{k})= f^{-1}_{f_{k}^{-1}(j)} f_{k}^{-1} (i)$.
If $X$ is left coherent, then for all different $1 \leq i,j,k \leq n$, we have $(*)$ $f^{-1}_{f^{-1}_{j}(k)} f_{j}^{-1} (i)=$ $f^{-1}_{f_{k}^{-1}(j)} f_{k}^{-1} (i)$. If $j=k$, the equality (A) holds trivially, so fix $j$ and $k$ such that $j \neq k$. We denote
$F_{1}=$ $f^{-1}_{f^{-1}_{j}(k)} f_{j}^{-1}$ and $F_{2}=$ $f^{-1}_{f_{k}^{-1}(j)} f_{k}^{-1}$; these functions are bijective, since they are compositions of bijective functions, and they satisfy $F_{1}(i)=F_{2}(i)$ whenever $i \neq j,k$. It remains to show that $F_{1}(k)=F_{2}(k)$ and $F_{1}(j)=F_{2}(j)$. Assume by contradiction that $F_{1}(k)=F_{2}(j)$ and $F_{1}(j)=F_{2}(k)$ (the only alternative, since $F_{1}$ and $F_{2}$ are bijections that agree outside $\{j,k\}$), so there is $1 \leq m \leq n$ such that
$m$= $f^{-1}_{f^{-1}_{j}(k)} f_{j}^{-1} (k)=$ $f^{-1}_{f_{k}^{-1}(j)} f_{k}^{-1} (j)$, that is, $f_{f^{-1}_{j}(k)}(m)= f_{j}^{-1} (k)$ and $f_{f_{k}^{-1}(j)} (m)=f_{k}^{-1} (j)$. That is, $S(m,f^{-1}_{j}(k))=(m,f^{-1}_{j}(k))$ and $S(m,f^{-1}_{k}(j))=(m,f^{-1}_{k}(j))$, since $(X,S)$ is involutive and non-degenerate. So, $g_{m}\allowbreak (f^{-1}_{j}(k))=m$ and $g_{m}(f^{-1}_{k}(j))=m$. Since $g_{m}$ is bijective, this implies that there is $1 \leq l \leq n$ such that $l=$ $f^{-1}_{j}(k)=f^{-1}_{k}(j)$, that is, $S(l,j)=(l,k)$ (again using involutivity and the bijectivity of $f_{k}$).
But then involutivity gives $S(l,k)=(l,j)$, so $g_{l}(j)=g_{l}(k)=l$ with $j\neq k$, contradicting the bijectivity of $g_{l}$.
So, since the functions $f_{*}$ are bijective, (*) is equivalent to (A).
Equation (B) is obtained in the same way using the coherence of $X$ (see lemma \ref{form_compl}).
\end{proof}
\begin{prop}\label{coherence_implies_braided}
Let $(X,S)$ be a non-degenerate and involutive set-theoretical solution. If $X$ is coherent and left coherent, then $(X,S)$ is braided.
\end{prop}
\begin{proof}
We need to show that the functions $f_{i}$ and $g_{i}$ satisfy the following equations from lemma \ref{debut_form}:\\
\emph{$(i)$} $f_{j}f_{i}=f_{f_{j}(i)}f_{g_{i}(j)}$, $1 \leq i,j \leq n$. \\
\emph{$(ii)$} $g_{i}g_{j}=g_{g_{i}(j)}g_{f_{j}(i)}$, $1 \leq i,j \leq n$. \\
\emph{$(iii)$} $f_{g_{f_{l}(m)}(j)}g_{m}(l)=g_{f_{g_{l}(j)}(m)}f_{j}(l)$, $1 \leq j,l,m \leq n$.\\
From lemma \ref{cor_Xcoherent_equations}, we have for $1 \leq j, k \leq n$ that (A) $f_{j}f_{f^{-1}_{j}(k)}=f_{k}f_{f^{-1}_{k}(j)}$. Assume $m=f^{-1}_{j}(k)$, that is $k=f_{j}(m)$, and we replace in formula (A) $f^{-1}_{j}(k)$ by $m$ and $k$ by $f_{j}(m)$; then we obtain $f_{j}f_{m}=f_{f_{j}(m)}f_{f^{-1}_{f_{j}(m)}(j)}$.
In order to show that \emph{$(i)$} holds, we show that $f^{-1}_{f_{j}(m)}(j)=g_{m}(j)$. From lemma \ref{formule_gauche}, we have $f^{-1}_{l}(j)=g_{f^{-1}_{j}(l)}(j)$ for every $j,l$, so by replacing $l$ by $f_{j}(m)$, we obtain $f^{-1}_{f_{j}(m)}(j)=g_{m}(j)$. So, \emph{$(i)$} holds.\\
From lemma \ref{cor_Xcoherent_equations}, we have for $1 \leq i, k \leq n$ that (B) $ g_{i}g_{g_{i}^{-1}(k)}=g_{k}g_{g_{k}^{-1}(i)}$. Assume $m=g^{-1}_{i}(k)$, that is $k=g_{i}(m)$, and we replace in formula (B) $g^{-1}_{i}(k)$ by $m$ and $k$ by $g_{i}(m)$; then we obtain $g_{i}g_{m}=g_{g_{i}(m)}g_{g^{-1}_{g_{i}(m)}(i)}$.
In order to show that \emph{$(ii)$} holds, we show that $g^{-1}_{g_{i}(m)}(i)=f_{m}(i)$. From lemma \ref{formule}, we have $g^{-1}_{l}(i)=f_{g^{-1}_{i}(l)}(i)$, so by replacing $l$ by $g_{i}(m)$, we obtain $g^{-1}_{g_{i}(m)}(i)=f_{m}(i)$. So, \emph{$(ii)$} holds.\\
It remains to show that \emph{$(iii)$} holds.
From \emph{$(i)$}, we have for $1 \leq i,j \leq n$ that $f_{j}f_{i}=f_{f_{j}(i)}f_{g_{i}(j)}$ and this is equivalent to
$f_{g_{i}(j)}=f^{-1}_{f_{j}(i)}f_{j}f_{i}$.
We replace $i$ by $f_{l}(m)$ for some $1 \leq l,m \leq n$ in the formula.
We obtain $f_{g_{f_{l}(m)}(j)}=f^{-1}_{f_{j}f_{l}(m)}f_{j}f_{f_{l}(m)}$.
Applying both sides of this equality to $g_{m}(l)$, we obtain
$f_{g_{f_{l}(m)}(j)}g_{m}(l)=f^{-1}_{f_{j}f_{l}(m)}f_{j}f_{f_{l}(m)}g_{m}(l)$.
Since $(X,S)$ is involutive, we have $f_{f_{l}(m)}g_{m}(l)=l$ (see lemma \ref{debut_form}). So,
$f_{g_{f_{l}(m)}(j)}g_{m}(l)=f^{-1}_{f_{j}f_{l}(m)}f_{j}(l)$.
From lemma \ref{formule_gauche}, we have $f^{-1}_{i}(k)=g_{f^{-1}_{k}(i)}(k)$ for every $i,k$, so replacing $i$ by $f_{j}f_{l}(m)$ and $k$ by $f_{j}(l)$ gives
$f^{-1}_{f_{j}f_{l}(m)}(f_{j}(l))=g_{f^{-1}_{f_{j}(l)}f_{j}f_{l}(m)}(f_{j}(l))$.
So, $f_{g_{f_{l}(m)}(j)}g_{m}(l)=g_{f^{-1}_{f_{j}(l)}f_{j}f_{l}(m)}(f_{j}(l))$.
From \emph{$(i)$}, we have that $f^{-1}_{f_{j}(l)}f_{j}f_{l}(m)=f_{g_{l}(j)}(m)$, so $f_{g_{f_{l}(m)}(j)}g_{m}(l)=g_{f_{g_{l}(j)}(m)}f_{j}(l)$, that is \emph{$(iii)$} holds.
\end{proof}
\begin{proof}[Proof of Theorem \ref{garside_struct_group}]
First, we define a function $S: X \times X \rightarrow X \times X$ and $2n$ functions $f_{i},g_{i}$ for $1 \leq i \leq n$, such that $S(i,j)=(g_{i}(j),f_{j}(i))$ in the following way: if there is a relation $x_{i}x_{j}=x_{k}x_{l}$ then we define $S(i,j)=(k,l)$, $S(k,l)=(i,j)$ and we define $g_{i}(j)=k$, $f_{j}(i)=l$, $g_{k}(l)=i$ and $f_{l}(k)=j$.
If the word $x_{i}x_{j}$ does not appear as a side of a relation, then we define $S(i,j)=(i,j)$ and we define $g_{i}(j)=i$ and $f_{j}(i)=j$.
We show that the functions $f_{i}$ and $g_{i}$ are well defined for $1 \leq i \leq n$:
assume $g_{i}(j)=k$ and $g_{i}(j)=k'$ for some $1 \leq j,k,k' \leq n$ and $k \neq k'$, then it means from the definition of $S$ that
$S(i,j)=(k,.)$ and $S(i,j)=(k',..)$ that is the word $x_{i}x_{j}$ appears twice in $R$ and this contradicts \emph{$(ii)$}. The same argument holds for the proof that the functions $f_{i}$ are well defined.\\
We show that the functions $f_{i}$ and $g_{i}$ are bijective for $1 \leq i \leq n$:
assume $g_{i}(j)=k$ and $g_{i}(j')=k$ for some $1 \leq j,j',k \leq n$ and $j \neq j'$, then from the
definition of $S$ we have $S(i,j)=(k,l)$ and $S(i,j')=(k,l')$ for some $1 \leq l,l'\leq n$, that is there are the following two defining relations in $R$: $x_{i}x_{j}=x_{k}x_{l}$ and $x_{i}x_{j'}=x_{k}x_{l'}$.
But this means that $x_{i}$ and $x_{k}$ have two different right lcms and this contradicts the assumption that the monoid is Garside. So, these functions are injective and since $X$ is finite they are bijective.
Similarly, assuming some $f_{i}$ is not injective yields two generators with two different left lcms, again a contradiction.
So, $S$ is well-defined and $(X, S)$ is non-degenerate and from \emph{$(ii)$} $(X, S)$ is also involutive. It remains to show that $(X, S)$ is braided:
since $\operatorname{Mon} \langle X \mid R \rangle$ is Garside, it is coherent and left coherent so from lemma \ref{coherence_implies_braided}, $(X,S)$ is braided.
Obviously condition \emph{$(iii)$} implies that $(X,S)$ is also square-free.
\end{proof}
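The construction of $S$, $f_{i}$ and $g_{i}$ from a presentation, carried out in the proof above, can be illustrated mechanically. The following Python sketch (an illustration added here, not part of the original text) applies it to the toy tableau monoid $\operatorname{Mon}\langle x_{1},x_{2} \mid x_{1}x_{1}=x_{2}x_{2}\rangle$ and checks non-degeneracy and involutivity:

```python
# Sketch (not from the paper): rebuild S, f_i, g_i from a tableau presentation,
# as in the proof above, for the toy monoid Mon< x1, x2 | x1 x1 = x2 x2 >.
X = (1, 2)
R = [((1, 1), (2, 2))]           # each relation x_i x_j = x_k x_l as ((i,j),(k,l))

S, f, g = {}, {x: {} for x in X}, {x: {} for x in X}
for (i, j), (k, l) in R:
    # S(i,j) = (g_i(j), f_j(i)) and symmetrically for the other side.
    S[(i, j)], S[(k, l)] = (k, l), (i, j)
    g[i][j], f[j][i] = k, l
    g[k][l], f[l][k] = i, j
for i in X:
    for j in X:
        if (i, j) not in S:      # the word x_i x_j appears in no relation
            S[(i, j)] = (i, j)
            g[i][j], f[j][i] = i, j

# Non-degeneracy: every f_i and g_i is a bijection of X.
assert all(sorted(f[i].values()) == list(X) for i in X)
assert all(sorted(g[i].values()) == list(X) for i in X)
# Involutivity: S o S = id.
assert all(S[S[(i, j)]] == (i, j) for i in X for j in X)
print(S)
```

The same routine can be run on any tableau presentation satisfying conditions $(i)$ and $(ii)$ of the theorem.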
\subsection{The one-to-one correspondence}
It remains to establish the one-to-one correspondence between structure groups of non-degenerate, involutive and braided set-theoretical solutions of the quantum Yang-Baxter equation and a class of Garside groups admitting a certain presentation. In order to do that, we need the following terminology and claims.
\begin{defn}
A \emph{tableau monoid} is a monoid $\operatorname{Mon} \langle X\mid R \rangle$ satisfying the condition that each side of a relation in $R$ has length 2.
\end{defn}
\begin{defn}
We say that two tableau monoids $\operatorname{Mon} \langle X\mid R \rangle$ and $\operatorname{Mon} \langle X'\mid R' \rangle$ are \emph{t-isomorphic} if there exists a bijection $s:X \rightarrow X'$ such that $x_{i}x_{j}=x_{k}x_{l}$ is a relation in $R$ if and only if $s(x_{i})s(x_{j})=s(x_{k})s(x_{l})$ is a relation in $R'$.
\end{defn}
Clearly, if two tableau monoids are t-isomorphic then they are isomorphic; the definition extends to groups in the natural way.
Set-theoretical solutions $(X,S)$ and $(X',S')$ are \emph{isomorphic} if there exists a bijection $\phi: X \rightarrow X'$ which maps $S$ to $S'$, that is $S'(\phi(x),\phi(y))=(\phi(S_{1}(x,y)),\phi(S_{2}(x,y)))$.
\begin{prop}
Let $(X,S)$ and $(X',S')$ be non-degenerate, involutive and braided set-theoretical solutions.
Assume $(X,S)$ and $(X',S')$ are isomorphic.
Then their structure groups (monoids) $G$ and $G'$ are t-isomorphic tableau groups (monoids). Conversely, if $\operatorname{Mon} \langle X\mid R \rangle$ and $\operatorname{Mon} \langle X\mid R' \rangle$ are t-isomorphic tableau Garside monoids each satisfying the conditions $(i)$ and $(ii)$ from Theorem \ref{garside_struct_group}, then the solutions $(X,S)$ and $(X',S')$ defined respectively by $\operatorname{Mon} \langle X\mid R \rangle$ and $\operatorname{Mon} \langle X\mid R' \rangle$ are isomorphic.
\end{prop}
\begin{proof}
Clearly, the structure groups (monoids) $G$ and $G'$ are tableau groups (monoids).
We need to show that $G$ and $G'$ are t-isomorphic.
Since $(X,S)$ and $(X',S')$ are isomorphic, there exists a bijection $\phi: X \rightarrow X'$ which maps $S$ to $S'$, that is $S'(\phi(x),\phi(y))=(\phi(S_{1}(x,y)),\allowbreak \phi(S_{2}(x,y)))$.
So, since by definition $S(x,y)=(S_{1}(x,y),S_{2}(x,y))$, we have $xy=tz$ if and only if $\phi(x)\phi(y)=\phi(t)\phi(z)$. That is, if we take $s$ to be equal to $\phi$ we have that $G$ and $G'$ are t-isomorphic. For the converse, take $\phi$ to be equal to $s$ and from the definition of $S$ and $S'$ from their tableau we have
$S'(\phi(x),\phi(y))=(\phi(S_{1}(x,y)),\phi(S_{2}(x,y)))$, that is $(X,S)$ and $(X',S')$ are isomorphic.
\end{proof}
\section{The structure group of a permutation solution}
In this part, we consider a special case of set-theoretical solutions of the quantum Yang-Baxter equation,
namely the permutation solutions. These solutions were defined by Lyubashenko (see \cite{etingof}). Let $X$ be a set and let $S:X^{2} \rightarrow X^{2}$ be a mapping. A permutation solution is a set-theoretical solution of the form $S(x,y)=(g(y),f(x))$, where $f,g:X\rightarrow X$. The solution $(X,S)$ is nondegenerate iff $f,g$ are bijective, $(X,S)$ is braided iff $fg=gf$ and $(X,S)$ is involutive iff $g=f^{-1}$. Note that these solutions are defined by only two functions, while for a general set-theoretical solution the number of defining functions is twice the cardinality of the set $X$.
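As an illustration (added here, not part of the original text), these characterizations can be verified exhaustively on a small set: the Python sketch below checks, for every pair of permutations $f,g$ of a four-element set, that $S(x,y)=(g(y),f(x))$ satisfies the set-theoretic braid relation $S_{12}S_{23}S_{12}=S_{23}S_{12}S_{23}$ exactly when $fg=gf$, and is involutive exactly when $g=f^{-1}$.

```python
# Sketch (not from the paper): check the stated characterizations of
# permutation solutions S(x, y) = (g(y), f(x)) on X = {0, 1, 2, 3}.
from itertools import product, permutations

X = range(4)

def braided(f, g):
    # Set-theoretic braid relation S12 S23 S12 = S23 S12 S23 on X^3.
    S12 = lambda t: (g[t[1]], f[t[0]], t[2])
    S23 = lambda t: (t[0], g[t[2]], f[t[1]])
    return all(S12(S23(S12(t))) == S23(S12(S23(t))) for t in product(X, repeat=3))

def involutive(f, g):
    # S^2 = id; here S^2(x, y) = (g(f(x)), f(g(y))).
    return all((g[f[x]], f[g[y]]) == (x, y) for x, y in product(X, repeat=2))

for fp in permutations(X):
    for gp in permutations(X):
        f, g = dict(enumerate(fp)), dict(enumerate(gp))
        assert braided(f, g) == all(f[g[x]] == g[f[x]] for x in X)
        assert involutive(f, g) == all(g[f[x]] == x for x in X)
print("checked all 576 pairs of permutations")
```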
\subsection{About permutation solutions that are non-involutive}
In this subsection, we consider the special case of non-degenerate and braided permutation solutions that are not necessarily involutive and we show that their structure group is Garside.
Let $X$ be a finite set and let $S:X^{2} \rightarrow X^{2}$ be defined by $S(x,y)=(g(y),f(x))$, where $f,g:X\rightarrow X$ are bijective and satisfy $fg=gf$. So, $(X,S)$ is a non-degenerate and braided permutation solution that is not necessarily involutive, as we do not require $g=f^{-1}$. Let $G$ be the structure group of $(X,S)$ and let $M$ be the monoid with the same presentation.
We define an equivalence relation on the set $X$ in the following way:\\
$x\equiv x'$ if and only if there is an integer $k$ such that $(fg)^{k}(x)=x'$. We define $X'=X/ \equiv$ and we define functions $f',g': X' \rightarrow X'$ such that $f'([x])=[f(x)]$ and $g'([x])=[g(x)]$, where $[x]$ denotes the equivalence class of $x$ modulo $\equiv$.
We then define $S':X'\times X'\rightarrow X'\times X'$ by $S'([x],[y])= (g'([y]),f'([x])) = ([g(y)],[f(x)])$. Our aim is to show that $(X',S')$ is a well-defined non-degenerate, involutive and braided solution (a permutation solution) and that its structure group $G'$ is isomorphic to $G$. Before doing this, we illustrate the main ideas of the proofs to come with an example.
\begin{ex}
$X=\{x_{1},x_{2},x_{3},x_{4},x_{5}\}$ and let $f=(1,4)(2,3)$ and $g=(1,2)(3,4)$.
Then $f,g$ are bijective and satisfy $fg=gf=(1,3)(2,4)$ but $fg \neq Id$, so $(X,S)$ is a non-degenerate and braided (permutation) solution, where $S(x,y)=(g(y),f(x))$. The set of relations $R$ is:\\
$\begin{array}{ccc}
x_{1}^{2}=x_{2}x_{4}=x_{3}^{2}=x_{4}x_{2}&&
x_{1}x_{2}=x_{1}x_{4}=x_{3}x_{4}=x_{3}x_{2}\\
x_{2}^{2}=x_{1}x_{3}=x_{4}^{2}=x_{3}x_{1}&&
x_{1}x_{5}=x_{5}x_{4}=x_{3}x_{5}=x_{5}x_{2}\\
x_{2}x_{1}=x_{2}x_{3}=x_{4}x_{3}=x_{4}x_{1}&&
x_{2}x_{5}=x_{5}x_{3}=x_{4}x_{5}=x_{5}x_{1}\\
\end{array}$\\
Using $\equiv$ defined above, we have $X'=\{ [x_{1}],[x_{2}],[x_{5}]\}$, with $x_{1}\equiv x_{3}$
and $x_{2}\equiv x_{4}$, since in this example it holds that $fg(1)=3$ and $fg(2)=4$.
Applying the definition of $S'$ yields $S'([x_{1}],[x_{1}])=([g(1)],[f(1)])=([2],[4])=([2],[2])$ and so on. So, $G'= \operatorname{Gp}\langle[x_{1}],[x_{2}],[x_{5}] \mid
[x_{1}]^{2}=[x_{2}]^{2},[x_{1}][x_{5}]=[x_{5}][x_{2}],
[x_{2}][x_{5}]=[x_{5}][x_{1}]\rangle$.
Note that in $G$, it holds that $x_{1}=x_{3}$ and $x_{2}=x_{4}$ since many of the defining relations from $R$ involve cancellation and $G= \operatorname{Gp}\langle x_{1},x_{2},x_{5} \mid x_{1}^{2}=x_{2}^{2}, x_{1}x_{5}=x_{5}x_{2},x_{2}x_{5}=x_{5}x_{1} \rangle$. So, $G$ and $G'$ have the same presentation, up to a renaming of the generators.
\end{ex}
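The equivalence classes in this example can be recovered mechanically. The following Python sketch (an illustration, not part of the original text) encodes $f$ and $g$ as dictionaries, checks that they commute, and computes the orbits of $fg$, i.e. the classes of $\equiv$:

```python
# Sketch: the example X = {1,...,5}, f = (1 4)(2 3), g = (1 2)(3 4),
# so fg = gf = (1 3)(2 4).
f = {1: 4, 2: 3, 3: 2, 4: 1, 5: 5}
g = {1: 2, 2: 1, 3: 4, 4: 3, 5: 5}

fg = {x: f[g[x]] for x in f}
assert fg == {x: g[f[x]] for x in g}          # f and g commute

def eq_class(x):
    """Orbit of x under fg, i.e. its class modulo x = (fg)^k(x)."""
    orbit, y = {x}, fg[x]
    while y not in orbit:
        orbit.add(y)
        y = fg[y]
    return frozenset(orbit)

classes = {eq_class(x) for x in f}
print(sorted(sorted(c) for c in classes))     # [[1, 3], [2, 4], [5]]
```

The three orbits correspond to the generators $[x_{1}],[x_{2}],[x_{5}]$ of $G'$.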
Before we proceed to the general case, we need the following general lemma. The proof, by induction on $k$, is omitted because it is straightforward and technical (see \cite{chou}).
\begin{lem}\label{lem:calcul_Sk}
If $k$ is even, then $S^{k}(x,y)=(f^{k/2}g^{k/2}(x),f^{k/2}g^{k/2}(y))$.\\
If $k$ is odd, then $S^{k}(x,y)=(f^{(k-1)/2}g^{(k+1)/2}(y),f^{(k+1)/2}g^{(k-1)/2}(x))$.
\end{lem}
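As a quick sanity check (added here, not part of the original text), the closed form of the lemma can be compared with repeated application of $S$, for instance for the permutations $f=(1,4)(2,3)$ and $g=(1,2)(3,4)$ of the preceding example:

```python
# Sketch: verify lemma numerically for commuting f, g on {1,...,5}:
# S^k(x,y) = (f^{k/2}g^{k/2}(x), f^{k/2}g^{k/2}(y))               for even k,
# S^k(x,y) = (f^{(k-1)/2}g^{(k+1)/2}(y), f^{(k+1)/2}g^{(k-1)/2}(x)) for odd k.
f = {1: 4, 2: 3, 3: 2, 4: 1, 5: 5}
g = {1: 2, 2: 1, 3: 4, 4: 3, 5: 5}

def power(h, k):
    def apply(x):
        for _ in range(k):
            x = h[x]
        return x
    return apply

def S(x, y):
    return g[y], f[x]

for x in f:
    for y in f:
        u, v = x, y
        for k in range(1, 9):
            u, v = S(u, v)                   # now (u, v) = S^k(x, y)
            if k % 2 == 0:
                half = lambda z: power(f, k // 2)(power(g, k // 2)(z))
                assert (u, v) == (half(x), half(y))
            else:
                assert u == power(f, (k - 1) // 2)(power(g, (k + 1) // 2)(y))
                assert v == power(f, (k + 1) // 2)(power(g, (k - 1) // 2)(x))
print("lemma verified for k = 1..8")
```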
\begin{lem}\label{equiv_cancel}
Let $x,x' \in X $. If $x \equiv x'$, then $x$ and $x'$ are equal in $G$.
\end{lem}
\begin{proof}
Let $x,x'$ be in $X$ such that $x \equiv x'$.
If $x \equiv x'$ then it means that there is an integer $k$ such that
$(fg)^{k}(x)=f^{k}g^{k}(x)=x'$.
If $k$ is odd, then let $y$ in $X$ be defined in the following way:
$y=f^{(k+1)/2}g^{(k-1)/2}(x)$.
So, from lemma \ref{lem:calcul_Sk}, $S^{k}(x,y)=(f^{(k-1)/2}g^{(k+1)/2}(y),f^{(k+1)/2}g^{(k-1)/2}(x))
\allowbreak =(f^{(k-1)/2}g^{(k+1)/2}f^{(k+1)/2}g^{(k-1)/2}(x),y)=((fg)^{k}(x),y)=(x',y)$. So, there is a relation $xy=x'y$ in $G$ which implies that $x=x'$ in $G$.
If $k$ is even and $(fg)^{k}(x)=x'$, then there is an element $x'' \in X$ such that
$(fg)^{k-1}(x)=x''$, where $k-1$ is odd. So, from the subcase studied above,
there is $y \in X$, $y=f^{k/2}g^{(k-2)/2}(x)$, such that there is a relation $xy=x''y$ in $G$ and this
implies that $x=x''$ in $G$.
Additionally, $(fg)(x'')=x'$, so from the same argument as above there is $z \in X$ such that there is a relation $x'z=x''z$ in $G$
and this implies that $x'=x''$ in $G$. So, $x=x'$ in $G$.
\end{proof}
We now show that $(X',S')$ is a well-defined non-degenerate, involutive and braided solution and this implies from Theorem \ref{theo:garside} that its structure group $G'$ is Garside.
\begin{lem}
$(i)$ $f'$ and $g'$ are well defined, so $S'$ is well-defined.\\
$(ii)$ $f'$ and $g'$ are bijective, so $S'$ is non-degenerate.\\
$(iii)$ $f'$ and $g'$ satisfy $f'g'=g'f'$, so $S'$ is braided.\\
(iv) $f'$ and $g'$ satisfy $f'g'=g'f'=id_{X'}$, so $S'$ is involutive.
\end{lem}
\begin{proof}
\emph{$(i)$} Let $x,x'$ be in $X$ such that $x\equiv x'$.
We show that $f'([x']) = f'([x])$, that is $[f(x')]=[f(x)]$.
Since $x\equiv x'$, there is an integer $k$ such that $x'=(fg)^{k}(x)$, so $f'([x'])=[f(x')]=[f(fg)^{k}(x)]=[(fg)^{k}f(x)]=[f(x)]=f'([x])$, using the fact that $fg=gf$.
The same proof holds for $g'([x'])=g'([x])$.\\
\emph{$(ii)$} Assume that there are $x,y \in X$ such that $f'([x])=f'([y])$, that is $[f(x)]=[f(y)]$.
By the definition of $\equiv$, this means that there is an integer $k$ such that $f(x)=(fg)^{k}f(y)$, that is $f(x)=f(fg)^{k}(y)$, since $fg=gf$. But $f$ is bijective, so $x=(fg)^{k}(y)$, which means that $[x]=[y]$. The same proof holds for $g'$, using the fact that $g$ is bijective.\\
\emph{$(iii)$} Let $[x]\in X'$, so $f'(g'([x]))=f'([g(x)])=[f(g(x))]$ and $g'f'([x])=g'([f(x)])=[g(f(x))]$. Since $fg=gf$, $f'g'=g'f'$.\\
\emph{(iv)} Let $[x]\in X'$, so from the definition of $\equiv$, we have $[fg(x)]=[x]$, so $f'g'([x])=[x]$, that is $f'g'=id_{X'}$.
\end{proof}
\begin{lem} \label{lem:G'isGarside}
Let $(X,S)$ be a not necessarily involutive permutation solution. Let $X'=X/ \equiv$ and let $G'$ be the structure group corresponding to $(X',S')$, as defined above.
Then $G'$ is Garside. Furthermore, if $x_{i}x_{j}=x_{k}x_{l}$ is a defining relation in $G$, then $[x_{i}][x_{j}]=[x_{k}][x_{l}]$ is a defining relation in $G'$.
\end{lem}
\begin{proof}
It holds that $(X',S')$ is a non-degenerate, braided and involutive permutation solution,
so by Theorem \ref{theo:garside} the group $G'$ is Garside. Assume that $x_{i}x_{j}=x_{k}x_{l}$ is a defining relation in $G$, that is $S(x_{i},x_{j})=(x_{k},x_{l})$.
From the definition of $S'$, $S'([x_{i}],[x_{j}])=([g(x_{j})],[f(x_{i})])=([x_{k}],[x_{l}])$, that is there is a defining relation $[x_{i}][x_{j}]=[x_{k}][x_{l}]$ in $G'$. Note that this relation may be a trivial one if
$[x_{i}]=[x_{k}]$ and $ [x_{j}]=[x_{l}]$ in $G'$.
\end{proof}
Now, it remains to show that the structure group $G$ is isomorphic
to the group $G'$.
\begin{thm}
Let $G$ be the structure group of a non-degenerate and braided permutation solution $(X,S)$ that is not necessarily involutive. Then $G$ is a Garside group.
\end{thm}
\begin{proof}
We show that the group $G$ is isomorphic to the group $G'$, where $G'$ is the structure group of $(X',S')$ and
$X'=X/ \equiv$, and from lemma \ref{lem:G'isGarside} this implies that $G$ is a Garside group.
Let $\Phi: X \rightarrow X'$ be the quotient map defined by $\Phi (x)=[x]$ for all $x \in X$.
From lemma \ref{lem:G'isGarside}, $\Phi:G \rightarrow G'$ is a homomorphism of groups; since it maps the generators of $G$ onto the generators of $G'$, $\Phi$ is an epimorphism.
We need to show that $\Phi$ is injective.
We show that if $[x][y]=[t][z]$ is a non-trivial defining relation in $G'$, then $xy=tz$ is a defining relation in $G$.
If $[x][y]=[t][z]$ is a non-trivial defining relation in $G'$, then since
$S'([x],[y])= ([g(y)],[f(x)])$, we have that $[g(y)]=[t]$ and $[f(x)]=[z]$. That is, there are
$z' \in [z]$ and $t' \in [t]$ such that $g(y)=t'$ and $f(x)=z'$.
This implies that $S(x,y)=(g(y),f(x))=(t',z')$, that is $xy=t'z'$ is a defining relation in $G$.
It holds that $t \equiv t'$ and $z \equiv z'$, so from lemma \ref{equiv_cancel}, $t=t'$ and $z=z'$ in $G$.
So, $xy=tz$ is a defining relation in $G$. \\
Note that if $[x][y]=[t][z]$ is a trivial relation in $G'$, that is $[x]=[t]$ and $[y]=[z]$, then
from lemma \ref{equiv_cancel} $x=t$ and $y=z$ in $G$ and so $xy=tz$ holds trivially in $G$.
So, $\Phi$ is an isomorphism of the groups $G$ and $G'$ and from lemma \ref{lem:G'isGarside}, $G$ is Garside.
\end{proof}
\subsection{Computation of $\Delta$ for a permutation solution}
In this subsection, we consider the structure group of a non-degenerate, braided and involutive permutation solution. We claim that given the decomposition of the defining function of the permutation solution as the product of disjoint cycles, one can easily find a Garside element in its structure group. This result can be extended to non-degenerate and braided permutation solutions that are not involutive, using the construction from section $7.1$.
\begin{prop}
Let $X=\{x_{1},..,x_{n}\}$, and $(X,S)$ be a non-degenerate, braided and involutive permutation solution defined by $S(i,j)=(f(j),f^{-1}(i))$, where $f$ is a permutation on $\{1,..,n\}$. Let $M$ be the monoid with the same presentation as the structure group. Assume that $f$ can be described as the product of disjoint cycles: \\ $f=(t_{1,1},..,t_{1,m_{1}})(t_{2,1},..,t_{2,m_{2}})..(t_{k,1},..,t_{k,m_{k}})(s_{1})..(s_{l})$, $t_{i,j},s_{*}\in \{1,..,n\}$. Then
$(i)$ For $1 \leq i \leq k$, $x_{t_{i,1}}^{m_{i}}=x_{t_{i,2}}^{m_{i}}=..=x_{t_{i,m_{i}}}^{m_{i}}$ in $M$ and this element is denoted by $x_{t_{i}}^{m_{i}}$.\\
$(ii)$ The element $\Delta=x_{t_{1}}^{m_{1}}x_{t_{2}}^{m_{2}}..x_{t_{k}}^{m_{k}}x_{s_{1}}..x_{s_{l}}$
is a Garside element in $M$.
\end{prop}
We refer the reader to \cite{chou} for the proof.
\section{Hamiltonian and molecular potential}
The Hamiltonian of the $i$-th Rydberg atom is given by
\begin{eqnarray}
H_{i} &=& \hbar \omega_{0} \sum_{j_z=-3/2}^{3/2}\ketbra{np_{3/2},j_z}{np_{3/2},j_z}_{i} \notag \\
&&+ \hbar (\omega_{0}+\Delta) \sum_{j_{z}=-1/2}^{1/2}\ketbra{np_{1/2},j_z}{np_{1/2},j_z}_{i},
\end{eqnarray}
where $\omega_{0}$ is the resonance frequency of the $ns \leftrightarrow np_{3/2}$ transition, and $\hbar \abs{\Delta}$ is the energy spacing between the $np_{1/2}$ and $np_{3/2}$ states. The dipole-dipole interaction~\cite{tannoudji:api} between atoms $i$ and $j$ located at
positions $\V{R}_{i}$ and $\V{R}_{j}$ is defined as
\begin{equation}
V_{ij} = \frac{1}{4\pi\varepsilon_0 R^3}
[ \VO{d}^{(i)} \cdot \VO{d}^{(j)}
- 3 (\VO{d}^{(i)} \cdot \vec{\V{R}}) (\VO{d}^{(j)} \cdot \vec{\V{R}}) ].
\end{equation}
Here $\varepsilon_{0}$ is the vacuum permittivity and $\VO{d}^{(i)}$ is the electric dipole-moment operator of the $i$-th atom. $\V{R} = \V{R}_{i} - \V{R}_{j}$, $R=\vert \V{R}\vert$, and $\vec{\V{R}} = \V{R} / R$ is the corresponding unit vector. The matrix elements of the electric-dipole-moment operator $\VO{d}$ of an individual atom are evaluated via the Wigner-Eckart theorem~\cite{walker:08,edmonds:amq}. We define the reduced dipole matrix element as $D = e\langle np \vert r\vert ns\rangle / \sqrt{3}$. For alkali-metal atoms with $n\geq 40$, we have $\langle np\vert r\vert ns\rangle \simeq n^{2}a_{0}$, where $a_{0}$ is the Bohr radius~\cite{walker:08}. The characteristic strength of the dipole-dipole interaction is given by $\hbar \Omega = \vert D\vert ^{2} / (4\pi \varepsilon_{0}R^{3})$. The characteristic length scale $r_{0}$ at which bound states occur is obtained by equating $\Omega$ with $\vert \Delta\vert$, which gives $r_{0}=[ \vert D\vert ^{2} / (4\pi \varepsilon_{0} \hbar \vert \Delta \vert) ]^{1/3}$. For $^{85}$Rb with $n=40$, the splitting is $\Delta\simeq 2\pi \times 1$ $\mathrm{GHz}$, which yields $r_{0}\simeq 1$ $\mathrm{\mu m}$ \cite{kiffner:14}.
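As a numerical check (added here, using standard values of the physical constants), the quoted estimate $r_{0}\simeq 1$ $\mathrm{\mu m}$ can be reproduced directly from the formula above:

```python
# Sketch: order-of-magnitude check of r0 = (|D|^2 / (4 pi eps0 hbar |Delta|))^(1/3)
# for 85Rb with n = 40 and Delta = 2 pi x 1 GHz, as quoted in the text.
import math

e    = 1.602176634e-19      # elementary charge, C
a0   = 5.29177210903e-11    # Bohr radius, m
eps0 = 8.8541878128e-12     # vacuum permittivity, F/m
hbar = 1.054571817e-34      # reduced Planck constant, J s

n = 40
radial = n**2 * a0                      # <np| r |ns> ~ n^2 a0
D = e * radial / math.sqrt(3)           # reduced dipole matrix element
Delta = 2 * math.pi * 1e9               # fine-structure splitting, rad/s

r0 = (abs(D)**2 / (4 * math.pi * eps0 * hbar * Delta)) ** (1 / 3)
print(f"r0 = {r0 * 1e6:.2f} um")        # ~ 1 um, consistent with the text
```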
\section{Susceptibility}
The Hamiltonian describing the interaction between a probe atom and EIT lasers is
\begin{eqnarray}
H_{\text{EIT}} &=& \hbar \sum_{j_z=-1/2}^{1/2}[\Delta_{p} \ketbra{e,j_z}{e,j_z} \notag \\
&&+(\Delta_{p}+\Delta_{c}) \ketbra{ns,j_z}{ns,j_z} \notag \\
&& + \frac{ \Omega_{p}}{2} ( \ketbra{e,j_z}{g,j_z}+\text{H.c.}) \notag \\
&&+ \frac{\Omega_{c}}{2} (\ketbra{ns,j_z}{e,j_z} + \text{H.c.} )],
\end{eqnarray}
where $g$ and $e$ represent the ground and excited states. The probe $\Omega_{p}$ and control $\Omega_{c}$ lasers drive the $g \leftrightarrow e$ and $e \leftrightarrow ns$ transitions with detunings $\Delta_{p}$ and $\Delta_{c}$, respectively. $\Gamma_{p}$ and $\Gamma_{c}$ are the spontaneous decay rates of the $e$ and $ns$ states.
In the limit of weak probe $\Omega_{p} \ll \Omega_{c}$ and small excitations to the $e$ and $ns$ states, the susceptibility corresponding to the probe transition is~\cite{badger:01,sevincli:11,gunter:12}
\begin{eqnarray}
\chi = \frac{i\Gamma_p}
{ \Gamma_p - i\Delta_p
+ \sum_k \Omega_c^2 F_{k}^2 (\Gamma_c - i\Delta_k)^{-1} },
\label{chi}
\end{eqnarray}
where the sum over $k$ accounts for the effect of multiple energy levels $\vert \psi_{k}\rangle$ of three-Rydberg-atom states~\cite{badger:01}. We label the $k$-th eigenenergy and eigenstate of three Rydberg atoms as $E_{k}$ and $\vert \psi_{k}\rangle$, respectively. $F_{k}^2=\sum_{j_z=-1/2}^{1/2}\abs{\braket{ns,j_z,\text{m}}{\psi_k}}^2$ with $\ket{ns,j_z,\text{m}}$ denotes a state in which the probe atom is in the $\ket{ns,j_z}$ state and the other two atoms are in the molecular state. $E_{0}$ is the energy of the $\ket{ns,j_z,\text{m}}$ state when the probe atom is far away from the molecule. The detuning $\Delta_{k}=\Delta_{p}+\Delta_{c}+E_{k}-E_{0}$.
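For illustration (added here), the susceptibility can be evaluated directly from the expression above. The parameter values in the sketch below are arbitrary illustrative choices in units of $\Gamma_{p}$, not values taken from the text: on two-photon resonance with a single unshifted level the medium becomes transparent, while switching off the control field restores the bare absorption.

```python
# Sketch (illustrative parameters, not from the text): evaluate
# chi = i Gamma_p / (Gamma_p - i Delta_p
#                    + sum_k Omega_c^2 F_k^2 / (Gamma_c - i Delta_k)).
def chi(delta_p, levels, gamma_p=1.0, gamma_c=1e-3, omega_c=2.0, delta_c=0.0):
    """levels: list of (F_k^2, E_k - E_0) pairs for the three-atom eigenstates."""
    s = sum(omega_c**2 * F2 / (gamma_c - 1j * (delta_p + delta_c + dE))
            for F2, dE in levels)
    return 1j * gamma_p / (gamma_p - 1j * delta_p + s)

# Single unshifted level with full overlap: EIT transparency on resonance.
transparent = chi(0.0, [(1.0, 0.0)])
# No coupling laser: bare two-level absorption, chi = i at Delta_p = 0.
absorbing = chi(0.0, [], omega_c=0.0)
print(transparent.imag, absorbing.imag)
```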
\section{IGRB Anisotropy}
In all-sky high-energy gamma-ray observations an intense diffuse emission originating from the Milky Way is visible.
Above 30~MeV the large majority of this emission is produced
by cosmic-ray (CR) ions interacting with the Galactic interstellar gas via neutral pion production and decay, and
by inverse Compton (IC) scattering of interstellar radiation-field photons off CR electrons.
A fainter gamma-ray background (IGRB), almost isotropic on large angular scales,
has also been detected, first by the SAS-2 mission \cite{fichtel}.
Later the IGRB energy spectrum was measured with good accuracy by the Energetic Gamma-Ray
Experiment Telescope \cite{Sreekumar},\cite{strong} on-board
the Compton Observatory and recently with the LAT detector on-board the Fermi
gamma-ray observatory \cite{abdoiso}.
A considerable part of the IGRB likely has an extragalactic
origin (the EGB) and carries information about the non-thermal phenomena in the universe.
The IGRB has a guaranteed contribution from the known extragalactic gamma-ray sources.
AGN represent the largest source population above 30~MeV detected by EGRET
\cite{hart} and the LAT detector
\cite{latcal1},\cite{latcal2}. Therefore, undetected AGN (those below the current
detection threshold) are the most likely candidates for the origin of the
bulk of the EGB emission. The estimates of the EGB fraction originating from AGN
vary between 20\% and $\approx$100\%, depending on the energy and the model (see \cite{abdoegb} and references therein).
Another main extragalactic candidate is star-burst galaxies \cite{starburst}.
A fraction of the IGRB might also originate from the sum of Galactic nearby sources
such as millisecond pulsars (MSPs) \cite{MSPs}.
The IGRB anisotropy study also provides a method for the indirect search for
dark matter (DM) with gamma rays, which is very complementary to the study of the
main DM overdensities, such as the Galactic center \cite{vitale} and
dwarf spheroidal galaxies \cite{dwarfs}, and also complementary to the search for lines \cite{lines} or other spectral features
from large areas of the sky.
In fact the sub-structures of the Milky Way dark matter halo and the
dark matter halos in the local universe might produce an imprint in the IGRB.
Dark matter overdensities might shine in gamma rays both in the case of
pair-annihilating DM particles (with the resulting gamma-ray flux proportional to
the square of the dark matter density) and pseudo-stable decaying ones (with flux proportional to the density).
The largest substructures of the Galactic halo might be individually detectable, while
the majority of them are likely to be below the detection threshold but still
able to contribute to the IGRB \cite{ullio} and to induce small-scale anisotropies \cite{JSG1}.
Information on the IGRB origins is carried by its small-scale angular fluctuations. If a diffuse emission originates from
unresolved source populations, then small angular scale fluctuations will arise because the number density
of the sources varies between different sky directions.
These fluctuations are a feature of the source distribution and persist also in the limit of infinite statistics,
contrary to Poisson fluctuations (photon noise),
which decrease with increasing event statistics.
It is therefore possible to discriminate between source-induced anisotropies and photon noise,
provided sufficient data statistics are available.
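The distinction drawn above can be illustrated with a toy pixel-counting simulation (added here, not part of the original analysis): the relative excess variance imprinted by a fixed, unresolved source population is recovered at the same level regardless of the exposure, once the expected Poisson term is subtracted.

```python
# Toy illustration (not from the paper): pixel-to-pixel fluctuations imprinted
# by an unresolved source population persist as photon statistics grow, while
# the Poisson photon-noise contribution averages away.
import math, random, statistics

random.seed(7)

def poisson(lam):
    # Knuth's algorithm; adequate for the modest rates used here.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

npix = 2000
sky = [1.0 + random.random() for _ in range(npix)]   # fixed expected rates (a.u.)

def rel_excess_var(exposure):
    """Relative pixel variance of the measured intensity, with the expected
    Poisson term <I>/exposure subtracted -- the source-induced part."""
    I = [poisson(mu * exposure) / exposure for mu in sky]
    mean = statistics.fmean(I)
    noise = mean / exposure          # expected Poisson variance of I per pixel
    return (statistics.pvariance(I) - noise) / mean**2

sky_var = statistics.pvariance(sky) / statistics.fmean(sky)**2
print(sky_var, rel_excess_var(4), rel_excess_var(64))
# all three numbers scatter around the same value, independent of exposure
```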
One method for the study of the IGRB anisotropy is the APS measurement.
In theoretical studies of the IGRB anisotropy the following contributors were considered:
(a) blazars \cite{10},\cite{11},\cite{12};
(b) star-forming galaxies \cite{13};
(c) Galactic MSPs \cite{14};
(d) annihilating or decaying dark matter in Galactic sub-halos \cite{15},\cite{16},\cite{17};
(e) dark matter in extragalactic structures \cite{9},\cite{10},\cite{12},\cite{17},\cite{18} and \cite{19},\cite{20}, \cite{21},\cite{22}.
In these studies it is also shown that the intrinsic clustering of many of the candidate populations
has a sub-dominant effect on the angular power spectrum in the multipole range 100--500.
Here we report on a search for anisotropies in the IGRB, performed with the data taken
with the LAT detector on board the Fermi satellite.
\label{Context}
\section{The LAT and the Data Analysis}
The Fermi Large Area Telescope has a wide field of view (2.4 sr) and a large effective
area ($\approx$8000 cm$^{2}$ for normally-incident photons above 1~GeV).
LAT is a pair-conversion telescope with a modular structure made of a 4$\times$4 array of $\emph{towers}$.
Each tower is composed of: (i) a precision silicon tracker (18 planes of Si-strip detectors
coupled with W conversion planes, with a total of 1.5 X$_{0}$ at normal incidence;
the tracker is divided into a first, thinner segment, called front, with a better
angular resolution, and a second, thicker one, called back);
(ii) a homogeneous CsI calorimeter (8.5 X$_{0}$ at normal incidence).
The pair-conversion telescope is covered with an anti-coincidence detector that
allows for rejection of charged particle events. Full details of the instrument, including
technical information about the detector, on-board and ground data processing, and
mission-oriented support, are given in \cite{24}.
Data taken during the first 56.6 Ms of observation ($\approx$22 months) were used.
The experimental data and simulations were analyzed with the LAT analysis software
Science Tools version v9r15p4, with the P6\_V3 LAT instrument response functions (IRFs).
The main analysis steps were:
\begin{itemize}
\item Data Preparation. By means of the gtselect tool:
(i) events of the $\emph{diffuse}$ class with energy between 1 and 50~GeV were selected;
(ii) data with zenith angle exceeding 105$^{\circ}$ were rejected
to reduce Earth gamma-ray albedo contamination;
(iii) events that converted in the front and in the back tracker sections were also
separated, in order to be analyzed independently.
Periods in which the LAT was in the South Atlantic Anomaly, was not in survey mode, or had a rocking angle exceeding 52$^{\circ}$
were discarded with the tool gtmktime.
The integrated livetime was calculated using
the gtltcube tool (step size in cos($\theta$) of 0.025,
pixel size of 0.125$^{\circ}$, corresponding to a HEALPix \cite{23} map at Nside = 512 resolution).
\item Exposure. Exposure maps were calculated using the gtexpcube tool
with the same pixel size as for gtltcube, for 42 logarithmic energy bins spanning
1.04 to 50~GeV, in order to have a good knowledge of the energy dependence of the exposure.
The $\emph{event shuffling technique}$ is an alternative method for the exposure calculation
that does not rely on the Monte Carlo based calculation of the exposure implemented in
the Science Tools. It was used for cross-checking possible exposure systematic errors.
With this method the arrival directions of pairs of detected events are swapped in instrument coordinates; this produces an exposure map with arbitrary normalization.
The same technique has also been used to search for anisotropy in the CR electrons arrival
directions, measured with the LAT, in \cite{eanis}.
\item Intensity Maps. Counts maps were built with the selected data.
Both the photon counts and exposure maps were converted into HEALPix-format maps
with Nside = 512, and HEALPix gamma-ray intensity maps in four energy bins were obtained.
\item Map Masking. Regions of the sky heavily contaminated by Galactic diffuse emission were excluded by masking Galactic
latitudes $|b| <$ 30$^{\circ}$, and masking sources in the Fermi 11-month catalog \cite{latcal1} within a 2$^{\circ}$ angular
radius. In this study we focused on multipoles $l\gtrsim$ 150 (corresponding to angular scales $<$ 2$^{\circ}$),
since lower multipoles (corresponding to correlations over larger angular scales) are likely more
contaminated by Galactic diffuse emission.
\item Intensity and Fluctuation APS.
The angular power spectra were calculated for the intensity map
I($\psi$), where $\psi$ is the sky direction.
The angular power spectrum is given by the coefficients
C$_{l} = \langle \vert a_{lm}\vert ^{2}\rangle$, with the $a_{lm}$ determined by expanding the map in spherical harmonics.
The intensity APS indicates the dimensionful amplitude of the anisotropy
and can be compared with predictions for source classes whose collective
intensity is known or assumed.
A dimensionless fluctuation APS has also been calculated by dividing the intensity
APS C$_{l}$ of a map by the squared mean sky intensity outside the mask, $\langle I \rangle^{2}$.
In the case of the shuffling technique only the fluctuation APS can be obtained.
The angular power spectra of the maps were calculated using the HEALPix package \cite{23}.
The measured APS were corrected for the power suppression due to the beam and
pixel window functions, and an approximate correction, valid at multipoles $l>$100, was applied
to account for the reduction in angular power due to masking.
For each energy bin, the APS of the maps of front- and back-converting events were calculated separately and then
combined by weighted average.
\item Validation Studies. Careful checks of the procedures and the related parameters were
performed as well as tests for assessing the origin of the measured angular power.
The full analysis was applied to a simulated point source population, in order to compare the APS determined
with the analysis chain and the one which could be analytically calculated.
The dependence of the results on the instrument response functions and on
the masking has also been studied. For the latter, in particular, both the latitude cut used to exclude the Galactic
plane and the radius of the circle used to mask each known source were varied.
In order to further evaluate the effect of residual Galactic diffuse emission on the
APS the analysis was repeated on the data after the subtraction of a model of the
Galactic diffuse emission ($\emph{Galactic Foreground Cleaning}$).
The details of these studies will be given in the final publication of the anisotropy study.
\end{itemize}
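Schematically (an illustration added here, not the actual pipeline), the noise subtraction, window-function deconvolution and mask correction listed above combine as $C_{l}^{\mathrm{signal}} \approx (C_{l}^{\mathrm{raw}}/f_{\mathrm{sky}} - C_{N})/W_{l}^{2}$, with $W_{l}$ the product of the beam and pixel window functions. The sketch below assumes a Gaussian beam window, whereas the LAT analysis derives it from the point-spread function:

```python
# Schematic sketch (illustrative only) of the APS corrections:
# C_l_signal ~ (C_l_raw / f_sky - C_N) / (W_beam_l * W_pix_l)**2.
# A Gaussian beam window is an assumption made here for simplicity.
import math

def beam_window(l, sigma_rad):
    # Gaussian beam window function W_l = exp(-l(l+1) sigma^2 / 2).
    return math.exp(-0.5 * l * (l + 1) * sigma_rad**2)

def corrected_cl(cl_raw, l, f_sky, c_noise, sigma_beam, w_pix=1.0):
    w = beam_window(l, sigma_beam) * w_pix
    return (cl_raw / f_sky - c_noise) / w**2

# Toy forward model: flat source spectrum c_sig plus photon noise c_n,
# observed on 30% of the sky with a 0.2 deg beam.
sigma = math.radians(0.2)
f_sky, c_n, c_sig, l = 0.3, 2e-17, 1e-17, 200
cl_raw = f_sky * (c_sig * beam_window(l, sigma)**2 + c_n)
print(corrected_cl(cl_raw, l, f_sky, c_n, sigma))   # recovers c_sig = 1e-17
```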
\begin{figure}
\centering
\includegraphics[width=8cm, height=5cm]{1.eps}
\includegraphics[width=8cm, height=5cm]{2.eps}
\includegraphics[width=8cm, height=5cm]{3.eps}
\includegraphics[width=8cm, height=5cm]{4.eps}
\caption{PRELIMINARY. Fluctuation APS of the IGRB minus photon noise, in four energy bins. For isotropic emission this difference would be consistent with zero. The large angular power at $l<$ 155 is likely originating from Galactic diffuse emission. For $l>$ 155 the measured power excess is approximately constant in multipole, suggesting that it originates from one or more unclustered source populations. The APS obtained with the event shuffling technique are also reported. The two different methods provide consistent results and show that possible inaccuracies in the exposure calculation have a negligible impact on these results.}
\label{fig:1}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm, height=5cm]{5.eps}
\includegraphics[width=8cm, height=5cm]{6.eps}
\includegraphics[width=8cm, height=5cm]{7.eps}
\includegraphics[width=8cm, height=5cm]{8.eps}
\caption{PRELIMINARY. Intensity APS of the IGRB minus photon noise, in four energy bins. APS of the experimental data, simulated default model and high resolution models are reported. The measured power above l $\approx$ 155 is not found in either of the two models. Points from different data sets are offset slightly in multipole for clarity.}\label{fig:2}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=8cm, height=5cm]{9.eps}
\includegraphics[width=8cm, height=5cm]{10.eps}
\includegraphics[width=8cm, height=5cm]{11.eps}
\includegraphics[width=8cm, height=5cm]{12.eps}
\caption{PRELIMINARY. Intensity APS of the IGRB minus photon noise, in four energy bins. The APS of the single model components are reported. The large power in the models at $l < 155$ is due to the GAL component. The isotropic component (ISO) provides APS compatible with zero as expected for isotropic emission; likewise, the source component (CAT) provides no contribution because all sources were effectively masked. Total Model points are offset slightly in multipole for clarity.}\label{fig:3}
\end{figure}
\section{Simulated Models}
Detailed Monte Carlo simulations of Fermi-LAT all-sky observations were performed.
The purpose was to have a reference model to be compared with the real data set.
The state-of-the-art modelling of the LAT was used by means of the gtobssim tool.
This code requires as input a detailed spatial and spectral model of the
emission to be simulated.
Details of real LAT observations, such as spacecraft pointing and live-time history, can also be included in the simulations.
The gtobssim tool generates simulated photon events.
The P6\_V3\_DIFFUSE IRFs and the actual spacecraft pointing and live-time history matching the observational time
interval of the data were used to generate the simulated data sets. Two models of the gamma-ray sky were simulated.
Each model is the sum of three components:
\begin{enumerate}
\item GAL - a model of the Galactic diffuse emission;
\item CAT - the 1451 sources in the first Fermi-LAT source catalog (1FGL) \cite{latcal1};
\item ISO - an isotropic background.
\end{enumerate}
The same CAT and ISO components are included in both models;
the two models differ only in the GAL component used.
GAL describes both the spatial distribution and the energy spectrum of the Galactic diffuse emission.
The GAL component for the reference sky model used in this analysis (hereafter, Model) is the recommended Galactic
diffuse model for LAT data analysis, gll\_iem\_v02.fit \cite{galmod}, which has an angular resolution of 0.5$^{\circ}$.
This model was used to obtain the 1FGL catalog; a detailed description can be found in \cite{galref}.
A higher-resolution model (Hi-Res Model) was simulated for comparison, making it possible to test the
impact of smaller-scale details in the Galactic diffuse emission. This model (ring\_21month\_v1.fit) is internal to the LAT collaboration,
and was built using the same method as gll\_iem\_v02.fit, but differs primarily in the following ways: (i) it was
constructed using 21 months of Fermi-LAT observations, while gll\_iem\_v02.fit was based on 9 months of data;
(ii) its grid angular resolution is 0.125$^{\circ}$, in order to fully exploit the angular resolution of the
CO maps \cite{comap} used to build it; and (iii) additional large-scale structures, such as the Fermi bubbles \cite{bubbles}, are included
in the model through the use of simple templates.
A single power-law spectrum is assumed for all the sources in CAT, and
the locations, average integral fluxes, and photon spectral indices are as reported in the 1FGL catalog.
All 1451 sources were included in the simulation. ISO represents the sum of the Fermi-measured IGRB and an additional
isotropic component presumably due to unrejected charged particles; for this component the spectral template
isotropic\_iem\_v02.txt was used. For both the Model and the Hi-Res Model, the sum of the three simulated components results in a
description of the gamma-ray sky that closely approximates the angular-dependent flux and energy spectrum of the all-sky
emission measured by the Fermi-LAT. Although the simulated models may not accurately reproduce some large-scale structures,
e.g., Loop I \cite{loop1} and the Fermi bubbles \cite{bubbles}, these features are not expected to induce anisotropies on the small angular scales on which
we focus in this work.
The simulated models were processed and their angular power spectra calculated using the same analysis pipeline as the data.
In Figure 2 the APS of the data and models are shown.
The contributions to APS of the individual components of the default model are shown in Figure 3.
At all energies the only component of the models contributing significantly at $l<155$ is the Galactic diffuse emission.
The contribution from the isotropic component is negligible, since this component is isotropic
by construction and thus, after the photon noise is subtracted, it should only contribute to the monopole ($l=0$)
term.
The source catalog component contributes zero power at all energies and multipoles because all these sources were
effectively masked, giving rise to a negligible residual effect.
We remark that in general the APS of distinct components are not linearly additive due to contributions from cross-correlations between the components.
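As a toy numerical illustration of this point (our own sketch, not part of the analysis pipeline), one can check that for harmonic coefficients of two components at a fixed multipole, the power of the summed map equals the sum of the individual powers plus twice the cross-power, so the component APS only add when the cross-correlation vanishes:

```python
import numpy as np

rng = np.random.default_rng(0)

def power(alm):
    # Angular power at a single multipole l from its 2l+1 coefficients a_lm
    return np.sum(np.abs(alm) ** 2) / alm.size

def cross_power(alm, blm):
    # Cross angular power of two coefficient sets at the same multipole
    return np.sum((alm * np.conj(blm)).real) / alm.size

l = 100
# Toy coefficients for two emission components; blm is deliberately
# correlated with alm so that the cross term does not vanish
alm = rng.normal(size=2 * l + 1) + 1j * rng.normal(size=2 * l + 1)
blm = 0.5 * alm + rng.normal(size=2 * l + 1) + 1j * rng.normal(size=2 * l + 1)

lhs = power(alm + blm)
rhs = power(alm) + power(blm) + 2 * cross_power(alm, blm)
print(lhs - rhs)  # zero up to rounding: the cross term is essential
```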
\section{Results}
Significant ($>$3$\sigma$ CL in each energy bin) angular power above the photon noise level (see Figure 1) is detected in the data at multipoles 155 $\le l \le$ 504 for energies below 10~GeV, and at lower significance in the 10-50~GeV energy bin.
The sensitivity of the measurement at high energies is limited by the low photon statistics.
The measured angular power is consistent with a constant value within each energy bin, which suggests that it originates from one or more unclustered populations of point sources.
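A minimal sketch of how a constant angular power and its detection significance can be extracted from band-averaged APS measurements (the numbers below are purely illustrative, not the measured Fermi-LAT band powers):

```python
import numpy as np

def constant_aps_fit(c_l, sigma):
    """Inverse-variance weighted fit of a constant angular power C_P
    to band-averaged APS measurements c_l with 1-sigma errors sigma."""
    w = 1.0 / sigma**2
    c_p = np.sum(w * c_l) / np.sum(w)    # best-fit constant
    c_p_err = np.sqrt(1.0 / np.sum(w))   # its uncertainty
    chi2 = np.sum(w * (c_l - c_p)**2)    # goodness of the constant fit
    return c_p, c_p_err, chi2

# Illustrative band powers in four multipole bins spanning 155 <= l <= 504
c_l = np.array([1.1e-17, 0.9e-17, 1.0e-17, 1.05e-17])
sigma = np.array([0.3e-17, 0.3e-17, 0.35e-17, 0.4e-17])
c_p, c_p_err, chi2 = constant_aps_fit(c_l, sigma)
print(f"C_P = {c_p:.2e} +/- {c_p_err:.2e}, "
      f"significance = {c_p / c_p_err:.1f} sigma")
```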
The fluctuation angular power detected in this analysis falls below the level predicted for some models of blazars, MSPs, and Galactic and extragalactic DM structures, and so the measured amplitude of the fluctuation angular power can limit the contribution to the total IGRB intensity of each source class.
The measured fluctuation angular power is almost independent of energy, and so it might originate from a single dominant source class, although a mild energy dependence cannot be excluded.
The energy dependence of the intensity angular power of the data is well-described by that arising from a single source class
with a power-law energy spectrum with photon index $\Gamma = -2.40 \pm 0.07$. This value is compatible with the mean intrinsic spectral index of blazars as determined from recent Fermi-LAT measurements.
The study of the currently available Fermi-LAT data has provided the first detection of small-scale angular power in the IGRB.
Future analyses on larger data sets could allow anisotropy searches to be extended to higher energies, and will likely be more sensitive to features in the anisotropy energy spectrum.
Detailed studies of population models are required in order to:
(i) identify the source classes which are actually contributing to the IGRB anisotropy and
(ii) provide upper limits on the contribution of other candidate source populations.
A possible approach is to study the APS induced by source population models via Monte Carlo
simulation of the model emission, with the best possible characterization of the instrument response.
For example, in the DM case a model of all the DM structures and substructures (Galactic and in the local Universe)
which can contribute should be considered \cite{fornasa};
furthermore, a particle physics model should be assumed, or at least the DM particle mass and
the annihilation cross-section for self-annihilating DM, or the mean lifetime for quasi-stable decaying DM.
The APS induced by these DM models can then be obtained with the same procedure used for the simulated
Model and Hi-Res Model of Section 3.
\medskip
{\bf Acknowledgements.}
The \textit{Fermi} LAT Collaboration acknowledges generous ongoing support
from a number of agencies and institutes that have supported both the
development and the operation of the LAT as well as scientific data analysis.
These include the National Aeronautics and Space Administration and the
Department of Energy in the United States, the Commissariat \`a l'Energie Atomique
and the Centre National de la Recherche Scientifique / Institut National de Physique
Nucl\'eaire et de Physique des Particules in France, the Agenzia Spaziale Italiana
and the Istituto Nazionale di Fisica Nucleare in Italy, the Ministry of Education,
Culture, Sports, Science and Technology (MEXT), High Energy Accelerator Research
Organization (KEK) and Japan Aerospace Exploration Agency (JAXA) in Japan, and
the K.~A.~Wallenberg Foundation, the Swedish Research Council and the
Swedish National Space Board in Sweden.
Additional support for science analysis during the operations phase is gratefully
acknowledged from the Istituto Nazionale di Astrofisica in Italy and the Centre National d'\'Etudes Spatiales in France.
\bibliographystyle{model3-num-names}
\section{Introduction}
Motivic homotopy theory intertwines classical algebraic geometry and stable algebraic topology.
In this paper we study obstruction theory for $E_\infty$-structures in the motivic setup.
An $E_\infty$-structure on a spectrum refers as usual to a ring structure which is not just given up to homotopy,
but where the homotopies encode a coherent homotopy commutative multiplication.
Many of the examples of motivic ring spectra begin life as commutative monoids in the motivic stable homotopy category.
We are interested in the following questions:
When can the multiplicative structure of a given commutative monoid in the motivic stable homotopy category be refined to an $E_\infty$-ring spectrum?
And if such a refinement exists, is it unique?
The questions of existence and uniqueness of $E_\infty$-structures and their many ramifications have been studied extensively in topology.
The first motivic examples worked out in this paper are of $K$-theoretic interest.
\vspace{0.1in}
The complex cobordism spectrum $\mathsf{MU}$ and its motivic analogue $\mathsf{MGL}$ have natural $E_\infty$-structures.
In the topological setup,
Baker and Richter \cite{bakerrichter} have shown that the complex $K$-theory spectrum $\mathsf{KU}$,
the Adams summand $\mathsf{L}$ and the real $K$-theory spectrum $\mathsf{KO}$ admit unique $E_\infty$-structures.
The results in \cite{bakerrichter} are approached via the obstruction theory developed by Robinson in \cite{Robinson},
where it is shown that existence and uniqueness of $E_\infty$-structures are guaranteed provided certain $\Gamma$-cohomology groups vanish.
\vspace{0.1in}
In our approach we rely on analogous results in the motivic setup,
see \cite{robinsonrevisited} for a further generalization.
We show that the relevant motivic $\Gamma$-cohomology groups vanish in the case of the algebraic $K$-theory spectrum $\mathsf{KGL}$ (Theorem \ref{gammacomputation})
and the motivic Adams summand $\mathsf{ML}$ (see \S\ref{section:ThemotivicAdamssummandsMLandml}).
The main ingredients in the proofs are new computations of the $\Gamma$-homology complexes of $\mathsf{KU}$ and $\mathsf{L}$,
see Theorem \ref{k-theory-coops-gamma-cotangentcomplex} and Lemma \ref{lemma:L_0L},
and the Landweber base change formula for the motivic cooperations of $\mathsf{KGL}$ and $\mathsf{ML}$.
Our main result for $\mathsf{KGL}$ can be formulated as follows:
\begin{theorem*}
The algebraic $K$-theory spectrum $\mathsf{KGL}$ has a unique $E_{\infty}$-structure refining its multiplication
in the motivic stable homotopy category.
\end{theorem*}
The existence of the $E_\infty$-structure on $\mathsf{KGL}$ was already known using the Bott inverted model for algebraic $K$-theory,
see \cite{RSO}, \cite{SO:Bottinverted}, \cite{gepnersnaith},
but the analogous result for $\mathsf{ML}$ is new.
The uniqueness part of the Theorem is new,
and it rules out the existence of any exotic $E_\infty$-structures on $\mathsf{KGL}$.
We note that related motivic $E_\infty$-structures have proven useful in the recent constructions of Atiyah-Hirzebruch types of spectral sequences for motivic twisted $K$-theory
\cite{SO:twistedKtheory}.
\vspace{0.1in}
One may ask if the uniqueness of $E_{\infty}$-structures on $\mathsf{KGL}$ has any consequences for the individual algebraic $K$-theory spectra of smooth schemes over a fixed base scheme.
If the base scheme is regular,
consider the following presheaves of $E_\infty$-ring spectra.
The first one arises from evaluating the $E_\infty$-spectrum $\mathsf{KGL}$ on individual smooth schemes,
and the second one from a functorial construction of algebraic $K$-theory spectra,
cf.~\cite{Jardine}.
It is natural to ask if these two presheaves are equivalent in some sense.
If the second presheaf is obtained from a motivic $E_\infty$-spectrum,
then our uniqueness result would answer this question in the affirmative.
The $K$-theory presheaf has this property when viewed as an $A_\infty$-object,
see \cite{youngsoo-kim},
but as an $E_\infty$-object this is still an open problem.
\vspace{0.1in}
In topology,
the Goerss-Hopkins-Miller obstruction theory \cite{goersshopkins} allows one to gain control over moduli spaces of $E_\infty$-structures.
In favorable cases,
such as for Lubin-Tate spectra,
the moduli spaces are $K(\pi,1)$'s giving rise to actions of certain automorphism groups as $E_\infty$-maps.
A motivic analogue of this obstruction theory has not been worked out.
One reason for doing so is that having a homotopy ring structure on a spectrum is often not sufficient in order to form homotopy fixed points under a group action.
In Subsection \ref{hgdfsd} we note an interesting consequence concerning $E_{\infty}$-structures on hermitian $K$-theory.
\vspace{0.1in}
In Section \ref{juytd} we show that the connective cover $\mathsf{kgl}$ of the algebraic $K$-theory spectrum has a unique $E_{\infty}$-structure,
and ditto in Section \ref{section:ThemotivicAdamssummandsMLandml} for the connective cover of the Adams summand.
\section{Algebraic $K$-theory $\mathsf{KGL}$}
In this section we shall present the $\Gamma$-cohomology computation showing there is a unique $E_{\infty}$-structure on the algebraic $K$-theory spectrum $\mathsf{KGL}$.
Throughout we work over some noetherian base scheme of finite Krull dimension, which we omit from the notation.
There are two main ingredients which make this computation possible:
First, the $\Gamma$-homology computation of $\mathsf{KU}_0\mathsf{KU}$ over $\mathsf{KU}_0=\mathbf{Z}$,
where $\mathsf{KU}$ is the complex $K$-theory spectrum.
Second,
we employ base change for the motivic cooperations of algebraic $K$-theory,
as shown in our previous work \cite{motiviclandweber}.
\subsection{The $\Gamma$-homology of $\mathsf{KU}_0\mathsf{KU}$ over $\mathsf{KU}_0$}
\label{subsection:TheGamma-homologyofKU_0KUoverKU_0}
For a map $A\rightarrow B$ between commutative algebras we denote Robinson's $\Gamma$-homology complex by $\widetilde{{\mathcal K}}(B\vert A)$ \cite[Definition 4.1]{Robinson}.
Recall that $\widetilde{{\mathcal K}}(B\vert A)$ is a homological double complex of $B$-modules concentrated in the first quadrant.
The same construction can be performed for maps between graded and bigraded algebras.
In all cases we let ${{\mathcal K}}(B\vert A)$ denote the total complex associated with the double complex $\widetilde{{\mathcal K}}(B\vert A)$.
\vspace{0.1in}
The $\Gamma$-cohomology
$$
\mathsf{H}\Gamma^*(\mathsf{KU}_0\mathsf{KU}|\mathsf{KU}_0, -)=\mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KU}_0\mathsf{KU}}({{\mathcal K}}(\mathsf{KU}_0\mathsf{KU}|\mathsf{KU}_0),-)
$$
has been computed for various coefficients in \cite{bakerrichter}.
In what follows we require precise information about the complex ${{\mathcal K}}(\mathsf{KU}_0\mathsf{KU}|\mathsf{KU}_0)$,
since it satisfies a motivic base change property,
cf.~Lemma \ref{naivebasechange}.
\begin{lemma}
\label{rational}
Let $X\in \mathsf{Ch}_{\ge 0}(\mathsf{Ab})$ be a non-negative chain complex of abelian groups.
The following are equivalent:
\begin{enumerate}
\item[i)] The canonical map $X\longrightarrow X\otimes_\mathbf{Z}^{\mathbf{L}} \mathbf{Q}=X\otimes_\mathbf{Z}\mathbf{Q}$ is a quasi isomorphism.
\item[ii)] For every prime $p$, there is a quasi isomorphism $X\otimes ^{\mathbf{L}}_\mathbf{Z}{\mathbf F}_p\simeq 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
It is well known that $X$ is formal \cite[pg.~164]{goerssjardine},
i.e.~there is a quasi isomorphism
\[
X\simeq \bigoplus_{n\ge 0}H_n(X)[n].
\]
(For an abelian group $A$ and integer $n$,
we let $A[n]$ denote the chain complex that consists of $A$ concentrated in degree $n$.)
Hence for every prime $p$,
\[
X\otimes^{\mathbf{L}}_\mathbf{Z} {\mathbf F}_p\simeq \bigoplus_{n\ge 0}\left( H_n(X)[n]\otimes^{\mathbf{L}}_\mathbf{Z}{\mathbf F}_p\right).
\]
By resolving ${\mathbf F}_p=(\mathbf{Z}\stackrel{\cdot p}{\longrightarrow}\mathbf{Z})$ one finds an isomorphism
\[
H_*(A[n]\otimes_{\mathbf{Z}}^{\mathbf{L}}{\mathbf F}_p)
\cong
(A/pA)[n]\oplus A\{p\}[n+1]
\]
for every abelian group $A$ and integer $n$.
Here $A\{p\}$ is shorthand for $\{ x\in A\, |\, px=0\}$.
In summary,
ii) holds if and only if the multiplication by $p$ map
\[ \cdot p:H_*(X)\longrightarrow H_*(X)\]
is an isomorphism for every prime $p$. The latter is equivalent to i).
\end{proof}
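The key step in the proof, computing $H_*(A[n]\otimes^{\mathbf{L}}_{\mathbf{Z}}{\mathbf F}_p)$ from the resolution ${\mathbf F}_p=(\mathbf{Z}\stackrel{\cdot p}{\longrightarrow}\mathbf{Z})$, can be sanity-checked by brute force for cyclic groups (our own illustration, not part of the argument): tensoring $A=\mathbf{Z}/N$ with the resolution gives the two-term complex $A\stackrel{\cdot p}{\longrightarrow}A$, whose cokernel is $A/pA$ and whose kernel is $A\{p\}$.

```python
def mult_p_homology(N, p):
    # Homology of the two-term complex (Z/N --*p--> Z/N), which computes
    # A[0] tensor^L F_p for A = Z/N: degree 0 is the cokernel A/pA,
    # degree 1 is the kernel A{p} = {x in A : p*x = 0}
    elements = range(N)
    image = {(p * a) % N for a in elements}
    kernel = [a for a in elements if (p * a) % N == 0]
    return N // len(image), len(kernel)  # (|A/pA|, |A{p}|)

print(mult_p_homology(12, 2))  # (2, 2): both groups are Z/2
print(mult_p_homology(5, 2))   # (1, 1): mult. by 2 is invertible on Z/5
```

Both homology groups always have the same order, reflecting the fact that an endomorphism of a finite abelian group has kernel and cokernel of equal size.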
We shall use the previous lemma in order to study cotangent complexes introduced by Illusie in \cite{illusie}.
Let $R$ be a ring and set $R_\mathbf{Q}:=R\otimes_\mathbf{Z}\mathbf{Q}$.
Then there is a canonical map
\[
\xymatrix{
\tau_R: {\mathbb L}_{R/\mathbf{Z}} \ar[r] &
{\mathbb L}_{R/\mathbf{Z}}\otimes_\mathbf{Z}^{\mathbf{L}}\mathbf{Q}\simeq
{\mathbb L}_{R/\mathbf{Z}}\otimes_R^{\mathbf{L}} R_\mathbf{Q}\ar[r]^(.7){\simeq} &
{\mathbb L}_{R_\mathbf{Q}/\mathbf{Q}} }
\]
of cotangent complexes in $\mathsf{Ho}(\mathsf{Ch}_{\ge 0}(\mathbf{Z}))$.
The first quasi isomorphism is obvious,
while the second one is an instance of flat base change for cotangent complexes.
\begin{lemma}\label{test}
The following are equivalent:
\begin{enumerate}
\item[i)] $\tau_R$ is a quasi isomorphism.
\item[ii)] For every prime $p$,
there is a quasi isomorphism ${\mathbb L}_{R/\mathbf{Z}}\otimes_\mathbf{Z}^{\mathbf{L}}{\mathbf F}_p\simeq 0$.
\end{enumerate}
If the abelian group underlying $R$ is torsion free,
then i) and ii) are equivalent to
\begin{enumerate}
\item[iii)] For every prime $p$, ${\mathbb L}_{(R/pR)/{\mathbf F}_p}\simeq 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
The equivalence of i) and ii) follows by applying Lemma \ref{rational} to $X={\mathbb L}_{R/\mathbf{Z}}$.
If $R$ is torsion free,
then it is flat as a $\mathbf{Z}$-algebra.
Hence,
by flat base change,
there exists a quasi isomorphism
\[
{\mathbb L}_{R/\mathbf{Z}}\otimes_\mathbf{Z}^{\mathbf{L}}{\mathbf F}_p\simeq {\mathbb L}_{(R/pR)/{\mathbf F}_p}.
\]
\end{proof}
The following is our analogue for Robinson's $\Gamma$-homology complex of the Baker-Richter result \cite[Theorem 5.1]{bakerrichter}.
\begin{theorem}
\label{k-theory-coops-gamma-cotangentcomplex}
\begin{itemize}
\item [i)] Let $R$ be a torsion free ring such that ${\mathbb L}_{(R/pR) / {\mathbf F}_p}\simeq 0$ for every prime $p$,
e.g.~assume that ${\mathbf F}_p\to R/pR$ is ind-\'etale for all $p$.
Then there is a quasi isomorphism
\[
{{\mathcal K}}(R|\mathbf{Z})\simeq {{\mathcal K}}(R_\mathbf{Q}|\mathbf{Q})
\]
in the derived category of $R$-modules.
\item[ii)] There is a quasi isomorphism
\[
{{\mathcal K}}(\mathsf{KU}_0\mathsf{KU} | \mathsf{KU}_0)\simeq (\mathsf{KU}_0\mathsf{KU})_\mathbf{Q}[0]
\]
in the derived category of $\mathsf{KU}_0\mathsf{KU}$-modules.
\end{itemize}\end{theorem}
\begin{proof}
\begin{itemize}\item[i)]
The Atiyah-Hirzebruch spectral sequence noted in \cite[Remark 2.3]{richter} takes the form
\[
E^2_{p,q}=H^p({\mathbb L}_{R/\mathbf{Z}} \otimes_\mathbf{Z}^{\mathbf{L}} \Gamma^q(\mathbf{Z}[x] |\mathbf{Z}))\Rightarrow H^{p+q}({{\mathcal K}}(R|\mathbf{Z})).
\]
Our assumptions on $R$ and Lemma \ref{test} imply that the $E^2$-page consists of $\mathbf{Q}$-vector spaces.
Hence so is the abutment,
and there exists a quasi isomorphism between complexes of $R$-modules
\[
{{\mathcal K}}(R|\mathbf{Z})\stackrel{\simeq}{\to} {{\mathcal K}}(R|\mathbf{Z})\otimes_\mathbf{Z}\mathbf{Q}.
\]
Moreover,
by Lemma \ref{naivebasechange},
there is a quasi isomorphism
\[{{\mathcal K}}(R|\mathbf{Z})\otimes_\mathbf{Z}\mathbf{Q} \simeq {{\mathcal K}}(R_\mathbf{Q}|\mathbf{Q}).\]
\item[ii)]
According to \cite[Theorem 3.1, Corollary 3.4, (a)]{bakerrichter} and the Hopf algebra isomorphism $A^{st}\simeq \mathsf{KU}_0\mathsf{KU}$ \cite[Proposition 6.1]{bakerrichter},
the ring $R:=\mathsf{KU}_0\mathsf{KU}$ satisfies the assumptions of part i)\footnote{This follows also easily from Landweber exactness of $\mathsf{KU}$.}.
Now since $\mathsf{KU}_0\cong\mathbf{Z}$,
\[
{{\mathcal K}}(\mathsf{KU}_0\mathsf{KU} | \mathsf{KU}_0)\simeq {{\mathcal K}}((\mathsf{KU}_0\mathsf{KU})_\mathbf{Q} | \mathbf{Q}).
\]
We have that $(\mathsf{KU}_0\mathsf{KU})_\mathbf{Q}\simeq\mathbf{Q}[w^{\pm 1}]$ \cite[Theorem 3.2, (c)]{bakerrichter} is a smooth $\mathbf{Q}$-algebra.
Hence,
since $\Gamma$-cohomology agrees with Andr\'e-Quillen cohomology over $\mathbf{Q}$,
there are quasi isomorphisms
\[
{{\mathcal K}}(\mathsf{KU}_0\mathsf{KU} | \mathsf{KU}_0)
\simeq
\Omega^1_{\mathbf{Q}[w^{\pm 1}]| \mathbf{Q}}[0]
\simeq
(\mathsf{KU}_0\mathsf{KU})_\mathbf{Q}[0].
\]
\end{itemize}
\end{proof}
\subsection{The $\Gamma$-homology of $\mathsf{KGL}_{\ast\ast} \mathsf{KGL}$ over $\mathsf{KGL}_{\ast\ast}$}
The strategy in what follows is to combine the computations for $\mathsf{KU}$ in \S\ref{subsection:TheGamma-homologyofKU_0KUoverKU_0} with motivic Landweber exactness \cite{motiviclandweber}.
To this end we require the following general base change result,
which was also used in the proof of Theorem \ref{k-theory-coops-gamma-cotangentcomplex}.
\begin{lemma}
\label{naivebasechange}
For a pushout of ordinary, graded or bigraded commutative algebras
\begin{equation*}
\xymatrix{
A\ar[d]\ar[r] & B\ar[d] \\
C\ar[r] & D }
\end{equation*}
there are isomorphisms between complexes of $D$-modules
$$
{{\mathcal K}}(D\vert C)
\cong
{{\mathcal K}}(B\vert A)\otimes_{B}D
\cong
{{\mathcal K}}(B\vert A)\otimes_{A}C.
$$
If $B$ is flat over $A$,
then $\widetilde{{\mathcal K}}(B\vert A)$ is a first quadrant homological double complex of flat $B$-modules;
thus, in the derived category of $D$-modules there are quasi isomorphisms
$$
{{\mathcal K}}(D\vert C)
\simeq
{{\mathcal K}}(B\vert A)\otimes^{\mathbf{L}}_{B}D
\simeq
{{\mathcal K}}(B\vert A)\otimes^{\mathbf{L}}_{A}C.
$$
\end{lemma}
\begin{proof}
Following the notation in \cite[\S4]{Robinson},
let $(B\vert A)^{\otimes}$ denote the tensor algebra of $B$ over $A$.
Then $(B\vert A)^{\otimes}\otimes_{A}B$ has a natural $\Gamma$-module structure over $B$,
cf.~\cite[\S4]{Robinson}.
Here $\Gamma$ denotes the category of finite based sets and basepoint preserving maps.
It follows that $((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D$ is a $\Gamma$-module over $D$.
Moreover,
by base change for tensor algebras,
there exists an isomorphism of $\Gamma$-modules in $D$-modules
$$
((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D
\cong
(D\vert C)^{\otimes}\otimes_{C}D.
$$
Here we use that the $\Gamma$-module structure on $(B\vert A)^{\otimes}\otimes_{A} M$,
for $M$ a $B$-module, is given as follows:
For a map $\varphi \colon [m] \to [n]$ between finite pointed sets,
$$
(B \otimes_A B \otimes_A \cdots \otimes_A B)\otimes_A M \to (B \otimes_A B \otimes_A \cdots \otimes_A B) \otimes_A M
$$
sends $b_1 \otimes \cdots \otimes b_m \otimes m$ to
$$
(\prod_{i \in \varphi^{-1}(1)} b_i) \otimes \cdots \otimes (\prod_{i \in \varphi^{-1}(n)} b_i) \otimes ((\prod_{i \in \varphi^{-1}(0)} b_i) \cdot m).
$$
By convention,
if $\varphi^{-1}(j)=\emptyset$ then $\prod_{i \in \varphi^{-1}(j)} b_i=1$.
Robinson's $\Xi$-construction yields an isomorphism between double complexes of $D$-modules
$$
\widetilde{{\mathcal K}}(D\vert C)=
\Xi((D\vert C)^{\otimes}\otimes_{C}D)
\cong
\Xi(((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D).
$$
Inspection of the $\Xi$-construction reveals there is an isomorphism
$$
\Xi(((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D)
\cong
\Xi((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D.
$$
By definition,
this double complex of $D$-modules is $\widetilde{{\mathcal K}}(B\vert A)\otimes_{B}D\cong\widetilde{{\mathcal K}}(B\vert A)\otimes_{A}C$.
This proves the first assertion by comparing the corresponding total complexes.
The remaining claims follow easily.
\end{proof}
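The explicit formula above for the $\Gamma$-module structure can be made concrete in a toy model (our own illustration, not part of Robinson's construction): taking $A=B=M=\mathbf{Z}$, a pure tensor $b_1\otimes\cdots\otimes b_m\otimes m$ is just a list of integers together with a module element, and a based map $\varphi\colon[m]\to[n]$ acts by multiplying the factors over each fiber, with the fiber over the basepoint acting on $m$.

```python
from math import prod

def gamma_action(phi, n, bs, m):
    # Induced map on a pure tensor (b_1 x ... x b_m) x m for a based map
    # phi: {0,...,m} -> {0,...,n} with phi(0) = 0, given as a dict.
    # Empty products are 1 by convention.
    assert phi[0] == 0, "a based map must preserve the basepoint 0"
    fibers = {j: [bs[i - 1] for i in range(1, len(bs) + 1) if phi[i] == j]
              for j in range(n + 1)}
    new_bs = [prod(fibers[j]) for j in range(1, n + 1)]  # new tensor factors
    new_m = prod(fibers[0]) * m  # factors sent to the basepoint act on m
    return new_bs, new_m

# phi sends 1, 2 to 1 and 3 to the basepoint: b_3 is absorbed into m
phi = {0: 0, 1: 1, 2: 1, 3: 0}
print(gamma_action(phi, 2, [2, 3, 5], 7))  # ([6, 1], 35)
```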
Next we recall the structure of the motivic cooperations of the algebraic $K$-theory spectrum $\mathsf{KGL}$.
The algebras we shall consider are bigraded as follows:
$\mathsf{KU}_0\cong\mathbf{Z}$ in bidegree $(0,0)$ and $\mathsf{KU}_*\cong\mathbf{Z}[\beta^{\pm 1}]$ with the Bott-element $\beta$ in bidegree $(2,1)$.
With these conventions,
there is a canonical bigraded map
\[
\mathsf{KU}_*\to \mathsf{KGL}_{\ast\ast}.
\]
\begin{lemma}\label{coops}
There are pushouts of bigraded algebras
\begin{equation*}
\xymatrix{
\mathsf{KU}_{\ast} \ar[d]\ar[r]^-{\eta_L} & \mathsf{KU}_{\ast}\mathsf{KU} \ar[d] \\
\mathsf{KGL}_{\ast\ast} \ar[r]^-{\eta_L} & \mathsf{KGL}_{\ast\ast}\mathsf{KGL} }
\;\;\;\;\;\;
\xymatrix{
\mathsf{KU}_{0} \ar[d]\ar[r]^-{(\eta_L)_0} & \mathsf{KU}_{0}\mathsf{KU}\ar[d] \\
\mathsf{KU}_{\ast}\ar[r]^-{\eta_L} & \mathsf{KU}_{\ast}\mathsf{KU} }
\end{equation*}
and a quasi isomorphism in the derived category of
$\mathsf{KGL}_{\ast\ast}\mathsf{KGL}$-modules
$$
{{\mathcal K}}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast})
\simeq
{{\mathcal K}}(\mathsf{KU}_{0}\mathsf{KU}\vert\mathsf{KU}_{0})\otimes^{\mathbf{L}}_{\mathsf{KU}_{0}\mathsf{KU}}\mathsf{KGL}_{\ast\ast}\mathsf{KGL}.
$$
\end{lemma}
\begin{proof}
Here, $\eta_L$ is a generic notation for the left unit of some flat Hopf-algebroid.
The first pushout is shown in \cite[Proposition 9.1, (c)]{motiviclandweber}.
The second pushout is in \cite{bakerrichter}.
Applying Lemma \ref{naivebasechange} twice gives the claimed quasi isomorphism.
\end{proof}
Next we compute the $\Gamma$-cohomology of the motivic cooperations of $\mathsf{KGL}$.
\begin{theorem}
\begin{itemize}\label{gammacomputation}
\item[i)] There is an isomorphism
\[
\mathsf{H}\Gamma^{\ast,\ast,\ast}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast};\mathsf{KGL}_{\ast\ast})\cong \mathsf{H}^*\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q}[0],\mathsf{KGL}_{\ast\ast}).
\]
\item[ii)] For all $s\ge 2$,
\[
\mathsf{H}\Gamma^{s,\ast,\ast}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast};\mathsf{KGL}_{\ast\ast})=0.
\]
\end{itemize}
\end{theorem}
\begin{proof}
\begin{itemize}
\item[i)] By the definition of $\Gamma$-cohomology and the results in this Subsection there are isomorphisms
\begin{equation*}
\begin{array}{rl}
& \mathsf{H}\Gamma^{\ast,\ast,\ast}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast};\mathsf{KGL}_{\ast\ast})\\
= & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KGL}_{\ast\ast}\mathsf{KGL}}({{\mathcal K}}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast}),\mathsf{KGL}_{\ast\ast}) \\
\cong & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KGL}_{\ast\ast}\mathsf{KGL}}({{\mathcal K}}(\mathsf{KU}_{0}\mathsf{KU}\vert\mathbf{Z})\otimes^{\mathbf{L}}_{\mathsf{KU}_{0}\mathsf{KU}}\mathsf{KGL}_{\ast\ast}\mathsf{KGL},\mathsf{KGL}_{\ast\ast})\\
\cong & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KU}_{0}\mathsf{KU}}({{\mathcal K}}(\mathsf{KU}_{0}\mathsf{KU}\vert\mathbf{Z}),\mathsf{KGL}_{\ast\ast})\\
\cong & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KU}_{0}\mathsf{KU}}((\mathsf{KU}_{0}\mathsf{KU})_{\mathbf{Q}}[0],\mathsf{KGL}_{\ast\ast})\\
\cong & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathbf{Z}}(\mathbf{Q}[0],\mathsf{KGL}_{\ast\ast}).\\
\end{array}
\end{equation*}
\item[ii)] This follows from i) since $\mathbf{Z}$ has global dimension $1$.
\end{itemize}
\end{proof}
\begin{remark}
It is an exercise to compute $\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q},-)$ for finitely generated abelian groups.
This explicates our $\Gamma$-cohomology computation in degrees $0$ and $1$ for base schemes with finitely generated algebraic $K$-groups,
e.g.~finite fields and number rings.
The computation $\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q},\mathbf{Z})\simeq{\hat{\mathbf{Z}}/\mathbf{Z}}[1]$ shows our results imply \cite[Corollary 5.2]{bakerrichter}.
\end{remark}
The vanishing result Theorem \ref{gammacomputation}, ii) together with the motivic analogues of the results in \cite[Theorem 5.6]{Robinson},
as detailed in \cite{robinsonrevisited},
conclude the proof of the Theorem for $\mathsf{KGL}$ formulated in the Introduction.
\subsection{A remark on hermitian $K$-theory $\mathsf{KQ}$}
\label{hgdfsd}
In this short Subsection we discuss one instance in which the motivic obstruction theory used here falls short of a putative motivic analogue of the obstruction theory of Goerss,
Hopkins and Miller \cite{goersshopkins}.
By \cite[Theorem 9.7, (ii), Remark 9.8, (iii)]{motiviclandweber} we may realize the stable Adams operation $\Psi^{-1}$ on algebraic $K$-theory by a motivic ring spectrum map
\begin{equation}
\label{hgtsf}
\Psi^{-1}:\mathsf{KGL}\longrightarrow\mathsf{KGL}.
\end{equation}
In many cases of interest one expects that $\mathsf{fib}(\Psi^{-1}-1)$ represents hermitian $K$-theory $\mathsf{KQ}$.
A motivic version of the Goerss-Hopkins-Miller obstruction theory in \cite{goersshopkins} implies,
in combination with Theorem \ref{gammacomputation},
that (\ref{hgtsf}) can be modelled as an $E_\infty$-map.
With this result in hand,
it would follow that $\mathsf{KQ}$ admits an $E_\infty$-structure.
It seems the obstruction theory we use is intrinsically unable to provide such results by ``computing'' $E_\infty$-mapping spaces.
However,
there might be a more direct way of showing that $\mathsf{KQ}$ has a unique $E_\infty$-structure,
using the obstruction theory in this paper.
A first step would be to compute the motivic cooperations of $\mathsf{KQ}$.
\section{Connective algebraic $K$-theory $\mathsf{kgl}$}
\label{juytd}
We define the connective algebraic $K$-theory spectrum $\mathsf{kgl}$ as the effective part $\mathsf{f}_{0}\mathsf{KGL}$ of $\mathsf{KGL}$.
Recall that the functor $\mathsf{f}_i$ defined in \cite{voevodskyopen} projects from the motivic stable homotopy category to its $i$th effective part.
Note that $\mathsf{f}_{0}\mathsf{KGL}$ is a commutative monoid in the motivic stable homotopy category since projection to the effective part is a lax symmetric monoidal functor
(because it is right adjoint to a monoidal functor).
For $i\in\mathbf{Z}$ there exists a natural map $\mathsf{f}_{i+1}\mathsf{KGL}\rightarrow\mathsf{f}_{i}\mathsf{KGL}$ in the motivic stable homotopy category with cofiber the $i$th slice of $\mathsf{KGL}$.
With these definitions,
$\mathsf{KGL}\cong\mathrm{hocolim}\,\mathsf{f}_{i}\mathsf{KGL}$ (this is true for any motivic spectrum,
cf.~\cite[Lemma 4.2]{voevodskyopen}).
Bott periodicity for algebraic $K$-theory implies that $\mathsf{f}_{i+1}\mathsf{KGL}\cong\Sigma^{2,1}\mathsf{f}_{i}\mathsf{KGL}$.
This allows us to recast the colimit as $\mathrm{hocolim}\,\Sigma^{2i,i}\mathsf{kgl}$ with multiplication by the Bott element $\beta$ in $\mathsf{kgl}^{-2,-1}\cong\mathsf{KGL}^{-2,-1}$ as the transition map at each stage.
We summarize these observations in a lemma.
\begin{lemma}
\label{lemma:KGLbottkgl}
The algebraic $K$-theory spectrum $\mathsf{KGL}$ is isomorphic in the motivic stable homotopy category to the Bott inverted connective algebraic $K$-theory spectrum $\mathsf{kgl}[\beta^{-1}]$.
\end{lemma}
\begin{theorem}
The connective algebraic $K$-theory spectrum $\mathsf{kgl}$ has a unique $E_{\infty}$-structure refining its multiplication in the motivic stable homotopy category.
\end{theorem}
\begin{proof}
The connective cover functor $\mathsf{f}_{0}$ preserves $E_{\infty}$-structures \cite{GRSO}.
Thus the existence of an $E_{\infty}$-structure on $\mathsf{kgl}$ is ensured.
We note that inverting the Bott element can be refined to the level of motivic $E_{\infty}$-ring spectra by the methods employed in \cite{RSO}.
Thus,
by Lemma \ref{lemma:KGLbottkgl},
starting out with any two $E_{\infty}$-structures on $\mathsf{kgl}$ produces two $E_{\infty}$-structures on $\mathsf{KGL}$,
which coincide by the uniqueness result for $E_{\infty}$-structures on $\mathsf{KGL}$ .
Applying $\mathsf{f}_{0}$ recovers the two given $E_{\infty}$-structures on $\mathsf{kgl}$:
If $X$ is $E_\infty$ with $\varphi \colon X\simeq\mathsf{kgl}$ as ring spectra,
then there is a canonical $E_\infty$-map $X \to X[{\beta'}^{-1}]$,
where $\beta'$ is the image of the Bott element under $\varphi$.
Since $X$ is an effective motivic spectrum,
this map factors as an $E_\infty$-map $X \to \mathsf{f}_0(X[\beta'^{-1}])$.
By construction of $\mathsf{kgl}$ the latter map is an equivalence.
This shows the two given $E_\infty$-structures on $\mathsf{kgl}$ coincide.
\end{proof}
\section{The motivic Adams summands $\mathsf{ML}$ and $\mathsf{ml}$}
\label{section:ThemotivicAdamssummandsMLandml}
Let $\mathsf{BP}$ denote the Brown-Peterson spectrum for a fixed prime number $p$.
Then the coefficient ring $\mathsf{KU}_{(p)\ast}$ of the $p$-localized complex $K$-theory spectrum is a $\mathsf{BP}_{\ast}$-module via the ring map $\mathsf{BP}_{\ast}\rightarrow\mathsf{MU}_{(p)\ast}$
which classifies the $p$-typicalization of the formal group law over $\mathsf{MU}_{(p)\ast}$.
The $\mathsf{MU}_{(p)\ast}$-algebra structure on $\mathsf{KU}_{(p)\ast}$ is induced from the natural orientation $\mathsf{MU} \to \mathsf{KU}$.
With this $\mathsf{BP}_{\ast}$-module structure,
$\mathsf{KU}_{(p)\ast}$ splits into a direct sum of the $\Sigma^{2i}\mathsf{L}_{\ast}$ for $0\leq i\leq p-2$,
where $\mathsf{L}$ is the Adams summand of $\mathsf{KU}_{(p)}$.
Thus motivic Landweber exactness \cite{motiviclandweber} over the motivic Brown-Peterson spectrum $\mathsf{MBP}$ produces a splitting of motivic spectra
$$
\mathsf{KGL}_{(p)}
=
\bigvee_{i=0}^{p-2}
\Sigma^{2i,i}\mathsf{ML}.
$$
We refer to $\mathsf{ML}$ as the motivic Adams summand of algebraic $K$-theory.
Since $\mathsf{L}_{\ast}$ is an $\mathsf{BP}_{\ast}$-algebra and there are no nontrivial phantom maps from any smash power of $\mathsf{ML}$ to $\mathsf{ML}$,
which follows from \cite[Remark 9.8, (ii)]{motiviclandweber} since $\mathsf{ML}$ is a retract of $\mathsf{KGL}_{(p)}$,
we deduce that the corresponding ring homology theory induces a commutative monoid structure on $\mathsf{ML}$ in the motivic stable homotopy category.
We define the connective motivic Adams summand $\mathsf{ml}$ to be $f_{0}\mathsf{ML}$.
It is also a commutative monoid in the motivic homotopy category.
\begin{theorem}
\label{theorem:MLml}
The motivic Adams summand $\mathsf{ML}$ has a unique $E_{\infty}$-structure refining its multiplication in the
motivic stable homotopy category.
The same result holds for the connective motivic Adams summand $\mathsf{ml}$.
\end{theorem}
The construction of $\mathsf{ML}$ as a motivic Landweber exact spectrum makes the following result evident on account of
the proof of Lemma \ref{coops}.
\begin{lemma}
\label{coopsII}
There exist pushout squares of bigraded algebras
\begin{equation*}
\xymatrix{
\mathsf{L}_{\ast} \ar[d]\ar[r]^-{\eta_L} & \mathsf{L}_{\ast}\mathsf{L} \ar[d] \\
\mathsf{ML}_{\ast\ast} \ar[r]^-{\eta_L} & \mathsf{ML}_{\ast\ast}\mathsf{ML} }
\;\;\;\;\;\;
\xymatrix{
\mathsf{L}_{0} \ar[d]\ar[r]^-{(\eta_L)_0} & \mathsf{L}_{0}\mathsf{L}\ar[d] \\
\mathsf{L}_{\ast}\ar[r]^-{\eta_L} & \mathsf{L}_{\ast}\mathsf{L} }
\end{equation*}
and a quasi isomorphism in the derived category of $\mathsf{ML}_{\ast\ast}\mathsf{ML}$-modules
$$
{{\mathcal K}}(\mathsf{ML}_{\ast\ast}\mathsf{ML}\vert\mathsf{ML}_{\ast\ast})
\simeq
{{\mathcal K}}(\mathsf{L}_{0}\mathsf{L}\vert\mathsf{L}_{0})\otimes^{\mathbf{L}}_{\mathsf{L}_{0}\mathsf{L}}\mathsf{ML}_{\ast\ast}\mathsf{ML}.
$$
\end{lemma}
Next we show the analog of Theorem \ref{k-theory-coops-gamma-cotangentcomplex}, ii) for the motivic Adams summand.
\begin{lemma}
\label{lemma:L_0L}
In the derived category of $\mathsf{L}_0\mathsf{L}$-modules, there is a quasi isomorphism
$$
{{\mathcal K}}(\mathsf{L}_0\mathsf{L}|\mathsf{L}_0)
\simeq
(\mathsf{L}_0\mathsf{L})_\mathbf{Q}[0].
$$
\end{lemma}
\begin{proof}
In the notation of \cite[Proposition 6.1]{bakerrichter} there is an isomorphism between Hopf algebras $\mathsf{L}_0\mathsf{L}\cong {}^{\zeta}A^{st}_{(p)}$.
Recall that ${}^{\zeta}A^{st}_{(p)}$ is a free $\mathbf{Z}_{(p)}$-module on a countable basis and ${}^{\zeta}A^{st}_{(p)}/p{}^{\zeta}A^{st}_{(p)}$ is a formally
\'etale ${\mathbf F}_{p}$-algebra \cite[Theorem 3.3(c), Corollary 4.2]{bakerrichter}.
Applying Theorem \ref{k-theory-coops-gamma-cotangentcomplex}, i) to $R=\mathsf{L}_0\mathsf{L}$ and using that $(\mathsf{L}_0\mathsf{L})_\mathbf{Q}\simeq\mathbf{Q}[v^{\pm 1}]$
by Landweber exactness,
where $v=w^{p-1}$ and $(\mathsf{KU}_0 \mathsf{KU})_\mathbf{Q}\cong\mathbf{Q}[w^{\pm 1}]$,
we find
\[ {{\mathcal K}}(\mathsf{L}_0\mathsf{L} | \mathsf{L}_0)
\simeq
\Omega^1_{\mathbf{Q}[v^{\pm 1}]| \mathbf{Q}}[0]
\simeq
(\mathsf{L}_0\mathsf{L})_\mathbf{Q}[0].\]
\end{proof}
Lemmas \ref{coopsII} and \ref{lemma:L_0L} imply there is a quasi isomorphism
\[
\mathsf{H}\Gamma^{\ast,\ast,\ast}(\mathsf{ML}_{\ast\ast}\mathsf{ML}\vert\mathsf{ML}_{\ast\ast};\mathsf{ML}_{\ast\ast})\simeq \mathsf{H}^*\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q}[0],\mathsf{ML}_{\ast\ast}).
\]
Thus the part of Theorem \ref{theorem:MLml} dealing with $\mathsf{ML}$ follows, since for all $s\ge 2$,
\begin{equation}
\label{gammahomologyvanishingofML}
\mathsf{H}\Gamma^{s,\ast,\ast}(\mathsf{ML}_{\ast\ast}\mathsf{ML}\vert\mathsf{ML}_{\ast\ast};\mathsf{ML}_{\ast\ast})=0.
\end{equation}
The assertion about $\mathsf{ml}$ follows by the exact same type of argument as for $\mathsf{kgl}$.
The periodicity operator in this case is $v_1 \in \mathsf{ml}^{2(1-p),1-p}=\mathsf{ML}^{2(1-p),1-p}$.
\vspace{0.1in}
{\bf Acknowledgements.}
The main result of this paper was announced by the first named author at the 2009 M{\"u}nster workshop on Motivic Homotopy Theory.
He thanks the organizers E.~M.~Friedlander, G.~Quick and P.~A.~{\O}stv{\ae}r for the invitation,
and P.~A.~{\O}stv{\ae}r for hospitality while visiting the University of Oslo, where the major part of this work was finalized.
\bibliographystyle{plain}
\section{Introduction}
Motivic homotopy theory intertwines classical algebraic geometry and modern algebraic topology.
In this paper we study obstruction theory for $E_\infty$-structures in the motivic setup.
An $E_\infty$-structure on a spectrum is a ring structure given not just up to homotopy,
but by chosen homotopies which encode a coherently homotopy commutative multiplication.
Many of the examples of motivic ring spectra begin life as commutative monoids in the motivic stable homotopy category.
We are interested in the following questions:
When can the multiplicative structure of a given commutative monoid in the motivic stable homotopy category be refined to an $E_\infty$-ring spectrum?
And if such a refinement exists, when is it unique?
The questions of existence and uniqueness of $E_\infty$-structures and their many ramifications have been studied extensively in topology.
The first motivic examples worked out in this paper are of $K$-theoretic interest.
\vspace{0.1in}
The complex cobordism spectrum $\mathsf{MU}$ and its motivic analogue $\mathsf{MGL}$ have natural $E_\infty$-structures.
In the topological setup,
Baker and Richter \cite{bakerrichter} have shown that the complex $K$-theory spectrum $\mathsf{KU}$,
the Adams summand $\mathsf{L}$ and the real $K$-theory spectrum $\mathsf{KO}$ admit unique $E_\infty$-structures.
The results in \cite{bakerrichter} are approached via the obstruction theory developed by Robinson in \cite{Robinson},
where it is shown that existence and uniqueness of $E_\infty$-structures are guaranteed provided certain $\Gamma$-cohomology groups vanish.
\vspace{0.1in}
In our approach we rely on analogous results in the motivic setup.
We show that the relevant motivic $\Gamma$-cohomology groups vanish in the case of the algebraic $K$-theory spectrum $\mathsf{KGL}$
(Theorem \ref{gammacomputation}) and the motivic Adams summand $\mathsf{ML}$ introduced in this paper (see \S\ref{section:ThemotivicAdamssummandsMLandml}).
The main ingredients in the proofs are new computations of the $\Gamma$-homology complexes of $\mathsf{KU}$ and $\mathsf{L}$,
see Theorem \ref{k-theory-coops-gamma-cotangentcomplex} and Lemma \ref{lemma:L_0L},
and the Landweber base change formula for the motivic cooperations of $\mathsf{KGL}$ and $\mathsf{ML}$.
Our main result for $\mathsf{KGL}$ can be formulated as follows:
\begin{theorem}\label{uniqueKGL}
The algebraic $K$-theory spectrum $\mathsf{KGL}$ has a unique $E_{\infty}$-structure refining its multiplication
in the motivic stable homotopy category.
\end{theorem}
The existence of the $E_\infty$-structure on $\mathsf{KGL}$ was already known using the Bott inverted model for algebraic $K$-theory,
see \cite{RSO}, \cite{SO:Bottinverted}, \cite{gepnersnaith},
but the analogous result for $\mathsf{ML}$ is new.
The uniqueness part of the theorem is also new;
it rules out the existence of any exotic $E_\infty$-structures on $\mathsf{KGL}$.
We note that related motivic $E_\infty$-structures have proven useful in the recent constructions of Atiyah-Hirzebruch types
of spectral sequences for motivic twisted $K$-theory \cite{SO:twistedKtheory}.
\vspace{0.1in}
In topology,
the Goerss-Hopkins-Miller obstruction theory \cite{goersshopkins} allows one to gain control over {\em moduli spaces}
of $E_\infty$-structures.
In favorable cases,
such as for Lubin-Tate spectra,
the moduli spaces are $K(\pi,1)$'s giving rise to actions of certain automorphism groups as $E_\infty$-maps.
A motivic analogue of this obstruction theory seems to be within reach,
but it has not been worked out.
\vspace{0.1in}
In Section \ref{juytd} we show that the connective cover $\mathsf{kgl}$ of the algebraic $K$-theory spectrum has a unique $E_{\infty}$-structure,
and likewise in Section \ref{section:ThemotivicAdamssummandsMLandml} for the connective cover of the Adams summand.
For the analogous topological results we refer to \cite{bakerrichterconnective}.
\vspace{0.1in}
We conclude the introduction with an overview of the paper:
In Section \ref{motivicobs} we state the straightforward adaptation of Robinson's obstruction theory to the motivic context,
and point out its relevance in the proof of Theorem \ref{uniqueKGL}.
In Section \ref{multstructures} we explain the consequences of our work for multiplicative structures on algebraic $K$-theory spectra.
In Section \ref{computeKGL} we show that the basic input required for the obstruction theory is explicitly
computable in the case of algebraic $K$-theory, the main result being Theorem \ref{gammacomputation}.
Sections \ref{connective} and \ref{adams} discuss further examples to which we can successfully apply the
obstruction theory,
namely connective algebraic $K$-theory and the motivic analogue of the Adams summand.
\section{Motivic obstruction theory}\label{motivicobs}
The aim of this section is to formulate a key result in motivic obstruction theory.
It should be noted that a proof of Theorem \ref{motivicgamma} has not yet appeared in print,
cf.~\cite{robinsonrevisited}.
To begin,
fix a noetherian base scheme $S$ of finite Krull dimension with motivic stable homotopy category $SH(S)$.
(Modelled for example by the monoidal model category of motivic symmetric spectra developed by Jardine \cite{jardinemotivicspectra}.)
Let $\mathsf{E}$ be a {\em commutative motivic ring spectrum},
i.e.~a commutative and associative unitary monoid in $SH(S)$.
Denote its coefficients by $R:=\mathsf{E}_{**}$ and its cooperations by $\Lambda:=\mathsf{E}_{**}\mathsf{E}$.
We say {\em $\mathsf{E}$ satisfies the universal coefficient theorem} if for all $n\ge 1$ the Kronecker product yields an isomorphism
\[
\mathsf{E}^{**}(\mathsf{E}^{\wedge\,n})
\stackrel{\cong}{\longrightarrow}
\mathsf{Hom}_R(\Lambda^{\otimes_R n},R).
\]
Algebraic $K$-theory satisfies the universal coefficient theorem by \cite[Theorem 9.3 (i)]{motiviclandweber}.
In this situation one can define trigraded motivic $\Gamma$-cohomology groups $\mathsf{H}\Gamma^{\ast,\ast,\ast}$ associated to $R$ and $\Lambda$,
cf.~Section \ref{computeKGL} for more details.
The almost identical motivic version of Robinson's result \cite[Theorem 5.6]{Robinson} takes the following form.
\begin{theorem}\label{motivicgamma}
Suppose $\mathsf{E}$ is a commutative motivic ring spectrum satisfying the universal coefficient theorem and the vanishing conditions
$\mathsf{H}\Gamma^{n,2-n,*}(\Lambda|R;R)=0$ for $n \ge 4$ and $\mathsf{H}\Gamma^{n,1-n,*}(\Lambda|R;R)=0$ for $n\ge 3$.
Then $\mathsf{E}$ admits an $E_\infty$-structure unique up to homotopy.
\end{theorem}
We note that our {\em proof of Theorem \ref{uniqueKGL}} is obtained by combining
Theorem \ref{motivicgamma} for $\mathsf{E}=\mathsf{KGL}$ and Theorem \ref{gammacomputation}.
\section{Multiplicative structures on algebraic $K$-theory spectra}\label{multstructures}
Let $X$ be a scheme.
The bipermutative structure on the
category of coherent ${\mathcal O}_X$-modules gives rise to an $E_\infty$-structure on the
algebraic $K$-theory spectrum $K(X)$.
One may ask if this is the only $E_\infty$-structure refining its underlying
homotopy commutative ring spectrum structure. It is known that for suitable finite Postnikov
sections of the connective real $K$-theory spectrum $\mathsf{ko}$,
the analogous question has a negative answer, i.e.~there {\em do} exist
``exotic'' $E_\infty$-structures.
We are unaware of any (classical) scheme $X$ for which the answer to the above question is known,
but one can show the following.
\begin{theorem}\label{uniquemult} Let $S$ be a separated and regular noetherian scheme of finite Krull dimension.
Assume
\[ {\mathcal K}:
(Sm/S)^{op}\longrightarrow
\,
\{ E_\infty\mbox{-ring spectra} \}
\]
is a presheaf of $E_\infty$-ring spectra on the category of smooth $S$-schemes of finite type such that there is an
equivalence $\Phi: {\mathcal K}\stackrel{\cong}{\to} K$ of commutative monoids in the homotopy category of presheaves of $S^1$-spectra.
Then, for every $X/S$ smooth, $\Phi(X)$ is an equivalence of $E_\infty$-ring spectra.
\end{theorem}
Put informally,
while we cannot rule out the existence of exotic multiplications on an individual algebraic $K$-theory spectrum,
no such multiplications exist for the $K$-theory presheaf.
Theorem \ref{uniquemult} will be deduced from Theorem \ref{uniqueKGL} in a forthcoming work by the second author \cite{markus}.
In principle,
one may approach this problem by studying,
for a fixed scheme $X$,
the $\Gamma$-cohomology of the extension
\[
K_*(X)\to K(X)_*K(X).
\]
However,
it seems difficult to carry out such an analysis for non-empty schemes.
\section{Algebraic $K$-theory $\mathsf{KGL}$}\label{computeKGL}
In this section we shall present the $\Gamma$-cohomology computation showing there is a unique $E_{\infty}$-structure on the algebraic $K$-theory spectrum $\mathsf{KGL}$.
Throughout we work over some noetherian base scheme of finite Krull dimension, which we omit from the notation.
There are two main ingredients which make this computation possible:
First, we use the $\Gamma$-homology computation of $\mathsf{KU}_0\mathsf{KU}$ over $\mathsf{KU}_0=\mathbf{Z}$,
where $\mathsf{KU}$ is the complex $K$-theory spectrum.
Second,
we employ base change for the motivic cooperations of algebraic $K$-theory,
as shown in our previous work \cite{motiviclandweber}.
\subsection{The $\Gamma$-homology of $\mathsf{KU}_0\mathsf{KU}$ over $\mathsf{KU}_0$}
\label{subsection:TheGamma-homologyofKU_0KUoverKU_0}
For a map $A\rightarrow B$ between commutative algebras we denote Robinson's $\Gamma$-homology complex by $\widetilde{{\mathcal K}}(B\vert A)$ \cite[Definition 4.1]{Robinson}.
Recall that $\widetilde{{\mathcal K}}(B\vert A)$ is a homological double complex of $B$-modules concentrated in the first quadrant.
The same construction can be performed for maps between graded and bigraded algebras.
In all cases we let ${{\mathcal K}}(B\vert A)$ denote the total complex associated with the double complex $\widetilde{{\mathcal K}}(B\vert A)$.
\vspace{0.1in}
The $\Gamma$-cohomology
$$
\mathsf{H}\Gamma^*(\mathsf{KU}_0\mathsf{KU}|\mathsf{KU}_0, -)=\mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KU}_0\mathsf{KU}}({{\mathcal K}}(\mathsf{KU}_0\mathsf{KU}|\mathsf{KU}_0),-)
$$
has been computed for various coefficients in \cite{bakerrichter}.
In what follows we require precise information about the complex ${{\mathcal K}}(\mathsf{KU}_0\mathsf{KU}|\mathsf{KU}_0)$ itself,
since it satisfies a motivic base change property,
cf.~Lemma \ref{naivebasechange}.
\begin{lemma}
\label{rational}
Let $X\in \mathsf{Ch}_{\ge 0}(\mathsf{Ab})$ be a non-negative chain complex of abelian groups.
The following are equivalent:
\begin{enumerate}
\item[i)] The canonical map $X\longrightarrow X\otimes_\mathbf{Z}^{\mathbf{L}} \mathbf{Q}=X\otimes_\mathbf{Z}\mathbf{Q}$ is a quasi isomorphism.
\item[ii)] For every prime $p$, there is a quasi isomorphism $X\otimes ^{\mathbf{L}}_\mathbf{Z}{\mathbf F}_p\simeq 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
It is well known that $X$ is formal \cite[p.~164]{goerssjardine},
i.e.~there is a quasi isomorphism
\[
X\simeq \bigoplus_{n\ge 0}H_n(X)[n].
\]
(For an abelian group $A$ and integer $n$,
we let $A[n]$ denote the chain complex that consists of $A$ concentrated in degree $n$.)
Hence for every prime $p$,
\[
X\otimes^{\mathbf{L}}_\mathbf{Z} {\mathbf F}_p\simeq \bigoplus_{n\ge 0}\left( H_n(X)[n]\otimes^{\mathbf{L}}_\mathbf{Z}{\mathbf F}_p\right).
\]
By resolving ${\mathbf F}_p=(\mathbf{Z}\stackrel{\cdot p}{\longrightarrow}\mathbf{Z})$ one finds an isomorphism
\[
H_*(A[n]\otimes_{\mathbf{Z}}^{\mathbf{L}}{\mathbf F}_p)
\cong
(A/pA)[n]\oplus A\{p\}[n+1]
\]
for every abelian group $A$ and integer $n$.
Here $A\{p\}$ is shorthand for $\{ x\in A\, |\, px=0\}$.
In summary,
ii) holds if and only if the multiplication by $p$ map
\[ \cdot p:H_*(X)\longrightarrow H_*(X)\]
is an isomorphism for every prime $p$. The latter is equivalent to i).
\end{proof}
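The two Tor groups appearing in the proof can be checked mechanically for finite cyclic groups. The sketch below (the helper names are ours, not part of the text) computes the cokernel and the kernel of multiplication by $p$ on $\mathbf{Z}/m$ and confirms that both have order $\gcd(m,p)$:

```python
# Check of the Tor computation in the proof: resolving F_p by (Z --*p--> Z) gives
# H_*(A[n] (x)^L_Z F_p) = (A/pA)[n] (+) A{p}[n+1], where A{p} is the p-torsion.
# For A = Z/m both groups are cyclic of order gcd(m, p); we verify the orders.
from math import gcd

def mod_p_quotient_order(m: int, p: int) -> int:
    """Order of the cokernel of multiplication by p on Z/m."""
    image = {(p * x) % m for x in range(m)}
    return m // len(image)

def p_torsion_order(m: int, p: int) -> int:
    """Order of the kernel {x in Z/m : p*x = 0} of multiplication by p."""
    return sum(1 for x in range(m) if (p * x) % m == 0)

for m in (1, 2, 3, 4, 6, 12, 25):
    for p in (2, 3, 5):
        assert mod_p_quotient_order(m, p) == gcd(m, p)
        assert p_torsion_order(m, p) == gcd(m, p)
```

In particular the two groups always have the same order, matching the criterion in the proof: multiplication by $p$ on $H_*(X)$ is injective if and only if it is surjective in each finite cyclic summand.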
We shall use the previous lemma in order to study cotangent complexes introduced by Illusie in \cite{illusie}.
Let $R$ be a ring and set $R_\mathbf{Q}:=R\otimes_\mathbf{Z}\mathbf{Q}$.
Then there is a canonical map
\[
\xymatrix{
\tau_R: {\mathbb L}_{R/\mathbf{Z}} \ar[r] &
{\mathbb L}_{R/\mathbf{Z}}\otimes_\mathbf{Z}^{\mathbf{L}}\mathbf{Q}\simeq
{\mathbb L}_{R/\mathbf{Z}}\otimes_R^{\mathbf{L}} R_\mathbf{Q}\ar[r]^(.7){\simeq} &
{\mathbb L}_{R_\mathbf{Q}/\mathbf{Q}} }
\]
of cotangent complexes in $\mathsf{Ho}(\mathsf{Ch}_{\ge 0}(\mathbf{Z}))$.
The first quasi isomorphism is obvious,
while the second one is an instance of flat base change for cotangent complexes.
\begin{lemma}\label{test}
The following are equivalent:
\begin{enumerate}
\item[i)] $\tau_R$ is a quasi isomorphism.
\item[ii)] For every prime $p$,
there is a quasi isomorphism ${\mathbb L}_{R/\mathbf{Z}}\otimes_\mathbf{Z}^{\mathbf{L}}{\mathbf F}_p\simeq 0$.
\end{enumerate}
If the abelian group underlying $R$ is torsion free,
then i) and ii) are equivalent to
\begin{enumerate}
\item[iii)] For every prime $p$, ${\mathbb L}_{(R/pR)/{\mathbf F}_p}\simeq 0$.
\end{enumerate}
\end{lemma}
\begin{proof}
The equivalence of i) and ii) follows by applying Lemma \ref{rational} to $X={\mathbb L}_{R/\mathbf{Z}}$.
If $R$ is torsion free,
then it is flat as a $\mathbf{Z}$-module.
Hence,
by flat base change,
there exists a quasi isomorphism
\[
{\mathbb L}_{R/\mathbf{Z}}\otimes_\mathbf{Z}^{\mathbf{L}}{\mathbf F}_p\simeq {\mathbb L}_{(R/pR)/{\mathbf F}_p}.
\]
\end{proof}
The following is our analogue for Robinson's $\Gamma$-homology complex of the Baker-Richter result \cite[Theorem 5.1]{bakerrichter}.
\begin{theorem}
\label{k-theory-coops-gamma-cotangentcomplex}
\begin{itemize}
\item [i)] Let $R$ be a torsion free ring such that ${\mathbb L}_{(R/pR) / {\mathbf F}_p}\simeq 0$ for every prime $p$,
e.g.~assume that ${\mathbf F}_p\to R/pR$ is ind-\'etale for all $p$.
Then there is a quasi isomorphism
\[
{{\mathcal K}}(R|\mathbf{Z})\simeq {{\mathcal K}}(R_\mathbf{Q}|\mathbf{Q})
\]
in the derived category of $R$-modules.
\item[ii)] There is a quasi isomorphism
\[
{{\mathcal K}}(\mathsf{KU}_0\mathsf{KU} | \mathsf{KU}_0)\simeq (\mathsf{KU}_0\mathsf{KU})_\mathbf{Q}[0]
\]
in the derived category of $\mathsf{KU}_0\mathsf{KU}$-modules.
\end{itemize}\end{theorem}
\begin{proof}
\begin{itemize}\item[i)]
The Atiyah-Hirzebruch spectral sequence noted in \cite[Remark 2.3]{richter} takes the form
\[
E^2_{p,q}=H_p({\mathbb L}_{R/\mathbf{Z}} \otimes_\mathbf{Z}^{\mathbf{L}} \Gamma^q(\mathbf{Z}[x] |\mathbf{Z}))\Rightarrow H_{p+q}({{\mathcal K}}(R|\mathbf{Z})).
\]
Our assumptions on $R$ and Lemma \ref{test} imply that the $E^2$-page consists of $\mathbf{Q}$-vector spaces.
Hence so is the abutment,
and there exists a quasi isomorphism between complexes of $R$-modules
\[
{{\mathcal K}}(R|\mathbf{Z})\stackrel{\simeq}{\to} {{\mathcal K}}(R|\mathbf{Z})\otimes_\mathbf{Z}\mathbf{Q}.
\]
Moreover,
by Lemma \ref{naivebasechange},
there is a quasi isomorphism
\[{{\mathcal K}}(R|\mathbf{Z})\otimes_\mathbf{Z}\mathbf{Q} \simeq {{\mathcal K}}(R_\mathbf{Q}|\mathbf{Q}).\]
\item[ii)]
According to \cite[Theorem 3.1, Corollary 3.4, (a)]{bakerrichter} and the Hopf algebra isomorphism $A^{st}\cong \mathsf{KU}_0\mathsf{KU}$ \cite[Proposition 6.1]{bakerrichter},
the ring $R:=\mathsf{KU}_0\mathsf{KU}$ satisfies the assumptions of part i)\footnote{This also follows easily from Landweber exactness of $\mathsf{KU}$.}.
Now since $\mathsf{KU}_0\cong\mathbf{Z}$,
\[
{{\mathcal K}}(\mathsf{KU}_0\mathsf{KU} | \mathsf{KU}_0)\simeq {{\mathcal K}}((\mathsf{KU}_0\mathsf{KU})_\mathbf{Q} | \mathbf{Q}).
\]
We have that $(\mathsf{KU}_0\mathsf{KU})_\mathbf{Q}\simeq\mathbf{Q}[w^{\pm 1}]$ \cite[Theorem 3.2, (c)]{bakerrichter} is a smooth $\mathbf{Q}$-algebra.
Hence,
since $\Gamma$-homology agrees with Andr\'e-Quillen homology over $\mathbf{Q}$,
there are quasi isomorphisms
\[
{{\mathcal K}}(\mathsf{KU}_0\mathsf{KU} | \mathsf{KU}_0)
\simeq
\Omega^1_{\mathbf{Q}[w^{\pm 1}]| \mathbf{Q}}[0]
\simeq
(\mathsf{KU}_0\mathsf{KU})_\mathbf{Q}[0].
\]
\end{itemize}
\end{proof}
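The final identification in the proof of ii) uses that the K\"ahler differentials of a Laurent polynomial ring form a free module of rank one; spelled out (a routine verification):

```latex
\Omega^1_{\mathbf{Q}[w^{\pm 1}]\,\vert\,\mathbf{Q}}
=
\mathbf{Q}[w^{\pm 1}]\,dw
\cong
\mathbf{Q}[w^{\pm 1}]
\cong
(\mathsf{KU}_0\mathsf{KU})_{\mathbf{Q}}.
```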
\subsection{The $\Gamma$-homology of $\mathsf{KGL}_{\ast\ast} \mathsf{KGL}$ over $\mathsf{KGL}_{\ast\ast}$}
The strategy in what follows is to combine the computations for $\mathsf{KU}$ in \S\ref{subsection:TheGamma-homologyofKU_0KUoverKU_0} with motivic Landweber exactness \cite{motiviclandweber}.
To this end we require the following general base change result,
which was also used in the proof of Theorem \ref{k-theory-coops-gamma-cotangentcomplex}.
\begin{lemma}
\label{naivebasechange}
For a pushout of ordinary, graded or bigraded commutative algebras
\begin{equation*}
\xymatrix{
A\ar[d]\ar[r] & B\ar[d] \\
C\ar[r] & D }
\end{equation*}
there are isomorphisms between complexes of $D$-modules
$$
{{\mathcal K}}(D\vert C)
\cong
{{\mathcal K}}(B\vert A)\otimes_{B}D
\cong
{{\mathcal K}}(B\vert A)\otimes_{A}C.
$$
If $B$ is flat over $A$,
then $\widetilde{{\mathcal K}}(B\vert A)$ is a first quadrant homological double complex of flat $B$-modules;
thus, in the derived category of $D$-modules there are quasi isomorphisms
$$
{{\mathcal K}}(D\vert C)
\simeq
{{\mathcal K}}(B\vert A)\otimes^{\mathbf{L}}_{B}D
\simeq
{{\mathcal K}}(B\vert A)\otimes^{\mathbf{L}}_{A}C.
$$
\end{lemma}
\begin{proof}
Following the notation in \cite[\S4]{Robinson},
let $(B\vert A)^{\otimes}$ denote the tensor algebra of $B$ over $A$.
Then $(B\vert A)^{\otimes}\otimes_{A}B$ has a natural $\Gamma$-module structure over $B$,
cf.~\cite[\S4]{Robinson}.
Here $\Gamma$ denotes the category of finite based sets and basepoint preserving maps.
It follows that $((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D$ is a $\Gamma$-module over $D$.
Moreover,
by base change for tensor algebras,
there exists an isomorphism of $\Gamma$-modules in $D$-modules
$$
((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D
\cong
(D\vert C)^{\otimes}\otimes_{C}D.
$$
Here we use that the $\Gamma$-module structure on $(B\vert A)^{\otimes}\otimes_{A} M$,
for $M$ a $B$-module, is given as follows:
For a map $\varphi \colon [m] \to [n]$ between finite pointed sets,
$$
(B \otimes_A B \otimes_A \cdots \otimes_A B)\otimes_A M \to (B \otimes_A B \otimes_A \cdots \otimes_A B) \otimes_A M
$$
sends $b_1 \otimes \cdots \otimes b_m \otimes x$ to
$$
(\prod_{i \in \varphi^{-1}(1)} b_i) \otimes \cdots \otimes (\prod_{i \in \varphi^{-1}(n)} b_i) \otimes ((\prod_{i \in \varphi^{-1}(0)} b_i) \cdot x).
$$
By convention,
if $\varphi^{-1}(j)=\emptyset$ then $\prod_{i \in \varphi^{-1}(j)} b_i=1$.
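For illustration, the structure map just described can be modelled concretely. The following sketch (ours, not part of Robinson's construction) realizes the action for $A=\mathbf{Z}$ on elementary tensors of integers, where $\varphi\colon[m]\to[n]$ is a basepoint preserving map of the pointed sets $\{0,1,\dots,m\}$:

```python
# A concrete model (ours) of the Gamma-module structure on (B|A)^{(x)} (x)_A M for
# A = Z, with elementary tensors of integers: phi is given as a dict on {0,...,m}
# with phi[0] = 0, and the fiber over the basepoint 0 acts on the module element x.
from math import prod

def gamma_action(phi, n, bs, x):
    """Send b_1 (x) ... (x) b_m (x) x to the fiberwise products over phi."""
    m = len(bs)
    assert phi[0] == 0 and all(0 <= phi[i] <= n for i in range(m + 1))
    fiber = lambda j: [bs[i - 1] for i in range(1, m + 1) if phi[i] == j]
    # empty products are 1, matching the convention in the text
    return [prod(fiber(j)) for j in range(1, n + 1)], prod(fiber(0)) * x

# phi: [3] -> [2] with 0 |-> 0, 1 |-> 2, 2 |-> 0, 3 |-> 2
bs, x = gamma_action({0: 0, 1: 2, 2: 0, 3: 2}, 2, [5, 7, 11], 2)
assert bs == [1, 55] and x == 14
```

Here the fiber over $2$ collects $b_1 b_3 = 55$, the empty fiber over $1$ contributes the empty product $1$, and the fiber over the basepoint multiplies $b_2=7$ into the module element.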
Robinson's $\Xi$-construction yields an isomorphism between double complexes of $D$-modules
$$
\widetilde{{\mathcal K}}(D\vert C)=
\Xi((D\vert C)^{\otimes}\otimes_{C}D)
\cong
\Xi(((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D).
$$
Inspection of the $\Xi$-construction reveals there is an isomorphism
$$
\Xi(((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D)
\cong
\Xi((B\vert A)^{\otimes}\otimes_{A}B)\otimes_{B}D.
$$
By definition,
this double complex of $D$-modules is $\widetilde{{\mathcal K}}(B\vert A)\otimes_{B}D\cong\widetilde{{\mathcal K}}(B\vert A)\otimes_{A}C$.
This proves the first assertion by comparing the corresponding total complexes.
The remaining claims follow easily.
\end{proof}
Next we recall the structure of the motivic cooperations of the algebraic $K$-theory spectrum $\mathsf{KGL}$.
The algebras we shall consider are bigraded as follows:
$\mathsf{KU}_0\cong\mathbf{Z}$ in bidegree $(0,0)$ and $\mathsf{KU}_*\cong\mathbf{Z}[\beta^{\pm 1}]$ with the Bott element $\beta$ in bidegree $(2,1)$.
With these conventions,
there is a canonical bigraded map
\[
\mathsf{KU}_*\to \mathsf{KGL}_{\ast\ast}.
\]
\begin{lemma}\label{coops}
There are pushouts of bigraded algebras
\begin{equation*}
\xymatrix{
\mathsf{KU}_{\ast} \ar[d]\ar[r]^-{\eta_L} & \mathsf{KU}_{\ast}\mathsf{KU} \ar[d] \\
\mathsf{KGL}_{\ast\ast} \ar[r]^-{\eta_L} & \mathsf{KGL}_{\ast\ast}\mathsf{KGL} }
\;\;\;\;\;\;
\xymatrix{
\mathsf{KU}_{0} \ar[d]\ar[r]^-{(\eta_L)_0} & \mathsf{KU}_{0}\mathsf{KU}\ar[d] \\
\mathsf{KU}_{\ast}\ar[r]^-{\eta_L} & \mathsf{KU}_{\ast}\mathsf{KU} }
\end{equation*}
and a quasi isomorphism in the derived category of
$\mathsf{KGL}_{\ast\ast}\mathsf{KGL}$-modules
$$
{{\mathcal K}}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast})
\simeq
{{\mathcal K}}(\mathsf{KU}_{0}\mathsf{KU}\vert\mathsf{KU}_{0})\otimes^{\mathbf{L}}_{\mathsf{KU}_{0}\mathsf{KU}}\mathsf{KGL}_{\ast\ast}\mathsf{KGL}.
$$
\end{lemma}
\begin{proof}
Here $\eta_L$ is generic notation for the left unit of a flat Hopf algebroid.
The first pushout is shown in \cite[Proposition 9.1, (c)]{motiviclandweber}.
The second pushout is in \cite{bakerrichter}.
Applying Lemma \ref{naivebasechange} twice gives the claimed quasi isomorphism.
\end{proof}
Next we compute the $\Gamma$-cohomology of the motivic cooperations of $\mathsf{KGL}$.
\begin{theorem}
\begin{itemize}\label{gammacomputation}
\item[i)] There is an isomorphism
\[
\mathsf{H}\Gamma^{\ast,\ast,\ast}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast};\mathsf{KGL}_{\ast\ast})\cong \mathsf{H}^*\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q}[0],\mathsf{KGL}_{\ast\ast}).
\]
\item[ii)] For all $s\ge 2$,
\[
\mathsf{H}\Gamma^{s,\ast,\ast}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast};\mathsf{KGL}_{\ast\ast})=0.
\]
\end{itemize}
\end{theorem}
\begin{proof}
\begin{itemize}
\item[i)] By the definition of $\Gamma$-cohomology and the results of this section there are isomorphisms
\begin{equation*}
\begin{array}{rl}
& \mathsf{H}\Gamma^{\ast,\ast,\ast}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast};\mathsf{KGL}_{\ast\ast})\\
= & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KGL}_{\ast\ast}\mathsf{KGL}}({{\mathcal K}}(\mathsf{KGL}_{\ast\ast}\mathsf{KGL}\vert\mathsf{KGL}_{\ast\ast}),\mathsf{KGL}_{\ast\ast}) \\
\cong & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KGL}_{\ast\ast}\mathsf{KGL}}({{\mathcal K}}(\mathsf{KU}_{0}\mathsf{KU}\vert\mathbf{Z})\otimes^{\mathbf{L}}_{\mathsf{KU}_{0}\mathsf{KU}}\mathsf{KGL}_{\ast\ast}\mathsf{KGL},\mathsf{KGL}_{\ast\ast})\\
\cong & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KU}_{0}\mathsf{KU}}({{\mathcal K}}(\mathsf{KU}_{0}\mathsf{KU}\vert\mathbf{Z}),\mathsf{KGL}_{\ast\ast})\\
\cong & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathsf{KU}_{0}\mathsf{KU}}((\mathsf{KU}_{0}\mathsf{KU})_{\mathbf{Q}}[0],\mathsf{KGL}_{\ast\ast})\\
\cong & \mathsf{H}^*\mathbf{R}\mathsf{Hom}_{\mathbf{Z}}(\mathbf{Q}[0],\mathsf{KGL}_{\ast\ast}).\\
\end{array}
\end{equation*}
\item[ii)] This follows from i) since $\mathbf{Z}$ has global dimension $1$.
\end{itemize}
\end{proof}
\begin{remark}
It is an exercise to compute $\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q},-)$ applied to finitely generated abelian groups.
This explicates our $\Gamma$-cohomology computation in cohomological degrees $0$ and $1$ for base schemes with finitely generated algebraic $K$-groups,
e.g.~finite fields and number rings.
The computation $\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q},\mathbf{Z})\simeq{\hat{\mathbf{Z}}/\mathbf{Z}}[1]$ shows our results imply \cite[Corollary 5.2]{bakerrichter}.
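For instance, for a finitely generated abelian group $A\cong\mathbf{Z}^{r}\oplus T$ with $T$ finite, the exercise comes down to the following (using that multiplication by $|T|$ acts both invertibly and as zero on $\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q},T)$, so that this complex vanishes):

```latex
\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q},A)
\simeq
\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q},\mathbf{Z})^{\oplus r}
\simeq
\bigl(\hat{\mathbf{Z}}/\mathbf{Z}\bigr)^{\oplus r}[1].
```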
\end{remark}
\section{Connective algebraic $K$-theory $\mathsf{kgl}$}\label{connective}
\label{juytd}
We define the connective algebraic $K$-theory spectrum $\mathsf{kgl}$ as the effective part $\mathsf{f}_{0}\mathsf{KGL}$ of $\mathsf{KGL}$.
Recall that the functor $\mathsf{f}_i$ defined in \cite{voevodskyopen} projects from the motivic stable homotopy category to its $i$th effective part.
Note that $\mathsf{f}_{0}\mathsf{KGL}$ is a commutative monoid in the motivic stable homotopy category since projection to the effective part is a lax symmetric monoidal functor
(because it is right adjoint to a monoidal functor).
For $i\in\mathbf{Z}$ there exists a natural map $\mathsf{f}_{i+1}\mathsf{KGL}\rightarrow\mathsf{f}_{i}\mathsf{KGL}$ in the motivic stable homotopy category with cofiber the $i$th slice of $\mathsf{KGL}$.
With these definitions,
$\mathsf{KGL}\cong\mathrm{hocolim}\,\mathsf{f}_{i}\mathsf{KGL}$ (this is true for any motivic spectrum,
cf.~\cite[Lemma 4.2]{voevodskyopen}).
Bott periodicity for algebraic $K$-theory implies that $\mathsf{f}_{i+1}\mathsf{KGL}\cong\Sigma^{2,1}\mathsf{f}_{i}\mathsf{KGL}$.
This allows us to recast the colimit as $\mathrm{hocolim}\,\Sigma^{2i,i}\mathsf{kgl}$ with multiplication by the Bott element $\beta$ in $\mathsf{kgl}^{-2,-1}\cong\mathsf{KGL}^{-2,-1}$ as the transition map at each stage.
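Spelled out, with our conventions for the suspension coordinates, this presents algebraic $K$-theory as the telescope

```latex
\mathsf{KGL}
\simeq
\mathrm{hocolim}
\left(
\mathsf{kgl}
\xrightarrow{\,\beta\,}
\Sigma^{-2,-1}\mathsf{kgl}
\xrightarrow{\,\beta\,}
\Sigma^{-4,-2}\mathsf{kgl}
\xrightarrow{\,\beta\,}
\cdots
\right),
```

where every transition map is induced by multiplication by the Bott element.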
We summarize these observations in a lemma.
\begin{lemma}
\label{lemma:KGLbottkgl}
The algebraic $K$-theory spectrum $\mathsf{KGL}$ is isomorphic in the motivic stable homotopy category to the Bott inverted connective algebraic $K$-theory spectrum $\mathsf{kgl}[\beta^{-1}]$.
\end{lemma}
\begin{theorem}
The connective algebraic $K$-theory spectrum $\mathsf{kgl}$ has a unique $E_{\infty}$-structure refining its multiplication in the motivic stable homotopy category.
\end{theorem}
\begin{proof}
The connective cover functor $\mathsf{f}_{0}$ preserves $E_{\infty}$-structures \cite{GRSO}.
Thus the existence of an $E_{\infty}$-structure on $\mathsf{kgl}$ is ensured.
We note that inverting the Bott element can be refined to the level of motivic $E_{\infty}$-ring spectra by the methods employed in \cite{RSO}.
Thus,
by Lemma \ref{lemma:KGLbottkgl},
starting out with any two $E_{\infty}$-structures on $\mathsf{kgl}$ produces two $E_{\infty}$-structures on $\mathsf{KGL}$,
which coincide by the uniqueness result for $E_{\infty}$-structures on $\mathsf{KGL}$ (Theorem \ref{uniqueKGL}).
Applying $\mathsf{f}_{0}$ recovers the two given $E_{\infty}$-structures on $\mathsf{kgl}$:
If $X$ is $E_\infty$ with $\varphi \colon X\simeq\mathsf{kgl}$ as ring spectra,
then there is a canonical $E_\infty$-map $X \to X[{\beta'}^{-1}]$,
where $\beta'$ is the image of the Bott element under $\varphi$.
Since $X$ is an effective motivic spectrum,
this map factors as an $E_\infty$-map $X \to \mathsf{f}_0(X[{\beta'}^{-1}])$.
By construction of $\mathsf{kgl}$ the latter map is an equivalence.
This shows the two given $E_\infty$-structures on $\mathsf{kgl}$ coincide.
\end{proof}
\section{The motivic Adams summands $\mathsf{ML}$ and $\mathsf{ml}$}\label{adams}
\label{section:ThemotivicAdamssummandsMLandml}
Let $\mathsf{BP}$ denote the Brown-Peterson spectrum for a fixed prime number $p$.
Then the coefficient ring $\mathsf{KU}_{(p)\ast}$ of the $p$-localized complex $K$-theory spectrum is a $\mathsf{BP}_{\ast}$-module via the ring map $\mathsf{BP}_{\ast}\rightarrow\mathsf{MU}_{(p)\ast}$
which classifies the $p$-typicalization of the formal group law over $\mathsf{MU}_{(p)\ast}$.
The $\mathsf{MU}_{(p)\ast}$-algebra structure on $\mathsf{KU}_{(p)\ast}$ is induced from the natural orientation $\mathsf{MU} \to \mathsf{KU}$.
With this $\mathsf{BP}_{\ast}$-module structure,
$\mathsf{KU}_{(p)\ast}$ splits into a direct sum of the $\Sigma^{2i}\mathsf{L}_{\ast}$ for $0\leq i\leq p-2$,
where $\mathsf{L}$ is the Adams summand of $\mathsf{KU}_{(p)}$.
Thus motivic Landweber exactness \cite{motiviclandweber} over the motivic Brown-Peterson spectrum $\mathsf{MBP}$ produces a splitting of motivic spectra
$$
\mathsf{KGL}_{(p)}
=
\bigvee_{i=0}^{p-2}
\Sigma^{2i,i}\mathsf{ML}.
$$
We refer to $\mathsf{ML}$ as the motivic Adams summand of algebraic $K$-theory.
Since $\mathsf{L}_{\ast}$ is a $\mathsf{BP}_{\ast}$-algebra and there are no nontrivial phantom maps from any smash power of $\mathsf{ML}$ to $\mathsf{ML}$,
which follows from \cite[Remark 9.8, (ii)]{motiviclandweber} since $\mathsf{ML}$ is a retract of $\mathsf{KGL}_{(p)}$,
we deduce that the corresponding ring homology theory induces a commutative monoid structure on $\mathsf{ML}$ in the motivic stable homotopy category.
We define the connective motivic Adams summand $\mathsf{ml}$ to be $\mathsf{f}_{0}\mathsf{ML}$.
It is also a commutative monoid in the motivic stable homotopy category.
\begin{theorem}
\label{theorem:MLml}
The motivic Adams summand $\mathsf{ML}$ has a unique $E_{\infty}$-structure refining its multiplication in the
motivic stable homotopy category.
The same result holds for the connective motivic Adams summand $\mathsf{ml}$.
\end{theorem}
The construction of $\mathsf{ML}$ as a motivic Landweber exact spectrum makes the following result evident on account of
the proof of Lemma \ref{coops}.
\begin{lemma}
\label{coopsII}
There exist pushout squares of bigraded algebras
\begin{equation*}
\xymatrix{
\mathsf{L}_{\ast} \ar[d]\ar[r]^-{\eta_L} & \mathsf{L}_{\ast}\mathsf{L} \ar[d] \\
\mathsf{ML}_{\ast\ast} \ar[r]^-{\eta_L} & \mathsf{ML}_{\ast\ast}\mathsf{ML} }
\;\;\;\;\;\;
\xymatrix{
\mathsf{L}_{0} \ar[d]\ar[r]^-{(\eta_L)_0} & \mathsf{L}_{0}\mathsf{L}\ar[d] \\
\mathsf{L}_{\ast}\ar[r]^-{\eta_L} & \mathsf{L}_{\ast}\mathsf{L} }
\end{equation*}
and a quasi isomorphism in the derived category of $\mathsf{ML}_{\ast\ast}\mathsf{ML}$-modules
$$
{{\mathcal K}}(\mathsf{ML}_{\ast\ast}\mathsf{ML}\vert\mathsf{ML}_{\ast\ast})
\simeq
{{\mathcal K}}(\mathsf{L}_{0}\mathsf{L}\vert\mathsf{L}_{0})\otimes^{\mathbf{L}}_{\mathsf{L}_{0}\mathsf{L}}\mathsf{ML}_{\ast\ast}\mathsf{ML}.
$$
\end{lemma}
Next we show the analog of Theorem \ref{k-theory-coops-gamma-cotangentcomplex}, ii) for the motivic Adams summand.
\begin{lemma}
\label{lemma:L_0L}
In the derived category of $\mathsf{L}_0\mathsf{L}$-modules, there is a quasi isomorphism
$$
{{\mathcal K}}(\mathsf{L}_0\mathsf{L}|\mathsf{L}_0)
\simeq
(\mathsf{L}_0\mathsf{L})_\mathbf{Q}[0].
$$
\end{lemma}
\begin{proof}
In the notation of \cite[Proposition 6.1]{bakerrichter} there is an isomorphism between Hopf algebras $\mathsf{L}_0\mathsf{L}\cong {}^{\zeta}A^{st}_{(p)}$.
Recall that ${}^{\zeta}A^{st}_{(p)}$ is a free $\mathbf{Z}_{(p)}$-module on a countable basis and ${}^{\zeta}A^{st}_{(p)}/p{}^{\zeta}A^{st}_{(p)}$ is a formally
\'etale ${\mathbf F}_{p}$-algebra \cite[Theorem 3.3(c), Corollary 4.2]{bakerrichter}.
Applying Theorem \ref{k-theory-coops-gamma-cotangentcomplex}, i) to $R=\mathsf{L}_0\mathsf{L}$ and using that $(\mathsf{L}_0\mathsf{L})_\mathbf{Q}\simeq\mathbf{Q}[v^{\pm 1}]$
by Landweber exactness,
where $v=w^{p-1}$ and $(\mathsf{KU}_0 \mathsf{KU})_\mathbf{Q}\cong\mathbf{Q}[w^{\pm 1}]$,
we find
\[ {{\mathcal K}}(\mathsf{L}_0\mathsf{L} | \mathsf{L}_0)
\simeq
\Omega^1_{\mathbf{Q}[v^{\pm 1}]| \mathbf{Q}}[0]
\simeq
(\mathsf{L}_0\mathsf{L})_\mathbf{Q}[0].\]
\end{proof}
Lemmas \ref{coopsII} and \ref{lemma:L_0L} imply there is a quasi isomorphism
\[
\mathsf{H}\Gamma^{\ast,\ast,\ast}(\mathsf{ML}_{\ast\ast}\mathsf{ML}\vert\mathsf{ML}_{\ast\ast};\mathsf{ML}_{\ast\ast})\simeq \mathsf{H}^*\mathbf{R}\mathsf{Hom}_\mathbf{Z}(\mathbf{Q}[0],\mathsf{ML}_{\ast\ast}).
\]
Thus the part of Theorem \ref{theorem:MLml} dealing with $\mathsf{ML}$ follows, since for all $s\ge 2$,
\begin{equation}
\label{gammahomologyvanishingofML}
\mathsf{H}\Gamma^{s,\ast,\ast}(\mathsf{ML}_{\ast\ast}\mathsf{ML}\vert\mathsf{ML}_{\ast\ast};\mathsf{ML}_{\ast\ast})=0.
\end{equation}
The assertion about $\mathsf{ml}$ follows by the exact same type of argument as for $\mathsf{kgl}$.
The periodicity operator in this case is $v_1 \in \mathsf{ml}^{2(1-p),1-p}=\mathsf{ML}^{2(1-p),1-p}$.
\vspace{0.1in}
{\bf Acknowledgements.}
The main result Theorem \ref{uniqueKGL} of this paper was announced by the first named author at the 2009 M{\"u}nster workshop
on Motivic Homotopy Theory.
He thanks the organizers E.~M.~Friedlander, G.~Quick and P.~A.~{\O}stv{\ae}r for the invitation,
and P.~A.~{\O}stv{\ae}r for hospitality while visiting the University of Oslo, where the major part of this work was finalized.
The authors thank A.~Robinson for discussions and an anonymous referee for an insightful report which led to an improvement of the exposition.
\bibliographystyle{plain}
\section{Introduction}
The Magellanic Clouds present a nearby, easily observable laboratory in which to study minor galaxy interactions. In this system, the Magellanic Bridge is thought to be a product of the tidal interaction between the Large and Small Magellanic Clouds (LMC \& SMC). It contains both gas and stellar components, with the stellar population first discovered by \citet{IrwinKunkelDemers1985} and comprising only a young ($<300$\,Myr) stellar component \citep{Harris2007}, which is thought to have formed in situ. One identified massive X-ray binary has been detected in the western bridge \citep{KahabkaHilker2005}. Given the tidal disruption in this region, a number of star formation tracers are expected in this active region. The focus of this work is to identify high mass X-ray binaries as tracers of star formation in the Magellanic Bridge.
Multiwavelength studies of the SMC have shown that it contains a large number of X-ray binary pulsars -- all but one of them in Be/X-ray binary systems \citep{CoeCorbetMcGowan2009,CoeEdgeGalache2005}. This preponderance of young systems is most likely explained by tidally triggered star formation precipitated by the most recent close approach of the SMC and LMC \citep{GardinerNoguchi1996}. However, the most recent close approach of the SMC and LMC was $\sim200$ Myr ago -- much longer than the evolutionary timescale of Be/X-ray binaries, which implies that either there has been a significant delay between the encounter of the SMC and LMC and the onset of star formation, or that subsequent waves of star formation have given rise to these Be/X-ray binaries \citep{HarrisZaritsky2004}. These objects, being tracers of star formation \citep{GrimmGilfanovSunyaev2003}, give direct insights into the star formation history of their host galaxies.
The LMC, by comparison, has a very different population at high energies: whereas the SMC population comprises almost exclusively Be/X-ray binaries, the population of the LMC has representatives from all members of the X-ray binary classes, including black hole systems, low mass X-ray binaries, and Be and supergiant high mass systems. With the mass of the LMC being $\sim10\times$ that of the SMC, tidal interactions between the galaxies would have a much greater effect on the SMC, and this may be reflected by the differing stellar populations.
The high energy population of the Magellanic Bridge is not well known, but extrapolation from optical wavelengths indicates that there should be many young stellar systems towards the western bridge \citep{Harris2007}, resembling the population in the SMC. The work presented here is the first systematic study of this region at hard X-ray energies. The rest of the paper is structured as follows: section 2 presents the observations and data analysis. In section 3, the new transient objects and their characteristics are presented, while other candidate high energy sources are discussed in section 4. The discussion and conclusions are presented in sections 5 and 6 respectively.
\begin{figure*}
\includegraphics[width=0.7\textwidth]{ibis_final_map.eps}
\caption{IBIS 15--35\,keV mosaic of the SMC and Magellanic Bridge from revolutions 753 -- 756. The contours show the H\,I distribution in the SMC and Bridge, and are from \citet{PutmanStaveley-SmithFreeman2003}. }
\label{fig:mosaic}
\end{figure*}
\section{OBSERVATIONS \& ANALYSIS}
The IBIS telescope \citep{UbertiniLebrunDiCocco2003} aboard \textit{INTEGRAL}, which is optimised for an energy range of 15--200\,keV and has a field of view of $30^\circ\times30^\circ$, is uniquely suited to observing large sky areas for point sources. As part of a key programme monitoring campaign on the SMC and 47 Tuc, \textit{INTEGRAL} observed the SMC and Magellanic Bridge for approximately 90\,ks per satellite revolution ($\sim3$ days) from 2008 November 11 to 2009 June 25. The exposure on the Magellanic Bridge (Table~\ref{tab:expo}) is, in general, much smaller than the 90\,ks observation time, both due to the fact that the Bridge is a few degrees from the main pointing position, and that the figures for exposure time have been corrected for instrument dead time.
Individual pointings (science windows) were processed using the \textit{INTEGRAL} Offline Science Analysis v.7.0 (OSA, \citealt{GoldwurmDavidFoschini2003}) and were mosaicked using the weighted mean of the flux in the 3--10\,keV (JEM-X) and 15--35\,keV (IBIS) energy ranges. Proprietary software was used to mosaic the observations from successive revolutions to improve the exposure and thereby the sensitivity to faint sources. Lightcurves in these energy bands were generated on science window ($\sim$2000\,s) and revolution time-scales. The IBIS energy band was chosen to maximise the detection significance of \hbox{SMC~X-1}, and hence other SMC accreting X-ray pulsars, which have similar spectral shapes to SMC~X-1 in this energy range. An IBIS mosaic of data from revolutions 752--756 in the 15--35\,keV is shown in Fig.~\ref{fig:mosaic}.
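The weighted-mean combination of science-window fluxes mentioned above can be sketched as follows. This assumes the standard inverse-variance weighting and is an illustration only -- the exact scheme used by the OSA and the proprietary mosaicking software may differ.

```python
def weighted_mean(fluxes, errors):
    """Inverse-variance weighted mean flux and its 1-sigma error."""
    weights = [1.0 / e ** 2 for e in errors]
    total = sum(weights)
    mean = sum(w * f for w, f in zip(weights, fluxes)) / total
    return mean, total ** -0.5

# two measurements with equal errors: simple average, error reduced by sqrt(2)
m, e = weighted_mean([1.0, 3.0], [1.0, 1.0])
```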
\begin{table}
\caption{INTEGRAL observation log.}
\label{tab:expo}
\begin{tabular}{llll}
\hline
Rev Number & MJD$_{\rm start}$ & Exposure (ks) & \\
\hline
745 & 54788.7 & 37.7 & \\
746 & 54791.8 & 40.2 & \\
747 & 54794.8 & 37.0 & \\
748 & 54797.7 & 41.1 & \\
749 & 54800.8 & 39.7 & \\
750 & 54803.7 & 46.0 & \\
751 & 54806.7 & 49.2 & \\
752 & 54809.7 & 51.9 & \\
753 & 54812.7 & 50.0 & \\
754 & 54815.7 & 39.2 & \\
755 & 54818.7 & 50.6 & \\
756 & 54821.7 & 49.5& \\
796 & 54941.4 & 69.8 & \\
797 & 54944.4 & 55.6 & \\
812 & 54989.2 & 49.5 & \\
813 & 54992.2 & 40.2 & \\
814 & 54995.2 & 49.3 & \\
815 & 54998.2 & 48.7 & \\
816 & 55001.2 & 52.9 & \\
817 & 55004.2 & 55.4 & \\
818 & 55007.2 & 31.5 & \\
\hline
\end{tabular}
\end{table}
\section{New sources}
\subsection{IGR~J015712$-$7259}
A new source, IGR~J015712$-$7259 \citep{CoeMcBrideBird2008}, was discovered in revolution 752. It showed transient behaviour with a maximum significance of 7.7$\sigma$ from revolutions 812 to 814 in IBIS, corresponding to an average flux of $3.3\pm0.3\times10^{-11}$\,erg\,cm$^{-2}$\,s$^{-1}$ ($1.4\times10^{37}$\,erg\,s$^{-1}$ at 60\,kpc).
The source was detected in JEM-X at an average flux of $1.6\pm0.5\times10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$ in the 3--10\,keV band. The IBIS lightcurve is shown in Fig.~\ref{fig:igrlc}. Timely follow-up with Swift/XRT on 2008 December 20 (MJD 54820) allowed a precise determination of the source position: 1$^{\rm h}$57$^{\rm m}$16$^{\rm s}$, $-72^\circ$58$^\prime$32.8$^{\prime\prime}$ (J2000.0)
with a 90\% error circle of radius 3.8$^{\prime\prime}$. This identifies the source with a star (USNO-B1 0170-0064697, \citealt{MonetLevineCanzian2003}) having B and R magnitudes of 15.48 and 15.51 respectively (see Fig.~\ref{fig:igrfinder}.) A total of $\sim$300 counts extracted from a region around the Swift/XRT source showed pulsations at $\sim11$\,s.
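The quoted luminosities follow from the fluxes via $L=4\pi d^{2}F$ at an adopted distance of 60\,kpc. A quick check of the arithmetic (an illustration only, not part of the analysis):

```python
import math

KPC_IN_CM = 3.0857e21            # one kiloparsec in centimetres
DIST = 60 * KPC_IN_CM            # adopted distance of 60 kpc

def luminosity(flux):
    """Isotropic luminosity (erg/s) from a flux (erg/cm^2/s) at 60 kpc."""
    return 4 * math.pi * DIST ** 2 * flux

# IBIS 15-35 keV flux of IGR J015712-7259 corresponds to ~1.4e37 erg/s
L = luminosity(3.3e-11)
```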
\begin{figure}
\centering
\includegraphics[width=84mm]{lc_bridge1.ps}
\caption{IBIS lightcurve of IGR J015712$-$7259 in the 15--35\,keV energy band.}
\label{fig:igrlc}
\end{figure}
\begin{figure}
\centering
\fbox{ \includegraphics[width=84mm]{final_finder_igr0157.ps}}
\caption{DSS II R band image of a 10$^\prime\times10^\prime$ field around the source IGR~J015712$-$7259, showing the IBIS 90\% error circle and with the inset showing an expanded view (indicated by the square) of the counterpart and the Swift XRT 90\% error circle. North is up and east is to the left.}
\label{fig:igrfinder}
\end{figure}
A 15\,ks RXTE observation on 2008 December 24
revealed that the source was pulsating with a period of 11.57809$\pm$0.00002\,s
(see Fig.~\ref{fig:SXP11.6}). The hard X-ray behaviour is illustrated in the IBIS lightcurve (Fig.~\ref{fig:igrlc}), the RXTE folded lightcurve (Fig.~\ref{fig:p_profile}) and the broadband spectrum of the source around MJD 54824 (Fig.~\ref{fig:igrspec}). The combined Swift/XRT and IBIS spectrum can be adequately fit ($\chi^2_\nu=1.05$ for 33 d.o.f.) with an absorbed, exponentially cut-off power law and a free constant factor between the two instruments to account for the fact that the soft and hard spectra were not observed simultaneously. The constant factor is $\sim0.3$, while the photon index is $0.4\pm0.2$, and the folding energy is $8^{+5}_{-3}$\, keV. The absorption has been fixed to the neutral density of hydrogen along the line of sight to the SMC: $6\times10^{20}$\,cm$^{-2}$. The photon index is similar to those determined for X-ray pulsars in the wing of the SMC \citep{McGowanCoeSchurch2007}. The transient nature of the source, combined with the detected pulsations, hard spectrum and the optical magnitudes of the counterpart are strongly suggestive of a Be/X-ray binary.
\begin{figure}
\centering
\includegraphics[width=84mm]{screen.ps}
\caption{Periodogram of the \textit{RXTE} observation of IGR~J015712$-$7259 on MJD 54824.}
\label{fig:SXP11.6}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=84mm]{IGRJ0157flc.ps}
\caption{RXTE 3--10\,keV lightcurve of IGR J015712$-$7259 folded at the pulse period of 11.578\,s}
\label{fig:p_profile}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=84mm]{igr0157_spectrum752.ps}
\caption{Combined Swift/XRT and IBIS spectrum of IGR~J015712$-$7259, showing the residuals as $\Delta\chi$ in the lower panel. The data are plotted as crosses, the model fit as the histogram, while the smooth curve shows the unfolded spectrum as plotted on the secondary Y-axis.}
\label{fig:igrspec}
\end{figure}
\subsection{SWIFT~J0208.4$-$7428}
Another source was discovered during revolution 756 (MJD 54821.7) in the IBIS map at a position of $02^{\rm h}07^{\rm m}09^{\rm s}$, $-74^\circ28^\prime07^{\prime\prime}$ with a 3.6$^\prime$ error circle. It reached a maximum significance of 7.1$\sigma$ during revolutions 753--756, with a corresponding average flux of 2.5$\times10^{-11}$\,erg\,cm$^{-2}$\,s$^{-1}$
in the 15--35\,keV band ($\sim1\times10^{37}$\,erg\,s$^{-1}$ at 60\,kpc). The source was weakly detected with JEM-X, at a flux of 1.5$\pm0.6\times10^{-12}$\,erg\,cm$^{-2}$\,s$^{-1}$.
The lightcurve in the 15--35\,keV band in Fig.~\ref{fig:swift0208lc} clearly shows the flux rising through the observation sequence.
Archival data from Swift/XRT showed that a source in the Magellanic Bridge was observed with Swift on MJDs 54764.5 and 54809.5. Analysis of the XRT data shows that no source is present within the field of view during the observation on MJD 54764.5, but during MJD 54809, a source is detected at $02^{\rm h}06^{\rm m} 45.7^{\rm s}$, $-74^\circ27^\prime 46.3^{\prime\prime}$ (J2000.0) with a 90\% error circle of 4$^{\prime\prime}$. This source falls well within the 90\% error circle of the INTEGRAL source, and can be clearly distinguished from RX~J0209.6$-$7427 (the only previously known HMXB in the Magellanic Bridge, see Fig.~\ref{fig:swift0208_finder}).
These contemporaneous observations (see lightcurve in Fig.~\ref{fig:swift0208lc}) provide strong evidence for another new X-ray source in the Magellanic Bridge. Only one object falls within the Swift/XRT error circle and can be identified with a $B=14.41$ and $V=14.75$ star \citep{DemersIrwin1991}, for which the $B-V$ colour and magnitudes are consistent with an early-type dwarf at the distance of the SMC.
\begin{figure}
\centering
\includegraphics[width=84mm]{swift0208lc_all.ps}
\caption{IBIS lightcurve of SWIFT~J0208.4$-$7428 (red line shows Swift/XRT detection).}
\label{fig:swift0208lc}
\end{figure}
\begin{figure}
\centering
\fbox{ \includegraphics[width=84mm]{final_finder_swiftj0208.ps}}
\caption{DSS II R-band image of a $10^\prime\times15^\prime$ field surrounding Swift~J0208.4$-$7428, showing the $3.6^\prime$ radius 90\% IBIS error circle, the nearby known HMXB, RX~J0209.6$-$7427, and with the inset showing an expanded view of the square region along with the Swift XRT error circle. North is up and east is to the left.}
\label{fig:swift0208_finder}
\end{figure}
\section{Candidate sources}
IBIS 15--35\,keV and JEM-X 3--10\,keV maps on time-scales ranging from single revolutions up to five consecutive revolutions were searched for excesses which could indicate potential new sources in the Magellanic Bridge. A list of candidates is supplied in Table~\ref{Tab:cands}. None of these candidate sources correlate with known ROSAT sources \citep{VogesAschenbachBoller1999,VogesAschenbachBoller2000}, but it is worth bearing in mind that the ROSAT area coverage in this part of the sky is not complete. With the large error circles on these objects, it is not yet possible to constrain the nature of these sources, and, although they may be background AGN, their transient nature makes it more likely that they are binary systems in the Magellanic Bridge.
\begin{table*}
\caption{Candidate sources in the Magellanic Bridge. Columns 5 and 6 indicate the revolutions that were mosaicked for each candidate detection. For IBIS detections the fluxes are given in the 15--35\,keV range, while JEM-X fluxes are quoted in the 3--10\,keV range.}
\begin{tabular}{l l l l l l l}
\hline
Name & RA (J2000.0) & Dec (J2000.0) & Error & IBIS & JEM-X & Flux (Error)\\
& \emph{h\,m\,s} & \emph{d\,m\,s} & 90\% & Revs & Revs & erg\,cm$^{-2}$\,s$^{-1}$ \\
\hline
IGR J02048$-$7315 & 02 04 49 &-73 15 27 & 1.8$^\prime$ & &754--756 & $1.7(0.7)\times10^{-12}$ \\
IGR J02220$-$7558 & 02 22 01 & -75 57 59 & 1.8$^\prime$ & & 755--756 & $3.5(1.1)\times10^{-12}$\\
IGR J03144$-$7404 & 03 14 23 &-74 04 23& 4.9$^\prime$ & 746--747& & $4.4(0.9)\times10^{-11}$ \\%2 arcmin from 10th mag star (C1)\\
\hline
\end{tabular}
\label{Tab:cands}
\end{table*}
\section{Discussion}
Before this work, only a single massive X-ray binary system was known in the Magellanic Bridge (RX~J0209.6$-$7427, \citealt{KahabkaHilker2005}). So far, all X-ray sources discovered and identified in the Magellanic Bridge have been transient, with no persistent sources detected down to a luminosity of 2.5$\times10^{36}$\,erg\,s$^{-1}$ (15--35\,keV). In this respect the population of the Magellanic Bridge seems to resemble that of the SMC more closely than it does the LMC. The Magellanic Bridge contains a significant population of young blue stars \citep{DemersIrwin1991,Harris2007}. \citet{Harris2007} showed that the distribution of blue stars is most dense towards the SMC side and that there appear to be very few old, red giant branch stars in the fields studied. As most HMXBs in the SMC are found in regions populated by young stars \citep{YokogawaImanishiTsujimoto2003}, this seems to indicate that the trend will be to find more HMXBs on the SMC side of the Bridge, where the stellar population is younger. However, in our work, the exposure is by no means uniform across the Magellanic Bridge (see Fig.~\ref{fig:expo}) and this introduces a selection effect whereby new HMXBs are selectively found where the exposure is highest, i.e. towards the SMC side of the Bridge. Thus, we cannot yet be confident that the population of X-ray binaries so far uncovered in the Bridge is indicative of the distribution in the Bridge.
The prevalence of a young stellar population, and lack of an older one, suggests that the stellar population of the Magellanic Bridge formed in-situ, rather than being tidally extracted from the Large and Small Magellanic Clouds. In addition to the HMXBs discovered here, the region also shows strong evidence for recent star formation with $\sim100$ OB associations identified in the western Magellanic Bridge \citep{BicaSchmitt1995} as well as molecular clouds \citep{MizunoMillerMaeda2006}, which act as fuel for ongoing star formation.
\begin{figure}
\centering
\fbox{\includegraphics[width=84mm]{ibis_exp_everything.ps}}
\caption{IBIS exposure map for the sum of all observations presented in this paper. The exposure on the eastern edge of the bridge is almost half of that on the western edge (SMC side). The image size is $10^\circ\times15^\circ$, and the contours once again show the H\,I distribution throughout the SMC and Magellanic Bridge.}
\label{fig:expo}
\end{figure}
It is very unlikely that the HMXBs so far discovered in the Magellanic Bridge could have been ejected from the SMC through supernova kicks. Not only would they require systemic velocities in excess of 200\,km\,s$^{-1}$ (where most SMC HMXBs have estimated systemic velocities of $\sim30$\,km\,s$^{-1}$, \citealt{Coe2005}), but we would expect to find a distribution of HMXBs around the SMC rather than preferentially concentrated in the Bridge.
To estimate what fraction of a potential HMXB population in the Magellanic Bridge has been observed to date, we use an estimate of 1 Be+NS binary system per square degree, as obtained by \citet{KahabkaHilker2005}. This estimate is based on the observed stellar density of stars $>2.5M_\odot$ in the Magellanic Bridge \citep{GardinerHatzidimitiou1992}, corrected through the initial mass function to give the number of B stars per square degree. Through estimates of the Be/B fraction and an assumption of how many Be stars have a neutron star companion, these authors estimate the number of Be+NS systems as between 0.68 and 0.96 per square degree, dependent on the shape of the IMF. Given that the Magellanic Bridge spans an area of $\sim6$ square degrees on the sky, we may expect up to 6 Be+NS systems in the Magellanic Bridge, which is roughly consistent with the number of sources and potential candidates reported up until now, including this work. At this early stage, it is premature to attempt quantitative estimates of an upper limit to the potential number of HMXBs in the Magellanic Bridge. A prediction based on star formation rate (e.g. \citealt{GrimmGilfanovSunyaev2003}) is prone to significant uncertainties in both the X-ray luminosity function of this potential population, and in estimates of the star formation rate. However, the X-ray active population in the SMC during the same set of observations is very similar in number to that of the Magellanic Bridge. In the SMC, only four out of the $\sim60$ HMXBs were X-ray active during these observations, so by observational comparison with the SMC, we may expect a population significantly larger than 6 Be/X-ray binaries in the Bridge.
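The population estimate above amounts to a simple scaling of the surface density of \citet{KahabkaHilker2005} by the Bridge area; a sketch of that arithmetic (figures taken from the text above):

```python
area = 6.0                 # approximate sky area of the Magellanic Bridge (deg^2)
lo, hi = 0.68, 0.96        # Be+NS systems per square degree (Kahabka & Hilker)
expected = (lo * area, hi * area)
# roughly 4 to 6 Be+NS systems expected over the whole Bridge
```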
\section{Conclusions}
In this paper, we presented two new hard X-ray sources in the Magellanic Bridge identified through wide-field, hard X-ray imaging. We described their timing and spectral characteristics and identified them as likely high mass X-ray binaries located at the distance of the Magellanic Bridge. This puts the number of high mass X-ray binaries in the Magellanic Bridge at three, with a further three candidate sources to add to the emerging population. Optical observations of this region show a large number of young stars, especially towards the western edge, consistent with a picture of in-situ star formation as a result of tidal interaction between the SMC and LMC. The short-lived high mass X-ray binaries presented here echo this scenario for star formation in the Bridge.
\section*{Acknowledgements}
Based on observations with \emph{INTEGRAL}, an ESA project with instruments and science data centre funded by ESA member states (especially the PI countries: Denmark, France, Germany, Italy, Switzerland, Spain), Czech Republic and Poland, and with the participation of Russia and the USA. LJT wishes to thank the University of Southampton, whose support has made this research possible.
\bibliographystyle{mn2e}
\section{Introduction}\label{intro}
As usual, let $\sigma(N)$ denote the sum of divisors of a positive integer $N$.
$N$ is called perfect if $\sigma(N)=2N$.
Though it is not known whether or not an odd perfect number exists,
many conditions which must be satisfied by such a number are known.
Suppose that $N$ is an odd perfect number.
Euler has shown that
\[
N=p^{\alpha} q_1^{2\beta_1}\cdots q_r^{2\beta_r}
\]
for distinct odd primes $p, q_1, \ldots, q_r$ and positive integers
$\alpha, \beta_1, \ldots, \beta_r$ with $p\equiv \alpha\equiv 1\pmod{4}$.
The special case $\beta_1=\beta_2=\cdots =\beta_r=\beta$
has been considered by many authors.
Steuerwald \cite{St} proved that we cannot have $\beta_1=\cdots=\beta_r=1$.
McDaniel \cite{Mc} proved that we cannot have $\beta_1\equiv\cdots\equiv\beta_r\equiv1\pmod{3}$.
If $\beta_1=\cdots=\beta_r=\beta$, then it is known
that $\beta\neq 2$ (Kanold \cite{Ka1}), $\beta\neq 3$ (Hagis
and McDaniel \cite{HMD}), $\beta\neq 5, 12, 17, 24, 62$
(McDaniel and Hagis \cite{MDH}), and $\beta\neq 6, 8, 11, 14, 18$
(Cohen and Williams \cite{CW}).
In their paper \cite{HMD}, Hagis and McDaniel conjectured
that $\beta_1=\cdots=\beta_r=\beta$ does not occur.
We \cite{Ymd} proved that if $\beta_1=\cdots=\beta_r=\beta$,
then $r\leq 4\beta^2+2\beta+2$ and
\[N<2^{4^{4\beta^2+2\beta+3}}.\]
We call this upper bound for $N$ \textit{the classical bound}.
At the RIMS workshop on analytic number theory in 2014,
we gave an improved upper bound for such numbers,
although that result has never been published
(even in preprint form or in unrefereed proceedings).
We proved that $r<2\beta^2+O(\beta\log\beta)$
with an effectively computable implicit constant.
There we used the arithmetic in quadratic fields and
lower bounds for linear forms in logarithms.
In this paper, we shall give a slightly stronger result and a simpler proof,
using less arithmetic in quadratic fields and
a linear algebraic argument instead of Baker's method.
\begin{thm}\label{th}
If $N=p^{\alpha} q_1^{2\beta}\cdots q_r^{2\beta}$ with $p, q_1, \ldots, q_r$
distinct primes is an odd perfect number,
then $r\leq 2\beta^2+8\beta+2$ and \[N<2^{4^{2\beta^2+8\beta+3}}.\]
Further, the coefficient $8$ of $\beta$ can be replaced by $7$
if $2\beta+1$ is not a prime, or $\beta\geq 29$.
\end{thm}
The upper bound for $N$ immediately follows from the upper bound for $r$
and Nielsen's result \cite{Nie} that if $N$ is an odd perfect number, then $N<2^{4^{\omega(N)}}$,
where $\omega(N)$ denotes the number of distinct prime factors of $N$.
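For comparison with the classical bound $r\leq 4\beta^2+2\beta+2$, a direct check (an illustration only) confirms that the bound $r\leq 2\beta^2+8\beta+2$ of the present theorem coincides with the classical one at $\beta=3$ and is strictly smaller for every $\beta\geq 4$:

```python
def r_classical(beta):
    return 4 * beta ** 2 + 2 * beta + 2   # classical bound for r

def r_new(beta):
    return 2 * beta ** 2 + 8 * beta + 2   # bound of the present theorem

assert r_new(3) == r_classical(3)
assert all(r_new(b) < r_classical(b) for b in range(4, 1000))
```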
In Section \ref{reduction}, we use a method used in \cite{Ymd}
to reduce the theorem to an upper bound for the number of solutions of some diophantine equations.
\begin{lem}\label{lm0}
Assume that $l=2\beta+1$ is a prime $\geq 19$.
If $N=p^{\alpha} q_1^{2\beta}\cdots q_r^{2\beta}$ with $p, q_1, \ldots, q_r$
distinct primes is an odd perfect number,
then, for each prime $q_j\equiv 1\pmod{l}$,
there exist at most five primes $q_i\not\equiv 1\pmod{l}$ such that $(q_i^l-1)/(q_i-1)=p^m q_j$ for some integer $m$ if $l\geq 59$,
and at most six such primes if $19\leq l\leq 53$.
\end{lem}
In Section \ref{pr}, we solve this diophantine problem to prove the theorem.
Here, we avoid the use of Baker's method by adopting a linear algebraic technique used by Beukers in \cite{Beu},
who gave upper bounds for the numbers of solutions of generalized Ramanujan-Nagell equations.
\section{Preliminaries}\label{lemmas}
In this section, we shall introduce some notations and lemmas.
We begin by introducing two well-known lemmas concerning prime factors
of the $n$-th cyclotomic polynomial, which we denote by $\Phi_n(X)$.
Lemma \ref{lm1} follows from Theorems 94 and 95 in Nagell \cite{Nag}.
Lemma \ref{lm2} has been proved by Bang \cite{Ban} and rediscovered by many authors
such as Zsigmondy \cite{Zsi}, Dickson \cite{Dic} and Kanold \cite{Ka1, Ka2}.
\begin{lem}\label{lm1}
Let $p, q$ be distinct primes with $q\neq 2$ and $c$ be a positive integer.
If $p\equiv 1\pmod{q}$, then $q$ divides $\sigma(p^c)$ if and only if $q$ divides $c+1$.
Moreover, if $p\not\equiv 1\pmod{q}$, then
$q$ divides $\sigma(p^c)$ if and only if the multiplicative order of $p$ modulo $q$
divides $c+1$.
\end{lem}
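Both cases of Lemma \ref{lm1} are easy to check numerically via $\sigma(p^c)=(p^{c+1}-1)/(p-1)$; the following brute-force verification is a sketch for small primes, not part of the proof:

```python
def sigma_prime_power(p, c):
    """sigma(p^c) = 1 + p + ... + p^c."""
    return (p ** (c + 1) - 1) // (p - 1)

def mult_order(a, m):
    """Multiplicative order of a modulo m (gcd(a, m) = 1 assumed)."""
    k, x = 1, a % m
    while x != 1:
        x = (x * a) % m
        k += 1
    return k

primes = (3, 5, 7, 11, 13, 17, 19)
for p in primes:
    for q in primes:
        if q == p:
            continue
        for c in range(1, 40):
            divides = sigma_prime_power(p, c) % q == 0
            if p % q == 1:
                # q | sigma(p^c)  <=>  q | c + 1
                assert divides == ((c + 1) % q == 0)
            else:
                # q | sigma(p^c)  <=>  ord_q(p) | c + 1
                assert divides == ((c + 1) % mult_order(p, q) == 0)
```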
\begin{lem}\label{lm2}
If $a$ is an integer greater than $1$, then $\Phi_n(a)$ has
a prime factor which does not divide $a^m-1$ for any $m<n$,
unless $(a, n)=(2, 1), (2, 6)$ or $n=2$ and $a+1$ is a power of $2$.
\end{lem}
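Lemma \ref{lm2} can be illustrated by direct computation for small values; the sketch below uses naive factorization and is not part of the proof:

```python
def prime_factors(m):
    """Set of prime factors of m (naive trial division)."""
    fs, d = set(), 2
    while d * d <= m:
        while m % d == 0:
            fs.add(d)
            m //= d
        d += 1
    if m > 1:
        fs.add(m)
    return fs

def primitive_factors(a, n):
    """Prime factors of a^n - 1 dividing no a^m - 1 with m < n."""
    old = set()
    for m in range(1, n):
        old |= prime_factors(a ** m - 1)
    return prime_factors(a ** n - 1) - old

assert primitive_factors(2, 6) == set()       # the exception (a, n) = (2, 6)
assert primitive_factors(3, 2) == set()       # a + 1 = 4 is a power of 2
assert primitive_factors(2, 11) == {23, 89}   # 2^11 - 1 = 23 * 89
```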
Next, we need some notations and results from the arithmetic of a quadratic field.
Let $l>3$ be a prime and $D=(-1)^\frac{l-1}{2}l$.
Let $\OK$ and $\OO$ denote $\Q(\sqrt{D})$ and its ring of integers $\Z[(1+\sqrt{D})/2]$ respectively.
We use the overline symbol to express the conjugate in $\OK$.
In the case $D>0$, $\ep$ and $R=\log\ep$ shall denote the fundamental unit and the regulator in $\OK$.
In the case $D<-4$, we set $\ep=-1$ and $R=\pi i$. We note that neither $D=-3$ nor $-4$ occurs
since we have assumed that $l>3$.
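The prime ideal factorizations $[p]=\p\bar\p$ and $[q]=\q\bar\q$ used below rest on the fact that a rational prime $q\equiv 1\pmod{l}$ splits in $\Q(\sqrt{D})$, since $(D/q)=(q/l)=1$ by quadratic reciprocity. A numerical spot-check via Euler's criterion (an illustration, not part of the argument):

```python
def legendre(a, p):
    """Legendre symbol (a/p) via Euler's criterion, p an odd prime."""
    t = pow(a % p, (p - 1) // 2, p)
    return -1 if t == p - 1 else t

# l = 5: D = 5; primes q = 1 (mod 5) have (D/q) = 1, i.e. [q] splits
for q in (11, 31, 41, 61, 71):
    assert legendre(5, q) == 1

# l = 7: D = -7; primes q = 1 (mod 7) have (D/q) = 1
for q in (29, 43, 71, 113):
    assert legendre(-7, q) == 1
```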
We shall introduce the following lemma on the value of the cyclotomic polynomial $\Phi_l(x)$.
\begin{lem}\label{lm3}
Assume that $l$ is a prime $\geq 19$ and $x$ is an integer $>3^{\floor{(l+1)/6}}$. Then $\Phi_l(x)$ can be written in the form $(X^2-DY^2)/4$ for some integers $X$ and $Y$
with $0.3791/x<\abs{Y/(X-Y\sqrt{D})}<0.6296/x$.
Moreover, if $p, q$ are primes $\equiv 1\pmod{l}$ and $\Phi_l(x)=p^m q$ for some integer $m$, then,
\begin{equation}\label{eq21}
\left[\frac{X+Y\sqrt{D}}{X-Y\sqrt{D}}\right]=\left(\frac{\bar\p}{\p}\right)^{\pm m}\left(\frac{\bar\q}{\q}\right)^{\pm 1},
\end{equation}
where $[p]=\p\bar\p$ and $[q]=\q\bar\q$ are prime ideal factorizations in $\OO$.
\end{lem}
\begin{proof}
Let $\zeta$ be a primitive $l$-th root of unity.
We can factor $(x^l-1)/(x-1)=\psi^+(x)\psi^-(x)$ in $\OK$, where
\begin{equation*}
\begin{split}
\psi^+(x)=& \prod_{\left(\frac{i}{l}\right)=1}(x-\zeta^i)=\sum_{i=0}^{\frac{l-1}{2}}a_i x^{\frac{l-1}{2}-i}, \\
\psi^-(x)=& \prod_{\left(\frac{i}{l}\right)=-1}(x-\zeta^i).
\end{split}
\end{equation*}
Hence, taking $P(x)=\psi^+(x)+\psi^-(x)$ and $Q(x)=(\psi^+(x)-\psi^-(x))/\sqrt{D}$, we have
\begin{equation}
\frac{x^l-1}{x-1}=\psi^+(x)\psi^-(x)=\frac{P^2(x)-DQ^2(x)}{4}.
\end{equation}
Now, putting $X=P(x)$ and $Y=Q(x)$, we have $\Phi_l(x)=(X^2-DY^2)/4$ with
$\psi^+(x)=(X+Y\sqrt{D})/2$ and $\psi^-(x)=(X-Y\sqrt{D})/2$.
If $\Phi_l(x)=p^m q$, then
we have the ideal factorizations $[x-\zeta^i]=\p^{(i) m} \q^{(i)}$ for $i=1, 2, \ldots, l-1$
in $\Q(\zeta)$ with $[p]=\prod_{i=1}^{l-1} \p^{(i)}$ and $[q]=\prod_{i=1}^{l-1} \q^{(i)}$.
We see that
$\prod_{\left(\frac{i}{l}\right)=1}\p^{(i)}=\p$ or $\bar\p$
and $\prod_{\left(\frac{i}{l}\right)=1}\q^{(i)}=\q$ or $\bar\q$.
Now $[X+Y\sqrt{D}]$ can be factored into one of the forms $\p^m \q, \bar\p^m \q, \p^m \bar\q$ or $\bar\p^m \bar\q$ in $\OO$
and (\ref{eq21}) holds.
Now it remains to show that $0.3791/x<\abs{Y/(X-Y\sqrt{D})}<0.6296/x$.
We begin by dealing with the case $x\geq l^2$.
We clearly have $a_0=1$.
It follows from the well-known evaluation of the Gauss sum that $a_1=\frac{1\pm \sqrt{D}}{2}$.
Moreover, it immediately follows from the definition of $\psi^+(x)$ that
\[\abs{a_i}\leq\binom{(l-1)/2}{i}<\left(\frac{l-1}{2}\right)^i\] for each $i\leq\frac{l-1}{2}$.
Combining these facts on $a_i$'s, we obtain
\begin{equation}
\begin{split}
\abs{P(x)-2x^\frac{l-1}{2}-x^\frac{l-3}{2}}
& \leq 2\sum_{i=2}^{\frac{l-1}{2}}\left(\frac{l-1}{2}\right)^i x^{\frac{l-1}{2}-i} \\
& <\frac{(l-1)^2 x^\frac{l-3}{2}}{2x-l-1}\leq \frac{(l-1)^2 x^\frac{l-3}{2}}{2l^2-l-1} \\
& <\frac{x^\frac{l-3}{2}}{2}
\end{split}
\end{equation}
and
\begin{equation}
\begin{split}
\abs{\abs{Q(x)}-x^\frac{l-3}{2}}
& \leq \frac{2}{\sqrt{l}}\sum_{i=2}^{\frac{l-1}{2}}\left(\frac{l-1}{2}\right)^i x^{\frac{l-1}{2}-i} \\
& <\frac{(l-1)^2 x^\frac{l-3}{2}}{\sqrt{l}(2x-l-1)}<\frac{(l-1)^2 x^\frac{l-3}{2}}{\sqrt{l}(2l^2-l-1)} \\
& <\frac{x^\frac{l-3}{2}}{2\sqrt{l}}.
\end{split}
\end{equation}
From these inequalities, we deduce that
\begin{equation}
\abs{\frac{Q(x)}{P(x)-Q(x)\sqrt{D}}}<\frac{1+\frac{1}{2\sqrt{l}}}{2x+\frac{1}{2}-\left(1+\frac{1}{2\sqrt{l}}\right)\sqrt{l}}<\frac{0.6296}{x}
\end{equation}
and
\begin{equation}
\abs{\frac{Q(x)}{P(x)-Q(x)\sqrt{D}}}>\frac{1-\frac{1}{2\sqrt{l}}}{2x+\frac{3}{2}+\left(1+\frac{1}{2\sqrt{l}}\right)\sqrt{l}}>\frac{0.3791}{x}
\end{equation}
for $l\geq 19$, proving the lemma in this case.
In the remaining case $x<l^2$, we have $l\leq 37$ since we have assumed that $x>3^{\floor{(l+1)/6}}$.
For each $l$, we can confirm the desired inequality for $3^{\floor{(l+1)/6}}<x<l^2$ by calculation.
Now the lemma is completely proved.
\end{proof}
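The finite check invoked at the end of the proof can be reproduced numerically; the sketch below (illustrative code, not the authors') verifies the claimed bounds for $l=19$, where $D=-19$ and $3^{\floor{(l+1)/6}}=27$:

```python
import cmath
import math

l = 19                                   # l is 3 mod 4, so D = -l, sqrt(D) = i*sqrt(l)
zeta = cmath.exp(2j * cmath.pi / l)
residues = {i * i % l for i in range(1, l)}

def ratio(x):
    # For l = 3 mod 4, complex conjugation swaps psi^+ and psi^-, so
    # X = 2 Re(psi^+), Y = 2 Im(psi^+)/sqrt(l), and |X - Y*sqrt(D)| = 2|psi^+|.
    psi_plus = 1 + 0j
    for i in residues:
        psi_plus *= x - zeta ** i
    Y = 2 * psi_plus.imag / math.sqrt(l)
    return abs(Y) / (2 * abs(psi_plus))  # = |Y / (X - Y*sqrt(D))|

for x in range(28, l * l):               # the range 3^{floor((l+1)/6)} < x < l^2
    assert 0.3791 / x < ratio(x) < 0.6296 / x
```

The same loop, run for each prime $19\leq l\leq 37$ over $3^{\floor{(l+1)/6}}<x<l^2$, covers all the remaining cases.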
\section{Reduction to a diophantine problem}\label{reduction}
Let $N=p^{\alpha} q_1^{2\beta}\cdots q_r^{2\beta}$ be an odd perfect number.
In this section, we shall show that our theorem can be reduced to Lemma \ref{lm0}.
Various results referred to in the introduction of this paper allow us to assume that $\beta\geq 9$
without loss of generality.
We see that we can take a prime factor $l$ of $2\beta+1$ which is one of the $q_i$'s.
Indeed, if $2\beta+1$ has at least two distinct prime factors $l_1$ and $l_2$,
then at least one of them must be one of the $q_i$'s and,
if $2\beta+1=l^\gamma$ is a power of a prime $l$, then
we must have $l=q_{i_0}$ for some $i_0$ by Kanold \cite{Ka1}.
As we did in \cite{Ymd}, we divide $q_1, \ldots, q_r$ into four disjoint sets. Let
\begin{equation*}
S=\{i:q_i\equiv 1\pmod{l}\},
\end{equation*}
\begin{equation*}
T=\{i:q_i\nequiv 1\pmod{l}, i\neq i_0, q_j\mid\sigma(q_i^{2\beta})\text{ for some }1\leq j\leq r\},
\end{equation*}
and
\begin{equation*}
U=\{i:q_i\nequiv 1\pmod{l}, i\neq i_0, q_j\nmid\sigma(q_i^{2\beta})\text{ for any }1\leq j\leq r\}.
\end{equation*}
Hence, we can write $\{i: 1\leq i\leq r\}=S\cup T\cup U\cup \{i_0\}$.
In \cite{Ymd}, we proved that $\#S\leq 2\beta$.
Moreover, if $2\beta+1=l^\gamma$ is a prime power, then $\#T\leq (2\beta)^2$ and $\#U\leq 1$,
implying that $r\leq 4\beta^2+2\beta+2$
and, if $2\beta+1$ has $s>1$ distinct prime factors, then $\#S\leq 2\beta$ and $r\leq 2\beta\#S/(2^{s-1}-1)$.
For each $i\in T$, let $f(i)$ denote the number of prime factors in $S$ dividing $\sigma(q_i^{2\beta})$
counted with multiplicity. Then, from Lemmas \ref{lm1} and \ref{lm2}, we can easily see that, for any $i\in T$, $\sigma(q_i^{2\beta})$ has at least one prime factor in $S$.
Hence, we have $f(i)\geq 1$ for any $i\in T$.
This immediately gives that $\#T\leq \sum_{i\in T}f(i)\leq (2\beta)\#S\leq 4\beta^2$,
which is Lemma 3.2 in \cite{Ymd}. This yields the dominant term $4\beta^2$ in the exponent of the classical bound, which we would like to improve.
To this end, we denote by $\delta$ the number of $i$'s for which $f(i)=1$.
Then we have \[2\#T-\delta=\delta+2(\#T-\delta)\leq \sum_{i\in T}f(i)\leq 4\beta^2;\]
that is, $\#T\leq 2\beta^2+(\delta/2)$.
If $2\beta+1$ is composite, then, by Lemma \ref{lm2}, for each divisor $d$ of $(2\beta+1)/l$,
$\Phi_{ld}(q_i)$ has a prime factor $\equiv 1\pmod{l}$ not dividing $\Phi_{lk}(q_i)$ for any other divisor $k<d$ of $(2\beta+1)/l$.
Hence, we see that $U$ must be empty.
Moreover, if $i\in T$ and $f(i)=1$, then
$2\beta+1=l^2$ and $\Phi_l(q_i)=q_j$ or $\Phi_{l^2}(q_i)=q_j$ for some $j\in S$,
or $2\beta+1=l_1 l$ for some prime $l_1$ and $\Phi_l(q_i)=q_j$ or $\Phi_{l_1 l}(q_i)=q_j$ for some $j\in S$.
From this, we can deduce that $f(i)=1$ holds for at most $2\#S\leq 4\beta$ indices $i\in T$.
That is, $\delta\leq 2\#S\leq 4\beta$. Since $U$ is empty, we have $\#T+\#U\leq 2\beta^2+2\beta$.
If $2\beta+1=l$ is prime, $i\in T$ and $f(i)=1$,
then $\sigma(q_i^{2\beta})=\Phi_l(q_i)=p^m q_j$ for an index $j\in S$ and an integer $m\geq 0$.
Moreover, we have $\#U\leq 1$ as mentioned above.
Now, observing that $r\leq \#S+\#T+\#U+1\leq 2\beta^2+2\beta+(\delta/2)+2$,
we conclude that Theorem \ref{th} can be derived if we show that,
for each prime $q_j\equiv 1\pmod{l}$,
there exist at most five primes $q_i$ with $i\in T$ such that $\Phi_l(q_i)=(q_i^l-1)/(q_i-1)=p^m q_j$ for any prime $l\geq 59$
and at most six such primes for each prime $19\leq l\leq 53$.
Hence, Theorem \ref{th} would follow from Lemma \ref{lm0}, which we prove in the next section.
\section{Proof of the theorem}\label{pr}
We begin by proving a gap principle using elementary modular arithmetic.
\begin{lem}\label{lm4}
If $x_2>x_1>0$ are two multiplicatively independent integers such that
$\Phi_l(x_1)=p^{m_1} q_j$ and $\Phi_l(x_2)=p^{m_2} q_j$,
then $x_2>x_1^{\floor{(l+1)/6}}$.
\end{lem}
\begin{proof}
Assume that $x_2\leq x_1^{\floor{(l+1)/6}}$.
We begin by observing that $(x_1^{f_1} x_2^{f_2})^l\equiv 1\pmod{q_j}$ for any integers $f_1$ and $f_2$.
In the case $p^{m_1}<q_j$, we must have $q_j>(\Phi_l(x_1))^{1/2}>x_1^{(l-1)/2}$
and therefore
\begin{equation}
1\leq x_1^{f_1}x_2^{f_2}\leq x_1^{f_1+f_2\floor{(l+1)/6}}\leq x_1^{(l-1)/2}<q_j
\end{equation}
for $0\leq f_1\leq (l-1)/2-f_2\floor{(l+1)/6}$.
This implies that each integer of the form $x_1^{f_1} x_2^{f_2}$
with $0\leq f_1\leq (l-1)/2-f_2\floor{(l+1)/6}$ must give a solution of the congruence
$X^l\equiv 1\pmod{q_j}$ and these solutions are not congruent to each other. For each fixed $f_2$,
we have $(l+1)/2-f_2\floor{(l+1)/6}$ such solutions.
Hence, recalling that $l\geq 19$,
the congruence $X^l\equiv 1\pmod{q_j}$ should have at least
\[\sum_{f_2=0}^2\left(\frac{l+1}{2}-f_2\floor{\frac{l+1}{6}}\right)=\frac{3(l+1)}{2}-3\floor{\frac{l+1}{6}}\geq l+1\]
solutions in $1\leq X<q_j$, which is impossible.
Similarly, in the case $p^{m_1}>q_j$, the congruence $X^l\equiv 1\pmod{p^{m_1}}$
should have at least $l+1$ solutions in $1\leq X<p^{m_1}$,
a contradiction again. Hence, we must have $x_2>x_1^{\floor{(l+1)/6}}$.
\end{proof}
Using Lemma \ref{lm3}, we shall prove another gap principle, which is more conditional
but much stronger than the first gap principle.
\begin{lem}\label{lm5}
If $\Phi_l(x_i)=p^{m_i} q_j$ for three integers $x_3>x_2>x_1>0$ with $x_2>x_1^{\floor{(l+1)/6}}$, then $m_3>0.397\abs{R}x_1$.
\end{lem}
\begin{proof}
We write $\xi_i=(X_i+Y_i\sqrt{D})/(X_i-Y_i\sqrt{D})$ for each $i=1, 2, 3$.
Factoring $[p]=\p\bar\p, [q_j]=\q_j\bar\q_j$ in $\OO$ and applying Lemma \ref{lm3} with $q_j$ in place of $q$,
we obtain that, for each $i=1, 2, 3$,
\begin{equation}
[\xi_i]=\left(\frac{\bar\p}{\p}\right)^{\pm m_i}\left(\frac{\bar\q_j}{\q_j}\right)^{\pm 1},
\end{equation}
holds with $0<Y_i/(X_i-Y_i\sqrt{D})<(\Phi_l(x_i))^{-1/(l-1)}$.
Hence, taking an appropriate combination of signs, we obtain
\begin{equation}
[\xi_1]^{\pm m_2\pm m_3}[\xi_2]^{\pm m_3\pm m_1}[\xi_3]^{\pm m_1\pm m_2}=[1],
\end{equation}
and therefore
\begin{equation}
\xi_1^{\pm m_2\pm m_3}\xi_2^{\pm m_3\pm m_1}\xi_3^{\pm m_1\pm m_2}=\pm \ep^a
\end{equation}
for some integer $a$.
Hence, if we let each logarithm $\log\xi_i$ take its principal value, we have
\begin{equation}
(\pm m_2\pm m_3)\log\xi_1+(\pm m_3\pm m_1)\log\xi_2+(\pm m_1\pm m_2)\log\xi_3=bR
\end{equation}
for some integer $b$.
If $b\neq 0$, then
\begin{equation}
(m_2+m_3)\abs{\log \xi_1}+(m_3+m_1)\abs{\log \xi_2}+(m_1+m_2)\abs{\log \xi_3}\geq \abs{R}.
\end{equation}
Recalling that $0<Y_i/(X_i-Y_i\sqrt{D})<0.6296/x_i$ from Lemma \ref{lm3}
and each complex logarithm takes its principal value,
we have $\abs{\log \xi_i}<1.2592/x_i$ and therefore
\begin{equation}
\begin{split}
& 2.5184m_3\left(\frac{1}{x_1}+\frac{1}{x_2}+\frac{1}{x_3}\right) \\
& \geq 1.2592\left(\frac{m_2+m_3}{x_1}+\frac{m_3+m_1}{x_2}+\frac{m_1+m_2}{x_3}\right)>\abs{R}.
\end{split}
\end{equation}
From this and the assumption that $x_3>x_2>x_1^{\floor{(l+1)/6}}\geq x_1^3$
(recall that we have assumed that $l\geq 19$), we can deduce that $m_3>0.397x_1 \abs{R}$.
If $b=0$, then
$\abs{\log \xi_1}\leq 2m_3\abs{\log \xi_2}+2m_2\abs{\log\xi_3}$.
We see that $\abs{\log \xi_2}<1.2592/x_2, \abs{\log \xi_3}<1.2592/x_3$ by Lemma \ref{lm3} and
$\abs{\log \xi_1}>0.3791/x_1$.
Hence, we have
\[\frac{0.15}{x_1}<m_3\left(\frac{1}{x_2}+\frac{1}{x_3}\right).\]
Moreover, since $x_2>x_1^{\floor{(l+1)/6}}\geq x_1^3$ and $x_3>x_2^{\floor{(l+1)/6}}$, we have
\begin{equation}
m_3>\frac{0.15x_2}{x_1}>0.15x_1^2>0.15\times 3^{\floor{\frac{l+1}{6}}} x_1>\abs{R} x_1,
\end{equation}
where the last inequality follows observing that,
if $l\equiv 3\pmod{4}$, then $0.15\times 3^{\floor{(l+1)/6}}>0.2l>\pi=\abs{R}$
and, if $l\equiv 1\pmod{4}$, then $D=l\geq 29$ and therefore $0.15\times 3^{\floor{(l+1)/6}}>l>R$
using the estimate $R<D^{1/2}\log (4D)$ from \cite{Fai}.
Hence, we conclude that, whether $b=0$ or not, $m_3>0.397\abs{R}x_1$, proving the lemma.
\end{proof}
Now we shall prove Lemma \ref{lm0}.
Assume that $q_1<q_2<\cdots <q_6$ are six primes not congruent to $1$ modulo $l$ such that
$\Phi_l(q_i)=p^{g_i} q_j$ for $i=1, 2, \ldots, 6$.
Moreover, assume that $q_7$ is a prime in $T$ greater than $q_6$
and $\Phi_l(q_7)=p^{g_7} q_j$ if $19\leq l\leq 53$.
Write $R^\prime=0.397\abs{R}$.
Since $q_2>q_1^{\floor{(l+1)/6}}\geq 3^{\floor{(l+1)/6}}$,
we can apply Lemma \ref{lm5} with
$(x_i, m_i)=(q_{i+1}, g_{i+1}) (i=1, 2, 3)$
to obtain \[\log q_4>\frac{g_4\log p}{l-1}> \frac{q_2 R^\prime \log p}{l-1}\geq \frac{3^{\floor{\frac{l+1}{6}}} R^\prime \log (2l+1)}{l-1},\]
where we use the fact $p\geq 2l+1$ by Lemma \ref{lm1} and the assumption $q_i\not\equiv 1\pmod{l}$.
Similarly, we have $\log q_6>q_4 R^\prime (\log (2l+1))/(l-1)$
and
\begin{equation}
\frac{\log q_6}{\log 2}>\exp\left(\frac{3^{\floor{\frac{l+1}{6}}}R^\prime \log (2l+1)}{l-1}+\log\frac{R^\prime \log (2l+1)}{l-1}\right).
\end{equation}
Hence, for $l\geq 59$, we must have
\[\frac{\log q_6}{\log 2}>4^{l^2}=4^{4\beta^2+4\beta+1},\]
which is impossible since it implies that $N\geq q_6$ exceeds the classical bound.
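For $l\equiv 3\pmod{4}$, where $\ep=-1$ and $\abs{R}=\pi$, this comparison reduces to an elementary inequality of logarithms that can be checked directly; a small illustrative sketch:

```python
import math

def exceeds_classical_bound(l):
    # Compare the lower bound exp(3^{floor((l+1)/6)} R' log(2l+1)/(l-1) + log(...))
    # against 4^{l^2} on a log scale; valid only for l = 3 mod 4, where |R| = pi.
    r_prime = 0.397 * math.pi                    # R' = 0.397 |R|
    c = r_prime * math.log(2 * l + 1) / (l - 1)
    lhs = 3 ** ((l + 1) // 6) * c + math.log(c)  # exponent of the lower bound
    rhs = l * l * math.log(4)                    # exponent of 4^{l^2}
    return lhs > rhs

assert all(exceeds_classical_bound(l) for l in (59, 67, 71, 79, 83))
```

The left-hand exponent grows like $3^{l/6}/l$, so the inequality only improves as $l$ increases.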
If $23\leq l\leq 53$, applying Lemma \ref{lm4}, we have $q_1\geq 3, q_2\geq 3^4$ and $q_3\geq 3^{16}$.
Applying Lemma \ref{lm5} with $(x_i, m_i)=(q_{i+2}, g_{i+2})$
$(i=1, 2, 3)$ and then $(x_i, m_i)=(q_{i+4}, g_{i+4}) (i=1, 2, 3)$, we have
$q_5>\exp(3040000)$ and $q_7>\exp(\exp(3000000))$.
If $l=19$, applying Lemma \ref{lm5} with $(x_i, m_i)=(q_{i+2}, g_{i+2}) (i=1, 2, 3)$
and then $(x_i, m_i)=(q_{i+4}, g_{i+4}) (i=1, 2, 3)$, we have
$q_1\geq 3, q_2\geq 29, q_3\geq 24391, q_5>\exp(6238)$ and $q_7>\exp(\exp(6000))$.
Thus, $q_7$ must exceed the classical bound if $19\leq l\leq 53$,
which is a contradiction again.
Hence, we conclude that, for each given $j\in S$,
there are at most five indices $i\in T$ with $f(i)=1$ and $q_j\mid \Phi_l(q_i)=\sigma(q_i^{2\beta})$ if $l\geq 59$
and there are at most six indices $i\in T$ with $f(i)=1$ and $q_j\mid \Phi_l(q_i)=\sigma(q_i^{2\beta})$ if $19\leq l\leq 53$.
This completes the proof of Lemma \ref{lm0}, which in turn implies Theorem \ref{th}.
\section{Introduction}
A long-term goal of AI research is to build embodied agents that can perceive the environment, communicate with humans, and perform real-world tasks to benefit human society.
However, since the agent interacts closely with humans and environments, it often receives sensitive information during training and inference.
For example, as shown in Fig.~\ref{centralized vln}, in the task of Vision-and-Language Navigation (VLN)~\cite{r2r}, where an agent learns to navigate towards a target location in an indoor environment given a natural language instruction, the training and inference data may include private information such as what the user's house looks like, what the user said, and what the user did. Data privacy is a central problem for building trustworthy embodied agents but has seldom been studied before~\cite{gu-etal-2022-vision}. Thus, in this work, we introduce privacy-preserving embodied agent learning for the task of vision-and-language navigation.
\begin{figure}[t]
\centering
\subfigure[Centralized VLN learning]{
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{centralized_vln.pdf}
\end{minipage}
\label{centralized vln}
}
\subfigure[Federated VLN learning]{
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{fed_vln.pdf}
\end{minipage}
\label{fig:fedvln}
}
\caption{Data privacy: centralized VLN learning \textit{vs.} our federated VLN learning. Existing VLN approaches centralize all the client data in a server, including house environments and user instructions, which ignores users' privacy concerns. Our federated VLN framework keeps client data used only locally and the server receives nothing other than local model updates, so the client data privacy is preserved.}
\label{centalized vs fed}
\end{figure}
VLN models are typically trained on seen environments with ground-truth instruction-trajectory pairs and then deployed to unseen environments without any labeled data.
After deployment, the agent may explore the unseen environment and adapt to the new environment for better performance, which is known as pre-exploration.
However, as shown in Fig.~\ref{centralized vln}, most of the existing methods assemble all the data in a server to train a navigation agent for both seen environment training and unseen environment pre-exploration.
This is not practical in reality since users may not want to share the data in their own house due to privacy concerns.
Privacy-preserving VLN requires the agent to protect the data privacy during both seen environment training and unseen environment pre-exploration while maintaining comparable navigation performance.
In this paper, we propose a novel Federated Vision-and-Language Navigation (FedVLN) framework, to address the aforementioned data privacy issues and improve the adaptation performance on unseen environments at the same time.
Specifically, in the seen environment training stage, as shown in Fig.~\ref{fig:fedvln}, we treat each house environment as a client. The client's local models (a VLN agent for navigation and a speaker for data augmentation) are trained on private local data, and then the model updates are sent to the server for model aggregation. No private data except the local model updates will be shared with the server, and there is no communication between clients.
During pre-exploration, we train the client models on seen and unseen environments simultaneously under the federated learning paradigm---the client models do partial model aggregation (language encoder only) and partial local adaptation, enabling better adaptation to the local visual environment while maintaining general language understanding.
Under our FedVLN framework, users do not need to share their data with any other party; thus, the privacy of training data and inference data is protected.
Our experiments on the Room-to-Room (R2R)~\cite{r2r} and Room-across-Room (RxR) \cite{rxr}\footnote{We conduct experiments on the English data of the RxR dataset.} datasets validate the effectiveness of our framework. Our federated learning framework achieves comparable results to centralized training while preserving data privacy. More importantly, in the pre-exploration stage, we show that centralized pre-exploration hinders the agent from adapting to each specific environment, and our federated pre-exploration method achieves the best performance among prior pre-exploration methods such as centralized~\cite{RCM,Envdrop} and environment-based pre-exploration~\cite{APS}. Our contributions are three-fold:
\begin{itemize}
\item We are the first to discuss data privacy concerns for vision-and-language navigation and define the privacy-preserving embodied AI problem for the two learning stages in VLN.
\item We propose a novel federated learning framework for privacy-preserving VLN to ensure that users do not need to share their data with any party.
\item Extensive results on R2R and RxR show that our federated learning framework not only achieves comparable results with centralized training, but also outperforms centralized and environment-based pre-exploration methods.
\end{itemize}
\section{Related Work}
\subsection{Vision-and-Language Navigation}
With the development of deep learning and the vision of more helpful AI agents, embodied AI has become an emerging research area. Vision-and-language navigation (VLN)~\cite{r2r,rxr,jain-etal-2019-room-for-room,reverie,zhu-etal-2022-diagnosing} is one of the most popular embodied AI tasks, in which an embodied agent learns to navigate to a goal location in indoor environments following a language instruction and given dynamic visual information. Anderson et al.~\cite{r2r} first propose an LSTM-based sequence-to-sequence model for navigation. For better understanding of vision-and-language information, there are works on vision-and-language pre-training~\cite{Hao_2020_CVPR_prevalent,oscar,rec-vlnbert,Qi_2021_ICCV_object,Guhur_2021_ICCV_airbert} and model structures~\cite{rec-vlnbert,chen2021hamt,Gao_2021_CVPR_room-and-Object,xiang-etal-2020-learning}. Reinforcement learning and navigation planning methods are also introduced into VLN to make better action decisions~\cite{RCM,Wang_2018_ECCV_look_before_you_leap,Krantz_2021_ICCV_waypoint,chasing_ghosts}. Limited labeled data is another bottleneck for training better models. To this end, Fried et al.~\cite{Fried2018SpeakerFollowerMF} propose a speaker-follower model, in which a trained speaker can generate pseudo instructions for sampled paths. Further, to mitigate the gap between seen and unseen environments, pre-exploration is proposed~\cite{RCM,Envdrop,APS}, which lets the agent learn and adapt to new environments after deployment.
However, most current research ignores practicality in real-life application scenarios, especially data privacy issues. Fu et al.~\cite{APS} consider the implementation problem of pre-exploration and propose environment-based pre-exploration, but they do not consider the privacy of training data. Also, as we show, environment-based pre-exploration may suffer from data scarcity and data bias.
\subsection{Privacy-preserving Machine Learning}
Over the years, researchers have proposed many methods~\cite{FasterCrypto,PATE,fedavg,texthide} to address different data privacy problems~\cite{ModelInversion,MembershipInfernece,PropertyInference,MembershipInferenceMT} in machine learning.
First, during the training stage, if the training data come from different parties, sharing the data with other parties might lead to privacy concerns. At the inference stage, there are multiple privacy attacks, especially in the scenario of Machine Learning as a Service (MLaaS), in which cloud providers offer model inference hosted on the cloud~\cite{FasterCrypto}. For example, a membership inference attack~\cite{MembershipInfernece} can judge whether a specific data sample exists in the training data, and a model inversion attack~\cite{ModelInversion,SecretRevealer} aims to infer training data given white-box or black-box access to the model. Also, in MLaaS, users might not be willing to directly upload their data to the cloud server~\cite{FasterCrypto}. Facing these privacy problems for training data and inference data, researchers propose many privacy-preserving methods, including federated learning, homomorphic encryption, differential privacy, etc.~\cite{fedavg,SHE,PATE,Li_2021_CVPR}.
However, most of this work focuses on single-modality tasks and static data, and seldom studies the data privacy of embodied AI. In embodied AI tasks like vision-and-language navigation, the data involves more human-robot interaction and more complex private information, such as corresponding language-image pairs and dynamic visual information in indoor environments. VLN also has a unique training stage, pre-exploration. Both of these may make the privacy problems and solutions for VLN more complex. In our work, we elaborate on privacy-preserving VLN training scenarios and propose a solution.
\subsection{Federated Learning}
Federated learning~\cite{fedavg} is a technique that allows client models to train locally and then be sent to the central server for model aggregation. In this way, the clients do not need to send their sensitive data to any party, so the privacy of training data is protected. The first federated learning algorithm~\cite{fedavg} uses a weighted sum to aggregate clients' models. Later, researchers propose different federated learning algorithms for heterogeneous data distributions and personalization~\cite{Li_2021_CVPR,ShareRep,HsuECCV2020Real-World,Personalized_Cross-Silo}. In particular, Collins et al.~\cite{ShareRep} propose to keep the classification head local for personalization. Compared with our framework, they aim to solve the problem of label heterogeneity and learn a general data representation, and their setting has no distinction between validation data and training data. Reddi et al.~\cite{reddi2021adaptive} summarize these first-order aggregation methods into one framework, {\large F}ED{\large O}PT, whose server aggregation is:
\begin{equation}
\begin{aligned}
w_{t+1} = {\rm {\large S}ERVER{\large O}PT}(w_{t}, -\Delta w_{t}, \eta, t)
\end{aligned}
\end{equation}
where ${\rm SERVEROPT}$ is the aggregation algorithm and $\eta$ is the server learning rate.
Application-wise, the federated learning framework has been used on various tasks in computer vision~\cite{Guo_2021_CVPR_Multi-Institutional,HsuECCV2020Real-World} and natural language processing~\cite{lu2021federated,huang-etal-2020-federated}. Recently, there are also some works on federated learning for multi-modal machine learning~\cite{Liu_Wu_Ge_Fan_Zou_2020_fed_ground,zhao2022multimodal}. Zhao et al.~\cite{zhao2022multimodal} try horizontal federated learning (FL), vertical FL, and federated transfer learning on different multi-modal tasks and datasets, and Liu et al.~\cite{Liu_Wu_Ge_Fan_Zou_2020_fed_ground} use semi-supervised FL to extract hidden representations across modalities. However, the tasks they discuss do not involve embodied agents serving individual users. In vision-and-language navigation, the training paradigm differs from the aforementioned tasks: there are two training stages with different objectives and scenarios. To address this, we propose a novel Federated Vision-and-Language Navigation (FedVLN) framework.
\section{Privacy-preserving Vision-and-Language Navigation}
\subsection{Vision-and-Language Navigation (VLN)} \label{background}
The goal of the VLN task is to navigate from a given location to a destination following a natural language instruction. The task can be formally defined as follows: given a language instruction $U = \{u_{1}, u_{2}, ..., u_{n}\}$, at each step the agent receives current visual information $v_{t}$ as input and needs to choose an action $a_{t}$ based on the instruction $U$, the current/history visual information $\{v_{\tau}\}_{\tau=1}^{t}$, and the history actions $\{a_{\tau}\}_{\tau=1}^{t-1}$. The agent's state, which consists of the agent's navigation history and current spatial location, changes according to the agent's action. The navigation terminates when the agent chooses a `stop' action. The environments that contain labeled training data are seen environments; there are also unseen environments that contain no training data and are invisible during training.
\noindent\textbf{VLN agents~} In general, VLN agents consist of a language encoding module to understand the instruction, a trajectory encoder to encode visual observation and actions,
and a multimodal decision module to jointly process multi-modal information including encoded language information $L_{enc}$, visual information $V_{enc}$, and action information $A_{enc}$ and predict the next action $a_{t}$:
\begin{gather}
L_{enc} = E_{L}(u_{1}, u_{2},...,u_{n})\\
V_{enc}, A_{enc} = E_{T}(v_{1}, v_{2},...,v_{t}, a_{1}, a_{2},...,a_{t-1}) \label{trajectory encoder}\\
a_{t} = M(L_{enc}, V_{enc}, A_{enc})
\end{gather}
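The modular pipeline above can be sketched as follows; this is a toy illustration with hand-made stand-in features (the encoder bodies are hypothetical, not any particular published architecture, which would use LSTM/Transformer encoders and an attention-based policy head):

```python
from typing import List

def language_encoder(instruction: List[str]) -> List[float]:
    # hypothetical feature: number of words and total character count
    return [float(len(instruction)), float(sum(len(u) for u in instruction))]

def trajectory_encoder(views: List[List[float]], actions: List[int]) -> List[float]:
    # mean-pool the visual observations and append the action-history length
    pooled = [sum(col) / len(views) for col in zip(*views)] if views else []
    return pooled + [float(len(actions))]

def decision_module(l_enc: List[float], t_enc: List[float], num_actions: int = 4) -> int:
    # jointly score the multimodal features and pick the next action a_t
    return int(sum(l_enc) + sum(t_enc)) % num_actions

# one decision step on dummy inputs
a_t = decision_module(language_encoder(["walk", "left"]),
                      trajectory_encoder([[1.0, 2.0]], [0]))
assert 0 <= a_t < 4
```

At navigation time this decision step is repeated, feeding the chosen action and the new observation back into the trajectory encoder, until a `stop' action is produced.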
\noindent\textbf{Speaker-based data augmentation~}
To tackle the problem of data scarcity, Fried et al.~\cite{Fried2018SpeakerFollowerMF} propose a back-translation speaker model which can generate corresponding instructions $U$ from the visual information and action sequence of sampled routes in the environment:
\begin{equation}
U = Speaker(v_{1}, v_{2},...,v_{t}, a_{1}, a_{2},...,a_{t})
\end{equation}
The speaker is trained on the original labeled route-instruction pairs: it takes the visual and action information of routes as input and predicts the instructions. The generated pseudo instructions, along with the sampled routes, serve as augmented training data for better agent learning.
\noindent\textbf{Pre-exploration~}
After training on seen environments and being deployed in unseen environments, the agent can adapt to the new environment via pre-exploration~\cite{APS,RCM,Envdrop}. Variants of pre-exploration include self-imitation learning~\cite{RCM}, graph-based methods~\cite{spacial_route_prior,Chen_2021_CVPR_topological}, etc. In our work, we consider the paradigm of sampling routes $R^{'}$ from the new environment and generating instructions $I^{'}$ using the trained speaker mentioned above. The agent can then be trained on the new environment using the sampled routes and pseudo instructions $(R^{'},I^{'})$.
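This sampling-and-labeling paradigm can be sketched as follows, where the `speaker` callable stands for a trained speaker model (a hypothetical interface for illustration):

```python
import random

def pre_exploration_data(routes, speaker, k, seed=0):
    """Sample k routes from the new environment and label each with a pseudo
    instruction from the trained speaker, yielding (R', I') adaptation pairs."""
    sampled = random.Random(seed).sample(routes, k)
    return [(route, speaker(route)) for route in sampled]

# toy usage: routes are opaque ids, the "speaker" just stringifies them
pairs = pre_exploration_data(list(range(100)), speaker=str, k=5)
assert len(pairs) == 5 and all(instr == str(route) for route, instr in pairs)
```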
\subsection{Privacy-preserving VLN}
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{private_pre_explore.pdf}
\caption{Comparison between different pre-exploration strategies on performance-privacy trade-off. Federated pre-exploration achieves the best navigation performance while maintaining good inference data privacy protection.}
\label{private pre explore}
\end{figure}
Considering that the data may contain sensitive information, users may have different levels of concern about the privacy of their data. In our work, we consider the case where users do not want their data to be directly shared with the server (e.g., companies) or any other party. Based on this, we define the privacy-preserving vision-and-language navigation setting for the two training stages: seen environment training and unseen environment pre-exploration. For seen environment training, which includes the training of the navigation agent and the speaker model as well as the data augmentation process, no labeled data within a house environment may be directly shared with the server or any other client, to prevent the leak of private information. Our primary purpose here is to train a model that generalizes well to unseen environments; thus, we need to utilize all the data indirectly to train one model.
For pre-exploration, the unlabeled data in unseen environments also cannot be shared with others. However, the purpose of this stage is to adapt the model to a specific environment. Thus, training on the data of one environment (environment-based pre-exploration) might not be a bad choice; in fact, our experiments show that environment-based pre-exploration performs better than centralized pre-exploration. Nevertheless, as elaborated in Sec.~\ref{fed pre-exploration}, we can indirectly utilize all the data in pre-exploration to boost performance and preserve privacy. As shown in Fig.~\ref{private pre explore}, we aim to achieve the best performance-privacy trade-off in pre-exploration.
\section{The FedVLN Approach}
\begin{figure}[t]
\centering
\subfigure[Decentralized Federated Training]{
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{stage1.pdf}
\label{decentralized fig}
\end{minipage}
}
\subfigure[Federated Pre-exploration]{
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{stage2.pdf}
\label{fed pre-explore fig}
\end{minipage}
}
\caption{The FedVLN framework. In the first stage (a), agents in seen environments will be trained on local data and upload the model updates to the server for aggregation (AGG), then download the global model from the server. In the second stage (b), all the agents in seen and unseen environments join the federated training. During local training, all the modules will be optimized, while only the language encoder will be uploaded/downloaded.}
\label{fedvln fig}
\end{figure}
We propose a Federated Vision-and-Language Navigation framework (FedVLN), as shown in Fig.~\ref{fedvln fig}, in which users' data are kept locally during both training and pre-exploration. In this section, we introduce our FedVLN framework for the two training stages: decentralized federated training and federated pre-exploration. In decentralized federated training, each environment has a local agent, which is trained on local data and then uploaded to the server. The global model on the server is updated by aggregating the local model updates and sent back to all the environments. In federated pre-exploration, to enable the agent both to adapt to the new environment and to maintain its ability to understand language, only the language encoder is shared with the server after local training, instead of the full model. All the agents from seen and unseen environments join the federated pre-exploration process.
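The partial sharing used in federated pre-exploration can be sketched as follows, with models as dictionaries from parameter names to flat lists; the `language_encoder.` name prefix is a hypothetical convention for marking the shared submodule:

```python
def aggregate_shared(client_models, client_sizes, shared_prefix="language_encoder."):
    """Average only the shared submodule across clients; every parameter whose
    name lacks the prefix stays local, so the visual and decision modules
    remain adapted to their own environments."""
    total = sum(client_sizes)
    shared = {}
    for name in client_models[0]:
        if not name.startswith(shared_prefix):
            continue
        dim = len(client_models[0][name])
        shared[name] = [
            sum((n / total) * m[name][k] for m, n in zip(client_models, client_sizes))
            for k in range(dim)
        ]
    return shared  # broadcast back; each client keeps its other modules as-is

clients = [{"language_encoder.w": [1.0], "policy.w": [0.0]},
           {"language_encoder.w": [3.0], "policy.w": [9.0]}]
agg = aggregate_shared(clients, client_sizes=[1, 1])
assert agg == {"language_encoder.w": [2.0]}  # policy weights are never averaged
```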
\subsection{Decentralized Federated Training}
\noindent\textbf{Original training data~} When training on the original data, we first divide the VLN dataset by environment. We treat each environment as a client and assign a local navigation agent $w_{i}^{0}$ to each environment, initialized the same as the global navigation agent $w^{0}$. At each communication round between the clients and the server, a certain percentage of clients are randomly selected for training, and the local agent on each selected client is trained for a certain number of epochs on its own data $d_{i}$:
\begin{equation}
w_{i}^{t} = {\rm ClientUpdate}(w^{t-1}, d_{i})
\end{equation}
where ${\rm ClientUpdate}$ denotes the local training process. Each selected client then sends its model update $\Delta w_{i}^{t} = w_{i}^{t} - w^{t-1}$ to the server, and the server aggregates all the updates with a server learning rate $\eta$:
\begin{equation} \label{server aggregation}
w^{t} = w^{t-1} + \eta \sum_{i\in \phi_{t}}\frac{n_{i}}{\sum_{j\in \phi_{t}}n_{j}} \Delta w_{i}^{t}
\end{equation}
Here the weight of each local model, $\frac{n_{i}}{\sum_{j\in \phi_{t}}n_{j}}$, is the proportion of the user's samples in the total training samples of this communication round.
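The weighted aggregation above amounts to the following server-side update, sketched with models as flat parameter lists:

```python
def server_aggregate(global_w, deltas, sizes, eta=1.0):
    # w^t = w^{t-1} + eta * sum_i (n_i / sum_j n_j) * delta_i
    total = sum(sizes)
    return [w + eta * sum((n / total) * d[k] for d, n in zip(deltas, sizes))
            for k, w in enumerate(global_w)]

# two clients holding 1 and 3 samples get weights 0.25 and 0.75
new_w = server_aggregate([0.0, 0.0], deltas=[[1.0, 2.0], [3.0, 4.0]], sizes=[1, 3])
assert new_w == [2.5, 3.5]
```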
\noindent\textbf{Augmentation~}
For data augmentation, we assign each client a local speaker. Following the federated learning paradigm above and the speaker training procedure from Sec.~\ref{background}, at each communication round, the speaker on each selected client is trained on the labeled route-instruction pairs in its environment.
The best global speaker during training (according to BLEU score) is sent to all clients. Each client then uses the speaker to generate pseudo instructions $I_{i}^{aug}$ for routes sampled within its environment. The augmented training of the agent also follows the federated training process above, except that the local data is the combination of the original and augmented data $\{(d_{i}, d_{i}^{aug})\}_{i=1}^{n}$.
Notice that throughout the whole training process, including original data training, speaker training, and augmented data training, no client shares its data with other clients or the server. Thus the privacy of the training data is preserved.
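As a toy end-to-end illustration of one communication round (client sampling at rate $r$, $\tau$ local epochs on private data, and weighted aggregation of deltas), the sketch below replaces the navigation agent with a single scalar parameter and local training with a toy gradient step; all client names and data are hypothetical.

```python
# Toy simulation of one FedVLN communication round. Only model deltas
# leave each client; the raw (possibly speaker-augmented) local data
# never does.
import random

def local_update(w, data, tau, lr=0.1):
    """Toy local training: tau epochs pulling w toward each local sample."""
    for _ in range(tau):
        for x in data:
            w += lr * (x - w)
    return w

def run_round(global_w, client_data, r, tau, server_lr=1.0, seed=0):
    rng = random.Random(seed)
    selected = rng.sample(sorted(client_data), max(1, int(r * len(client_data))))
    deltas, sizes = [], []
    for cid in selected:                 # each client trains on its own data
        w_i = local_update(global_w, client_data[cid], tau)
        deltas.append(w_i - global_w)    # only the update is uploaded
        sizes.append(len(client_data[cid]))
    total = sum(sizes)
    return global_w + server_lr * sum(d * n / total for d, n in zip(deltas, sizes))

clients = {"env_a": [1.0, 1.2], "env_b": [0.8], "env_c": [2.0, 2.2, 1.8]}
w1 = run_round(0.0, clients, r=0.4, tau=3)
```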
\begin{figure}[t]
\centering
\begin{minipage}{0.36\textwidth}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{lrr}
\toprule
\textbf{Statistics} & \textbf{GT} & \textbf{Speaker} \\
\midrule
Length & 29.58 & 21.89 \\
Var(Length) & 155.70 & 20.88 \\
NoS & 2.44 & 2.42 \\
Var(NoS) & 1.21 & 0.47 \\
\bottomrule
\end{tabular}
}
\captionof{table}{Comparison between ground-truth (GT) and speaker generated instructions on seen validation. NoS means the average number of sentences.}
\label{instr statistics}
\end{minipage}
\begin{minipage}{0.58\textwidth}
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[height = 3.2cm]{gt_verb_frequency.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[height = 3.3cm]{pseudo_verb_frequency.pdf}
\end{minipage}
\caption{Comparison of verb frequency between ground truth instructions and generated pseudo instructions. }
\label{verb freq}
\end{minipage}
\end{figure}
\subsection{Federated Pre-exploration~} \label{fed pre-exploration}
Pre-exploration allows the agent to explore a newly deployed environment and update itself with the new information. From the perspective of data privacy, centralized pre-exploration is impractical here, since it assumes one navigation agent can access data from all the unseen environments. Fu et al.~\cite{APS} proposed environment-based pre-exploration, which lets each agent train on only one environment; no data is shared with other parties, so the data privacy of unseen environments is preserved. From the performance point of view, centralized training should yield better generalizability, since the agent is trained on data from all the environments; however, training in all the environments may hinder the agent from adapting well to any specific one. With environment-based pre-exploration, the agent can focus on one specific environment, but the limited amount of data and the data bias within a single environment may lead to a less generalized agent.
Furthermore, as shown in Table~\ref{instr statistics} and Fig.~\ref{verb freq}, the instructions generated by the speaker differ significantly from human-generated instructions in their statistics, and their language pattern is much simpler than human language. Since current methods train only on augmented data with speaker-generated instructions during pre-exploration, the agent may suffer from the large distribution shift between the instructions in the augmented data and those in the validation data, and thus fail to understand validation instructions well. This problem can be even worse for environment-based pre-exploration, since the data for each agent is smaller in amount and comes from a single environment.
What is more, according to prior research~\cite{diagnosingEnv}, the agent performs better on seen paths and environments. The best solution is therefore to maintain generalizable language understanding while adapting to a specific visual environment. To this end, as in Fig.~\ref{fed pre-explore fig}, we propose federated pre-exploration. In federated pre-exploration, the server maintains only a global language encoder, initialized with the global encoder obtained after decentralized federated VLN training. During each communication round, the server sends the global language encoder $E_{L}^{t-1}$ to the selected clients. Each selected client then updates its language encoder with $E_{L}^{t-1}$ and trains the full agent on its local data:
\begin{equation}
E_{L,i}^{t}, E_{T,i}^{t}, M_{i}^{t} = {\rm ClientUpdate}(E_{L,i}^{t-1}, E_{T,i}^{t-1}, M_{i}^{t-1}, \tau, \lambda)
\end{equation}
After local training, the client sends only the language encoder update $\Delta E_{L,i}^{t}$ to the server for aggregation, as in lines 9 and 11 of Alg.~\ref{pre explore alg}. In this way, the language encoder is jointly updated on data from all the participating environments, and thus becomes more generalized. Meanwhile, to further improve the generalizability of the language encoder, we randomly sample a fraction of seen environments at each communication round, whose agents also follow the training process above.
The trajectory encoding module $E_{T,i}$ and the multi-modal decision module $M_{i}$ keep training locally, which helps local agents adapt to their own environments. For validation, we use the local models after local training. The whole training procedure is given in Alg.~\ref{pre explore alg}.
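The encoder-only sharing can be sketched as a filter over a parameter dictionary: only names under the language-encoder prefix are exchanged with the server, while trajectory-encoder and decision-module parameters never leave the client. The prefix and parameter names below are illustrative, not the actual module names.

```python
# Partial model sharing for federated pre-exploration: split a client
# state dict into a shared (language encoder) part and a local part.
LANG_PREFIX = "lang_enc."

def split_shared(state):
    """Return only the language-encoder parameters of a client state dict."""
    return {k: v for k, v in state.items() if k.startswith(LANG_PREFIX)}

def merge_shared(local_state, global_lang):
    """Overwrite a client's language encoder with the global one,
    leaving trajectory-encoder and decision-module weights untouched."""
    merged = dict(local_state)
    merged.update(global_lang)
    return merged

client = {"lang_enc.w": 1.0, "traj_enc.w": 5.0, "decision.w": 7.0}
server_lang = {"lang_enc.w": 2.0}
updated = merge_shared(client, server_lang)
# updated keeps traj_enc.w and decision.w local, lang_enc.w comes from server
```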
\begin{algorithm} [t]
\caption{Federated Pre-exploration}
\label{pre explore alg}
\begin{algorithmic}[1]
\STATE Parameters: Seen participation rate $r_{1}$, unseen participation rate $r_{2}$; local learning rate $\lambda$; server learning rate $\eta$; number of communication rounds $T$; number of seen environments $n$; number of unseen environments $m$; local training epochs $\tau$.
\STATE Initialize: $E_{L,i}^{0} = E_{L}^{0}$, $E_{T,i}^{0} = E_{T}^{0}$, $M_{i}^{0} = M^{0}$, for $i \in \{1,2,\ldots,n+m\}$
\FOR{t in [1,T]}
\STATE Server samples $r_{1}n$ seen environments and $r_{2}m$ unseen environments as $\phi_{t}$\
\STATE Server sends global language encoder $E_{L}^{t-1}$ to selected environments
\FOR{client in $\phi_{t}$}
\STATE Client updates language encoder: $E_{L,i}^{t-1}=E_{L}^{t-1}$
\STATE Client local training: $E_{L,i}^{t}, E_{T,i}^{t}, M_{i}^{t} = {\rm ClientUpdate}(E_{L,i}^{t-1}, E_{T,i}^{t-1}, M_{i}^{t-1}, \tau, \lambda)$
\STATE Client uploads language encoder update $\Delta E_{L,i}^{t}=E_{L,i}^{t}-E_{L}^{t-1}$ to the server
\ENDFOR
\STATE Server updates global language encoder: $E_{L}^{t} = E_{L}^{t-1} + \eta \sum_{i \in \phi_{t}} \frac{n_{i}}{\sum_{j\in \phi_{t}}n_{j}} \Delta E_{L,i}^{t}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\section{Experimental Setup}
\subsection{Datasets}
We implement our federated learning framework on two datasets: Room-to-Room (R2R)~\cite{r2r} and Room-across-Room (RxR)(en)~\cite{rxr}. Both datasets are developed on the Matterport3D Simulator~\cite{r2r}, a photorealistic 3D environment for embodied AI research.
\noindent\textbf{R2R~\cite{r2r}} is constructed by generating the shortest paths between sampled start and end points, then collecting three associated navigation instructions for each path using Amazon Mechanical Turk (AMT). The dataset contains 7,189 paths from 90 environments, each path with 3 instructions. The environments are split into 61 for training and seen validation, 11 for unseen validation, and 18 for testing. The environments in the unseen validation and test sets do not appear among the training environments.
\noindent\textbf{RxR~\cite{rxr}} is proposed to mitigate shortcomings of prior VLN datasets. Specifically, it is a large-scale dataset with multilingual instructions, containing 16,522 paths and 126,069 instructions, of which 42,002 are in English. RxR also ensures spatiotemporal alignment between instructions, visual percepts, and actions for agent training, and it samples arbitrary point-to-point paths (not necessarily shortest paths) to avoid data bias.
\subsection{Evaluation Metrics}
For both datasets, we report Success Rate (SR), Success rate weighted by Path Length (SPL), Oracle Success Rate (OSR), and Navigation Error (NE) as goal-oriented metrics. SR is the percentage of episodes in which the agent stops within 3 meters of the end point. SPL~\cite{spl} is success weighted by normalized inverse path length, which considers both navigation effectiveness and efficiency. OSR is the percentage of episodes in which the agent visits a point within 3 meters of the end point. NE is the average distance between the agent's final location and the end point. We also report Coverage weighted by Length Score (CLS)~\cite{jain-etal-2019-stay} and normalized Dynamic Time Warping (nDTW)~\cite{ndtw} to validate the fidelity of navigation paths; both penalize deviation from the reference path. SR and SPL are often considered the primary metrics for VLN evaluation.
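The two primary metrics can be made concrete with a short sketch. The definitions below follow the standard formulations (SPL as introduced by Anderson et al.); the episode values are hypothetical.

```python
# SR: fraction of episodes whose final position is within 3 m of the goal.
# SPL: success additionally discounted by path efficiency,
#      (1/N) * sum_i S_i * l_i / max(p_i, l_i), where l_i is the shortest-path
#      length to the goal and p_i is the agent's actual path length.

def success_rate(final_dists, threshold=3.0):
    return sum(d <= threshold for d in final_dists) / len(final_dists)

def spl(final_dists, shortest_lens, agent_lens, threshold=3.0):
    total = 0.0
    for d, l, p in zip(final_dists, shortest_lens, agent_lens):
        if d <= threshold:
            total += l / max(p, l)   # efficiency-weighted success
    return total / len(final_dists)

# Two hypothetical episodes: a slightly inefficient success and a failure.
sr = success_rate([1.0, 6.0])                      # 0.5
val = spl([1.0, 6.0], [10.0, 8.0], [12.5, 8.0])    # 0.8 / 2 = 0.4
```

Since SPL never exceeds SR, a small SR-SPL gap (as in the tables below) indicates that successful navigations are also efficient.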
\subsection{Baselines}
In this work we do not consider privacy during pre-training, and pre-training on VLN data itself infringes on data privacy. We therefore choose two strong VLN baselines without VLN pre-training for our experiments:
\begin{enumerate}
\item \textbf{Envdrop}~\cite{Envdrop}: the environment dropout model uses a Bi-directional LSTM as the language encoder, an attentive LSTM as the action decoder, and a mixed learning objective of imitation learning and reinforcement learning.
\item \textbf{CLIP-ViL}~\cite{CLIP_on_vl}: the CLIP-ViL model adapts CLIP~\cite{clip} visual encoder to improve vision and language encoding and matching for vision-and-language navigation.
\end{enumerate}
\subsection{Implementation Details}
When training on seen environments, all models are trained until convergence. At each communication round, we use a participation rate of $r=0.2$ and train each local agent for $\tau=3$ epochs on local data for the Envdrop model and $\tau=5$ for the CLIP-ViL model.\footnote{The discussion and ablation study of local training epochs are in the appendix.} For federated speaker training, we set the local epochs to $\tau=5$ and select the best model on seen validation data according to BLEU score to generate instructions.
During pre-exploration, we test participation rates of $r_{2}\in\{0.5,0.6,0.7\}$ for unseen environments, and we train each agent for $\tau=1$ epoch over its unseen local dataset, as the model converges quickly. When training across seen and unseen environments, we use a participation rate of $r_{1}=0.18$ for seen environments. To validate the effectiveness of our framework, we use the federated-trained speaker to generate pseudo instructions and initialize from the federated-trained navigation agent for centralized, environment-based, and federated pre-exploration.
\section{Results}
\subsection{Decentralized Federated Training} \label{results seen training}
\begin{table}[t]
\centering
\setlength{\abovecaptionskip}{8pt}
\setlength{\belowcaptionskip}{8pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|cccccc|cccccc}
\toprule
\multirow{2}*{\textbf{Model}} &
\multicolumn{6}{c|}{\textbf{Val-Seen}} & \multicolumn{6}{c}{\textbf{Val-Unseen}}\\
\cmidrule(lr){2-7}\cmidrule(lr){8-13}
& NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ & NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ \\
\midrule
Envdrop & 4.71 & 65.6 & 53.2 & 56.1 & 66.8 & 55.0
& 5.87 & 52.7 & 40.9 & 44.5 & 57.1 & 42.3 \\
FedEnvdrop & 5.20 & 60.2 & 48.3 & 51.2 & 64.3 & 51.2
& 5.52 & 55.5 & 43.9 & 47.5 & 59.2 & 45.7\\
\hdashline[1pt/2pt]
Envdrop$\rm _{aug}$ & 3.93 & 71.6 & 61.0 & 64.1 & 71.4 & 61.3
& 5.36 & 57.7 & 45.8 & 49.9 & 59.3 & 45.3 \\
FedEnvdrop$\rm _{aug}$ & 4.14 & 70.4 & 58.7 & 62.0 & 69.8 & 59.2 & 5.22 & 58.5 & 47.5 & 51.3 & 60.8 & 47.0 \\
\midrule
CLIP-ViL & 4.07 & 70.7 & 57.9 & 62.9 & 67.7 & 55.8
& 5.02 & 63.1 & 47.5 & 53.6 & 58.1 & 44.5\\
FedCLIP-ViL & 4.13 & 66.5 & 57.1 & 60.6 & 68.0 & 55.6
& 4.91 & 61.1 & 49.0 & 53.4 & 60.8 & 47.8 \\
\hdashline[1pt/2pt]
CLIP-ViL$\rm _{aug}$ & 3.52 & 75.0 & 61.7 & 66.8 & 69.3 & 58.6
& 4.59 & 67.4 & 50.7 & 57.0 & 59.2 & 46.4 \\
FedCLIP-ViL$\rm _{aug}$ & 3.69 & 71.4 & 60.1 & 64.6 & 68.9 & 57.2 & 4.53 & 65.6 & 51.7 & 56.9 & 61.0 & 48.3\\
\bottomrule
\end{tabular}
}
\caption{R2R results of seen environment training. Envdrop is the centralized Envdrop model, and FedEnvdrop is the federated Envdrop model. $\rm Envdrop_{aug}$ means the Envdrop model trained with augmented data. Our decentralized federated training outperforms centralized training with Envdrop and achieves comparable results with CLIP-ViL on unseen environments.
}
\label{seen r2r}
\end{table}
\begin{table}[t]
\centering
\setlength{\abovecaptionskip}{8pt}
\setlength{\belowcaptionskip}{8pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|cccccc|cccccc}
\toprule
\multirow{2}*{\textbf{Model}} &
\multicolumn{6}{c|}{\textbf{Val-Seen}} & \multicolumn{6}{c}{\textbf{Val-Unseen}}\\
\cmidrule(lr){2-7}\cmidrule(lr){8-13}
& NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ & NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ \\
\midrule
Envdrop & 7.97 & 51.6 & 38.0 & 40.7 & 58.8 & 54.0 & 8.42 & 45.1 & 31.8 & 35.0 & 55.6 & 50.6 \\
FedEnvdrop & 8.30 & 53.3 & 35.9 & 39.7 & 57.2 & 51.8 & 8.58 & 47.4 & 30.2 & 34.4 & 55.4 & 48.8 \\
\midrule
CLIP-ViL & 6.92 & 56.8 & 42.3 & 46.5 & 60.0 & 56.2 & 7.38 & 50.6 & 34.9 & 39.5 & 55.6 & 51.2 \\
FedCLIP-ViL & 7.31 & 52.0 & 39.7 & 43.5 & 59.4 & 55.1 & 7.41 & 48.8 & 35.0 & 39.2 & 57.5 & 53.1 \\
\bottomrule
\end{tabular}
}
\caption{RxR results of seen environment training. Decentralized federated training obtains comparable results with centralized training on unseen environments (e.g., only 0.1\% SPL difference with the CLIP-ViL model).}
\label{seen rxr}
\end{table}
\noindent\textbf{Original data training}
In Table~\ref{seen r2r} and Table~\ref{seen rxr}, we report the results for seen environment training on R2R and RxR datasets for both baselines.
First, federated learning performs worse than centralized training on seen environments, with an average SR gap of 2.8\%. This is reasonable: centralized training can easily overfit the seen training data for better performance on seen environments, while under federated learning, because of the decentralized optimization over protected local data, the global model cannot overfit the seen environments as much.
The performance on unseen environments tests the generalization ability of VLN models and is used for VLN evaluation.
As shown in Table~\ref{seen r2r} and Table~\ref{seen rxr}, on unseen environments, decentralized federated training achieves comparable results with centralized training on original data training across different VLN models. For example, FedEnvdrop performs better than Envdrop on R2R and nearly the same on RxR, and FedCLIP-ViL obtains comparable results with CLIP-ViL on both R2R and RxR.
Thus, in terms of generalization ability, our decentralized federated training method is comparable with centralized training while protecting the training data privacy.
\begin{table}[t]
\centering
\setlength{\abovecaptionskip}{8pt}
\setlength{\belowcaptionskip}{8pt}
\begin{tabular}{lcc}
\toprule
\textbf{Speaker} & \textbf{Val-seen} & \textbf{Val-unseen} \\
\midrule
CLIP Cent & 33.5 & 30.2 \\
CLIP Fed & 31.3 & 31.6 \\
\hdashline[1pt/2pt]
ResNet Cent & 33.6 & 30.7 \\
ResNet Fed & 31.7 & 31.9 \\
\bottomrule
\end{tabular}
\caption{Comparison of BLEU score between federated speaker and centralized speaker based on the CLIP encoder and the ResNet encoder on R2R.}
\label{speaker bleu}
\end{table}
\noindent\textbf{Augmented data training}
First, the performance of the speaker model determines the quality of the augmented data, and thus influences the performance of both augmented training and pre-exploration. Table~\ref{speaker bleu} shows that the federated speaker scores 2.05 BLEU lower than the centralized speaker on seen validation but 1.3 BLEU higher on unseen validation, which aligns well with the navigation results. Thus the federated-trained speaker is comparable with the centralized speaker for instruction generation.
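For reference, BLEU can be sketched as clipped n-gram precision with a brevity penalty. The version below is deliberately simplified (single reference per hypothesis, BLEU-2); actual evaluations use a standard implementation such as NLTK or sacrebleu, and the example instruction is hypothetical.

```python
# Simplified corpus-level BLEU: geometric mean of clipped n-gram
# precisions times a brevity penalty.
import math
from collections import Counter

def ngrams(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu(references, hypotheses, max_n=2):
    """references / hypotheses: parallel lists of token lists (one ref each)."""
    log_prec = 0.0
    for n in range(1, max_n + 1):
        matched = total = 0
        for ref, hyp in zip(references, hypotheses):
            ref_counts, hyp_counts = ngrams(ref, n), ngrams(hyp, n)
            matched += sum(min(c, ref_counts[g]) for g, c in hyp_counts.items())
            total += max(len(hyp) - n + 1, 0)
        if matched == 0:
            return 0.0
        log_prec += math.log(matched / total) / max_n
    hyp_len = sum(len(h) for h in hypotheses)
    ref_len = sum(len(r) for r in references)
    bp = 1.0 if hyp_len >= ref_len else math.exp(1.0 - ref_len / hyp_len)
    return bp * math.exp(log_prec)

ref = ["walk", "past", "the", "sofa", "and", "stop"]
perfect = bleu([ref], [list(ref)])              # 1.0 for an exact match
partial = bleu([ref], [["walk", "the", "sofa"]])
```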
From Table~\ref{seen r2r}, when training on augmented data, the performance of federated learning is also comparable with centralized training on unseen environments, although the pseudo data generated by the federated speaker may be of lower quality according to the BLEU score on seen validation data. This shows that the federated agent can learn generalized navigation strategies even from lower-quality pseudo data.
\subsection{Federated Pre-exploration}
To validate the effectiveness of federated pre-exploration on unseen environments, we compare centralized pre-exploration, environment-based pre-exploration, and different federated pre-exploration methods: full model sharing (Fed-Full), sharing language encoder only (Fed-Lan), and sharing language encoder across seen and unseen environments (Fed-Lan+seen). Results are shown in Table~\ref{pre-pxplore table}.
\noindent\textbf{Navigation performance} For centralized pre-exploration and Fed-Full, in which one agent is optimized on data from all the environments, the agent cannot adapt well to each specific environment; for example, there is a gap of 3.1\% in SR between centralized and environment-based pre-exploration. When sharing only the language encoder during federated learning, the validation results improve significantly compared with full model sharing (e.g., 4.2\% on SR and 4.4\% on nDTW), since the agents can adapt better to each environment. Also, the generalization ability of the language encoder is better than in environment-based pre-exploration, since it is trained on more data across different environments. Thus, even under federated optimization, sharing only the language encoder achieves results similar to environment-based pre-exploration.
Federated pre-exploration with seen environments further improves performance by benefiting from human-labeled data, achieving around a 0.8\% SR improvement over environment-based pre-exploration. To sum up, our Fed-Lan+seen method achieves superior navigation performance in terms of both navigation success and path fidelity metrics.
\noindent\textbf{Degree of privacy} From the perspective of privacy preservation, environment-based pre-exploration is the best: nothing in the unseen environments is shared with others.
Centralized training is clearly the worst, as all observable data from unseen environments is shared directly with the server. Federated pre-exploration uploads only model updates to the server.
Among federated methods, sharing only the language encoder protects data privacy better than full model sharing: it shares only the updates of the language encoder, which accounts for just 24.6\% of the parameters, and keeps all other modules completely local.
Training with seen environments does not make the process less private, as the seen environments already shared their parameter updates with the server during the decentralized federated training process.
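A back-of-envelope check of a shared-parameter fraction like the 24.6\% figure can be done by counting parameters per module. The module names and tensor shapes below are hypothetical, not the actual FedVLN architecture.

```python
# Fraction of parameters exposed to the server when sharing only the
# modules under a given name prefix (here, a hypothetical language encoder).
from math import prod

def shared_fraction(shapes, prefix="lang_enc."):
    total = sum(prod(s) for _, s in shapes)
    shared = sum(prod(s) for name, s in shapes if name.startswith(prefix))
    return shared / total

shapes = [
    ("lang_enc.embed", (1000, 256)),
    ("lang_enc.lstm", (256, 1024)),
    ("traj_enc.lstm", (2176, 2048)),
    ("decision.attn", (512, 512)),
]
frac = shared_fraction(shapes)  # fraction of updates that leave the client
```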
\begin{table*}[t]
\centering
\setlength{\abovecaptionskip}{8pt}
\setlength{\belowcaptionskip}{8pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{llccccccl}
\toprule
Model & Method & NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ & \qquad \quad \; \; Privacy$\uparrow$\\ \midrule
\multirow{5}*{Envdrop}
& Centralized &3.89 &73.7 &61.7 &64.8 &71.5 &64.6 & 0 - sharing data\\
& Env-based &\underline{3.49} &\textbf{78.5} &\underline{64.0}& \underline{67.4}&\underline{73.2} &\underline{67.5} & \textbf{3} - no sharing\\
&Fed-Full & 3.96 & 74.1 & 59.1 & 62.4 & 70.9 & 63.3 & 1 - model sharing (100\%)\\
&Fed-Lan &3.52 & 77.6 & 63.6 & 67.2 & \underline{73.2} & \textbf{67.6} &\underline{2} - model sharing (24.6\%)\\
&Fed-Lan+seen &\textbf{3.47} &\underline{78.1} &\textbf{64.8} &\textbf{68.3} &\textbf{73.5} &67.3 & \underline{2} - model sharing (24.6\%)\\
\midrule
\multirow{5}*{CLIP-ViL}
& Centralized &3.70 &76.0 &60.8 &65.3 &70.5 &62.1 & 0 - sharing data\\
& Env-based &\underline{3.31} &\underline{79.2} &65.2 &68.9 &\textbf{74.4} &\textbf{69.3} & \textbf{3} - no sharing\\
&Fed-Full&3.68 & 74.9& 61.8& 65.8 & 70.5 & 61.9 & 1 - model sharing (100\%)\\
&Fed-Lan &3.33 &\underline{79.2} &\textbf{65.4} &\underline{69.1} &74.0 &68.3 &\underline{2} - model sharing (24.6\%)\\
&Fed-Lan+seen &\textbf{3.25} &\textbf{80.6} &\textbf{65.4} &\textbf{69.5} &\textbf{74.4} &\underline{68.9} &\underline{2} - model sharing (24.6\%)\\
\bottomrule
\end{tabular}
}
\caption{Comparison between different pre-exploration methods on R2R unseen validation. Fed-Full means full-model-sharing federated learning, Fed-Lan means sharing only the language encoder in federated learning, and Fed-Lan+seen means federated training with seen environments while sharing the encoder only.}
\label{pre-pxplore table}
\end{table*}
Overall, our federated pre-exploration method achieves a good performance-privacy trade-off. Centralized training is the worst in terms of both navigation ability and privacy protection. Environment-based pre-exploration best protects the privacy of unseen environment data. Federated pre-exploration achieves the best navigation results at little privacy cost, keeping all client data local and sharing only the language encoder updates with the server.
\section{Conclusion and Future Work}
In this paper, we study the data privacy problems in vision-and-language navigation with respect to two learning scenarios: seen environment training and unseen environment pre-exploration. We propose a novel Federated Vision-and-Language Navigation (FedVLN) framework to preserve data privacy in both learning stages while maintaining comparable navigation performance. Furthermore, we show that federated pre-exploration can even outperform all previous pre-exploration methods and achieves the best performance-privacy trade-off.
As the first work in this direction, we do not consider adversarial attacks that could potentially recover data information from shared local model updates; we believe future work can consider more embodied AI tasks and defend against such privacy attacks for stronger data security.
\\
\noindent\textbf{Acknowledgement}
We thank Jing Gu, Eliana Stefani, Winson Chen, Yang Liu, Hao Tan, Pengchuan Zhang, and anonymous reviewers for their valuable feedback.
This work is partially supported by the PI's UCSC start-up funding.
\clearpage
\bibliographystyle{splncs04}
\section{Introduction}
A long-term goal of AI research is to build embodied agents that can perceive the environment, communicate with humans, and perform real-world tasks to benefit human society.
However, since the agent interacts closely with humans and environments, it often receives sensitive information during training and inference.
For example, as shown in Fig.~\ref{centralized vln}, in the task of Vision-and-Language Navigation (VLN)~\cite{r2r}, where an agent learns to navigate towards a target location in an indoor environment given a natural language instruction, the training and inference data may include private information such as what the user's house looks like, what the user said, and what the user did. Data privacy is a central problem for building trustworthy embodied agents but has seldom been studied before~\cite{gu-etal-2022-vision}. Thus, in this work, we introduce privacy-preserving embodied agent learning for the task of vision-and-language navigation.
\begin{figure}[t]
\centering
\subfigure[Centralized VLN learning]{
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{centralized_vln.pdf}
\end{minipage}
\label{centralized vln}
}
\subfigure[Federated VLN learning]{
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{fed_vln.pdf}
\end{minipage}
\label{fig:fedvln}
}
\caption{Data privacy: centralized VLN learning \textit{vs.} our federated VLN learning. Existing VLN approaches centralize all the client data in a server, including house environments and user instructions, which ignores users' privacy concerns. Our federated VLN framework keeps client data local, and the server receives nothing other than local model updates, so client data privacy is preserved.}
\label{centalized vs fed}
\end{figure}
VLN models are typically trained on seen environments with ground-truth instruction-trajectory pairs and then deployed to unseen environments without any labeled data.
After deployment, the agent may explore the unseen environment and adapt to the new environment for better performance, which is known as pre-exploration.
However, as shown in Fig.~\ref{centralized vln}, most of the existing methods assemble all the data in a server to train a navigation agent for both seen environment training and unseen environment pre-exploration.
This is not practical in reality, since users may not want to share the data from their own houses due to privacy concerns.
Privacy-preserving VLN requires the agent to protect the data privacy during both seen environment training and unseen environment pre-exploration while maintaining comparable navigation performance.
In this paper, we propose a novel Federated Vision-and-Language Navigation (FedVLN) framework, to address the aforementioned data privacy issues and improve the adaptation performance on unseen environments at the same time.
Specifically, in the seen environment training stage, as shown in Fig.~\ref{fig:fedvln}, we treat each house environment as a client. The client's local models (a VLN agent for navigation and a speaker for data augmentation) are trained on private local data, and the model updates are then sent to the server for aggregation. No private data except the local model updates are shared with the server, and there is no communication between clients.
During pre-exploration, we train the client models on seen and unseen environments simultaneously under the federated learning paradigm: the client models perform partial model aggregation (language encoder only) and partial local adaptation, enabling better adaptation to the local visual environment while maintaining general language understanding.
Under our FedVLN framework, users do not need to share their data with any other party, thus the privacy of training data and inference data is protected.
Our experiments on the Room-to-Room (R2R)~\cite{r2r} and Room-across-Room (RxR)~\cite{rxr}\footnote{We conduct experiments on the English data of the RxR dataset.} datasets validate the effectiveness of our framework: federated learning achieves comparable results with centralized training while preserving data privacy. More importantly, in the pre-exploration stage, we show that centralized pre-exploration hinders the agent from adapting to each specific environment, and our federated pre-exploration method achieves the best performance among prior pre-exploration methods such as centralized~\cite{RCM,Envdrop} and environment-based pre-exploration~\cite{APS}. Our contributions are three-fold:
\begin{itemize}
\item We are the first to discuss data privacy concerns for vision-and-language navigation and define the privacy-preserving embodied AI problem for the two learning stages in VLN.
\item We propose a novel federated learning framework for privacy-preserving VLN, ensuring that users do not need to share their data with any party.
\item Extensive results on R2R and RxR show that our federated learning framework not only achieves comparable results with centralized training, but also outperforms centralized and environment-based pre-exploration methods.
\end{itemize}
\section{Related Work}
\subsection{Vision-and-Language Navigation}
With the development of deep learning and the vision of more helpful AI agents, embodied AI has become an emerging research area. Vision-and-language navigation (VLN)~\cite{r2r,rxr,jain-etal-2019-room-for-room,reverie,zhu-etal-2022-diagnosing} is one of the most popular embodied AI tasks, in which an embodied agent learns to navigate to a goal location in indoor environments following a language instruction and given dynamic visual information. Anderson et al.~\cite{r2r} first propose an LSTM-based sequence-to-sequence model for navigation. For better understanding of vision-and-language information, prior work has studied vision-and-language pre-training~\cite{Hao_2020_CVPR_prevalent,oscar,rec-vlnbert,Qi_2021_ICCV_object,Guhur_2021_ICCV_airbert} and model structures~\cite{rec-vlnbert,chen2021hamt,Gao_2021_CVPR_room-and-Object,xiang-etal-2020-learning}. Reinforcement learning and navigation planning methods have also been introduced into VLN for better action decisions~\cite{RCM,Wang_2018_ECCV_look_before_you_leap,Krantz_2021_ICCV_waypoint,chasing_ghosts}. Limited labeled data is another bottleneck for training better models. To this end, Fried et al.~\cite{Fried2018SpeakerFollowerMF} propose a speaker-follower model, in which a trained speaker generates pseudo instructions for sampled paths. Further, to mitigate the gap between seen and unseen environments, pre-exploration was proposed~\cite{RCM,Envdrop,APS}, which lets the agent learn and adapt to new environments after deployment.
However, most current research ignores practicality in real-life application scenarios, especially data privacy issues. Fu et al.~\cite{APS} consider the implementation problem of pre-exploration and propose environment-based pre-exploration, but they do not consider the privacy of training data. Moreover, we show that environment-based pre-exploration may suffer from data scarcity and data bias.
\subsection{Privacy-preserving Machine Learning}
Over the years, researchers have proposed many methods~\cite{FasterCrypto,PATE,fedavg,texthide} to address different data privacy problems~\cite{ModelInversion,MembershipInfernece,PropertyInference,MembershipInferenceMT} in machine learning.
First, during the training stage, if the training data come from different parties, sharing them with other parties may lead to privacy concerns. At the inference stage, there are multiple privacy attacks, especially in the scenario of Machine Learning as a Service (MLaaS), in which cloud providers offer machine inference hosted on the cloud~\cite{FasterCrypto}. For example, the membership inference attack~\cite{MembershipInfernece} judges whether a specific data sample exists in the training data, and the model inversion attack~\cite{ModelInversion,SecretRevealer} aims to infer training data given white-box or black-box access to the model. Also, in MLaaS, users may not be willing to directly upload their data to the cloud server~\cite{FasterCrypto}. Facing these privacy problems for training and inference data, researchers have proposed many privacy-preserving methods, including federated learning, homomorphic encryption, and differential privacy~\cite{fedavg,SHE,PATE,Li_2021_CVPR}.
However, most of this work focuses on single-modality tasks with static data, and seldom studies the data privacy of embodied AI. In embodied AI tasks like vision-and-language navigation, the data involve richer human-robot interaction and more complex private information, such as corresponding language-image pairs and dynamic visual information in indoor environments. VLN also has a unique training stage, pre-exploration. Both factors make the privacy problems and solutions for VLN more complex. In this work, we elaborate on privacy-preserving VLN training scenarios and propose a solution.
\subsection{Federated Learning}
Federated learning~\cite{fedavg} is a technique that allows client models to be trained locally and then sent to a central server for model aggregation. In this way, clients do not need to send their sensitive data to any party, so the privacy of training data is protected. The first federated learning algorithm~\cite{fedavg} uses a weighted sum to aggregate clients' models. Later, researchers proposed federated learning algorithms for heterogeneous data distributions and personalization~\cite{Li_2021_CVPR,ShareRep,HsuECCV2020Real-World,Personalized_Cross-Silo}. In particular, Collins et al.~\cite{ShareRep} propose to keep the classification head local for personalization. Unlike our framework, they tackle label heterogeneity and learn a general data representation, and their setting has no distinction between training and validation data. Reddi et al.~\cite{reddi2021adaptive} summarize these first-order aggregation methods into one framework, \textsc{FedOpt}, whose server aggregation is:
\begin{equation}
\begin{aligned}
w_{t+1} = \text{\textsc{ServerOpt}}(w_{t}, -\Delta w_{t}, \eta, t)
\end{aligned}
\end{equation}
where \textsc{ServerOpt} is the server aggregation algorithm and $\eta$ is the server learning rate.
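The FedOpt server update above can be sketched in a few lines of Python. This is an illustrative numpy stand-in (all names are ours, not the paper's implementation), with the server optimizer instantiated as plain gradient descent, which recovers FedAvg when $\eta=1$:

```python
import numpy as np

def server_opt(w, pseudo_grad, eta):
    # ServerOpt instantiated as plain gradient descent; the FedOpt
    # framework also admits adaptive optimizers (e.g., Adam) in this slot.
    return w - eta * pseudo_grad

def fedopt_round(w_global, client_deltas, eta=1.0):
    # client_deltas: list of (w_i - w_global) from the selected clients;
    # their negated average acts as a pseudo-gradient for the server.
    avg_delta = np.mean(client_deltas, axis=0)
    return server_opt(w_global, -avg_delta, eta)

w = np.zeros(3)
deltas = [np.array([1.0, 0.0, 2.0]), np.array([3.0, 2.0, 0.0])]
w_new = fedopt_round(w, deltas)  # eta=1 recovers FedAvg: [2., 1., 1.]
```

With a uniform average and $\eta=1$, the round reduces to the original FedAvg update; swapping `server_opt` for an adaptive rule yields the other members of the family.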
Application-wise, the federated learning framework has been used for various tasks in computer vision~\cite{Guo_2021_CVPR_Multi-Institutional,HsuECCV2020Real-World} and natural language processing~\cite{lu2021federated,huang-etal-2020-federated}. Recently, there have also been works on federated learning for multi-modal machine learning~\cite{Liu_Wu_Ge_Fan_Zou_2020_fed_ground,zhao2022multimodal}. Zhao et al.~\cite{zhao2022multimodal} try horizontal federated learning (FL), vertical FL, and federated transfer learning on different multi-modal tasks and datasets, and Liu et al.~\cite{Liu_Wu_Ge_Fan_Zou_2020_fed_ground} use semi-supervised FL to extract hidden representations across modalities. However, the tasks they discuss do not involve embodied agents serving individual users. Vision-and-language navigation also differs from the formerly discussed tasks in that it has two training stages with different objectives and scenarios. To address this, we propose a novel Federated Vision-and-Language Navigation (FedVLN) framework.
\section{Privacy-preserving Vision-and-Language Navigation}
\subsection{Vision-and-Language Navigation (VLN)} \label{background}
The goal of the VLN task is to navigate from a given location to a destination following a natural language instruction. The task can be formally defined as follows: given a language instruction $U = \{u_{1}, u_{2}, ..., u_{n}\}$, at each step the agent receives current visual information $v_{t}$ as input and needs to choose an action $a_{t}$ based on the instruction $U$, the current and history visual information $\{v_{\tau}\}_{\tau=1}^{t}$, and the history actions $\{a_{\tau}\}_{\tau=1}^{t-1}$. The agent's state, which consists of its navigation history and current spatial location, changes according to its actions. The navigation terminates when the agent chooses a `stop' action. Environments that contain labeled training data are seen environments; unseen environments have no training data and are invisible during training.
\noindent\textbf{VLN agents~} In general, VLN agents consist of a language encoding module to understand the instruction, a trajectory encoder to encode visual observation and actions,
and a multimodal decision module to jointly process multi-modal information including encoded language information $L_{enc}$, visual information $V_{enc}$, and action information $A_{enc}$ and predict the next action $a_{t}$:
\begin{gather}
L_{enc} = E_{L}(u_{1}, u_{2},...,u_{n})\\
V_{enc}, A_{enc} = E_{T}(v_{1}, v_{2},...,v_{t}, a_{1}, a_{2},...,a_{t-1}) \label{trajectory encoder}\\
a_{t} = M(L_{enc}, V_{enc}, A_{enc})
\end{gather}
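As a toy illustration of this modular pipeline (language encoder $E_{L}$, trajectory encoder $E_{T}$, multimodal decision module $M$), the following sketch uses random numpy projections in place of the real LSTM/Transformer modules; all names and dimensions are illustrative, not the actual agent architecture:

```python
import numpy as np

rng = np.random.default_rng(0)
D = 8  # hidden size (illustrative)

W_lang = rng.normal(size=(D, D))
W_traj = rng.normal(size=(D, D))
W_dec = rng.normal(size=(2 * D, 4))  # scores over 4 candidate actions

def encode_language(word_embs):
    # stands in for E_L: pool word embeddings and project
    return np.tanh(np.mean(word_embs, axis=0) @ W_lang)

def encode_trajectory(view_feats):
    # stands in for E_T: pool visual features up to step t and project
    return np.tanh(np.mean(view_feats, axis=0) @ W_traj)

def decide(l_enc, v_enc):
    # stands in for M: fuse both modalities and pick the next action a_t
    logits = np.concatenate([l_enc, v_enc]) @ W_dec
    return int(np.argmax(logits))

instruction = rng.normal(size=(5, D))   # n = 5 word embeddings
observations = rng.normal(size=(3, D))  # visual features up to step t
a_t = decide(encode_language(instruction), encode_trajectory(observations))
```

The point of the decomposition is that the three modules are separable; the federated pre-exploration stage later exploits exactly this separability by sharing only the language-encoder parameters.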
\noindent\textbf{Speaker-based data augmentation~}
To tackle the problem of data scarcity, Fried et al.~\cite{Fried2018SpeakerFollowerMF} propose a back-translation speaker model which can generate corresponding instructions $U$ from the visual information and action sequence of sampled routes in the environment:
\begin{equation}
U = Speaker(v_{1}, v_{2},...,v_{t}, a_{1}, a_{2},...,a_{t})
\end{equation}
The speaker is trained on the original labeled route-instruction pairs: it takes the visual and action information of routes as input and predicts the instructions. The generated pseudo instructions, along with the sampled routes, serve as augmented training data for better agent learning.
\noindent\textbf{Pre-exploration~}
After training on seen environments and being deployed to an unseen environment, the agent can adapt to the new environment via pre-exploration~\cite{APS,RCM,Envdrop}. Variants of pre-exploration include self-imitation learning~\cite{RCM}, graph-based methods~\cite{spacial_route_prior,Chen_2021_CVPR_topological}, etc. In our work, we consider the paradigm of sampling routes $R^{'}$ from the new environment and generating instructions $I^{'}$ with the trained speaker mentioned above. The agent can then be trained in the new environment using the sampled routes and pseudo instructions $(R^{'},I^{'})$.
\subsection{Privacy-preserving VLN}
\begin{figure}[t]
\centering
\includegraphics[width=9cm]{private_pre_explore.pdf}
\caption{Comparison between different pre-exploration strategies on performance-privacy trade-off. Federated pre-exploration achieves the best navigation performance while maintaining good inference data privacy protection.}
\label{private pre explore}
\end{figure}
Considering that the data may contain sensitive information, users may have different levels of concern about the privacy of their data. In our work, we consider the case where users do not want their data to be directly shared with the server (e.g., companies) or any other party. Based on this, we define the privacy-preserving vision-and-language navigation setting for two training stages: seen environment training and unseen environment pre-exploration. For seen environment training, including navigation agent training, speaker model training, and the data augmentation process, no labeled data within a house environment may be directly shared with the server or any other client, so as to prevent the leakage of private information. Our primary purpose here is to train a model that generalizes well to unseen environments; thus, we need to utilize all the data indirectly to train one model.
For pre-exploration, the unlabeled data in unseen environments also cannot be shared with others. However, the purpose of this stage is to adapt the model to a specific environment, so training on data from a single environment (environment-based pre-exploration) is a reasonable choice. In fact, our experiments show that environment-based pre-exploration performs better than centralized pre-exploration. Nevertheless, as elaborated in Sec.~\ref{fed pre-exploration}, we can indirectly utilize all the data in pre-exploration to boost performance while preserving privacy. As shown in Fig.~\ref{private pre explore}, we aim to achieve the best performance-privacy trade-off in pre-exploration.
\section{The FedVLN Approach}
\begin{figure}[t]
\centering
\subfigure[Decentralized Federated Training]{
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{stage1.pdf}
\label{decentralized fig}
\end{minipage}
}
\subfigure[Federated Pre-exploration]{
\begin{minipage}[t]{0.45\textwidth}
\centering
\includegraphics[width=5.5cm]{stage2.pdf}
\label{fed pre-explore fig}
\end{minipage}
}
\caption{The FedVLN framework. In the first stage (a), agents in seen environments will be trained on local data and upload the model updates to the server for aggregation (AGG), then download the global model from the server. In the second stage (b), all the agents in seen and unseen environments join the federated training. During local training, all the modules will be optimized, while only the language encoder will be uploaded/downloaded.}
\label{fedvln fig}
\end{figure}
We propose the Federated Vision-and-Language Navigation (FedVLN) framework, shown in Fig.~\ref{fedvln fig}, in which users' data are kept local during both training and pre-exploration. In this section, we introduce the two stages of FedVLN: decentralized federated training and federated pre-exploration. In decentralized federated training, each environment has a local agent that is trained on local data and then uploaded to the server; the global model on the server is updated by aggregating the local model updates and sent back to all environments. In federated pre-exploration, to let the agent adapt to the new environment while maintaining its ability to understand language, only the language encoder is shared with the server after local training, instead of the full model. Agents from both seen and unseen environments join the federated pre-exploration process.
\subsection{Decentralized Federated Training}
\noindent\textbf{Original training data~} When training on the original training data, we first divide the VLN dataset by environment. We treat each environment as a client and assign it a local navigation agent $w_{i}^{0}$, initialized identically to the global navigation agent $w^{0}$. At each communication round between clients and the server, a certain percentage of clients are randomly selected for training, and the local agent on each selected client is trained for a certain number of epochs on its own data $d_{i}$:
\begin{equation}
w_{i}^{t} = {\rm ClientUpdate}(w^{t-1}, d_{i})
\end{equation}
where ${\rm ClientUpdate}$ denotes the local training process. Each selected client then sends its model update $\Delta w_{i}^{t} = w_{i}^{t} - w^{t-1}$ to the server, and the server aggregates all the updates with a server learning rate $\eta$:
\begin{equation} \label{server aggregation}
w^{t} = w^{t-1} + \eta \sum_{i\in \phi_{t}}\frac{n_{i}}{\sum_{j\in \phi_{t}}n_{j}} \Delta w_{i}^{t}
\end{equation}
Here the weight of each local model, $\frac{n_{i}}{\sum_{j\in \phi_{t}}n_{j}}$, is the proportion of client $i$'s samples among the total training samples in this communication round.
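A minimal sketch of this sample-size-weighted server aggregation (illustrative names, a numpy stand-in rather than the actual implementation):

```python
import numpy as np

def aggregate(w_prev, deltas, n_samples, eta=1.0):
    # Server aggregation: each client's update is weighted by its
    # share of this round's training samples, then scaled by the
    # server learning rate eta.
    total = sum(n_samples)
    weighted = sum((n / total) * d for n, d in zip(n_samples, deltas))
    return w_prev + eta * weighted

w = np.array([0.0, 0.0])
deltas = [np.array([1.0, 1.0]), np.array([-1.0, 3.0])]
# client 0 holds 30 samples, client 1 holds 10 -> weights 0.75 / 0.25
w_new = aggregate(w, deltas, n_samples=[30, 10])  # -> [0.5, 1.5]
```

Clients holding more labeled route-instruction pairs thus contribute proportionally more to the new global agent, while the raw data never leave their environments.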
\begin{comment}
\begin{algorithm} [t]
\caption{Decentralized VLN Training}
\label{Decentralized VLN Training}
\begin{algorithmic}[1]
\STATE Parameters: Participation rate $r$; local learning rate $\lambda$; server learning rate $\eta$; number of communication rounds $T$; number of seen environments $n$; local training epochs $\tau$; local data $\{d_{i}\}_{i=1}^{n}$.
\STATE Initialize: $w^{0}$ and $w_{i}^{0} = w^{0}$, for i in \{1,2,...,n\}
\FOR{t in [1,T]}
\STATE Server sample $rn$ seen environments as $\phi_{t}$\
\STATE Server send global agent $w_{t-1}$ to selected environments
\FOR{client in $\phi_{t}$}
\STATE Client update local agent: $w_{i}^{t-1}=w^{t-1}$
\STATE Client train on local data: $w_{i}^{t} = {\rm ClientUpdate}(w_{i}^{t-1}, \tau_{i}, \lambda, d_{i})$
\STATE Client upload delta of the agent $\Delta w_{i}^{t}=w_{i}^{t}-w^{t-1}$ to the server
\ENDFOR
\STATE Server update global agent: $w_{i}^{t} = w^{t-1} + \eta \sum_{i \in \phi_{t}} \frac{n_{j}}{\sum_{j\in \phi_{t}}n_{j}} \Delta w_{i}^{t}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
\end{comment}
\noindent\textbf{Augmentation~}
For data augmentation, we assign each client a local speaker. Following the federated learning paradigm mentioned above and the speaker training procedure from Sec.~\ref{background}, at each communication round each speaker from the selected clients is trained on the labeled route-instruction pairs in its environment.
The best global model (according to BLEU score) during training is sent to all clients. Each client can then use the speaker model to generate pseudo instructions $I_{i}^{aug}$ for routes sampled within its environment. The augmented training of the agent also follows the federated training process above, except that the local data are the combination of the original and augmented data $\{(d_{i}, d_{i}^{aug})\}_{i=1}^{n}$.
Notice that during the whole training process, including original data training, speaker training, and augmented data training, no client shares its data with other clients or the server; thus the privacy of the training data is preserved.
\begin{figure}[t]
\centering
\begin{minipage}{0.36\textwidth}
\centering
\resizebox{\columnwidth}{!}{
\begin{tabular}{lrr}
\toprule
\textbf{Statistics} & \textbf{GT} & \textbf{Speaker} \\
\midrule
Length & 29.58 & 21.89 \\
Var(Length) & 155.70 & 20.88 \\
NoS & 2.44 & 2.42 \\
Var(NoS) & 1.21 & 0.47 \\
\bottomrule
\end{tabular}
}
\captionof{table}{Comparison between ground-truth (GT) and speaker generated instructions on seen validation. NoS means the average number of sentences.}
\label{instr statistics}
\end{minipage}
\begin{minipage}{0.58\textwidth}
\centering
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[height = 3.2cm]{gt_verb_frequency.pdf}
\end{minipage}
\begin{minipage}{0.48\textwidth}
\centering
\includegraphics[height = 3.3cm]{pseudo_verb_frequency.pdf}
\end{minipage}
\caption{Comparison of verb frequency between ground truth instructions and generated pseudo instructions. }
\label{verb freq}
\end{minipage}
\end{figure}
\subsection{Federated Pre-exploration} \label{fed pre-exploration}
Pre-exploration allows the agent to explore the newly deployed environment and update itself with the new information. From the perspective of data privacy, centralized pre-exploration is impractical here, since it assumes one navigation agent can access data from all the unseen environments. Fu et al.~\cite{APS} propose environment-based pre-exploration, which trains each agent on only one environment, so no data are shared with other parties and the data privacy of unseen environments is preserved. From the performance point of view, centralized training should generalize better, since the agent is trained on data from all environments; however, training on all environments may hinder the agent from adapting to one specific environment. For environment-based pre-exploration, the agent can focus on one specific environment, while the limited data amount and data bias in a single environment may lead to a less generalized agent.
Furthermore, as shown in Table~\ref{instr statistics} and Fig.~\ref{verb freq}, we find that the instructions generated by the speaker differ statistically significantly from human-generated instructions, and their language pattern is much simpler than human language. Since current methods train only on augmented data with speaker-generated instructions during pre-exploration, the agent may suffer from the large distribution shift between the instructions in augmented data and those in validation data, and may not understand validation instructions well. This problem could be even worse for environment-based pre-exploration, since the data for one agent are smaller in amount and come from a single environment.
Moreover, according to prior research~\cite{diagnosingEnv}, the agent performs better on seen paths and environments. Thus, the best solution is to maintain general language understanding while adapting to the specific visual environment. To this end, as shown in Fig.~\ref{fed pre-explore fig}, we propose federated pre-exploration. In federated pre-exploration, the server maintains only a global language encoder, initialized with the global encoder obtained after decentralized federated VLN training. During each communication round, the server sends the global language encoder $E^{t-1}$ to the selected clients. The selected clients then update their language encoders with $E^{t-1}$ and train the full agent on their local data:
\begin{equation}
E_{L,i}^{t}, E_{T,i}^{t}, M_{i}^{t} = {\rm ClientUpdate}(E_{L,i}^{t-1}, E_{T,i}^{t-1}, M_{i}^{t-1}, \tau, \lambda)
\end{equation}
After local training, each client sends only the language encoder $E_{L,i}^{t}$ to the server for aggregation (lines 9 and 11 in Alg.~\ref{pre explore alg}). In this way, the language encoder is jointly updated on data from all participating environments and thus becomes more generalized. Meanwhile, to further improve the generalizability of the language encoder, we randomly sample a fraction of seen environments at each communication round, whose agents also follow the training process above.
The trajectory encoding module $E_{T,i}$ and multi-modal decision module $M_{i}$ keep training locally, which helps local agents adapt to their own environments. For validation, we use the local models after local training. The whole training procedure is shown in Alg.~\ref{pre explore alg}.
\begin{algorithm} [t]
\caption{Federated Pre-exploration}
\label{pre explore alg}
\begin{algorithmic}[1]
\STATE Parameters: Seen participation rate $r_{1}$, unseen participation rate $r_{2}$; local learning rate $\lambda$; server learning rate $\eta$; number of communication rounds $T$; number of seen environments $n$; number of unseen environments $m$; local training epochs $\tau$.
\STATE Initialize: $E_{L,i}^{0} = E_{L}^{0}$, $E_{T,i}^{0} = E_{T}^{0}$, $M_{i}^{0} = M^{0}$, for i in \{1,2,...,n+m\}
\FOR{t in [1,T]}
\STATE Server samples $r_{1}n$ seen environments and $r_{2}m$ unseen environments as $\phi_{t}$
\STATE Server sends the global language encoder $E^{t-1}$ to the selected environments
\FOR{client in $\phi_{t}$}
\STATE Client updates its language encoder: $E_{L,i}^{t-1}=E^{t-1}$
\STATE Client local training: $E_{L,i}^{t}, E_{T,i}^{t}, M_{i}^{t} = {\rm ClientUpdate}(E_{L,i}^{t-1}, E_{T,i}^{t-1}, M_{i}^{t-1}, \tau, \lambda)$
\STATE Client uploads the language encoder update $\Delta E_{i}^{t}=E_{L,i}^{t}-E^{t-1}$ to the server
\ENDFOR
\STATE Server updates the language encoder: $E^{t} = E^{t-1} + \eta \sum_{i \in \phi_{t}} \frac{n_{i}}{\sum_{j\in \phi_{t}}n_{j}} \Delta E_{i}^{t}$
\ENDFOR
\end{algorithmic}
\end{algorithm}
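To make the partial sharing concrete, the following sketch (illustrative names, numpy arrays standing in for real model parameters) aggregates only the language-encoder parameters, while trajectory encoders and decision modules never leave their clients:

```python
import numpy as np

def partial_aggregate(global_lang, client_models, n_samples, eta=1.0):
    # Aggregate only the language encoder across clients (the server
    # update step of the algorithm above); trajectory encoders and
    # decision modules are never uploaded.
    total = sum(n_samples)
    delta = sum((n / total) * (m["lang_enc"] - global_lang)
                for n, m in zip(n_samples, client_models))
    return global_lang + eta * delta

def sync_client(model, global_lang):
    # Client downloads the global language encoder only; its other
    # modules keep their locally adapted parameters.
    model["lang_enc"] = global_lang.copy()
    return model

g_lang = np.zeros(2)
clients = [
    {"lang_enc": np.array([2.0, 0.0]), "traj_enc": np.array([5.0])},
    {"lang_enc": np.array([0.0, 4.0]), "traj_enc": np.array([7.0])},
]
g_lang = partial_aggregate(g_lang, clients, n_samples=[10, 10])  # -> [1., 2.]
clients = [sync_client(c, g_lang) for c in clients]
```

After the round, both clients share one generalized language encoder, while their trajectory encoders remain distinct and environment-specific, mirroring the intended split between shared language understanding and local visual adaptation.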
\begin{comment}
\subsection{Training paradigm in real-world application}
In real-world application, users who are willing and unwilling to label data will continue to increase. We also further design a training paradigm for this situation based on our FedVLN framework. In this paradigm, the server will maintain a full model for seen environments training and a language encoder for unseen environment exploration. As the number of seen environments increasing, the full model will consistently trained by federated learning on seen environments. The pre-exploration training will happens periodically as seen environments increase, based on better trained models. \xin{This should be discussed earlier. And we do need a method figure for federated pre-exploration.}
\end{comment}
\section{Experimental Setup}
\subsection{Datasets}
We implement our federated learning framework on two datasets: Room-to-Room (R2R)~\cite{r2r} and the English subset of Room-across-Room (RxR)~\cite{rxr}. Both datasets are built on the Matterport3D Simulator~\cite{r2r}, a photorealistic 3D environment for embodied AI research.
\noindent\textbf{R2R~\cite{r2r}} is constructed by generating the shortest paths from sampled start and end points; three associated navigation instructions were then collected for each path using Amazon Mechanical Turk (AMT). The dataset contains 7,189 paths from 90 environments, each path with 3 instructions. The environments are split into 61 for training and seen validation, 11 for unseen validation, and 18 for testing. Environments in the unseen validation and test sets do not appear among the training environments.
\noindent\textbf{RxR~\cite{rxr}} is proposed to mitigate shortcomings of former VLN datasets. Specifically, it is a large-scale dataset with multilingual instructions. It contains 16,522 paths and 126,069 instructions, among which 42,002 instructions are in English. RxR also ensures spatiotemporal alignments between instructions, visual percepts, and actions for agent training. The RxR dataset samples arbitrary paths from point to point (not necessarily shortest paths) to avoid data bias.
\subsection{Evaluation Metrics}
For both datasets, we report Success Rate (SR), Success Rate weighted by Path Length (SPL), Oracle Success Rate (OSR), and Navigation Error (NE) as goal-oriented metrics. SR is the percentage of episodes in which the agent stops within 3 meters of the end point. SPL~\cite{spl} is defined as Success weighted by normalized inverse Path Length, which considers both navigation effectiveness and efficiency. OSR is the percentage of episodes in which the agent visits a point within 3 meters of the end point. NE is the average distance between the agent's final location and the end point. We also report Coverage weighted by Length Score (CLS)~\cite{jain-etal-2019-stay} and normalized Dynamic Time Warping (nDTW)~\cite{ndtw} to validate the fidelity of navigation paths; both penalize deviation from the reference path. SR and SPL are often considered the primary metrics for VLN evaluation.
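As a concrete reading of these definitions, SR and SPL over a batch of episodes can be computed as follows. This is a sketch with hypothetical field names, using the 3-meter success threshold defined above:

```python
def sr_and_spl(episodes, threshold=3.0):
    # episodes: dicts with 'nav_error' (final distance to goal, meters),
    # 'path_len' (distance the agent actually traveled), and
    # 'shortest' (shortest-path length from start to goal).
    sr_sum, spl_sum = 0.0, 0.0
    for ep in episodes:
        success = 1.0 if ep["nav_error"] <= threshold else 0.0
        sr_sum += success
        # SPL: success weighted by normalized inverse path length,
        # so detours shrink the score even on successful episodes.
        spl_sum += success * ep["shortest"] / max(ep["path_len"],
                                                  ep["shortest"])
    n = len(episodes)
    return sr_sum / n, spl_sum / n

eps = [
    {"nav_error": 1.0, "path_len": 12.0, "shortest": 10.0},  # success, detour
    {"nav_error": 5.0, "path_len": 10.0, "shortest": 10.0},  # failure
]
sr, spl = sr_and_spl(eps)  # sr = 0.5, spl = (10/12)/2
```

The example shows why SPL is stricter than SR: the first episode succeeds but took a 12 m route against a 10 m shortest path, so it contributes only 10/12 of a full success to SPL.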
\subsection{Baselines}
We do not consider pre-training privacy in this work, and pre-training on centralized VLN data would itself infringe on data privacy. Thus, we choose two strong VLN baselines without VLN pre-training for our experiments:
\begin{enumerate}
\item \textbf{Envdrop}~\cite{Envdrop}: the environment dropout model uses a Bi-directional LSTM as the language encoder, an attentive LSTM as the action decoder, and a mixed learning objective of imitation learning and reinforcement learning.
\item \textbf{CLIP-ViL}~\cite{CLIP_on_vl}: the CLIP-ViL model adapts CLIP~\cite{clip} visual encoder to improve vision and language encoding and matching for vision-and-language navigation.
\end{enumerate}
\subsection{Implementation Details}
When training on seen environments, all models are trained till convergence. At each communication round, we use the participation rate of $r=0.2$, and train each local agent for $\tau=3$ epochs on local data for Envdrop model and $\tau=5$ for CLIP-ViL model.\footnote{The discussion and ablation study of local training epochs is in appendix.} For federated speaker training, we set the local epochs $\tau=5$ and select the best model on seen validation data according to BLEU score to generate instructions.
During pre-exploration, we test participation rates $r_{1}\in\{0.5,0.6,0.7\}$ for unseen environments and train each agent for $\tau_{1}=1$ epoch over its unseen local dataset, as the model converges quickly. When training across seen and unseen environments, we use a participation rate of $r_{2}=0.18$ for seen environments. To validate the effectiveness of our framework, we use the federated-trained speaker to generate pseudo instructions and start from the federated-trained navigation agent for centralized, environment-based, and federated pre-exploration.
\section{Results}
\subsection{Decentralized Federated Training} \label{results seen training}
\begin{table}[t]
\centering
\setlength{\abovecaptionskip}{8pt}
\setlength{\belowcaptionskip}{8pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|cccccc|cccccc}
\toprule
\multirow{2}*{\textbf{Model}} &
\multicolumn{6}{c|}{\textbf{Val-Seen}} & \multicolumn{6}{c}{\textbf{Val-Unseen}}\\
\cmidrule(lr){2-7}\cmidrule(lr){8-13}
& NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ & NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ \\
\midrule
Envdrop & 4.71 & 65.6 & 53.2 & 56.1 & 66.8 & 55.0
& 5.87 & 52.7 & 40.9 & 44.5 & 57.1 & 42.3 \\
FedEnvdrop & 5.20 & 60.2 & 48.3 & 51.2 & 64.3 & 51.2
& 5.52 & 55.5 & 43.9 & 47.5 & 59.2 & 45.7\\
\hdashline[1pt/2pt]
Envdrop$\rm _{aug}$ & 3.93 & 71.6 & 61.0 & 64.1 & 71.4 & 61.3
& 5.36 & 57.7 & 45.8 & 49.9 & 59.3 & 45.3 \\
FedEnvdrop$\rm _{aug}$ & 4.14 & 70.4 & 58.7 & 62.0 & 69.8 & 59.2 & 5.22 & 58.5 & 47.5 & 51.3 & 60.8 & 47.0 \\
\midrule
CLIP-ViL & 4.07 & 70.7 & 57.9 & 62.9 & 67.7 & 55.8
& 5.02 & 63.1 & 47.5 & 53.6 & 58.1 & 44.5\\
FedCLIP-ViL & 4.13 & 66.5 & 57.1 & 60.6 & 68.0 & 55.6
& 4.91 & 61.1 & 49.0 & 53.4 & 60.8 & 47.8 \\
\hdashline[1pt/2pt]
CLIP-ViL$\rm _{aug}$ & 3.52 & 75.0 & 61.7 & 66.8 & 69.3 & 58.6
& 4.59 & 67.4 & 50.7 & 57.0 & 59.2 & 46.4 \\
FedCLIP-ViL$\rm _{aug}$ & 3.69 & 71.4 & 60.1 & 64.6 & 68.9 & 57.2 & 4.53 & 65.6 & 51.7 & 56.9 & 61.0 & 48.3\\
\bottomrule
\end{tabular}
}
\caption{R2R results of seen environment training. Envdrop is the centralized Envdrop model, and FedEnvdrop is the federated Envdrop model; $\rm Envdrop_{aug}$ denotes the Envdrop model trained with augmented data. Our decentralized federated training outperforms centralized training with Envdrop and achieves comparable results with CLIP-ViL on unseen environments.
}
\label{seen r2r}
\end{table}
\begin{table}[t]
\centering
\setlength{\abovecaptionskip}{8pt}
\setlength{\belowcaptionskip}{8pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{l|cccccc|cccccc}
\toprule
\multirow{2}*{\textbf{Model}} &
\multicolumn{6}{c|}{\textbf{Val-Seen}} & \multicolumn{6}{c}{\textbf{Val-Unseen}}\\
\cmidrule(lr){2-7}\cmidrule(lr){8-13}
& NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ & NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ \\
\midrule
Envdrop & 7.97 & 51.6 & 38.0 & 40.7 & 58.8 & 54.0 & 8.42 & 45.1 & 31.8 & 35.0 & 55.6 & 50.6 \\
FedEnvdrop & 8.30 & 53.3 & 35.9 & 39.7 & 57.2 & 51.8 & 8.58 & 47.4 & 30.2 & 34.4 & 55.4 & 48.8 \\
\midrule
CLIP-ViL & 6.92 & 56.8 & 42.3 & 46.5 & 60.0 & 56.2 & 7.38 & 50.6 & 34.9 & 39.5 & 55.6 & 51.2 \\
FedCLIP-ViL & 7.31 & 52.0 & 39.7 & 43.5 & 59.4 & 55.1 & 7.41 & 48.8 & 35.0 & 39.2 & 57.5 & 53.1 \\
\bottomrule
\end{tabular}
}
\caption{RxR results of seen environment training. Decentralized federated training obtains comparable results with centralized training on unseen environments (e.g., only 0.1\% SPL difference with the CLIP-ViL model).}
\label{seen rxr}
\end{table}
\noindent\textbf{Original data training}
In Table~\ref{seen r2r} and Table~\ref{seen rxr}, we report the results of seen environment training on the R2R and RxR datasets for both baselines.
First, federated learning performs worse than centralized training on seen environments, with an average SR gap of 2.8\%. This is reasonable: centralized training can easily overfit to the seen training data for better performance on seen environments, while in federated learning, because of the decentralized optimization over protected local data, the global model cannot overfit to the seen environments as well as centralized training does.
The performance on unseen environments tests the generalization ability of VLN models and is used for VLN evaluation.
As shown in Table~\ref{seen r2r} and Table~\ref{seen rxr}, on unseen environments, decentralized federated training achieves comparable results with centralized training on original data training across different VLN models. For example, FedEnvdrop performs better than Envdrop on R2R and nearly the same on RxR, and FedCLIP-ViL obtains comparable results with CLIP-ViL on both R2R and RxR.
Thus, in terms of generalization ability, our decentralized federated training method is comparable with centralized training while protecting the training data privacy.
\begin{table}[t]
\centering
\setlength{\abovecaptionskip}{8pt}
\setlength{\belowcaptionskip}{8pt}
\begin{tabular}{lcc}
\toprule
\textbf{Speaker} & \textbf{Val-seen} & \textbf{Val-unseen} \\
\midrule
CLIP Cent & 33.5 & 30.2 \\
CLIP Fed & 31.3 & 31.6 \\
\hdashline[1pt/2pt]
ResNet Cent & 33.6 & 30.7 \\
ResNet Fed & 31.7 & 31.9 \\
\bottomrule
\end{tabular}
\caption{Comparison of BLEU score between federated speaker and centralized speaker based on the CLIP encoder and the ResNet encoder on R2R.}
\label{speaker bleu}
\end{table}
\noindent\textbf{Augmented data training}
First, the performance of the speaker model determines the quality of the augmented data, and thus influences the performance of both augmented training and pre-exploration. Table~\ref{speaker bleu} shows that the federated speaker scores 2.05 BLEU points lower than the centralized speaker on seen validation, but 1.3 points higher on unseen validation, which aligns well with the navigation results. Thus the federated trained speaker is comparable with the centralized speaker on instruction generation.
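The quoted 2.05 and 1.3 point gaps are averages of the federated-minus-centralized BLEU differences over the two encoders in Table~\ref{speaker bleu}; a quick check of that arithmetic:

```python
# BLEU scores from Table "speaker bleu": (centralized, federated) per encoder.
seen   = {"CLIP": (33.5, 31.3), "ResNet": (33.6, 31.7)}
unseen = {"CLIP": (30.2, 31.6), "ResNet": (30.7, 31.9)}

def mean_gap(scores):
    """Average federated-minus-centralized BLEU difference across encoders."""
    return sum(fed - cent for cent, fed in scores.values()) / len(scores)

seen_gap, unseen_gap = mean_gap(seen), mean_gap(unseen)
# federated is 2.05 points lower on val-seen, 1.3 points higher on val-unseen
```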
From Table~\ref{seen r2r}, when training on augmented data, the performance of federated learning is also comparable with centralized training on unseen environments, even though the pseudo data generated by the federated speaker may be of lower quality according to the BLEU score on seen validation data. This shows that the federated agent can learn generalized navigation strategies even from lower-quality pseudo data.
\subsection{Federated Pre-exploration}
To validate the effectiveness of federated pre-exploration on unseen environments, we compare centralized pre-exploration, environment-based pre-exploration, and different federated pre-exploration methods: full model sharing (Fed-Full), sharing the language encoder only (Fed-Lan), and sharing the language encoder across seen and unseen environments (Fed-Lan+seen). Results are shown in Table~\ref{pre-pxplore table}.
\noindent\textbf{Navigation performance} For centralized pre-exploration and Fed-Full, in which one agent is optimized on data from all the environments, the agent cannot adapt well to each specific environment. For example, there is a 3.1\% SR gap between centralized training and environment-based pre-exploration. When sharing only the language encoder during federated learning, the validation results improve significantly compared with full model sharing (e.g., 4.2\% on SR and 4.4\% on nDTW), since the agents can adapt to each environment better. Moreover, the generalization ability of the language encoder is better than in environment-based pre-exploration, since it is trained on more data across different environments. Thus, even under federated optimization, sharing only the language encoder in federated pre-exploration achieves similar results compared with environment-based pre-exploration.
Federated pre-exploration with seen environments further improves the performance by benefiting from human-labeled data, achieving around 0.8\% SR improvement over environment-based pre-exploration. To sum up, our Fed-Lan+seen method achieves superior navigation performance in terms of both navigation success and path fidelity metrics.
\begin{comment}
We notice that training with seen environments does not necessarily lead to better CLS and nDTW scores.
We further compare Success weighted by normalized Dynamic Time Warping (sDTW), which evaluate the path fidelity of successful examples only. Fed-Lan+seen has the best sDTW result, which is 57.3 on Envdrop and 60.2 on CLIP-ViT. And Fed-Lan achieves 56.8 on Envdrop and 58.7 on CLIP-ViT.
This means that, in the success cases, the Fed-Lan+seen agent performs the best in terms of both success rate and path fidelity.
In the failure cases, the Fed-Lan+seen agent tends to explore more and produce longer paths, which leads to lower scores on nDTW and CLS. Thus, federated pre-exploration with seen environments indeed enable more effective and faithful navigation with better exploration ability.
\end{comment}
\noindent\textbf{Degree of privacy} From the perspective of privacy preservation, environment-based pre-exploration is the best: nothing in the unseen environments is shared with others.
Centralized training is clearly the worst, since all the observable data from unseen environments are directly shared with the server. Federated pre-exploration only uploads model updates to the server.
Among federated methods, sharing only the language encoder protects data privacy better than full model sharing: it shares only the updates of the language encoder, which accounts for just 24.6\% of the parameters, and keeps all other modules completely local.
Training with seen environments does not make the training process less private, as the seen environments have already shared their parameter updates with the server in the decentralized federated training process.
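Partial model sharing amounts to filtering the update by parameter name before upload; a minimal sketch (the \texttt{lang\_encoder.} key prefix and the numpy state dict are illustrative assumptions, not the actual module names):

```python
import numpy as np

LANG_PREFIX = "lang_encoder."   # hypothetical name of the shared module

def shared_update(local_state, global_state, prefix=LANG_PREFIX):
    """Keep only the language-encoder part of the local update for upload;
    every other module stays on the client."""
    return {name: local_state[name] - global_state[name]
            for name in local_state if name.startswith(prefix)}

def shared_fraction(state, prefix=LANG_PREFIX):
    """Fraction of parameters whose updates ever leave the client."""
    total = sum(v.size for v in state.values())
    shared = sum(v.size for n, v in state.items() if n.startswith(prefix))
    return shared / total

global_state = {"lang_encoder.w": np.zeros((2, 3)), "policy.w": np.zeros(18)}
local_state  = {n: v + 1.0 for n, v in global_state.items()}
upload = shared_update(local_state, global_state)
```

In this toy state dict the shared module holds 6 of 24 parameters; in the actual agent the language encoder accounts for 24.6\% of the parameters.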
\begin{table*}[t]
\centering
\setlength{\abovecaptionskip}{8pt}
\setlength{\belowcaptionskip}{8pt}
\resizebox{\columnwidth}{!}{
\begin{tabular}{llccccccl}
\toprule
Model & Method & NE$\downarrow$ & OSR$\uparrow$ & SPL$\uparrow$ & SR$\uparrow$ & CLS$\uparrow$ & nDTW$\uparrow$ & \qquad \quad \; \; Privacy$\uparrow$\\ \midrule
\multirow{5}*{Envdrop}
& Centralized &3.89 &73.7 &61.7 &64.8 &71.5 &64.6 & 0 - sharing data\\
& Env-based &\underline{3.49} &\textbf{78.5} &\underline{64.0}& \underline{67.4}&\underline{73.2} &\underline{67.5} & \textbf{3} - no sharing\\
&Fed-Full & 3.96 & 74.1 & 59.1 & 62.4 & 70.9 & 63.3 & 1 - model sharing (100\%)\\
&Fed-Lan &3.52 & 77.6 & 63.6 & 67.2 & \underline{73.2} & \textbf{67.6} &\underline{2} - model sharing (24.6\%)\\
&Fed-Lan+seen &\textbf{3.47} &\underline{78.1} &\textbf{64.8} &\textbf{68.3} &\textbf{73.5} &67.3 & \underline{2} - model sharing (24.6\%)\\
\midrule
\multirow{5}*{CLIP-ViT}
& Centralized &3.70 &76.0 &60.8 &65.3 &70.5 &62.1 & 0 - sharing data\\
& Env-based &\underline{3.31} &\underline{79.2} &65.2 &68.9 &\textbf{74.4} &\textbf{69.3} & \textbf{3} - no sharing\\
&Fed-Full&3.68 & 74.9& 61.8& 65.8 & 70.5 & 61.9 & 1 - model sharing (100\%)\\
&Fed-Lan &3.33 &\underline{79.2} &\textbf{65.4} &\underline{69.1} &74.0 &68.3 &\underline{2} - model sharing (24.6\%)\\
&Fed-Lan+seen &\textbf{3.25} &\textbf{80.6} &\textbf{65.4} &\textbf{69.5} &\textbf{74.4} &\underline{68.9} &\underline{2} - model sharing (24.6\%)\\
\bottomrule
\end{tabular}
}
\caption{Comparison between different pre-exploration methods on R2R unseen validation. Fed-Full denotes full model sharing in federated learning, Fed-Lan denotes sharing only the language encoder in federated learning, and Fed-Lan+seen denotes federated training with seen environments while sharing the language encoder only.}
\label{pre-pxplore table}
\end{table*}
Overall, our federated pre-exploration method achieves a good performance-privacy trade-off. Centralized training is the worst in terms of both navigation ability and privacy protection. Environment-based pre-exploration offers the best privacy protection of unseen environment data. Federated pre-exploration achieves the best navigation results with little privacy cost, by keeping all client data local and sharing only the language encoder's model updates with the server.
\section{Conclusion and Future Work}
In this paper, we study data privacy problems in vision-and-language navigation with respect to two learning scenarios: seen environment training and unseen environment pre-exploration. We propose a novel federated vision-and-language navigation (FedVLN) framework to preserve data privacy in both learning stages while maintaining comparable navigation performance. Furthermore, we show that federated pre-exploration can even outperform all previous pre-exploration methods and achieve the best performance-privacy trade-off.
As the first work in this direction, we do not consider adversarial attacks that could potentially recover data from shared local model updates; we believe future work can extend to more embodied AI tasks and defend against such privacy attacks for stronger data security.
\\
\noindent\textbf{Acknowledgement}
We thank Jing Gu, Eliana Stefani, Winson Chen, Yang Liu, Hao Tan, Pengchuan Zhang, and anonymous reviewers for their valuable feedback.
This work is partially supported by the PI's UCSC start-up funding.
\clearpage
\bibliographystyle{splncs04}
The traditional and widely used picture of elastic multiple scattering of light is one of a plane wave that exponentially decays on the length scale of a mean free path while
propagating from one particle to another, with an electric field orthogonal to the direction of propagation, and with subsequent scattering into a different direction in space,
with exactly the same frequency.
It is well known that transversality in real space ($\mathbf{r} \cdot \mathbf{E}(\mathbf{r})=0$) is only valid in the far field of the scatterers,
at distances much larger than the wavelength.
In the near field of a dielectric object, the electric field achieves a ``dipolar'' structure, with a component directed along the propagation direction,
while still being divergence-free, i.e. $\bm{\nabla} \cdot\mathbf{ E }(\mathbf{r})= 0$. In many approaches of multiple light scattering,
these longitudinal modes are widely appreciated, yet considered ``virtual", in the sense that they do not carry a Poynting vector so that they
cannot transport electromagnetic energy themselves, though they can mediate the propagation of other waves, such as mechanical \cite{levitov} or matter \cite{matterdipole} waves.
However, in inhomogeneous media, the
dielectric constant
$\varepsilon(\mathbf{r})$ of the matter varies in space, and
Gauss' equation imposes $\bm{\nabla} \cdot [\varepsilon(\mathbf{r}) \mathbf{ E }]= 0$. As a result, true longitudinal electric fields exist,
with $\bm{\nabla} \cdot\mathbf{ E }\neq 0$ and a finite density of states (DOS) in phase space, to which elastic scattering could take place.
Induced polarization charges possess
Coulomb energy, and also stock dipole-dipole energy
among different scatterers but have no Poynting vector, so how can they transport energy? In atomic physics, the well-known process of F\"{o}rster coupling \cite{forster} facilitates a non-radiative
transport mechanism to exchange quantum states and to move Coulomb energy from one atom to another. Like spontaneous emission, this process is inherently inelastic and incoherent,
and is \emph{de facto} excluded in a picture where only elastic multiple scattering, including interferences, is allowed. Much in the spirit of F\"{o}rster coupling,
Ref.~\cite{theoL} added explicitly the quasi-static dipole-dipole coupling as a new channel in transport theory of electromagnetic waves.
In this work we will show that this transport channel
naturally emerges from a rigorous electromagnetic transport theory. The finite Poynting vector of this channel is shown to originate from the interference between longitudinal and transverse modes.
The transverse picture of electromagnetic waves emerges naturally in the so-called ``independent scattering approximation'' (ISA) of diffuse transport. In this approximation, the
longitudinal waves are usually ignored, and only transverse, propagating states are counted, associated with damped plane waves with
wave numbers close to the frequency
shell $p \approx k = \omega/ c_0 $ in phase space. A fundamental question is whether this picture is significantly altered, \emph{within and beyond} the ISA,
or if just quantitative modifications occur.
Longitudinal states have a finite density of states (DOLS), proportional to the imaginary part of the (longitudinal) dielectric
constant of the effective medium. Being mainly confined to scatterers,
they exist far from the frequency shell in phase space, typically at very large wave vectors $p \gg k$.
We will show that, due to the dipole size that is much smaller than the optical wavelength, excitations with large wave numbers can scatter and mode-convert to both
transverse and longitudinal states. As such they take fully part in the diffuse transport.
\section{Road map of this work} \label{roadmap}
The purpose of this work is to elucidate the role of longitudinal waves in the
transport of electromagnetic waves propagating in disordered media at scales well beyond the mean free path. Our basic starting point is that light propagation over a distance
$\mathbf{r}$ is described by the full vector Green's function $\mathbf{G}(k,\mathbf{ r})$. Its longitudinal part dominates, at any frequency $\omega = kc_0$, for small distances, or in Fourier space
at large wave numbers $p$, typically
$p \gg k$. This affects many aspects of multiple scattering. The consideration of the full vector Green's function in both the scattering and the propagation
guarantees energy conservation in all orders of scattering; neglecting the longitudinal fields would violate this principle, which is fundamental for long-range diffusion.
The following four main sections each revisit a well-known result of standard multiple scattering theory to include the
longitudinal waves.
In section~\ref{sectionEM} we revisit the effective medium theory of electromagnetic waves, associated with the average electromagnetic field. It is fully characterized by a complex self-energy tensor $\mathbf{\Sigma}(k, \mathbf{p})$
that depends on both circular frequency $\omega= k c_0$ and wave vector $\mathbf{p}$, including its direction. We discuss the role of the longitudinal fields in several issues that are directly related to
the effective medium:
independent and recurrent scattering, density of states and
equipartition between longitudinal and transverse electromagnetic waves. The longitudinal component is related to the subtle Lorentz cavity described in many text books
\cite{bw,jackson}.
An electric dipole scatterer that is impenetrable to the light (arguably a simple ``atom") is the simplest model that highlights longitudinal excitations in resonant scattering.
In this model, the energy stored by the
radiating dipole is large and \emph{entirely }of longitudinal nature.
We study the effective medium of
a 3D space filled with a volume density $n$ of such dipoles. Longitudinal fields give rise to
singularities at large wave numbers, and show up later in diffusion and localization of light. They are treated
while respecting the conservation of energy. The longitudinal self-energy has a non-trivial behavior for $p\rightarrow \infty$ that will play a role in
long-range diffusion discussed later. It gives rise to a new complex wave number
$K_L$ associated with the density of states of longitudinal excitations (DOLS) with large wave numbers. The contribution of binary
dipole-dipole coupling to DOLS will be calculated analytically, and is obtained numerically in all orders
of the dipole density. Finally we show that for multiple scattering in this model, longitudinal excitations rapidly dominate the total density of states.
Section~\ref{kubosection} deals with the transport theory of the average light intensity and in particular with the role of longitudinal fields in the rigorous
Kubo formalism for the diffusion constant. At length scales well beyond the mean free path, the exact ``Bethe-Salpeter" transport equation simplifies to the diffusion current tensor
$\mathbf{J}(k, \mathbf{p},\mathbf{q})$ that determines
the flow of electromagnetic energy in phase space for excitations near the resonant frequency of the dipoles and for arbitrary wave number and polarization, induced by an energy gradient (described by $\mathbf{q}$ in
Fourier space).
We demonstrate the presence of \emph{four} different channels that
affect this flow differently, among which\emph{ two} directly affect the diffusion constant. The first so-called $J_0$ channel is the widely studied diffusion of transverse electromagnetic waves, with wave numbers close
to the frequency shell
$p \approx k $ of the effective medium. The second $J_2$ channel has - to our knowledge - never
been discussed before, and originates from the interference between
longitudinal and transverse excitations.
Because longitudinal waves alone do not possess a Poynting vector, they need this interference to produce long-range diffusion. Like weak localization in the $J_0$ channel,
the $J_2$ channel is a high-order
effect in perturbation theory and is absent in the Boltzmann approximation valid for weak disorder. The leading contributions to $J_2$ come from the
Drude diffusion (defined as the contribution of the effective medium to diffusion \cite{mahan}) and the weak localization (interference of counter-propagating waves) induced by two close dipoles. In the $J_0$ channel the latter is
known to be of order $-1/k\ell$
(with $\ell$ the mean free path) \cite{bartpre, cherro,kwong}, whereas the weak localization in the $J_2$ channel turns out to be \emph{positive} and of order
$+1/(k\ell)^2$. We demonstrate that by summing all diagrams involving two dipoles, large contributions to $J_2$ diffusion stemming from large wave vectors cancel rigorously. This is a very convenient albeit non-trivial conclusion.
In Section~\ref{sectionrad} we recall the radiative force exerted by a diffuse electromagnetic flow, well-know and widely used in astrophysics. We demonstrate that the stored
longitudinal excitations, although not contributing to the Poynting vector, resonantly enhance the radiative force density in the disordered medium.
Finally in section~\ref{sectionAL} we make a first attempt to incorporate the longitudinal transport channels into the self-consistent theory of localization. For scalar waves (acoustic waves or spinless electrons)
this theory predicts without much effort a mobility edge in disordered three-dimensional media when $k\ell \approx 1$ \cite{vw,zhang}. Static electric dipole coupling was already identified as a possible source of
delocalization of mechanical waves \cite{levitov}. Recent numerical simulations with electromagnetic wave scattering from point-like electric dipoles
revealed the absence of a mobility edge \cite{sergey0} and are difficult to explain within the
traditional picture that only acknowledges the transverse field as a mechanism for diffuse transport.
{Also, experiments \cite{naraghi} have revealed that diffusion of electromagnetic waves in dense media cannot be explained by the
familiar scenario of a transition from diffusion to localization, most likely due to near-field couplings. }
We demonstrate that the standard self-consistent theory, when applied with the usual approximations
to electromagnetic waves,
couples the newly identified diffusion channel $J_2$ to the channel $J_3$. This channel does not affect the Poynting vector, proportional to $\mathrm{Re}\, ( {\mathbf{E} }\times \bar{\mathbf{B}}) $
but rather induces a non-zero value for $\mathrm{Im}\, ({\mathbf{E} }\times \bar{\mathbf{B}})$, a vector discussed e.g. by Jackson \cite{jackson}.
Its coupling to $J_2$ in the self-consistent equations leads to a minimum value
for the diffusion constant. We have worked out this theory assuming the effective medium to be the same for transverse and longitudinal waves, and
characterized by a single complex wave number $k_e + i/2 \ell$ that was calculated numerically for frequencies near the resonance.
An unfortunate technical complication is that the self-consistent theory - even in its scalar version applied to point-like particles - suffers from a genuine
singularity at large wave numbers that has not been
discussed before in the literature. This divergence is actually identical to the one encountered in Sec.~\ref{kubosection}, where it was seen to cancel in low orders of the density when
summing all diagrams.
We postulate that this singularity is artificial and, for scalar waves, recover the usual results in the literature. The resulting self-consistent theory for electromagnetic waves
is seen to be in good agreement with the
exact numerical results.
\section{Effective medium theory of electromagnetic waves}\label{sectionEM}
In standard transport theory \cite{PR}, the dispersion and extinction of waves are described by a complex self-energy $\mathbf{\Sigma} (k, \mathbf{p})$, associated with the effective medium. For
electromagnetic waves this is a second-rank tensor,
depending on frequency and wave
vector $\mathbf{p}$. Scattering between two states in phase space is described by the four-rank scattering vertex $\mathbf{U}_{\mathbf{p}\mathbf{p}'}(k)$.
{In this work we disregard optical absorption and assume throughout the conservation
of electromagnetic energy in multiple scattering, as expressed }
by the Ward identity \cite{PR,sheng},
\begin{equation}\label{ward}
{-\mathrm{Im}\, \mathbf{\Sigma}(k+i\epsilon, \mathbf{p})}= \sum_{\mathbf{p}'} \mathbf{U}_{\mathbf{p}\mathbf{p}'}(k) \cdot -\mathrm{Im} \, \mathbf{G}(k+i\epsilon,
\mathbf{p}')
\end{equation}
with the notation $\mathrm{Im}\,\mathbf{ A} \equiv (\mathbf{A} - \mathbf{A}^*)/2i$
{where $\mathbf{A}^*$ denotes the Hermitian conjugate of a $3 \times 3$ matrix $\mathbf{A}$ ($A^*_{ij} = \bar{A}_{ji}$)}.
The left hand side stands for the extinction of an electromagnetic excitation at wave vector $\mathbf{p}$, the right hand side puts this equal to the elastic scattering of the
same excitation from $\mathbf{p}$ towards all other accessible states $\mathbf{p}'$ in the phase space.
The ``spectral tensor'' $-\mathrm{Im} \, \mathbf{G}(k+i\epsilon, \mathbf{p}')$ is positive (as $\epsilon \downarrow 0$, for positive frequencies)
and determines the availability of microstates
at the wave vector $\mathbf{p}'$, given the frequency $\omega = k c_0$ that is conserved in elastic scattering. For convenience we will drop explicit reference to $\epsilon$ and assume its presence in $k +i\epsilon$
implicitly. Both $\mathbf{\Sigma}(k, \mathbf{p})$ and
$ \mathbf{U}_{\mathbf{p}\mathbf{p}'}(k)$
will be discussed in more detail below.
\subsection{Dyson Green's function}\label{greensection}
In Fourier space the Dyson Green's tensor of an electromagnetic ``quasiexcitation''
with frequency $\omega = kc_0$ and wave vector $\mathbf{p}$ of the
effective medium is given by \cite{PR}
\begin{eqnarray}\label{dyson}
\mathbf{G}(k, \mathbf{p}) &=& \left[k^2 - p^2\mathbf{ \Delta}_p -\mathbf{\Sigma}(k, \mathbf{p})\right]^{-1} \nonumber \\
&=& \frac{\mathbf{\hat{p}}\mathbf{\hat{p}} }{k^2 -{\Sigma_L}(k, {p})} + \frac{\mathbf{ \Delta}_p}{k^2 -p^2 -{\Sigma_T}(k,p)}
\end{eqnarray}
split up into a longitudinal and a
transverse part, with $\mathbf{\Sigma}(k, \mathbf{p}) = {\Sigma_L}(k, {p}) \mathbf{\hat{p}}\mathbf{\hat{p}}
+ {\Sigma_T}(k, {p})\mathbf{ \Delta}_p$, with $\mathbf{ \Delta}_p = \mathbf{1} - \mathbf{\hat{p}}\mathbf{\hat{p}} $ the projection tensor to transverse states.
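Since $\mathbf{\hat{p}}\mathbf{\hat{p}}$ and $\mathbf{ \Delta}_p$ are complementary orthogonal projectors, the matrix inverse in Eq.~(\ref{dyson}) block-diagonalizes into these two terms; this can be verified numerically (a toy check with arbitrary complex scalar self-energies, not a physical computation):

```python
import numpy as np

def dyson_green(k2, p_vec, sigma_L, sigma_T):
    """Compare the direct inverse of k^2 - p^2 Delta_p - Sigma with the
    longitudinal/transverse split of Eq. (2)."""
    p2 = p_vec @ p_vec
    php = np.outer(p_vec, p_vec) / p2          # longitudinal projector p.p/p^2
    delta = np.eye(3) - php                    # transverse projector Delta_p
    sigma = sigma_L * php + sigma_T * delta
    direct = np.linalg.inv(k2 * np.eye(3) - p2 * delta - sigma)
    split = php / (k2 - sigma_L) + delta / (k2 - p2 - sigma_T)
    return direct, split, delta

k2 = 1.0                                       # k^2 = (omega / c0)^2
direct, split, delta = dyson_green(k2, np.array([0.3, -1.2, 0.7]),
                                   sigma_L=0.2 + 0.05j, sigma_T=-0.1 + 0.3j)
```

The two expressions agree to machine precision, and $\mathbf{\Delta}_p$ annihilates $\mathbf{p}$ and has trace $2$, counting the two transverse polarizations.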
In transport theory, the tensor $\mathbf{G }(k,\mathbf{p})\otimes \mathbf{G}^*(k,\mathbf{p}') $ is the building block of multiple scattering, and it is important to understand
$ \mathbf{G }(k,\mathbf{p})$ in great detail on all scales.
The longitudinal part of $\mathbf{G}(k, \mathbf{p})$ is associated with local Coulomb interactions between
induced charges inside scatterers, often referred to as ``non-radiative, static'' dipole-dipole coupling at a distance. The transverse part describes propagating waves. In the following we investigate both components in real space at small and large distances. We demonstrate that at small distances the longitudinal part
of the Dyson Green's function quite generally dominates and takes the form of
dipole-dipole coupling with the usual Lorentz contact term \cite{jackson}; surprisingly, it turns out not to be static. At large distances
only transverse excitations contribute, and $\mathbf{G}(k,\mathbf{r}) $ reduces, under very general conditions, to an exponentially small
propagating excitation with polarization transverse to the direction of propagation $\mathbf{r}$.
This implies that $\mathbf{G}(k,\mathbf{r})$ contains the familiar near and far fields of electromagnetism, without the need to add the first
by hand \cite{theoL}. At large distances the traditional picture, described earlier, emerges.
In real space, the Green's tensor $\mathbf{G}(k,\mathbf{r})$ is the Fourier transform of Eq.~(\ref{dyson}) and describes the propagation of electromagnetic
waves over a distance $\mathbf{r}$ in the effective medium.
The near-field component is ``non-radiative'' in the sense that a longitudinal field $\mathbf{E} \parallel \mathbf{k}$ induces no magnetic field, since
$k \mathbf{B } \sim \mathbf{k }\times \mathbf{E}=0$. By itself it therefore carries no Poynting vector. However, we will show later in this work that
the interference of longitudinal and transverse components in the tensor product $\mathbf{G } \otimes \mathbf{G}^* $ does carry a Poynting vector
and facilitates a new channel to transport energy.
With $K_L^2(p) \equiv k^2 - {\Sigma_L}(k,p)$ the square of a complex longitudinal wave vector, one obtains in real space,
\begin{eqnarray}\label{GL}
\mathbf{G}_L(k,\mathbf{r}) &=& \sum_\mathbf{p} \frac{\mathbf{\hat{p}}\mathbf{\hat{p}}}{K_L^2(p)} \exp(i\mathbf{p}\cdot\mathbf{ r})
\nonumber \\
\, &=& \frac{\delta(\mathbf{r})}{3 K_L^2(\infty)} + \frac{1-3\mathbf{\hat{r}}\mathbf{\hat{r}}}{4 \pi K_L^2(\infty) r^3} + \mathbf{D}(\mathbf{r})
\end{eqnarray}
where we have split off the singularity of the integral at large wave numbers, leaving the remaining term $\mathbf{D}(\mathbf{r})$ as a
contribution
to the traceless dipole-dipole coupling described by the second term. Since $\mathbf{D}(\mathbf{r})$ is, by construction, the Fourier transform of a function that decays to zero for large $p$,
it is free from a Dirac distribution, and even non-singular as $\mathbf{r}\rightarrow 0$. We will show this explicitly in Sec.~\ref{sec2dipoles} for the recurrent scattering from two dipoles. As a result,
the first two terms in Eq.~(\ref{GL}) dominate on small scales.
The first term, the subtle Lorentz contact term, is a genuine Dirac distribution: it vanishes for $\mathbf{r}\neq 0$ but makes a genuine contribution to the DOS at $\mathbf{r}=0$.
Since the transverse field $\mathbf{G}_T(\mathbf{r}) \sim 1/r$
for $kr < 1 $ is much less singular, we conclude that
\begin{eqnarray}\label{Gr0}
\mathbf{G}(k,\mathbf{r}\rightarrow 0) \rightarrow \mathbf{G}_{0,L}(K_L(\infty),\mathbf{r})
\end{eqnarray}
This takes the same form as the familiar dipole-dipole regime of the bare Green's function $\mathbf{G}_0(\mathbf{r})$, with, however, the vacuum wave number $k= \omega/c_0$ replaced by the complex-valued, frequency-dependent wave number $K_L(\infty)$.
For finite-size dielectric scatterers one may argue that at small
scales, described by $p\rightarrow \infty$, the effective medium is homogeneous and $ K_L(\infty)$ must be some real-valued wave number.
For atomic dipolar scatterers, however, we will see that the complex value of
$K_L(p)$ persists up to infinite $p$.
The complex value of $ K_L(\infty)$ indicates that the dipole-dipole coupling, dominating in the near field,
is not static but depends on frequency and contributes to the DOS. In Sec.~\ref{sectionDOS} we will calculate $K_L(\infty)$ numerically in all orders of the density for a model of randomly positioned electric dipoles.
At long distances $kr \rightarrow \infty$, small wave numbers prevail in Eq.~(\ref{GL}) so that
\begin{equation}\label{DD}
\mathbf{G}_L(k,\mathbf{r}\rightarrow \infty) = \frac{1-3\mathbf{\hat{r}}\mathbf{\hat{r}}}{4 \pi K_L^2(0) r^3}
\end{equation}
with $K_L(p)$ now evaluated at $p=0$. If this propagator were not compensated, the far field would contain an algebraically small longitudinal term
that would severely affect the random-walk picture of transverse electromagnetic wave transport.
However, it is compensated very generally by a part of the transverse
propagator $\mathbf{G}_T(k,\mathbf{p})$.
For $kr \gg 1$ it is useful to make the following decomposition,
\begin{eqnarray}\label{GT}
\mathbf{G}_T(k,\mathbf{r}) &=& \sum_\mathbf{p} \frac{\mathbf{\Delta}_p}{K^2_T(p) -p^2} \exp(i\mathbf{p}\cdot\mathbf{ r}) \nonumber \\
&=&\frac{1}{2\pi^2}\left(-\bm{\nabla}^2 + \bm{\nabla} \bm{\nabla}\right)
\frac{1}{2ir}\int_\Gamma dp \frac{e^{ipr}}{p}
\frac{1}{K^2_T(p) -p^2} \nonumber \\
&+& \left(-\bm{\nabla}^2 + \bm{\nabla} \bm{\nabla}\right) \frac{1}{4\pi K_T^2(0)r}
\end{eqnarray}
Here $\Gamma$ denotes the line $(-\infty, +\infty)$ that avoids the origin $p=0$ via a small detour into the upper complex $p$-plane; this detour generates the last term.
In the far field, since necessarily $K_T(0)= K_L(0)$, the last term of Eq.~(\ref{GT}) cancels \emph{exactly }against the longitudinal far field in Eq.~(\ref{DD}).
The Green's function $\mathbf{G}(k, \mathbf{r})$ as a whole is therefore determined by the denominator of the first
term and
\begin{eqnarray}\label{GTL}
\mathbf{G}(k,\mathbf{r}\rightarrow \infty) =
\frac{\mathbf{\Delta}_r }{4\pi^2 ir}\int_{-\infty}^{\infty} dp \,
\frac{p\, e^{ipr}}{K^2_T(p) -p^2}
\end{eqnarray}
This indicates that the electric field is asymptotically dominated by transverse modes and also transverse to the direction of propagation $\mathbf{r}$.
If $K_{T}(p)$ has an analytical extension at least over a small sheet $\mathrm{Im}\, p < K_T''$ in the upper
complex $p$-plane, $\mathbf{G}(k,\mathbf{r})$ will
decay at least as $\exp (-K_T''r)/r$. Different ``effective medium'' approaches exist to calculate $\mathbf{G}(k,\mathbf{r})$ for various models \cite{sheng}.
The easiest method is to assume a simple pole at
$p = K_{T} = k_T + i/2\ell $, in
which case normal exponential behavior emerges, with
the amplitude decay length equal to (twice) the elastic scattering mean free path $\ell$.
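To illustrate the pole approximation, the contour integral in Eq.~(\ref{GTL}) can be closed in the upper half plane; the residue at $p = K_T$ then gives $\mathbf{G}(k,\mathbf{r}) \sim -\mathbf{\Delta}_r\, e^{iK_T r}/4\pi r$. The following minimal numerical sketch (with illustrative, hypothetical parameter values, not taken from the text) verifies the resulting decay length of the amplitude:

```python
import numpy as np

# Illustrative parameters (hypothetical): vacuum wave number k and
# elastic mean free path ell, with k*ell = 5.
k, ell = 1.0, 5.0
K_T = k + 1j / (2 * ell)          # assumed simple transverse pole

# Closing the contour of Eq. (GTL) in the upper half plane picks up the
# residue at p = +K_T, giving G(k, r) ~ -Delta_r * exp(i K_T r)/(4 pi r).
def G_far(r):
    return -np.exp(1j * K_T * r) / (4 * np.pi * r)

# The amplitude decays as exp(-r/(2*ell))/r, i.e. with decay length 2*ell.
r1, r2 = 20.0, 30.0
ratio = abs(G_far(r2)) / abs(G_far(r1))
assert np.isclose(ratio, (r1 / r2) * np.exp(-(r2 - r1) / (2 * ell)))
```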
We conclude that the Green's tensor of the effective medium has a true longitudinal component
($\partial_i G_{ij}(k,\mathbf{r}) \neq 0$) that affects wave propagation at small scales $r < 1/k$.
In the far field, the electric field is always transverse to propagation ($\hat{r}_i G_{ij}(k,\mathbf{r}) = 0$).
Decay is exponential under broad conditions, with decay length $\ell$. This implies that radiative transfer should still be compatible with a
random walk of step length $\ell$, though possibly with new
mechanisms for energy transport
in the near field, provided by the presence of longitudinal fields, which can become dominant when $k\ell \approx 1$.
This idea will be worked out concretely in the next subsections for an ensemble of randomly distributed dipolar electric scatterers (``dipoles'' for short).
\subsection{Independent electric dipole scattering}\label{section1D}
In the independent scattering approximation (ISA) applied to point-like electric dipole scatterers with number density $n$ and $T$-matrix $t(k)$,
$\mathbf{\Sigma}_{\mathrm{ISA}}(k, \mathbf{p}) = nt(k)\,\mathbf{1}$. In this work we assume each dipole to be impenetrable to
light, and to support only longitudinal excitations in its vicinity, at scales much smaller than the wavelength. This conveniently labels material energy as longitudinal
states that take part in the scattering process. By definition, the $T$-operator of a general polarizable scatterer perturbs wave propagation in
free space according to $\mathbf{G}(k) = \mathbf{G}_0(k) + \mathbf{G}_0(k) \cdot \mathbf{T}(k)\cdot
\mathbf{ G}_0(k)$. If we set $\mathbf{T}(k)= | \mathbf{r}_d \rangle \mathbf{t}(k)\langle \mathbf{r}_d | $ to describe an electric
dipole at position $\mathbf{r}_d$, and impose
$\langle \mathbf{r} | \mathbf{G}({k}) |\mathbf{r}_d \rangle =0 $ for any $\mathbf{r }\neq \mathbf{r}_d$ for it to be ``impenetrable'', then it follows that
\begin{equation}\label{born}
\mathbf{t}(k) = \frac{-1}{\langle \mathbf{r}_d | \mathbf{G}_0({k}) |\mathbf{r}_d \rangle} = - \left[\sum_\mathbf{p}
\left( \frac{\mathbf{\hat{p}}\mathbf{\hat{p}} }{k^2} + \frac{\mathbf{ \Delta}_p}{k^2 -p^2 + i0}\right)\right]^{-1} \
\end{equation}
This model can be refined to acknowledge finite penetration of light into the dipoles \cite{theo},
but the present choice highlights the role of longitudinal waves and is arguably
the best description of elastic scattering from an atom without going into the details of atomic physics. Both the longitudinal and the
transverse integral diverge, the first essentially due to the Lorentz contact term.
We will regularize the first as
$ \sum_\mathbf{p} \mathbf{\hat{p}}\mathbf{\hat{p}} = \mathbf{1}/(3u)$ and the transverse part as $ \sum_\mathbf{p} \mathbf{ \Delta}_p/p^2 = \mathbf{1}/(6\pi \Gamma) $.
It follows that
\begin{equation}\label{tED}
\mathbf{t}(k)= \frac{-6\pi \Gamma k^2}{k_0^2 -k^2 - i k^3 \Gamma }
\end{equation}
Both $\Gamma$ (with dimension of length) and $u$ (a volume) are genuine properties of the dipole, independent of frequency or
polarization of the light. In particular $k_0^2 = 2\pi \Gamma /u$ determines the resonant frequency of the dipole. For $k=k_0$ longitudinal and transverse
singularities, opposite in sign, cancel each other.
For small $k$, the static
polarizability $\alpha(0)$ is related to the $t$-matrix as $t =-\alpha(0) k^2 $ \cite{PR}, and we can identify $\alpha(0) = 3u $. This relation can be understood from classical electrodynamics.
We recall the Lorentz relation $\mathbf{E}(0) = \mathbf{E}-
\frac{1}{3} \mathbf{P}$ for the homogeneous electric field inside the dipole, assumed spherical. Since we have imposed $\mathbf{E}(0) = 0$, the polarization density must
equal $3$ times the local electric field
$\mathbf{E}$. The dipole moment is thus $ u \mathbf{P} \equiv \alpha(0)\mathbf{ E} = 3 u \mathbf{E}$ with $u$ the volume of the dipole, and hence $\alpha(0) = 3u$.
The line width in frequency near the resonance is related to $\Gamma$ according to
$\gamma = k_0^2c_0\Gamma = \alpha(0) k_0^4 c_0/6\pi $, a known relation
for the radiative decay rate of a semi-classical two-level atom in the electric-dipole approximation \cite{AllanEberly}. We can identify the quality factor
$Q_0= \omega_0/\gamma = 6\pi /\alpha(0) k_0^3$. Near the resonance, we can thus write
\begin{equation}\label{tEDreso}
\mathbf{t}(k= \omega/c_0)= -\frac{6\pi}{k_0} \frac{\gamma/2 }{\omega_0 - \omega - i\gamma/2 }
\end{equation}
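As a numerical cross-check (with hypothetical parameter values), the following sketch verifies that the full $t$-matrix of Eq.~(\ref{tED}) reduces to the resonant form of Eq.~(\ref{tEDreso}) at small detuning, and that $Q_0 = \omega_0/\gamma = 1/k_0\Gamma$:

```python
import numpy as np

# Hypothetical dipole parameters (illustration only)
c0, Gamma, k0 = 1.0, 1e-3, 1.0
gamma = k0**2 * c0 * Gamma            # radiative line width
Q0 = k0 * c0 / gamma                  # quality factor omega0/gamma
assert np.isclose(Q0, 1.0 / (k0 * Gamma))

def t_full(k):                        # Eq. (tED)
    return -6*np.pi*Gamma*k**2 / (k0**2 - k**2 - 1j*k**3*Gamma)

def t_reso(omega):                    # Eq. (tEDreso), near resonance
    return -(6*np.pi/k0) * (gamma/2) / (k0*c0 - omega - 1j*gamma/2)

omega = k0*c0 + 0.2*gamma             # small detuning, delta = 0.2
rel_err = abs(t_full(omega/c0) - t_reso(omega)) / abs(t_reso(omega))
assert rel_err < 1e-2                 # the two forms agree near resonance
```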
The $t$-matrix satisfies the optical theorem,
\begin{equation*}
-\mathrm{Im}\, t = \sum_{\mathbf{p}'} {|t(k)|^2} \cdot \mathbf{ \Delta}_p \, \pi \delta (k^2-p^2)= \frac{|t(k)|^2 k }{6\pi}
\end{equation*}
This expression is consistent with Eq.~(\ref{ward}), worked out linearly in the dipole density $n$ on both sides,
with $U^{\mathrm{ISA}}_{\mathbf{pp}'}= n|t(k)|^2$ the ISA collision operator and
$\mathbf{\Sigma}_{\mathrm{ISA}}(\mathbf{p}) = nt(k) $. For its relative simplicity, many exact
numerical simulations have been carried out with media filled randomly with electric dipoles \cite{felipe, sergey0, pool, remi},
and many theoretical treatments exist already \cite{jpc, bartpre, Dalibard}, not only because one can go far without making further approximations
but also because they constitute a good and complete model for multiple scattering of light from simple atoms. We notice that the $t$-matrix of a single dipole is independent
of polarization and of both $\mathbf{p}$ and $\mathbf{p}'$. As a result, a single dipole can scatter microstates with an arbitrary state of polarization and arbitrary $\mathbf{p}$ towards arbitrarily large $\mathbf{p}'$.
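The optical theorem quoted above follows directly from the algebraic form of Eq.~(\ref{tED}); a short numerical sketch (hypothetical parameter values) checks it on and off resonance:

```python
import numpy as np

Gamma, k0 = 1e-2, 1.0                 # hypothetical dipole parameters

def t(k):                             # Eq. (tED)
    return -6*np.pi*Gamma*k**2 / (k0**2 - k**2 - 1j*k**3*Gamma)

# Optical theorem: -Im t(k) = |t(k)|^2 * k / (6*pi)
for k in (0.5, 0.9, 1.0, 1.3):
    assert np.isclose(-t(k).imag, abs(t(k))**2 * k / (6*np.pi))
```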
\begin{figure}
\includegraphics[width=7cm]{realboomerang.pdf}\\
\includegraphics[width=7cm]{imaginaryboomerang.pdf}
\caption{Real part (top, on resonance $\omega = \omega_0$) and imaginary part (bottom, for a detuning {$\delta = (\omega-\omega_0)/\gamma = -0.75$}) of the ``boomerang'' self-energy
associated with two dipoles as a function of wave number $p$.
The latter is expressed in units of the free-space wave number $k=\omega/c_0$; the self-energy is expressed in units of $(4\pi n/k^3)^2 \times k^2$.
The transverse self-energy $\Sigma_T(k,p)$ converges asymptotically to zero (dashed line) for all detunings,
meaning that the Lorentz local field term
$\Sigma_{LL} (k,p) = - \frac{1}{3} n^2 t^2/k^2 $, part of the boomerang diagrams but independent of $p$, is canceled. The longitudinal self-energy $\Sigma_L(k,p)$,
on the other hand, converges asymptotically to
$-n^2t^2/k^2$
(dashed line), as expressed by Eq.~(\ref{sigma2}). The characteristic wave number to reach the asymptotic constant value is $p = 3k$ and is associated with the triple round
trip of light between two dipoles in the
boomerang diagrams, as described by
Eq.~(\ref{self}).}\label{boom}
\end{figure}
\subsection{Extinction involving two electric dipoles}
\label{sec2dipoles}
The extinction caused by recurrent scattering from two dipoles was discussed in Ref. \cite{bartpre} for scalar waves, in Ref. \cite{wellens} for low-energy electrons, and in Refs. \cite{cherro,kwong,jpc} for electromagnetic waves. The last two works mainly focused on diffusion of transverse light, but used the full Green's tensor~(\ref{dyson}) to describe recurrent scattering.
In Ref.~\cite{kwong} correlations between dipoles were included and
compared successfully to numerical simulations.
Despite the singular Green's tensor $\mathbf{G}_0(k, \mathbf{r})$ in the near field,
no new divergences were encountered provided the whole series of recurrent scattering is summed.
In the following section we will explicitly include the longitudinal field in the transport. To that end, we need to understand the behavior of the self-energy tensor $ \mathbf{\Sigma}(k,\mathbf{p})$
at large $p$. The self-energy involving one or two different dipoles is given by \cite{jpc}
\begin{eqnarray}\label{self}
\mathbf{\Sigma}(k, \mathbf{p}) &=& nt \mathbf{1} + n^2 \int d^3 \mathbf{r} \frac{t^3 \mathbf{G}_0^2(\mathbf{r})}{\mathbf{1}-t^2 \mathbf{G}_0^2(\mathbf{r})} \nonumber \\
&+& n^2 \int d^3 \mathbf{r} \frac{t^4\mathbf{G}_0^3(\mathbf{r})}{\mathbf{1}-t^2 \mathbf{G}_0^2(\mathbf{r})}
e^{i\mathbf{p}\cdot \mathbf{r}} + \mathcal{O}\left(n^3 \ln n\right)\;\;\;\;\;\;\;
\end{eqnarray}
We have dropped the explicit reference to $k= \omega/c_0$ in $t(k)$ and in $\mathbf{G}_0(k+i\epsilon,\mathbf{r})$. The first term is the ISA, the second term involves recurrent loops between two dipoles.
They are both independent of $p$ and necessarily isotropic tensors. We will show in Sec.~\ref{sectionDOS} that, in our model, loop diagrams of arbitrary order
rigorously determine the energy stored in longitudinal modes, and exploit this notion numerically.
The third term, summing up the so-called boomerang diagrams $\mathbf{\Sigma}_B$ (see Fig. \ref{boom}) provides the first $p$-dependent contribution and causes $ \Sigma_T(p) \neq \Sigma_L(p)$.
Higher orders in number density involve 3 different dipoles or more.
The boomerang diagrams generate a subtle contribution via the Lorentz contact term $\delta (\mathbf{r}) /3 k^2$ in
$\mathbf{G}_0(\mathbf{r})$ \cite{Dalibard, nienhuis}, which gives rise to the well-known Lorenz-Lorentz correction $ - n^2 t^2/3k^4$ to both the
longitudinal and transverse dielectric functions, and that is independent of $p$. {Adding a small anti-correlation between the dipoles to avoid
their physical overlap does not eliminate this
correction but rather transfers it to a new correlation diagram} \cite{nienhuis}.
Nevertheless, as $p\rightarrow \infty$, this term is compensated, again subtly, in the transverse self-energy and reappears as a purely
longitudinal self-energy.
This can be seen by subtracting the transverse photon field
$\mathbf{G}_{0,T} (\mathbf{r})$ in free space, derived in Eq.~(\ref{GT}) for the effective medium, which is free from the Lorentz contact term. The
boomerangs become,
\begin{eqnarray}\label{boom2}
\mathbf{\Sigma}_B(\mathbf{p}) &=& n^2 t^2 \int d^3 \mathbf{r}
\, \left[\frac{\mathbf{G}_0(\mathbf{r})}{1-t^2\mathbf{G}_0^2(\mathbf{r})} - \mathbf{G}_{0,T}(\mathbf{r}) \right] \exp(i\mathbf{p}\cdot \mathbf{r}) \nonumber \\
& +& n^2 t^2 \int d^3 \mathbf{r}
\, \left[\mathbf{G}_{0,T}(\mathbf{r}) - \mathbf{G}_0(\mathbf{r}) \right] \exp(i\mathbf{p}\cdot \mathbf{r})
\end{eqnarray}
In the first term, the Lorentz contact term at $\mathbf{r}=0$ no longer contributes and the integral vanishes for large $p$.
The second integral is just equal to (minus) the longitudinal Green's tensor $\mathbf{G}_{0,L}(k,\mathbf{p})$
in Fourier space. Hence we find the somewhat surprising relation for infinite $p$,
\begin{equation}\label{sigma2}
\lim_{p\rightarrow\infty} \mathbf{\Sigma} (\mathbf{p}) = \left({\Sigma}_{\mathrm{ISA}} + {\Sigma}_{\mathrm{Loop}}\right)\mathbf{1} -\frac{n^2t^2}{k^2}\mathbf{\hat{p}}\mathbf{\hat{p}}
\end{equation}
Figure~\ref{boom} illustrates
by numerical integration of Eq.~(\ref{self}) that for large wave vectors, the Lorentz contact term is canceled in the transverse (boomerang) self-energy.
It always converges to zero, whereas the longitudinal boomerang self-energy converges asymptotically to $\Sigma_L(p) = -n^2t^2/k^2$. Neither of them
converges to $ -n^2t^2/3k^2$, the value associated with the Lorentz contact term.
The asymptotic limit established in Eq.~(\ref{sigma2}) is important
since it demonstrates that $K_L(\infty) \neq K_T(\infty)$; the former was introduced earlier in Eq.~(\ref{GL}) and describes the
dynamic dipole-dipole coupling in the near field.
It is instructive to calculate the longitudinal Green's function~(\ref{GL}) associated with the self-energy in Eq.~(\ref{self}). Only the boomerang
diagrams $\Sigma_B$ depend on wave number $p$. Hence, up to order $n^2$,
\begin{eqnarray*}
\mathbf{G}_L(k, \mathbf{r}) &=& \sum_\mathbf{p} \hat{\mathbf{p}}\hat{\mathbf{p}} \frac{1}{k^2 - \Sigma_L (p)}\exp(i\mathbf{p}\cdot \mathbf{r})\\
= &-&\bm{\nabla} \bm{\nabla} \cdot \sum_\mathbf{p} \left[ \frac{\mathbf{1}}{k^2- \Sigma_0} + \frac{1}{k^4}
\mathbf{\Sigma}_B(\mathbf{p}) \right] \frac{\exp(i\mathbf{p}\cdot \mathbf{r}) }{p^2}
\end{eqnarray*}
with $\Sigma_0= \Sigma_{\mathrm{ISA}} + \Sigma_{\mathrm{Loop}}$. Upon inserting the boomerang diagrams and using
$-\bm{\nabla} \bm{\nabla} (1/4\pi r) = \delta(\mathbf{r})/3+ (1- 3\hat{\mathbf{r}}\hat{\mathbf{r}})/4\pi r^3 = k^2 \mathbf{G}_{0,L}(\mathbf{r})$, one obtains,
\begin{eqnarray*}
\mathbf{G}_L(k, \mathbf{r}) &= & \mathbf{G}_{0,L}( (k^2 -\Sigma_0)^{1/2},\mathbf{r} ) \\
&+& \frac{n^2}{k^2} \int d^3\mathbf{r}' \, \mathbf{G}_{0,L}(\mathbf{r}-\mathbf{r}')\cdot
\frac{ t^4 \mathbf{G}^3_0(\mathbf{r}')}{1-t^2\mathbf{G}_0^2(\mathbf{r}')}
\end{eqnarray*}
This determines the longitudinal Green's tensor at all distances, and it depends on frequency at all distances. The first term stands for ordinary dipole-dipole coupling of the type $1/r^3$,
with a prefactor modified by the effective medium; the modification arises because we consider the electromagnetic Green's tensor and not the potential energy of the dipoles.
The second term genuinely modifies the propagation from $\mathbf{r}=0$ to $\mathbf{r}$: a dipole situated at $\mathbf{r}=0$ first couples, via a high-order
dipole interaction, to
a dipole at $\mathbf{r}'$ (a single coupling is already counted in the effective medium) before the excitation finally arrives at $\mathbf{r}$.
In the following we show 1) that this coupling fully disappears at large distance (contrary to Ref.~\cite{theoL})
and 2) that for small distances we recover the dipole-dipole coupling found earlier in Eq.~(\ref{GL}), with the complex wave number $K_L(\infty)$.
For $kr\gg 1 $, we can take $\mathbf{G}_{0,L}(\mathbf{r})$ out of the integral, and recognize the remainder as the boomerang self-energy at $p=0$. Hence,
\begin{eqnarray}\label{DrL}
\mathbf{G}_L(k, \mathbf{r} \rightarrow \infty ) = \mathbf{G}_{0,L}(K_L(0),\mathbf{r})
\end{eqnarray}
This result agrees with Eq.~(\ref{DD}) and was seen to cancel against a similar term in the transverse part of the Dyson Green's function. For $kr \ll 1$ we can write,
\begin{eqnarray*}
\mathbf{G}_L(k ,\mathbf{r}\rightarrow 0) &= & \mathbf{G}_{0,L}((k^2 -\Sigma_0)^{1/2} ,\mathbf{r}) \\
&-&\frac{n^2 t^2 }{k^2} \int d^3\mathbf{r}' \, \mathbf{G}_{0,L}(\mathbf{r}-\mathbf{r}')\cdot
\mathbf{G}_0(\mathbf{r}') \\
&+& \frac{n^2}{k^2} \int d^3\mathbf{r}' \, \mathbf{G}_{0,L}(\mathbf{r}-\mathbf{r}')\cdot
\frac{ t^2 \mathbf{G}_0(\mathbf{r}')}{1-t^2\mathbf{G}_0^2(\mathbf{r}')}
\end{eqnarray*}
The last term is regular and $\mathbf{r}=0$ can be inserted. The second term is equal to $-(n^2t^2/k^4) \mathbf{G}_{0,L}(\mathbf{r})$ and adds up to
the first term. Since by Eq.~(\ref{sigma2}) we have
$K_L^2(\infty) = k^2 -\Sigma_0 + n^2t^2/k^2$,
\begin{eqnarray}\label{DrS}
\mathbf{G}_L(k,\mathbf{r} \rightarrow 0 ) &= & \mathbf{G}_{0,L}(K_L(\infty),\mathbf{r}) +\mathbf{D}(0)\nonumber
\end{eqnarray}
This agrees with Eq.~(\ref{GL}) and attributes a finite complex, frequency-dependent value to $\mathbf{D}(\mathbf{r}=0)$,
\begin{equation}
\mathbf{ D}(0)= \frac{n^2t^2}{k^2} \int d^3\mathbf{r}' \, \frac{ \mathbf{G}_{0,L}(\mathbf{r}')\cdot
\mathbf{G}_0(\mathbf{r}')}{1-t^2\mathbf{G}_0^2(\mathbf{r}')}
\end{equation}
We note that $\mathbf{D}(0)$ is negligible compared to the dipolar coupling $G_L \sim 1/r^3$.
\subsection{Density of states}\label{sectionDOS}
In this section we derive the density of states (DOS) of electromagnetic waves in disordered media, express it in terms of the effective medium,
identify its longitudinal part (DOLS) and calculate it for our model of randomly positioned
electric dipoles with volume number density $n$.
The total electromagnetic spectral density at frequency $\omega = kc_0$ in a polarizable medium is defined by
\begin{equation*}
N_{tot}(k) = \frac{|k|}{c_0} \mathrm{TR}\, \delta\left(k^2 - \mathcal{H}\right)
\end{equation*}
with $\mathcal{H} = \varepsilon(\mathbf{r})^{-1/2} (\mathbf{p}^2-\mathbf{pp})\varepsilon(\mathbf{r})^{-1/2}$ the Helmholtz operator and $\mathrm{TR}$ the trace
in the Hilbert space spanned by all
eigenfunctions, including a strongly degenerate longitudinal eigenspace with eigenvalue $0$. Written in this way, the spectral density is defined (and equal) for positive and negative frequencies and
normalized to the dimension of the Hilbert space,
\begin{equation*}
\int_{-\infty}^\infty d\omega \, N_{tot} (k) = \mathrm{TR}
\end{equation*}
independent of $\varepsilon (\mathbf{r})$, and formally infinite. We can work out the trace in real space as
\begin{equation*}
N_{tot}(k) = \int d^3\mathbf{r}\, \frac{|k|}{c_0} \langle \mathbf{r}| \mathrm{Tr}\, \delta\left(k^2 - \mathcal{H}\right)|\mathbf{r}\rangle
\end{equation*}
with $\mathrm{Tr}$ the trace over 3 polarizations only, and identify the integrand as the local density of states,
\begin{equation*}
N(k,\mathbf{r})= -\frac{k}{c_0} \frac{1}{\pi } \mathrm{Im} \mathrm{Tr}\, \mathbf{G}_\mathcal{H}( k+i\epsilon, \mathbf{r}, \mathbf{r})
\end{equation*}
with $\mathbf{G}_\mathcal{H} = [(k+i\epsilon)^2 - \mathcal{H}]^{-1}$. After
ensemble-averaging it becomes independent of $\mathbf{r}$, and we can express it in terms of the Dyson Green's function (\ref{dyson}),
\begin{eqnarray}\label{DOS1}
\langle N(k,\mathbf{r})\rangle &=& \left\langle -\frac{k}{c_0} \frac{1}{\pi }
\right. \nonumber \\
&\times& \left.
\mathrm{Im} \mathrm{Tr}\,
\langle \mathbf{r}| \, \varepsilon^{1/2}(\mathbf{r})\cdot \mathbf{G}(k+i\epsilon)\cdot\varepsilon^{1/2}(\mathbf{r})
|\mathbf{r}\rangle
\vphantom{\frac{1}{1}}
\right\rangle\nonumber \\
&=& -\frac{k}{c_0} \frac{1}{\pi } \sum_\mathbf{p} \mathrm{Im} \, \mathrm{Tr }\, \frac{p^2 \mathbf{\Delta}_p}{k^2} \cdot \mathbf{G} (k+i\epsilon, \mathbf{p})
\end{eqnarray}
Both lines in this expression count, by construction, all states; quite surprisingly, however, the second line projects on the transverse states only, albeit with a
large weight on large wave numbers $p \gg k$. The reason is that the first line counts electrical energy, including the longitudinal modes, whereas the second
line counts magnetic energy, which has only transverse modes. Equation~(\ref{DOS1}) states that the density of states can be calculated from either the magnetic or the electrical energy, provided
the latter also includes the longitudinal states.
For our model of electric dipoles we expect the DOS to be the sum of transverse traveling waves and stored longitudinal waves.
To show this we go back
to the first line of Eq.~(\ref{DOS1}). For $\varepsilon (\mathbf{r}) = 1 + {\delta\varepsilon}(\mathbf{r})$, we identify $\mathbf{V}=-{\delta\varepsilon}(\mathbf{r})k^2 $
as the interaction operator in the Born series of light scattering \cite{PR}.
Before doing the configurational average, we can consider $M$ dipoles in a finite volume $V$ (see also Appendix \ref{appA}). Rigorous scattering theory imposes the operator identity
$\mathbf{V}\cdot \mathbf{G}(k) = \mathbf{T} \cdot \mathbf{G}_0(k)$. Hence
\begin{eqnarray*}
N(k,\mathbf{r}) &=& -\frac{k}{c_0} \frac{1}{\pi } \mathrm{Im} \mathrm{Tr}
\, \mathbf{G}(k+i\epsilon, \mathbf{r},\mathbf{r}) \\
&+& \frac{k}{c_0} \frac{1}{\pi } \mathrm{Im} \mathrm{Tr} \langle \mathbf{r}| \frac{\mathbf{T}}{k^2} \cdot
\mathbf{G}_0(k+i\epsilon)
|\mathbf{r}\rangle
\end{eqnarray*}
This equation is still exact and depends on the position $\mathbf{r}$. Since the polarizability density ${\delta\varepsilon}(\mathbf{r})$ has disappeared explicitly we can
consider the special case of scattering from identical, impenetrable electric dipoles, associated with a dielectric susceptibility ${\delta\varepsilon}(\mathbf{r}) \rightarrow \infty$, and described
by Eq.~(\ref{born}). For $M$ such dipoles,
\begin{equation}\label{Tmma}
\mathbf{T}(k) = \sum_{mm'}^M \mathbf{T}_{mm'}(k)|\mathbf{r}_m\rangle \langle \mathbf{r}_{m'}|
\end{equation}
with, for $m,m'$ fixed, the $3 \times 3 $ matrix $\mathbf{T}_{mm'}(k)$.
To have $\mathbf{G}(\mathbf{r}_m, \mathbf{r})=0$ inside all dipoles at $\mathbf{r}_m$ and for arbitrary $\mathbf{r}$ outside imposes that $\mathbf{T}_{mm'}(k)$ be
given by the inverse of the $3M \times 3M $ matrix $-\mathbf{G}_0 (k, \mathbf{r}_m, \mathbf{r}_{m'})$.
It easily follows that
\begin{equation*}
\langle \mathbf{r}| \mathbf{T}\cdot \mathbf{G}_0(k+i\epsilon)
|\mathbf{r}\rangle = -\mathbf{1} \sum_{m=1}^M \delta(\mathbf{r}-\mathbf{r}_m) = -n(\mathbf{r})
\end{equation*}
Since this is purely real-valued, it cancels in the expression above for $N(k,\mathbf{r})$. Upon averaging and letting $M,V \rightarrow \infty$ at constant number density,
the remaining term yields
\begin{equation}\label{DOS3}
\langle N(k) \rangle = -\frac{k}{c_0} \frac{1}{\pi } \sum_\mathbf{p} \mathrm{Im} \mathrm{Tr}\,
\mathbf{G}(k+i\epsilon, \mathbf{p})
\end{equation}
in terms of the Dyson Green's function~(\ref{dyson}). This is recognized as
$\langle |\mathbf{E}(\mathbf{r})|^2\rangle$, proportional to the energy density $\langle \mathbf{E}(\mathbf{r})^2\rangle /8\pi $,
averaged over disorder and cycles, and having
both longitudinal $N_L(k)$ and transverse $N_T(k)$ parts.
We emphasize that Eq.~(\ref{DOS3}) applies only to our model, which excludes any light inside the scatterer. As a result
no stored energy density $\langle \mathbf{E}(\mathbf{r})\cdot\mathbf{ P}(\mathbf{r})\rangle$ exists, as it does e.g.\ in Mie scattering \cite{PR}. In this model,
the stored dipole-dipole energy is entirely described by longitudinal (electric) waves.
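The step from Eq.~(\ref{Tmma}) to $\langle \mathbf{r}| \mathbf{T}\cdot \mathbf{G}_0|\mathbf{r}\rangle = -n(\mathbf{r})$ rests on the matrix identity $\mathbf{T}_{mm'} = -[\mathbf{G}_0]^{-1}_{mm'}$ on the $3M$-dimensional space of dipole coordinates. A toy numerical sketch, with a random complex symmetric matrix standing in for $\mathbf{G}_0(k,\mathbf{r}_m,\mathbf{r}_{m'})$ (purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
M = 4                                        # number of dipoles (toy value)
# Random complex symmetric matrix as a stand-in for the 3M x 3M matrix
# G0(r_m, r_m'); reciprocity makes the true matrix symmetric as well.
G0 = rng.normal(size=(3*M, 3*M)) + 1j * rng.normal(size=(3*M, 3*M))
G0 = (G0 + G0.T) / 2

T = -np.linalg.inv(G0)                       # impenetrability condition
# T.G0 = -1 on the dipole subspace, hence <r|T.G0|r> = -sum_m delta(r - r_m)
assert np.allclose(T @ G0, -np.eye(3*M))
```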
\begin{figure}
\includegraphics[width=7cm]{PQdelta1.pdf}
\includegraphics[width=7cm]{PQdelta-1.pdf}
\caption{The contribution of dipole-dipole coupling to the DOS as a function of distance between the dipoles.
The volume integral of the functions shown produces the second term of Eq.~(\ref{DOSISA}). $U_P(r)$ is associated with the electric field
perpendicular to $\mathbf{r}$, $U_Q(r)$ with the electric field directed along $\mathbf{r}$. Top: $\delta = (\omega-\omega_0)/\gamma = -0.5$ (redshift). Bottom: $\delta = 0.5$
(blueshift). $U_P(r)$ has a subradiant peak only for positive detuning, whereas $U_Q(r)$ has one only for negative detuning.
}\label{PQfig}
\end{figure}
We can insert the Dyson Green's function obtained in Sec.~\ref{greensection} into Eq.~(\ref{DOS3}),
\begin{eqnarray*}
\langle N(k) \rangle &=& - \frac{k}{\pi c_0} \mathrm{Im} \left[\sum_\mathbf{p} \frac{1}{K^2_L(\infty)}
\right. \\
&& \left.+ \sum_\mathbf{p} \left( \frac{1}{K^2_L(p)}
- \frac{1}{K^2_L(\infty)} \right)\right. \\ && \left. +2 \sum_\mathbf{p} \frac{1}{K^2_T(p) -p^2} \right]
\end{eqnarray*}
The first, diverging term expresses the singular Lorentz cavity, $ \sum_\mathbf{p} = \delta(\mathbf{r}=0)$.
It is entirely governed by longitudinal excitations, and is regularized using
$\sum_\mathbf{p} = 3/\alpha(0)= k_0^3 Q_0/2\pi $ consistent with Eq.~(\ref{born}). The second term, proportional to $\mathbf{D}(\mathbf{r}=0)$ in Eq.~(\ref{GL}),
is non-zero but smaller by a factor $Q_0$ and will be neglected. Finally, the last term is just the density of states of transverse waves. We shall assume the existence of a
well-defined complex pole $K_T= k_T + i /2\ell$. This gives
\begin{eqnarray}\label{DOSLT}
\langle N(k) \rangle = \frac{k}{2\pi^2 c_0} \left[ - Q_0 \mathrm{Im} \frac{k_0^3}{K^2_L(\infty)} + {k_T}\right]
\end{eqnarray}
The ratio of longitudinal and transverse DOS is thus
\begin{equation}\label{ratioDOSLT}
\frac{ \langle N_L(k) \rangle }{ \langle N_T(k) \rangle } = - Q_0 \frac{k_0^3 }{k_T} \mathrm{Im} \frac{1}{K^2_L(\infty)}
\end{equation}
In view of the factor $Q_0$ this can be a large number, proportional to the density of the dipoles. At low density
$K_L(\infty) \approx K_T \approx k + i /2\ell$, so that $ \langle N_L \rangle /\langle N_T \rangle = Q_0/k\ell$. This ratio will be discussed in the next section.
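In the dilute limit the ratio~(\ref{ratioDOSLT}) can be evaluated explicitly; the short sketch below (with hypothetical values of $Q_0$ and $k\ell$) confirms the estimate $\langle N_L\rangle/\langle N_T\rangle \approx Q_0/k\ell$:

```python
import numpy as np

k0 = 1.0
Q0 = 1e4                     # hypothetical quality factor
kl = 50.0                    # k*ell >> 1 (dilute medium)
k = k_T = k0
ell = kl / k
K_L = k + 1j / (2 * ell)     # low-density limit K_L(inf) ~ k + i/2ell

# Eq. (ratioDOSLT): N_L/N_T = -Q0 * k0^3 / k_T * Im(1/K_L^2)
ratio = -Q0 * k0**3 / k_T * (1 / K_L**2).imag
assert np.isclose(ratio, Q0 / kl, rtol=0.01)   # ~ Q0/(k*ell) = 200
```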
\begin{figure}
\includegraphics[width=8cm]{DOSPQ5.pdf}
\caption{The contribution of dipole-dipole coupling to the DOS (in units of $N_0 \times Q_0 \times (4\pi n/k_0^3)^2$) as a function of
detuning $\delta = (\omega-\omega_0)/\gamma$.
The dashed lines show the separate contributions of the modes with electric field perpendicular ($P$) and parallel $(Q)$ to the separation vector
$\mathbf{r}$.}\label{DOSDD}
\end{figure}
A rigorous expression can be derived for DOS without relying on the existence of a complex pole by taking into account the $p$-dependence of the
self-energy $\Sigma_T(k,p)$ associated with the scattering from two electric dipoles. A direct expansion in dipole density yields
\begin{eqnarray*}
\langle N(k&&) \rangle = -\frac{k}{\pi c_0} \mathrm{Im} \mathrm{Tr}\, \sum_\mathbf{p} \left[ \vphantom{\frac{\, }{\, }}
\mathbf{G}_0(k, \mathbf{p}) \right. \\
&&+ \mathbf{G}_0(k, \mathbf{p}) \cdot \mathbf{\Sigma}(k, \mathbf{p})\cdot \mathbf{G}_0(k, \mathbf{p})\\
&&+ \left.
\mathbf{G}_0(k, \mathbf{p}) \cdot \mathbf{\Sigma}(k, \mathbf{p})\cdot \mathbf{G}_0(k, \mathbf{p})\cdot \mathbf{\Sigma}(k, \mathbf{p})\cdot
\mathbf{G}_0(k, \mathbf{p}) \vphantom{\frac{\, }{\, }} \right] \\
&&+ \mathcal{O}(n^3)
\end{eqnarray*}
Several singular longitudinal terms, stemming from the Lorentz cavity, can be seen to cancel.
The first term describes the free electromagnetic field and the longitudinal field drops out trivially.
The longitudinal component of the second term contains a singular Lorentz cavity $(nt + \Sigma_{\mathrm{Loop}} -n^2t^2/k^2)
\sum_\mathbf{p} \mathbf{G}_L^2(\mathbf{p})$ stemming from Eq.~(\ref{sigma2}).
Similarly, the third term generates a singular longitudinal contribution $n^2t^2 \sum_\mathbf{p} \mathbf{G}_L^3(\mathbf{p})$ that cancels exactly against the local field $-n^2t^2/k^2$ generated by the previous term.
We can work out the wave number integral in the expression for $\langle N(k) \rangle$ exactly by
inserting Eq.~(\ref{self}), and use the cyclic property of the trace,
\begin{eqnarray*}
\langle N(k) \rangle &=& \frac{k^2}{2\pi^2 c_0} + \frac{k}{\pi c_0}\mathrm{Im} \mathrm{Tr}\, \left[
nt \frac{\partial}{\partial k^2} \mathbf{G}_0 (k,0) \right. \\
& +&
n^2 t^3 \int d^3 \mathbf{r} \frac{\mathbf{G}_0^2(\mathbf{r})}{\mathbf{1}-t^2 \mathbf{G}_0^2(\mathbf{r})} \cdot \frac{\partial}{\partial k^2} \mathbf{G}_0 (k,0)
\\
&+& n^2 t^4 \int d^3 \mathbf{r} \frac{\mathbf{G}_0^3(\mathbf{r})}{\mathbf{1}-t^2 \mathbf{G}_0^2(\mathbf{r})} \cdot
\frac{\partial}{\partial k^2} \mathbf{G}_0 (k, \mathbf{r}) \\
&+& \left. n^2t^2 \int d^3\mathbf{ r} \, \mathbf{G}_0 (k, \mathbf{r}) \cdot
\frac{\partial}{\partial k^2} \mathbf{G}_0 (k, -\mathbf{r}) \right]
\end{eqnarray*}
We have transformed the integral over wave vectors $\mathbf{p}$ of the last term $ \mathbf{G}_0 \cdot \mathbf{\Sigma} \cdot \mathbf{G}_0 \cdot \mathbf{\Sigma} \cdot
\mathbf{G}_0$ in $\langle N(k) \rangle$ to real space. Using again the relation $\mathbf{1}/t(k) = - \mathbf{G}_0 (k, \mathbf{r}=0) $, this can be rearranged to
\begin{eqnarray}\label{friedel}
\langle N(k) \rangle &-& N_0(k) = - \frac{3n}{2\pi} \frac{d}{dk} \mathrm{Im}\, \ln t \nonumber \\
&-& \frac{n^2}{4\pi c_0} \frac{d}{d k} \mathrm{Im} \mathrm{Tr} \int d^3 \mathbf{r} \ln \left[\mathbf{1}-t^2 \mathbf{G}_0^2(\mathbf{r}) \right]
\end{eqnarray}
with $N_0 = {k^2}/{2\pi^2 c_0}$ the LDOS of transverse waves in free space. The appearance of a full frequency derivative in the DOS
is a manifestation of Friedel's theorem
\cite{mahan}.
The second term is recognized as the
dipole-dipole energy
expressed as the ``return trip operator'', widely used in the theory of Casimir energy in matter \cite{miloni}, and involves loop paths only. The integral is well-defined at both $r=0$ and $r\rightarrow \infty$. Since the dominant frequency dependence comes from $dt/dk \approx -2Q_0t^2/6\pi$,
\begin{eqnarray}\label{DOSISA}
\frac{ \langle \Delta N(k ) \rangle }{N_0(k)}&=& -\frac{Q_0}{3k^2}
\mathrm{Im }\mathrm{Tr} \left[ {nt }\mathbf{1} + n^2\int d^3 \mathbf{r} \frac{t^3\mathbf{G}_0^2(\mathbf{r})}{1-t^2\mathbf{G}_0^2(\mathbf{r})} \right]
\nonumber
\\ &=& -\frac{Q_0}{k^2} \mathrm{Im } \left( \Sigma_{\mathrm{ISA}} + {\Sigma}_{\mathbf{\mathrm{Loop}}} \right)
\end{eqnarray}
This expression suggests that, in general, the modification of the DOS is dominated by ISA $+$ loop diagrams, describing longitudinal excitations, even if mediated by
transverse, propagating waves.
This, in turn, implies that
the complex longitudinal wave number $K_L(\infty)$ is governed by loop diagrams only. In Appendix \ref{appA} we demonstrate that this statement holds
rigorously. More precisely, if
we recall the $T$-matrix~(\ref{Tmma}) of $M$ electric dipoles randomly distributed in a volume $V$, then
\begin{equation}\label{allloop}
\frac{1}{K_L^2(\infty)} = \frac{1}{k^2} + \frac{1}{3k^4} n \left\langle \mathrm{Tr}\,\mathbf{T}_{mm}(k) \right\rangle
\end{equation}
for $M\rightarrow \infty$ at constant $M/V=n$. The first two terms in the density expansion clearly coincide with Eq.~(\ref{DOSISA}). All higher-order terms are rigorously loop diagrams, and
the ensemble average of the diagonal element $\mathbf{T}_{mm}$ over all other $M-1$ dipoles must make it proportional to the identity matrix.
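The two-dipole loop term entering Eq.~(\ref{DOSISA}) is straightforward to evaluate numerically. The minimal sketch below assumes the decomposition $\mathbf{G}_0 = a(r)\mathbf{\Delta}_r + b(r)\,\hat{\mathbf{r}}\hat{\mathbf{r}}$ with the common convention $P(x) = 1 + i/x - 1/x^2$, $Q(x) = -1 - 3i/x + 3/x^2$, a resonant $t$-matrix parametrization that is itself an assumption, and a small adiabatic damping factor to handle the conditionally convergent large-$r$ oscillations:

```python
import numpy as np

k = 1.0
# Resonant point-dipole t-matrix on resonance; this parametrization (and its
# sign convention) is an assumption.
t = -(6 * np.pi / k) / 1j

def ab(r):
    """Scalar parts of G0 = a(r) Delta_r + b(r) rr, assuming
    G0 = -exp(ikr)/(4 pi r) [P(kr) Delta_r + Q(kr) rr] with
    P(x) = 1 + i/x - 1/x**2 and Q(x) = -1 - 3j/x + 3/x**2."""
    x = k * r
    pre = -np.exp(1j * x) / (4 * np.pi * r)
    return pre * (1 + 1j / x - 1 / x**2), pre * (-1 - 3j / x + 3 / x**2)

def loop_trace(r):
    """Tr[t^3 G0^2 / (1 - t^2 G0^2)]: two transverse channels plus one
    longitudinal channel; saturates to -3t at small separation."""
    a, b = ab(r)
    return t**3 * (2 * a**2 / (1 - t**2 * a**2) + b**2 / (1 - t**2 * b**2))

# Radial integral 4 pi int r^2 dr, with damping exp(-eta k r) to tame the
# conditionally convergent oscillations at large r.
eta = 0.02
r = np.linspace(1e-3 / k, 400.0 / k, 400_001)
f = 4 * np.pi * r**2 * loop_trace(r) * np.exp(-eta * k * r)
loop_integral = np.sum(0.5 * (f[1:] + f[:-1]) * np.diff(r))
print(loop_integral)  # n^2 times this, inside -Q0/(3k^2) Im Tr, corrects the DOS
```

The saturation of the loop series to $-3t$ at small separation is what makes the integrand regular at $r=0$, as stated above.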
\begin{figure}
\includegraphics[width=0.7\columnwidth, angle=-90]{fig_ldos.pdf}\\
\vspace*{-5mm}
\includegraphics[width=0.7\columnwidth, angle=-90]{fig_ldos2nd.pdf}
\vspace*{-7mm}
\caption{\label{fig_dosl}
Top: Numerical simulation of the averaged imaginary part of the diagonal elements of the $T$-matrix as a function of detuning, for several densities $n$ of the dipoles.
Via Eqs.~(\ref{DOSLT}) and (\ref{allloop}) this quantity determines the density of longitudinal states (DOLS). Bottom: Same, but with the ISA approximation subtracted and normalized by
$4 \pi n/k_0^3$.
The dashed line shows the second-order in density term in Eq.~(\ref{DOSISA}) as also shown in Fig.~\ref{DOSDD}.
}
\end{figure}
Near resonance the first ISA term of $\langle \Delta N\rangle/N_0$ has a Lorentzian profile with a large peak
height inversely proportional to $Q_0$, to be associated with the excitation of a single dipole.
The second term of Eq.~(\ref{DOSISA}) becomes important when $4\pi n/k_0^3 \approx 1$
and constitutes an
inhomogeneous contribution to the line-profile. Using
{$\mathbf{G}_0(k, \mathbf{r}) = -\exp(i k r)/(4 \pi r) [P(kr) \mathbf{\Delta}_r + Q(kr) \hat{\mathbf{r}}\hat{\mathbf{r}}]$ for $\mathbf{r} \ne 0$}
\cite{PR} the integrand
can be split up
into two interactions $U_Q(r)$ and $U_P(r)$, that govern the near-field coupling of two dipoles in real space. Both are shown in Fig.~\ref{PQfig}.
The total dipole-dipole coupling, shown in Fig.\ \ref{DOSDD}, is negative around the resonance.
Because local field singularities cancel in the DOS, the solid curve in Fig.\ \ref{DOSDD} is the same as found in
Ref.~\cite{cherro}.
In Fig.~\ref{fig_dosl} we have calculated numerically
the diagonal elements of the $3M\times 3M$ matrix $\mathbf{T}_{mm}$ for $M = 10^4$ dipoles homogeneously distributed in a sphere at density $n=M/V$,
thereby averaging over all $3M$ diagonal elements as well as
over 10 independent random configurations of the dipoles. This calculation confirms that $K_L(\infty)$ is a genuine complex quantity and in general different from the complex wave
vector $K_T = k_T + i/2\ell$ associated with the transverse modes (see also Figs.~\ref{fig_keff} and \ref{fig_tr} in Appendix B). For low dipole densities, the calculation agrees accurately with the analytical calculation of the loops between two dipoles in Eq.~(\ref{DOSISA}). The line profile of the longitudinal
DOS broadens significantly well beyond the single-dipole line profile as the dipole density increases. Nevertheless,
the total area underneath remains constant. This is to be expected since each dipole contributes exactly $3$ microstates to the DOS and this number
cannot be affected by dependent scattering (see Appendix A).
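The numerical procedure just described can be sketched in a few lines, here with far fewer dipoles than the $M=10^4$ used for Fig.~\ref{fig_dosl}; the Green's-tensor and $t$-matrix conventions below are assumptions consistent with the form quoted earlier:

```python
import numpy as np

def green_block(k, rvec):
    """3x3 free-space Green's tensor between two dipole positions, assuming
    G0(k, r) = -exp(ikr)/(4 pi r) [P(kr) Delta_r + Q(kr) rr] with
    P(x) = 1 + i/x - 1/x**2 and Q(x) = -1 - 3i/x + 3/x**2."""
    r = np.linalg.norm(rvec)
    rhat = rvec / r
    x = k * r
    P = 1 + 1j / x - 1 / x**2
    Q = -1 - 3j / x + 3 / x**2
    rr = np.outer(rhat, rhat)
    return -np.exp(1j * x) / (4 * np.pi * r) * (P * (np.eye(3) - rr) + Q * rr)

def average_trace_Tmm(k, t, M, radius, n_config, seed=0):
    """Configuration average of Tr T_mm for M dipoles uniform in a sphere,
    with T = t (1 - t G)^(-1) and G the off-diagonal 3M x 3M Green's matrix."""
    rng = np.random.default_rng(seed)
    acc = 0j
    for _ in range(n_config):
        pos = []
        while len(pos) < M:                      # rejection-sample the ball
            p = rng.uniform(-radius, radius, 3)
            if np.linalg.norm(p) < radius:
                pos.append(p)
        pos = np.array(pos)
        G = np.zeros((3 * M, 3 * M), dtype=complex)
        for m in range(M):
            for l in range(m):
                blk = green_block(k, pos[m] - pos[l])
                G[3*m:3*m+3, 3*l:3*l+3] = blk
                G[3*l:3*l+3, 3*m:3*m+3] = blk    # G0(-r) = G0(r)
        T = t * np.linalg.inv(np.eye(3 * M) - t * G)
        acc += np.trace(T) / M                   # average of Tr T_mm over m
    return acc / n_config

k = 1.0
t = -(6 * np.pi / k) / 1j        # on-resonance t-matrix (assumed sign convention)
M, density = 40, 0.05 * k**3     # density n/k^3 below unity
radius = (3 * M / (4 * np.pi * density)) ** (1 / 3)
print(average_trace_Tmm(k, t, M, radius, n_config=3))
```

In the limit of very low density the interaction corrections vanish and the average reduces to the independent-dipole value $3t$.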
\subsection{Equipartition between longitudinal and transverse waves}
In the following we show that transverse and longitudinal excitations are mutually converted both by independent scattering from a single dipole and by recurrent scattering from two dipoles.
The efficiency of this process is proportional to the average density of longitudinal (DOLS) or transverse (DOTS) states of the disordered medium, and eventually leads to a steady ratio of longitudinal and
transverse energies. We identify \emph{exactly two} events where the longitudinal field creates a singularity and which therefore dominate the dynamics of this process.
Let $\tau_{i}$ be the lifetime of the excitation $i$ due to scattering to all other modes. Since scattering is assumed elastic, all involved modes have the same frequency.
The scattering implies a change in wave number $\mathbf{p}$ and/or in polarization, characterized by either the longitudinal polarization $\hat{\mathbf{p}}$ or by one of the two transverse polarizations
$\hat{\mathbf{g}}_T$. Let $\rho_i$ be the density of states of excitation $i$, and $n_i$ its average occupation number.
If $U_{ji}$ is the matrix element converting $j$ to $i$, the transport equation takes the generic form,
\begin{equation}\label{transportgen}
\frac{d n_i }{dt} = - \frac{ n_i}{\tau_{i} }+ \sum_j \rho_j n_j U_{ji}
\end{equation}
The total energy $\sum_i \rho_i n_i(t)$ is conserved in time provided that $\rho_i /\tau_{i} = \sum_j \rho_j U_{ij}$.
This is akin to the Ward identity~(\ref{ward}), and we can identify, apart from a constant factor with the dimension of a velocity,
$U_{ij}$ with the polarization matrix elements of the irreducible vertex and $\rho_i/\tau_i$ with the imaginary part of the self energy $-\mathrm{Im}\,\Sigma_i$. If $U_{ij} = U_{ji}$, the transport equation has
the solution $n_i(t) \equiv n$, to which it finally converges and which corresponds to equipartition of energy in phase space.
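The relaxation towards equipartition described by Eq.~(\ref{transportgen}) is easy to illustrate with a two-mode toy model. In the sketch below all numbers are purely illustrative, and the lifetimes are fixed by the discrete conservation condition $1/\tau_i = \sum_j U_{ij}\rho_j$, the form that makes this finite model conserve $\sum_i \rho_i n_i$ exactly:

```python
import numpy as np

rho = np.array([2.0, 5.0])      # illustrative DOTS and DOLS (hypothetical)
U = np.array([[0.3, 0.8],
              [0.8, 0.2]])      # symmetric conversion matrix, U_ij = U_ji

inv_tau = U @ rho               # 1/tau_i = sum_j U_ij rho_j (conservation)

n = np.array([1.0, 0.0])        # all energy initially in one mode
E0 = rho @ n                    # total energy, conserved by construction
dt = 1e-3
for _ in range(50_000):         # explicit Euler integration of the rate equation
    n = n + dt * (-inv_tau * n + (rho * n) @ U)
print(n, rho @ n)               # n converges to the uniform value E0 / sum(rho)
```

Starting from all energy in a single mode, the occupation numbers converge to the common value $\sum_i\rho_i n_i(0)/\sum_i\rho_i$ while the total energy stays constant, which is precisely equipartition in this toy phase space.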
{The conversion rate} $1/\tau(T \rightarrow L' )$ of one transverse excitation $T=(\mathbf{\hat{g}}_T,\mathbf{p})$ to all longitudinal
excitations $L'= (\mathbf{\hat{p}}',\mathbf{p}')$ of
arbitrary wave number $\mathbf{p}'$ can thus be identified as,
\begin{equation*}
\frac{ \rho_T(\mathbf{p})}{\tau(T \rightarrow L') }= \sum_{\mathbf{p}'} \rho_L(\mathbf{p}') \times \mathbf{ \hat{p'}\hat{p'}}\cdot U_{\mathbf{p}\mathbf{p}' }
\cdot \hat{\mathbf{g}}_T \hat{\mathbf{g}}_T
\end{equation*}
with $\rho_L= -\mathrm{Im} \, G_L(k,p)\approx -n\,\mathrm{Im}\, t(k) /k^4$ the DOLS, independent of $\mathbf{p}$. Independent scattering from an electric dipole
gives rise to a conversion rate from
a transverse excitation to a longitudinal excitation of arbitrary wave number, proportional to the
vertex $U^{\mathrm{ISA}}= n |t|^2$,
\begin{eqnarray}\label{div1}
\frac{ \rho_T(\mathbf{p})}{\tau(T \rightarrow L') } = \sum_{\mathbf{p}'} U^{\mathrm{ISA}} \rho_L(\mathbf{p}') =
-Q_0\frac{n^2 |t|^2 \mathrm{Im}\, t }{6\pi k}
\end{eqnarray}
Since the integral diverges at large $p'$,
we have used the same regularization $\sum_\mathbf{p } = Q_0 k^3/2\pi$ as the one employed earlier for the $T$-matrix of one dipole. The resulting scattering rate is large and positive.
In independent scattering, the conversion from longitudinal excitations back to transverse waves does not have the same singularity and has a rate, in the same units,
equal to the inverse mean free path.
The matrix element $U^{(2)}_{\mathbf{pp}'}$ involving all recurrent scattering from two dipoles \cite{cherro} can mode-convert any initial state to transverse states,
with a rate proportional to $n^2$, of the same order as the process in Eq.~(\ref{div1}). We can establish that the vertex $U^{(2)}_{\mathbf{pp}'}$, together with
the self-energy~(\ref{self}) associated with two dipoles, satisfies the Ward identity~(\ref{ward}), but the detailed proof is beyond the scope of this work.
A close look identifies \emph{only one} event in $U^{(2)}_{\mathbf{pp}'}$ that gives rise to a singular scattering rate, displayed in Fig.~\ref{figIL}.
These so-called irreducible ladder diagrams $U^{(\mathrm{LAD})}_{\mathbf{pp}'}$ add up to
\begin{eqnarray}\label{UL2}
U^{(\mathrm{LAD})}_{\mathbf{pp}'} &=& n^2 |t|^4 \int d^3 \mathbf{r} \, \nonumber \\
&& \left[ \frac{\mathbf{G}_0}{1-t^2\mathbf{G}_0^2}\left( \frac{\mathbf{G}_0}{1-t^2\mathbf{G}_0^2}\right)^*
- \mathbf{G}_0\mathbf{G}^*_0 \right]
\end{eqnarray}
Our tensor notation is $ (\mathbf{AB})_{ij|kl} = A_{ik}B_{lj}$, equivalent to $(\mathbf{AB})\cdot \mathbf{S}= \mathbf{A}\cdot \mathbf{S}\cdot \mathbf{B}$.
This vertex is independent of $\mathbf{p}$ and $\mathbf{p}'$. The second term must be subtracted since it stands for a reducible event that is not
part of the collision operator $U$. However, this subtraction creates a diverging contribution at $\mathbf{r}=0$ in the integral due to the singular longitudinal field.
To repair this in a way consistent with previous sections, we extract the transverse photon propagator $\mathbf{G}_{0,T}(\mathbf{r})$,
and write
\begin{eqnarray*}
U^{(\mathrm{LAD})}_{\mathbf{pp}'} &=& n^2 |t|^4 \int d^3 \mathbf{r}\, \left[ \frac{\mathbf{G}_0}{1-t^2\mathbf{G}_0^2}\left( \frac{\mathbf{G}_0}{1-t^2\mathbf{G}_0^2}\right)^*
\right. \nonumber \\
&-& \left. \mathbf{G}_{0,T}\mathbf{G}^*_{0,T} \vphantom{ \frac{\,}{\, }} \right] \\
&+& n^2 |t|^4 \sum_{\mathbf{p}''}\, \left[ \mathbf{G}_{0,T}(\mathbf{p}'')\mathbf{G}^*_{0,T}(\mathbf{p}'') - \mathbf{G}_0(\mathbf{p}'')\mathbf{G}^*_0(\mathbf{p}'')
\right]\end{eqnarray*}
We have used Parseval's identity to convert the second integral into an integral over wave vectors. The first term of $U^{(\mathrm{LAD})}_{\mathbf{pp}'}$ now converges at small $r$ and will be ignored;
the second term can be dealt with as
before, giving
\begin{eqnarray}
U^{(\mathrm{LAD},s)}_{\mathbf{pp}'} = -Q_0\frac{ 3 n^2 |t|^4 }{6\pi k} {S}
+ Q_0\frac{ n^2 |t|^4 }{6\pi k_0} \left(\frac{1}{3}\mathbf{ 11} - {S}
\right) \,\,\, \, \end{eqnarray}
with $S \equiv \langle \mathbf{\hat{p}} \mathbf{\hat{p}} \mathbf{\hat{p}} \mathbf{\hat{p}}\rangle$ the fully symmetric four-rank tensor.
The first term of this collision operator can convert longitudinal waves $L= (\hat{\mathbf{p}},\mathbf{p})$ to all available transverse waves $T' =(\mathbf{g}'_T, \mathbf{p}')$. Since the sum over the two transverse polarizations
$\sum \hat{\mathbf{g}}_T'\hat{\mathbf{g}}_T' =\mathbf{ \Delta}_{p'}$, the rate is given by
\begin{eqnarray}\label{div2}
&& \frac{\rho_L(\mathbf{p})}{\tau(L \rightarrow T')} = \sum_{\mathbf{p}'}\rho_T(\mathbf{p}') \times \mathbf{ \hat{p}\hat{p}}\cdot U^{(\mathrm{LAD},s)}_{\mathbf{pp}'}\cdot \mathbf{\Delta}_{p'} \nonumber \\
&& = - Q_0 \frac{ n^2 |t|^4 }{(6\pi)^2 }
\end{eqnarray}
Since $t$ satisfies the optical theorem, the two secular scattering rates (\ref{div1}) and (\ref{div2}) are equal but of opposite sign. This is a manifestation of
the Ward identity~(\ref{ward}). Since the life-times of longitudinal and transverse modes are not singular and easily found from Eq.~(\ref{self}),
its right-hand side $\sum_j U_{ij} \rho_j$ must be free of divergences.
However, the two singular scattering events do \emph{not} cancel in the transport equation~(\ref{transportgen}) as long as $n_L \neq n_T$,
and therefore govern the dynamics of the equipartition
process. The time found in Eq.~(\ref{div1}) must be the characteristic time for equipartition to set in. We can compare it to the characteristic time for mode conversion between transverse waves,
given by $ \rho_T /\tau(T\rightarrow T') = k/\ell$ in the same units. Hence
\begin{equation}\label{tau}
\frac{\tau^{-1} (T\rightarrow L')}{\tau^{-1}(T\rightarrow T')} = \frac{1}{3}\frac{Q_0}{ k\ell}
\end{equation}
Once equipartition is established,
the ratio of average longitudinal and transverse energy densities is constant and given by
\begin{eqnarray}\label{EP}
\frac{E_L(k)}{E_T(k)} = \frac{\rho_L}{\rho_T} = \frac{Q_0}{k\ell }
\end{eqnarray}
If $k\ell /Q_0 \ll 1$ we see that $ E_L \gg E_T $ and $1/\tau_{TL} \gg 1/\tau_S$. These inequalities imply that
longitudinal states dominate in energy, and equilibrate in phase space as fast as the transverse waves. In fact, when $k\ell /Q_0 < 1$,
the intermediate scattering to a longitudinal wave becomes more efficient for
transverse waves to equilibrate among themselves than is accomplished by ISA single
scattering. The atomic quality factor $Q_0$ is large, and experiments \cite{nice} and numerical simulations \cite{pool,sergey0,remi}
exist where $Q_0 \gg k\ell$.
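For orientation, Eqs.~(\ref{tau}) and (\ref{EP}) can be evaluated with hypothetical but representative numbers; $Q_0$ and $k\ell$ below are assumed values chosen only to represent the regime $Q_0 \gg k\ell$, not taken from the cited experiments:

```python
# Illustrative evaluation of Eqs. (tau) and (EP); the numbers are hypothetical.
Q0 = 1.0e7       # atomic quality factor (assumed value)
k_ell = 10.0     # k * ell (assumed value)

rate_ratio = Q0 / (3 * k_ell)    # Eq. (tau): T -> L' conversion vs T -> T' rate
energy_ratio = Q0 / k_ell        # Eq. (EP): E_L / E_T at equipartition
print(rate_ratio, energy_ratio)
```

With these numbers the longitudinal energy exceeds the transverse one by six orders of magnitude, and the exact factor-of-three relation between the two ratios follows directly from the formulas.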
\begin{figure}
\includegraphics[width=0.95\columnwidth]{IRRLADD.pdf}
\caption{Diagrammatic representation of the ladder series (without external lines) involving two different electric dipoles. Dashed lines connect identical dipoles, solid lines denote the Green's tensor $\mathbf{G}_0(\mathbf{r})$,
crosses denote the transition matrix $t(k)$, and the bottom line denotes Hermitian conjugation. The first diagram on the left is reducible (a simple product) and is not part of the
collision operator $U_{\mathbf{pp'}}$.}\label{figIL}
\end{figure}
\section{Kubo formalism}\label{kubosection}
In this section we use the rigorous Kubo formalism for the DC conductivity, adapted from electron conduction \cite{mahan} to scalar classical
waves \cite{bara1} and electromagnetic waves \cite{bara2}. We investigate how photon diffusion is affected by the existence of longitudinal waves.
Before averaging over the disorder, the electric field at frequency $\omega = kc_0$ is given formally
by the operator identity $\mathbf{E}(k) = \mathbf{G}(k) \otimes \mathbf{s}(k)$, with $\mathbf{s}(k)$ a source ($\otimes$ stands for the matrix product in full Hilbert space, whereas $\cdot$
stands for matrix product in $3 \times 3$ polarization space). Transport theory describes the
correlation function $\phi_{ij} = \langle E_i\bar{E}_j \rangle $ of the electric field at two different frequencies and for
two different positions. We can formally
relate it to the source correlation function $\mathbf{S}$ according to $\phi_{ij} = R_{ij|kl} \otimes S_{kl}$, which introduces the reducible
four-rank vertex ${R}$. It
satisfies the Bethe-Salpeter equation,
\begin{equation}\label{BS}
{R} = \mathbf{GG}^\dag + \mathbf{GG}^\dag \otimes{U}\otimes {R}
\end{equation}
This equation identifies the irreducible vertex ${U}$ as the scattering operator, and $\mathbf{G} \mathbf{G}^\dag$ as the transport between scattering events.
(We use $ \dag$ for Hermitian conjugate in full Hilbert space as opposed to $*$ for Hermitian conjugate of a $3\times 3$ matrix with polarization components; a bar denotes complex conjugation of a scalar.)
The Green's function of the
effective medium was introduced in Eq.~(\ref{dyson}) and has transverse and longitudinal parts. We recall the tensor convention
$\mathbf{AB} \cdot \mathbf{S}= \mathbf{A}\cdot \mathbf{S}\cdot\mathbf{B}$, or equivalently $(\mathbf{A} \mathbf{B})_{ij|kl} =A_{ik}B_{lj}$, with the matrix
$\mathbf{B}$ displayed as the bottom line of a Feynman diagram, propagating backwards in time. Similarly in Hilbert space,
$(\mathbf{G} \mathbf{G}^\dag)_{\alpha \beta|\kappa \gamma} =G_{\alpha \kappa }G^\dag_{\gamma \beta} = G_{\alpha \kappa }\bar{G}_{\beta \gamma} $.
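The tensor convention can be checked mechanically; a short sketch with generic complex matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A, B, S = (rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
           for _ in range(3))

# Four-rank object with the stated convention (AB)_{ij|kl} = A_ik B_lj
AB = np.einsum('ik,lj->ijkl', A, B)

# Contracting the right index pair (kl) with S reproduces the sandwich A.S.B
left = np.einsum('ijkl,kl->ij', AB, S)
right = A @ S @ B
print(np.allclose(left, right))   # True
```

The bottom-line matrix $\mathbf{B}$ thus acts from the right, consistent with its interpretation as propagating backwards in time.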
After averaging, translational symmetry can be exploited so that the vertex in Fourier space (Fig.~\ref{Reps})
can be written as
${R}_{\mathbf{pp}'}(\mathbf{q})$, with $\mathbf{p}'$ and $\mathbf{p}$ interpreted as incident and outgoing wave numbers,
and $\mathbf{q}$ conjugate to
distance between source and observer. Thus, the electromagnetic ``Wigner function'' takes the form
\begin{equation}\label{FFSS}
\phi_{ij}(\mathbf{p},\mathbf{q}) \equiv \langle E_i(\mathbf{p}^+)\bar{E}_j(\mathbf{p}^-)\rangle =
\sum_{\mathbf{p}'} R_{\mathbf{pp'}; ij|kl}(\mathbf{q}) S_{kl}
(\mathbf{p}',\mathbf{q})
\end{equation}
and
\begin{eqnarray}\label{BS2}
{R}_{\mathbf{pp}'}(\mathbf{q}) &\,&= \mathbf{G}(\mathbf{p}^+)\mathbf{G}^*(\mathbf{p}^-) \delta_{\mathbf{pp}'} \nonumber \\
+ &\,& \mathbf{G}(\mathbf{p}^+)\mathbf{G}^*(\mathbf{p}^-) \cdot \sum_{\mathbf{p}''} U_{\mathbf{pp}''}(\mathbf{q})
\cdot {R}_{\mathbf{p}''\mathbf{p}'}(\mathbf{q})
\end{eqnarray}
with $\mathbf{p}^\pm =\mathbf{p} \pm \mathbf{q}/2$ (see Fig.~\ref{Reps}) and $ \delta_{\mathbf{pp}'} \equiv (2\pi)^3 \delta(\mathbf{p}-\mathbf{p}')$. The two terms describe direct propagation with extinction of the mode $\mathbf{p}$ and
scattering from $\mathbf{p}'$ towards $\mathbf{p}$,
respectively. One important property is reciprocity \cite{bara2, rogerboek}. Since, in the absence of external magnetic fields, the (unaveraged) Green's function satisfies
$\langle \mathbf{p}, i | \mathbf{G}(k+i0)| k, \mathbf{p}'\rangle = \langle -\mathbf{p}', k | \mathbf{G}(k+i0)| i, -\mathbf{p} \rangle$,
we easily check that
\begin{equation}\label{reci}
{R}_{ij|kl,{\mathbf{pp}'}}(\mathbf{q}) = {R}_{kl|ij,-\mathbf{p}'-\mathbf{p}}(-\mathbf{q})
\end{equation}
\begin{figure}
\includegraphics[width=7cm]{Rdia.pdf}
\caption{The diagrammatic convention associated with the reducible vertex $R_{ij|kl,\mathbf{ pp'}}(\mathbf{q})$, with external lines. The top line denotes the
retarded Green's function $\mathbf{G}(k + i0)$; the bottom line, $\mathbf{G}^\dag(k + i0) =\mathbf{G}(k - i0)$, is the advanced
Green's function and travels in the opposite direction. The polarization labels are $ij$ on the left hand side (``observer'') and $kl$ on the right
hand side (``source''). The sum of incoming and outgoing wave numbers is conserved.}\label{Reps}
\end{figure}
A second property follows from complex conjugation, equivalent to switching bottom and top lines of the diagram,
\begin{equation}\label{CC}
{R}_{ij|kl,{\mathbf{pp}'}}(\mathbf{q}) = \bar{{R}}_{ji|lk,\mathbf{p}\mathbf{p}'}(-\mathbf{q})
\end{equation}
If Eq.~(\ref{ward}) is satisfied, $R$ is known to exhibit long-range diffusion ($q\rightarrow 0$), like its equivalent in electron-impurity scattering \cite{mahan}, which decouples input and output and takes the form
\begin{equation}\label{diffR}
{R}_{ij|kl,{\mathbf{pp}'}}(\mathbf{q}) = \frac{d_{ij}(\mathbf{p},\mathbf{q}) d_{kl}(\mathbf{p}',\mathbf{q})}{\pi N(k) D(k) q^2}
\end{equation}
where $N(k)$ is the DOS given in Eq.~(\ref{DOS1}) and the eigenfunction associated with long-range diffusion is written as
\begin{equation}\label{dij}
\mathbf{d}(\mathbf{p},\mathbf{q}) = -\mathrm{Im}\, \mathbf{G}(\mathbf{p}) -\frac{i}{2}\mathbf{J}(\mathbf{p},\mathbf{q}) +\mathcal{ O}(q^2)
\end{equation}
The first term $-\mathrm{Im}\, \mathbf{G} (\mathbf{p}) \equiv -[\mathbf{G}(\mathbf{p}) -\mathbf{G}^*(\mathbf{p})] /2i $ is proportional to the
spectral function and implies perfect equipartition of the electromagnetic energy in phase space.
The second term is linear in $\mathbf{q}$ and describes a small perturbation
due to gradients of $\Phi_{ij} (\mathbf{r}) $ in real space that trigger diffuse energy flow.
Requiring it to be small for all momenta $\mathbf{p}$ imposes constraints
to be discussed later. Because $\mathbf{d}(\mathbf{p},\mathbf{q})$ describes
an electric field correlation function, it must satisfy $d_{ij}(\mathbf{p},\mathbf{q}) =\bar{d}_{ji}(\mathbf{p},-\mathbf{q}) $, consistent with Eqs.~(\ref{CC}) and (\ref{reci}).
Thus, $ \mathbf{J}(\mathbf{p},\mathbf{q})= - \mathbf{J}^*(\mathbf{p},-\mathbf{q})$ and, being
linear in $\mathbf{q}$ by construction, we conclude that the tensor $\mathbf{J}(\mathbf{p},\mathbf{q})$ is Hermitian.
Following common treatments in radiative
transfer, many microscopic approaches interpret the expansion~(\ref{dij}) as one in the angular anisotropy of scattered radiation with wave numbers in equipartition and imposed
near the frequency shell, as described by the first term. If we ignore electromagnetic polarization, and in the absence of any explicit anisotropy in space,
the only possible choice for this expansion is,
\begin{equation}\label{dijscal}
\mathbf{d}(\mathbf{p},\mathbf{q}) = -\mathrm{Im}\, \mathbf{G}(\mathbf{p}) \left[ 1 - iJ_0(p)\mathbf{ p}\cdot \mathbf{q} + \cdots \right]
\end{equation}
For diffusion of cold atoms, $ \mathbf{J}(\mathbf{p},\mathbf{q})$ was obtained by solving numerically the Bethe-Salpeter equation \cite{marie}.
Alternatively, the unknown function $J_0(p)$ can be chosen such
that the first angular moment $\sum_\mathbf{p} \mathbf{p}\mathbf{ d}(\mathbf{p},\mathbf{q}) $ matches the divergence $-i\mathbf{q}\cdot \mathbf{K}$ of the
energy current density. This leads to $J_0(p)= 1/p^2$ \cite{sheng} and makes the vector $\mathbf{K}$ the only unknown.
This choice conveniently circumvents divergences
that occur in rigorous theory for large $p$. For vector waves, $ \mathbf{J}(\mathbf{p},\mathbf{q}) $ is a tensor containing longitudinal and transverse components, and even their
interferences.
It obeys the
Bethe-Salpeter equation~(\ref{BS2}) linearized in the gradient vector $\mathbf{q}$, which can be obtained straightforwardly as for scalar waves \cite{mahan}
\begin{eqnarray}\label{BSJ}
\mathbf{J}(\mathbf{p},\mathbf{q}) &=& \mathbf{J}^D(\mathbf{p},\mathbf{q}) \nonumber \\
&+& \mathbf{ G}(\mathbf{p})\mathbf{G}^*(\mathbf{p}) \cdot \sum_{\mathbf{p}'} {U}_{\mathbf{pp}'}\cdot \delta_\mathbf{q}
\mathbf{G}(\mathbf{p}',\mathbf{q})\nonumber \\
&+& \mathbf{ G}(\mathbf{p})\mathbf{G}^*(\mathbf{p}) \cdot \sum_{\mathbf{p}'} {U}_{\mathbf{pp}'}\cdot \mathbf{J}(\mathbf{p}',\mathbf{q})
\end{eqnarray}
The first term is often referred to as the Drude contribution to diffusion and depends only on the effective medium properties. It reads
\begin{equation}\label{Drude}
\mathbf{J}^D(\mathbf{p},\mathbf{q}) = \mathbf{G} (\mathbf{p})\cdot\mathbf{ L}(\mathbf{p},\mathbf{q}) \cdot \mathbf{G}^* (\mathbf{p})
- \delta_\mathbf{q}
\mathrm{ Re}\, \mathbf{G}(\mathbf{p},\mathbf{q})
\end{equation}
in terms of the bilinear Hermitian tensor $\mathbf{ L}_{ij}(\mathbf{p},\mathbf{q}) = 2(\mathbf{p}\cdot \mathbf{q}) \delta_{ij} -p_iq_j-q_ip_j $ and the notation is
$\delta_\mathbf{q}
\mathrm{ Re}\, \mathbf{G}(\mathbf{p},\mathbf{q}) = (\mathbf{q}\cdot \partial_\mathbf{p}) \mathrm{Re}\, \mathbf{G}(\mathbf{p}) $.
The second and third terms in Eq.~(\ref{BSJ}) are
genuine contributions from
scattering. They vanish only for isotropic events in $U_{\mathbf{pp}'}$ but not in general.
It is straightforward to demonstrate that the (cycle-averaged) Poynting vector $\mathbf{K} = c_0 \mathrm{Re}\, (\mathbf{E}\times \bar{\mathbf{B}})/8\pi $
is related to the
correlation function of the electric field according to
\begin{eqnarray}\label{Poyn}
K_n(k,\mathbf{q}) &=& \frac{c_0}{8\pi k} \sum_\mathbf{p} \left( p_n \delta_{ik}- \frac{1}{2} p_k\delta_{in} -
\frac{1}{2} p_i\delta_{kn} \right) \phi_{ki}(\mathbf{p},\mathbf{q}) \nonumber \\
&+& \frac{c_0}{8\pi k} \sum_\mathbf{p} q_k \frac{1}{2}( \phi_{kn}(\mathbf{p},\mathbf{q}) - \phi_{nk}(\mathbf{p},\mathbf{q}) )
\end{eqnarray}
In the absence of external magnetic fields,
$ \phi_{ki}(\mathbf{p},\mathbf{0}) = \phi_{ik}(\mathbf{p},\mathbf{0})$ so that the second term vanishes in linear order of $\mathbf{q}$. Upon inserting Eq.~(\ref{dij}),
the first term involves only the diffusion current tensor $\mathbf{J}$.
Some manipulations lead to
\begin{eqnarray}\label{K1}
iq_nK_n &=& \frac{1}{4\pi N(k)} \sum_\mathbf{p} L_{ik}(\mathbf{p},\mathbf{q}) J_{ki}(\mathbf{p},\mathbf{q}) \nonumber \\
&\times& \frac{1}{D(k) q^2}\frac{c_0}{8\pi k}\sum_{\mathbf{p}'} -\mathrm{Im}\, {G}_{lj}(\mathbf{p}') \cdot {S}_{lj} (\mathbf{p}')
\end{eqnarray}
The factor that has been split off on the right hand side can be identified as the (cycle-)averaged energy density $\rho(\mathbf{r})=
\langle |\mathbf{E}|^2 + |\mathbf{B}|^2 \rangle /16\pi$ released by the source and diffusing out.
This can be established by noting that the tensor
$\mathbf{J}$, being odd in $\mathbf{p}$,
does not
contribute to the energy density. Since the first term in Eq.~(\ref{dij}) obeys equipartition,
$\langle|\mathbf{E}(\mathbf{r})|^2 \rangle= \langle|\mathbf{B}(\mathbf{r})|^2\rangle$,
the energy density is
\begin{eqnarray}\label{E}
\langle\rho(\mathbf{q}) \rangle &=& \frac{1}{\pi N(k) } \frac{k}{c_0}\mathrm{Tr}\, \sum_\mathbf{p} \frac{p^2}{k^2}
\mathbf{\Delta}_p \cdot -\mathrm{Im}\, \mathbf{G}(k,\mathbf{p}) \nonumber \\
&\times& \frac{1}{D(k)q^2} \frac{c_0}{8\pi k}\mathrm{Tr }\sum_{\mathbf{p}'} -\mathrm{Im}\, \mathbf{G}(\mathbf{p}') \cdot \mathbf{S}(\mathbf{p}')
\end{eqnarray}
If we recall that the electromagnetic DOS is given by Eq.~(\ref{DOS1}),
the first factor in Eq.~(\ref{E}) equals one. In real space Eq.~(\ref{K1}) thus becomes
$\bm{\nabla} \cdot \mathbf{K} = - D \bm{\nabla}^2 \rho(\mathbf{r})$ with
\begin{equation}\label{Kubo}
\pi N(k) D(k) = \frac{1}{4}\mathrm{Tr} \, \sum_\mathbf{p}
\mathbf{L} (\mathbf{p},\hat{\mathbf{q}}) \cdot \mathbf{J}(\mathbf{p},\hat{\mathbf{q}})
\end{equation}
This is the Kubo formula for the electromagnetic diffusion constant. Since $D$ is a scalar in this work,
the right hand side does not depend on the direction of $\hat{\mathbf{q}}$. The left hand side can be identified as the electromagnetic DC conductivity $\sigma(k)$ (here in units of $1/$m)
expressed as the (Einstein) product of DOS and
diffusion constant. With this definition, which we prefer in view of the presence of $\pi ND$ in Eq.~(\ref{diffR}), the ``electromagnetic conductance'' of a slab with surface $A$ and
length $L$ takes the form of a Landauer formula $\langle
\sum_{ab}T_{ab} \rangle = 4\sigma A/L$ \cite{LB}. In terms of the energy density $\rho(\mathbf{q})$ the electric field correlation function is
\begin{eqnarray}\label{DAn}
\phi_{ij}(\mathbf{p},\mathbf{q}) = \frac{d_{ij}(\mathbf{p},\mathbf{q})}{\pi N(k) } \times \frac{8\pi k}{c_0} \rho(\mathbf{q})
\end{eqnarray}
\subsection{Diffusion current tensor}
The diffusion current tensor $\mathbf{J}(\mathbf{p},\mathbf{q})$ must be a parity-even, Hermitian tensor, linear in the gradient vector $\mathbf{q}$. For our problem, with no
explicit anisotropy present,
this
leaves
us with the following general form
\begin{eqnarray}\label{Jgeneral}
\mathbf{J}(\mathbf{p},\mathbf{q}) &=& J_0 (p) ({\mathbf{p}}\cdot \mathbf{q}) \Delta_\mathbf{p}
+ J_1(p) ({\mathbf{p}}\cdot \mathbf{q})
\hat{\mathbf{p}} \hat{\mathbf{p}}
\nonumber
\\ && + J_{2} (p) ({\mathbf{p}} \mathbf{q}+{\mathbf{q}}{ \mathbf{p}} ) +
J_{3} (p) i ({\mathbf{p}} \mathbf{q}- {\mathbf{q}} {\mathbf{p}} )
\end{eqnarray}
with four real-valued functions $J_i(p)$ to be determined. A fifth term $i\epsilon_{ijk}q_k$ is in principle allowed but is
excluded for scattering that respects parity symmetry. Alternatively, we could have defined the mode $J_2(p)$ in terms of the tensor
${\mathbf{p}}\mathbf{q}+\mathbf{q}{\mathbf{p}} - 2({\mathbf{p}}\cdot {\mathbf{q}}) \hat{\mathbf{p}}\hat{\mathbf{p}}$, in which case all four modes would be mutually
orthogonal.
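This orthogonality can be verified directly. A short sketch, taking the $J_2$ tensor with the traceless correction $2(\mathbf{p}\cdot\mathbf{q})\hat{\mathbf{p}}\hat{\mathbf{p}}$ (the scalar factor $\mathbf{p}\cdot\mathbf{q}$, rather than $\mathbf{p}\cdot\hat{\mathbf{q}}$, is required for dimensional consistency):

```python
import numpy as np

rng = np.random.default_rng(42)
p = rng.standard_normal(3)
q = rng.standard_normal(3)
phat = p / np.linalg.norm(p)
pp = np.outer(phat, phat)        # the dyadic built from the unit vector of p
pq = np.outer(p, q)

modes = {
    "J0": (p @ q) * (np.eye(3) - pp),
    "J1": (p @ q) * pp,
    # J2 with the traceless correction 2 (p.q) pp, assumed here for
    # dimensional consistency
    "J2": pq + pq.T - 2 * (p @ q) * pp,
    "J3": 1j * (pq - pq.T),
}

def frob(A, B):
    """Frobenius inner product Tr(A B^dagger)."""
    return np.trace(A @ B.conj().T)

for name, M in modes.items():
    assert np.allclose(M, M.conj().T)            # each mode is Hermitian
for a in modes:
    for b in modes:
        if a < b:
            print(a, b, abs(frob(modes[a], modes[b])))   # all vanish
```

With this choice the four modes are in fact orthogonal point by point in $\mathbf{p}$, not merely after angular averaging.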
The four functions can be associated with four different aspects in diffuse transport. By restricting only to the first, the transport problem reduces to the common approximation made in Eq.~(\ref{dijscal}).
The modes $J_1$, $J_2$ and $J_3$ are clearly genuine vector effects, absent in
a scalar theory. However, only $J_0$ and $J_2$ carry a Poynting vector, with $J_0$
associated with the transport of transverse waves in the far field, and $J_2$ associated with a novel process that involves
the interference of longitudinal and transverse waves.
By restricting to the purely transverse term $J_0(p)$, transport theory almost reduces to a scalar theory. The term $J_1$
describes how the longitudinal energy density $ |E_L(\mathbf{p})|^2 $ achieves an anisotropy in phase space
due to the spatial gradient of energy, but without inducing an energy current.
The presence of $J_3$ is more subtle and can be associated with the \emph{imaginary} part of the complex Poynting vector, discussed for instance in
Ref.~\cite{jackson}.
Let us define $\mathrm{Im} \, \mathbf{K} = c_0 \mathrm{Im}\, ({\mathbf{E}}\times \bar{\mathbf{B}})/8\pi$.
We readily find, similar to the derivation of its real part in
Eq.~(\ref{Poyn}), that in terms of the field correlation function $\phi_{ik} (\mathbf{p},\mathbf{q})$,
\begin{eqnarray}\label{imPoyn}
\mathrm{Im}\, K_n(k,\mathbf{q}) &=& \frac{-ic_0}{8\pi k} \sum_\mathbf{p} \left( q_n \delta_{ik}- \frac{1}{2} q_k\delta_{in} -
\frac{1}{2} q_i\delta_{kn} \right) \phi_{ki} \nonumber \\
&-& \frac{ic_0}{8\pi k} \sum_\mathbf{p} p_k ( \phi_{ki} - \phi_{ik} )
\end{eqnarray}
The first term is independent of $\mathbf{J}$ and can be evaluated without any approximation. The integral over wave numbers is proportional to the total
DOS $N(k)$ and cancels this same factor in
the denominator of Eq.~(\ref{DAn}). As a result it is completely
independent of the presence of the dipoles.
The second term requires anti-symmetry
in the diffusion tensor $J_{ij}$, described only by $J_3(p)$. We obtain
\begin{equation}\label{DIM}
\mathrm{Im}\, \mathbf{K}(k,\mathbf{q}) = \frac{2}{3} \left(\frac{c_0}{k} + \frac{1}{\pi N(k)} \sum_\mathbf{p} p^2 J_3(p) \right) (-i\mathbf{q}) \rho(\mathbf{q})
\end{equation}
Like the real part, the ``current density'' $ \mathrm{Im}\, \mathbf{K}$ is proportional to minus the gradient in energy density, albeit with
a very small ``fictitious'' diffusion constant $D_I = \frac{2}{3} c_0/k $ associated with
$ \mathrm{Im}\, \mathbf{K}$, and a correction from $J_3$ calculated in the next section.
Even if $J_1$ and $J_3$ do not carry current themselves, they cannot be ignored because the Bethe-Salpeter equation~(\ref{BSJ}) couples in principle all $J_i$
through scattering. From Eq.~(\ref{BSJ}) we can identify four different contributions to $\mathbf{J}(\mathbf{p},\mathbf{q})$,
written as
\begin{equation}\label{JDall}
\mathbf{J}(\mathbf{p},\mathbf{q}) = \mathbf{J}^D + \mathbf{J}^{\delta \Sigma} + \mathbf{J}^{\delta G} +\mathbf{J}^{S}
\end{equation}
In this expression, the Drude diffusion current in Eq.~(\ref{Drude}) has been further split up into the first two terms above. The first is given by,
\begin{equation}\label{drudeagain}
\mathbf{J}^D(\mathbf{p},\mathbf{q}) = \mathbf{G} (\mathbf{p})\cdot\mathbf{ L}(\mathbf{p},\mathbf{q}) \cdot \mathbf{G}^* (\mathbf{p})
-\mathbf{G} (\mathbf{p})\cdot\mathbf{ L}(\mathbf{p},\mathbf{q}) \cdot \mathbf{G} (\mathbf{p})
\end{equation}
The second term is generated by the explicit dependence of the self-energy on wave number,
\begin{equation}\label{drudesigma}
\mathbf{J}^{\delta\Sigma}(\mathbf{p},\mathbf{q}) = - \mathrm{Re} \, \mathbf{G}(\mathbf{p})\cdot (\mathbf{q}\cdot
\partial_\mathbf{p})\mathbf{ \Sigma }(\mathbf{p}) \cdot \mathbf{G}(\mathbf{p})
\end{equation}
with the convention that $\mathrm{Re} \, \mathbf{A} = (\mathbf{A}+\mathbf{A}^*)/2$. The final
two terms $\mathbf{J}^{\delta G}$ and $\mathbf{J}^{S}$ are defined as the two last scattering terms involving $U_{\mathbf{pp}'}$ in Eq.~(\ref{BSJ}).
To summarize the above analysis, the diffusion tensor $\mathbf{J}(\mathbf{p},\mathbf{q})$ can have four different symmetries, denoted by $J_i$. Each term can originate from four different parts
of the Bethe-Salpeter equation~(\ref{BSJ}).
The mode $J_2$ implies a new mechanism of long-range diffusion, stemming in all four cases from the mixture of longitudinal and transverse fields.
One peculiarity is the direction of the Poynting vector associated with the diffuse mode expressed by Eq.~(\ref{dij}).
The mode $J_0$ of the pure transverse field generates a Poynting vector
whose component along the gradient varies as $\cos^2\theta$ in phase space, with $\theta$ the angle between wave vector $\mathbf{p}$ and gradient vector $\mathbf{q}$, and is
thus largest \emph{along} the gradient vector.
For the mode $J_2$ this component varies as $\sin^2\theta$, which is largest \emph{orthogonal} to the gradient vector.
In the following subsections \ref{drudesection}--\ref{scattdifsection} we discuss these 4 contributions to $\mathbf{J}$ separately, and show that the scattering from
two electric dipoles
generates all four channels in Eq.~(\ref{Jgeneral}). The results are summarized in subsection~\ref{secsummary} and in Table~\ref{tabeldiff}.
\subsubsection{Drude current tensor}
\label{drudesection}
The Drude current tensor $\mathbf{J}^D(\mathbf{p},\mathbf{q})$ is defined as the contribution of the effective medium, as expressed by Eq.~(\ref{drudeagain}), and is thus by definition
independent of the collision operator $U_{\mathbf{pp}'}$ and not subject to interference. It is therefore the easiest to calculate.
We will split $\mathbf{J}^D(\mathbf{p},\mathbf{q}) $ further up into
a pure transverse part and an interference term and write
\begin{equation}\label{JDall2}
\mathbf{J}^D(\mathbf{p},\mathbf{q}) = \mathbf{J}^D_{TT}(\mathbf{p},\mathbf{q}) + \mathbf{J}^D_{TL}(\mathbf{p},\mathbf{q})
\end{equation}
The first part stems from purely transverse propagation and contributes only to the $J_0$-channel in Eq.~(\ref{Jgeneral}). The
second term is produced
by a mixture of longitudinal and transverse propagation and contributes to the channels $J_1$, $J_2$ and $J_3$.
Since $\mathbf{p}\cdot \mathbf{L}(\mathbf{p},\mathbf{q}) \cdot \mathbf{p} =0$, the Drude current tensor features no purely longitudinal mode $\mathbf{J}^D_{LL}(\mathbf{p},\mathbf{q})$, proportional to $|G_L(p)|^2$.
The transverse Green's function $\mathbf{G}_T(\mathbf{p})$ is given by the second term in Eq.~(\ref{dyson}). It follows that
\begin{eqnarray}\label{JT}
\mathbf{J}^D_{TT}(\mathbf{p},\mathbf{q}) &=& 2 (\mathbf{p}\cdot \mathbf{q})\mathbf{ \Delta}_p \left( |G_T(p)|^2 - \mathrm{Re }\, G_T(p)^2 \right)
\nonumber \\
&=&4 (\mathbf{p}\cdot \mathbf{q})\mathbf{ \Delta}_p \mathrm{Im }^2\, G_T(p)
\end{eqnarray}
This function is heavily peaked near the frequency shell of the effective medium. We can ignore any $p$-dependence in
$\Sigma_T(p)$ and approximate it by $\Sigma_T(p=k)$. For $G_T= (K_T^2-p^2)^{-1}$ and $K_T= k_e + i/2\ell$ a complex wave vector independent of $p$,
we can use,
\begin{equation*}
\frac{\sum_\mathbf{p} 2 p^2 \mathrm{Im}^2\, G_T(p)}{\sum_\mathbf{p} -\mathrm{Im}\, G_T(p) } = k_e\ell
\end{equation*}
In terms of the density of transverse states (DOTS) this produces the classical Drude diffusion constant in the $J_0$-channel,
\begin{equation}\label{DDrude}
D^D_0(k) = \frac{1}{3}\times \left( c_0\frac{k_e}{k} \frac{N_T(k)}{N(k)} \right) \times \ell(k) \equiv \frac{1}{3} v_E \ell
\end{equation}
It is customary to write $k_e/k = c_0/v_p$ in terms of the phase velocity $v_p$. The ratio $N_T/N = N_T/(N_L + N_T)$ is a factor that can be very
small near
the resonance $\omega_0$. We recall that for our electric dipole scatterers all stored energy resides in the longitudinal field. In the Drude approximation for the transverse field
we recover the familiar picture of light
diffusion, with the extinction length as the mean free path, and $v_E$ as energy transport velocity \cite{PR}.
The perturbation expansion in $\mathbf{q}$ is valid for the transverse waves as long as $2pq |\mathrm{Im} \,G_T(p)|^2 < |\mathrm{Im} \,G_T(p)| $. This is most stringent near the frequency shell
$p=k_e$, where the spectral function $|\mathrm{Im} \,G_T(p)|$ is maximal, and not stringent at all for large momenta. This gives $q < |\mathrm{Im} \,\Sigma_T(k_e)|/2k_e = 1/2\ell$,
a bound that could also have been anticipated intuitively.
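Both steps above can be checked numerically. The following sketch uses illustrative parameter values ($k_e=1$, $\ell=100$, chosen only for the check) and the quadratic dispersion $G_T=(K_T^2-p^2)^{-1}$ quoted above; it verifies the algebraic identity $|G_T|^2-\mathrm{Re}\,G_T^2 = 2\,\mathrm{Im}^2 G_T$ used in Eq.~(\ref{JT}) and the on-shell ratio equal to $k_e\ell$:

```python
# Numerical sketch (illustrative parameters, not from the text):
# G_T(p) = 1/(K_T^2 - p^2), with K_T = k_e + i/(2*ell) and k_e*ell >> 1.
k_e, ell = 1.0, 100.0
K2 = (k_e + 0.5j / ell) ** 2

def G_T(p):
    return 1.0 / (K2 - p * p)

# Algebraic step of Eq. (JT): |G|^2 - Re(G^2) = 2 (Im G)^2 for any complex G.
g = G_T(1.3)
assert abs(abs(g) ** 2 - (g * g).real - 2.0 * g.imag ** 2) < 1e-12

# On-shell ratio: sum_p 2 p^2 Im^2 G_T / sum_p (-Im G_T) ~ k_e * ell.
# The angular factors cancel in the ratio, leaving radial integrals over p.
h, p, num, den = 1e-4, 0.5e-4, 0.0, 0.0
while p < 20.0:
    im = G_T(p).imag
    num += 2.0 * p ** 4 * im ** 2 * h   # numerator integrand p^2 * 2 p^2 Im^2 G_T
    den += -p ** 2 * im * h             # denominator integrand p^2 * (-Im G_T)
    p += h
ratio = num / den
assert abs(ratio - k_e * ell) / (k_e * ell) < 0.02
```

Both sums are sharply peaked near the frequency shell $p\approx k_e$, so the ratio approaches $k_e\ell$ up to corrections of order $1/k_e\ell$.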
The diffusion tensor $\mathbf{J}^D_{TL}$ is given by
\begin{eqnarray}\label{LT}
\mathbf{J}^D_{TL} (\mathbf{p},\mathbf{q}) &=& 2\mathrm{Im}\, G_T \mathrm{Im}\, G_L \, ( 2 \hat{\mathbf{p}} \hat{\mathbf{p}} (\mathbf{p}\cdot \mathbf{q}) -\mathbf{pq} -
\mathbf{qp} ) \nonumber \\
&+& i \mathrm{Im}\, [\bar{G}_L {G}_T] \, (\mathbf{pq} - \mathbf{qp})
\end{eqnarray}
with contributions to the channels $J_1$, $J_2$ and $J_3$ in Eq.~(\ref{Jgeneral}). We focus first on the Poynting vector for which only the channel
$J_2(p)$ is relevant. Inserting the first term
into Eq.~(\ref{Kubo}) gives
\begin{equation}\label{DDsing}
\pi N(k) D^{D}_{2}(k) = \sum_\mathbf{p}\mathrm{Im}\, G_T(p) \mathrm{Im}\, G_L(p) \, [p^2- (\mathbf{p}\cdot \hat{\mathbf{q}})^2]
\end{equation}
This diffusion is clearly determined by the overlap of transverse and longitudinal modes in phase space. Because the longitudinal spectral function is essentially independent of $p$,
this overlap is significant and the integral even diverges as $\sum_\mathbf{p } 1/p^2$. We can extract and regularize it as earlier by $Q_0k_0/4\pi$,
\begin{equation}\label{DTL}
\pi N(k) D^{D}_{2}(k) = \frac{Q_0}{6\pi }\frac{\left(\mathrm{Im }\Sigma_{\mathrm{ISA}}\right)^2 }{k^3}
\end{equation}
This singular term is proportional to the square of the density of the dipoles and will later be seen to cancel. As a result, the leading current tensor is,
\begin{eqnarray}
\mathbf{J}^D_{TL} = \frac{2\pi }{k\ell} \delta(k^2-p^2) \frac{1}{p^2}&\,&( 2 \hat{\mathbf{p}} \hat{\mathbf{p}} ({\mathbf{p}}\cdot \mathbf{q}) -
\mathbf{{p}q} -
\mathbf{q{p}} ) \nonumber \\
+ && i \mathrm{Im}\, [\bar{G}_L {G}_T] \, \,
(\mathbf{{p}q} - \mathbf{q{p}})
\end{eqnarray}
The use of the Dirac distribution here implicitly assumes that a typical Kubo integral of the kind
$\sum_\mathbf{p} \mathbf{p }\mathbf{J}(\mathbf{p},\mathbf{q})$
converges for large $p$, with no need for regularization.
This expression will turn out to be leading for $J_2$ and dominant for $J_3$. The Drude diffusion constant associated
with the mixture of transverse and longitudinal waves is thus given by
\begin{equation}\label{Dlong}
D^{D}_2 = \frac{1}{3\pi N(k) }\sum_\mathbf{p} \frac{2\pi }{k\ell} \delta(k^2-p^2) \approx \frac{1}{3}v_E \frac{1}{k^2\ell}
\end{equation}
This diffusion constant can be considered as the ISA of electromagnetic diffusion in the $J_2$-channel. Its value is \emph{positive} and,
apart from the universal pre-factor $v_E$ in diffusion, depends linearly on the density of the dipoles.
In Ref.~\cite{theoL} one finds a correction induced by dipole-dipole coupling that can be written as $\Delta D = \frac{1}{3} v_E F(\delta) /k_0^3\ell^2$,
with the function $F$ varying over the resonance. Like the diffusion found in Eq.~(\ref{Dlong}),
it is positive and proportional to $v_E$, but unlike Eq.~(\ref{Dlong}) it scales as $n^2$. The interference of longitudinal and transverse waves is excluded
in Ref.~\cite{theoL}, which explains why the
leading term~(\ref{Dlong}) is not found there.
For the hydrodynamic expansion made in Eq.~(\ref{dij}) to hold for the transport channel $J_2$, we must have $|\mathrm{Im} \,G_T(p)| > |J^D_{TL}(p)| pq/2$, or equivalently,
$pq < 1/|\mathrm{Im}\, G_L| \approx k^2/ |\mathrm{Im}\, \Sigma |$.
Since transverse waves already impose $q < 1/\ell$ we conclude that $p < k^3\ell^2 $. This becomes restrictive once $k\ell$
approaches unity.
It is straightforward to obtain the Drude approximation for the fictitious diffusion constant in Eq.~(\ref{DIM})
associated with $\mathrm{Im}\,\mathbf{ K}$, which was seen to be governed by $J_3$. Since $J_3 = \mathrm{Im}\, (\bar{G}_L {G}_T)$ we can write
\begin{eqnarray*}
\frac{1}{\pi N } \mathrm{Im} \, \sum_\mathbf{p} p^2\, (\bar{G_L} {G}_T) &=& \frac{1}{\pi N } \mathrm{Im} \sum_\mathbf{p} \left[ p^2\bar{G}_L \frac{1}{-p^2} + \frac{z^2}{\bar{z}^2} G_T \right] \\
&\approx& -\frac{c_0}{k} \frac{N_L + \frac{1}{2}N_T }{N_L + N_T}
\end{eqnarray*}
where we used the expression~(\ref{DOS3}) of the DOS split up in its longitudinal part $N_L$ and its transverse part $N_T$. With $v_E= c_0 N_T/(N_L+N_T)$ we find from Eq.~(\ref{DIM}),
\begin{equation}\label{DIM2}
D^D_{I}= \frac{1}{3} v_E k^{-1}
\end{equation}
The singular longitudinal DOS, proportional to $Q_0$, cancels. In the Drude approximation the fictitious diffusion constant $D_I$ of the mode $J_3$ is
a factor $k\ell$ \emph{larger }than the diffusion constant $D_2$ of the channel $J_2$, and a factor $k\ell$ \emph{smaller} than the transverse
diffusion of mode $J_0$.
This suggests that they all become of the same order near
$k\ell =1$.
\subsubsection{Self-energy dependent on wave number}
Any dependence on $p$ of the self-energy contributes to the diffusion current via the term $\mathbf{J}^{\delta\Sigma}$ derived in Eq.~(\ref{drudesigma}).
For electric dipoles such dependence on wave number comes in via the boomerang diagrams discussed in Eq.~(\ref{self}) with the subtle local field correction at large momenta derived in
Eq.~(\ref{sigma2}), on which we shall focus.
If we insert this term into Eq.~(\ref{drudesigma}) we find an interference term between longitudinal and transverse propagation,
\begin{eqnarray*}
\mathbf{J}^{\delta\Sigma}(\mathbf{p},\mathbf{q}) &=& -\mathrm{Re }\, \left(\frac{ n^2t^2}{k^2} G_T(p) G_L(p) \right) \frac{1}{p}\\
\times && ( 2 \hat{\mathbf{p}} \hat{\mathbf{p}} (\hat{\mathbf{p}}\cdot
\mathbf{q}) -\hat{\mathbf{p}} \mathbf{q} -
\mathbf{q} \hat{\mathbf{p}} )
\end{eqnarray*}
Its contribution to the Poynting vector in the $J_2$-channel diverges again as $\sum_\mathbf{p}1/p^2$. Restricting to large wave vectors,
\begin{equation}\label{DTL2}
\pi N(k) D^{\delta\Sigma}_2(k) = \frac{Q_0}{12 \pi }\frac{\mathrm{Re}\left(n^2t^2\right) }{k^3}
\end{equation}
The remainder of $\mathbf{\Sigma}(p)$ in Eq.~(\ref{self}) provides contributions to $J_0$, $J_1$ and $J_2$, and is proportional to $n^2$ once
the
divergence has been removed. Some algebra gives the following closed expression for the diffusion constant caused by the dependence on wave number of the
self-energy of two electric dipoles,
\begin{eqnarray}\label{DTL4}
\pi N(k) D^{\delta \Sigma}(k) = \frac{1}{4} n^2 \mathrm{Re}\, \mathrm{Tr}\, \int d^3\mathbf{r } (\mathbf{r}\cdot\hat{\mathbf{ q}})^2 \nonumber \\
\left(
\frac{t^2\mathbf{G}_0^2}{1-t^2\mathbf{G}_0^2}
- t^2 \mathbf{G}_0 \cdot \mathbf{G}_{0,T} \right)
\end{eqnarray}
This expression is free from any singularity, but its evaluation is beyond the scope of this work: it is a factor $1/k\ell$ smaller than
what was found in Eq.~(\ref{Dlong}) for the $J_2$-channel, and even a factor $1/(k\ell)^3$ smaller than the leading contribution in the $J_0$-channel.
\subsubsection{Scattering diffusion current tensor}
\label{scattdifsection}
The scattering diffusion current tensor $\mathbf{J}^{\delta G}(\mathbf{p},\mathbf{q})$ is given by the second term in Eq.~(\ref{BSJ}).
It vanishes for
any isotropic contribution to $\mathbf{U}_{\mathbf{pp}'}$, which here includes single scattering.
Among the different scattering events generated by two
electric dipoles, only the most-crossed diagrams and the forward-crossed diagrams induce an anisotropy in scattering. They are given by
\begin{equation}\label{MC2}
{U}^{\mathrm{MC}}_{\mathbf{pp}'}= n^2 |t|^2 \int d^3\mathbf{r} \,\frac{ t\mathbf{G}_0}{1-t^2 \mathbf{G}_0^2 }
\left( \frac{ t\mathbf{G}_0}{1-t^2 \mathbf{G}_0^2}\right)^* \mathrm{e}^{i(\mathbf{p}+\mathbf{p}')\cdot \mathbf{r}}
\end{equation}
and
\begin{eqnarray}\label{FC}
&& {U}^{\mathrm{FC}}_{\mathbf{pp}'} = n^2 |t|^2 \int d^3\mathbf{r} \nonumber\\ && \ \ \left[\frac{ \mathbf{1} }{1-t^2 \mathbf{G}_0^2 }
\left( \frac{\mathbf{1}}{1-t^2 \mathbf{G}_0^2}\right)^* - \mathbf{11} \right]\mathrm{e}^{i(\mathbf{p}- \mathbf{p'})\cdot \mathbf{r}}
\end{eqnarray}
The most-crossed diagrams generate a contribution to $ \mathbf{J}^{\delta G}(\mathbf{p},\mathbf{q})$ of the type $J_2$ leading to
a diffusion constant free from any singularity at large $p$, and of the same order as was found in Eq.~(\ref{DTL4}). We will ignore them for the same reason and
focus on the forward-crossed diagrams. We can write
\begin{eqnarray*}
&&\sum_\mathbf{p' } {U}^{\mathrm{FC}}_{\mathbf{pp}'} \cdot \delta_\mathbf{q} \mathbf{G}_0(\mathbf{p}') = n^2 |t|^2 \int d^3\mathbf{r}
\, (i\mathbf{q}\cdot \mathbf{r})
\\
&& \left[\frac{ \mathbf{1} }{1-t^2 \mathbf{G}_0^2 } \cdot \mathrm{Re}\, \mathbf{G}_0
\cdot \left( \frac{\mathbf{1}}{1-t^2 \mathbf{G}_0^2}\right)^* - \mathrm{Re}\, \mathbf{G}_0 \right]\mathrm{e}^{i\mathbf{p}\cdot \mathbf{r}}
\end{eqnarray*}
This integral is regular for all $p$, but does not decay fast enough with $p$ to prevent singularities in the channels $J_2$ and $J_3$. To see this,
the factor between brackets is written as $ \mathbf{F}= F_0(r) \mathbf{1} + F_1(r)
\hat{\mathbf{r}}\hat{\mathbf{r}} $.
The space integral above can be done to get,
\begin{eqnarray*}
\mathbf{J}^{\delta G}(\mathbf{p},\mathbf{q}) = n^2|t|^2 \mathbf{ G}(\mathbf{p})\mathbf{G}(\mathbf{p})^* &\cdot & \Bigl[ (\hat{\mathbf{p}}\cdot \mathbf{q}) f_0(p) \\
+\, \hat{\mathbf{p}}\hat{\mathbf{p}} (\hat{\mathbf{p}}\cdot \mathbf{q}) f_1(p)
&+& (\hat{\mathbf{p}}{\mathbf{q}}+{\mathbf{q}}\hat{\mathbf{p}}) f_2(p) \Bigr]
\end{eqnarray*}
with $3$ known functions related to $F_i(r)$. The first term with $f_0(p)$ is part of $J_0$, and
constitutes a high-order correction to transverse diffusion, of no interest here.
The longitudinal term with $f_1$ produces no Poynting vector.
We concentrate on the term with $f_2$, given by
\begin{equation*}
pf_2(p) =-\int d^3 \mathbf{r} \, F_1(r) {j_2(pr)}
\end{equation*}
This integral is finite for all $p$ but does not decay with $p$ since
\begin{eqnarray*}
\lim_{p\rightarrow \infty } pf_2(p) &=& \lim_{p\rightarrow \infty } \frac{-1}{p^3}\int d^3 \mathbf{y} \, F_1(y/p) {j_2(y)} \\
&=& -\frac{3}{4\pi k^2}\int d^3 \mathbf{y} \, \frac{{j_2(y)}}{y^3}= -\frac{1}{k^2}
\end{eqnarray*}
We have used that for small $r$,
$\mathbf{F}(\mathbf{r})= -\delta(\mathbf{r})/3k^2 -(\mathbf{1}-3\hat{\mathbf{r}}\hat{\mathbf{r}} ) /4\pi k^2 r^3$. The local contact term
does not
contribute.
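The limit above relies on the radial Bessel integral $\int_0^\infty dy\, j_2(y)/y = 1/3$, so that $\frac{3}{4\pi}\int d^3\mathbf{y}\, j_2(y)/y^3 = 3\int_0^\infty dy\, j_2(y)/y = 1$. A minimal numerical sketch of this integral (pure standard library; the small-argument series avoids the cancellation in the closed form of $j_2$):

```python
import math

# Check of the radial integral 3 * int_0^inf dy j_2(y)/y = 1 used in the
# limit of p f_2(p) above.

def j2(y):
    """Spherical Bessel function j_2; Taylor series at small argument."""
    if y < 0.5:
        return y * y / 15.0 * (1.0 - y * y / 14.0 + y ** 4 / 504.0)
    return (3.0 / y ** 3 - 1.0 / y) * math.sin(y) - 3.0 / y ** 2 * math.cos(y)

# Midpoint rule; the integrand decays absolutely as 1/y^2, so truncating
# at Y = 2000 leaves a tail below 1e-3.
h, Y = 0.005, 2000.0
total, y = 0.0, 0.0025
while y < Y:
    total += j2(y) / y * h
    y += h

assert abs(3.0 * total - 1.0) < 0.01   # analytic value of the integral is 1/3
```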
The diffusion constant in the $J_2$-channel is given by,
\begin{equation}\label{S1}
\pi N(k)D^{\delta G}_2(k)=-\frac{1}{3} n^2|t|^2 \sum_\mathbf{p} p f_2(p) \mathrm{Re}\, G_{0,L}(p)\bar{G}_{0,T}(p)
\end{equation}
This equation thus suffers from a divergence. Upon splitting it off and regularizing $\sum_\mathbf{p} 1/p^2 = Q_0 k/4\pi $ we find
\begin{equation}\label{S2a}
\pi N(k)D^{\delta G}_2(k)= - Q_0 \frac{n^2|t|^2 }{12\pi k^3} + \mathcal{O}(n^2)
\end{equation}
This is the third diverging term that will cancel against the two already found earlier. The term proportional to $f_2(p)$ also produces a
contribution to the $J_3$-channel,
\begin{equation}\label{J3S}
J_3^{\delta G}(p)= -\frac{ n^2|t|^2}{k^2} \frac{f_2(p)}{p} \mathrm{Im}\, G_T(p)
\end{equation}
This function decays rapidly as $1/p^6$ for large $p$. It is easily checked that the integral $\sum_\mathbf{p} pJ_3(p)$ is not singular at large $p$ and produces a correction of order $n^2$ in
Eq.~(\ref{imPoyn}) that will not be further discussed.
\subsubsection{Weak localization}
The last term in Eq.~(\ref{BSJ}), defined as $\mathbf{J}^S(\mathbf{p},\mathbf{q})$, mixes in principle all four transport mechanisms $J_i$.
For our model of electric dipoles, the ISA makes no contribution since it is isotropic, but the
diagrams~(\ref{MC2}) and (\ref{FC}) do. The leading order is obtained by inserting on the right-hand side the Drude expression for the transverse
diffusion current tensor $\mathbf{J}^{TT}$ found in Eq.~(\ref{JT}). Since this current is strongly peaked near $p=k$ we can approximate
$ \mathbf{J}^{TT}(\mathbf{p},\mathbf{q}) = 2\pi \ell
(\hat{\mathbf{p}}\cdot\mathbf{ q})\mathbf{ \Delta}_p \delta(k^2 -p^2)$ so that,
\begin{equation}\label{WL1}
\mathbf{J}^S(\mathbf{p},\mathbf{q})= \frac{k\ell}{2\pi } \mathbf{G}(\mathbf{p})\mathbf{G}(\mathbf{p})^* \cdot
\int \frac{d\hat{\mathbf{ k}} }{4\pi}U_{\mathbf{pk}}\cdot
\mathbf{ \Delta}_k (\hat{\mathbf{k}}\cdot \mathbf{q})
\end{equation}
Only the angle-dependent scattering terms $U^{MC}$ and $U^{FC}$ survive this integral. For convenience we can summarize Eqs.~(\ref{MC2}) and (\ref{FC}) by
\begin{equation}\label{CminF}
U_{\mathbf{pp}'} = \int d^3 \mathbf{r}\, \left[ U^{MC}(\mathbf{r}) e^{i(\mathbf{p}+\mathbf{p}')\cdot \mathbf{r}} +
U^{FC}(\mathbf{r}) e^{i(\mathbf{p}-\mathbf{p}')\cdot \mathbf{r}}\right]
\end{equation}
The angular integral over $\hat{\mathbf{k}}$ can be performed to get
\begin{eqnarray*}
&& \mathbf{J}^{S}(\mathbf{p},\mathbf{q}) = \frac{k\ell}{2\pi i} \mathbf{G}(\mathbf{p})\mathbf{G}(\mathbf{p})^* \cdot
\int d^3 \mathbf{r} \, e^{i\mathbf{p}\cdot \mathbf{r}} \cdot \\
&&\ \ \ \ \left[U^{MC}(\mathbf{r}) -
U^{FC}(\mathbf{r}) \right] \cdot\\
&& \left[\frac{j_2(kr)}{kr} ( \hat{\mathbf{r}}\mathbf{q}+ {\mathbf{q}} \hat{\mathbf{r}}) -(\hat{\mathbf{r}}\cdot \mathbf{q}) \left( j_1(kr) -\frac{j_2(kr)}{kr} + j_3(kr)\hat{\mathbf{r}} \hat{\mathbf{r}} \right)\right]
\end{eqnarray*}
The integrand of this equation for $\mathbf{J}^S(\mathbf{p},\mathbf{q})$ involves the difference $U=U^{MC} - U^{FC}$ between most-crossed and forward-crossed diagrams.
They both contain sub-radiant poles (where $t^2\mathbf{G}_0^2 \approx 1$), and, quite remarkably, this singularity largely cancels in the subtraction.
The equation generates all 4 transport modes,
\begin{eqnarray*}
\mathbf{J}^{S}(\mathbf{p},\mathbf{q}) &=& J_0^{S}(p) (\hat{\mathbf{p}}\cdot \mathbf{q})
+ J_1^{S}(p) \hat{\mathbf{p}} \hat{\mathbf{p }} (\hat{\mathbf{p}}\cdot \mathbf{q}) \\
&+& J_2^{S}(p) (\hat{\mathbf{p}} {\mathbf{q }} +{\mathbf{q}}\hat{\mathbf{p}}) + J_3^{S}(p) i(\hat{\mathbf{p}} {\mathbf{q }} - {\mathbf{q}}
\hat{\mathbf{p }} )
\end{eqnarray*}
We will show that the mode ${J}^S_0$ exhibits the standard weak localization correction, of relative order $1/k\ell $ and
negative in the diffusion constant. The mode ${J}^S_2$ is also subject to a weak localization correction,
of order $1/(k\ell)^2$ and \emph{positive}, showing that
not all modes are affected similarly by interference.
We first focus on $J_0^S$. Contrary to $U^{FC}$, $U^{MC}$ associated with two dipoles induces a singular
angular dependence
of the kind $1/|\mathbf{p}+\mathbf{p}'|$, and therefore dominates ${J}_0^S$.
The space integral is dominated by large $r$ so we insert $U^{MC} = (6\pi /\ell)^2 \mathbf{C}(\mathbf{r})\mathbf{ C}(\mathbf{r})^*$ with
$\mathbf{C}\approx - \mathbf{\Delta}_r (\exp(ikr)/4\pi r)$.
The angular integral over $\hat{\mathbf{r}} $ can be done. The end result is written as
\begin{eqnarray*}
J_0^{MC}(p) &=& - \frac{9}{2}\frac{k}{ \ell }|G_T(p)|^2 \\
\times \int_0^\infty &dr& \left( \frac{4}{5} j_1(kr) - \frac{1}{5}j_3(kr) \right)
\left( \frac{4}{5} j_1(pr) - \frac{1}{5}j_3(pr) \right) \\
&=& - \frac{3\pi }{20}\frac{1}{\ell}|G_T(p)|^2\left( 2 \frac{k^2}{p^2} + \frac{9}{7}\frac{k^4}{p^4} \right)
\end{eqnarray*}
The last equality holds only for $p \ge k$; $J_0^{MC}(p)$ decays rapidly as $1/p^6$ and has most of its weight near $p=k$.
The weak localization correction can be obtained from Eq.~(\ref{Kubo}),
\begin{eqnarray}\label{WL0}
\Delta D^{WL}_0 &=& \frac{1}{4\pi N(k)} \mathrm{Tr} \sum_\mathbf{p} \mathbf{L}(\mathbf{p},\hat{\mathbf{q}}) \cdot \mathbf{\Delta}_p
(\hat{\mathbf{p}}\cdot \hat{\mathbf{q}}) \nonumber \\
&\times& \frac{-3\pi^2 }{20 k }\frac{23}{7} \delta(k^2-p^2) = - \frac{1}{3}\frac{v_E }{k} \frac{69\pi }{280}
\end{eqnarray}
or equivalently $ \Delta D^{WL}_0/D^T = - 0.774/k\ell$. The numerical factor is actually larger than the leading one ($\pi/6=0.523 $) obtained for scalar
waves \cite{wellens}.
We can
compare this weak localization correction to the positive diffusion constant~(\ref{Dlong}) found for the $\mathbf{J}_2$ channel. If we extrapolate
to small values for $k\ell$, we conclude that the diffusion in the $J_2$-channel compensates the first weak localization correction
in the $J_0$-channel for $k\ell < 1.3$.
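The numbers quoted in this comparison follow from simple arithmetic, sketched below (assuming, as above, $D_0 = \frac{1}{3}v_E\ell$, $D_2 = \frac{1}{3}v_E/k^2\ell$ and the correction~(\ref{WL0})):

```python
import math

# Arithmetic behind the quoted numbers: the weak-localization coefficient
# 69*pi/280 from Eq. (WL0), the scalar-wave value pi/6, and the crossover
# value of k*ell below which D_2 ~ D_0/(k*ell)^2 exceeds |Delta D_0^WL|.
c_wl = 69.0 * math.pi / 280.0
assert abs(c_wl - 0.774) < 1e-3           # vector-wave coefficient
assert abs(math.pi / 6.0 - 0.523) < 1e-3  # scalar-wave coefficient

# Compensation condition 1/(k*ell)^2 = c_wl/(k*ell) gives k*ell = 1/c_wl.
assert abs(1.0 / c_wl - 1.29) < 0.01      # consistent with k*ell < 1.3
```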
The channel $J_2^S$ is more complicated. It is instructive to split the Green's function up into $\mathbf{G}_0(\mathbf{r})\sim P(r) \mathbf{\Delta}_r +
Q(r)\hat{\mathbf{ r}} \hat{\mathbf{r}}$, and to express the tensor $U(\mathbf{r}) = U^{MC} - U^{FC}$ as,
\begin{eqnarray*}
U(\mathbf{r}) &=& U^{TT} (r) \mathbf{\Delta}_r \mathbf{\Delta}_r + U^{LL}(r) \hat{\mathbf{r}}\hat{\mathbf{r}}\hat{\mathbf{r}}\hat{\mathbf{r}}
\\
&&+ \mathrm{ Re }\, U^{TL} ( \hat{\mathbf{r}}\hat{\mathbf{r}} \mathbf{\Delta}_r
+ \mathbf{\Delta}_r \hat{\mathbf{r}}\hat{\mathbf{r}} )
+ i \mathrm{Im}\, U^{TL} ( \mathbf{\Delta}_r \hat{\mathbf{r}}\hat{\mathbf{r}} - \hat{\mathbf{r}}\hat{\mathbf{r}}\mathbf{\Delta}_r )
\end{eqnarray*}
This corresponds to 4 different scattering events involving two dipoles at distance $\mathbf{r}$, with the electric field vector
either along or perpendicular to $\mathbf{r}$,
as well as their interferences. With the angular integral
over $\hat{\mathbf{r}}$ performed analytically, they each give the following contribution to $J_2^S$,
\begin{eqnarray*}
J_2^{TT}(p) &=& \frac{2\ell}{k} \mathrm{Re}\, G_T(p) \int_0^\infty dr\, r^2 \, U^{TT}(r) \\
&\times& \left(j_1(kr)-\frac{j_2(kr)}{kr}\right)
\frac{j_2(pr)}{pr}
\end{eqnarray*}
\begin{equation*}
J_2^{LL}(p)= -\frac{4\ell}{k} \mathrm{Re}\, G_T(p) \int_0^\infty dr \, r^2 \, U^{LL}(r) \frac{j_2(kr)}{kr} \frac{j_2(pr)}{pr}
\end{equation*}
\begin{eqnarray*}
J_2^{TL1}(p) &=& \frac{2\ell}{k} \mathrm{Re}\, G_T(p) \int_0^\infty dr\,r^2\, \mathrm{Re}\, U^{TL}(r) \\
&\times& \left( j_1(pr)-2\frac{j_2(pr)}{pr} \right) \frac{j_2(kr)}{kr}
\end{eqnarray*}
and finally
\begin{eqnarray*}
J_2^{TL2}(p) = \frac{2\ell}{k} \mathrm{Im}\, G_T(p) \int_0^\infty dr\, r^2\, \mathrm{Im}\, U^{TL}(r)j_1(pr)
\frac{j_2(kr)}{kr}
\end{eqnarray*}
The weak localization correction is found by
\begin{equation}\label{D2J2}
\Delta D_2 (k) = -\frac{1}{3} \frac{1}{\pi N(k) }\sum_\mathbf{p} p^2 (J_2^{TT} + J_2^{LL}+ J_2^{TL1}+ J_2^{TL2})
\end{equation}
To perform the integral over the wave vector $\mathbf{p}$ we use that
\begin{equation*}
\sum_\mathbf{p} \frac{p}{(k+i\epsilon)^2 - p^2} j_1(pr) = -\frac{ik^2 }{4\pi }h_1^{(1)}(kr)
\end{equation*}
and
\begin{equation*}
\sum_\mathbf{p} \frac{p}{(k+i\epsilon)^2 - p^2} \frac{j_2(pr)}{pr} = -\frac{ik}{4\pi r } \left[h_2^{(1)}(kr) + \frac{3i}{(kr)^3}\right]
\end{equation*}
Consequently, only the radial integrals $\int dr$ remain to be done numerically. The weak-localization correction $\Delta D_2 (k)$ is proportional to the density of the electric dipoles.
Figure~\ref{DTLfig} shows the total diffusion constant $D_2^{D}+\Delta D_2$ in the $J_2$ channel around the resonance frequency,
as well as the contributions stemming from the 4 individual terms in Eq.~(\ref{D2J2}).
The weak localization correction $\Delta D_2$ is dominated by the purely transverse and
longitudinal channels $TT$ and $LL$ between the two dipoles, which have competing signs. For negative detuning the purely longitudinal mode $LL$ dominates; for positive
detunings the $TT$ channel dominates and is more than twice as large as the Drude contribution $D_2^D$.
We note that the weak localization correction to the diffusion constant $D_2$ of the channel $\mathbf{J}_2(\mathbf{p},\mathbf{q})$ is of the same order as the Drude
approximation in Eq.~(\ref{Dlong}).
The sum of the 4 weak localization terms and the Drude approximation is strictly positive. At fixed density, positive detuning gives the largest diffusion constants in the
$J_2$ channel. Note that the ratio $D_2/D_0$ is of the same order $1/(k\ell)^2$, but of opposite sign compared to the standard (Cooperon) weak localization correction $-1/(k\ell)^2$.
This will be discussed in more detail in the next section, for which it will turn out useful to define a function $F(\delta) = (k \ell)^2 D_2/D_0$.
\begin{figure}
\includegraphics[width=0.95\columnwidth]{DALL2norm.pdf}
\caption{The diffusion constant in the $J_2$ channel (solid line), being the sum of the Drude approximation $D_2^D$ plus the weak-localization correction $ \Delta D_2 (k) $ in Eq.~(\ref{D2J2}) from two dipoles, as a function of the detuning $\delta =
(\omega-\omega_0)/\gamma$. It is normalized by the diffusion constant in the $J_0$ channel. The 4 weak localization corrections
discussed in this section are shown separately as dashed lines.}\label{DTLfig}
\end{figure}
\subsubsection{Summary of previous subsections}
\label{secsummary}
We have identified four mechanisms in the transport of electromagnetic waves, expressed by the diffusion current tensor~(\ref{Jgeneral}).
The results have been summarized in Table~\ref{tabeldiff}.
The mechanism described by $J_0(p)$ is the familiar picture of transverse wave diffusion near the shell $p\approx k$
and results in the diffusion constant (\ref{DDrude}). It is inversely proportional to the density of the electric dipoles and contains
an energy velocity that can be small since the impenetrable electric dipole scatterers contain temporarily stored, longitudinal energy.
The mechanism associated with $J_2$ is caused by interference of longitudinal and transverse fields, a necessary condition to carry a Poynting vector.
The leading term~(\ref{Dlong}), linear in the dipole density, comes from the Drude approximation.
Upon considering all scattering events involving two dipoles, we have been able to identify three singular terms. After regularization, they are expressed by
Eqs.~(\ref{DTL}), (\ref{DTL2}) and (\ref{S2a}) and proportional to the large quality factor $Q_0$ and the density squared.
They add up to
\begin{eqnarray}\label{DQ}
\pi N(k) \Delta D_2 (k) &=& \frac{Q_0}{6\pi k^3} n^2 \left( (\mathrm{Im}\, t)^2 + \frac{1}{2}\mathrm{Re} \, t^2 - \frac{1}{2}|t|^2 \right)\nonumber \\ &=&0
\end{eqnarray}
This explicit cancellation in the $J_2$-channel is very important and not entirely obvious, since the 3 terms stem from entirely different parts of
transport theory (Drude diffusion, Lorentz local field and enhanced forward scattering). Without cancellation they would have given an electromagnetic conductivity $Q_0/k\ell^2$,
not small at all with respect to the traditional transverse conductivity of order $k^2\ell$, since $Q_0$ is large for an atomic oscillator.
Their cancellation also supports the general renormalizability of electromagnetic transport theory with point-like dipoles.
It is highly plausible that this cancellation happens in all orders of perturbation theory, but this is currently impossible to prove in general.
We will use this hypothesis in the next section.
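The vanishing of the bracket in Eq.~(\ref{DQ}) is an algebraic identity, valid for any complex scattering amplitude $t$; a quick numerical sketch:

```python
import random

# Eq. (DQ): (Im t)^2 + Re(t^2)/2 - |t|^2/2 = 0 for every complex t,
# since with t = x + iy the bracket equals y^2 + (x^2-y^2)/2 - (x^2+y^2)/2.
random.seed(0)
for _ in range(100):
    t = complex(random.uniform(-5.0, 5.0), random.uniform(-5.0, 5.0))
    bracket = t.imag ** 2 + 0.5 * (t * t).real - 0.5 * abs(t) ** 2
    assert abs(bracket) < 1e-12
```

The cancellation is therefore exact for arbitrary detuning, not merely a numerical coincidence at particular frequencies.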
\begin{table}
\begin{tabular}{|c||c|c|c|c|}
\hline
& Drude $\mathbf{J}^D$ & $\mathbf{J}^{\delta \mathbf{\Sigma}}$ & $\mathbf{J}^{\delta \mathbf{G}}$ & WL $\mathbf{J}^{S}$ \\ \hline\hline
$J_0$ & $+\ell$ & $ 1/k^3\ell^2$ & $1/k^3\ell^2$ & $ -0.774/k $ \\
& (\ref{DDrude}) & (\ref{DTL4}) & NC & (\ref{WL0}) \\ \hline
$J_2$ & $+1/k^2\ell $ & $-Q(\delta)/2k^3\ell^2$ & $-{Q(\delta)}/2k^3\ell^2$ & $+{F(\delta)}/{k^2\ell}$ \\
& $+ Q(\delta)/k^3\ell^2$ (\ref{Dlong}) & (\ref{DTL2}) & (\ref{S2a}) & (\ref{D2J2}) Fig.~\ref{DTLfig} \\ \hline
$J_3$ & $+1/k$ & $ 1/k^3\ell^2$ & $ 1/k^3\ell^2$ & ${1}/{k^2\ell}$ \\
& (\ref{DIM2}) & (\ref{DTL4}) & (\ref{J3S}) & NC \\ \hline
\hline
\end{tabular}
\caption{Contributions to the transport mean free path for the various transport channels $J_i$ and the different diagrammatic classes identified in
the Bethe-Salpeter equation~(\ref{JDall}). When an explicit sign is found, it is indicated. Most values also depend explicitly on detuning; this is not indicated if not calculated.
Numbers refer to the corresponding equations. The channel $J_1$ does not contribute to the transport mean free path; $J_3$ only contributes to the transport mean free path associated with the
imaginary part of the Poynting vector. NC stands for ``not calculated'', WL for ``weak localization''. The terms proportional to $Q(\delta)$ are regularized singularities depending
on detuning $\delta$ that cancel in the transport mean free path. }\label{tabeldiff}
\end{table}
The $J_2$ channel, in the leading order modified by the weak localization from 2 electric dipoles, exhibits a positive diffusion constant, linear in the dipole density.
Although usually small compared to standard transverse diffusion, it must
be realized that this diffusion stems from a so far unexplored mechanism for electromagnetic wave diffusion,
involving the interference of longitudinal and transverse waves. In this transport channel, the first weak localization correction induced
by two electric dipoles is
actually of the same order as the Drude value and again \emph{positive}, showing that in the channel $J_2$ interferences behave differently
in comparison to the traditional transverse channel.
In early experiments on light scattering by cold Rubidium atoms \cite{nice}, values of $k \ell$ are of the order $1000$
and hence the $J_2$ channel should be irrelevant for optical transport. However, atomic Ytterbium clouds with very large densities
can be created \cite{japan} with $k \ell < 1$. Understanding light transport in such clouds will certainly require taking into
account the $J_2$ channel involving longitudinal modes.
\section{Radiative force density}\label{sectionrad}
A well-known relation exists between diffuse flow and radiative forces. In radiative transfer,
the energy flux is driven by the spatial gradient of the total energy density, and automatically carries momentum. In the presence of an induced polarization density $\mathbf{P}$, the electromagnetic force density $\mathbf{f}$ is caused by
the Lorentz force acting on the induced Coulomb charge
density $\rho (\mathbf{r})= -\bm{\nabla} \cdot \mathbf{P}$ and on the induced current density $\mathbf{j}=\partial_t \mathbf{P}$.
Maxwell's equations allow the formulation of a momentum conservation law that is very generally valid. It takes the form (before cycle averaging) \cite{loudonmom,abrahamik},
\begin{eqnarray}\label{momentumcon}
\partial_t \mathbf{\mathcal{G} }&+& \mathbf{f} = \bm{ \nabla} \cdot \mathbf{T}
\end{eqnarray}
with $\mathbf{\mathcal{G}}= (\mathbf{E} \times \mathbf{B})/4\pi c_0$ the electromagnetic momentum density, and $\mathbf{T}$ the momentum stress tensor,
\begin{equation}
\mathbf{T} = \frac{1}{4\pi }\left[ \mathbf{EE }+\mathbf{ BB}
-\frac{1}{2} \left( {\mathbf{E}}^2 + \mathbf{B}^2\right) \mathbf{1} + \mathbf{X} \right]
\end{equation}
The tensor $\mathbf{X}$ is related to internal angular momentum inside the particle
that we shall ignore here.
In the regime of multiple scattering, and after cycle averaging, Eq.~(\ref{dij}) expresses that $\langle E_i(\mathbf{r})\bar{E}_j(\mathbf{r}) \rangle
= \frac{1}{3} \langle |\mathbf{E}(\mathbf{r})|^2 \rangle \delta_{ij}$, and likewise for the magnetic field. The stress tensor $\mathbf{T}$ is thus diagonal on average,
meaning that the $i^{\mathrm{th}}$ component of the electromagnetic momentum only flows in the direction $i$. For stationary flow, we thus obtain
\begin{equation}\label{radiativeforce}
\langle \mathbf{f}(\mathbf{r})\rangle = -\frac{1}{3}\bm{\nabla} \left< \frac{|\mathbf{E}(\mathbf{r})|^2 +|\mathbf{B}(\mathbf{r})|^2 }{16\pi} \right>
\end{equation}
For a medium filled with impenetrable electric dipoles we have shown in Eq.~(\ref{DOS3}) that
$|\mathbf{E}|^2/16\pi$ is the total electric energy density having both longitudinal and transverse components, and equal to the magnetic energy density.
In the diffusion approximation, we write the averaged Poynting vector as
$\langle \mathbf{K}\rangle = -D \bm{\nabla} \left< {\left[ |\mathbf{E}(\mathbf{r})|^2 +|\mathbf{B}(\mathbf{r})|^2 \right] }/{16\pi} \right>$. This leads to a simple relation
\begin{equation}\label{KisP}
\langle \mathbf{f} \rangle = \frac{1}{3D}\langle \mathbf{ K} \rangle
\end{equation}
between
Poynting vector and radiative force density.
In the ISA, $D= \frac{1}{3} v_E \ell/(1 - \langle \cos\theta \rangle )$, and this reduces to the almost intuitive expression
$\langle \mathbf{f} \rangle= n\sigma (1- \langle \cos \theta\rangle ) \langle \mathbf{K} \rangle /v_E$ involving
the product of particle density and pressure cross-section of one scatterer.
The second factor accounts for transfer of momentum from the light to a single scatterer, and
of course for independent electric dipoles $\langle \cos\theta \rangle=0$.
The factor $1/v_E$ is less intuitive in this model. For one isolated scatterer it would clearly be $1/c_0$,
since for a plane wave with arbitrary direction in vacuum, momentum current density and energy current density (the Poynting vector) differ by a factor $1/c_0$.
In a medium filled with resonant dipoles, however, stored longitudinal energy contributes to the momentum current density $\mathbf{T}$ but not to the
energy current density $\mathbf{K}$. Put otherwise, scattering of a transverse state with wave number $p\approx k$ into a longitudinal mode with large wave number induces a significant recoil,
but does
not generate an energy current. For the medium filled with dipoles, the ratio $f/K$ thus acquires a factor $(N_L + N_T)/(N_T c_0) \approx 1/v_E$.
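As a quick numerical illustration of Eq.~(\ref{KisP}), the sketch below checks that $\langle \mathbf{f} \rangle = \langle \mathbf{K} \rangle/3D$ with $D= \frac{1}{3} v_E \ell$ (independent dipoles, $\langle \cos\theta\rangle = 0$) reproduces the intuitive expression $n\sigma \langle \mathbf{K}\rangle /v_E$. This is a hedged sketch: the numerical values are arbitrary illustrative assumptions, and the relation $\ell = 1/n\sigma$ for the scattering mean free path is assumed.

```python
# Hedged numerical check of <f> = <K>/(3D) in the ISA, assuming the
# standard mean-free-path relation ell = 1/(n*sigma) and isotropic
# dipole scattering (<cos theta> = 0). All values are illustrative.
n = 1.0e12       # scatterer number density (assumed, arbitrary units)
sigma = 3.0e-13  # pressure cross-section of one scatterer (assumed)
v_E = 2.0e8      # transport (energy) velocity, < c_0 in a resonant medium
K = 5.0          # magnitude of the averaged Poynting vector (assumed)

ell = 1.0 / (n * sigma)          # scattering mean free path
D = v_E * ell / 3.0              # diffusion constant, <cos theta> = 0
f_from_diffusion = K / (3.0 * D)             # Eq. (KisP)
f_explicit = n * sigma * K / v_E             # intuitive expression

assert abs(f_from_diffusion - f_explicit) <= 1e-9 * f_explicit
```

The two expressions agree identically once $\ell = 1/n\sigma$ is inserted; the code merely makes the bookkeeping explicit.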
\section{No Anderson localization of light? }\label{sectionAL}
In the following we will make a first attempt to include the 4 transport mechanisms, introduced in the previous section, into the self-consistent transport
theory for localization of light.
This theory is celebrated by some for its surprisingly simple description of the transition from long-range diffusion to localization. Others criticize the theory for its oversimplified nature, neglecting many scattering events in the collision operator
$U_{\mathbf{pp}'}(\mathbf{q})$ introduced in Eq.~(\ref{BS2}).
The self-consistent theory predicts the Ioffe-Regel
criterion $k_e\ell \approx 1$ for the mobility edge in 3D, produces the universal finite-size scaling in arbitrary dimension, and is easy to work with. However, the theory predicts an incorrect critical exponent for the localization transition and fails in the presence of an external magnetic field.
In the standard theory, adapted from electron localization \cite{vw},
the most-crossed diagrams are included into the diffusion constant of the light. These diagrams involve the interference of time-reversed waves and are part of the
scattering vertex $U_{\mathbf{pp}'}(\mathbf{q})$. By reciprocity, the most-crossed diagrams also contain a
hydrodynamic pole, featuring the same diffusion constant. This immediately turns the calculation of $D$ into a self-consistent problem because the most-crossed diagrams
modify the diffusion current $J(\mathbf{p},\mathbf{q})$ in Eq.~(\ref{BSJ}). We recall that in the case of
electromagnetic waves the diffusion current is a tensor with 4 independent parts. No Anderson
localization was seen to occur in recent numerical simulations with electric dipoles \cite{sergey0}. The intention of this section is to discover
what exactly breaks down in this theory when taking into account longitudinal waves.
We here summarize the various approximations made, which are basically equivalent to the ones made in previous works, even if often adopted implicitly \cite{sheng, akkermans, vw}.
\begin{itemize}
\item The most-crossed diagrams, involving scattering events associated with many dipoles, are the only angle-dependent scattering events that influence the
diffusion current tensor $\mathbf{J}(\mathbf{\mathbf{p}},\mathbf{q})$ when going beyond the Drude
approach. The existence of other diagrams is only acknowledged implicitly to guarantee flux conservation.
Weak localization effects caused by low-order scattering events, such as those described by Eqs.~(\ref{WL0}) and (\ref{D2J2}) are not included either,
although this could be done without dramatic changes in the theory. The
implicit existence of other diagrams is also necessary to justify the cancelation of UV-singularities in transport theory. For scattering events involving only two electric dipoles,
UV divergences were seen to cancel \emph{explicitly} earlier in this work, but no general demonstration is known.
\item The diffuse regime of the most-crossed diagrams, only valid on spatial scales well beyond the mean free path, is assumed to hold on
scales up to the mean free path.
On this scale we may expect the diffusion kernel to be of the type $D(q)q^2$ which is disregarded in the standard version of the self-consistent theory.
\item The electromagnetic self-energies $\Sigma_{T/L}(k,p)$ are assumed not to depend on $p$.
In particular this means that $\Sigma_T(k) = \Sigma_{L}(k) $. This is definitely an approximation, even for point-like dipoles, that needs
more study, but in general, such wave number dependence is not believed to be essential for Anderson localization.
\end{itemize}
The contribution of the most-crossed diagrams to the scattering vertex $U_{\mathbf{pp'}} (\mathbf{q})$ can be obtained from the reducible vertex
$R_{\mathbf{pp'}} (\mathbf{q})$ introduced in Eq.~(\ref{BS}) by removing the four external Dyson propagators, and time-reversing the bottom line.
This gives
\begin{equation}\label{MC}
U^{MC}_{\mathbf{pp}'; ij|kl}(\mathbf{q}) = \frac{\tilde{d}_{il}(\mathbf{f}+\mathbf{q},\mathbf{Q})\tilde{ d}_{kj}(-\mathbf{f}+\mathbf{q},\mathbf{Q}) + \mathcal{O}(Q^2)}
{\pi N D\mathbf{ Q}^2}
\end{equation}
with the notation
$\mathbf{Q}=\mathbf{p}+\mathbf{p}'$ and $\mathbf{f}=(\mathbf{p}-\mathbf{p}')/2$.
In this expression the tensor $\tilde{\mathbf{d}}(\mathbf{p},\mathbf{Q})$ is the diffuse eigenfunction defined in Eq.~(\ref{dij}) stripped
from the 4 external lines in Fig.~\ref{Reps} (transforming $-\mathrm{Im}\,\mathbf{ G} + i\mathbf{J}/2 $ into $-\mathrm{Im}\,\mathbf{ \Sigma} + i\mathbf{j}/2 $ with $\mathbf{j}$ again a Hermitian bilinear form). This leads to $\tilde{\mathbf{d}}(\pm \mathbf{f}+\mathbf{q},\mathbf{Q}) = -\mathrm{Im}\, \Sigma (\pm \mathbf{f}+\mathbf{q}) + \mathbf{j}(\pm \mathbf{f}+\mathbf{q},
\mathbf{Q})$.
A generalized Ward identity,
\begin{eqnarray}\label{ward2}
(\mathbf{q}\cdot \partial_\mathbf{p}) \mathrm{Re}\, \mathbf{\Sigma}(\mathbf{p}) &=& \sum_{\mathbf{p}'} U_{\mathbf{pp}'} (0)
\cdot (\mathbf{q}\cdot \partial_{\mathbf{p}'}) \mathrm{Re}\, \mathbf{G}(\mathbf{p}') \nonumber \\
&& + \sum_{\mathbf{p}'} \delta_\mathbf{q} U_{\mathbf{pp}'} (\mathbf{q}) \cdot
\mathrm{Im}\, \mathbf{G}(\mathbf{p}')
\end{eqnarray}
can be used to eliminate the second term in
Eq.~(\ref{BSJ}), which then transforms into
\begin{eqnarray}\label{BSJ2}
\mathbf{J}(\mathbf{p},\mathbf{q}) &=& \mathbf{J}^D(\mathbf{p},\mathbf{q}) \nonumber \\
&+& \mathbf{ G}(\mathbf{p})\cdot (\mathbf{q}\cdot \partial_\mathbf{p}) \mathrm{Re}\, \mathbf{\Sigma}(\mathbf{p}) \cdot \mathbf{G}^*(\mathbf{p})
\nonumber \\
&-& \mathbf{ G}(\mathbf{p})\mathbf{G}^*(\mathbf{p}) \cdot \sum_{\mathbf{p}'} \delta_\mathbf{q} U_{\mathbf{pp}'} (\mathbf{q}) \cdot
\mathrm{Im}\, \mathbf{G}(\mathbf{p}') \nonumber \\
&+& \mathbf{ G}(\mathbf{p})\mathbf{G}^*(\mathbf{p}) \cdot \sum_{\mathbf{p}'} {U}_{\mathbf{pp}'}\cdot \mathbf{J}(\mathbf{p}',\mathbf{q})
\end{eqnarray}
The first and second terms cannot depend on the diffusion constant. Because $U^{MC}$ depends on
both the diffusion constant $D$ and the entire diffusion tensor $\mathbf{J}(\mathbf{p},\mathbf{q})$, the self-consistent theory would, in its most advanced version, be a non-linear
integral equation for the second-rank tensor $\mathbf{J}$.
In the following we apply the approximations specified above. The above hydrodynamic limit of $U^{MC}$ is assumed valid when
$|\mathbf{p}+\mathbf{p}'| \ll 1/\ell$. In the standard approach of the self-consistent theory
one focuses on its diffuse pole near $\mathbf{p}\approx - \mathbf{p}'$, and neglects all other dependence on $\mathbf{p}'$.
Secondly, wave number dependence of the self-energy is ignored.
In that case the self-consistent problem simplifies to the following equation,
\begin{eqnarray}\label{SC1}
\mathbf{ J}(\mathbf{p},\mathbf{q}) &\approx& \mathbf{J}^D(\mathbf{p},\mathbf{q}) + \mathbf{G}(\mathbf{p})\mathbf{G}(\mathbf{p})^* \cdot
\nonumber \\ &\, & \sum_{|\mathbf{Q}| < q_m} U^{MC}_{\mathbf{Q}=\mathbf{p}+\mathbf{p}'}(0) \cdot\mathbf{ J}(-\mathbf{p} ,\mathbf{q})
\end{eqnarray}
with the Drude current tensor $\mathbf{J}^D$ given in Eq.~(\ref{drudeagain}). In particular, the third term in Eq.~(\ref{BSJ2}) becomes proportional to
$\sum_\mathbf{Q} \mathbf{Q} /D\mathbf{Q}^2$ and drops out. The sum over $\mathbf{Q}$ that remains in Eq.~(\ref{SC1}) is recognized as the
return Green's function of the diffusion
equation in real space \cite{akkermans}, though with short, non-diffusive paths eliminated by the condition $Q < q_m$.
Diffusion constant and diffusion current tensor are related by
the Kubo formula
\begin{eqnarray}\label{SC2}
\sigma \equiv \pi N D &=& \frac{1}{4}\mathrm{Tr} \, \sum_\mathbf{p}
\mathbf{L} (\mathbf{p},\hat{\mathbf{q}}) \cdot \mathbf{J}(\mathbf{p},\hat{\mathbf{q}})
\nonumber \\ &=& \frac{1}{6\pi^2}\int_0^\infty dp \, p^4 \left( J_0(p) -J_2(p) \right)
\end{eqnarray}
It can readily be seen that Eq.~(\ref{SC1}), despite its simplicity, couples the four diffusion current tensors
identified in Eq.~(\ref{Jgeneral}), among which $J_0(p)$ and $J_2(p)$ are relevant in Eq.~(\ref{SC2}). A mobility edge is characterized by $D=0$.
The large weight of large wave numbers ($p \gg k$) in the Kubo formula is evident; UV divergences will occur, which will be regularized by invoking the existence of
compensating diagrams.
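As a consistency check of the Kubo formula~(\ref{SC2}), the sketch below numerically integrates the Drude limit of the transverse channel. It is a hedged sketch under the following assumptions (illustrative, not part of the formal derivation): only the transverse channel contributes, the Drude current is $J_0^D(p) = 4 (\mathrm{Im}\, G_T(p))^2$ as quoted later in the text, and the transverse propagator takes the simple form $G_T(p) = 1/(p^2 - K^2)$ with $K = k_e + i/2\ell$. In the dilute regime $k_e\ell \gg 1$ the quadrature should reproduce the Drude conductivity $\sigma_D = k_e^2\ell/6\pi$, i.e. $\ell^* = \ell$.

```python
import numpy as np

# Hedged numerical check of the Kubo formula (SC2) in the Drude limit.
# Assumptions: transverse channel only, J_0^D(p) = 4 (Im G_T(p))^2,
# and G_T(p) = 1/(p^2 - K^2) with K = k_e + i/(2*ell).
k_e, ell = 1.0, 20.0                        # dilute regime, k_e * ell = 20
K = k_e + 1j / (2.0 * ell)

p = np.linspace(1e-4, 10.0, 200_001)        # fine grid resolving the 1/2ell width
G_T = 1.0 / (p**2 - K**2)
J0_drude = 4.0 * np.imag(G_T)**2

integrand = p**4 * J0_drude / (6.0 * np.pi**2)
# trapezoidal rule (written out for compatibility across NumPy versions)
sigma_kubo = np.sum(0.5 * (integrand[1:] + integrand[:-1]) * np.diff(p))

sigma_drude = k_e**2 * ell / (6.0 * np.pi)  # expected Drude value, ell* = ell
assert abs(sigma_kubo - sigma_drude) / sigma_drude < 0.05
```

For $k_e\ell = 20$ the quadrature reproduces $\sigma_D$ to within a few percent; the residual deviation is of the expected relative order $1/k_e\ell$.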
\subsection{Transverse approximation}
In most applications of the self-consistent theory for localization of light one ignores polarization and focuses on the transverse channel $J_0(p)$ and, relatedly,
assumes this channel to be governed by
excitations near the frequency shell $p\approx k_e$ of the effective medium, where their DOS is largest. In this approximation weak localization of light becomes
essentially equivalent to that of scalar waves. As a matter of fact, this approximation applies to localization of elastic waves with all
polarization modes propagating with the same velocity everywhere. We refer to the work of Zhang and Sheng \cite{zhang}, where the self-consistent theory for
localization of scalar waves is derived and discussed in great detail.
We will first neglect the weak localization found in Eq.~(\ref{WL0}) associated with two dipoles and incorporate it in the next section when dealing with the mode $J_2(p)$.
Upon putting $\mathbf{J}(\mathbf{p},\mathbf{q}) = J_0(p) (\hat{\mathbf{p}}\cdot \mathbf{q}) \mathbf{\Delta}_p$ into Eq.~(\ref{SC1}), and by assuming that $\mathrm{Im}\, \Sigma(p)$ is independent of $p$,
the explicit solution is just
\begin{equation}\label{SC3}
J_0(k,p) = J_0^D(k,p)\left[1 + \frac{\sigma_c}{\sigma} A(k,p) \right]^{-1}
\end{equation}
with the dimensionless function $A(k,p) = |G_T(k,p)|^2 \times \mathrm{Im}^2 \Sigma_T(k,p) $, and
a critical conductivity defined as $\sigma_c \equiv \sum_Q 1/Q^2 = q_m/2\pi^2 $. Note that $A(k,p) \leq 1$ is a bounded function of $p$. Equation~(\ref{SC3}) says that the amount of weak
localization varies in phase space: it is maximal at the frequency shell $p=k_e$ of the transverse
waves and small when $p\gg k$. From Eq.~(\ref{SC2}) we obtain a closed equation for $\sigma$,
\begin{equation}\label{SC4}
\sigma(k) = \frac{1}{3} \sum_\mathbf{p} \frac{p^2 J_0^D(k,p)}{1 + (\sigma_c/\sigma) A(k,p) }
\end{equation}
The Kubo formula attributes a large weight to large $p$; nevertheless, the integral converges for all $\sigma > 0$. If $\sigma > \sigma_c$, large wave vectors
$p$ are not relevant in the denominator since $J_0^D(k,p)= 4 (\mathrm{Im}\, G_T(k,p))^2$
decays rapidly with $p$. The integral is dominated by $\mathbf{p}$ near the frequency shell $p\approx k_e$ so that
\begin{eqnarray}\label{SC5}
\sigma(k) &\approx & \frac{\sigma^D (k)}{1 + (\sigma_c/\sigma)}
\Rightarrow \sigma(k) = \sigma^D(k) \left( 1 - \frac{\sigma_c}{\sigma_D} \right) \nonumber \\
&=& \sigma^D(k) \left( 1 - \frac{3}{\pi} \frac{1}{(k_e\ell)^2}\right)
\end{eqnarray}
where $q_m= 1/\ell$ has been chosen. This result, when extrapolated,
locates the mobility edge at $k_e\ell = 0.977$.
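The quoted value of the mobility edge follows directly from Eq.~(\ref{SC5}): the extrapolated conductivity vanishes when $3/\pi(k_e\ell)^2 = 1$, i.e. at $k_e\ell = \sqrt{3/\pi}$. A one-line numerical check:

```python
import math

# The extrapolated conductivity sigma = sigma_D * (1 - 3/(pi (k_e ell)^2))
# of Eq. (SC5) vanishes at k_e ell = sqrt(3/pi); check the quoted 0.977.
x_edge = math.sqrt(3.0 / math.pi)
assert abs(x_edge - 0.977) < 1e-3

sigma_hat = lambda x: 1.0 - 3.0 / (math.pi * x**2)   # sigma/sigma_D
assert sigma_hat(2.0) > 0       # diffusive side
assert sigma_hat(0.5) < 0       # beyond the extrapolated edge
```

The sign change of $\sigma/\sigma_D$ across $k_e\ell = 0.977$ confirms the extrapolated location of the mobility edge.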
For $\sigma < \sigma_c$, however, the $p$-dependence of the denominator
shifts the integral over $p$ to larger values of $p$.
At the
mobility edge $\sigma=0$ and
\begin{equation}\label{MEdiv}
\sigma_c = \frac{4}{3}\sum_\mathbf{p} p^2 |G_T(k,p)|^2
\end{equation}
This involves an integral that diverges as $\sum_\mathbf{p} 1/p^2$, or equivalently as $1/r$ as $r\rightarrow 0$, a singularity that is
not to be confused with the diverging integral over diffuse modes $\mathbf{Q}$ in
Eq.~(\ref{SC1}), which is a clear artifact of the diffusion approximation at small length scales and is repaired by the cut-off $q_m$.
The present divergence at large $p$ is likely to be artificial, caused
by one of the approximations, inherent to the self-consistent theory, made to go from Eq.~(\ref{BSJ2}) to Eq.~(\ref{SC1}). In standard treatments
of the self-consistent theory \cite{vw,sheng} this problem is avoided by assuming $J(\mathbf{p},\mathbf{q})$ to be ``strongly peaked near the frequency shell'' $p\approx k_e$.
The neglect of the third term on the right-hand side of Eq.~(\ref{BSJ2}) is no longer justified for large $p$, since $Q$ then also becomes large and one would need to generalize
Eq.~(\ref{MC}) beyond the diffusion approximation.
In this work we will ignore this complication.
When we subtract the singularity $\sum_\mathbf{p} 1/p^2$ by hand, assuming it cancels against other terms that have been ignored, we get
\begin{eqnarray}\label{SC6}
\sigma_c &= &\frac{4}{3} \sum_\mathbf{p} \left( p^2|G_T(k, p)|^2 -\frac{1}{p^2} \right) \nonumber \\
&=&
\frac{k^2_e\ell}{3 \pi} \left(1 - \frac{3}{4(k_e\ell)^2}\right)
\end{eqnarray}
This locates the mobility edge at $k_e\ell = 0.866$ with the choice $q_m = 1/\ell$. This is close to the extrapolated value above, and
we could argue that the extrapolation in Eq.~(\ref{SC5}) is satisfactory up to the mobility edge and consistent with both previous theory
\cite{zhang,sheng} and numerical simulations of scalar dipoles \cite{sergeyscalar}. It is nevertheless tempting to speculate that this divergence highlights
a true breakdown of the self-consistent theory
and that a more rigorous, regular solution may actually exhibit a critical
exponent different from one, the value predicted by the extrapolation~(\ref{SC5}).
\subsection{Inclusion of longitudinal modes}
In this section we give a simplified description of how the self-consistent theory is extended when the other 3 diffusion modes are included.
Let us start with Eq.~(\ref{Jgeneral}) and write the diffusion current tensor as
\begin{eqnarray}\label{Jgeneral2}
{J}_{ij}(\mathbf{p},\mathbf{q}) &=& \sum_{n=0}^3 J_n (p) {\chi}^{n}_{ij}(\mathbf{p},q)
\end{eqnarray}
Let us set $ U^{MC}_{ij;kl} = (U/\sigma) \delta_{kj} \delta_{il}$ with
$U= (\mathrm{Im}\, \Sigma)^2 \sigma_c$ (with dimension $1/m^5$) and
$\sigma = \pi N D$ the conductivity (with dimension $1/m$).
We can check that,
\begin{eqnarray*}
&& G_{ni} G^*_{jm} \delta_{kj} \delta_{il} {\chi}^{0}_{kl} = |G_T|^2{\chi}^{0}_{nm} \\
&& G_{ni} G^*_{jm} \delta_{kj} \delta_{il} {\chi}^{1}_{kl}= |G_L|^2 {\chi}^{1}_{nm}\\
&& G_{ni} G^*_{jm} \delta_{kj} \delta_{il} {\chi}^{2}_{kl}= {R} ({\chi}^{2}_{nm}- 2{\chi}^{1}_{nm} ) + I {\chi}^{3}_{nm} \\
&& G_{ni} G^*_{jm} \delta_{kj} \delta_{il} {\chi}^{3}_{kl} = -R {\chi}^{3}_{nm} + I ({\chi}^{2}_{nm}- 2
{\chi}^{1}_{nm}) \\
\end{eqnarray*}
where we abbreviated $R(p)= \mathrm{Re} \, G_L\bar{G}_T$ and $I(p) = \mathrm{Im} \, G_L\bar{G}_T$. This gives the following self-consistent set of equations
\begin{eqnarray}\label{scvector}
&& J_0(p) = J_0^D(p) - \left( \frac{U}{\sigma} |G_T(p)|^2 + \frac{{0.774}}{k_e\ell } \right) J_0(p) \nonumber \\
&& \left( 1 + |G_L|^2\frac{U}{\sigma} \right) J_1(p) = J_1^D(p) + 2 R(p) \frac{U}{\sigma} J_2(p) \nonumber \\
& & \ \ \ \ \ + 2 I(p) \frac{U}{\sigma} J_3(p) \nonumber \\
&& \left( 1+ R(p) \frac{U}{\sigma} \right)J_2(p) + I(p) \frac{U}{\sigma} J_3(p) = J_2^D(p) \nonumber \\
&& \ \ \nonumber \\
&& I(p) \frac{U}{\sigma} J_2(p) + \left(1 - R(p) \frac{U}{\sigma} \right) J_3(p) = J_3^D(p)
\end{eqnarray}
The equation for the transverse mode $J_0$ discussed in the previous section is not altered and decouples from the others. We have added the weak-localization contribution
caused by 2 dipoles found in Eq.~(\ref{WL0}), since it is not covered by the diffusion approximation, and assumed it enters just as a number in the equation for
$J_0(p)$. This is a clear oversimplification but has no huge consequences for what follows.
The purely longitudinal diffusion current $J_1$ is known once the others are known, but it is not relevant for the Poynting vector and can likewise
be ignored.
The modes $J_2$ and $J_3$ however, couple and the solution for $J_2$ is
\begin{equation}\label{scJ2}
J_2(p)= \frac{J_2^D (p) + (U/\sigma) C_2(p)}{1- U^2 |G_L(p)G_T(p)|^2/\sigma^2}
\end{equation}
We recall from Eq.~(\ref{LT}) that $J_2^D(p)= -2\mathrm{Im}\, G_L \mathrm{Im}\, G_T <0 $ and $J_3^D (p) = -I(p)$. Thus, with $K= k_e + i/2\ell$ the complex pole of $G_T(p)$,
the function $C_2(p)$ is given by
\begin{eqnarray}\label{Fp}
C_2(p) &=& I(p)^2-J_2^D(p)R(p) \nonumber \\
&= & \left( \frac{k_e}{\ell}\right)^2 \frac{|G(p)|^2 + |K|^4|G(p)|^4 }{|K|^8}
\end{eqnarray}
which is strictly positive.
Before calculating the diffusion constant we first discuss these results.
Since the wave number integral
of $J_2(p)$ contributes to the diffusion constant via Eq.~(\ref{SC2}), its denominator cannot possess any non-integrable singularity.
This implies that
\begin{equation}\label{limit1}
\sigma(k) > U |G_L(p) G_T(p)|
\end{equation}
must hold for all $p$. This inequality excludes \emph{de facto} a mobility edge. It is most stringent near the transverse frequency shell
$p \approx k_e$ (more precisely $p^2 = \mathrm{Re}\, K^2 = k_e^2 - 1/4\ell^2$, positive as long as $k_e\ell > 1/2$) where
$G_T = 1/ (-i \mathrm{Im}\, \Sigma) $. Furthermore, since we neglect $p$-dependence in self-energies we set $|G_L| = 1/|K|^2 $ and neglect the fact
that near the transverse shell the complex wave numbers of longitudinal and transverse modes are not necessarily equal.
Recalling that $U = (\mathrm{Im}\, \Sigma)^2 \sigma_c$ and setting $q_m = q/\ell$, with $q$ of order unity, the minimal possible electromagnetic conductivity is
given by
\begin{equation}\label{limit2}
\sigma(k) > c_2(k_e\ell) \sigma_D(k)
\end{equation}
with $c_2(x) =(3q/\pi x) (x^2 + \frac{1}{4})^{-1}$ for $k_e\ell > 1/2$.
Equivalently, if the transport mean free path $\ell^*$ is defined as usual via $ \sigma = k_e^2\ell^*/6\pi$ \cite{LB}, then
\begin{equation}\label{limit3}
k_e \ell^* > \frac{3q }{\pi} \frac{1}{(k_e \ell)^2 + \frac{1}{4}}
\end{equation}
for $k_e\ell > 1/2$. For $k_e\ell < 1/2$ the maximum occurs at $p=0$ and we find
\begin{equation}\label{limit4}
k_e \ell^* > \frac{3q }{\pi} \frac{(k_e\ell)^2}{[(k_e \ell)^2 + \frac{1}{4}]^3}
\end{equation}
The very existence of this minimum conductivity for vector waves is determined by scattering properties of longitudinal and transverse waves near the frequency shell
and not by large wave numbers $p$ that are subject to uncertain regularization procedures. It nevertheless relies on our choice for $q$, and the
approximation that $K_L (p) = K_T(p) = K$.
The above lower bound becomes stringent for $k_e\ell \approx 1$ where one would have expected a mobility edge. In this regime the maximum occurs at $p < k $, so that setting
$K_L (p) = K_T(p) = K$ may not be a bad approximation, knowing that for $p \ll k$ it is valid (see for instance the $p$-dependent self-energies in Fig.~\ref{boom}).
For $q=1$ we find $ k_e \ell^* > 0.76 $ at $k_e\ell = 1$,
and $ k_e \ell^* > 2.19 $ upon entering the evanescent regime at $k_e\ell = 1/2$.
For $k_e\ell= 0.35$ the minimum value is $2.26$.
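The bounds~(\ref{limit3}) and~(\ref{limit4}) are easy to evaluate numerically. The sketch below checks two of the quoted values for $q=1$; it is a direct transcription of the two formulas, with the coefficient functions treated as stated in the text.

```python
import math

# Hedged check of the minimum-conductivity bounds (limit3)/(limit4) for q = 1.
q = 1.0
# k_e ell > 1/2: bound from Eq. (limit3)
bound_shell = lambda x: (3*q/math.pi) / (x**2 + 0.25)
# k_e ell < 1/2: bound from Eq. (limit4), maximum at p = 0
bound_p0 = lambda x: (3*q/math.pi) * x**2 / (x**2 + 0.25)**3

assert abs(bound_shell(1.0) - 0.76) < 0.01   # quoted value at k_e ell = 1
assert abs(bound_p0(0.35) - 2.26) < 0.01     # quoted value at k_e ell = 0.35
```

Both quoted numbers are reproduced to the displayed precision.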
\begin{figure}[t]
\includegraphics[width=0.95\columnwidth]{MEabs.pdf}
\caption{The self-consistent solution for the electromagnetic transport mean free path $\ell^*$ defined by $\sigma = k_e^2 \ell^*/6\pi$.
Shown are the values for $k_e\ell^*$ for the full solution in this section, the conventional
picture described by Eq.~(\ref{SC2}) with only the transverse mode $J_0$ considered, with a mobility edge predicted around $k_e\ell \approx 1$,
the lower threshold imposed by the existence of the diffusion modes $J_2$ and $J_3$, as well as $k_e\ell^*$
associated with the fictitious conductivity and $J_3$. We used a cut-off $q_m = 1/\ell$.}\label{MEfig}
\end{figure}
We next calculate the electromagnetic conductivity, which is the sum of the conductivities of the two channels,
$\hat{\sigma} \equiv \sigma / \sigma_D= \hat{\sigma}_0 + \hat{\sigma}_2 $.
Since the conductivity no longer vanishes, the transverse diffusion mode $J_0$,
which decouples from the others, can be given the same treatment as done in Eq.~(\ref{SC5}), with the denominator removed and taken outside at its maximum value.
This gives the first equation for the conductivity of the transverse channel,
\begin{eqnarray}\label{scvector1}
\hat{\sigma}_0 = \frac{1}{1 + c_1(k_e\ell) /\hat{\sigma} + {0.774}/k_e\ell}
\end{eqnarray}
with $c_1(x) = 3q/\pi x^2$. We can apply the same procedure for the diffusion current $J_2$. However, as
was seen in Eq.~(\ref{Dlong}) to be the case for the Drude component, the remaining integral suffers from a divergence at large $p$,
again of the kind~(\ref{MEdiv}). The regularization
proposed in Eq.~(\ref{SC6}) is not satisfactory here since it changes sign at $k_e\ell = 0.86$ and would produce a negative Drude conductivity in the $TL$-channel, arguably not physical.
In Sec.~\ref{drudesection} we found that for $p > k_e^2\ell$ the diffusion theory in the $J_2$ channel breaks down so that the present
theory is not valid for too large $p$.
We therefore propose a regularization
\begin{equation*}
\sum_\mathbf{p} p^2|G_T(p)|^2 \rightarrow |K|^2 \sum_\mathbf{p} |G_T(p)|^2 = |K|^2 \frac{\ell}{4\pi}
\end{equation*}
with $K= k_e + i/2\ell$ the transverse complex wave number. In real space, $G_T({r}) = -\exp(iKr)/4\pi r$, and this regularization comes down to
\begin{equation*}
\int d^3\mathbf{r}\, \left| \bm{\nabla} \left(\frac{\exp(iKr) }{-4\pi r} \right)\right|^2 \rightarrow |K|^2 \int d^3\mathbf{r}\,
\left|\frac{\exp(iKr) }{-4\pi r}\right|^2
\end{equation*}
meaning that the regularization only considers the far field when taking the spatial derivative.
In particular this leads to the Drude diffusion constant of channel $J_2$,
\begin{eqnarray*}
D_2^D &=& -\frac{1}{3\pi N(k)}\sum_\mathbf{p} p^2J_2^D(p) = \frac{4\pi }{3} \frac{v_E}{|K|^4 \ell^2 }\sum_\mathbf{p} p^2 |G_T(p)|^2
\\ &\rightarrow & \frac{1}{3} v_E \frac{1}{|K|^2 \ell}
\end{eqnarray*}
This is a satisfactory, positive extrapolation of the result $\frac{1}{3}v_E/k^2\ell$ obtained in Eq.~(\ref{SC2}) for low density, where the divergence was seen to cancel explicitly.
If we adopt this regularization, we find in the $J_2$-channel,
\begin{eqnarray}\label{scvector2}
\hat{\sigma}_2 &=& \frac{{F(\delta)} c_3(k_e\ell) }{1-(c_2(k_e\ell) /\hat{\sigma})^2} \left( 1 - \frac{c_4(k_e\ell)}{\hat{\sigma}}\right)
\end{eqnarray}
with $c_2(x)$ defined earlier, $c_3(x)= ( x^2 + 1/4 )^{-1}$
and $c_4(x) = (3q/2\pi)(9/8 + x^2/2)( x^2 + 1/4 )^{-2} $. We recall that $F(\delta)$ is the function that describes the \emph{explicit} dependence on detuning of the diffusion constant
in the channel $J_2$,
shown in Fig.~\ref{DTLfig}.
Equations (\ref{scvector1}) and (\ref{scvector2}) lead to a cubic equation for $\hat{\sigma}$ that can be solved analytically. The resulting formula is quite lengthy and we do not present it here. The solution for $k_e\ell^* = \hat{\sigma} \times k_e\ell$
is shown in Fig.~\ref{MEfig}. We have put $F(\delta) = 1$ (its role will be discussed later), in which case the self-consistent theory has only one parameter, the product $k_e\ell$, as in the scalar case. According to Eqs.~(\ref{scvector1}) and~(\ref{scvector2}), the traditional weak-localization correction $\delta \sigma_0= - c_1$
in the transverse channel is partially compensated by the positive conductivity $\delta\sigma_2 = c_3$ of the $J_2$ channel, and even exactly when $q \approx 1$.
This explains why $k_e\ell^*$ is well in excess of the traditional prediction~(\ref{SC5}), for values as small as $k_e\ell = 1.8$, and close to the Drude value $k_e\ell$ of the transverse channel.
The term containing $c_4 >0$ tends to suppress diffusion in the $J_2$ mode as $1/(k_e\ell)^4$ but the coupling to $J_3$ described by $c_2$ reverses this trend.
Around the region $k_e\ell \approx 1$ where the conventional picture would locate the mobility edge,
the minimum conductivity starts to impose itself, and the total conductivity rises.
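Instead of solving the cubic equation analytically, the coupled equations~(\ref{scvector1}) and~(\ref{scvector2}) can also be solved by damped fixed-point iteration. The sketch below does this for $q = 1$ and $F(\delta) = 1$, with the coefficient functions $c_1$ to $c_4$ as defined in the text; the damping factor, iteration count, and starting value are implementation choices of this sketch, not part of the theory. It illustrates that at $k_e\ell = 1$, where the conventional picture would locate the mobility edge, the solution stays above the minimum-conductivity bound $c_2(k_e\ell)$.

```python
import math

# Damped fixed-point solution of Eqs. (scvector1)-(scvector2), with q = 1
# and F(delta) = 1. Coefficient functions c_1..c_4 as defined in the text.
def solve_sigma_hat(x, q=1.0, n_iter=500, mix=0.5):
    c1 = 3*q / (math.pi * x**2)
    c2 = (3*q / (math.pi * x)) / (x**2 + 0.25)
    c3 = 1.0 / (x**2 + 0.25)
    c4 = (3*q / (2*math.pi)) * (9/8 + x**2/2) / (x**2 + 0.25)**2
    s = 1.0                                   # starting guess, above the bound c2
    for _ in range(n_iter):
        s0 = 1.0 / (1.0 + c1/s + 0.774/x)            # transverse channel
        s2 = c3 * (1.0 - c4/s) / (1.0 - (c2/s)**2)   # J2 channel, F = 1
        s = (1.0 - mix) * s + mix * (s0 + s2)        # damped update
    return s, c2

s_hat, bound = solve_sigma_hat(1.0)   # at k_e ell = 1
assert s_hat > bound > 0              # stays above the minimum conductivity
```

The iteration converges to $\hat{\sigma} > 1$ at $k_e\ell = 1$, consistent with the rise of the total conductivity described above.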
We recall that the fictitious diffusion is determined by $J_3$, as described by Eq.~(\ref{DIM}). The self-consistent solution is given by
\begin{eqnarray}\label{J3SC}
J_3(p) &=& \frac{ -\mathrm{Im}\, G_L(p) G_T(p) }{1 - U^2|G_L(p)G_T(p)|^2/\sigma^2} \nonumber \\
&\times& \left[ 1 + \frac{U}{\sigma} \mathrm{Re}\, G_L(p)G_T(p) \right]
\end{eqnarray}
and upon inserting this into Eq.~(\ref{DIM}), the same procedure as above provides an expression for the ``fictitious'' conductivity
\begin{equation}\label{DIMSC}
\hat{\sigma}_I = \frac{1}{k_e\ell} + \frac{1}{1-(c_2(k_e\ell)/\hat{\sigma})^2}\left( \frac{d_3(k_e\ell) }{\hat{\sigma}} -\frac{ d_2(k_e\ell)}{\hat{\sigma}^2} \right)
\end{equation}
with the functions $d_2(x) = \frac{1}{2}(3q/\pi)^2 x^{-1} ( x^2 + 1/4)^{-3} $ and $d_3(x) = (3q/4\pi)x^{-1}(x^2 + 1/4)^{-2}$.
The transport mean free path associated with the fictitious diffusion is also shown in Fig.~\ref{MEfig}. For $k_e\ell \sim 1$ the fictitious diffusion
is of the same order as the real conductivity and has the same sign.
\begin{figure}
\includegraphics[width=0.7\columnwidth, angle=-90]{fig_comparison.pdf}
\vspace*{-7mm}
\caption{
The ratio of transport and scattering mean free paths $\ell^*/\ell$ as a function of $k_e \ell$ compared to the self-consistent theory for $4\pi n/k_0^3 = 3.77$ and $q = 0.5$ (black solid line, for the two branches, see text for explanation). Points of different colors correspond to different scatterer number densities $n$ for detunings $\delta = (\omega-\omega_0)/\gamma \in [-3, 6.5]$ from the resonance. The dashed line is the lower bound for $\ell^*/\ell$ described by Eqs.~(\ref{limit3}) and (\ref{limit4}), again with $q=0.5$.
}\label{fig_comparison}
\end{figure}
For $q = 0$, Eqs.\ (\ref{scvector1}) and (\ref{scvector2}) simplify to the sum of the diffusion constants associated with one or two dipoles in the channels $J_0$ and $J_2$, without any cross-talk,
\begin{eqnarray}
\label{q0}
\hat{\sigma} = \frac{\ell^*}{\ell} = \frac{1}{1 + 0.774/k_e \ell} + \frac{{F(\delta)}}{(k_e \ell)^2 + \frac14}
\end{eqnarray}
For $k_e\ell < 1$ the second term, from the $J_2$ channel, starts dominating. If we ignore the explicit dependence on $\delta$ by putting $F(\delta) = 1$,
this equation yields $\hat{\sigma} < 1$ for $k_e \ell > 1.73$, while below this value $\hat{\sigma}$ increases monotonically as $k_e\ell$ decreases. In the same limit of $q = 0$, we have $\hat{\sigma}_I = 1/k_e\ell$.
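The quoted crossover at $k_e\ell = 1.73$ can be verified directly from Eq.~(\ref{q0}) with $F(\delta)=1$; the sketch below also samples a few points to confirm the monotonic growth at smaller $k_e\ell$.

```python
import math

# Check of the q = 0 limit, Eq. (q0), with F(delta) = 1: sigma_hat crosses 1
# near k_e ell = 1.73 and grows as k_e ell decreases below that value.
sigma_hat = lambda x: 1.0/(1.0 + 0.774/x) + 1.0/(x**2 + 0.25)

assert abs(sigma_hat(1.73) - 1.0) < 0.01      # crossover at k_e ell = 1.73
assert sigma_hat(2.0) < 1.0 < sigma_hat(1.5)  # bracketing the crossover
vals = [sigma_hat(x) for x in (1.6, 1.2, 0.8, 0.4)]
assert vals == sorted(vals)                   # grows as k_e ell decreases
```

At $k_e\ell = 1.73$ the two terms sum to $0.999$, confirming the quoted crossover to better than one percent.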
\subsection{Comparison with numerical simulations}
In Fig.\ \ref{fig_comparison} we compare the predictions of the self-consistent theory for electromagnetic waves developed above to numerical simulations in which we simulate the multiple scattering of light by an ensemble of dipolar resonant point scatterers. The results of the simulations allow us to estimate $k_e$, $\ell$ and $\ell^*$. Both the details of the simulations and the way in which we interpret their results are detailed in Appendix \ref{appB}. We repeat calculations for several atomic number densities $n$ and detunings $\delta = (\omega - \omega_0)/\gamma$; the resulting ratios $\ell^*/\ell$ are presented in Fig.\ \ref{fig_comparison} by circles of different colors as functions of the Ioffe-Regel parameter $k_e \ell$. The numerical results are bounded from below by Eqs.\ (\ref{limit3}) and (\ref{limit4}) for the minimum conductivity (dashed line). Equations (\ref{limit3}) and (\ref{limit4}) impose a sharp rise of the ratio $\ell^*/\ell$ at small values of $k_e\ell$ where one would normally have expected a mobility edge. This rise is well reproduced by the numerical results.
A striking feature of the numerical results is the clear tendency of data to group together along two different ``branches''.
A careful inspection of Fig.\ \ref{fig_comparison} shows that the lower branch is composed of data corresponding to $\delta < 0$, whereas the upper branch corresponds to simulations with positive detunings $\delta > 0$. This means that, apart from the absence of a localization transition, there is no one-parameter dependence on $k_e\ell$ either.
The double-branch structure actually follows from the explicit dependence of the $J_2$ channel on detuning $\delta$,
described by the factor $F(\delta)$, which is larger for positive detunings (see Fig.\ \ref{DTLfig}).
Figure \ref{fig_comparison} shows the prediction of the self-consistent theory for the ratio $\ell^*/\ell$ with the inclusion of the function $F(\delta)$
and with the dimensionless parameter $k_e\ell$ calculated from the averaged incident field (see Appendix B)
for one fixed dipole density $4\pi n/k_0^3 = 3.77$ and for various detunings. Predictions for $k_e\ell$ corresponding to other densities are not shown since they all
exhibit the same overall appearance. Despite the fact that, strictly speaking, $\ell^*/\ell$ is a function of two independent parameters ($k_e \ell$ and $\delta$ or
equivalently $k_e \ell$ and $4\pi n/k_0^3$), we see that all results for the quite wide explored density range $4\pi n/k_0^3 = 0.25$--6.28 roughly
follow the same double-branch master curve that is close to the analytical result for the intermediate density $4\pi n/k_0^3 = 3.77$.
The agreement between numerical and analytical results is not perfect but we believe that it can be further improved by distinguishing explicitly between transverse and longitudinal complex wave numbers (see Sec.\ \ref{sectionDOS}), which are known to be different (see Appendix \ref{appA}, and Figs. \ref{fig_keff} and \ref{fig_tr}). This can be done in future work.
\section{Conclusions and Outlook}
In this work we have included longitudinal excitations into a transport theory for electromagnetic waves propagating in a medium with randomly distributed dipolar electric
scatterers (dipoles).
We identify four diffuse modes, triggered by the gradient in electromagnetic energy, among which two carry a Poynting vector and contribute to the diffusion constant.
We have developed this theory by extending the independent scattering approximation (the elementary scattering unit is a single dipole) to rigorously include
recurrent scattering from two dipoles. This has led to the following results. 1) Longitudinal and transverse waves of the effective medium are characterized by different
complex wave numbers $K_{L}$ and $K_T$, and dominate the near field and the far field in scattering, respectively. 2) The interference between longitudinal and transverse waves
creates a new diffuse
transport channel with a diffusion constant proportional to the number density of dipoles, to be compared to the usual diffusion constant that is
inversely proportional to this density. 3) Divergent terms appear at large wave numbers in the diffusion constant, in the longitudinal density of states and in the
collision operator.
Many of them cancel; in particular, all divergent terms in the electromagnetic Kubo diffusion constant cancel. We postulate that this cancelation holds
in all orders of perturbation theory. 4) When extending the self-consistent theory of localization, with all its usual assumptions, to include the four diffuse modes,
we find a minimum conductivity that prevents the onset of Anderson
localization of light, as also observed in numerical simulations \cite{sergey0}.
5) The predictions of the developed self-consistent theory are surprisingly close to the results of independent numerical simulations, including the explicit dependence of the new transport channel on frequency detuning from the dipolar resonance.
These findings demonstrate that, due to the presence of longitudinal, non-propagating waves, (weak) localization of light is fundamentally different from what was previously believed.
Early stages of this work were supported by collaborations with Yvan Castin, Ad Lagendijk, Nicolas Cherroret and Dominique Delande. We thank Denis Basko
for useful discussions.
\section{Introduction}
The notion of quantum walks (QWs) was introduced by Aharonov et al. \cite{Aharonov1} as a quantum version of the usual random walks. QWs have been intensively studied in various fields, for example, quantum algorithms \cite{Portugal}, topological insulators \cite{Kitagawa}, and radioactive waste reduction \cite{Matsuoka}.
In the present paper, we consider QWs on the two-dimensional lattice, $\mathbb{Z}^2$, and on the $N\times N$ torus, $\pi_N^2$, with the moving shift (MS) and the flip-flop shift (FF).
The properties of QWs in one dimension, e.g., ballistic spreading and localization, are well studied, see Konno \cite{Konno}. On the other hand, the corresponding properties of QWs in two dimensions have not been fully clarified. However, there are some results for the Grover walk and the Fourier walk on the two-dimensional lattice. Weak limit theorems for the Grover walks on $\mathbb{Z}^2$ with MS and FF were obtained by Watabe et al. \cite{Watabe et al.} and Higuchi et al. \cite{Higuchi et al.}, respectively. Inui et al. \cite{Inui et al.} and Higuchi et al. \cite{Higuchi et al.} showed that localization occurs for the Grover walks on $\mathbb{Z}^2$ with MS and FF, respectively. Komatsu and Tate \cite{Komatsu and Tate.} proved that localization does not occur for the Fourier walk on $\mathbb{Z}^2$ with MS. No result is known for the Fourier walk on $\mathbb{Z}^2$ with FF. Hence we prove that localization does not occur for the Fourier walk on $\mathbb{Z}^2$ with both MS and FF by using a simple contradiction argument, which is different from the method based on the Fourier analysis given by \cite{Komatsu and Tate.}. The time-averaged limit measure of the Grover walk with FF on $\pi_N^2$ was obtained analytically by Marquezino et al. \cite{Marquezino.}. The corresponding expression for the Grover walk on $\pi_N^2$ with MS is not known.
In this paper, we compute the eigenvalues and the corresponding eigenvectors in $(k_1,k_2)$-space of the Fourier walks on $\pi_N^2$ with both MS and FF for special initial conditions, for example, $k_1=k_2$ or $k_1+k_2\equiv 0\ (\bmod N)$. By using these results, we obtain the measures at time $n$ for the walks. Moreover, we calculate the amplitudes of the Fourier walks on $\pi_2^2$ (i.e., $N=2$) with MS and FF. We also compute the amplitudes of the Grover walks on $\pi_2^2$ with MS and FF, and discuss a difference between the Fourier and Grover walks.
The rest of the paper is organized as follows. Section 2 is devoted to the definition of QWs on $\pi_N^2$. In Section 3, we consider the Fourier walks on $\pi_N^2$ and $\mathbb{Z}^2$ with both MS and FF. Section 4 deals with the Fourier and Grover walks on $\pi_2^2$. In Section 5, we summarize our results.
\section{QWs\ on\ $\pi_N^2$}
This section presents the definition of QWs on $\pi_N^2$. Let $U$ be a $4\times 4$ unitary matrix which is the coin operator of QW. For a coin operator $U$, we introduce $U_j=P_jU\ (j=1,2,3,4)$, where
\begin{align*}P_1=\begin{bmatrix}1&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&0 \end{bmatrix},\ \ P_2=\begin{bmatrix}0&0&0&0\\0&1&0&0\\0&0&0&0\\0&0&0&0 \end{bmatrix},\ \ P_3=\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&1&0\\0&0&0&0 \end{bmatrix},\ \ P_4=\begin{bmatrix}0&0&0&0\\0&0&0&0\\0&0&0&0\\0&0&0&1 \end{bmatrix}.\end{align*}
In this paper, we consider two types of shift operators, i.e., the moving shift (MS) and the flip-flop shift (FF). First we introduce the time evolution of the QW with MS as follows: for $(x_1,x_2)\in\pi_N^2$,
\begin{align*}
\Psi_{n+1}^{(m)}(x_1,x_2)&=U_1^{(m)}\Psi_n^{(m)}(x_1+1,x_2)+U_2^{(m)}\Psi_n^{(m)}(x_1-1,x_2)\notag\\&+U_3^{(m)}\Psi_n^{(m)}(x_1,x_2+1)+U_4^{(m)}\Psi_n^{(m)}(x_1,x_2-1),
\end{align*}
where $U^{(m)}=U$. Next we introduce the time evolution of QW with FF as follows:
\begin{align*}
\Psi_{n+1}^{(f)}(x_1,x_2)&=U_1^{(f)}\Psi_n^{(f)}(x_1+1,x_2)+U_2^{(f)}\Psi_n^{(f)}(x_1-1,x_2)\notag\\&+U_3^{(f)}\Psi_n^{(f)}(x_1,x_2+1)+U_4^{(f)}\Psi_n^{(f)}(x_1,x_2-1).
\end{align*}
Here $U^{(f)}$ is given by
\begin{align*}U^{(f)}=\begin{bmatrix} 0&1&0&0 \\ 1&0&0&0\\ 0&0&0&1\\ 0&0&1&0\end{bmatrix}U^{(m)}.\end{align*}
Putting $\mathbb{K}_N=\{0,1,\dots, N-1\}$, we define the Fourier transform by
\begin{align*}\Hat{\Psi}_n^{(j)}(k_1,k_2)=\frac{1}{N}\sum_{(x_1,x_2)\in\pi_N^2}\omega^{-(k_1x_1+k_2x_2)}\Psi_n^{(j)}(x_1,x_2)\ \ (j=m,f), \end{align*}
where $(k_1,k_2)\in \mathbb{K}_N^2$\ and $\omega=\exp (2\pi i/N)$.
The time evolution of QW on $(k_1, k_2)$-space is written as
\begin{align*}
\Hat{\Psi}_{n+1}^{(j)}(k_1,k_2)&=U^{(j)}(k_1,k_2)\Hat{\Psi}_{n}^{(j)}(k_1,k_2)\ \ (j=m,f),
\end{align*}
where
\begin{align*}U^{(j)}(k_1,k_2)&=\begin{bmatrix}\omega^{k_1}&0&0&0\\0&\omega^{-k_1}&0&0\\0&0&\omega^{k_2}&0\\0&0&0&\omega^{-k_2}
\end{bmatrix} U^{(j)}\ \ (j=m,f).
\end{align*}
Remark that
\begin{align}
\Hat{\Psi}_n^{(j)}(k_1,k_2)=\left(U^{(j)}(k_1,k_2)\right)^n\Hat{\Psi}_{0}^{(j)}(k_1,k_2)\ \ (j=m,f).\label{phik}
\end{align}
Since $U^{(j)}(k_1,k_2)$ is unitary, we have the following spectral decomposition:
\begin{align}
U^{(j)}(k_1,k_2)=\sum_{i=0}^{3}\lambda_i^{(j)}(k_1,k_2) |v_i^{(j)}(k_1,k_2)\rangle \langle v_i^{(j)}(k_1,k_2)|\ \ (j=m,f), \label{spec1}
\end{align}
where $\lambda^{(j)}_i(k_1,k_2)$ is an eigenvalue of $U^{(j)}(k_1,k_2)$ and $|v_i^{(j)}(k_1,k_2)\rangle$ is the corresponding eigenvector for $i=0,1,2,3$. Therefore, combining Eq.\eqref{phik} with Eq.\eqref{spec1}, we obtain
\begin{align*}
\Hat{\Psi}_n^{(j)}(k_1,k_2)=\sum_{i=0}^{3}\lambda_i^{(j)}(k_1,k_2)^n |v_i^{(j)}(k_1,k_2)\rangle \langle v_i^{(j)}(k_1,k_2)|\Hat{\Psi}^{(j)}_0(k_1,k_2)\ \ (j=m,f).
\end{align*}
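As a sanity check of this setup, the matrices $U^{(j)}(k_1,k_2)$ can be built and tested numerically. The following Python sketch is our own illustration (the function names \texttt{coin\_ms}, \texttt{coin\_ff} and \texttt{U\_k} are not from the references); it uses the Fourier coin introduced in the next section and verifies unitarity for all $(k_1,k_2)\in\mathbb{K}_N^2$:

```python
import numpy as np

def coin_ms():
    # Fourier coin U^(m) for the moving-shift walk (defined in Sec. 3)
    return 0.5 * np.array([[1, 1, 1, 1],
                           [1, 1j, -1, -1j],
                           [1, -1, 1, -1],
                           [1, -1j, -1, 1j]])

def coin_ff():
    # flip-flop coin: U^(f) = S U^(m), where S swaps rows 1<->2 and 3<->4
    S = np.array([[0, 1, 0, 0], [1, 0, 0, 0],
                  [0, 0, 0, 1], [0, 0, 1, 0]], dtype=complex)
    return S @ coin_ms()

def U_k(coin, k1, k2, N):
    # U^(j)(k1,k2) = diag(w^k1, w^-k1, w^k2, w^-k2) U^(j), w = exp(2 pi i/N)
    w = np.exp(2j * np.pi / N)
    D = np.diag([w ** k1, w ** (-k1), w ** k2, w ** (-k2)])
    return D @ coin

# U^(j)(k1,k2) is unitary for every (k1,k2) in K_N^2
N = 5
for coin in (coin_ms(), coin_ff()):
    for k1 in range(N):
        for k2 in range(N):
            M = U_k(coin, k1, k2, N)
            assert np.allclose(M.conj().T @ M, np.eye(4))
```

Unitarity of each Fourier block is what guarantees the spectral decomposition \eqref{spec1}.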
\section{The Fourier\ walk\ on\ $\pi_N^2$}
In this section, we present the definition of the Fourier walks with moving and flip-flop shifts on $\pi_N^2$.
\subsection{The Fourier\ walk with MS}
This subsection deals with the Fourier walk with MS whose coin operator is defined by
\begin{align*}U^{(m)}=\frac{1}{2}\begin{bmatrix}
1&1&1&1\\ 1&i&-1&-i\\1&-1&1&-1\\1&-i&-1&i\end{bmatrix}.\end{align*}
Then $U^{(m)}(k_1,k_2)$ for $k_1,k_2\in\mathbb{K}_N=\{0, 1, \dots, N-1\}$ is given by
\begin{align*}
U^{(m)}(k_1,k_2)=\frac{1}{2}\begin{bmatrix}\omega^{k_1} &\omega^{k_1}&\omega^{k_1}&\omega^{k_1} \\ \omega^{-k_1} &i\omega^{-k_1}&-\omega^{-k_1}&-i\omega^{-k_1} \\ \omega^{k_2} &-\omega^{k_2}&\omega^{k_2}&-\omega^{k_2} \\ \omega^{-k_2} &-i\omega^{-k_2}&-\omega^{-k_2}&i\omega^{-k_2}
\end{bmatrix}.
\end{align*}
Moreover we can compute the characteristic polynomial as follows.
\begin{align}
\det (\lambda I_4-U^{(m)}(k_1,k_2))&=\lambda^4-\frac{1+i}{2}\Bigl(\cos \tilde{k}_1+\sin \tilde{k}_1+\cos \tilde{k}_2+\sin \tilde{k}_2\Bigr)\lambda^3-\frac{1-i}{2}\Bigl(1+\cos \Bigl(\tilde{k}_1-\tilde{k}_2\Bigr)\Bigr)\lambda^2\notag \\
&+\frac{1+i}{2}\Bigl(\cos \tilde{k}_1+\sin \tilde{k}_1+\cos \tilde{k}_2+\sin \tilde{k}_2\Bigr)\lambda-i,\label{chapo}
\end{align}
with $\tilde{k}_j=2\pi k_j/N$. Let $x=\Re(\lambda)$ and $y=\Im(\lambda)$, where $\lambda$ is an eigenvalue of $U^{(m)}(k_1,k_2)$. Here $\Re(z)$ is the real part of $z$ and $\Im(z)$ is the imaginary part of $z\in\mathbb{C}$. We should remark that Eq.\eqref{chapo} implies that $x$ and $y$ satisfy the following equation.
\begin{align}
x^2-y^2-2xy+Ay-B=0, \label{chapoly}
\end{align}
where $A=\cos \tilde{k}_1+\sin \tilde{k}_1+\cos \tilde{k}_2+\sin \tilde{k}_2$\ and\ $B=\{1+\cos (\tilde{k}_1-\tilde{k}_2)\}/2$. It would be difficult to get an explicit form of $\lambda=x+iy$ for general $(k_1,k_2)\in\mathbb{K}_N^2$ from Eq.\eqref{chapoly}. Therefore we consider proper subsets ${\cal A}$ of $\mathbb{K}_N^2$. In this model, we deal with the following two cases: $({\bf a})\ {\cal A}=\{(k_1,k_2)=(0,0)\}$ and $({\bf b})\ {\cal A}=\{(k_1,k_2)\in \mathbb{K}_N^2 : k_1=k_2\}$. Let $\lambda_j^{(m)}(k_1,k_2)$ denote the eigenvalues of $U^{(m)}(k_1,k_2)$ and $v^{(m)}_j(k_1,k_2)$ the corresponding eigenvectors for $j=0,1,2,3$. We should note that case $({\bf a})$ is related to a QW starting from the uniform initial state given by Eq.\eqref{iniuni}, and case $({\bf b})$ is related to a QW starting from the restricted uniform initial state $(x_1+x_2=N)$ given by Eq.\eqref{inikk}.\\ \\
$({\bf a})\ (k_1,k_2)=(0,0)$ case\\
The eigenvalues of $U^{(m)}(0,0)$ are
\begin{align}\lambda_0^{(m)}(0,0)=\lambda^{(m)}_1(0,0)=1,\ \lambda_2^{(m)}(0,0)=-1,\ \lambda_3^{(m)}(0,0)=i,\label{zerozeroei}
\end{align}
and the corresponding eigenvectors are
\begin{align}
v_0^{(m)}(0,0)&=\frac{1}{2}\ {}^T\begin{bmatrix}1&1&-1&1\end{bmatrix},\ v_1^{(m)}(0,0)=\frac{1}{\sqrt{2}}\ {}^T\begin{bmatrix}1&0&1&0\end{bmatrix}, \notag \\
v_2^{(m)}(0,0)&=\frac{1}{2}\ {}^T\begin{bmatrix}1&-1&-1&-1\end{bmatrix},\ v_3^{(m)}(0,0)=\frac{1}{\sqrt{2}}\ {}^T\begin{bmatrix}0&1&0&-1\end{bmatrix}. \label{zerozerovec}
\end{align}
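The eigenpairs in Eqs.\eqref{zerozeroei} and \eqref{zerozerovec} are easy to confirm numerically; a minimal Python sketch (our own check, not part of the derivation):

```python
import numpy as np

# U^(m)(0,0) is the Fourier coin itself
U = 0.5 * np.array([[1, 1, 1, 1],
                    [1, 1j, -1, -1j],
                    [1, -1, 1, -1],
                    [1, -1j, -1, 1j]])

# eigenpairs listed in Eqs. (zerozeroei) and (zerozerovec)
eigvals = [1, 1, -1, 1j]
eigvecs = [np.array([1, 1, -1, 1]) / 2,
           np.array([1, 0, 1, 0]) / np.sqrt(2),
           np.array([1, -1, -1, -1]) / 2,
           np.array([0, 1, 0, -1]) / np.sqrt(2)]

for lam, v in zip(eigvals, eigvecs):
    assert np.allclose(U @ v, lam * v)        # U^(m)(0,0) v = lambda v
    assert np.isclose(np.linalg.norm(v), 1.0)  # each eigenvector is normalized
```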
$({\bf b})\ (k_1,k_2)=(k,k)$ case\\
The eigenvalues are
\begin{align}
\lambda_0^{(m)}(k,k)=1,\ \lambda_1^{(m)}(k,k)=\omega^k,\ \lambda_2^{(m)}(k,k)=-1,\ \lambda_3^{(m)}(k,k)=i\omega^{-k}, \label{kkei}
\end{align}
and the corresponding eigenvectors are
\begin{align}
v_0^{(m)}(k,k)&=\frac{1}{2}\ {}^T\begin{bmatrix}\omega^k&1&-\omega^k&1 \end{bmatrix},\ v_1^{(m)}(k,k)=\frac{1}{\sqrt{2}}\ {}^T\begin{bmatrix}1&0&1&0 \end{bmatrix}, \notag \\
v_2^{(m)}(k,k)&=\frac{1}{2}\ {}^T\begin{bmatrix}\omega^k&-1&-\omega^k&-1 \end{bmatrix},\ v_3^{(m)}(k,k)=\frac{1}{\sqrt{2}}\ {}^T\begin{bmatrix}0&1&0&-1 \end{bmatrix}. \label{kkvec}
\end{align}
From now on, we calculate $\Psi_n^{(m)}(x_1,x_2)$ for two specific initial states by using Eqs.\eqref{zerozeroei}, \eqref{zerozerovec}, \eqref{kkei} and \eqref{kkvec}. \\
(i)\ Here we consider the uniform initial state $\Psi_0^{(m)}(x_1,x_2)$, i.e.,
\begin{align}
\Psi_0^{(m)}(x_1,x_2)=\frac{1}{N}\ {}^T\begin{bmatrix} \alpha_1&\alpha_2&\alpha_3&\alpha_4\end{bmatrix}\ ((x_1,x_2)\in \pi_N^2),\label{iniuni}
\end{align}
where $|\alpha_1|^2+|\alpha_2|^2+|\alpha_3|^2+|\alpha_4|^2=1$ with $\alpha_j\in \mathbb{C}\ (j=1,2,3,4)$. By the Fourier transform, we see that the initial state of $(k_1, k_2)$-space becomes
\begin{align*}\Hat{\Psi}_0^{(m)}(k_1,k_2)=\begin{cases} \cfrac{1}{N}\ {}^T\begin{bmatrix} \alpha_1&\alpha_2&\alpha_3&\alpha_4\end{bmatrix} \ \ (k_1,k_2)=(0,0)\\ \\
{}^T\begin{bmatrix} 0&0&0&0\end{bmatrix}\ \ \ \ \ \ \ \ \ (k_1,k_2)\neq(0,0)
\end{cases}.
\end{align*}
Thus we have
\begin{align*}
\Psi_n^{(m)}(x_1,x_2)&=\frac{1}{N}\sum_{k_1,k_2 \in \mathbb{K}_N}\omega^{k_1x_1+k_2x_2}\sum_{i=0}^3 \lambda_i(k_1,k_2)^n |v_i(k_1,k_2)\rangle \langle v_i(k_1,k_2)|\phi \rangle \notag \\
&=\frac{1}{N}\sum_{i=0}^3\lambda_i(0,0)^n|v_i(0,0)\rangle \langle v_i(0,0)|\phi \rangle ,
\end{align*}
where $\phi={}^T[\alpha_1\ \alpha_2\ \alpha_3\ \alpha_4]$. From Eqs.\eqref{zerozeroei} and \eqref{zerozerovec}, we obtain the desired result as follows.
\begin{align*}
&\Psi_n^{(m)}(x_1,x_2)\notag \\ &=\frac{1}{4N}\begin{bmatrix}\bigl(3+(-1)^n\bigr)\alpha_1+\bigl(1-(-1)^n\bigr)\alpha_2+(1-(-1)^n\bigr)\alpha_3+\bigl(1-(-1)^n\bigr)\alpha_4 \\
\bigl(1-(-1)^n\bigr)\alpha_1+\bigl(1+(-1)^n+2i^n\bigr)\alpha_2+(-1+(-1)^n)\alpha_3+\bigl(1+(-1)^n-2i^n\bigr)\alpha_4\\
\bigl(1-(-1)^n\bigr)\alpha_1+\bigl(-1+(-1)^n\bigr)\alpha_2+\bigl(3+(-1)^n\bigr)\alpha_3+\bigl(-1+(-1)^n\bigr)\alpha_4\\
\bigl(1-(-1)^n\bigr)\alpha_1+\bigl(1+(-1)^n-2i^n\bigr)\alpha_2+\bigl(-1+(-1)^n\bigr)\alpha_3+\bigl(1+(-1)^n+2i^n\bigr)\alpha_4
\end{bmatrix},
\end{align*}
for $(x_1,x_2)\in \pi_N^2$. Hence we find that the amplitude satisfies $\Psi_{n+4}^{(m)}(x_1,x_2)=\Psi_n^{(m)}(x_1,x_2)$ for $(x_1,x_2)\in\pi_N^2$ and $n\in\mathbb{Z}_{\geq}$. That is to say, the Fourier walk with MS starting from the uniform initial state has period $4$.\\
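The period $4$ can also be read off from Eq.\eqref{zerozeroei}: every eigenvalue of $U^{(m)}(0,0)$ satisfies $\lambda^4=1$, hence $\bigl(U^{(m)}(0,0)\bigr)^4=I_4$. A quick numerical confirmation (an illustrative sketch, not part of the proof):

```python
import numpy as np

# U^(m)(0,0) = U^(m), with spectrum {1, 1, -1, i}
U = 0.5 * np.array([[1, 1, 1, 1],
                    [1, 1j, -1, -1j],
                    [1, -1, 1, -1],
                    [1, -1j, -1, 1j]])

# every eigenvalue is a 4th root of unity, so U^4 = I_4 ...
assert np.allclose(np.linalg.matrix_power(U, 4), np.eye(4))
# ... and the period is exactly 4 (the eigenvalue i rules out period 2)
assert not np.allclose(np.linalg.matrix_power(U, 2), np.eye(4))
```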
(ii)\ We consider a QW with the restricted uniform initial state $\Psi_0^{(m)}(x_1,x_2)$ given by
\begin{align}
\Psi_0^{(m)}(x_1,x_2)=\begin{cases}\cfrac{1}{\sqrt{N}}\ {}^T\begin{bmatrix} \alpha_1&\alpha_2&\alpha_3&\alpha_4\end{bmatrix}\ (x_1+x_2=N)\\ \\ {}^T\begin{bmatrix} 0&0&0&0\end{bmatrix}\ \ \ \ \ \ \ \ \ \ \ (x_1+x_2\neq N)\label{inikk}
\end{cases},
\end{align}
where $|\alpha_1|^2+|\alpha_2|^2+|\alpha_3|^2+|\alpha_4|^2=1$. The Fourier transform implies that the initial state of $(k_1,k_2)$-space becomes
\begin{align*}
\Hat{\Psi}_0^{(m)}(k_1,k_2)=\begin{cases} \cfrac{1}{\sqrt{N}}\ {}^T\begin{bmatrix} \alpha_1&\alpha_2&\alpha_3&\alpha_4\end{bmatrix}\ \ (k_1=k_2)\\ \\ ^T\begin{bmatrix} 0&0&0&0\end{bmatrix}\ \ \ \ \ \ \ \ \ \ \ (k_1\neq k_2)
\end{cases}.
\end{align*}
Therefore we have
\begin{align}
\Psi_n^{(m)}(x_1,x_2)=\cfrac{1}{N^{3/2}}\sum_{k\in \mathbb{K}_N}\omega^{(x_1+x_2)k}\sum_{i=0}^3 \lambda_i(k,k)^n|v_i(k,k)\rangle \langle v_i(k,k)|\phi \rangle,\label{psikk1}
\end{align}
where $\phi=\ {}^T\begin{bmatrix} \alpha_1&\alpha_2&\alpha_3&\alpha_4\end{bmatrix}$. Combining Eq.\eqref{kkei} with Eq.\eqref{psikk1}, we get
\begin{align}
\Psi_n^{(m)}(x_1,x_2)=\frac{1}{N^{3/2}}\sum_{k=0}^{N-1} &\omega^{(x_1+x_2)k}\Bigl\{ 1^n\langle v_0^{(m)}(k,k)|\phi\rangle |v_0^{(m)}(k,k)\rangle +\omega^{nk}\langle v_1^{(m)}(k,k)|\phi\rangle |v_1^{(m)}(k,k)\rangle \notag \\
&+(-1)^n\langle v_2^{(m)}(k,k)|\phi\rangle |v_2^{(m)}(k,k)\rangle+i^n\omega^{-nk}\langle v_3^{(m)}(k,k)|\phi\rangle |v_3^{(m)}(k,k)\rangle \Bigr\}.\label{psikk2}
\end{align}
From Eq.\eqref{kkvec}, we see that
\begin{align}
&\langle v_0^{(m)}(k,k)|\phi\rangle=\frac{1}{2}\bigl\{(\alpha_1-\alpha_3)\omega^{-k}+\alpha_2+\alpha_4 \bigr\},\ \ \langle v_1^{(m)}(k,k)|\phi\rangle=\frac{1}{\sqrt{2}}(\alpha_1+\alpha_3),\notag \\
&\langle v_2^{(m)}(k,k)|\phi\rangle=\frac{1}{2}\bigl\{(\alpha_1-\alpha_3)\omega^{-k}-\alpha_2-\alpha_4\bigr\},\ \ \langle v_3^{(m)}(k,k)|\phi\rangle=\frac{1}{\sqrt{2}}(\alpha_2-\alpha_4).\label{vphi}
\end{align}
Inserting Eq.\eqref{vphi} to Eq.\eqref{psikk2} gives
\begin{align}
&\Psi_n^{(m)}(x_1,x_2)=\frac{1}{4N^{3/2}}\sum_{k=0}^{N-1}\omega^{(x_1+x_2)k} \notag\\
&
\begin{bmatrix}
\alpha_1-\alpha_3+(\alpha_2+\alpha_4)\omega^k+2(\alpha_1+\alpha_3)\omega^{nk}+(-1)^n(\alpha_1-\alpha_3)-(-1)^n(\alpha_2+\alpha_4)\omega^k \\
(\alpha_1-\alpha_3)\omega^{-k}+\alpha_2+\alpha_4-(-1)^n\{(\alpha_1-\alpha_3)\omega^{-k}-\alpha_2-\alpha_4 \}+2i^n(\alpha_2-\alpha_4)\omega^{-nk}\\
-\alpha_1+\alpha_3-(\alpha_2+\alpha_4)\omega^k+2(\alpha_1+\alpha_3)\omega^{nk}-(-1)^n\{\alpha_1-\alpha_3-(\alpha_2+\alpha_4)\omega^k\} \\
(\alpha_1-\alpha_3)\omega^{-k}+\alpha_2+\alpha_4-(-1)^n\{(\alpha_1-\alpha_3)\omega^{-k}-\alpha_2-\alpha_4\}-2i^n(\alpha_2-\alpha_4)\omega^{-nk}
\end{bmatrix}. \label{psikk3}
\end{align}
Then Eq.\eqref{psikk3} can be rewritten as
\begin{align*}
&\Psi_n^{(m)}(x_1,x_2)=\frac{1}{4\sqrt{N}}\Biggl\{ \begin{bmatrix}
\alpha_1-\alpha_3\\ \alpha_2+\alpha_4\\ -(\alpha_1-\alpha_3)\notag\\ \alpha_2+\alpha_4 \end{bmatrix}\{1+(-1)^n\}\delta_{j,-j}(x_1,x_2)\notag\\ &+\begin{bmatrix}\alpha_2+\alpha_4\\ 0\\ -(\alpha_2+\alpha_4)\\
0 \end{bmatrix}\{1+(-1)^n\}\delta_{j,-j-1}(x_1,x_2)+\begin{bmatrix}0\\ \alpha_1-\alpha_3\\ 0\\
\alpha_1-\alpha_3 \end{bmatrix}\{1-(-1)^n\}\delta_{j,-j+1}(x_1,x_2)\notag\\
&+\begin{bmatrix}\alpha_1+\alpha_3\\0\\ \alpha_1+\alpha_3\\ 0\end{bmatrix}2\delta_{j,-j-n}(x_1, x_2)+\begin{bmatrix}0\\ \alpha_2-\alpha_4\\
0\\-(\alpha_2-\alpha_4)
\end{bmatrix}2i^n\delta_{j,-j+n}(x_1,x_2)\Biggr\} \ \ (j\in\mathbb{K}_N),
\end{align*}
where
\[\delta_{a,b}(x_1,x_2)=\begin{cases}1\ \bigl((x_1,x_2)=(a,b)\bigr)\\
0\ \bigl((x_1,x_2)\neq (a,b)\bigr)
\end{cases}.
\]
The first, second and third terms of the equation mean that the walker is trapped around $x_1+x_2\equiv0\ (\bmod N)$. The fourth and fifth terms mean that the walker keeps moving straight.
\subsection{The Fourier walk with FF}
In this subsection, we consider the Fourier walk with FF whose coin operator is defined by
\begin{align*}U^{(f)}=\frac{1}{2}\begin{bmatrix}
1&i&-1&-i\\1&1&1&1\\1&-i&-1&i\\1&-1&1&-1\end{bmatrix}.\end{align*}
Then $U^{(f)}(k_1,k_2)$ for $k_1,k_2\in\mathbb{K}_N=\{0, 1, \dots, N-1\}$ is given by
\begin{align*}
U^{(f)}(k_1,k_2)=\frac{1}{2}\begin{bmatrix}\omega^{k_1} &i\omega^{k_1}&-\omega^{k_1}&-i\omega^{k_1} \\ \omega^{-k_1} &\omega^{-k_1}&\omega^{-k_1}&\omega^{-k_1} \\ \omega^{k_2} &-i\omega^{k_2}&-\omega^{k_2}&i\omega^{k_2} \\ \omega^{-k_2} &-\omega^{-k_2}&\omega^{-k_2}&-\omega^{-k_2}
\end{bmatrix}.
\end{align*}
The eigenvalues of $U^{(f)}(k_1,k_2)$ are the roots of the following polynomial:
\begin{align}
\det (\lambda I_4-U^{(f)}(k_1,k_2))&=\lambda^4-\Bigl(\cos \tilde{k}_1-\cos \tilde{k}_2\Bigr)\lambda^3+\frac{1-i}{2}\Bigl(1-\cos \Bigl(\tilde{k}_1-\tilde{k}_2\Bigr) \Bigr)\lambda^2 \notag\\
&+i\Bigl(\cos \tilde{k}_1-\cos \tilde{k}_2 \Bigr)\lambda-i. \label{chapof}
\end{align}
We put $x=\Re(\lambda)$ and $y=\Im(\lambda)$, where $\lambda$ is an eigenvalue of $U^{(f)}(k_1,k_2)$. We should remark that Eq.\eqref{chapof} implies that $x$ and $y$ satisfy the following equation:
\begin{align*}
x^2-y^2-2xy-C\bigl(x-y\bigr)+D=0,
\end{align*}
where $C=\cos \tilde{k}_1-\cos \tilde{k}_2$\ and\ $D=\{1-\cos (\tilde{k}_1-\tilde{k}_2)\}/2.$ It would be hard to obtain solutions for general $(k_1,k_2)\in\mathbb{K}_N^2$, so we consider suitable proper subsets $\cal{B}$\ $\subset\mathbb{K}_N^2$, as in the MS model.\ In this model, we deal with the following two cases: $({\bf a})\ \cal{B}$\ $=\{(k_1,k_2)\in \mathbb{K}_N^2:k_1=k_2\}$ and $({\bf b})\ \cal{B}$\ $=\{(k_1,k_2)\in \mathbb{K}_N^2:k_1+k_2\equiv 0\ \ (\bmod N)\}$. Let $\lambda_j^{(f)}(k_1,k_2)$ be the eigenvalues of $U^{(f)}(k_1,k_2)$ and $v^{(f)}_j(k_1,k_2)$ be the corresponding eigenvectors for $j=0,1,2,3$.\\ \\
$({\bf a})\ (k_1,k_2)=(k,k)$ case\\
The eigenvalues are
\begin{align*}
\lambda_0^{(f)}(k,k)=e^{\pi i/8},\ \lambda_1^{(f)}(k,k)=e^{5\pi i/8}, \ \lambda_2^{(f)}(k,k)=e^{9\pi i/8},\ \lambda_3^{(f)}(k,k)=e^{13\pi i/8},
\end{align*}
and the corresponding eigenvectors are
\begin{align*}
v_j^{(f)}(k,k)=\frac{1}{Z_j(k,k)}\begin{bmatrix}
\omega^k({\lambda_j^{(f)}(k,k)}^2+i\omega^{-2k})(\lambda_j^{(f)}(k,k)+\omega^k)\\
\omega^{-k}({\lambda_j^{(f)}(k,k)}^2+\omega^{2k})(\lambda_j^{(f)}(k,k)+\omega^{-k})\\
-\omega^k({\lambda_j^{(f)}(k,k)}^2+i\omega^{-2k})(\lambda_j^{(f)}(k,k)-\omega^k)\\
\omega^{-k}({\lambda_j^{(f)}(k,k)}^2+\omega^{2k})(\lambda_j^{(f)}(k,k)-\omega^{-k})
\end{bmatrix},
\end{align*}
where $Z_{j}(k,k)$ is a normalization constant.\vspace{0.5\baselineskip}\\
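At $(k_1,k_2)=(k,k)$ the characteristic polynomial \eqref{chapof} reduces to $\lambda^4=i$, which is why the spectrum above is independent of $k$. A numerical sketch confirming this (our own code; \texttt{U\_ff\_kk} is an illustrative name):

```python
import numpy as np

def U_ff_kk(k, N):
    # U^(f)(k,k) for the flip-flop Fourier walk, w = exp(2 pi i/N)
    w = np.exp(2j * np.pi / N)
    U_ff = 0.5 * np.array([[1, 1j, -1, -1j],
                           [1, 1, 1, 1],
                           [1, -1j, -1, 1j],
                           [1, -1, 1, -1]])
    D = np.diag([w ** k, w ** (-k), w ** k, w ** (-k)])
    return D @ U_ff

# spectrum {e^{i pi/8}, e^{5 i pi/8}, e^{9 i pi/8}, e^{13 i pi/8}} for every k
N = 7
target = np.exp(1j * np.pi * np.array([1, 5, 9, 13]) / 8)
for k in range(N):
    lam = np.linalg.eigvals(U_ff_kk(k, N))
    assert np.allclose(np.sort_complex(lam), np.sort_complex(target))
```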
$({\bf b})\ (k_1,k_2)=(k,N-k)$ case\\
The eigenvalues $\lambda$ satisfy the following quartic equation:
\begin{align*}
\lambda^4+(1-i)\sin^2\tilde{k}\lambda^2-i=0.
\end{align*}
Thus we get
\begin{align*}
\lambda_j^{(f)}(k,N-k)=&\pm\frac{\sqrt{2-\sin^2\tilde{k}+\sqrt{2-\sin^4\tilde{k}}}+i\sqrt{2+\sin^2\tilde{k}-\sqrt{2-\sin^4\tilde{k}}}}{2},\\ \\
&\pm\frac{\sqrt{2-\sin^2\tilde{k}-\sqrt{2-\sin^4\tilde{k}}}-i\sqrt{2+\sin^2\tilde{k}+\sqrt{2-\sin^4\tilde{k}}}}{2},
\end{align*}
and corresponding eigenvectors are
\begin{align*}
v_j^{(f)}(k,N-k)=\frac{1}{Z_{j}(k,N-k)}
\begin{bmatrix}
(\lambda_j^{(f)}(k,N-k)+i\sin\Tilde{k})(1+\lambda_j^{(f)}(k,N-k)\omega^k)\\(\overline{\lambda_j^{(f)}(k,N-k)}+i\sin\Tilde{k})(1+\lambda_j^{(f)}(k,N-k)\omega^{-k})\\(\lambda_j^{(f)}(k,N-k)+i\sin\Tilde{k})(1-\lambda_j^{(f)}(k,N-k)\omega^{-k})\\-(\overline{\lambda_j^{(f)}(k,N-k)}+i\sin\Tilde{k})(1-\lambda_j^{(f)}(k,N-k)\omega^{k})
\end{bmatrix},
\end{align*}
where $Z_{j}(k,N-k)$ is a normalization constant.
\subsection{Non-existence of localization}
In this subsection, we prove that localization does not occur for the Fourier walks on $\mathbb{Z}^2$ with both MS and FF. According to Komatsu and Tate \cite{Komatsu and Tate.}, if a QW exhibits localization, then the characteristic polynomial of the quantum coin in $(k_1, k_2)$-space has at least one constant root. Assume that the Fourier walk with MS has a constant root $\lambda$ with $|\lambda|=1$. Then, replacing $\tilde{k}_j$ by $k_j$ in the characteristic polynomial \eqref{chapo}, we have
\begin{align}
\lambda^4&-\frac{1+i}{2}\Bigl(\cos k_1+\sin k_1+\cos k_2+\sin k_2\Bigr)\lambda^3-\frac{1-i}{2}\Bigl(1+\cos \Bigl(k_1-k_2\Bigr)\Bigr)\lambda^2\notag \\
&+\frac{1+i}{2}\Bigl(\cos k_1+\sin k_1+\cos k_2+\sin k_2\Bigr)\lambda-i=0,\label{chapo0}
\end{align}
where $(k_1,k_2)\in(-\pi,\pi]^2$. Since the constant root $\lambda$ does not depend on $k_1$, we obtain the following equation by differentiating Eq.\eqref{chapo0} with respect to $k_1$.
\begin{align}
-\frac{1+i}{2}\Bigl(-\sin k_1+\cos k_1 \Bigr)\lambda^3+\frac{1-i}{2}\Bigl(\sin\Bigl(k_1-k_2 \Bigr)\Bigr)\lambda^2+\frac{1+i}{2}\Bigl(-\sin k_1+\cos k_1 \Bigr)\lambda=0. \label{bibunms}
\end{align}
Hence Eq.\eqref{bibunms} can be rewritten as
\begin{align}
\Bigl(-\sin k_1+\cos k_1\Bigr)\Bigl(\lambda^2-1\Bigr)-\lambda\sin\Bigl(k_1-k_2 \Bigr)+i\Bigl\{\Bigr(-\sin k_1+\cos k_1\Bigr)\Bigl(\lambda^2-1\Bigr)+\lambda\sin\Bigl(k_1-k_2 \Bigr)\Bigr\}=0.\notag
\end{align}
Therefore we have $\lambda\sin\bigl(k_1-k_2\bigr)=0$ for any $(k_1, k_2)\in(-\pi,\pi]^2$. This contradicts $|\lambda|=1$. Thus we conclude that Eq.\eqref{chapo0} does not have any constant root, and hence localization does not occur for the Fourier walk with MS.
In a similar fashion, we can prove that localization does not occur for the Fourier walk with FF. From Eq.\eqref{chapof}, the characteristic polynomial for this model reads
\begin{align}
\lambda^4-\Bigl(\cos k_1-\cos k_2\Bigr)\lambda^3+\frac{1-i}{2}\Bigl(1-\cos \Bigl(k_1-k_2 \Bigr) \Bigr)\lambda^2+i\Bigl(\cos k_1-\cos k_2 \Bigr)\lambda-i=0,\label{chapof0}
\end{align}
where $(k_1,k_2)\in(-\pi,\pi]^2$. By differentiating Eq.\eqref{chapof0} with respect to $k_1$, we have
\begin{align}
\Bigl(\sin k_1\Bigr)\lambda^3+\frac{1-i}{2}\Bigl(\sin\Bigl(k_1-k_2\Bigr)\Bigr)\lambda^2-i\Bigl(\sin k_1\Bigr)\lambda=0.\label{bibunf}
\end{align}
Therefore Eq.\eqref{bibunf} becomes
\begin{align}
2\lambda^2\sin k_1+\lambda\sin\Bigl(k_1-k_2\Bigr)-i\Bigl\{2\sin k_1+\lambda\sin\Bigl(k_1-k_2\Bigr) \Bigr\}=0.\notag
\end{align}
Then we obtain $2\sin k_1\pm\sin\bigl(k_1-k_2\bigr)=0$ for any $(k_1, k_2)\in(-\pi,\pi]^2$, which is a contradiction. Hence we have shown the non-existence of localization for the Fourier walk with FF.
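The contradiction argument can be illustrated numerically: a constant root would have to appear in the spectrum of $U^{(m)}(k_1,k_2)$ at every quasi-momentum, but each eigenvalue at $(k_1,k_2)=(0,0)$ already drops out of the spectrum at $(\pi/2,0)$. A sketch in Python (our own illustration, not part of the proof):

```python
import numpy as np

def U_ms(k1, k2):
    # momentum-space evolution of the moving-shift Fourier walk on Z^2,
    # with omega^{k_j} replaced by e^{i k_j}, (k1,k2) in (-pi,pi]^2
    U = 0.5 * np.array([[1, 1, 1, 1],
                        [1, 1j, -1, -1j],
                        [1, -1, 1, -1],
                        [1, -1j, -1, 1j]])
    D = np.diag(np.exp(1j * np.array([k1, -k1, k2, -k2])))
    return D @ U

# a constant root would belong to the spectrum at EVERY (k1,k2);
# each eigenvalue at (0,0) is already absent at (pi/2, 0)
ref = np.linalg.eigvals(U_ms(0.0, 0.0))           # {1, 1, -1, i}
other = np.linalg.eigvals(U_ms(np.pi / 2, 0.0))
for lam in ref:
    assert np.abs(other - lam).min() > 1e-3
```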
\section{$\pi_2^2$ case}
In this section, we compute $\Psi_n(x_1, x_2)$ of the Fourier and Grover walks with both MS and FF when $N=2$, for all $n\in\mathbb{Z}_{\geq}$ and $(x_1,x_2)\in\pi_2^2$. As the initial state, we take
\begin{align}
\Psi_0^{(j)}(x_1,x_2)=\begin{cases}{}^T\begin{bmatrix} \alpha_1&\alpha_2&\alpha_3&\alpha_4\end{bmatrix}\ \ \bigl((x_1,x_2)=(0,0)\bigr)\\ \\ {}^T\begin{bmatrix} 0&0&0&0\end{bmatrix}\ \ \bigl((x_1,x_2)\neq(0,0)\bigr)\end{cases}\ \ (j=m,f)\notag
\end{align}
for $|\alpha_1|^2+|\alpha_2|^2+|\alpha_3|^2+|\alpha_4|^2=1$ with $\alpha_{\ell}\in \mathbb{C}\ (\ell=1,2,3,4)$.
\subsection{The Fourier walk on $\pi_2^2$}
This subsection deals with $\Psi_n^{(j)}(x_1, x_2)\ \ (j=m,f)$ of the Fourier walk.\\
${\bf(a)}$\ MS case\\
By Eq.\eqref{chapo0}, we get the eigenvalues of $U^{(m)}(k_1,k_2)$ as follows.
\begin{align*}
\lambda_0^{(m)}(0,0)&=1,\ \lambda_1^{(m)}(0,0)=-1,\ \lambda_2^{(m)}(0,0)=1,\ \lambda_3^{(m)}(0,0)=i,\\
\lambda_0^{(m)}(1,1)&=1,\ \lambda_1^{(m)}(1,1)=-1,\ \lambda_2^{(m)}(1,1)=-1,\ \lambda_3^{(m)}(1,1)=-i,\\
\lambda_0^{(m)}(0,1)&=e^{\pi i/8},\ \lambda_1^{(m)}(0,1)=e^{5\pi i/8},\ \lambda_2^{(m)}(0,1)=e^{9\pi i/8},\ \lambda_3^{(m)}(0,1)=e^{13\pi i/8},\\
\lambda_0^{(m)}(1,0)&=e^{\pi i/8},\ \lambda_1^{(m)}(1,0)=e^{5\pi i/8},\ \lambda_2^{(m)}(1,0)=e^{9\pi i/8},\ \lambda_3^{(m)}(1,0)=e^{13\pi i/8},
\end{align*}
and corresponding eigenvectors are
\begin{align*}
v_0^{(m)}(0,0)&=\dfrac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-1&1\\
\end{bmatrix},\
v_1^{(m)}(0,0)=\dfrac{1}{2}\ {}^T
\begin{bmatrix}
1&-1&-1&-1\\
\end{bmatrix},\\
v_2^{(m)}(0,0)&=\dfrac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&0&1&0\\
\end{bmatrix},\
v_3^{(m)}(0,0)=\dfrac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0&1&0&-1\\
\end{bmatrix},\\
v_0^{(m)}(1,1)&=-\dfrac{1}{2}\ {}^T
\begin{bmatrix}
1&-1&-1&-1\\
\end{bmatrix},\
v_1^{(m)}(1,1)=-\dfrac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-1&1\\
\end{bmatrix},\\
v_2^{(m)}(1,1)&=\dfrac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&0&1&0\\
\end{bmatrix},\
v_3^{(m)}(1,1)=\dfrac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0&1&0&-1\\
\end{bmatrix},\\
v_j^{(m)}(0,1)&=\frac{\sqrt{2(2+\lambda_j^{(m)}+\bar{\lambda}_j^{(m)})}}{4\lambda_j^{(m)}(\lambda_j^{(m)}+1)}
\begin{bmatrix}
\lambda_j(\lambda_j^{(m)}+1)\\
{\lambda_j^{(m)}}^3+1 \\
\lambda_j^{(m)}(\lambda_j^{(m)}-1)\\
{\lambda_j^{(m)}}^3-1
\end{bmatrix},\\
v_j^{(m)}(1,0)&=\frac{\sqrt{2(2-\lambda_j^{(m)}-\bar{\lambda}_j^{(m)})}}{4\lambda_j^{(m)}(\lambda_j^{(m)}+1)}
\begin{bmatrix}
\lambda_j^{(m)}(\lambda_j^{(m)}-1)\\
-({\lambda_j^{(m)}}^3+1 )\\
\lambda_j^{(m)}(\lambda_j^{(m)}+1)\\
-({\lambda_j^{(m)}}^3-1)
\end{bmatrix}.
\end{align*}
Then we obtain $\Psi_n^{(m)}(x_1,x_2)$ for all $n\in\mathbb{Z}_{\geq}$ and $(x_1,x_2)\in\pi_2^2$ as follows.
\begin{align*}
\Psi_{4k}^{(m)}(0,0)=\frac{1+i^k}{2}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},\ \Psi_{4k}^{(m)}(1,1)=\frac{1-i^k}{2}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k}^{(m)}(0,1)=\Psi_{4k}^{(m)}(1,0)= {}^T
\begin{bmatrix}
0&0&0&0
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+1}^{(m)}(0,1)=\dfrac{1}{4}
\begin{bmatrix}
1-i^k & 1-i^k & 1-i^k & 1-i^k \\
1-i^k & i-i^{k+1} & -1+i^k & -i+i^{k+1} \\
1+i^k & -1-i^k & 1+i^k & -1-i^k \\
1+i^k & -i-i^{k+1} & -1-i^k & i+i^{k+1}
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+1}^{(m)}(1,0)=\dfrac{1}{4}
\begin{bmatrix}
1+i^k & 1+i^k & 1+i^k & 1+i^k \\
1+i^k & i+i^{k+1} & -1-i^k & -i-i^{k+1} \\
1-i^k & -1+i^k & 1-i^k & -1+i^k \\
1-i^k & -i+i^{k+1} & -1+i^k & i-i^{k+1}
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+1}^{(m)}(0,0)=\Psi_{4k+1}^{(m)}(1,1)={}^T
\begin{bmatrix}
0&0&0&0
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+2}^{(m)}(0,0)=\frac{1}{4}
\begin{bmatrix}
2 & i^k(1+i) & 0 & -i^k(-1+i)\\
i^k(-1+i) & 0 & -i^k(-1+i) & 2\\
0& i^k(-1+i) & 2 & -i^k(1+i)\\
i^k(-1+i)& 2 & -i^k(1+i) & 0
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+2}^{(m)}(1,1)=\frac{1}{4}
\begin{bmatrix}
2 & -i^k(1+i) & 0 & i^k(-1+i)\\
-i^k(-1+i) & 0 & i^k(-1+i) & 2\\
0& -i^k(-1+i) & 2 & i^k(1+i)\\
-i^k(-1+i)& 2 & i^k(1+i) & 0
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+2}^{(m)}(0,1)=\Psi_{4k+2}^{(m)}(1,0)={}^T
\begin{bmatrix}
0&0&0&0
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+3}^{(m)}(0,1)=\dfrac{1}{4}
\begin{bmatrix}
1-i^{k+1} & 1-i^{k+1} & 1+i^{k+1} & 1+i^{k+1} \\
1-i^{k+1} & -i-i^k & -1-i^{k+1} & i-i^k \\
1-i^{k+1} & -1+i^{k+1} & 1+i^{k+1} & -1-i^{k+1} \\
1-i^{k+1} & i+i^k & -1-i^{k+1} & -i+i^k
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+3}^{(m)}(1,0)=\dfrac{1}{4}
\begin{bmatrix}
1+i^{k+1} & 1+i^{k+1} & 1-i^{k+1} & 1-i^{k+1} \\
1+i^{k+1} & -i+i^k & -1+i^{k+1} & i+i^k \\
1+i^{k+1} & -1-i^{k+1} & 1-i^{k+1} & -1+i^{k+1} \\
1+i^{k+1} & i-i^k & -1+i^{k+1} & -i-i^k
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+3}^{(m)}(0,0)=\Psi_{4k+3}^{(m)}(1,1)={}^T
\begin{bmatrix}
0&0&0&0
\end{bmatrix},
\end{align*}
where $k\in\mathbb{Z}_{\geq}$.
\vspace{1\baselineskip}\\
${\bf(b)}$\ FF case\\
The eigenvalues are
\begin{align*}
\lambda_0^{(f)}(0,0)&=e^{\pi i/8},\ \lambda_1^{(f)}(0,0)=e^{5\pi i/8},\ \lambda_2^{(f)}(0,0)=e^{9\pi i/8},\ \lambda_3^{(f)}(0,0)=e^{13\pi i/8},\\
\lambda_0^{(f)}(1,1)&=e^{\pi i/8},\ \lambda_1^{(f)}(1,1)=e^{5\pi i/8},\ \lambda_2^{(f)}(1,1)=e^{9\pi i/8},\ \lambda_3^{(f)}(1,1)=e^{13\pi i/8},\\
\lambda_0^{(f)}(0,1)&=1,\ \lambda_1^{(f)}(0,1)=1,\ \lambda_2^{(f)}(0,1)=e^{\pi i/4},\ \lambda_3^{(f)}(0,1)=e^{5\pi i/4},\\
\lambda_0^{(f)}(1,0)&=-1,\ \lambda_1^{(f)}(1,0)=-1,\ \lambda_2^{(f)}(1,0)=-e^{\pi i/4},\ \lambda_3^{(f)}(1,0)=-e^{5\pi i/4},
\end{align*}
and the corresponding eigenvectors are
\begin{align*}
v_j^{(f)}(0,0)&=\dfrac{\sqrt{2(2+\lambda_j^{(f)}+\bar{\lambda}_j^{(f)})}}{4(1+\lambda_j^{(f)})}
\begin{bmatrix}
1+\lambda_j^{(f)} \\
-i{\lambda_j^{(f)}}^2(1+\lambda_j^{(f)})\\
1-\lambda_j^{(f)} \\
i{\lambda_j^{(f)}}^2(1-\lambda_j^{(f)})
\end{bmatrix},\\
v_j^{(f)}(1,1)&=\dfrac{\sqrt{2(2-\lambda_j^{(f)}-\bar{\lambda}_j^{(f)})}}{4(1-\lambda_j^{(f)})}
\begin{bmatrix}
1-\lambda_j^{(f)} \\
-i{\lambda_j^{(f)}}^2(1-\lambda_j^{(f)})\\
1+\lambda_j^{(f)} \\
i{\lambda_j^{(f)}}^2(1+\lambda_j^{(f)})
\end{bmatrix},\\
v_0^{(f)}(0,1)&=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1 & 0 & -1 & 0 \\
\end{bmatrix},\
v_1^{(f)}(0,1)=\dfrac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0 & 1 & 0 & 1\\
\end{bmatrix},\\
v_2^{(f)}(0,1)&=\dfrac{1}{2}\ {}^T
\begin{bmatrix}
1 & e^{7\pi i/4} & 1 & -e^{7\pi i/4} \\
\end{bmatrix},\
v_3^{(f)}(0,1)=\dfrac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1 & e^{3\pi i/4} & 1 & -e^{3\pi i/4} \\
\end{bmatrix}.
\end{align*}
Thus we have $\Psi_n^{(f)}(x_1,x_2)$ as follows.
\begin{align*}
\Psi_{4k}^{(f)}(0,0)=\dfrac{1}{4}
\begin{bmatrix}
2i^k+1+(-1)^k&0&-1+(-1)^k&0\\
0&2i^k+1+(-1)^k&0&1-(-1)^k\\
-1+(-1)^k&0&2i^k+1+(-1)^k&0\\
0&1-(-1)^k&0&2i^k+1+(-1)^k
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k}^{(f)}(1,1)=\dfrac{1}{4}
\begin{bmatrix}
2i^k-(1+(-1)^k)&0&-(-1+(-1)^k)&0\\
0&2i^k-(1+(-1)^k)&0&-(1-(-1)^k)\\
-(-1+(-1)^k)&0&2i^k-(1+(-1)^k)&0\\
0&-(1-(-1)^k)&0&2i^k-(1+(-1)^k)
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k}^{(f)}(0,1)=\Psi_{4k}^{(f)}(1,0)={}^T
\begin{bmatrix}
0&0&0&0
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+1}^{(f)}(0,1)=\dfrac{1}{4}
\begin{bmatrix}
-1+i^k & (-1)^{k+1}i+i^{k+1} & 1-i^k & (-1)^{k}i-i^{k+1} \\
(-1)^{k+1}+i^k & -1+i^k & (-1)^{k+1}+i^k & -1+i^k \\
1+i^k & (-1)^{k+1}i-i^{k+1} & -1-i^k & (-1)^{k}i+i^{k+1} \\
(-1)^{k}+i^k & -1-i^k & (-1)^k+i^k & -1-i^k
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+1}^{(f)}(1,0)=\dfrac{1}{4}
\begin{bmatrix}
1+i^k & (-1)^{k}i+i^{k+1} & -1-i^k & (-1)^{k+1}i-i^{k+1} \\
(-1)^{k}+i^k & 1+i^k & (-1)^{k}+i^k & 1+i^k \\
-1+i^k & (-1)^{k}i-i^{k+1} & 1-i^k & (-1)^{k+1}i+i^{k+1} \\
(-1)^{k+1}+i^k & 1-i^k & (-1)^{k+1}+i^k & 1-i^k \\
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+1}^{(f)}(0,0)=\Psi_{4k+1}^{(f)}(1,1)={}^T
\begin{bmatrix}
0&0&0&0
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+2}^{(f)}(0,0)=\dfrac{1}{4}
\begin{bmatrix}
1+(-1)^ki & 2i^{k+1} & -1+(-1)^ki & 0\\
2i^k & 1+(-1)^ki & 0 & 1-(-1)^ki\\
-1+(-1)^ki & 0&1+(-1)^ki & -2i^{k+1}\\
0 & 1-(-1)^ki & -2i^k & 1+(-1)^ki
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+2}^{(f)}(1,1)=\dfrac{1}{4}
\begin{bmatrix}
-1-(-1)^ki & 2i^{k+1} & 1-(-1)^ki & 0\\
2i^k & -1-(-1)^ki & 0 & -1+(-1)^ki\\
1-(-1)^ki & 0 & -1-(-1)^ki & -2i^{k+1}\\
0 & -1+(-1)^ki & -2i^k & -1-(-1)^ki
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+2}^{(f)}(0,1)=\Psi_{4k+2}^{(f)}(1,0)={}^T
\begin{bmatrix}
0&0&0&0
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+3}^{(f)}(0,1)=\dfrac{1}{4}
\begin{bmatrix}
-1+i^{k+1} & (-1)^{k}+i^{k+1} & 1+i^{k+1} & (-1)^{k+1}+i^{k+1} \\
(-1)^{k+1}i+i^k & -1+i^{k+1} & (-1)^{k+1}i-i^k & -1-i^{k+1} \\
1-i^{k+1} & (-1)^{k}+i^{k+1} & -1-i^{k+1} & (-1)^{k+1}+i^{k+1} \\
(-1)^{k}i-i^k & -1+i^{k+1} & (-1)^{k}i+i^k & -1-i^{k+1} \\
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+3}^{(f)}(1,0)=\dfrac{1}{4}
\begin{bmatrix}
1+i^{k+1} & (-1)^{k+1}+i^{k+1} & -1+i^{k+1} & (-1)^{k}+i^{k+1} \\
(-1)^{k}i+i^k & 1+i^{k+1} & (-1)^{k}i-i^k & 1-i^{k+1} \\
-1-i^{k+1} & (-1)^{k+1}+i^{k+1} & 1-i^{k+1} & (-1)^{k}+i^{k+1} \\
(-1)^{k+1}i-i^k & 1+i^{k+1} & (-1)^{k+1}i+i^k & 1-i^{k+1} \\
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_{4k+3}^{(f)}(0,0)=\Psi_{4k+3}^{(f)}(1,1)={}^T
\begin{bmatrix}
0&0&0&0
\end{bmatrix}.
\end{align*}
Then we see that the Fourier walks with MS and FF on $\pi_2^2$ have period $16$, i.e., $\Psi_{n+16}^{(j)}=\Psi_n^{(j)}\ (j=m,f)$ for all $n\in\mathbb{Z}_\geq$, where $\Psi_n^{(j)}$ is the state of the walk at time $n$.
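A short remark on why the period is $16$ (our own observation, read off from the amplitudes above): writing $n=4k+r$ with $r\in\{0,1,2,3\}$, the entries of $\Psi_{n}^{(f)}$ depend on $k$ only through $i^k$, $i^{k+1}$ and $(-1)^k$, each of which has period $4$ in $k$, and analogously for MS. Hence
\begin{align*}
\Psi_{n+16}^{(j)}=\Psi_{4(k+4)+r}^{(j)}=\Psi_{4k+r}^{(j)}=\Psi_{n}^{(j)}\qquad(j=m,f).
\end{align*}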
\subsection{The Grover walk on $\pi_2^2$}
In this subsection, we check the probability amplitudes of the Grover walk and compare them with those of the Fourier walk for the same initial state:
\begin{align}
\Psi_0^{(j)}(x_1,x_2)=\begin{cases}{}^T\begin{bmatrix} \alpha_1&\alpha_2&\alpha_3&\alpha_4\end{bmatrix}\ \ \bigl((x_1,x_2)=(0,0)\bigr)\\ \\ {}^T\begin{bmatrix} 0&0&0&0\end{bmatrix}\ \ \bigl((x_1,x_2)\neq(0,0)\bigr)\end{cases}\ \ (j=m,f)\notag
\end{align}
for $|\alpha_1|^2+|\alpha_2|^2+|\alpha_3|^2+|\alpha_4|^2=1$ with $\alpha_{\ell}\in \mathbb{C}\ (\ell=1,2,3,4)$.\\
${\bf(a)}$\ MS case\\
The eigenvalues are
\begin{align*}
\lambda_0^{(m)}(0,0)&=1,\ \lambda_1^{(m)}(0,0)=\lambda_2^{(m)}(0,0)=\lambda_3^{(m)}(0,0)=-1,\\\lambda_0^{(m)}(1,1)&=\lambda_1^{(m)}(1,1)=\lambda_2^{(m)}(1,1)=1,\ \lambda_3^{(m)}(1,1)=-1,\\
\lambda_0^{(m)}(0,1)&=1,\ \lambda_1^{(m)}(0,1)=-1,\ \lambda_2^{(m)}(0,1)=i,\ \lambda_3^{(m)}(0,1)=-i,\\ \lambda_0^{(m)}(1,0)&=1,\ \lambda_1^{(m)}(1,0)=-1,\ \lambda_2^{(m)}(1,0)=i,\ \lambda_3^{(m)}(1,0)=-i,
\end{align*}
the eigenvectors are
\begin{align*}
v_0^{(m)}(0,0)&=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&1&1
\end{bmatrix},\
v_1^{(m)}(0,0)=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&-1&0&0
\end{bmatrix},\\
v_2^{(m)}(0,0)&=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0&0&1&-1
\end{bmatrix},\
v_3^{(m)}(0,0)=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-1&-1
\end{bmatrix},\\
v_0^{(m)}(1,1)&=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&-1&0&0
\end{bmatrix},\
v_1^{(m)}(1,1)=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0&0&1&-1
\end{bmatrix},\\
v_2^{(m)}(1,1)&=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-1&-1
\end{bmatrix},\
v_3^{(m)}(1,1)=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&1&1
\end{bmatrix},\\
v_0^{(m)}(0,1)&=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0&0&1&-1
\end{bmatrix},\
v_1^{(m)}(0,1)=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&-1&0&0
\end{bmatrix},\\
v_2^{(m)}(0,1)&=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&i&i
\end{bmatrix},\
v_3^{(m)}(0,1)=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-i&-i
\end{bmatrix},\\
v_0^{(m)}(1,0)&=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&-1&0&0
\end{bmatrix},\
v_1^{(m)}(1,0)=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0&0&1&-1
\end{bmatrix},\\
v_2^{(m)}(1,0)&=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-i&-i
\end{bmatrix},\
v_3^{(m)}(1,0)=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&i&i
\end{bmatrix}.
\end{align*}
Then we get
\begin{align*}
\Psi_n^{(m)}(0,0)=\dfrac{1+(-1)^n}{8}
\begin{bmatrix}
i^n+3&i^n-1&0&0\\
i^n-1&i^n+3&0&0\\
0&0&i^n+3&i^n-1\\
0&0&i^n-1&i^n+3
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4\\
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_n^{(m)}(1,1)=\dfrac{(1-i^n)(1+(-1)^n)}{8}
\begin{bmatrix}
1&1&0&0\\
1&1&0&0\\
0&0&1&1\\
0&0&1&1
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4\\
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_n^{(m)}(0,1)=\dfrac{1+(-1)^{n+1}}{8}
\begin{bmatrix}
0&0&1+i^{n+1}&1+i^{n+1}\\
0&0&1+i^{n+1}&1+i^{n+1}\\
1-i^{n+1}&1-i^{n+1}&-2&2\\
1-i^{n+1}&1-i^{n+1}&2&-2
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4\\
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_n^{(m)}(1,0)=\dfrac{1+(-1)^{n+1}}{8}
\begin{bmatrix}
-2&2&1-i^{n+1}&1-i^{n+1}\\
2&-2&1-i^{n+1}&1-i^{n+1}\\
1+i^{n+1}&1+i^{n+1}&0&0\\
1+i^{n+1}&1+i^{n+1}&0&0
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4\\
\end{bmatrix}.
\end{align*}
${\bf(b)}$\ FF case\\
The eigenvalues are
\begin{align*}
\lambda_0^{(f)}(0,0)&=\lambda_1^{(f)}(0,0)=\lambda_2^{(f)}(0,0)=1,\ \lambda_3^{(f)}(0,0)=-1,\\
\lambda_0^{(f)}(1,1)&=1,\ \lambda_1^{(f)}(1,1)=\lambda_2^{(f)}(1,1)=\lambda_3^{(f)}(1,1)=-1,\\
\lambda_0^{(f)}(0,1)&=1,\ \lambda_1^{(f)}(0,1)=-1,\ \lambda_2^{(f)}(0,1)=i,\ \lambda_3^{(f)}(0,1)=-i,\\
\lambda_0^{(f)}(1,0)&=1,\ \lambda_1^{(f)}(1,0)=-1,\ \lambda_2^{(f)}(1,0)=i, \lambda_3^{(f)}(1,0)=-i,
\end{align*}
and the corresponding eigenvectors are
\begin{align*}
v_0^{(f)}(0,0)&=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&-1&0&0
\end{bmatrix},\
v_1^{(f)}(0,0)=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0&0&1&-1
\end{bmatrix},\\
v_2^{(f)}(0,0)&=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&1&1
\end{bmatrix},\
v_3^{(f)}(0,0)=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-1&-1
\end{bmatrix},\\
v_0^{(f)}(1,1)&=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-1&-1
\end{bmatrix},\
v_1^{(f)}(1,1)=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&-1&0&0
\end{bmatrix},\\
v_2^{(f)}(1,1)&=\frac{1}{\sqrt{2}}{}^T
\begin{bmatrix}
0&0&1&-1
\end{bmatrix},\
v_3^{(f)}(1,1)=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&1&1
\end{bmatrix},\\
v_0^{(f)}(0,1)&=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&-1&0&0
\end{bmatrix},\
v_1^{(f)}(0,1)=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0&0&1&-1
\end{bmatrix},\\
v_2^{(f)}(0,1)&=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&i&i
\end{bmatrix},\
v_3^{(f)}(0,1)=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-i&-i
\end{bmatrix},\\
v_0^{(f)}(1,0)&=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
0&0&1&-1
\end{bmatrix},\
v_1^{(f)}(1,0)=\frac{1}{\sqrt{2}}\ {}^T
\begin{bmatrix}
1&-1&0&0
\end{bmatrix},\\
v_2^{(f)}(1,0)&=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&-i&-i
\end{bmatrix},\
v_3^{(f)}(1,0)=\frac{1}{2}\ {}^T
\begin{bmatrix}
1&1&i&i
\end{bmatrix}.
\end{align*}
Thus we have
\begin{align*}
\Psi_n^{(f)}(0,0)=\dfrac{1+(-1)^n}{8}
\begin{bmatrix}
i^n+3&i^n-1&0&0\\
i^n-1&i^n+3&0&0\\
0&0&i^n+3&i^n-1\\
0&0&i^n-1&i^n+3
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_n^{(f)}(1,1)=\dfrac{(1-i^n)(1+(-1)^n)}{8}
\begin{bmatrix}
1&1&0&0\\
1&1&0&0\\
0&0&1&1\\
0&0&1&1
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_n^{(f)}(0,1)=\dfrac{1+(-1)^{n+1}}{8}
\begin{bmatrix}
0&0&1+i^{n+1}&1+i^{n+1}\\
0&0&1+i^{n+1}&1+i^{n+1}\\
1-i^{n+1}&1-i^{n+1}&2&-2\\
1-i^{n+1}&1-i^{n+1}&-2&2
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix},
\end{align*}
\begin{align*}
\Psi_n^{(f)}(1,0)=\dfrac{1+(-1)^{n+1}}{8}
\begin{bmatrix}
2&-2&1-i^{n+1}&1-i^{n+1}\\
-2&2&1-i^{n+1}&1-i^{n+1}\\
1+i^{n+1}&1+i^{n+1}&0&0\\
1+i^{n+1}&1+i^{n+1}&0&0
\end{bmatrix}
\begin{bmatrix}
\alpha_1\\
\alpha_2\\
\alpha_3\\
\alpha_4
\end{bmatrix}.
\end{align*}
Compared with the Fourier walks, we see that the Grover walks with both MS and FF on $\pi_2^2$ have the shorter period $4$, i.e., $\Psi_{n+4}=\Psi_n$ for all $n\in\mathbb{Z}_\geq$, where $\Psi_n$ is the state of the walk at time $n$.
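The period $4$ can also be read off from the spectrum (our own remark): every Grover eigenvalue listed above lies in $\{1,-1,i,-i\}$ and hence satisfies $\lambda^{4}=1$. Writing $U_j$ for the time-evolution operator of the walk (notation ours), this gives $U_j^{4}=I$ and therefore
\begin{align*}
\Psi_{n+4}^{(j)}=U_j^{4}\,\Psi_n^{(j)}=\Psi_n^{(j)}\qquad(j=m,f).
\end{align*}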
\section{Summary}
In this paper we considered discrete-time QWs with MS and FF on $\mathbb{Z}^2$ and $\pi_N^2$. We showed that localization does not occur for the Fourier walk on $\mathbb{Z}^2$ with MS and FF by a contradiction argument, which differs from the method based on Fourier analysis by Komatsu and Tate \cite{Komatsu and Tate.}. Moreover, we computed the eigenvalues and corresponding eigenvectors in $(k_1,k_2)$-space of the Fourier walks on $\pi_N^2$ with MS and FF for some special cases, for instance, $k_1=k_2$ or $k_1+k_2\equiv0\ (\bmod N)$, and derived the measure at time $n$ from these results. In addition, we calculated the amplitudes of the Grover and Fourier walks on $\pi_2^2$. An interesting future problem would be to obtain the measure at time $n$ of the Fourier walks on $\mathbb{Z}^2$ and $\pi_N^2$ for an arbitrary initial state.
\documentclass[conference]{IEEEtran}
\usepackage{geometry}
\geometry{letterpaper,left=1.75cm,right=1.75cm,top=1.5cm,bottom=1.5cm}
\usepackage{graphicx}%
\usepackage{subfigure}
\usepackage{setspace}
\usepackage{amsmath}
\usepackage{color}
\begin{document}
\title{\Large A Noise Filter for Dynamic Vision Sensors \\ using Self-adjusting Threshold}
\author{\normalsize
\begin{tabular}[t]{c@{\extracolsep{1em}}c@{\extracolsep{1em}}c@{\extracolsep{1em}}c@{\extracolsep{1em}}c@{\extracolsep{1em}}c@{\extracolsep{1em}}c}
\large Shasha Guo& \large Ziyang Kang& \large Lei Wang& \large Limeng Zhang& \large Xiaofan Chen& \large Shiming Li& \large Weixia Xu \\
\\
\multicolumn{7}{c}{College of Computer Science and Technology} \\
\multicolumn{7}{c}{National University of Defense Technology} \\
\multicolumn{7}{c}{Changsha, Hunan, China. 410073} \\
\multicolumn{7}{c}{e-mail: {guoshasha13, kangziyang14}@nudt.edu.cn} \\
\end{tabular}}
\maketitle
\makeatletter
\def\ps@IEEEtitlepagestyle{%
\def\@oddfoot{\mycopyrightnotice}%
\def\@evenfoot{}%
}
\makeatother
\def\mycopyrightnotice{%
\begin{minipage}{\textwidth}
\footnotesize
\hfill\\~\\
\end{minipage}
\gdef\mycopyrightnotice{
}
{\small\textbf{Abstract}---
Neuromorphic event-based dynamic vision sensors (DVS) have much faster sampling rates and a higher dynamic range than frame-based imagers. However, they are sensitive to unwanted background activity (BA) events.
We propose a new criterion with little computation overhead for separating real events from BA events, which uses global space and time information rather than the local information obtained by Gaussian convolution, and which can also be used as a filter. We denote the filter GF. We demonstrate GF on three datasets, each recorded by a different DVS with a different output size. The experimental results show that our filter produces the clearest frames compared with baseline filters and runs fast.}
\section{Introduction}
Research on neuromorphic event-based sensors (``silicon retinae'') started a few decades ago \cite{mead91}. Recently, the technology has matured to the point where several sensors are commercially available. Some of the popular dynamic vision sensors (DVS) are the DVS128 \cite{dvs128}, the Dynamic and Active pixel Vision Sensor (DAVIS) \cite{davis}, the Asynchronous Time-based Image Sensor (ATIS) \cite{atis}, and the CeleX-IV \cite{bib:CeleX}.
Different from conventional frame-based imagers that work by sampling the scene at a fixed temporal rate (typically 30 frames per second), these sensors detect dynamic changes in illumination. This results in a higher dynamic range, higher sampling rate, and lower power consumption.
These sensors have several possible applications.
However, these sensors will produce background activity (BA) events under constant illumination, which are caused by temporal noise and junction leakage currents \cite{dvs128,Bs2filter,phdthesis}.
There are already multiple noise filtering methods for event-based data available. The Nearest Neighbor (NNb) filter based on spatiotemporal correlation \cite{Bs1filter, filterIeng, Bs2filter, filter2015, filter2016, feng2020event} is the most commonly employed method. Besides, there are some variations of NNb filters as well as some other filters based on differing polarity, refractory period and inter-spike interval \cite{filter2016}.
However, it is difficult to decide whether an event is real or noise from the event alone. When the track of the target is known, the temporal correlation of events generated by a single pixel can be measured through repeated recordings: higher correlation suggests a higher probability of real events, and vice versa. In the real world, however, the motion of every target is rarely known in advance, so this way of judging image quality is not feasible.
Although \cite{filter2016} and \cite{feng2020event} introduce criteria for real events, the computation is heavy and time-consuming. The method of \cite{filter2016} uses Gaussian convolution along the time dimension to obtain the correlation, and \cite{feng2020event} strengthens this by convolving along both the space and time dimensions. These methods make each event participate in multiple calculations in addition to the key comparison operations for judging, which places a heavy computational burden and increases processing time.
To tackle these challenges, we design a new criterion with little computation overhead for separating real events from BA events, utilizing global space and time information rather than the local information obtained by Gaussian convolution. Naturally, it can be used as a filter, since deciding whether an event is real or BA decides whether to pass or discard it. We therefore introduce both a criterion for DVS BA event filtering and a new spatiotemporal BA filter.
Our contributions are as follows. First, we propose a criterion for separating real events from BA events with little computation overhead; it also acts as a BA filter, which we call GF. Second, we demonstrate GF on three datasets, each recorded by a different DVS with a different output size. The experimental results show that our filter produces the clearest frames compared with baseline filters.
\section{Background}
\subsection{DVS128}
The DVS128 \cite{dvs128} is an event-based image sensor that generates asynchronous events when it detects changes in log intensity. Each pixel independently and in continuous time quantizes local relative intensity changes: if the change since the pixel's last event exceeds the upper threshold, the pixel generates an ``ON'' event; if it passes the lower threshold, the pixel generates an ``OFF'' event; otherwise, no event is generated. By this mechanism, the sensor's output stream includes only the detected changes in the sensed signal and carries no redundant data.
To encode the event information for output, the DVS128 sensor uses the Address Event Representation (AER) protocol \cite{aerprotocal} to create a quadruplet $e(p,x,y,ts)$ for each event. Specifically, $p$ is the polarity (ON or OFF), $x$ and $y$ are the pixel position of the event, and $ts$ is a 32-bit timestamp carrying the timing information of the event.
\subsection{DAVIS}
DAVIS \cite{davis} combines the DVS with an active pixel sensor (APS) at the pixel level. It allows simultaneous output of asynchronous events and synchronous frames.
The DAVIS sensor also uses AER protocol to encode the event output.
\subsection{CeleX-IV}
\label{sec:CeleX}
The CeleX-IV is a high-resolution dynamic vision sensor from CelePixel Technology Co., Ltd. \cite{bib:CeleX}. The resolution of the sensor is 768 $\times$ 640, and the maximum output rate is 200 Meps. Unlike DAVIS, the events output by this sensor contain no polarity information but do contain the light intensity at the event \cite{bib:CeleX2}, and the events are output in a particular order. Take Fig.~\ref{fig:CeleX} as an example. The sensor first selects a row at random from all rows that have events at a given instant, and then outputs the events of that row sequentially in a fixed order. After the row finishes, the process repeats from the row selection. The key point is that the events of a row share the same timestamp.
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{./celexevent.png}
\caption{The event readout stream of CeleX. }
\label{fig:CeleX}
\end{figure}
\subsection{BA Events}
BA events are caused by thermal noise and junction leakage currents \cite{dvs128,Bs2filter,phdthesis}. These events degrade the quality of the data and incur unnecessary communication bandwidth and computing resources.
The BA and the real activity events differ in that the BA event lacks temporal correlation with events in its spatial neighborhood while the real events, arising from moving objects or changes in illumination, have a temporal correlation with events from their spatial neighbors. On the basis of this difference, the BA events can be filtered out by detecting events which do not have spatial correlation with events generated by the neighborhood pixels. Such a filter is a spatiotemporal correlation filter.
The filter decides whether an event $e$ is real or noise by checking the condition $T_{e} - T_{NNb} < dT$, where $T_e$ is the event's timestamp and $T_{NNb}$ ranges over the stored timestamps of the neighborhood pixels, i.e., the pixels $p$ satisfying $|x_{p} - x| \leq 1$ and $|y_{p} - y| \leq 1$. If the condition is met for some neighbor, the event is regarded as a real activity event; $dT$ is the limit on the timestamp difference.
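As a concrete illustration, the spatiotemporal correlation check can be sketched as follows (a minimal Python sketch; the function name, the event-tuple layout, and the dense timestamp map are our own illustration, not taken from the cited implementations):

```python
import numpy as np

def nnb_filter(events, width, height, dT):
    """Pass an event only if some pixel in its 3x3 neighbourhood fired
    within dT time units before it; otherwise treat it as BA noise.

    events: iterable of (x, y, ts) with non-decreasing ts.
    """
    last_ts = np.full((height, width), -np.inf)  # per-pixel last timestamp
    passed = []
    for x, y, ts in events:
        y0, y1 = max(0, y - 1), min(height, y + 2)
        x0, x1 = max(0, x - 1), min(width, x + 2)
        # real event iff ts - T_NNb < dT for some stored neighbour timestamp
        if np.any(ts - last_ts[y0:y1, x0:x1] < dT):
            passed.append((x, y, ts))
        last_ts[y, x] = ts  # always record the event for later checks
    return passed
```

Note that the check runs before the event's own timestamp is stored, so an isolated pixel's first event never passes.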
\section{Related work}
\label{sec:relatedwork}
In this section, we introduce three event-based spatiotemporal filters and one frame-based filter.
There are also some other filtering methods. Researchers in \cite{TNfilter} proposed a filter built from neuromorphic integrate-and-fire neurons that integrate spikes not only from the corresponding pixel but also from its neighborhood pixels before firing. The method of \cite{lifetimeestimation} assigns a lifetime to each event, with noise events assigned a lifetime of 0.
\label{sec:bs2}
Here we introduce three event-based filters with O($N^2$), O($N^2/s^2$), and O($N$) space complexity, respectively; they are denoted Bs1, Bs2, and Bs3 in the rest of the paper.
In Bs1 filter \cite{Bs1filter}, each pixel has a memory cell for storing the last event’s timestamp. The stored timestamps are used for computing the spatiotemporal correlation (Fig.~\ref{fig:bs1}).
Bs2 filter uses sub-sampling groups to reduce the memory size \cite{Bs2filter}. Each sub-sampling group of factor $s$ includes $s^{2}$ pixels and uses one memory cell for storing the timestamp of the most recent event of the group (Fig.~\ref{fig:bs2}).
Bs3 filter assigns two memory cells to each row and each column to store the most recent event in that row or column (Fig.~\ref{fig:bs3}) \cite{Bs3filter}. This filter is designed to store all the information of an event, so both cells are 32 bits wide, one storing the timestamp and the other the polarity and the position on the other axis.
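The three layouts differ only in how a pixel is mapped to its timestamp-memory cell; a minimal sketch of that mapping (illustrative layout and names only; actual hardware addressing may differ):

```python
def cell_index(x, y, width, scheme, s=2):
    """Map pixel (x, y) to its timestamp-memory cell under each scheme.

    'bs1': one cell per pixel            -> width * height cells
    'bs2': one cell per s x s group      -> (width/s) * (height/s) cells
    'bs3': one cell per row (plus one per column; the row cell is returned)
    """
    if scheme == "bs1":
        return y * width + x
    if scheme == "bs2":
        groups_per_row = width // s
        return (y // s) * groups_per_row + (x // s)
    if scheme == "bs3":
        return y  # row cell; the matching column cell index is simply x
    raise ValueError("unknown scheme: " + scheme)
```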
\begin{figure}
\centering
\subfigure[Bs1]{
\label{fig:bs1}
\includegraphics[width=1.01in, height=0.8in]{./bs1.png}}
\subfigure[Bs2]{
\label{fig:bs2}
\includegraphics[width=1.01in, height=0.75in]{./bs2.png}}
\subfigure[Bs3]{
\label{fig:bs3}
\includegraphics[width=1.01in, height=0.8in]{./bs3s.png}}
\caption{Three event-based filters \cite{Bs3filter}.}
\label{fig:filters}
\end{figure}
\section{Proposed Filter}
We propose a new method for separating real events and BA events using the time difference between two events at the same pixel. By separating real events and BA events, it can naturally be used as a BA filter. It utilizes both space and time information from a global perspective, and we denote the filter $GF$ for simplicity.
Table~\ref{tab:timeparameter} gives the notation and explanations of the parameters that appear in the following description.
\begin{table}
\centering
\caption{Parameters of threshold calculation for $GF$.}
\begin{tabular}{lp{20.835em}}
\hline
Para. & \multicolumn{1}{l}{Description} \\
\hline
TD & the time difference between the first event and the last event within a frame \\
ATD & the average time difference\\
ANEP & the average number of events per pixel\\
ANEM & the average number of events per memory cell\\
FN & the number of events of a frame \\
SF & the scaling factor \\
X & the image width of the frame \\
Y & the image height of the frame \\
s & the subsampling window similar to Bs2 described in Section.~\ref{sec:bs2} \\
\hline
\end{tabular}%
\label{tab:timeparameter}%
\end{table}%
We introduce the time threshold for the GF filter as follows, which is denoted as $TGF$.
For each pixel, the $ANEP$ is
\begin{equation}
\centering
ANEP = \frac{FN}{X \times Y}.
\end{equation}
Intuitively, the time threshold for separating real events and BA events should be the ratio of the whole time difference to the average number of events per pixel within a frame when each pixel has its own memory cell ($s=1$), as in Bs1, which is
\begin{equation}
\centering
TGF = \frac{TD}{ANEP} = \frac{TD}{\frac{FN}{X \times Y}}.
\end{equation}
However, when $s^{2}$ pixels share a memory cell like Bs2, the average number of events per pixel turns to the average number of events per memory cell, $ANEM$, which is
\begin{equation}
\centering
ANEM = ANEP \times s^{2} = \frac{s^{2} \times FN}{X \times Y}.
\end{equation}
Thus, the time threshold for GF$_{s}$ is denoted as $TGF_{s}$, and described by Eq.~\ref{equ:dvstime0}.
\begin{equation}
TGF_{s} = \frac{TD}{ANEM} = \frac{TD \times (X \times Y)}{s^{2} \times FN}.
\label{equ:dvstime0}
\end{equation}
This is not yet the final form. Since the $ATD$ between two BA events should be much larger than that between two real events according to the spatiotemporal correlation, BA events increase the $ATD$ between any two events in the frame compared with the ideal, BA-free condition; in other words, they increase the $TD$ of the frame. The $TGF_{s}$ based on Eq.~\ref{equ:dvstime0} could thus be larger than expected, so we introduce a scaling factor $SF$ and update $TGF_{s}$ as
\begin{equation}
TGF_{s} = \frac{TD}{ANEM \times SF} = \frac{TD \times X \times Y}{s^{2} \times FN \times SF}.
\label{equ:dvstime}
\end{equation}
For CeleX, due to its special timestamp assignment described in Section~\ref{sec:CeleX}, up to $X$ events can share the same timestamp. We regard these events as one event when computing the $ANEM$, namely,
\begin{equation}
\centering
ANEM = \frac{s^{2} \times FN}{X \times (X \times Y)}.
\end{equation}
With consideration of scaling factor $SF$, the time threshold $TGF_{s}$ can be described as Eq.~\ref{equ:CeleXtime}.
\begin{equation}
TGF_{s} = \frac{TD}{ANEM \times SF} = \frac{TD \times X \times (X \times Y)}{s^{2} \times FN \times SF}.
\label{equ:CeleXtime}
\end{equation}
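Both threshold formulas can be collected into one small helper (a sketch; the function name and argument layout are ours):

```python
def tgf_threshold(TD, FN, X, Y, s, SF, celex=False):
    """Self-adjusting threshold TGF_s computed from the previous frame.

    Standard DVS: TGF_s = (TD * X * Y) / (s^2 * FN * SF).
    CeleX:        multiplied by X, since up to X events share one timestamp.
    TD: frame time span, FN: events per frame, X x Y: sensor size,
    s: sub-sampling factor, SF: scaling factor.
    """
    base = (TD * X * Y) / (s * s * FN * SF)
    return base * X if celex else base
```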
It is worth noticing that the $TGF_{s}$ for CeleX is likely to be smaller than expected: although up to $X$ events can share a timestamp, the actual number is usually smaller than $X$.
For the events of each frame, the time threshold $TGF_{s}$ is calculated from the previous frame; for the first frame of an event stream it is initialized to a constant. Although the threshold is calculated from frame information, GF should still be seen as an event-oriented filter. The steps of the GF filter, for each event, are as follows:
\begin{itemize}
\item Fetch the corresponding memory cell and get the last recorded timestamp;
\item Check if the present timestamp is within $TGF_{s}$ of the last timestamp. If the time difference is less than $TGF_{s}$, pass the event to the output, otherwise discard it.
\item Store the timestamp of the new event in the corresponding memory cell.
\end{itemize}
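The three steps above can be sketched as a single pass over a frame's events (a minimal sketch assuming the sub-sampling grouping of Bs2; the per-frame threshold update from the previous frame is left to the caller, and names are ours):

```python
import numpy as np

def gf_filter_frame(events, X, Y, s, TGF_s):
    """One pass of the GF filter over a frame's events.

    s*s pixels share one memory cell (as in Bs2); an event passes if its
    gap to the cell's last recorded timestamp is below TGF_s.
    """
    last_ts = np.full((Y // s, X // s), -np.inf)
    passed = []
    for x, y, ts in events:
        cy, cx = y // s, x // s           # fetch the corresponding cell
        if ts - last_ts[cy, cx] < TGF_s:  # within threshold -> real event
            passed.append((x, y, ts))
        last_ts[cy, cx] = ts              # store the new timestamp
    return passed
```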
\section{Experiment Setup}
\subsection{Dataset}
We use three datasets, a collected DVS dataset, DvsGesture \cite{dataset}, a DAVIS-240C dataset Roshambo \cite{bib:RoShamBo}, and our own dataset recorded from CeleX-IV.
DvsGesture comprises 11 hand gesture categories from 29 subjects under 3 illumination conditions. Roshambo is a dataset of rock, paper, scissors and background images. We use three sub-recordings of rock, paper, and scissors. And our own dataset is also a rock-paper-scissors dataset.
To make the event stream visible, it is common to generate a picture frame from the events, either over a fixed time window or from a constant number of events.
We choose the fixed number of events; a pixel in the picture is set to 255 if it has an event.
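The frame-generation rule can be sketched as follows (names ours; a pixel with at least one event is set to 255):

```python
import numpy as np

def events_to_frame(events, X, Y, count):
    """Render the first `count` events as a binary frame.

    A pixel becomes 255 if at least one event lands on it; the remaining
    events are returned for the next frame.
    """
    frame = np.zeros((Y, X), dtype=np.uint8)
    for x, y, *rest in events[:count]:
        frame[y, x] = 255
    return frame, events[count:]
```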
The baseline filters are described in Section~\ref{sec:relatedwork}.
\subsection{Software Configuration}
The fixed time threshold used for Bs1 is 0.5 ms. For Bs2 with subsampling window $s$ ($s>1$), the fixed time threshold is $0.5 \times s^{2}$ ms. For Bs3, the time threshold is $\frac{0.5}{X}$ ms.
The choice of $SF$ is related to the number of events in a frame and the BA frequency under different circumstances when recording the data. We find the proper $SF$ for DVS128 and DAVIS is larger than 1 while the proper $SF$ for CeleX is smaller than 1. The $SF$ for Eq.~\ref{equ:dvstime} is set to be 10 and the $SF$ for Eq.~\ref{equ:CeleXtime} is set to be 0.2 in this work.
\section{Experimental Results}
First, we show that GF$_{s}$ separates real events and BA events well. Then we compare the runtime cost.
\subsection{Denoising Effect}
\label{sec:visual effect}
For the Roshambo dataset, we use 5k events to generate each frame.
For our dataset, since the CeleX-IV has a very large output, namely 768 $\times$ 640 pixels, we choose 50k events per frame to make the images easy to distinguish with the human eye.
Fig.~\ref{fig:dvsroshambo} shows the performance of different filters on the Roshambo dataset. It can be seen that GF$_{1}$ is clearer than Bs1 and GF$_{2}$ is clearer than Bs2. Bs3 filters out the BA but also many real events, leaving only a dim outline.
For our dataset, we show two different cases: the object moving fast and moving slowly. Fig.~\ref{fig:roshambo1} depicts the events when the hand is moving actively, so there are many real events within the frame. On the contrary, Fig.~\ref{fig:roshambo2} shows the events when the hand barely moves, so BA events account for a much higher percentage of the frame than in the fast-moving case.
In Fig.~\ref{fig:roshambo1}, the initial frame does not contain many noise pixels. GF$_{1}$ is less clear than Bs1 but clearer than Bs2. GF$_{2}$ is clearer than Bs1 and still keeps the background area clean. Bs3 is the worst, as it still contains many BA noise pixels. In Fig.~\ref{fig:roshambo2}, although Bs3 shows the object, it shows many noise points as well. Among the other filters, Bs1 keeps relatively more information than Bs2 and GF$_{1}$, which remain very similar to each other, while GF$_{2}$ shows a clear outline of the object and removes the noise effectively.
\begin{figure*}
\centering
\subfigure[INIT]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./INIT_4155.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./INIT_3067.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./INIT_5058.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs1]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./Bs1_4155.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs1_3067.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs1_5058.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[GF$_{1}$]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./TF_4155.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF_3067.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF_5058.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs2]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./Bs2_4155.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs2_3067.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs2_5058.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[GF$_{2}$]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./TF2_4155.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF2_3067.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF2_5058.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs3]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./Bs3_4155.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs3_3067.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs3_5058.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\caption{The rock-paper-scissors recorded by DAVIS240 \cite{davis}.}
\vspace{-0.2cm}
\label{fig:dvsroshambo}
\end{figure*}
\begin{figure*}
\centering
\subfigure[INIT]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./INIT_67.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./INIT_22.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./INIT_12.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs1]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./Bs1_67.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs1_22.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs1_12.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[GF$_{1}$]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./TF_67.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF_22.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF_12.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs2]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./Bs2_67.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs2_22.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs2_12.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[GF$_{2}$]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./TF2_67.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF2_22.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF2_12.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs3]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./Bs3_67.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs3_22.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs3_12.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\caption{The rock-paper-scissors from CeleX when object moving fast.}
\vspace{-0.2cm}
\label{fig:roshambo1}
\end{figure*}
\begin{figure*}
\centering
\subfigure[INIT]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./INIT_75.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./INIT_32.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./INIT_6.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs1]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./Bs1_75.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs1_32.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs1_6.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[GF$_{1}$]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./TF_75.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF_32.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF_6.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs2]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./Bs2_75_s2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs2_32_s2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs2_6_s2.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[GF$_{2}$]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./TF2_75_s2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF2_32_s2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./TF2_6_s2.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs3]{
\begin{minipage}[t]{0.16\textwidth}
\centering
\includegraphics[width=0.9in]{./Bs3_75.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs3_32.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs3_6.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\caption{Rock-paper-scissors frames from CeleX when the object is moving slowly. The percentage of BA events rises in the fixed-count frames.}
\vspace{-0.2cm}
\label{fig:roshambo2}
\end{figure*}
This explains why GF$_{1}$ does not keep many real events in this case: as the object movement slows down, the time difference between real events becomes close to that between BA events, and when they are close, it is hard to distinguish them using GF$_{1}$. The baseline background filters do not show satisfactory results in such cases either. However, GF$_{2}$ handles this problem better because it has more spatial information to draw on, as a group of pixels shares the same memory cell, while GF$_{2}$ has the same memory cost as Bs2.
\subsubsection{Quantitative Analysis based on GF}
This experiment is carried out on the Gesture dataset recorded with DVS128. Since the GF$_{2}$ method shows good denoising performance and its time consumption is acceptable, we use it as an evaluation reference for the other filters.
We calculate the $TPR$ and $FPR$ of the event-based filters. $TPR$ is the percentage of correct predictions for real events, and $FPR$ is the percentage of BA events predicted as real events. Fig.~\ref{fig:tfsnr} shows the results. The $FPR$ of all filters is low, which is good to see, especially for Bs2. These baseline filters rarely mistake the BA events defined by our criteria for real events, which suggests that our definition is consistent with the behavior of the baseline filters. The $TPR$ values show different distributions. Bs2 is the best, while Bs3 shows the lowest $TPR$, which explains the light outlines of objects in the figures of Section~\ref{sec:visual effect}. We found that Bs1 only passes about half of the real events. We suppose the reason is that the threshold for Bs1 is too low, so we adjust it to 1\,ms; Fig.~\ref{fig:bs1add} shows the result. After increasing the threshold, Bs1 also shows good $TPR$ performance.
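As a minimal sketch of this evaluation (the per-event labels and keep/discard decisions below are hypothetical inputs, not the paper's data structures), $TPR$ and $FPR$ for one frame can be computed as:

```python
# Sketch: given ground-truth labels (True = real event, False = BA event) and
# a filter's keep/discard decisions, compute TPR and FPR for one frame.

def tpr_fpr(labels, kept):
    """labels[i]: True if event i is real; kept[i]: True if the filter kept it."""
    real = sum(1 for l in labels if l)
    ba = len(labels) - real
    tp = sum(1 for l, k in zip(labels, kept) if l and k)        # real events kept
    fp = sum(1 for l, k in zip(labels, kept) if (not l) and k)  # BA events kept
    tpr = tp / real if real else 0.0
    fpr = fp / ba if ba else 0.0
    return tpr, fpr

# toy frame: 4 real events, 2 BA events; the filter keeps 3 real + 1 BA
labels = [True, True, True, True, False, False]
kept   = [True, True, True, False, True, False]
print(tpr_fpr(labels, kept))  # (0.75, 0.5)
```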
\begin{figure}
\centering
\subfigure[Bs1]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=0.9in]{./Bs1_2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs1_3.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs2]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=0.9in]{./Bs2_2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs2_3.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Bs3]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=0.9in]{./Bs3_2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs3_3.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[GF$_{1}$]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=0.9in]{./Ts1_2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Ts1_3.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\caption{Comparison of $TPR$ and $FPR$. One point in the figure represents a frame. The x-axis is the frame id. The y-axis is the ratio value, ranging from 0 to 1. The top line is $TPR$, and the bottom line is $FPR$.}
\label{fig:tfsnr}
\end{figure}
\begin{figure}
\centering
\subfigure[Thr=0.5ms]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=0.9in]{./Bs1_2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./Bs1_3.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[Thr=1ms]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=0.9in]{./2Bs1_2.png}\\
\vspace{0.02cm}
\includegraphics[width=0.9in]{./2Bs1_3.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\caption{Comparison of $TPR$ and $FPR$ for Bs1 with different thresholds. One point in the figure represents a frame. The x-axis is the frame id. The y-axis is the ratio value, ranging from 0 to 1. The top line is $TPR$, and the bottom line is $FPR$.}
\label{fig:bs1add}
\end{figure}
\subsection{Time Comparison}
Fig.~\ref{fig:timefiltertime} shows the time consumption of different filters.
We repeat the experiment several times using different numbers of events per frame, with a fixed total of 3 million events from an event stream. The tendency is consistent across runs: the Bs1 filter is about 2.5x more time-consuming than the GF$_{1}$ filter. This time reduction is achieved because GF$_{1}$ only needs to write once and compute once per event, whereas Bs1 needs to write 9 times to update the timestamps of the 8 neighbors and the pixel itself, and compute once, according to the process in \cite{Bs1filter}. We can see that GF$_{2}$ is similar to Bs2 in time cost. GF$_{2}$ is also similar to GF$_{1}$ in average time consumption because it likewise writes once and computes once for each incoming event.
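The write/compute counts can be illustrated with hypothetical minimal versions of the two filters; the function bodies below are simplified sketches of the bookkeeping, not the exact implementations of Bs1 or GF$_{1}$:

```python
# Sketch of why Bs1 does ~9 memory writes per event while GF_1 does one.
# The threshold value and dictionary-backed timestamp maps are illustrative.

def bs1_process(event, ts_map, thr):
    x, y, t = event
    recent = ts_map.get((x, y), -1e9) >= t - thr   # 1 compute, before overwriting
    writes = 0
    for dx in (-1, 0, 1):                          # 9 writes: 8 neighbors + self
        for dy in (-1, 0, 1):
            ts_map[(x + dx, y + dy)] = t
            writes += 1
    return recent, writes

def gf1_process(event, ts_map, thr):
    x, y, t = event
    recent = t - ts_map.get((x, y), -1e9) <= thr   # 1 compute
    ts_map[(x, y)] = t                             # 1 write
    return recent, 1

_, w_bs1 = bs1_process((5, 5, 100.0), {}, thr=0.5)
_, w_gf1 = gf1_process((5, 5, 100.0), {}, thr=0.5)
print(w_bs1, w_gf1)  # 9 1
```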
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{./celextime.png}
\caption{Runtime comparison of different filters. We use 3 million events in total and vary the number of events per frame. The x-axis is the number of events per frame. The y-axis is the total filtering time in seconds.}
\label{fig:timefiltertime}
\end{figure}
\subsection{Discussion}
One interesting behavior is demonstrated by the Bs3 filter. For the Roshambo dataset, where each event has its own timestamp, Bs3 still works, but it filters out a large amount of events, leaving a relatively light outline of the object. However, for the CeleX dataset, where up to a whole row of events shares the same timestamp, Bs3 still works for the pixels on the left, but for pixels toward the right side of the output it has almost no filtering effect. This is especially clear in Fig.~\ref{fig:roshambo2}.
We also experiment with different subsampling windows, as shown in Fig.~\ref{fig:tfgroup1}. The Time Filter performs better than Bs2 across windows, especially in slow-movement scenarios.
\begin{figure}
\centering
\subfigure[fast-s2]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=1.2in]{./Bs2_12.png}\\
\vspace{0.02cm}
\includegraphics[width=1.2in]{./TF2_12.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[fast-s4]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=1.2in]{./Bs2_12_s4.png}\\
\vspace{0.02cm}
\includegraphics[width=1.2in]{./TF2_12_s4.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[slow-s2]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=1.2in]{./Bs2_75.png}\\
\vspace{0.02cm}
\includegraphics[width=1.2in]{./TF2_75.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\subfigure[slow-s4]{
\begin{minipage}[t]{0.24\linewidth}
\centering
\includegraphics[width=1.2in]{./Bs2_75_s4.png}\\
\vspace{0.02cm}
\includegraphics[width=1.2in]{./TF2_75_s4.png}\\
\vspace{0.02cm}
\end{minipage}%
}%
\caption{Performance comparison between Bs2 and the Time Filter with $s = 2$ and $s = 4$ on hand frames. $s2$ means $s = 2$. Fast means the hand is moving fast; slow means it is moving slowly. The subsampling window $s$ is the same for Bs2 and the Time Filter. The top row shows Bs2; the bottom row shows the Time Filter.}
\vspace{-0.2cm}
\label{fig:tfgroup1}
\end{figure}
\section{Summary and Conclusions}
Neuromorphic event-based sensors, especially dynamic vision sensors, have witnessed rapid development in the past few decades. These sensors allow much faster sampling rates and a higher dynamic range than frame-based imagers. However, they are sensitive to background activity events, which waste communication and computing resources. Moreover, improved noise filtering will enhance performance in many applications.
We propose new criteria with little computational overhead for distinguishing real events from BA events, based on the space and time information of the event stream. We utilize global rather than local information through Gaussian convolution.
The experimental results show that the proposed criteria achieve good denoising performance and run very fast.
\bibliographystyle{unsrt}
\section{Conditional Computation for Deep Nets}
Deep learning is about learning hierarchically-organized representations,
with higher levels corresponding to more abstract concepts automatically
learned from data, either in a supervised, unsupervised, semi-supervised
way, or via reinforcement learning~\citep{Deepmind-atari-arxiv2014}.
See~\citet{Bengio-Courville-Vincent-TPAMI2013} for a recent review. There
have been a number of breakthroughs in the application of deep learning,
e.g., in speech~\citep{Hinton-et-al-2012} and computer
vision~\citep{Krizhevsky-2012-small}. Most of these involve deep neural
networks that have as much capacity (the number of units and parameters)
as possible, given the constraints on training and test time that made
these experiments reasonably feasible.
It has recently been reported that bigger models could yield better
generalization on a number of
datasets~\citep{Coates2011-shorter,Hinton-et-al-arxiv2012,Krizhevsky-2012-small,Goodfellow+al-ICML2013-small}
provided appropriate regularization such as dropout~\citep{Hinton-et-al-arxiv2012} is used.
These experiments, however, have generally been limited by training time,
which constrains the amount of training data that could be exploited.
An important factor in these recent breakthroughs
has been the availability of GPUs
which have allowed training deep nets at least 10 times faster, often
more~\citep{RainaICML09}. However, whereas the task of recognizing handwritten
digits, traffic signs~\citep{Ciresan-et-al-2012} or faces~\citep{Taigman-et-al-CVPR2014}
is solved to the point of
achieving roughly human-level performance, this is far from true for other
tasks such as general object recognition, scene understanding, speech
recognition, or natural language understanding, even with GPUs.
If we had 100 or 1000 times more computing power that could be harnessed for
training, then we could train correspondingly larger models on correspondingly
larger datasets, covering more categories, modalities and concepts.
This is important, considering that current neural network models are
still small in size (especially if we count the number of artificial
neurons) compared to biological brains, not even reaching the size of
animals such as frogs, and several orders of magnitude less than that of mammals or humans.
In this sense, we expect that much larger models are needed
to build computers that truly master the visual world, or the
world of ideas expressed in language, i.e., to make sense of the world
around us at a level comparable to a child.
Moore's law has practically saturated if one considers only the computing power
of a single computing core. Most of the continued growth in computing power
comes from parallelization. Unfortunately, despite the impressive progress in
recent years~\citep{QuocLe-ICML2012-small,Dean-et-al-NIPS2012}, exploiting
large computer clusters to efficiently parallelize the training procedures for
deep neural networks remains a challenge. Furthermore, in addition to faster
training, in some applications we also want faster inference at test time.
Thus, the question we need to ask is: besides
distributed training, are there other ways
to build deep neural networks of much higher capacity
without waiting a decade for hardware to evolve to the required level?
\citet{Bengio-tricks-chapter-2013,bengio2013estimating} have proposed
the notion of {\bf conditional computation} for deep learning to answer this
question positively. The idea is to activate only a small fraction of the
parameters of the model for any particular example, and correspondingly
reduce the amount of computation to be performed.
Currently, the ratio of the number of parameters to the amount of computation is
essentially one in deep nets, i.e., every parameter is touched (usually with a
single multiply-add) for each example. In contrast, there are machine learning
models, such as decision trees~\citep{Breiman84}, with a much more favorable
ratio: with $N$ computations, a decision tree can actively select $O(N)$
parameters out of a pool of up to $O(2^N)$. Unfortunately decision trees suffer
from poor statistical properties that prevent them, like many other
non-parametric techniques relying only on the smoothness prior, from
generalizing in a non-trivial way to regions of input space far from training
examples.
See~\citet{cucker+grigoriev99,Bengio-decision-trees10} for a mathematical
analysis of the case of decision trees and~\citet{Bengio-2009-book} for a
longer analysis covering a wider class of learning algorithms, such as
Gaussian kernel SVMs and graph-based non-parametric statistical models.
On the other hand, there are both theoretical and empirical indications
suggesting that deep distributed
representations~\citep{Bengio-2009-book,Pascanu+et+al-ICLR2014b} can benefit
from advantageous statistical properties, when the data has been generated by
multiple factors organized hierarchically, with the characteristics of each
factor being learnable without requiring to see all the configurations of the
other factors.
Conditional computation for deep learning, as pursued in this paper, aims at
combining the statistical efficiency of deep learning with the computational
efficiency, in terms of the ratio of capacity to computation, of algorithms
such as decision trees.
With this objective in mind, we propose here a novel way to
parametrize deep neural networks (supervised or unsupervised, discriminative
or generative) that allows up to exponential increase in the ratio of
number of parameters to computation. In other words, we allow exponentially many
parameters with respect to the amount of computation.
We achieve this by observing that
one can exploit bit patterns associated with hidden units
in order to selectively activate different weight vectors or
weight matrices. Since the number of such bit patterns can
grow exponentially in the number of bits considered, this gives
us the required rate of growth, controllable by the maximum
size of these bit patterns.
\section{Exponentially Rich Parametrization of a Weight Matrix}
Here we consider a single layer consisting of $p$-dimensional input vector $\vx$
and $q$-dimensional output vector $\vh$. In a conventional approach, the layer
is parametrized with a weight matrix $\mW \in \RR^{p \times q}$ and a bias
vector $\vb \in \RR^q$, and computes
\begin{align*}
\vh = \phi \left( \mW^\top \vx + \vb \right),
\end{align*}
where $\phi$ is an element-wise nonlinear function. In this case, the number of
parameters of a single layer is $O(pq)$, and very often $q=O(p)$ so the number
of parameters is $O(p^2)$.
In this note, we propose another way to parametrize a layer of a neural network,
where the number of free parameters is $O(2^k p^2)$, with $k$ a hyper-parameter
that controls the trade-off between capacity and computation.
Similarly to the conventional approach, a single layer consists of $\vx \in
\RR^p$ and $\vh \in \RR^q$. However, now the weight matrix is not anymore
independent of the input variable $\vx$, but is parametrized using $\vx$. The
basic idea is that $k$ bits will be derived from $\vx$ from which $O(2^k)$
weight matrices will be defined and used to define the actual weight matrix
mapping $\vx$ to $\vh$.
Let us first define a binary indicator vector $\vg \in \RR^k$ as a function
of the input $\vx$: $\vg = g(\vx)$. The gating function $g$ may
be chosen freely as long as it provides \textit{hard decisions}.
One possibility is
\[
\vg = \left( \vx > \tau\right)_{1,\dots,k},
\]
where $\tau$ is a predefined scalar threshold. It is also
possible to make a stochastic decision such that each $g_i$ is
sampled from a Bernoulli distribution whose mean is the $i$-th element of
$\sigma(\mU^\top \vx)$, where $\mU \in \RR^{p \times k}$.
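The two gating choices above can be sketched as follows; the input values and matrix $\mU$ are illustrative:

```python
# Sketch of the two gating functions: the threshold gate compares the first k
# inputs to tau; the stochastic gate samples g_i ~ Bernoulli(sigmoid((U^T x)_i)).
import math, random

def threshold_gate(x, k, tau):
    return [1 if x[i] > tau else 0 for i in range(k)]

def stochastic_gate(x, U, rng):
    # U: p x k matrix as a list of rows
    p, k = len(U), len(U[0])
    g = []
    for i in range(k):
        a = sum(U[j][i] * x[j] for j in range(p))          # (U^T x)_i
        g.append(1 if rng.random() < 1.0 / (1.0 + math.exp(-a)) else 0)
    return g

x = [0.9, -0.2, 1.5, 0.1]
print(threshold_gate(x, k=3, tau=0.5))  # [1, 0, 1]
```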
Using the binary indicators we obtain each column of the weight
matrix $\vw_j$ ($j=1,\dots,p$) as a function of $\vg$ and $j$,
using up to $k$ bits of $\vg$ (possibly chosen according to $j$) to
obtain $\vw_j$:
\begin{align*}
\vw_j = F_j(S_j(\vg)),
\end{align*}
where $S_j$ is a subset of up to $k$ elements of $\vg$, and $F_j$
maps this binary $k$-dimensional vector to an $\RR^q$ vector of
output weights for unit $j$. For example, $F_j$ may simply be a
look-up in a table indexed by $j$ and $S_j(\vg)$, and $S_j(\vg)$ may
simply be the first $k$ bits of $\vg$, or the set of $k$
consecutive bits of $\vg$ indexed from $\floor{j/k}$ to
$\floor{j/k}+k-1$.
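A sketch of this parametrization, assuming the consecutive-bits choice of $S_j$ (with a wrap-around of our own choosing to keep indices in range) and a dictionary-backed table for $F_j$:

```python
# Sketch: S_j takes k consecutive bits of g starting near floor(j/k), and F_j
# indexes a per-unit table with that bit pattern, returning a q-dimensional
# weight vector. Unseen patterns default to zero weights; in a real model the
# table entries would be learned parameters.

def S_j(g, j, k):
    start = (j // k) % max(1, len(g) - k + 1)   # wrap to stay in range (our choice)
    return tuple(g[start:start + k])

def F_j(table, j, bits, q):
    # one table look-up per (unit, bit pattern)
    return table.setdefault((j, bits), [0.0] * q)

g = [1, 0, 1, 1, 0]
table = {(0, (1, 0)): [0.3, -0.1]}
bits = S_j(g, j=0, k=2)          # first 2 bits of g -> (1, 0)
print(F_j(table, 0, bits, q=2))  # [0.3, -0.1]
```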
One can view the above as a generalization of three-way
connections found in some
models~\citep{Memisevic+Hinton-2010,Sutskever-et-al-ICML2011} to
$k+2$-way interactions (between the $k$ gating bits, the input
$\vx_j$ and each output $\vh_i$). For example,
\citet{Sutskever-et-al-ICML2011} select a different recurrent
weight matrix $\mW_{s_t}$ in a recurrent neural network to go
from the current state $\vh_t$ to the next state $\vh_{t+1}$
depending on the (integer) input $s_t$.
The parametrization proposed here enables the association of up
to $O(2^k)$ weight vectors with unit $j$, triggered by the
particular values of the selected $k$ bits of $\vg$. The number
of parameters is therefore $O(2^k p q)$. The required computation
depends on $F_j$, but can be as low as the cost of a table
look-up, followed by the actual computation for the matrix
multiplication, i.e., $O(pq)$.
In the next section, we describe one particular strategy of
implementing $F_j$ that aims to improve the generalization.
\section{Regularized Tree-Structured Prefix Sum of Weights}
One potential issue with the proposed scheme is that a model may
easily overfit to training samples because only a fraction of
samples are used to activate/update each of the $2^k$ possible
weight vectors.
Besides the obvious regularization of choosing small $k$,
we propose here an additional device
that is inspired by the impressive success of smoothed or
interpolated n-grams and back-off models in statistical language
modeling~\citep{Katz87,Jelinek80}.
The basic idea is to maintain a set of weight vectors that are
indexed by bit sequences of different lengths. Those vectors
associated with shorter bit sequences will be updated with more
examples, therefore not requiring much regularization.
Other weight vectors
indexed by the longer bit sequences will see few examples and be used only
to make small corrections, as needed by the data.
As for the regularization, we simply add the norms of the weight
vectors.
Regularization, either L1 or L2 weight decay, will
automatically penalize more those that are less often activated,
since only when a weight vector is activated does it receive a
gradient that may counterbalance the regularizer's pull towards
0.
We examine here one way to achieve this, based on a binary tree
structure where each node corresponds to a prefix of the $k$ bits
$\vb=S_j(\vg)$. We thereby define $F_j(\vb)$ as follows:
\begin{align*}
F_j(\vb) = \sum_{l=0}^k T(j,\vb_{1\ldots l})
\end{align*}
where $\vb_{1\ldots l}=(b_1, \ldots b_l)$ is the prefix of the
$l$ first bits of $\vb$, and $T(j,\vb_{1\ldots l})$ is a table
look-up returning an $\RR^q$ weight vector associated with unit
$j$ and bit pattern $\vb_{1\ldots l}$. With the empty sequence,
$T(j,())$ returns the default weight vector for unit $j$.
It can be understood more intuitively by imagining a binary tree
of depth $k+1$, where each node has a weight matrix. The
above procedure traverses the tree from its root to one of the
leaves using the bit sequence $\vb$ and sums over the $j$-th
columns of the nodes' weight matrices to get the weight vector
$F_j(\vb)$.
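The traversal can be sketched as follows, with the table $T$ stored as a dictionary keyed by $(j, \text{prefix})$ and missing nodes contributing zero (an implementation choice of ours):

```python
# Sketch of the tree-structured prefix sum: F_j(b) sums one q-vector per
# prefix of b, from the empty prefix (the default weights) to the full
# k-bit pattern.

def F_j_tree(T, j, b, q):
    w = [0.0] * q
    for l in range(len(b) + 1):              # prefixes of length 0..k
        node = T.get((j, tuple(b[:l])))
        if node is not None:
            w = [wi + ni for wi, ni in zip(w, node)]
    return w

T = {
    (0, ()):     [1.0, 1.0],   # root: default weights, seen by all examples
    (0, (1,)):   [0.5, 0.0],   # correction for bit sequences starting with 1
    (0, (1, 0)): [0.0, -0.2],  # finer correction for prefix 10
}
print(F_j_tree(T, j=0, b=[1, 0], q=2))  # [1.5, 0.8]
```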
The computation of $F_j(\vb)$ involves $O(k q)$ additions per unit
instead of being a small constant (a single table look-up), or
$O(k p q)$ in total. This is a noticeable but at the same time
reasonable overhead over the $O(p q)$ multiply-adds that will be
required for the actual matrix multiplication.
In this case, the number of weight vectors associated with a unit
$j$ is
\begin{align*}
\left| \theta \right| = 1+\sum_{l=1}^k 2^l = 2^{k+1}-1
\end{align*}
and the total number of parameters in the layer is $p q
(2^{k+1}-1)$. However, only $2^k$ of these are actually
independent parameters while the others serve to help
regularization. This is in contrast to the conventional case of
$O(p q)$.
Overall the degrees of freedom to computation ratio has thus
increased by $\frac{2^k}{k}$, a rapidly growing function of $k$.
As the number of parameters is much larger in the proposed
scheme, it is more efficient to implement the weight decay
regularization such that only the selected weight vectors at each
update are regularized. However, in this case,
we must keep track of the
interval $\Delta t = t - t'$
since each weight vector was last updated, where $t$ and $t'$ are
the current update step and the last time the weight vector was
updated. Next time the weight
vector $\vw_j$ is chosen, we treat the weight vector
specially to compensate for the lost $\Delta t$
steps of regularization.
For L2 weight decay regularization
with coefficient $\lambda$ and learning rate
$\epsilon$, this simply corresponds to pre-multiplying the weight
vector by $(1-\epsilon \lambda)^{\Delta t}$:
\[
\vw_j \leftarrow \vw_j (1-\epsilon \lambda)^{\Delta t}.
\]
This is performed before the new update is applied to
$\vw_j$. For L1 regularization,
this can be done by moving $\vw_j$ towards 0 by $\epsilon \lambda
\Delta t$ but not crossing 0:
\[
\vw_j \leftarrow {\rm sign}(\vw_j) \max(0, |\vw_j| - \epsilon \lambda \Delta t)
\]
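The lazy catch-up for both penalties can be sketched as follows; the bookkeeping of last-update steps is assumed to live outside these helpers:

```python
# Sketch of the lazy weight-decay bookkeeping: each weight vector stores the
# step at which it was last touched; when it is next selected, the skipped
# Delta-t steps of decay are applied in one shot before the new update.

def catch_up_l2(w, last, t, eps, lam):
    dt = t - last
    f = (1 - eps * lam) ** dt          # pre-multiply by (1 - eps*lam)^dt
    return [wi * f for wi in w]

def catch_up_l1(w, last, t, eps, lam):
    dt = t - last
    shrink = eps * lam * dt
    # move toward 0 by eps*lam*dt without crossing 0
    return [max(0.0, abs(wi) - shrink) * (1 if wi >= 0 else -1) for wi in w]

w = [2.0, -0.003]
print(catch_up_l2(w, last=90, t=100, eps=0.1, lam=0.1))
print(catch_up_l1(w, last=90, t=100, eps=0.01, lam=0.1))  # second entry clips to 0
```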
\section{Credit Assignment for Gating Decisions}
One issue raised earlier by~\citet{bengio2013estimating} is the
question of training signal
for gating decisions, i.e., the credit assignment for the gating
decisions. What is the correct way to update parameters
associated with the gating units in order
to improve the gating decisions?
One interesting hypothesis is that it may be sufficient to back-prop
as usual into the network by ignoring the effect of the gating
units $\vg$ on the choice of the weight vectors $\mW$.
Although the gating decisions themselves are not adapted toward
minimizing the training loss in this case, the weight vectors are
nonetheless updated according to the training objective.
In other words, as long as the gating units perform a reasonable
job of partitioning the input space, it might be good enough to
adapt the exponentially many parameters stored in the table $T$.
To test that hypothesis, it would be good to evaluate alternative
approaches
that provide a training signal to the gating units.
Here are some alternatives:
\begin{enumerate}
\item Following~\citet{bengio2013estimating}
and~\citet{Mnih+Gregor-ICML2014}, estimate a gradient using a
variance-reduced variant of REINFORCE, i.e., by reinforcement
learning.
\item
Following~\citet{bengio2013estimating},~\citet{Gregor-et-al-ICML2014}
and~\citet{Raiko2014}, estimate a gradient using a heuristic
that propagates the gradient on $\vg$ (obtained by back-prop
of the loss) backwards into the pre-threshold values $\vx$.
\item In the spirit of the noisy rectifier approach by
\citet{bengio2013estimating}, compute $F_j$ as a weighted
sum, where the gating units' activation level modulate the selected
weight vector's magnitude:
\[
F_j(b,\vx) = \sum_{l=1}^k T(j,\vb_{1\ldots l}) \left(\prod_{i=1}^l (1-\tanh(x_{\pi_i}))\right)^{1/l}
\]
where $\pi_i$ is the index of bit $b_i$ in the input vector
$\vx$, and $\vx$ is assumed to be the output of a rectifier,
i.e., non-negative. Hence, when a unit is too active, it tends to
turn off the weight contributions that it controls (which creates
a preference for sparse $\vx$). The outside power normalizes for
length of the controlling bit sequence.
\end{enumerate}
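The third alternative can be sketched as follows; the table, bit pattern, and index map $\pi$ are illustrative:

```python
# Sketch of alternative 3: the prefix weights are modulated by the (non-negative,
# rectified) activations that produced the gating bits, so a very active unit
# damps the weight contributions it controls. pi maps bit index to input index.
import math

def F_j_soft(T, j, b, x, pi, q):
    w = [0.0] * q
    for l in range(1, len(b) + 1):
        node = T.get((j, tuple(b[:l])), [0.0] * q)
        prod = 1.0
        for i in range(l):
            prod *= (1.0 - math.tanh(x[pi[i]]))
        scale = prod ** (1.0 / l)            # normalize for prefix length
        w = [wi + scale * ni for wi, ni in zip(w, node)]
    return w

T = {(0, (1,)): [1.0, 2.0]}
x = [0.0, 3.0]                               # x[0] inactive -> full contribution
print(F_j_soft(T, j=0, b=[1], x=x, pi=[0], q=2))  # [1.0, 2.0]
```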
\section{Conclusion}
One of the greatest challenges to expand the scope of
applicability and the performance of deep neural networks is our
ability to increase their capacity without increasing the
required computations too much. The approach proposed in this
paper has the potential to achieve up to exponential increases in
this ratio, in a controllable way.
Future work is clearly required to validate this proposal
experimentally on large enough datasets for which the increased
capacity would actually be valuable, such as speech or language
datasets with on the order of a billion examples.
\section{Introduction}
The share of wind in the energy mix, and the number of wind energy installations, continue to grow and will be a key component of a significantly greener grid in the decades to come \cite{irena2021wind}. Moreover, according to the U.S. Department of Energy (DOE), \cite{doe2021wind}, wind energy is predicted to surpass other sources of renewable power generation within this decade. Wind energy sources, however, suffer from inherent variability and limited predictability and have led to a significant increase in forecasting errors for some Independent System Operators (ISOs), introducing additional challenges for the market-clearing algorithms, which are the backbone of grid operations. As a result, there is a strong need for novel forecasting methods that can \emph{quantify} and \emph{reduce} the uncertainty of wind power predictions and enable grid operators to act effectively \cite{pinson2013wind}.
One possible way to manage the increased uncertainty of wind energy sources is to reduce the prediction resolution by clustering wind farms according to their proximity or estimating average wind conditions over a more extended period. Such spatial or temporal aggregations of wind data can not only lower the uncertainty and make the wind energy more predictable but also significantly reduce the scale of the problem, simplify the training of the forecasting algorithms, and make the downstream market-clearing algorithms more tractable. However, over-simplifications of the spatio-temporal relationships between wind farms may result in a significant loss of information related to wind direction and speed. In particular, recent upstream wind conditions play a pivotal role in predicting future downstream wind power generation. As shown in Table~\ref{tab:illustration}, the tradeoff between wind predictions in high and low data resolutions crave more powerful models that may consider complex spatio-temporal dependencies between clusters while leveraging results from predictions at different data resolutions.
\begin{table}[]
\centering
\caption{
Comparison of wind speed predictions at different data resolutions. Predictions in some resolution may demonstrate better performance than
others under certain circumstances. }
\label{tab:illustration}
\resizebox{\linewidth}{!}{%
\begin{tabular}{ccccc}
\toprule[1pt]\midrule[0.3pt]
& \begin{tabular}[c]{@{}c@{}}High space\\ resolution\end{tabular} & \begin{tabular}[c]{@{}c@{}}Low space\\ resolution\end{tabular} & \begin{tabular}[c]{@{}c@{}}High time\\ resolution\end{tabular} & \begin{tabular}[c]{@{}c@{}}Low time\\ resolution\end{tabular} \\ \hline
\begin{tabular}[c]{@{}c@{}}Data Granularity\end{tabular} & High & Low & High & Low \\
\begin{tabular}[c]{@{}c@{}}Computational Efficiency\end{tabular} & Low & High & Low & High \\
\begin{tabular}[c]{@{}c@{}}Predictive Uncertainty\end{tabular} & High & Low & Low & High \\
\begin{tabular}[c]{@{}c@{}}Predictive Accuracy\end{tabular} & Low & High & High & Low \\
\midrule[0.3pt]\bottomrule[1pt]
\end{tabular}%
}
\vspace{-0.3cm}
\end{table}
This paper first presents a spatio-temporal delayed regressive model for wind speed prediction at a specified data resolution; the wind speed can then be transformed into power generation by a non-linear mapping \cite{international2005wind, ezzat2019spatio}. The wind direction at each cluster is encoded by a directed dynamic (time-varying) graph, where a directed edge gives the direction of wind blowing from one cluster to another. To account for the physical propagation delay as wind travels from upstream to downstream, the model includes a carefully crafted kernel function that captures the delayed triggering effects between clusters. The model can be efficiently estimated by minimizing an adaptive mean square prediction error.
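An illustrative sketch of a delayed regression of this flavor (not the paper's exact formulation: the scalar coefficients, the single shared delay matrix, and the 0/1 adjacency encoding are our simplifying assumptions):

```python
# Sketch: predict cluster i's wind speed at step t from its own lag and the
# delayed speeds of upstream neighbors. A[t][j][i] = 1 when the dynamic graph
# says cluster j is upstream of i at step t; delay[j][i] is the (integer)
# travel delay from j to i, growing with inter-cluster distance.

def predict_speed(speeds, A, i, t, alpha, beta, delay):
    """speeds[s][j]: speed of cluster j at step s."""
    pred = alpha * speeds[t - 1][i]                   # own recent history
    for j in range(len(A[t])):
        d = delay[j][i]
        if j != i and A[t][j][i] and t - d >= 0:
            pred += beta * speeds[t - d][j]           # delayed upstream effect
    return pred

speeds = [[8.0, 5.0], [7.0, 5.5], [6.5, 6.0]]         # 3 steps, 2 clusters
A = [[[0, 1], [0, 0]]] * 3                            # cluster 0 upstream of 1
delay = [[0, 2], [0, 0]]                              # 2-step travel delay 0 -> 1
print(predict_speed(speeds, A, i=1, t=2, alpha=0.6, beta=0.3, delay=delay))
```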
The paper then develops a multi-resolution model for data at different time and spatial scales and captures the correlated multi-resolution predictions using a Gaussian process. We consider a separable kernel for the Gaussian process to capture the covariance of prediction errors at different spatio-temporal coordinates and different data resolutions. To tackle the computational challenge of the Gaussian process with a large-scale data set, we leverage sparsity in the model, which enables efficient model fitting via a variational learning strategy. Numerical results show the good predictive performance of the proposed model. Moreover, we demonstrate that the multi-resolution model can significantly reduce forecasting errors while providing reasonable uncertainty quantification.
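A sketch of a separable space--time--resolution kernel of the kind described, with squared-exponential components as an illustrative choice (the length-scales and component forms are our assumptions, not the paper's fitted kernel):

```python
# Sketch: the covariance between two prediction errors factorizes into space,
# time, and resolution components, k((s,t,r),(s',t',r')) = k_s * k_t * k_r.
import math

def sq_exp(d, ell):
    return math.exp(-(d * d) / (2.0 * ell * ell))

def separable_kernel(p1, p2, ell_s=1.0, ell_t=1.0, ell_r=1.0):
    (x1, y1), t1, r1 = p1                      # (location, time, resolution index)
    (x2, y2), t2, r2 = p2
    ds = math.hypot(x1 - x2, y1 - y2)          # spatial distance
    return sq_exp(ds, ell_s) * sq_exp(t1 - t2, ell_t) * sq_exp(r1 - r2, ell_r)

p = ((0.0, 0.0), 0.0, 1)
print(separable_kernel(p, p))                            # identical points -> 1.0
print(separable_kernel(p, ((3.0, 4.0), 1.0, 2)) < 1.0)   # decays with distance
```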
The remainder of this paper is organized as follows. After discussing related work, Section~\ref{sec:data} describes the data sets and preliminary analysis. Section~\ref{sec:regression} presents the spatio-temporal delayed regressive model. Section~\ref{sec:kriging} introduces the multi-resolution spatio-temporal Kriging model and its variational learning strategy. Lastly, Section~\ref{sec:result} presents the numerical results.
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{imgs/rawdata/wind-plot-t0.pdf}
\caption{12:00 AM, Sept 21, 2020}
\end{subfigure}
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{imgs/rawdata/wind-plot-t48.pdf}
\caption{12:00 PM, Sept 21, 2020}
\end{subfigure}
\caption{Examples of the real wind data: 506 wind farms in the Midwestern United States, where the red dots indicate the location of these farms. The blue lines indicate the wind directions and the line widths represent the wind speeds. Coordinates have been shifted to maintain confidentiality.}
\label{fig:raw-data}
\vspace{-0.3cm}
\end{figure}
\vspace{.1in}
\noindent\emph{Related work.} There has been an extensive research effort devoted to wind prediction (e.g., \cite{LEI2009915, soman2010review, giebel2011state}).
Early attempts \cite{potter2006very, lange2008new, candy2009comparison} resort to physical models, relying on parameterizations based on a detailed physical description of the atmosphere. For instance, Numeric Weather Prediction (NWP) usually runs 1 or 2 times per day due to the difficulty and cost of acquiring real-time information, which limits its usefulness to medium- to long-term forecasts ($> 6$ hours ahead) \cite{LEI2009915, soman2010review}. Kosovic et al.~\cite{Kosovic2020} mention that ``the best strategies for nowcasts (0- to about 3-hour ahead) rely on observations near the wind farm''.
Many statistical methods \cite{brown1984time, kusiak2010estimation, erdem2011arma, he2014spatio, dowell2015very, jiang2015time, pourhabib2016short} predict the wind speed or power using past observations and time series models, e.g., an autoregressive model (AR). These models are simple and provide timely and reasonably accurate predictions. Recent work has focused on machine-learning models for wind prediction \cite{mohandes2004support, sideratos2012probabilistic}.
In particular, recurrent neural networks have been widely adopted to model wind time series and make predictions sequentially \cite{yao2018multidimensional, liu2019wind, yu2019lstm, liu2020novel}.
However, most of the above methods fail to provide uncertainty quantification for their predictions, and they either do not consider spatio-temporal correlations between observations or assume that spatial correlations are time-invariant.
A few studies investigated the uncertainty quantification of wind power forecasts \cite{quan2019survey, khosravi2014optimized, ak2015interval}.
However, most of these approaches assume that the distribution of wind speed/power is in a parametric form and do not consider correlations between different data resolutions. The multi-resolution framework proposed in this paper is related to multi-fidelity models \cite{kennedy2000predicting}, which provide predictive confidence intervals by capturing the correlation between different fidelity levels through a Kriging model. The main difference is that this paper jointly models correlations across time, space, and data resolution.
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{imgs/multires/merged_locations_k20.pdf}
\caption{$\kappa=20$}
\end{subfigure}
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{imgs/multires/merged_locations_k40.pdf}
\caption{$\kappa=40$}
\end{subfigure}
\caption{Spatial map of the wind farm groups (stars). Farms (dots) are grouped according to their proximity using the $k$-means algorithm. The farms in the same group have the same color.}
\label{fig:map-diff-spatial-resolution}
\vspace{-0.3cm}
\end{figure}
\section{Data overview and preliminary analysis}
\label{sec:data}
The data in this study include 506 wind farms operated or planned by the Midcontinent Independent System Operator (MISO), which delivers safe, cost-effective electric power to 42 million customers across 15 U.S. states and the Canadian province of Manitoba. MISO's grid includes 71,800 miles of transmission lines and a generation capacity of 177,760 MW for a peak summer system demand of 127,125 MW. We collected eight days of location-specific quarter-hourly \emph{wind speed} and \emph{wind direction} values, starting from September 2020, for each of these wind farms. The wind speed is reported in meters per second (m/s), and the wind direction is reported in cardinal (or compass) directions (in degrees) from which it originates. Fig.~\ref{fig:raw-data} presents the spatial map of the wind farms and the wind data at two specific times.
The wind data were prepared at different time and space resolutions. Denote by $K=506$ the total number of wind farms in the region of interest and by $T=750$ the total number of time units (15 minutes per unit) in the time horizon. Let $\kappa \in \{1,2,\dots,K\}$ be the space resolution (the number of clusters) and $\eta \in \{1,2,\dots,T\}$ be the time resolution (the number of time units). For the space domain, wind farms can be partitioned into $\kappa$ clusters using the $k$-means algorithm based on their Euclidean distances, with each wind farm belonging to the cluster with the nearest centroid (Fig.~\ref{fig:map-diff-spatial-resolution}).
Each cluster is indexed by $i$ with its centroid's latitude and longitude $s_i^\kappa \in \mathscr{S} \subset \mathbb{R}^2$, where $\mathscr{S}$ represents the space of the geographic coordinate system (GCS). For the time domain, the time horizon can be evenly divided into $T/\eta$ frames, indexed by $t$ (Fig.~\ref{fig:linechart-diff-time-resolution}). Given a data resolution $(\kappa, \eta)$, the wind speed and direction are averaged over each of the $\kappa$ clusters and every $\eta$ time units.
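As a concrete illustration, the clustering-and-averaging step can be sketched in a few lines of NumPy. The plain Lloyd's $k$-means below and the toy farm coordinates are stand-ins for the actual data pipeline, not the authors' implementation:

```python
import numpy as np

def kmeans(locs, kappa, n_iter=50, seed=0):
    """Plain Lloyd's k-means on farm locations; returns (labels, centroids)."""
    rng = np.random.default_rng(seed)
    centroids = locs[rng.choice(len(locs), kappa, replace=False)]
    for _ in range(n_iter):
        d = np.linalg.norm(locs[:, None, :] - centroids[None, :, :], axis=-1)
        labels = d.argmin(axis=1)
        for c in range(kappa):
            if np.any(labels == c):
                centroids[c] = locs[labels == c].mean(axis=0)
    return labels, centroids

def aggregate(speeds, labels, kappa, eta):
    """Average a (K, T) farm-level speed array into a (kappa, T // eta) array."""
    K, T = speeds.shape
    T_out = T // eta
    out = np.zeros((kappa, T_out))
    for c in range(kappa):
        idx = labels == c
        if not idx.any():
            continue                                  # guard against an empty cluster
        block = speeds[idx, : T_out * eta]            # farms in cluster c
        out[c] = block.reshape(idx.sum(), T_out, eta).mean(axis=(0, 2))
    return out

# toy example: 12 farms, 8 time units, kappa = 3 clusters, eta = 2
rng = np.random.default_rng(1)
locs = rng.uniform(size=(12, 2))
speeds = rng.uniform(3, 12, size=(12, 8))
labels, cents = kmeans(locs, kappa=3)
y = aggregate(speeds, labels, kappa=3, eta=2)         # shape (3, 4)
```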
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{.49\linewidth}
\includegraphics[width=\linewidth]{imgs/multires/speed-time-res-average.pdf}
\vfill
\includegraphics[width=\linewidth]{imgs/multires/speed-time-res-Turbine1.pdf}
\vfill
\caption{Speed}
\end{subfigure}
\begin{subfigure}[h]{.49\linewidth}
\includegraphics[width=\linewidth]{imgs/multires/direction-time-res-average.pdf}
\vfill
\includegraphics[width=\linewidth]{imgs/multires/direction-time-res-Turbine1.pdf}
\vfill
\caption{Direction}
\end{subfigure}
\caption{Examples of (a) wind speed and (b) wind direction in three time resolutions: $\eta=4$ (1-hour), $\eta=12$ (3-hour), and $\eta=20$ (5-hour).}
\label{fig:linechart-diff-time-resolution}
\vspace{-0.3cm}
\end{figure}
A preliminary analysis suggests that the wind data exhibit clear dependencies across time and space. This is illustrated in Fig.~\ref{fig:mae-vs-dist}.
Fig.~\ref{fig:linechart-diff-time-resolution} also highlights that the wind speed changes more frequently than the wind direction, suggesting that the variation of wind speed in this study is highly dynamic.
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{imgs/rawdata/dist_vs_speed_mae.pdf}
\caption{Speed MAD vs distance}
\end{subfigure}
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{imgs/rawdata/dist_vs_direction_mae.pdf}
\caption{Direction MAD vs distance}
\end{subfigure}
\vfill
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{imgs/rawdata/timediff_vs_speed_mae.pdf}
\caption{Speed MAD vs time diff}
\end{subfigure}
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{imgs/rawdata/timediff_vs_direction_mae.pdf}
\caption{Direction MAD vs time diff}
\end{subfigure}
\caption{Data similarity versus distance and time differences.
The top panels show the mean absolute difference of (a) the wind speed (m/s) and (b) the wind direction (degree) between two arbitrary locations versus their distance, respectively. The bottom panels show the mean absolute difference of (c) the wind speed (m/s) and (d) the wind direction (degree) of the same location between two time points. The color depth indicates the density of these points; the darker the denser.}
\label{fig:mae-vs-dist}
\vspace{-0.3cm}
\end{figure}
\section{Spatio-temporal delayed regressive model}
\label{sec:regression}
Consider the average wind speed and wind direction of a set of clusters $\mathscr{V}^\kappa = \{i: 1 \le i \le \kappa\}$ at time points $\mathscr{T}^\eta = \{t: 1 \le t \le T/\eta \}$ under data resolution $(\kappa, \eta)$. The wind speed for cluster $i$ at time $t$ is assumed to be correlated with the historical wind speeds at the upstream wind farms before time $t$. The cluster-to-cluster wind directions can be specified by a directed \emph{dynamic} graph $\mathscr{G}^{\kappa, \eta} = (\mathscr{V}^\kappa, \{\mathscr{E}^\kappa_t\}_{t\in\mathscr{T}^\eta})$, where $\mathscr{V}^\kappa$ represents the clusters,
$\mathscr{E}^\kappa_t \subseteq \mathscr{E}^\kappa$
is a set of directed edges (ordered pairs of vertices) connecting two clusters if the wind blows from the source cluster to the target cluster at time $t$, and
$\mathscr{E}^\kappa$ denotes the fully-connected graph with $\kappa$ vertices.
Let $y^{\kappa, \eta}_{it}$
be the true average wind speed
of cluster $i$ at time $t$ under data resolution $(\kappa, \eta)$ and let $f^{\kappa, \eta}(i, t)$ be the corresponding wind speed prediction. The spatio-temporal delayed regressive model for the wind speed prediction is specified by:
\begin{equation}
\begin{aligned}
f^{\kappa, \eta}(i, t)
=&~\mu^\kappa_{i} + \sum_{\tau = t-d}^{t-1} \sum_{j: (j, i)\in\mathscr{E}^\kappa_\tau} g^{\kappa, \eta}(t, \tau, i, j),\\
&~i \in \mathscr{V}^\kappa, t\in\mathscr{T}^\eta,
\label{eq:wind-speed-predictor}
\end{aligned}
\end{equation}
where $\mu^\kappa_i$ is a learnable scalar representing the background wind speed in cluster $i$, $d$ is the chosen memory depth, and $g^{\kappa, \eta}(t, \tau, i, j)$ is a triggering function that describes how strong wind energy (m/s) at cluster $j$ at time $\tau$ triggers strong wind energy at cluster $i$ at time $t$ (the model is motivated by Hawkes processes \cite{Hawkes1974}, although here we do not consider point processes).
The choice of the triggering function $g^{\kappa, \eta}$ relies on three key assumptions.
(i) The upstream influence decays over time and hence the triggering function includes an exponential function $\beta \exp\{-\beta(t-\tau)\}$ commonly used to represent such decay,
where $t > \tau \geq 0$ and the parameter $\beta \ge 0$ captures the decay rate of the influence (note that the function integrates to one over $t$).
(ii) The inter-cluster influence varies from pair to pair and may depend on the geographical features of the region that lies between two clusters. Hence each edge $(j, i) \in \mathscr{E}^\kappa$ is associated with a non-negative weight $\alpha^{\kappa, \eta}_{ji} \ge 0$ indicating the correlation between clusters $i$ and $j$: the larger the weight $\alpha^{\kappa, \eta}_{ji}$, the more strongly cluster $i$ is affected by cluster $j$.
(iii) The upstream influence has a physical propagation delay as the wind must travel over the Earth's surface to reach the downstream cluster \cite{hwang2019do};
such delay can be estimated by the distance between two clusters divided by the wind speed at that time.
In summary, the triggering function can be expressed as
\begin{align*}
g^{\kappa, \eta}(t, \tau, i, j)
= &~\alpha^{\kappa, \eta}_{ji} \beta^{\kappa, \eta}_j \exp\{-\beta^{\kappa, \eta}_j (t - \tau - \lambda^{\kappa,\eta}_{ji\tau})\} \cdot \\
&~y^{\kappa, \eta}_{j\tau} \cdot \mathbbm{1}\{t - \tau \ge \lambda^{\kappa,\eta}_{ji\tau}\},\\
&~i, j \in \mathscr{V}^\kappa, t, \tau \in \mathscr{T}^\eta,
\end{align*}
where
$\{\lambda^{\kappa,\eta}_{jit} > 0\}$
is a tensor of wind travel times (in seconds) from cluster $j$ to cluster $i$ at time $t$ estimated from real data
and $\beta^{\kappa, \eta}_j \ge 0$ is the decay rate of cluster $j$'s influence.
Since the model assumes that $\alpha^{\kappa, \eta}_{ii} = 1$ for all $i \in \mathscr{V}^\kappa$, $\beta^{\kappa, \eta}_j$ also governs the decay of cluster $j$'s self-influence.
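A direct (unvectorized) evaluation of the predictor and its triggering function can be sketched as follows. The two-cluster toy setup, the dictionary representation of the dynamic edge sets, and all parameter values are illustrative assumptions rather than fitted quantities:

```python
import numpy as np

def stdr_predict(i, t, y, mu, alpha, beta, lam, edges, d):
    """Wind-speed prediction f(i, t) under the delayed regressive model.

    y:     (kappa, T) observed cluster speeds
    mu:    (kappa,) background speeds
    alpha: (kappa, kappa) edge weights, alpha[j, i] for edge j -> i
    beta:  (kappa,) decay rates
    lam:   (kappa, kappa, T) travel-time delays lam[j, i, tau], in time units
    edges: dict tau -> set of directed edges (j, i) active at time tau
    """
    f = mu[i]
    for tau in range(max(0, t - d), t):
        for (j, k) in edges.get(tau, ()):
            if k != i:
                continue
            delay = lam[j, i, tau]
            if t - tau >= delay:                      # wind has had time to arrive
                f += (alpha[j, i] * beta[j]
                      * np.exp(-beta[j] * (t - tau - delay)) * y[j, tau])
    return f

# toy example: two clusters, a single persistent edge 0 -> 1, one-unit delay
kappa, T, d = 2, 10, 4
y = np.full((kappa, T), 5.0)
mu = np.array([4.0, 4.0])
alpha = np.array([[1.0, 0.5], [0.0, 1.0]])
beta = np.array([0.8, 0.8])
lam = np.ones((kappa, kappa, T))
edges = {tau: {(0, 1)} for tau in range(T)}
f = stdr_predict(1, 8, y, mu, alpha, beta, lam, edges, d)   # exceeds mu[1] = 4
```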
\vspace{.1in}
\noindent\emph{Directed dynamic graph for wind direction.}
The directed dynamic graph (DDG) $\mathscr{G}^{\kappa, \eta}$ in \eqref{eq:wind-speed-predictor} defines the wind direction between clusters and can be extracted from the raw data.
Given a data resolution $(\kappa, \eta)$, the dynamic graph includes the directed edge from cluster $j$ to $i$ at time $t$ if the difference between their wind directions at that time is not larger than 15\degree, as illustrated in Fig.~\ref{fig:illustration-ddg}~(a). Appendix~\ref{append:ddg} gives the algorithm for extracting the DDG. The graph support $\mathscr{E}^\kappa$ has a sparse structure: indeed, the preliminary analysis in Section~\ref{sec:data} indicates that an arbitrary cluster can only be affected by its nearest clusters within a 100-km radius, as shown in Fig.~\ref{fig:illustration-ddg}~(b). The sparsity of the graph support significantly reduces the computation of \eqref{eq:wind-speed-predictor} and plays a key role in the computational efficiency of the proposed model. Fig.~\ref{fig:dynamic-graph-diff-spatial-resolution} presents two examples of DDGs extracted from the real wind data at different space resolutions.
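A minimal sketch of the edge-extraction rule is given below. The 15-degree direction tolerance and 100-km radius come from the text; the additional bearing check that orients each edge from the upstream to the downstream cluster (with a hypothetical 90-degree tolerance) is one reading of ``the wind blows from the source cluster to the target cluster'' and is an assumption for illustration:

```python
import numpy as np

def circ_diff(a, b):
    """Smallest absolute difference between two compass angles (degrees)."""
    d = np.abs(a - b) % 360.0
    return np.minimum(d, 360.0 - d)

def extract_ddg(locs, wind_dir, radius=100.0, tol=15.0, bearing_tol=90.0):
    """Directed edges (j, i): similar wind directions, clusters within
    `radius` km, and the wind at j actually blowing toward i."""
    kappa = len(locs)
    edges = set()
    for j in range(kappa):
        for i in range(kappa):
            if i == j:
                continue
            disp = locs[i] - locs[j]                  # (east, north) displacement, km
            if np.linalg.norm(disp) > radius:
                continue
            if circ_diff(wind_dir[j], wind_dir[i]) > tol:
                continue
            # compass bearing from j to i (0 deg = north, clockwise)
            bearing = np.degrees(np.arctan2(disp[0], disp[1])) % 360.0
            # wind_dir is where the wind comes FROM; travel bearing is opposite
            travel = (wind_dir[j] + 180.0) % 360.0
            if circ_diff(bearing, travel) <= bearing_tol:
                edges.add((j, i))
    return edges

# toy example: wind from the west (270 deg) over three clusters on a line
locs = np.array([[0.0, 0.0], [50.0, 0.0], [300.0, 0.0]])
wind_dir = np.array([270.0, 272.0, 268.0])
edges = extract_ddg(locs, wind_dir)                   # only 0 -> 1 survives
```

The far cluster is dropped by the radius check, and the reverse edge $1 \to 0$ is dropped by the bearing check.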
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{0.4\linewidth}
\includegraphics[width=\linewidth]{imgs/dynamic-graph-illustration}
\caption{Dynamic edge}
\end{subfigure}
\begin{subfigure}[h]{0.4\linewidth}
\includegraphics[width=\linewidth]{imgs/dynamic-graph-support}
\caption{Graph support}
\end{subfigure}
\caption{Illustrations of (a) an edge in the DDG and (b) the support of DDG for one of the clusters.}
\label{fig:illustration-ddg}
\vspace{-0.3cm}
\end{figure}
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{0.475\linewidth}
\includegraphics[width=\linewidth]{imgs/regressor/graph-plot-0-50.pdf}
\caption{$\kappa=50$}
\end{subfigure}
\begin{subfigure}[h]{0.49\linewidth}
\includegraphics[width=\linewidth]{imgs/regressor/graph-plot-0-100.pdf}
\caption{$\kappa=100$}
\end{subfigure}
\caption{Examples of DDGs for wind direction with different space resolutions. The lines represent the wind direction from one cluster to another, where the dark red end indicates the upstream cluster and the light red end the downstream cluster.}
\label{fig:dynamic-graph-diff-spatial-resolution}
\vspace{-0.3cm}
\end{figure}
\vspace{.1in}
\noindent\emph{Model estimation.}
The model with resolution $(\kappa, \eta)$ can be estimated by minimizing the mean square error between the true wind speed and its prediction.
Denote the set of the parameters of the model $f^{\kappa, \eta}$ as $\theta^{\kappa, \eta} \coloneqq \{\{\mu_i^\kappa\}, \{\alpha_{ij}^{\kappa, \eta}\}, \{\beta_i^{\kappa, \eta}\}\} \in \Theta^\kappa$, where $\Theta^\kappa \subseteq \mathbb{R}_+^\kappa \times \mathbb{R}_+^{\kappa \times \kappa} \times \mathbb{R}_+^\kappa$ is the corresponding parameter space.
It is noteworthy that the model includes fewer than $\kappa (2 + \kappa)$ parameters thanks to the sparse structure of $\{\alpha_{ij}^{\kappa, \eta}\}$. The model can be effectively learned using the $\kappa (T/\eta)$ data points available under resolution $(\kappa, \eta)$, where $\kappa \ll T/\eta$; for example, when $\kappa=20$ and $\eta=4$, there are about 120 parameters and 3,740 data points.
Over-prediction of future wind power may result in fewer commitments of other types of generators, creating reliability issues. To discourage this, the loss function for training the model includes a scaling factor that penalizes overestimation.
Formally, the optimal parameters can be found by solving the following optimization problem:
\begin{align*}
&~\underset{\theta^{\kappa, \eta} \in \Theta^{\kappa}}{\arg \min}~l(\theta^{\kappa, \eta}) \coloneqq \\
&~\sum_{t=1}^{T/\eta} \sum_{i=1}^\kappa
\Big( e_{it}^{\kappa, \eta} + \delta \mathbbm{1}\left\{y_{it}^{\kappa, \eta} \le f^{\kappa, \eta}(i, t)\right\} e_{it}^{\kappa, \eta} \Big) ,
\end{align*}
where $e_{it}^{\kappa, \eta} \coloneqq \left[y_{it}^{\kappa, \eta} - f^{\kappa, \eta}(i, t)\right]^2$ denotes the square error of the prediction and $\delta \ge 0$ is a factor that controls the magnitude of penalization for the overestimation.
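The penalized objective is straightforward to express in code. The sketch below evaluates the loss on three toy points (one underestimate, one overestimate, one exact prediction) with the scaling factor $\delta = 0.8$ that is used later in the experiments:

```python
import numpy as np

def stdr_loss(y, f, delta=0.8):
    """Squared-error loss with an extra penalty on overestimation.

    y, f: arrays of true and predicted speeds over all (i, t);
    an overestimate (y <= f) is weighted by (1 + delta)."""
    e = (y - f) ** 2
    over = (y <= f).astype(float)                     # indicator of overestimation
    return float(np.sum(e + delta * over * e))

y = np.array([5.0, 6.0, 7.0])
f = np.array([4.0, 6.5, 7.0])                         # under-, over-, exact
loss = stdr_loss(y, f, delta=0.8)                     # 1 + 1.8 * 0.25 = 1.45
```

With $\delta = 0$ the loss reduces to the plain sum of squared errors (here $1.25$), so the overestimate at the second point costs strictly more than an underestimate of the same size.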
\section{Multi-Resolution Spatio-Temporal Kriging}
\label{sec:kriging}
Multi-resolution modeling enables the seamless fusion of information from a collection of heterogeneous sources of predictions with various accuracies and uncertainties.
These predictions can be considered correlated ``experts'', and each of them may outperform others under certain circumstances (see Table~\ref{tab:illustration}).
By capturing the cross-correlation between these sources, one can construct surrogate models that can dramatically improve the predictive performance.
These deterministic estimators at different data resolutions also enjoy a degree of smoothness, in the sense that the output values for similar spatio-temporal coordinates are reasonably close.
This section presents a new method that corrects the deterministic prediction results by adding a noise term that captures the prior belief about each level of predictions via a Kriging model.
Consider the real wind speeds $\{y_{it}^{\kappa, \eta}\}$ and their deterministic predictions $\{f^{\kappa,\eta}(i, t)\}$ using the method described in Section~\ref{sec:regression} under different data resolutions $(\kappa, \eta) \in \mathscr{R}$, where $\mathscr{R}$ denotes the set of data resolutions. The goal is to find a corrected estimator that improves the predictive accuracy and reduces uncertainty by leveraging the information across different data sets with different resolutions. The proposed multi-resolution Kriging model is given by
\begin{equation}
y^{\kappa, \eta}_{it} = f^{\kappa, \eta}(i, t) + \epsilon(i, t, \kappa, \eta) + c^\kappa_i,~i \in \mathscr{V}^\kappa, t\in \mathscr{T}^\eta,
\end{equation}
where $\epsilon(i, t, \kappa, \eta)$ denotes the error of the deterministic estimator and follows a zero-mean Gaussian process whose covariance is specified by a kernel function $k$. We also introduce a constant $c_i^\kappa > 0$ representing the mean prediction error for each cluster when overestimations are penalized ($\delta > 0$), which can be estimated empirically from the prediction errors of cluster $i$, i.e., $(\sum_{t=1}^{T/\eta} (y_{it}^{\kappa, \eta} - f^{\kappa, \eta}(i, t))^2) / (T/\eta)$.
For simplicity, denote the spatio-temporal coordinate of a cluster $i$ at time $t$ under data resolution $(\kappa, \eta)$ by $\mathbf{x} \coloneqq (i, t, \kappa, \eta) \in \mathscr{X}$, where $\mathscr{X}$
represents the corresponding joint space.
For any subset $\mathbf{X} \subseteq \mathscr{X}$ with $N$ coordinates,
the corresponding real wind speeds and their predictions are denoted by $\mathbf{y} \coloneqq \{y_{it}^{\kappa, \eta}\}$ and $\mathbf{f} \coloneqq \{f^{\kappa, \eta}(i, t)\}$, respectively. Assume that the set of function variables $\boldsymbol{\epsilon} \coloneqq \{\epsilon(i, t, \kappa, \eta)\}$ has joint (zero-mean) distribution
\begin{equation}
p(\boldsymbol{\epsilon}) = \mathcal{N}(\boldsymbol{\epsilon}~|~
\mathbf{0}, \mathbf{K}_{XX}),
\label{eq:noise}
\end{equation}
where $\mathbf{K}_{XX}$
is an $N \times N$ matrix and its entries are pairwise evaluations of $k(\mathbf{x}, \mathbf{x}')$, $\forall \mathbf{x}, \mathbf{x}' \in \mathbf{X}$.
Here $\mathcal{N}(\boldsymbol{x}|\boldsymbol{\mu}, \boldsymbol{\Sigma})$ is the probability density function of a Gaussian random variable $\boldsymbol{x}$ with mean $\boldsymbol{\mu}$ and covariance matrix $\boldsymbol{\Sigma}$.
The distribution of $\mathbf{y}$ can then be expressed as
\begin{equation}
p(\mathbf{y}) = \mathcal{N}(\mathbf{y}~|~\mathbf{f}
, \mathbf{K}_{XX}).
\label{eq:prob-y}
\end{equation}
\noindent
The parameters of the Kriging model are optimized by maximizing the log marginal likelihood of wind speeds:
\begin{equation}
\begin{aligned}
&~ \underset{\sigma \in \Sigma}{\arg \max}~\ell(\sigma)
\coloneqq \log p(\mathbf{y}) = \\
&~ -\frac{1}{2} \left( \mathbf{y} - \mathbf{f} \right)^\top \mathbf{K}_{XX}^{-1} \left( \mathbf{y} - \mathbf{f} \right) - \frac{1}{2} \log |\mathbf{K}_{XX}| - \frac{N}{2} \log 2\pi,
\label{eq:objective}
\end{aligned}
\end{equation}
where $\sigma$ denotes the set of model parameters and $\Sigma$ is the corresponding parameter space; the Gram matrix $\mathbf{K}_{XX}$ is invertible because, in practice, no two data points share the same spatio-temporal coordinates.
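For reference, the log marginal likelihood in \eqref{eq:objective} can be evaluated stably through a Cholesky factorization rather than an explicit matrix inverse; the identity-covariance sanity check at the end is purely illustrative:

```python
import numpy as np

def log_marginal_likelihood(y, f, K):
    """-1/2 r^T K^{-1} r - 1/2 log|K| - N/2 log 2pi, with r = y - f."""
    r = y - f
    N = len(y)
    L = np.linalg.cholesky(K)                         # K = L L^T, assumed PD
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, r))
    logdet = 2.0 * np.sum(np.log(np.diag(L)))         # log|K| from the factor
    return float(-0.5 * r @ alpha - 0.5 * logdet - 0.5 * N * np.log(2 * np.pi))

# sanity check: K = I recovers the standard-normal log-density of the residuals
y = np.array([0.3, -0.1, 0.2])
f = np.zeros(3)
ll = log_marginal_likelihood(y, f, np.eye(3))
```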
\subsection{Kernel design for time, space, and data resolution}
This section discusses the choice of the kernel function $k$. Standard GP models use a stationary covariance, in which the covariance between any two points is a function of their Euclidean distance.
In this study, the kernel requires a different covariance structure, since its inputs contain both spatio-temporal coordinates and the data resolution.
The kernel design first assumes that time, space, and data resolution are mutually independent, as each of these factors describes the wind power in a different domain. The correlation function is thus separable, i.e.,
\begin{align*}
&~k\left( (i, t, \kappa, \eta), (j, \tau, \kappa', \eta') \right) =\\ &~\upsilon_s(i, \kappa, j, \kappa') \cdot \upsilon_t(t, \eta, \tau, \eta') \cdot \nu_s(\kappa, \kappa') \cdot \nu_t(\eta, \eta'),\\
&~i\in\mathscr{V}^\kappa, j\in\mathscr{V}^{\kappa'}, t\in\mathscr{T}^\eta, \tau\in\mathscr{T}^{\eta'}.
\end{align*}
The temporal and spatial kernels are also assumed to be the commonly used Gaussian correlation functions:
\begin{align*}
\upsilon_s(i, \kappa, j, \kappa') =&~\exp\left\{ - \sigma_s ||s_i^\kappa - s_j^{\kappa'}||^2 \right\},
~i\in\mathscr{V}^\kappa, j\in\mathscr{V}^{\kappa'},\\
\upsilon_t(t, \eta, \tau, \eta') =&~\exp\left\{ - \sigma_t (t\eta - \tau{\eta'})^2 \right\},
~t\in\mathscr{T}^\eta, \tau\in\mathscr{T}^{\eta'},
\end{align*}
where $\sigma_s > 0$ and $\sigma_t > 0$ are two learnable parameters. Recall that $s_i^\kappa$ represents the geographical location of the cluster $i$ and $t\eta$ is the recorded time (\emph{not} the time index) under the data resolution $(\kappa, \eta)$.
The design of $\nu_s$ and $\nu_t$ requires more deliberation. According to Fig.~\ref{fig:pred-out-of-sample}, they cannot be zero when $\kappa =\kappa'$ or $\eta = \eta'$ and should become larger as $\kappa$ or $\eta$ increases.
Therefore, the following correlation function was selected for both $\nu_s$ and $\nu_t$:
\begin{align*}
\nu_s(\kappa, \kappa') =
&~\exp\left\{-\sigma^\kappa_0 (\kappa - \kappa')^2 \right\} + \\
&~\exp\left\{-\sigma^\kappa_1 (\kappa - K)^2 \right\} +
\exp\left\{-\sigma^\kappa_1 (\kappa' - K)^2 \right\},\\
\nu_t(\eta, \eta') =
&~\exp\left\{ - \sigma^\eta_0 (\eta - \eta')^2 \right\} + \\
&~\exp\left\{-\sigma^\eta_1 (\eta - T)^2 \right\} +
\exp\left\{-\sigma^\eta_1 (\eta' - T)^2 \right\},
\end{align*}
where $\sigma^\kappa_0 > 0$, $\sigma^\kappa_1 > 0$, $\sigma^\eta_0 > 0$, and $\sigma^\eta_1 > 0$ are learnable parameters.
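The full separable kernel can be sketched directly from the formulas above. The toy centroid table `s_loc`, the parameter values, and the two evaluated coordinate pairs are illustrative assumptions; the constants $K = 506$ and $T = 750$ follow the data description:

```python
import numpy as np

def kernel(x, xp, s_loc, params, K=506, T=750):
    """Separable covariance between coordinates x = (i, t, kappa, eta).

    s_loc maps (i, kappa) -> 2-d centroid location; params holds the
    learnable rates (sigma_s, sigma_t, sk0, sk1, se0, se1), all > 0."""
    i, t, ka, et = x
    j, tau, kap, etp = xp
    sigma_s, sigma_t, sk0, sk1, se0, se1 = params
    us = np.exp(-sigma_s * np.sum((s_loc[(i, ka)] - s_loc[(j, kap)]) ** 2))
    ut = np.exp(-sigma_t * (t * et - tau * etp) ** 2)   # recorded time, not index
    ns = (np.exp(-sk0 * (ka - kap) ** 2)
          + np.exp(-sk1 * (ka - K) ** 2) + np.exp(-sk1 * (kap - K) ** 2))
    nt = (np.exp(-se0 * (et - etp) ** 2)
          + np.exp(-se1 * (et - T) ** 2) + np.exp(-se1 * (etp - T) ** 2))
    return us * ut * ns * nt

# toy centroids for the same cluster index under two space resolutions
s_loc = {(0, 20): np.array([0.0, 0.0]), (0, 40): np.array([0.1, 0.0])}
params = (1.0, 1e-4, 1e-3, 1e-4, 1e-2, 1e-4)
k_same = kernel((0, 5, 20, 4), (0, 5, 20, 4), s_loc, params)   # close to 1
k_diff = kernel((0, 5, 20, 4), (0, 6, 40, 4), s_loc, params)   # strictly smaller
```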
\subsection{Variational learning for large-scale data set}
\label{sec:variational-inference}
The GP approach is notoriously intractable for large data sets since the computations require the inversion of an $N \times N$ matrix, which scales as $O(N^3)$ \cite{rasmussen2003gaussian}.
In this study, the data set includes 11 different data resolutions, with up to 190 time indices and 50 clusters, resulting in $N \approx 95{,}000$. To address the tractability issues, this paper
derives sparse models for the noise $\boldsymbol{\epsilon}$ inspired by \cite{titsias2009variational, hensman2013gaussian, hensman2015scalable, zhu2021early}.
The idea is to introduce a small set of $M$ auxiliary inducing variables $\mathbf{u}$ evaluated at the pseudo-inputs $\mathbf{Z} \coloneqq \{z \in \mathscr{X}\}$ that aim to best approximate the training data. The initial inputs $\mathbf{Z}$ are a subset of spatio-temporal coordinates that are randomly sampled from the training inputs \cite{snelson2005sparse}. It is then possible to adopt a variational learning strategy for such a sparse approximation and jointly infer the optimal inducing inputs and other model parameters by maximizing a lower bound of the true log marginal likelihood.
Inducing variables $\mathbf{u}$ are function points drawn from the same GP prior as the training functions $\boldsymbol{\epsilon}$ in \eqref{eq:noise}, so the joint distribution can be written as
\begin{equation}
p(\boldsymbol{\epsilon}, \mathbf{u}) = \mathcal{N}\left(~
\begin{bmatrix}
\boldsymbol{\epsilon} \\
\mathbf{u}
\end{bmatrix}
~\bigg|~
\mathbf{0},~
\begin{bmatrix}
\mathbf{K}_{XX} & \mathbf{K}_{XZ} \\
\mathbf{K}_{XZ}^\top & \mathbf{K}_{ZZ}
\end{bmatrix}~
\right),
\label{eq:joint-dist-f-u}
\end{equation}
where $\mathbf{K}_{ZZ}$ is formed by evaluating the kernel function pairwise at all pairs of inducing points in $\mathbf{Z}$, and $\mathbf{K}_{XZ}$ is formed similarly by evaluating the kernel function across the data points in $\mathbf{X}$ and the inducing points in $\mathbf{Z}$.
To obtain computationally efficient inference, the posterior distribution $p(\boldsymbol{\epsilon}, \mathbf{u}|\mathbf{y})$ over the random vectors $\boldsymbol{\epsilon}$ and $\mathbf{u}$ is approximated by a variational distribution $q(\boldsymbol{\epsilon}, \mathbf{u})$.
Assume that this variational distribution $q(\boldsymbol{\epsilon}, \mathbf{u})$ can be factorized as
$
q(\boldsymbol{\epsilon}, \mathbf{u})
\coloneqq p(\boldsymbol{\epsilon} | \mathbf{u}) q(\mathbf{u})
$.
To jointly determine the variational parameters and model parameters $\sigma = \{\sigma_s, \sigma_t, \{\sigma_0^\kappa, \sigma_1^\kappa\}, \{\sigma_0^\eta, \sigma_1^\eta\}\}$, the variational evidence lower bound (ELBO) \cite{hoffman2016elbo}
substitutes for the marginal likelihood $\ell(\sigma)$ defined in \eqref{eq:objective}:
\begin{equation}
\log p(\mathbf{y}) \ge \mathbb{E}_{q(\boldsymbol{\epsilon})}\left [ \log p(\mathbf{y}|\boldsymbol{\epsilon}) \right ] - \text{KL}\left [ q(\mathbf{u}) || p(\mathbf{u}) \right ],
\label{eq:elbo}
\end{equation}
where $\text{KL}[q||p]$ denotes the Kullback--Leibler (KL) divergence between two distributions $q$ and $p$ \cite{kullback1951information}. The derivation defines $q(\boldsymbol{\epsilon}) \coloneqq \int p(\boldsymbol{\epsilon}|\mathbf{u}) q(\mathbf{u}) d\mathbf{u}$ and assumes $q(\mathbf{u}) \coloneqq \mathcal{N}(\mathbf{u} | \mathbf{m}, \mathbf{S})$, which is the most common way to parameterize the variational distribution of the inducing variables in terms of a mean vector $\mathbf{m}$ and a covariance matrix $\mathbf{S}$.
To ensure that the covariance matrix remains positive definite, it is parameterized through its Cholesky factor $\mathbf{S} = \mathbf{L} \mathbf{L}^\top$, where $\mathbf{L}$ is lower triangular. This leads to the analytical form for $q(\boldsymbol{\epsilon})$:
\[
q(\boldsymbol{\epsilon}) = \mathcal{N}(\boldsymbol{\epsilon}~|~\mathbf{A}\mathbf{m}, \mathbf{K}_{XX} + \mathbf{A} (\mathbf{S} - \mathbf{K}_{ZZ}) \mathbf{A}^\top),
\]
where $\mathbf{A} = \mathbf{K}_{XZ} \mathbf{K}_{ZZ}^{-1}$.
The likelihood can also be factorized as $p(\mathbf{y}|\boldsymbol{\epsilon}) = \prod_{n=1}^N p(y_n|\epsilon_n)$ for the ease of computation in \eqref{eq:elbo}. Therefore, the ELBO objective can be rewritten as
\begin{equation}
\begin{aligned}
&~\ell_\text{ELBO}(\sigma, \mathbf{Z}, \mathbf{m}, \mathbf{S}) \coloneqq \\
&~\sum_{n=1}^N \mathbb{E}_{q(\epsilon_n)}\left [ \log p(y_n|\epsilon_n) \right ] - \text{KL}\left [ q(\mathbf{u}) || p(\mathbf{u}) \right ].
\label{eq:obj-elbo}
\end{aligned}
\end{equation}
Note that the one-dimensional integrals of the log-likelihood in \eqref{eq:obj-elbo} can be computed by Gauss--Hermite quadrature \cite{liu1994note} (the derivation of the ELBO can be found in Appendix~\ref{append:elbo}).
In contrast to directly maximizing the marginal log likelihood defined in \eqref{eq:objective}, computing this objective and its derivatives only costs $O(NM^2)$.
In practice, the optimization is carried out through stochastic gradient descent (see Appendix~\ref{append:sgd}).
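The one-dimensional expectations in \eqref{eq:obj-elbo} can be sketched with NumPy's Gauss-Hermite routine. A Gaussian likelihood with noise variance $s^2$ is an assumption here (the paper leaves the likelihood implicit); for that choice the expectation also has a closed form, which gives a convenient check on the quadrature:

```python
import numpy as np

def expected_log_lik(y, f, m, v, s2, n_pts=20):
    """E_{eps ~ N(m, v)}[log N(y | f + eps, s2)] via Gauss-Hermite quadrature."""
    nodes, weights = np.polynomial.hermite.hermgauss(n_pts)
    eps = m + np.sqrt(2.0 * v) * nodes                # change of variables
    log_lik = -0.5 * np.log(2 * np.pi * s2) - (y - f - eps) ** 2 / (2 * s2)
    return float(np.sum(weights * log_lik) / np.sqrt(np.pi))

# for a Gaussian likelihood the expectation is available in closed form:
# -1/2 log(2 pi s2) - ((y - f - m)^2 + v) / (2 s2)
y, f, m, v, s2 = 5.0, 4.2, 0.5, 0.3, 0.4
quad = expected_log_lik(y, f, m, v, s2)
exact = -0.5 * np.log(2 * np.pi * s2) - ((y - f - m) ** 2 + v) / (2 * s2)
```

Because the integrand is quadratic in $\epsilon$ after the change of variables, the 20-point rule reproduces the closed form to machine precision.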
\subsection{Prediction with variational posterior}
A one-step-ahead prediction of the wind speed in cluster $i$ at time $t+1$ given the past observations (up to time $t$) at resolution $(\kappa, \eta)$ requires the derivation of the posterior distribution $p(\boldsymbol{\epsilon}|\mathbf{y})$.
For all clusters $i \in \mathscr{V}^\kappa$, we consider the spatio-temporal coordinates $\mathbf{X}_t \coloneqq \{(i, \tau, \kappa, \eta)\}_{\tau\le t}$, their observations $\mathbf{y}_t \coloneqq \{y_{i\tau}^{\kappa, \eta}\}_{\tau\le t}$, and the fitted inducing points $\mathbf{Z}$, and assume that the unobserved future data come from the same generating process.
Therefore, for all the coordinates at the next moment $\mathbf{X}_* \coloneqq \{(i, t+1, \kappa, \eta)\}$,
the distribution of one-step-ahead noise prediction $\boldsymbol{\epsilon}_* \coloneqq \{\epsilon_*(i, t+1, \kappa, \eta)\}$ is given by
\begin{equation}
p(\boldsymbol{\epsilon}_*|\mathbf{y}_t)
= \mathcal{N}(\boldsymbol{\epsilon}_*~|~\mathbf{A}_* \mathbf{m}, \mathbf{A}_* \mathbf{S} \mathbf{A}_*^\top + \mathbf{B}_*),
\label{eq:pred-posterior}
\end{equation}
where $\mathbf{A}_* = \mathbf{K}_{*Z}\mathbf{K}_{ZZ}^{-1}$ and $\mathbf{B}_* = \mathbf{K}_{**} - \mathbf{K}_{*Z} \mathbf{K}_{ZZ}^{-1} \mathbf{K}_{*Z}^\top$.
Here $\mathbf{K}_{*Z}$ denotes a $\kappa \times M$ matrix whose entries are pairwise evaluations of $k(\mathbf{x}_*, \mathbf{z})$ for $\mathbf{x}_* \in \mathbf{X}_*$ and $\mathbf{z} \in \mathbf{Z}$.
The derivation of the predictive posterior can be found in Appendix~\ref{append:pred-posterior}.
The wind-speed prediction can therefore be made by plugging \eqref{eq:pred-posterior} into \eqref{eq:prob-y}.
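The predictive equations in \eqref{eq:pred-posterior} can be sketched as follows. The squared-exponential toy kernel, the jitter term on $\mathbf{K}_{ZZ}$, and the random coordinates are illustrative assumptions:

```python
import numpy as np

def predict_noise(K_sZ, K_ss, K_ZZ, m, S):
    """Predictive distribution of eps_* given the inducing posterior N(m, S):
    mean A m, covariance A S A^T + (K_** - A K_ZZ A^T), with A = K_*Z K_ZZ^{-1}."""
    A = K_sZ @ np.linalg.inv(K_ZZ)
    mean = A @ m
    cov = A @ S @ A.T + K_ss - A @ K_sZ.T             # A K_sZ^T = K_*Z K_ZZ^{-1} K_*Z^T
    return mean, cov

# toy example with M = 2 inducing points and 3 prediction coordinates
rng = np.random.default_rng(0)
Z = rng.uniform(size=(2, 1))
X = rng.uniform(size=(3, 1))
k = lambda a, b: np.exp(-((a - b.T) ** 2))            # simple squared-exp kernel
K_ZZ = k(Z, Z) + 1e-6 * np.eye(2)                     # jitter for stability
K_sZ = k(X, Z)
K_ss = k(X, X)
m = np.zeros(2)                                       # zero inducing mean
S = 0.1 * np.eye(2)                                   # toy inducing covariance
mean, cov = predict_noise(K_sZ, K_ss, K_ZZ, m, S)
```

With a zero inducing mean the predictive mean is zero, and the covariance stays symmetric with a strictly positive diagonal.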
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{\linewidth}
\includegraphics[width=.49\linewidth]{imgs/outofsample/red-outofsample-K20-N4-average}
\includegraphics[width=.49\linewidth]{imgs/outofsample/red-outofsample-K20-N4-cluster8}
\caption{$\kappa=20, \eta=4$}
\end{subfigure}
\begin{subfigure}[h]{\linewidth}
\includegraphics[width=.49\linewidth]{imgs/outofsample/red-outofsample-K20-N24-average}
\includegraphics[width=.49\linewidth]{imgs/outofsample/red-outofsample-K20-N24-cluster8}
\caption{$\kappa=20, \eta=24$}
\end{subfigure}
\vfill
\begin{subfigure}[h]{\linewidth}
\includegraphics[width=.49\linewidth]{imgs/outofsample/red-outofsample-K50-N4-average}
\includegraphics[width=.49\linewidth]{imgs/outofsample/red-outofsample-K50-N4-cluster8}
\caption{$\kappa=50, \eta=4$}
\end{subfigure}
\begin{subfigure}[h]{\linewidth}
\includegraphics[width=.49\linewidth]{imgs/outofsample/red-outofsample-K50-N24-average}
\includegraphics[width=.49\linewidth]{imgs/outofsample/red-outofsample-K50-N24-cluster8}
\caption{$\kappa=50, \eta=24$}
\end{subfigure}
\caption{Examples of prediction using \texttt{STDR} on four data sets with different data resolutions. The first column shows the spatial average of the predictions and the second column shows the prediction for one of the clusters.
}
\label{fig:pred-out-of-sample}
\vspace{-0.3cm}
\end{figure}
\section{Results on the Case Study}
\label{sec:result}
The results consider the wind data at 11 different data resolutions as shown in Fig.~\ref{fig:multi-resolution-comparison}. The proposed spatio-temporal delayed regressive (\texttt{STDR}) model described in Section~\ref{sec:regression} generates deterministic wind-speed predictions for each of these data sets. The predictions are then corrected using the proposed multi-resolution spatio-temporal Kriging (\texttt{MRSTK}) model described in Section~\ref{sec:kriging}. The out-of-sample predictive performance of these two approaches is measured using their mean absolute error (MAE). Observe that model \texttt{MRSTK} not only generates accurate predictions, but also quantifies the uncertainty about the predictions. The results report the estimated predictive confidence intervals of \texttt{MRSTK} over time and space, respectively. The results are also compared with other baseline approaches. The estimation of each \texttt{STDR} uses Stochastic Gradient Descent (SGD) with a learning rate of $10^{-2}$ and a scaling factor $\delta = 0.8$ to penalize overestimation. The estimated model for predicting the wind speed at time $t$ is used as a warm start for the model at time $t+1$.
Model \texttt{MRSTK} with $M = 500$ inducing variables is estimated with SGD with a learning rate of $10^{-2}$ and a batch size of $1,000$. All experiments are performed on Google Colaboratory (Pro version) with 12GB RAM and dual-core Intel processors, with speeds up to 2.3 GHz (without GPU).
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{.49\linewidth}
\includegraphics[width=\linewidth]{imgs/kriging/gp-k50-n24-avg}
\caption{$\kappa=50, \eta=24$}
\end{subfigure}
\begin{subfigure}[h]{.49\linewidth}
\includegraphics[width=\linewidth]{imgs/kriging/gp-k30-n20-avg}
\caption{$\kappa=30, \eta=20$}
\end{subfigure}
\vfill
\begin{subfigure}[h]{.49\linewidth}
\includegraphics[width=\linewidth]{imgs/kriging/gp-k30-n8-avg}
\caption{$\kappa=30, \eta=8$}
\end{subfigure}
\begin{subfigure}[h]{.49\linewidth}
\includegraphics[width=\linewidth]{imgs/kriging/gp-k20-n4-avg}
\caption{$\kappa=20, \eta=4$}
\end{subfigure}
\caption{Examples of corrected prediction suggested by \texttt{MRSTK} on four data sets with different data resolutions. The black dash line and red solid line represent the ground truth and deterministic prediction, respectively. The blue solid line represents the corrected prediction made by \texttt{MRSTK}, where the blue shaded area represents the corresponding confidence interval.}
\label{fig:ci-out-of-sample}
\vspace{-0.3cm}
\end{figure}
\subsection{Deterministic spatio-temporal prediction}
The \texttt{STDR}'s predictive power is assessed by performing one-step-ahead (i.e., $\eta$-unit-ahead) out-of-sample prediction at different data resolutions.
The prediction for time index $t$ given resolution $(\kappa, \eta)$ is carried out by
(i) withholding the data after $t$ from the model estimation and using the $\eta$-unit moving average of historical data before $t$ to fit the model (Fig.~\ref{fig:prediction-illustration} in Appendix~\ref{append:extra-result}); and
(ii) using the fitted model to make predictions for the (hold-out) data at time $t+1$. Fig.~\ref{fig:pred-out-of-sample} presents examples of one-step ahead predictions of \texttt{STDR} at four different data resolutions.
Fig.~\ref{fig:mae-out-of-sample-space} in Appendix~\ref{append:extra-result} summarizes the cluster-wise MAEs for the same four data sets.
The above results show that model \texttt{STDR} can predict wind speed accurately at the cluster level. Observe that high-resolution wind speeds oscillate rapidly with significant amplitude, so increasing the data resolution $(\kappa, \eta)$ degrades the predictive accuracy significantly.
In addition, Fig.~\ref{fig:alpha} and Fig.~\ref{fig:community-alpha} in Appendix~\ref{append:extra-result} visualize the fitted $\{\alpha_{ji}^{\kappa, \eta}\}$ for different $\kappa$ and the community structures detected by the Leiden algorithm \cite{traag2019louvain}.
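To make the evaluation protocol of steps (i)-(ii) concrete, here is a minimal sketch: smooth the history with an $\eta$-unit trailing moving average, fit a simple model on the smoothed history before $t$, and predict one step ahead. The AR(1) fit here is a hypothetical stand-in for \texttt{STDR}, which is not reproduced.

```python
import numpy as np

def moving_average(x, eta):
    """eta-unit trailing moving average (step (i) of the protocol)."""
    kernel = np.ones(eta) / eta
    return np.convolve(x, kernel, mode="valid")

def one_step_ahead(history):
    """Fit AR(1) by least squares on the history and predict the next value
    (a stand-in for the fitted STDR model in step (ii))."""
    phi = np.dot(history[1:], history[:-1]) / np.dot(history[:-1], history[:-1])
    return phi * history[-1]

# Toy check on a geometric series x_t = 0.5^t, for which AR(1) is exact.
x = 0.5 ** np.arange(6)
smoothed = moving_average(x, eta=2)
pred = one_step_ahead(x[:-1])      # predict x[5] from x[0..4]
mae = abs(pred - x[-1])
```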
\begin{figure}[!t]
\centering
\begin{subfigure}[h]{.45\linewidth}
\includegraphics[width=\linewidth]{imgs/outofsampleci/ci-k50-n24}
\caption{$\kappa=50, \eta=24$}
\end{subfigure}
\begin{subfigure}[h]{.45\linewidth}
\includegraphics[width=\linewidth]{imgs/outofsampleci/ci-k30-n20}
\caption{$\kappa=30, \eta=20$}
\end{subfigure}
\vfill
\begin{subfigure}[h]{.45\linewidth}
\includegraphics[width=\linewidth]{imgs/outofsampleci/ci-k30-n8}
\caption{$\kappa=30, \eta=8$}
\end{subfigure}
\begin{subfigure}[h]{.45\linewidth}
\includegraphics[width=\linewidth]{imgs/outofsampleci/ci-k20-n4}
\caption{$\kappa=20, \eta=4$}
\end{subfigure}
\caption{Examples of the spatial distribution of the $1\sigma$ confidence interval (68.27\%) for the corrected prediction suggested by \texttt{MRSTK} on four data sets at different data resolutions. The black dot represents the centroid of each cluster; the color depth represents the width of the confidence interval (m/s).}
\label{fig:ci-out-of-sample-space}
\vspace{-0.1cm}
\end{figure}
\subsection{Corrected prediction for multi-resolution data}
The learning of model \texttt{MRSTK} uses all the data sets at different resolutions, together with their predictions before time index $t$, as input in order to correct the predicted wind speed at the next time step $t+1$. Fig.~\ref{fig:ci-out-of-sample} shows four examples of corrected predictions on four data sets with different resolutions.
The red and blue lines indicate the predictions of \texttt{STDR} and \texttt{MRSTK}, respectively.
The results show that \texttt{MRSTK} outperforms \texttt{STDR}, particularly for the period from 180 to 250. Fig.~\ref{fig:ci-out-of-sample} also presents the estimated confidence interval suggested by \texttt{MRSTK}. The black dots represent the observed wind speeds for a certain cluster; the light and dark shaded blue areas indicate the 1-$\sigma$ (68.27\%) and 2-$\sigma$ (95.45\%) prediction intervals, respectively. The result shows that the estimated confidence interval achieves good data coverage. Fig.~\ref{fig:ci-out-of-sample-space} presents the spatial distribution of the estimated 1-$\sigma$ (68.27\%) prediction intervals for the same four data sets. The model achieves smaller confidence intervals as the data resolution decreases. Note also that the regions with significantly higher predictive uncertainty are where the wind usually originates; this is because the data sets do not contain enough upstream observations to infer the wind condition in these regions. Fig.~\ref{fig:multi-resolution-comparison} compares the predictive performance of \texttt{STDR} and \texttt{MRSTK} quantitatively on each of the data sets. The results show that the MAEs of \texttt{MRSTK} are significantly lower than those of \texttt{STDR} in all scenarios, confirming the effectiveness of \texttt{MRSTK}.
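The coverage statement above can be checked mechanically: given predictive means and standard deviations, the empirical coverage of a $k\sigma$ interval is the fraction of observations falling within $k$ standard deviations of the mean. A minimal sketch, using made-up numbers rather than the paper's data:

```python
import numpy as np

def interval_coverage(y, mu, sigma, k):
    """Fraction of observations y inside the k-sigma prediction interval
    [mu - k*sigma, mu + k*sigma]."""
    return float(np.mean(np.abs(y - mu) <= k * sigma))

# Made-up observations against a unit-variance predictive distribution.
mu = np.zeros(4)
sigma = np.ones(4)
y = np.array([0.5, 1.5, 2.5, 3.5])
cov1 = interval_coverage(y, mu, sigma, k=1)  # 1-sigma interval (nominal 68.27%)
cov2 = interval_coverage(y, mu, sigma, k=2)  # 2-sigma interval (nominal 95.45%)
```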
\begin{figure}[!t]
\centering
\includegraphics[width=.7\linewidth]{imgs/kriging/mae-barchart-thin}
\caption{Performance of our methods on 11 data sets at different data resolutions. The red and blue bars represent the MAEs of \texttt{STDR} and \texttt{MRSTK}, respectively.}
\label{fig:multi-resolution-comparison}
\vspace{-0.3cm}
\end{figure}
\subsection{Comparison with baselines}
\begin{table}[!t]
\caption{Performance comparison of all methods.}
\label{tab:mae}
\resizebox{\linewidth}{!}{%
\begin{threeparttable}
\centering
\begin{tabular}{l:lll}
\toprule[1pt]\midrule[0.3pt]
\multicolumn{1}{c:}{} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}MAE\\(average)\end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}MAE\\ ($\kappa=50, \eta=24$)\end{tabular}} &
\multicolumn{1}{c}{\begin{tabular}[c]{@{}c@{}}MAE\\ ($\kappa=20, \eta=4$)\end{tabular}}
\\\hline
\texttt{MRSTK} & \multicolumn{1}{c}{\bf 0.20 (5\%)} & \multicolumn{1}{c}{\bf 0.25 (5\%)} & \multicolumn{1}{c}{\bf 0.14 (4\%)}\\
\texttt{STDR} & \multicolumn{1}{c}{0.27 (6\%)} & \multicolumn{1}{c}{0.31 (7\%)} & \multicolumn{1}{c}{0.21 (6\%)}\\
\texttt{NN} & \multicolumn{1}{c}{0.48 (11\%)} & \multicolumn{1}{c}{0.45 (11\%)} & \multicolumn{1}{c}{0.56 (13\%)}\\
\texttt{LSTM} & \multicolumn{1}{c}{0.50 (12\%)} & \multicolumn{1}{c}{0.50 (11\%)} & \multicolumn{1}{c}{0.43 (11\%)}\\
\texttt{VAR} & \multicolumn{1}{c}{0.45 (11\%)} & \multicolumn{1}{c}{0.70 (16\%)} & \multicolumn{1}{c}{0.27 (6\%)}\\
\midrule[0.3pt]\bottomrule[1pt]
\end{tabular}%
\end{threeparttable}}
\vspace{-0.3cm}
\end{table}
The proposed models were compared with baseline approaches, including a neural-network-based univariate prediction model (\texttt{NN}), a long short-term memory network (\texttt{LSTM}), and a vector autoregressive model (\texttt{VAR}); see \cite{LEI2009915, soman2010review} for a detailed review of these predictive algorithms and Appendix~\ref{append:baseline} for their experimental settings and hyperparameter choices. Table~\ref{tab:mae} reports each model's MAE, and its percentage relative to the ground truth, for out-of-sample prediction at the cluster level. The results
confirm that the proposed models significantly outperform the baseline methods.
\section{Conclusion}
This paper proposes a spatio-temporal predictive model for wind speed.
A directed dynamic graph is introduced to represent the wind directions between clusters of wind farms. The paper also introduces a Bayesian framework that bridges the gap between data at different resolutions through a Gaussian process and significantly enhances the predictive power of the model. The joint framework has shown great promise in modeling and predicting wind speed.
The numerical study also shows that the proposed methods deliver superior predictive performance while providing reasonable uncertainty quantification.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Decoupling via Inverse Iteration}
\label{sec:decoupling}
The following results are the basis of the subroutine we use to decouple a Hessenberg matrix once a forward approximate eigenvalue of $H$ is obtained.
\begin{lemma}[Decoupling in Exact Arithmetic]
\label{lem:exactdecoupling}
Let $s\in \bC$ and $H\in \bC^{n\times n}$ be a Hessenberg matrix. Consider the sequence given by $H_0 := H$ and $H_{{\ell}+1} := R_\ell Q_\ell+s$ for $[Q_\ell, R_\ell] := \mathrm{qr}(H_{\ell}-s)$. Then, for any $m\geq 1$ there is some $1\leq \ell \leq m$ for which
\begin{equation}
\label{eq:exactdecoupling}
|(H_{\ell})_{n, n-1}| \leq \frac{\kappa_V(H)^{\frac{1}{m}}\ds{s}{H}}{\P\Big[|Z_H-s| = \ds{s}{H}\Big]^{\frac{1}{2m}}}.
\end{equation}
\end{lemma}
\begin{proof}
Since, by definition, $R_\ell$ is upper triangular, $Q_\ell$ is unitary (so all of its entries are bounded by 1 in absolute value), and $H_{\ell+1} = R_\ell Q_\ell+s$, we know that
\begin{equation}
\label{eq:upperboundonentry}
|(H_{\ell+1})_{n, n-1}| \le |(R_{\ell})_{n,n}|.
\end{equation}
On the other hand
\begin{align}
|(R_{0})_{n, n} \cdots (R_{m-1})_{n, n}|^{\frac{1}{m}}& = \|e_n^* (H-s)^{-m}\|^{-\frac{1}{m}} \nonumber && \text{\cite[Lemma 3.3]{banks2022II}} \\
& \le \frac{\kappa_V(H)^{\frac{1}{m}}}{\mathbb{E}\left[|Z_H-s|^{-2m}\right]^{\frac{1}{2m}}} \nonumber && \text{Lemma \ref{lem:spectral-measure-apx}} \\ \label{eq:boundonprodofRs}
&\le \frac{\kappa_V(H)^{\frac{1}{m}}\ds{s}{H}}{\P\Big[|Z_H-s| = \ds{s}{H}\Big]^{\frac{1}{2m}}}.
\end{align}
So, combining (\ref{eq:upperboundonentry}) and (\ref{eq:boundonprodofRs}) we get that (\ref{eq:exactdecoupling}) holds for some $1\leq \ell\leq m$.
\end{proof}
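Numerically, the mechanism behind Lemma \ref{lem:exactdecoupling} is easy to observe: running the shifted QR sequence $H_{\ell+1} = R_\ell Q_\ell + s$ with $s$ close to an eigenvalue rapidly drives $(H_\ell)_{n,n-1}$ toward zero. The sketch below (our own toy example, using a companion matrix with known spectrum rather than anything from the paper) tracks this decay in NumPy:

```python
import numpy as np

def shifted_qr_sequence(H, s, m):
    """Run m steps of H <- R Q + s*I where [Q, R] = qr(H - s*I),
    returning the final iterate and the trajectory of |(H_ell)_{n, n-1}|."""
    n = H.shape[0]
    subdiags = []
    for _ in range(m):
        Q, R = np.linalg.qr(H - s * np.eye(n))
        H = R @ Q + s * np.eye(n)
        subdiags.append(abs(H[n - 1, n - 2]))
    return H, subdiags

# Companion matrix of (z-1)(z-2)(z-3)(z-4): an upper Hessenberg matrix
# with known spectrum {1, 2, 3, 4}.
c = np.poly([1.0, 2.0, 3.0, 4.0])
n = 4
H0 = np.zeros((n, n))
H0[0, :] = -c[1:]
H0[1:, :-1] = np.eye(n - 1)

s = 1.0 + 1e-8          # shift near the eigenvalue 1
H, subdiags = shifted_qr_sequence(H0, s, m=10)
```

With $s$ at distance $10^{-8}$ from the eigenvalue $1$ and the rest of the spectrum at distance roughly $1$, the bottom subdiagonal contracts by roughly a factor $|\lambda_1-s|/|\lambda_2-s| \approx 10^{-8}$ per step, so a tiny decoupled entry appears after very few iterations.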
Using the forward error guarantees for $\mathsf{IQR}$ given in Lemma \ref{lem:multiiqrstability}, we can easily obtain a finite-arithmetic version of the above result.
\begin{lemma}[Decoupling in Finite Arithmetic]
\label{lem:decoupling}
Let $H\in \bC^{n\times n}$ be a Hessenberg matrix and $s\in D(0, C \|H\|)$. For every $\ell$ define $\ax{H_\ell} = \mathsf{IQR}(H,(z-s)^\ell)$ . Then, for each $m \ge 1$, if
\begin{equation}
\label{assump:decoupling}
\textbf{\textup{u}}_{} \leq
\min_{\ell\in [m]} \textbf{\textup{u}}_{\mathsf{IQR}}\big(n,\ell, \|H\|, \kappa_V(H), \mathrm{dist}(s,\Spec{H})\big)
\end{equation}
there is some $\ell \in [m]$ for which
\begin{align*}
|(\ax{H_{\ell}})_{n, n-1}| \le \frac{\kappa_V(H)^{\frac{1}{m}}\ds{s}{H}}{\P\Big[|Z_H-s| = \ds{s}{H}\Big]^{\frac{1}{2m}}} + 32\kappa_V(H) \|H\|\left(\frac{(2 + 2C)\|H\|}{\mathrm{dist}(s,\Spec{H})}\right)^\ell n^{1/2}\nu_{\mathsf{IQR}}(n)\textbf{\textup{u}}.
\end{align*}
\end{lemma}
\begin{proof}
Let $H_0, \dots, H_m$ be as in the statement of Lemma \ref{lem:exactdecoupling}, and let $\ell\in [m]$ be such that (\ref{eq:exactdecoupling}) holds. Now, (\ref{assump:decoupling}) ensures that we can apply Lemma \ref{lem:multiiqrstability} for the $\ell$ we have specified, yielding
$$\left|(H_\ell)_{n, n-1}- (\ax{H_\ell})_{n, n-1} \right| \leq \left\|H_\ell - \ax{H_\ell} \right\|_F \leq 32\kappa_V(H) \|H\|\left(\frac{(2 + 2C)\|H\|}{\mathrm{dist}(s,\Spec{H})}\right)^\ell n^{1/2}\nu_{\mathsf{IQR}}(n)\textbf{\textup{u}}.$$
Combining this with (\ref{eq:exactdecoupling}) the advertised bound follows.
\end{proof}
\subsection{Analysis of ${\mathsf{Decouple}}$}
In view of the above results we define the subroutine ${\mathsf{Decouple}}$ as follows.
\bigskip
\begin{boxedminipage}{\textwidth}
$${\mathsf{Decouple}}$$
\textbf{Input:} Hessenberg $H\in \bC^{n\times n}$, $\ax{\lambda}\in\bC$, and decoupling parameter $\omega>0$ \\
\textbf{Output:} $\next{H}\in \bC^{n\times n}$ Hessenberg matrix \\
\textbf{Requires:} $0< \ds{\ax{\lambda}}{H} \leq \omega/2$ \\
\textbf{Ensures:} $|\next{H}_{n, n-1}| \leq \omega$ and there exists a unitary $Q$ with $\|\hat{H}- Q^* HQ\|\leq 3.5 m \|H\|\nu_{\mathsf{IQR}}(n) \textbf{\textup{u}}$, for $m$ defined as in the statement of Proposition \ref{prop:decouple}
\begin{enumerate}
\item $\hat{H}\gets H$
\item \label{line:decwhileloop} \textbf{While} $|\next{H}_{n, n-1}| > \omega$
\begin{enumerate}[label= (\roman*)]
\item $\next{H} \gets \mathsf{IQR}(\next{H}, z-\ax{\lambda})$
\end{enumerate}
\item Output $\hat{H}$
\end{enumerate}
\end{boxedminipage}
\begin{proposition}[Guarantees for ${\mathsf{Decouple}}$]
\label{prop:decouple}
Assume that the requirements of ${\mathsf{Decouple}}$ are satisfied, that $H$ is diagonalizable, and that $d:=\ds{\ax{\lambda}}{H}$ and $p:=\P\big[|Z_H-\ax{\lambda}| = d\big]$ are positive. If
\begin{align}
\label{assum:decouple} \textbf{\textup{u}} & \leq \textbf{\textup{u}}_{{\mathsf{Decouple}}}\big(n, \|H\|, \kappa_V(H), p, d\big)
\\ & := \frac{\textbf{\textup{u}}_{\mathsf{IQR}}(n,m,\|H\|,\kappa_V(H),d)\omega}{16\cdot 5^m\cdot n^{1/2} \|H\|}, \nonumber
\end{align}
for $m= \left\lceil \frac{\log(\kappa_V(H)^2/p)}{2 \log(3\omega/4d)} \right\rceil$, then ${\mathsf{Decouple}}$ satisfies its guarantees and halts after at most $m$ calls to $\mathsf{IQR}$. Hence, it runs in at most
\begin{align*}
T_{{\mathsf{Decouple}}}(n, \kappa_V(H), p, d) := m T_{\mathsf{IQR}}(n, m) = O\left( \log(\kappa_V(H)/p)^2 n^2 \right)
\end{align*}
arithmetic operations.
\end{proposition}
\begin{proof}
First, if $\omega\geq \|H\|$ the while loop in line \ref{line:decwhileloop} terminates immediately and ${\mathsf{Decouple}}$ satisfies its guarantees after one arithmetic operation. Hence, we can assume $\omega\leq \|H\|$, which combined with the assumption $d \leq \omega/2$ gives $d\leq \|H\|/2$ and $\ax{\lambda} \in D(0, 1.5 \|H\|)$.
Now, for every $\ell$ define $\ax{H_\ell} := \mathsf{IQR}\big(H, (z-\ax{\lambda})^\ell\big)$, and note that (\ref{assum:decouple}) implies that
$$\textbf{\textup{u}}_{} \leq \textbf{\textup{u}}_{\mathsf{IQR}}(n, m, \|H\|, \kappa_V(H), d)= \min_{\ell\in [m]} \textbf{\textup{u}}_{\mathsf{IQR}}\big(n,\ell, \|H\|, \kappa_V(H), d\big),$$
where the last equality follows from $d\leq \|H\|/2$. Therefore, we can apply Lemma \ref{lem:decoupling} to get that there is some $\ell\in [m]$ for which
\begin{align*}
|(\ax{H_{\ell}})_{n, n-1}| & \le \left(\frac{\kappa_V(H)^2}{p}\right)^{\frac{1}{2m}}d + 32\kappa_V(H) \|H\|\left(\frac{5\|H\|}{d}\right)^\ell n^{1/2}\nu_{\mathsf{IQR}}(n)\textbf{\textup{u}}.
\end{align*}
Now, by our choice of $m$ we have that
$$\left(\frac{\kappa_V(H)^2}{p}\right)^{\frac{1}{2m}}d \leq \frac{3\omega}{4},$$
and by (\ref{assum:decouple}), because $\ell\leq m$ and $\omega\leq \|H\|$, we have that
$$ 32\kappa_V(H) \|H\|\left(\frac{5\|H\|}{d}\right)^\ell n^{1/2}\nu_{\mathsf{IQR}}(n)\textbf{\textup{u}}\leq \frac{\omega}{4}.$$
Combining the above inequalities we get that $|(\ax{H_\ell})_{n, n-1}| \leq \omega$ as we wanted to show. To prove the remaining claim use again that $\ax{\lambda}\in D(0, C \|H\|)$ for $C=1.5$, and apply Lemma \ref{lem:iqr-multi-backward-guarantees} to get that there is a unitary $Q$ for which
$$\|\ax{H_\ell}- Q^* H Q\|\leq 1.4 \ell (1 + C)\|H\|\nu_{\mathsf{IQR}}(n)\textbf{\textup{u}} \leq 3.5 m \|H\|\nu_{\mathsf{IQR}}(n)\textbf{\textup{u}},$$
as we wanted to show.
\end{proof}
\section{The Main Algorithm}
\label{sec:eig}
So far, all but one of the subroutines required to define $\mathsf{SmallEig}$ have been discussed. The remaining subroutine is the one used for deflation, denoted here by $\mathsf{Deflate}(H, \omega)$, which, on a Hessenberg input $H\in \bC^{n\times n}$, sets to zero each of the $n-1$ subdiagonal entries of $H$ whose absolute value is at most $\omega$, and returns the diagonal blocks $H_1, H_2, \dots$ of the resulting matrix.
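A minimal sketch of this deflation step (our own illustration; the paper treats $\mathsf{Deflate}$ as a black box): zero out every subdiagonal entry of absolute value at most $\omega$ and return the resulting diagonal blocks.

```python
import numpy as np

def deflate(H, omega):
    """Zero subdiagonal entries |H[i+1, i]| <= omega and return the
    diagonal blocks of the resulting block upper triangular matrix."""
    H = H.copy()
    n = H.shape[0]
    cuts = [0]
    for i in range(n - 1):
        if abs(H[i + 1, i]) <= omega:
            H[i + 1, i] = 0.0
            cuts.append(i + 1)
    cuts.append(n)
    return [H[a:b, a:b] for a, b in zip(cuts, cuts[1:])]

# Hessenberg matrix with one tiny subdiagonal entry.
H = np.array([[4.0, 1.0,   0.0, 0.0],
              [1.0, 3.0,   1.0, 0.0],
              [0.0, 1e-12, 2.0, 1.0],
              [0.0, 0.0,   1.0, 1.0]])
blocks = deflate(H, omega=1e-10)
approx_spec = np.concatenate([np.linalg.eigvals(B) for B in blocks])
```

Zeroing an entry of size at most $\omega$ perturbs the matrix by at most $\omega$ in norm, which is why this step only contributes $\omega$ to the backward error budget.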
We are now ready to define the main algorithm and prove its guarantees. Note that $n$ refers to the dimension of the original input matrix, which is used to set parameters throughout the recursive calls to $\mathsf{SmallEig}$.
\bigskip
\begin{boxedminipage}{\textwidth}
$$\mathsf{SmallEig}$$
\textbf{Input:} Complex matrix $M$, accuracy $\delta$, failure probability tolerance $\phi$ \\
\textbf{Global Data:} Dimension $n$, norm estimate $\scale$, pseudospectral parameter $\epsilon$, shattering parameter $\zeta$\\
\textbf{Output:} A multiset $\Lambda \subset \mathbb{C} $ \\
\textbf{Ensures:} $\Lambda$ is the spectrum of a matrix $\ax{M}$ with $\|M- \ax{M}\| \leq \delta \|M\|$.
\begin{enumerate}
\item \label{line:eigparameters} $\Delta \gets \frac{\delta \scale}{2}, \, \omega\gets \frac{\epsilon \wedge \Delta}{3 n} $, \, $\beta\gets \frac{\omega}{20} , \, p\gets \frac{ \phi \epsilon^2}{2 n^{5} \zeta^2} $, \,$\varphi\gets \frac{\phi}{2n}$,\, $\mathsf{correctness} \gets \texttt{false}$
\item \label{line:eigrhess} $H \gets \mathsf{RHess}(M)$
\item \label{line:eigwhile} \textbf{While} $\mathsf{correctness} = \texttt{false}$ \\
$[\ax{\lambda}, \mathsf{correctness}] \gets \mathsf{OneEig}(H, \beta,\varphi, p) $
\item $H \gets {\mathsf{Decouple}}(H, \ax{\lambda}, \omega)$
\item $[M_1, M_2, \dots ] \gets \mathsf{Deflate}(H, \omega)$
\item $\Lambda \gets \bigsqcup_i \mathsf{SmallEig}\big(M_i, \delta , \phi \big) $
\end{enumerate}
\end{boxedminipage}
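For intuition only, the following is a drastically simplified floating-point caricature of this decouple-and-deflate loop: it replaces the randomized shift selection of $\mathsf{OneEig}$ with a standard Wilkinson shift, uses plain NumPy QR in place of $\mathsf{IQR}$, and ignores all precision bookkeeping, so it carries none of the guarantees proved below.

```python
import numpy as np

def small_eig_sketch(H, tol=1e-12, max_steps=10_000):
    """Eigenvalues of an upper Hessenberg matrix by shifted QR: decouple the
    bottom subdiagonal entry, then deflate one eigenvalue at a time.
    Illustrative only: Wilkinson shifts replace the paper's random shifts."""
    H = np.array(H, dtype=complex)
    m = H.shape[0]
    eigs = []
    for _ in range(max_steps):
        if m == 1:
            break
        if abs(H[m - 1, m - 2]) <= tol:   # decoupled: deflate one eigenvalue
            eigs.append(H[m - 1, m - 1])
            m -= 1
            continue
        # Wilkinson shift: eigenvalue of trailing 2x2 closest to H[m-1, m-1]
        a, b = H[m - 2, m - 2], H[m - 2, m - 1]
        c, d = H[m - 1, m - 2], H[m - 1, m - 1]
        disc = np.sqrt((a + d) ** 2 - 4 * (a * d - b * c) + 0j)
        r1, r2 = (a + d + disc) / 2, (a + d - disc) / 2
        s = r1 if abs(r1 - d) <= abs(r2 - d) else r2
        Q, R = np.linalg.qr(H[:m, :m] - s * np.eye(m))
        H[:m, :m] = R @ Q + s * np.eye(m)
    eigs.append(H[0, 0])
    return eigs

# Companion matrix of (z-1)...(z-5): Hessenberg with spectrum {1,...,5}.
coeffs = np.poly([1.0, 2.0, 3.0, 4.0, 5.0])
n = 5
H = np.zeros((n, n))
H[0, :] = -coeffs[1:]
H[1:, :-1] = np.eye(n - 1)
spec = sorted(e.real for e in small_eig_sketch(H))
```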
\bigskip
\begin{theorem}
\label{thm:mainquantitative}
Let $M$ be the input matrix and $\delta \in (0, 1)$. Let $\beta, \Delta$ and $p$ be as in line \ref{line:eigparameters} of $\mathsf{SmallEig}$. Assume that the global data satisfies $n=\dim(M)$, $\scale/2 \leq \|M\|\leq \scale$, and that $\Lambda_{2\epsilon}(M)$ is $\zeta$-shattered. If
\begin{align}
\label{assum:eig}
\textbf{\textup{u}} & \leq \textbf{\textup{u}}_{\mathsf{SmallEig}}(n, \scale, \epsilon, \zeta, \delta, \phi)
\\ & := \frac{\epsilon}{6\cdot 10^3 (c_{\mathsf{h}}\vee c_{\mathsf{H}}\vee c_{\mathsf{root}})\nu_{\mathsf{IQR}}(n) n \zeta } \left( \frac{\eta_1}{44\scale}\right)^{2m_1}
\nonumber
\end{align}
where
\begin{align}
&m_1=\left \lceil 12\log(n\zeta/\epsilon)+ 6\log(1/p)\right\rceil =O\big(\log(n\zeta/\epsilon\phi)\big) \nonumber
\\ \label{eq:etaandm} \text{and} \quad &\eta_1= \left( \frac{\epsilon\wedge \Delta}{300n}\right) \left( \frac{\phi}{24 n\log(18\scale n/(\epsilon\wedge \Delta))} \right)^{1/2} = O\left( \frac{(\epsilon\wedge \Delta)\phi^{1/2} }{n \log(\scale n/(\epsilon\wedge \Delta))^{1/2}}\right)
\end{align}
Then, with probability at least $1 - \phi$, $\mathsf{SmallEig}$ satisfies its guarantees, and in this event runs in at most
\begin{align*}
T_{\mathsf{SmallEig}}(n, \epsilon, \zeta, \delta) &= (n - 1)(T_{\mathsf{RHess}}(n)+ T_{\mathsf{OneEig}}(n, \scale, \epsilon, \zeta, p, \beta)+ T_{{\mathsf{Decouple}}}(n, \zeta n/\epsilon, p , \beta) )
\\ &= O\Big( n^4 + n^3 \log(\zeta n/\epsilon\phi)\big( \log(\scale n/(\epsilon\wedge \Delta))+ \log(\zeta n/\epsilon \phi) \big)
\\ & \quad \quad + \log (\scale n/(\epsilon\wedge \Delta))\log(\zeta n/\epsilon \phi) \log \log(\zeta n/\epsilon \phi)\Big)
\end{align*}
arithmetic operations.
\end{theorem}
\subsection{Preservation of the Norm and Pseudospectral Parameters}
Before delving into the analysis of $\mathsf{SmallEig}$ we will show that the global data provides valuable information throughout the execution of the algorithm. The first observation here is that the only subroutine of $\mathsf{SmallEig}$ that accesses the global data is $\mathsf{OneEig}$, so, to ensure correctness, the only requirements regarding the global data that need to be fulfilled are the ones ensured by the following lemma.
\begin{lemma}
\label{lem:controloftheparameters}
Suppose that the assumptions of Theorem \ref{thm:mainquantitative} are satisfied and let $H'$ and $M'$ be any values acquired by the variables $H$ and $M$. If every while loop (line \ref{line:eigwhile}) involved in the production of $H'$ and $M'$ ended with $\mathsf{OneEig}$ terminating successfully, then:
\begin{enumerate}[label=\roman*)]
\item $\frac{\omega}{2}\leq \min\{ \|H'\|, \|M'\|\} \leq \max\{\|H'\|, \|M'\|\} \leq 2\scale$.
\item $\Lambda_{\epsilon}(H')$ and $\Lambda_{\epsilon}(M')$ are $\zeta$-shattered.
\end{enumerate}
\end{lemma}
To prove the above lemma we will need the following results to control the pseudospectral parameters after each deflation step (controlling the norm after deflation is trivial).
\begin{lemma}[Lemma 5.9 in \cite{banks2020pseudospectral}] \label{lem:compress-shattered}
Suppose $P$ is a spectral projector of $M\in\bC^{n\times n}$ of rank $r\leq n$. Let $S\in \bC^{n\times r}$ be such that $S^*S=I_r$ and that its columns span the same space as the columns of $P$. Then for every $\epsilon>0$,
$$\Lambda_\epsilon(S^*MS)\subset \Lambda_\epsilon(M).$$
Alternatively, the same pseudospectral inclusion holds if again $S^*S=I_r$ and, instead, the columns of $S$ span the same space as the rows of $P$.
\end{lemma}
Combining Lemmas \ref{lem:decrementeps} and \ref{lem:compress-shattered} we can show the following.
\begin{lemma}[Pseudospectrum After Deflation]
\label{lem:deflationpseudospectrum}
Let $H\in \bC^{n\times n}$ be a Hessenberg matrix and $1\leq r\leq n-1$. Let $H_{-}$ and $H_{+}$ be its upper-left $r\times r$ and lower-right $(n-r)\times (n-r)$ corners, respectively. If $|H_{r+1, r}|\leq \epsilon'$ then
$$\Lambda_{\epsilon-\epsilon'} (H_-) \cup \Lambda_{\epsilon-\epsilon'}(H_+) \subset \Lambda_\epsilon(H).$$
\end{lemma}
\begin{proof}
Let $H_0$ be the matrix obtained by zeroing out the $(r+1, r)$ entry of $H$. By Lemma \ref{lem:decrementeps} and the assumption $|H_{r+1, r}|\leq \epsilon'$ we get $\Lambda_{\epsilon -\epsilon'}(H_0)\subset \Lambda_\epsilon(H)$.
We will begin by showing that $\Lambda_{\epsilon-\epsilon'}(H_+)\subset \Lambda_{\epsilon-\epsilon'} (H_0)$. Let $w\in \bC^{n-r}$ be any left eigenvector of $H_+$ and note that, since $H_0$ is block upper triangular, $0_{r}\oplus w \in \bC^{n}$ is a left eigenvector of $H_0$. Hence, there is a spectral projector $P$ of $H_0$ whose left eigenvectors (equivalently its rows) span the space $\text{span}\{e_{n-r+1}, \dots, e_n\}$. Hence the span of the columns of the $n\times (n-r)$ matrix $$S= \left( \begin{array}{cc}
0 \\
I_{n-r}
\end{array} \right)$$
coincides with the span of the rows of $P$. So, by Lemma \ref{lem:compress-shattered}, $\Lambda_{\epsilon-\epsilon'} (H_+) = \Lambda_{\epsilon-\epsilon'}(S^* H_0 S) \subset \Lambda_{\epsilon-\epsilon'}(H_0)$.
The proof that $\Lambda_{\epsilon-\epsilon'}(H_-) \subset \Lambda_{\epsilon-\epsilon'}(H_0)$ is very similar, with the sole difference that this time one should look at the right eigenvectors of $H_-$, and work with columns (rather than rows) of the spectral projector.
\end{proof}
We can now proceed to the proof of the lemma.
\begin{proof}[Proof of Lemma \ref{lem:controloftheparameters}]
First note that in each call to $\mathsf{SmallEig}$ the working matrix gets modified exactly once by each of the subroutines $\mathsf{RHess}$, ${\mathsf{Decouple}}$ and $\mathsf{Deflate}$. So, there is a sequence of the form
$$M=M_1, F_1, F_1', M_2, F_2, F_2', \dots$$ that ends in $H'$ (respectively $M'$), and such that $F_i =\mathsf{RHess}(M_i)$, $F_i'={\mathsf{Decouple}}(F_i, \ax{\lambda}_i, \omega)$, and $M_{i+1}$ is one of the matrices in the output of $\mathsf{Deflate}(F_i', \omega)$. Moreover, by the assumption that $\mathsf{OneEig}$ terminated successfully at the end of each while loop, we have that
\begin{equation}
\label{eq:distassumption}
\eta_1 \leq \ds{\ax{\lambda}_i}{F_i}\leq \beta.
\end{equation}
We will show by induction that for every $i\leq n$ the pseudospectra $\Lambda_{2\epsilon-\epsilon_{i, 0}}(M_i), \Lambda_{2\epsilon-\epsilon_{i, 1}}(F_i)$ and $\Lambda_{2\epsilon-\epsilon_{i, 2}}(F_i')$ are $\zeta$-shattered, where
$$\epsilon_{i, j} := (3(i-1)+j)\omega= \frac{3(i-1)+j}{3n}(\Delta \wedge \epsilon),$$
and that $\|M_i\|\leq \scale + \epsilon_{i, 0}, \|F_i\|\leq \scale + \epsilon_{i, 1}$ and $\|F_i'\|\leq \scale+\epsilon_{i, 2}$. Note that in particular this will imply that $\epsilon$-pseudospectra of the $M_i, F_i$ and $F_i'$ are $\zeta$-shattered, and their norms are bounded by $2\scale$ (since $\epsilon\leq \scale$).
That $M_1=M$ has the advertised pseudospectral and norm properties follows from the assumption about the global data. We can then induct:
\begin{itemize}
\item \emph{Effect of $\mathsf{RHess}$.} Assume that $\Lambda_{2\epsilon-\epsilon_{i, 0}}(M_i)$ is $\zeta$-shattered and $\|M_i\| \leq \scale +\epsilon_{i, 0}$. Because $F_i =\mathsf{RHess}(M_i)$, and since (\ref{assum:eig}) implies that
$$\textbf{\textup{u}} \leq \textbf{\textup{u}}_{\mathsf{RHess}}(n) \leq \textbf{\textup{u}}_{\mathsf{RHess}}(\dim(M_i)), $$
we can apply Proposition \ref{prop:rhformguarantees} to get that $\Lambda_{2\epsilon-\epsilon_{i, 0}-\epsilon'}(F_i)$ is $\zeta$-shattered for
\begin{align*}
\epsilon' & = c_{\mathsf{RH}} \|M_i\| \dim(M_i)^{5/2} \textbf{\textup{u}}
\\ & \leq 2 c_{\mathsf{RH}} \scale n^{5/2} \textbf{\textup{u}} & &\dim(M_i)\leq n,\, \|M_i\|\leq 2\scale
\\ & \leq \omega && \text{by (\ref{assum:eig})}.
\end{align*}
So, it follows that $\Lambda_{2\epsilon-\epsilon_{i, 1}}(F_i)$ is $\zeta$-shattered. And in the same way we can get $\|F_i\|\leq \scale + \epsilon_{i, 1}$.
\item \emph{Effect of ${\mathsf{Decouple}}$.} Now assume that $\Lambda_{2\epsilon-\epsilon_{i, 1}}(F_i)$ is $\zeta$-shattered and $\|F_i\|\leq \scale + \epsilon_{i, 1}$. Let $p$ and $\beta$ be as in line \ref{line:eigparameters} of $\mathsf{SmallEig}$ and define
\begin{equation}
\label{eq:defofmtwo}
m_2:= \left\lceil \frac{\log(\zeta^2n^2/p \epsilon^2)}{2 \log(3\omega/4\beta)} \right\rceil = \left\lceil \frac{\log(\zeta^2n^2/p \epsilon^2)}{2 \log(15)} \right\rceil.
\end{equation}
Now, because $m_2 \leq \lceil .3 \log(\zeta n/\epsilon)+ .15 \log(1/p) \rceil$, it is clear that $m_2\leq m_1$, and then it is easy to see that (\ref{assum:eig}) implies
$$\textbf{\textup{u}} \leq \textbf{\textup{u}}_{{\mathsf{Decouple}}}(n, 2\scale, \zeta n /\epsilon, p, \eta_1).$$
So, because $\|F_i\|\leq 2\scale$ (by assumption), $\kappa_V(F_i)\leq \zeta n/\epsilon$ (since $\Lambda_\epsilon(F_i)$ is $\zeta$-shattered, by Lemma \ref{lem:kappavfromshattering}), and $\ds{\ax{\lambda}}{F_i}\geq \eta_1$ (by the assumption in (\ref{eq:distassumption})), we can apply Proposition \ref{prop:decouple} to get that there exists a unitary matrix $Q$ for which
\begin{align*}
\|F_i'-Q^* F_iQ\| & \leq 3.5 m_1 \|F_i\|\nu_{\mathsf{IQR}}(n) \textbf{\textup{u}}
\\ & \leq 7 m_1 \scale \nu_{\mathsf{IQR}}(n) \textbf{\textup{u}} && \|F_i\| \leq 2\scale
\\ & \leq \omega && \text{by (\ref{assum:eig})}.
\end{align*}
Then, by Lemma \ref{lem:decrementeps} and the assumption that $\Lambda_{2\epsilon-\epsilon_{i, 1}}(F_i)$ is $\zeta$-shattered, it follows that $\Lambda_{2\epsilon-\epsilon_{i, 2}}(F_i')$ is $\zeta$-shattered. And because the norm is preserved under unitary conjugation we also get that $\|F_i'\|\leq \scale + \epsilon_{i, 2}$.
\item \emph{Effect of $\mathsf{Deflate}$. } Assume that $\Lambda_{2\epsilon-\epsilon_{i, 2}}(F_i')$ is $\zeta$-shattered, and recall that $M_{i+1}$ is an output of $\mathsf{Deflate}(F_i', \omega)$. Then, by Lemma \ref{lem:deflationpseudospectrum} we have that $$\Lambda_{2\epsilon-\epsilon_{i+1, 0}}(M_{i+1})=\Lambda_{2\epsilon-\epsilon_{i, 2} -\omega}(M_{i+1})\subset \Lambda_{2\epsilon-\epsilon_{i, 2}}(F_i')$$ and hence $\Lambda_{2\epsilon-\epsilon_{i+1, 0}}(M_{i+1})$ is $\zeta$-shattered. Similarly, we can note that $\|M_{i+1}\|\leq \|F_i'\|+\omega \leq \scale+ \epsilon_{i+1, 0}$, which concludes the induction.
\end{itemize}
Now, since the depth of the recursion tree of $\mathsf{SmallEig}$ is at most $n$, and we have proven the above claim for any $M_i, F_i, F_i'$ with $i\leq n$, we can conclude that $\Lambda_{\epsilon}(H')$ (resp. $\Lambda_{\epsilon}(M')$) is $\zeta$-shattered and $\|H'\|\leq 2\scale$ (resp. $\|M'\|\leq 2\scale$), as we wanted to show.
Finally, to show that $\omega/2\leq \|H'\|$ (resp. $\omega/2 \leq \|M'\|$), first note that $\omega \leq \|M_i\| $ for every $i$. Indeed, when $i=1$ we can use the assumption $\delta\leq 1$, which yields $ \Delta \leq \|M\|$, and combine this with $ \omega\leq \Delta$. For $i>1$ note that $M_i$ is an output of $\mathsf{Deflate}(F_{i-1}', \omega)$, and hence its subdiagonal entries are guaranteed to have absolute value greater than $\omega$, which implies that $\omega\leq \|M_i\|$. We can then proceed as above (using slightly stronger bounds) to show that $\|M_i-F_i\|\leq \omega/2$ and $\|M_i-F_i'\|\leq \omega/2$. So the proof is concluded.
\end{proof}
\subsection{Analysis of $\mathsf{SmallEig}$}
We are now ready to prove Theorem \ref{thm:mainquantitative}. For clarity, let us divide the proof in several parts.
\paragraph{Backward stability.} Assume that $\mathsf{SmallEig}$ terminates and outputs $\Lambda$. Moreover, assume that when running $\mathsf{SmallEig}$, at the end of all the while loops from line \ref{line:eigwhile}, the subroutine $\mathsf{OneEig}$ terminated successfully (later we will prove that this occurs with probability at least $1-\phi$).
We will show that $\Lambda$ is the spectrum of a matrix $\ax{M}$ with $\|\ax{M}-M\|\leq \Delta$ (which combined with the assumption about the global data gives $\|\ax{M}-M\|\leq \delta \|M\|$). To be precise we will show an equivalent statement, namely that $\Lambda$ is the spectrum of a matrix that is at distance at most $\Delta$ from the class of matrices that are unitarily equivalent to $M$. To do this, for the purpose of the analysis, it will be convenient to imagine that during the deflation process (after setting to zero the small subdiagonals) instead of cutting out the blocks on the diagonal and considering them as separate subproblems, one keeps the full $n\times n$ matrix and continues to operate on the full matrix in the obvious way. With this view point the algorithm terminates when the working matrix becomes an upper triangular matrix, and its diagonal elements are precisely the elements of $\Lambda$.
In the proof of Lemma \ref{lem:controloftheparameters} it was shown that the only subroutines that move the working matrix away from the unitary orbit of the original matrix are $\mathsf{RHess}, {\mathsf{Decouple}}$ and $\mathsf{Deflate}$. Moreover, it was shown that when each of these subroutines is applied, the backward error incurred is at most of size $\omega$. So we only need to bound the number of times these subroutines are called. To do this consider $\calT_n(M)$, the recursion tree of $\mathsf{SmallEig}$, where the input matrix $M$ is placed at the root, and the children of any vertex $v$ are in one-to-one correspondence with the matrices output after running $\mathsf{Deflate}$ on the matrix associated to $v$. It is clear from the construction that leaves correspond to matrices of dimension 1, and internal vertices (vertices that are not leaves) correspond to higher dimensional matrices. Now note that for any internal vertex $v$ it holds that the sum of the dimensions of the matrices associated to the children of $v$ equals the dimension of the matrix associated to $v$. Then, by induction on $n$ it follows that $\calT_n(M)$ has at most $n-1$ internal vertices. And, since the relevant subroutines are only called once per internal vertex, we conclude that each of these subroutines was called at most $n-1$ times. Hence, the ultimate deviation from the original unitary equivalence class is at most
$$3(n-1)\omega = \frac{3(n-1)}{3n}(\epsilon\wedge \Delta) \leq \Delta,$$
as we wanted to show.
\paragraph{Precision requirements.} To ensure that the precision has been set to be small enough, so that the precision requirements of each subroutine are satisfied throughout the iteration, we will show that
$$\textbf{\textup{u}}_{\mathsf{SmallEig}}(n, \scale, \epsilon, \zeta, \delta, \phi) \leq \min \{ \textbf{\textup{u}}_{\mathsf{RHess}}(n), \textbf{\textup{u}}_{\mathsf{OneEig}}(n, 2\scale, \epsilon, \zeta, p, \beta, \varphi), \textbf{\textup{u}}_{{\mathsf{Decouple}}}(n, 2\scale, \zeta n/\epsilon, p, \eta_1)\}.$$
First, that $\textbf{\textup{u}}_{\mathsf{SmallEig}}(n, \scale, \epsilon, \zeta, \delta, \phi)\leq \textbf{\textup{u}}_{\mathsf{RHess}}(n)$ is trivial. On the other hand, by definition we have
\begin{align*}
\textbf{\textup{u}}_{\mathsf{OneEig}}(n, 2\scale, \epsilon, \zeta, p, \beta, \varphi) & = \textbf{\textup{u}}_{\mathsf{DistSpec}}\big(n,m_1, 10,2 \scale, n\zeta/\epsilon,\eta_1\big) && \text{for } m_1, \eta_1 \text{ as in (\ref{eq:etaandm})}
\\ & \geq \frac{\epsilon}{ 6\cdot 10^3c_{\mathsf{root}} \cdot \nu_{\mathsf{IQR}}(n) n \zeta }\left(\frac{\eta_1}{44\scale}\right)^{2m_1} && \text{(\ref{assum:oneig}) and (\ref{assum:comptau})}
\end{align*}
So from (\ref{assum:eig}) it is clear that $ \textbf{\textup{u}}_{\mathsf{SmallEig}}(n, \scale, \epsilon, \zeta, \delta, \phi)\leq \textbf{\textup{u}}_{\mathsf{OneEig}}(n, 2\scale, \epsilon, \zeta, p, \beta, \varphi)$. Finally
\begin{align*}
\textbf{\textup{u}}_{{\mathsf{Decouple}}}\big(n, 2\scale, \zeta n /\epsilon, p, \eta_1\big) & = \frac{\textbf{\textup{u}}_{\mathsf{IQR}}(n,m_2,2\scale,\zeta n /\epsilon,\eta_1)\omega}{16\cdot 5^{m_2}\cdot n^{1/2} 2\scale} && \text{for }m_2 \text{ as in (\ref{eq:defofmtwo})}
\\ & = \frac{\omega \epsilon }{16 \cdot 8 \nu_{\mathsf{IQR}}(n) n^{3/2} \zeta \cdot 2\scale }\left(\frac{\eta_1}{5\cdot 2\scale}\right)^{m_2} && \text{from (\ref{assum:machvsp})}
\end{align*}
And because $m_2 \leq m_1$, from (\ref{assum:eig}) it is clear that $\textbf{\textup{u}}_{\mathsf{SmallEig}}(n, \scale, \epsilon, \zeta, \delta, \phi) \leq \textbf{\textup{u}}_{{\mathsf{Decouple}}}\big(n, 2\scale, \zeta n /\epsilon, p, \eta_1\big)$.
\paragraph{Probability of success.} Observe that the only randomized subroutines of $\mathsf{SmallEig}$ are $\mathsf{RHess}$ and $\mathsf{OneEig}$. First we will provide a lower bound for the probability that the guarantees of $\mathsf{RHess}$ and $\mathsf{OneEig}$ are satisfied every time these subroutines are called.
Combining Lemma \ref{lem:controloftheparameters} and Proposition \ref{prop:rhformguarantees} we get that, if $\mathsf{OneEig}$ has succeeded every time it has been called, then for any value $H'$ acquired by the variable $H$ in line \ref{line:eigrhess} of $\mathsf{SmallEig}$ we have, for any $t>0$, with probability at least $1-nt^2$, that
$$\min_{\lambda \in \Spec{H'}} \P[Z_{H'}=\lambda] \geq \left(\frac{\epsilon t}{n^{3/2}\zeta}\right)^2. $$
In particular (for $t^2= \phi/2n^2$) we get that with probability at least $1-\phi/2n$ it holds that
$$\min_{\lambda \in \Spec{H'}} \P[Z_{H'}=\lambda] \geq p$$
for $p$ defined as in line \ref{line:eigparameters}. Under this event, and because of Lemma \ref{lem:controloftheparameters} and because the precision is high enough, the requirements of $\mathsf{OneEig}$ will be met in line \ref{line:eigwhile}, and hence (for this call) $\mathsf{OneEig}$ will succeed with probability at least $1-\varphi =1- \phi/2n$.
Therefore, every time $\mathsf{SmallEig}$ is called, both $\mathsf{RHess}$ and $\mathsf{OneEig}$ will satisfy their guarantees with probability at least $1-\phi/n$. Moreover, from the backward stability proof we know that the recursion tree for $\mathsf{SmallEig}$ has at most $n-1$ internal vertices. Therefore, we can conclude that all the calls to $\mathsf{RHess}$ and $\mathsf{OneEig}$ will succeed with probability at least $1-\phi$, as we wanted to show.
Now, under the assumption that $\mathsf{RHess}$ and $\mathsf{OneEig}$ succeed every time, we have that the values of the variables $H$ and $\ax{\lambda}$ that are passed every time to ${\mathsf{Decouple}}$ satisfy the requirements of this subroutine, and by our previous discussion we know that the precision requirements for ${\mathsf{Decouple}}$ are also met. Therefore, we can apply Proposition \ref{prop:decouple} to argue that the matrix $H$ will be decoupled in a finite amount of time, and by Lemma \ref{lem:controloftheparameters} we know that the pseudospectral parameters and norm guarantees will also be maintained.
\paragraph{Running time.} From the above discussion we know that with probability at least $1-\phi$, $\mathsf{SmallEig}$ terminates successfully and moreover, throughout the algorithm, every call to $\mathsf{RHess}, \mathsf{OneEig}$ and ${\mathsf{Decouple}}$ will be successful, and the requirements of these subroutines will always be met. Under this event (recalling that each subroutine is called at most $n-1$ times), by Propositions \ref{prop:rhformguarantees}, \ref{prop:findone} and \ref{prop:decouple}, and using that the running times of the subroutines are monotone in the dimension of the input, we get that the running time of $\mathsf{SmallEig}$ is at most
$$(n-1)(T_{\mathsf{RHess}}(n)+ T_{\mathsf{OneEig}}(n, \scale, \epsilon, \zeta, p, \beta)+ T_{{\mathsf{Decouple}}}(n, \zeta n/\epsilon, p , \beta) ).$$
The proof is concluded by writing $p, \beta$ and $\eta_1$ as a function of $\epsilon, \zeta$, $\scale$ and $\delta$, and using the big-$O$ bounds provided in Propositions \ref{prop:rhformguarantees}, \ref{prop:findone} and \ref{prop:decouple}.
\subsection{Pseudospectral Shattering and Proof of the Main Result}
\label{sec:shatandain}
Note that Theorem \ref{thm:mainquantitative} assumes that $\mathsf{SmallEig}$ has access to the parameters $\epsilon$ and $\zeta$ in the global data, which control both the minimum eigenvalue gap and the eigenvector condition number of the input matrix $M\in \bC^{n\times n}$. In order to ensure that $\mathsf{SmallEig}$ works on every input (without having access to $\epsilon$ and $\zeta$), instead of running the algorithm on $M$ we will run it on $M+\gamma G_n$ (for $\gamma = \Theta (\delta)$ and $G_n$ a normalized complex Ginibre matrix\footnote{That is, an $n\times n$ random matrix with independent centered complex Gaussian entries of variance $1/n$.}), and exploit the following result, whose proof we defer to Appendix \ref{sec:shat}.\footnote{A version of this result was already proven and used in a similar context in \cite{banks2020pseudospectral}. However, since the notion of \emph{shattered pseudospectrum} from that paper differs from the one used here, we were not able to directly apply the aforementioned result.}
\begin{lemma}[Shattering]
\label{lem:shattering}
For any $M\in \bC^{n\times n}$ and $\varphi\in (0, 1/2), \gamma\in (0, \|M\|/2)$, we have that, with probability at least $1-\varphi$, $\Lambda_\epsilon(M+\gamma G_n)$ is $\zeta$-shattered for
$$\zeta :=\frac{\varphi^{1/2}\gamma}{2\sqrt{3} n^{3/2}} \quad \text{and} \quad \epsilon := \frac{\gamma^2 \varphi}{180 \sqrt{2} \|M\| \log(1/\varphi) n^{3}}.$$
\end{lemma}
The main result of this paper then follows from combining Lemma \ref{lem:shattering} with Theorem \ref{thm:mainquantitative}.
\begin{proof}[Proof of Theorem \ref{thm:main}] Start by recalling the following well-known tail bound for the norm of a Ginibre matrix (see e.g. \cite[Lemma 2.2]{banks2021gaussian}):
\begin{equation}
\label{eq:ginibrenormbound}
\P[\|G_n\| \geq t]\leq 2\exp\big(-n(t-2\sqrt{2})^2\big), \quad \forall t \geq 2\sqrt{2}.
\end{equation}
Then, for $W:= 2\sqrt{2}+\frac{1}{n^{1/2}}\log(6/\phi)^{1/2} $ we have that $$\P[\|G_n\|\leq W]\geq 1-\phi/3.$$
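Indeed, this choice of $W$ makes the tail bound evaluate exactly to the desired failure probability:
```latex
\begin{align*}
\P[\|G_n\| \geq W] & \leq 2\exp\big(-n(W-2\sqrt{2})^2\big) && \text{by (\ref{eq:ginibrenormbound})}
\\ & = 2\exp\big(-\log(6/\phi)\big) && W-2\sqrt{2} = n^{-1/2}\log(6/\phi)^{1/2}
\\ & = \phi/3.
\end{align*}
```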
Then, given a norm estimate $\scale$ satisfying $\scale/2 \leq \|M\|(1\pm \delta/2) \leq \scale$, we will choose $\gamma:= \frac{\delta \scale}{4W}$, so that
\begin{align*}
\P\left[\gamma \|G_n\|\leq \frac{\delta \|M\|}{2} \right] & = \P\left[ \|G_n\| \leq \frac{2 \|M\| W }{\scale} \right]
\\ & \geq \P\left[ \|G_n\| \leq W \right] && \scale/2 \leq \|M\|
\\ & \geq 1-\phi/3.
\end{align*}
Moreover, for this choice of $\gamma$, by Lemma \ref{lem:shattering} we have that, with probability at least $1-\phi/3$, $\Lambda_\epsilon(M+\gamma G_n)$ is $\zeta$-shattered for
$$\zeta :=\frac{\phi^{1/2}\gamma}{2\sqrt{6} n^{3/2}} \quad \text{and} \quad \epsilon := \frac{\gamma^2 \phi}{540\sqrt{2}\|M\| \log(1/\phi) n^{3}}.$$
On the other hand, conditioning on $\|G_n\|\leq W$ and $\Lambda_\epsilon(M+\gamma G_n)$ being $\zeta$-shattered, we have that $\mathsf{SmallEig}(M+\gamma G_n, \delta/2, \phi/3)$ succeeds with probability at least $1-\phi/3$ when using $n, \epsilon, \zeta$ and $\Sigma$ as global data and provided that $\textbf{\textup{u}}$ satisfies (\ref{assum:eig}), in which case the output $\Lambda$ will be a $\delta$-backward approximation of the spectrum of $M$.
Hence, using a union bound we get that with probability at least $1-\phi$, $\mathsf{SmallEig}(M+\gamma G_n, \delta/2, \phi/3)$ provides a $\delta$-accurate answer, and from Theorem \ref{thm:mainquantitative} we have that the running time and required bits of precision are as in the statement of Theorem \ref{thm:main}.
\end{proof}
\section{Introduction}
In Part I of this series \cite{banks2021global} we gave a family of shifting strategies, of some suitable degree $k$, for which the Hessenberg shifted QR algorithm converges globally and rapidly in exact arithmetic on nonsymmetric matrices with controlled eigenvector condition number. Our analysis relied on the existence of an algorithm, which we called a \emph{Ritz value finder}, which on a matrix input $H$ and accuracy parameter $\r>1$ would compute a set $\calR = \{r_1, \dots, r_k\}$ of $\r$-optimal Ritz values for $H$, i.e. a set of complex numbers satisfying
\begin{equation*}
\left\|e_n^* \prod_{i\le k}(H-r_i)\right\|^{1/k}\le \r \min_{p\in\calP_k} \|e_n^*p(H)\|^{1/k},
\end{equation*}
where $\calP_k$ denotes the set of monic polynomials of degree $k$. Then, in Part II \cite{banks2022II} we showed that any algorithm that could solve the forward-error eigenproblem could be used (on the lower-right $k\times k$ corner of $H$) to build a Ritz value finder. In this paper (Part III of the series) we complete our analysis by presenting a randomized algorithm $\mathsf{SmallEig}$, based on shifted inverse iteration, that can solve this eigenvalue problem on any input $M\in \bC^{n\times n}$ using a controlled amount of precision in floating point arithmetic. Our main result can be stated as follows.
\begin{theorem}
\label{thm:main}
On any input matrix $M\in \bC^{n\times n}$, accuracy parameter $\delta >0$, and failure probability tolerance $\phi>0$, the algorithm $\mathsf{SmallEig}(M, \delta, \phi)$ produces, with probability $1-\phi$, the eigenvalues of a matrix $\ax{M}\in \bC^{n\times n}$ with $\|M-\ax{M}\|\leq \delta \|M\|$, using at most
$$O\big(n^4+ n^3 \log (n/\delta\phi)^2+\log(n/\delta\phi)^2 \log \log(n/\delta\phi) \big)$$
arithmetic operations on a floating point machine with $O(\log(n/\delta \phi)^2)$ bits of precision.
\end{theorem}
The above theorem shows that, when implemented on a floating point machine with $O(\log(n/\delta \phi)^2)$ bits of precision, the algorithm $\mathsf{SmallEig}$ is $\delta$-backward stable. However, as mentioned above, the shifting strategy analyzed in \cite{banks2021global, banks2022II} requires an algorithm that provides forward approximations of the eigenvalues of a (small) matrix. The following result \cite[Theorem 39.1]{bhatia2007perturbation} turns any backward error algorithm for the eigenproblem into a forward error algorithm, at the cost of multiplying the number of bits of precision by roughly a factor of $n$ (which might be tolerable for small $n$, but prohibitively expensive otherwise).
\begin{lemma}
\label{thm:backward-to-forward-eig}
Let $M, \ax{M} \in \bC^{n\times n}$ be any two matrices. Then there are labellings $\lambda_1,...,\lambda_n$ and $\ax\lambda_1,...,\ax\lambda_n$ of the eigenvalues of $M$ and $\ax M$, respectively, so that
$$
\max_i |\lambda_i - \ax\lambda_i| \le 4(\|M\| + \|\ax M\|)^{1 - 1/n}\|M - \ax M\|^{1/n}.
$$
\end{lemma}
In particular, for every $\beta \le 1$ one can produce $\beta$-forward approximate eigenvalues by calling $\mathsf{SmallEig}$ with accuracy
$$
\delta = \left( \frac{\beta}{12} \right)^n,
$$
as $\|\ax M\| \le (1+\delta)\|M\| \le 2\|M\|$. This yields the following corollary.
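For concreteness, chaining the bound of Lemma \ref{thm:backward-to-forward-eig} with this choice of $\delta$ gives
```latex
\begin{align*}
\max_i |\lambda_i - \ax\lambda_i| & \leq 4(\|M\| + \|\ax M\|)^{1 - 1/n}\|M - \ax M\|^{1/n}
\\ & \leq 4(3\|M\|)^{1-1/n}(\delta \|M\|)^{1/n} && \|\ax M\| \leq 2\|M\|, \ \|M-\ax M\| \leq \delta \|M\|
\\ & \leq 12\, \delta^{1/n}\|M\| = \beta \|M\| && \delta = (\beta/12)^n.
\end{align*}
```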
\begin{corollary}
On any input matrix $M\in \bC^{n\times n}$ with eigenvalues $\lambda_1, \dots, \lambda_n$, any accuracy parameter $\beta >0$, and failure probability tolerance $\phi>0$, one can use $\mathsf{SmallEig}$ to find, with probability $1-\phi$, approximate eigenvalues $\ax{\lambda}_1, \dots, \ax \lambda_n\in \bC$ such that
$$\max_{i} |\lambda_i-\ax{\lambda}_i| \leq \beta \|M\|,$$
using at most
$$O\big(n^5 \log (n/\beta\phi)^2 +n^2\log(n/\beta\phi)^2 \log (n \log(n/\beta\phi)) \big)$$
arithmetic operations on a floating point machine with $O(n^2 \log(n/\beta \phi)^2)$ bits of precision.
\end{corollary}
\subsection{Overview of the Algorithm and Intermediate Results}
\label{sec:introoverview}
The main subroutine of $\mathsf{SmallEig}$, which we call $\mathsf{OneEig}$, is a form of \emph{shifted} inverse iteration that on a diagonalizable input $M\in \bC^{n\times n}$ and an input accuracy parameter $\beta \geq 0$, produces a $\beta$-forward approximation $\ax{\lambda}\in \bC$ of an eigenvalue of $M$. The precision required to ensure stability of this subroutine and its running time are a function of $n$ and the eigenvector condition number of $M$, i.e. of
$$\kappa_V(M) : = \inf_{V : M = V D V^{-1}} \|V\| \|V^{-1}\|. $$
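As a numerical aside: any single computed diagonalization only upper-bounds the infimum defining $\kappa_V(M)$, but this already separates normal from nearly defective matrices. The following NumPy sketch (the test matrices are ours, purely for illustration) contrasts a symmetric matrix, where $\kappa_V=1$, with a perturbed Jordan block:

```python
import numpy as np

def kappa_V_upper(M):
    # One computed diagonalization only upper-bounds the infimum defining kappa_V.
    _, V = np.linalg.eig(M)
    return np.linalg.cond(V, 2)   # ||V|| * ||V^{-1}|| in the spectral norm

rng = np.random.default_rng(2)
A = rng.standard_normal((6, 6))

normal = A + A.T                                              # symmetric, hence normal
jordan = np.eye(6, k=1) + 1e-6 * rng.standard_normal((6, 6))  # nearly defective

print(kappa_V_upper(normal), kappa_V_upper(jordan))
```

For the normal matrix the computed eigenvector matrix is (numerically) unitary, so the bound is essentially $1$; for the perturbed Jordan block the eigenvectors are nearly parallel and the bound blows up.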
The shifting strategy in $\mathsf{OneEig}$ crucially relies on a subroutine $\mathsf{DistSpec}$, which allows us to estimate the distance of any given point $s\in \bC$ to the spectrum of $M$ (henceforth denoted by $\Spec{M}$) up to relative distance 0.1. The subroutine $\mathsf{DistSpec}$ is in itself a form of \emph{unshifted} inverse iteration on $M-s$ and its required precision and running time are also a function of $n$ and $\kappa_V(M)$.
Once a $\beta$-forward approximation $\ax{\lambda}\in \bC$ of $M$ is obtained, the algorithm calls a subroutine ${\mathsf{Decouple}}$, which essentially uses inverse iteration on $M-\ax{\lambda}$ to find a vector $v\in \bC^n$ which is close to the right eigenvector of $M$ associated to the eigenvalue which is closest to $\ax{\lambda}$. Then, the subroutine $\mathsf{Deflate}$ is called to reduce the problem $M$ to a smaller instance.
All of the subroutines used in the algorithm require some control on $\kappa_V(M)$, and some additionally require a lower bound on the minimum eigenvalue gap of $M$, i.e.
$$\mathrm{gap}(M):= \min_{i\neq j} |\lambda_i(M)-\lambda_j(M)|.$$
In order for $\mathsf{SmallEig}$ to work on any matrix, we pre-process the input matrix by adding a small random perturbation.\footnote{If $M\in \bC^{n\times n}$ is the input matrix, we run the algorithm on $M+ \gamma G_n$, where $G_n$ is a normalized complex Ginibre matrix and $\gamma>0$ is a function of the desired accuracy and failure probability. } This was done in \cite{banks2020pseudospectral} to provide general guarantees for the spectral bisection algorithm, and by now the random matrix literature possesses several results giving high probability quantitative upper bounds on $\kappa_V$ and lower bounds on $\mathrm{gap}$ for the pre-processed matrix \cite{armentano2018stable, banks2021gaussian, banks2020pseudospectral, jain2021real, banks2020overlaps, ge2017eigenvalue, luh2021eigenvectors}. We refer the reader to Section \ref{sec:shatandain} for a detailed discussion.
Below we elaborate on the main subroutines of $\mathsf{SmallEig}$ and discuss the technical results proven in this paper.
\paragraph{Computing the Distance to the Spectrum ($\mathsf{DistSpec}$).} Let $M\in \bC^{n\times n}$ be a diagonalizable matrix with spectral decomposition
$$M = \sum_{i=1}^n \lambda_i v_i w_i^*,$$
and fix $s\in \bC\setminus \Spec{M}$. The main idea behind $\mathsf{DistSpec}$ is simple: if $u\in \bC^n$ is a vector sampled uniformly at random from the complex unit sphere $\bS^{n-1}$ then $\|u^* (s-M)^{-m}\|^{-\frac{1}{m}}$ converges (with probability one) as $m$ goes to infinity, to the distance from $s$ to the spectrum of $M$, which we will denote by $\ds{s}{M}$. Indeed:
\begin{align}
\lim_{m\to \infty} \|u^* (s-M)^{-m}\|^{-\frac{1}{m}} & = \lim_{m\to \infty}\Big\|\sum_{i=1}^n (s-\lambda_i)^{-m} u^*v_i w_i^*\Big\|^{-\frac{1}{m}} \nonumber
\\ & = \lim_{m\to \infty} \ds{s}{M} \Big\|\sum_{i=1}^n \left(\frac{\ds{s}{M}}{s-\lambda_i}\right)^{m} u^*v_i w_i^*\Big\|^{-\frac{1}{m}} \nonumber \\ \label{eq:convergencedispec} & = \ds{s}{M}
\end{align}
where the last equality holds almost surely. In Section \ref{sec:oneig} we will prove a quantitative version of this fact, and show that when $m = \Omega\big(\log (n \kappa_V(M))\big) $ one obtains an approximation of $\ds{s}{M}$ up to a relative error of 0.1. We will then conclude that $\mathsf{DistSpec}$ can be implemented with a running time of at most
$$O(\log(n \kappa_V(M))n^2+ \log(n \kappa_V(M))\log \log(n \kappa_V(M)) )$$
arithmetic operations and prove its backward error guarantees, which depend on $\ds{s}{M}$ and $\kappa_V(M)$.
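The limit (\ref{eq:convergencedispec}) is easy to observe numerically. The following self-contained NumPy sketch (not the floating-point subroutine analyzed below; the matrix, shift, and power $m$ are chosen by us for illustration) estimates $\ds{s}{M}$ on a normal matrix, where $\kappa_V(M)=1$:

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 8, 100

# Normal test matrix: unitary conjugation of a diagonal, so kappa_V(M) = 1.
evals = rng.standard_normal(n) + 1j * rng.standard_normal(n)
Q, _ = np.linalg.qr(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)))
M = Q @ np.diag(evals) @ Q.conj().T

s = 2.0 + 2.0j
true_dist = np.min(np.abs(s - evals))

# u sampled uniformly from the complex unit sphere.
u = rng.standard_normal(n) + 1j * rng.standard_normal(n)
u /= np.linalg.norm(u)

# Accumulate log ||u^* (s - M)^{-m}|| one linear solve at a time,
# renormalizing to avoid overflow/underflow; note u^* A^{-1} = (A^{-*} u)^*.
A = s * np.eye(n) - M
w, log_norm = u.copy(), 0.0
for _ in range(m):
    w = np.linalg.solve(A.conj().T, w)
    nw = np.linalg.norm(w)
    log_norm += np.log(nw)
    w /= nw

est = np.exp(-log_norm / m)   # ||u^* (s - M)^{-m}||^{-1/m}
print(est, true_dist)
```

Here $m$ is taken generously large; the quantitative statement proven in Section \ref{sec:oneig} is that $m = \Omega(\log(n\kappa_V(M)))$ already suffices for relative error $0.1$.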
\paragraph{Finding One Eigenvalue ($\mathsf{OneEig}$).} With $\mathsf{DistSpec}$ in hand, $\mathsf{OneEig}$ generates a sequence of complex numbers $s_0, s_1, \dots $ that converges linearly to an eigenvalue of $M$. This sequence is recursively generated as follows: at time $t$, the algorithm uses $\mathsf{DistSpec}$ to compute an estimate $\tau_t \approx \mathrm{dist}(s_t, \Spec{M})$ with relative error of at most $0.1$. This guarantees that there is at least one eigenvalue of $M$ inside the annulus
\begin{equation}
\label{eq:annulusi}
\calA_{s_t, \tau_t} := \{ z \in \bC : 0.9 \tau_t\leq |z-s_t| \leq 1.12 \tau_t \},
\end{equation}
and hence if $\calN_{s_t, \tau_t}$ is a fine enough net of $\calA_{s_t, \tau_t}$ (we will show that nets of six points suffice), we will be able to guarantee that
$$\min_{s\in \calN_{s_t, \tau_t}} \mathrm{dist}(s, \Spec{H}) \leq 0.6 \, \mathrm{dist}(s_t, \Spec{H}).$$
Given the above guarantee, $\mathsf{OneEig}$ then uses $\mathsf{DistSpec}$ again, now to estimate the distances of the points $s\in \calN_{s_t, \tau_t}$ to the spectrum of $M$, and chooses a point $s\in\calN_{s_t, \tau_t}$ for which
$$\mathsf{DistSpec}(s, \Spec{M}) \leq \gamma \tau_t$$
for some suitably chosen parameter $\gamma\in (0, 1)$ (we will show that when $\gamma=0.66$ the above inequality is guaranteed for some point in the net). For such an $s$, $\mathsf{OneEig}$ sets $s_{t+1}:= s$ and $\tau_{t+1}:= \mathsf{DistSpec}(s, \Spec{M})$, after which the iteration is repeated (see Figure \ref{fig:net} for an example).
\begin{figure}[h]
\centering
\includegraphics[scale=.22]{InverseIt.pdf}
\caption{The locations of the eigenvalues of $M$ are represented by an $\times$. The figure illustrates the first steps of the iteration, which produce $s_0, s_1$ and $s_2$. The annuli $\calA_{s_0, \tau_0}$ and $\calA_{s_1, \tau_1}$ are indicated with dotted lines, and the corresponding nets of six points on them are marked. }
\label{fig:net}
\end{figure}
Clearly, the $s_t$ will converge linearly to an eigenvalue of $M$ and hence finding a point that is at distance at most $\beta$ from the spectrum of $M$ will take $O(\log(1/\beta))$ calls to $\mathsf{DistSpec}$. This will be discussed in detail in Section \ref{sec:oneig}.
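In idealized form (exact distance queries standing in for $\mathsf{DistSpec}$, and no floating-point error), the iteration can be sketched as follows; the spectrum and tolerances are illustrative choices of ours:

```python
import numpy as np

rng = np.random.default_rng(1)
spectrum = rng.standard_normal(12) + 1j * rng.standard_normal(12)

def dist(s):
    # Idealized DistSpec: exact distance from s to the spectrum.
    return np.min(np.abs(s - spectrum))

s, beta = 0.0 + 0.0j, 1e-8
tau, steps = dist(s), 0
while tau > 0.9 * beta:
    # Net of six points on the circle of radius tau around s.
    net = [s + tau * np.exp(1j * np.pi * l / 3) for l in range(1, 7)]
    dists = [dist(z) for z in net]
    j = int(np.argmin(dists))
    # Some net point is within 0.66*tau of the spectrum (here even ~0.52*tau,
    # since with exact distances the nearest eigenvalue lies on the circle).
    assert dists[j] <= 0.66 * tau
    s, tau = net[j], dists[j]
    steps += 1
print(steps, tau)
```

Each pass shrinks $\tau$ by a constant factor, so the number of iterations grows only logarithmically in the target accuracy, matching the $O(\log(1/\beta))$ count above.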
\begin{remark}
Intuitively, $\mathsf{OneEig}$ is a shifting strategy for inverse iteration where each shift is an \emph{exceptional shift} (cf. \cite{eberlein1975global, wang2002convergence, banks2021global}) chosen from a net of six points.
\end{remark}
\begin{remark}
Note that even if the subroutine $\mathsf{OneEig}$ provides a $\beta$-\emph{forward} approximation of an eigenvalue of the matrix, the ultimate algorithm $\mathsf{SmallEig}$ will only be able to provide an $O(\beta)$-\emph{backward} set of approximate eigenvalues. This is because in order to obtain the full eigendecomposition one needs to \emph{deflate} the problem once a converged eigenvalue is obtained (see the next paragraph for more details on this process), and after deflation we are only able to control the backward error of the eigenvalues that are subsequently obtained.
\end{remark}
\paragraph{Implementation of the Subroutines ($ \comptau{m}, {\mathsf{Decouple}}, \mathsf{Deflate}$).} There are many ways to implement the subroutines $\mathsf{DistSpec}$ and $\mathsf{OneEig}$ described above. In this paper, for several reasons, we have decided to operate with matrices in their Hessenberg form (similar to what the shifted QR algorithm does). One of the advantages of doing this is that, in the Hessenberg setting, instead of computing the quantity $\|u^* (s-M)^{-m}\|^{-\frac{1}{m}}$ mentioned in the analysis of $\mathsf{DistSpec}$ one needs to compute
$$\tau_{(z-s)^m}(H):= \|e_n^*(s-H)^{-m} \|^{-\frac{1}{m}}$$
where $H$ is a Hessenberg matrix that is unitarily equivalent to $M$ (or \emph{almost} unitarily equivalent when finite arithmetic is taken into account). Computing the latter quantity, as shown in e.g. \cite{banks2022II}, can be done directly by running the implicit QR algorithm $\mathsf{IQR}$ on $H$ (see Section \ref{sec:imported} for a definition of $\mathsf{IQR}$ and the subroutine $\comptau{m}$ defined by it). So in essence, when working with Hessenberg matrices the subroutine $\mathsf{DistSpec}$ can be easily implemented by calling $\mathsf{IQR}$ with a suitable degree.
The second advantage of working with a Hessenberg matrix $H$ is that once a forward approximate eigenvalue $\ax{\lambda}$ of $H$ is found (which is the purpose of $\mathsf{OneEig}$), reducing the problem $H$ to a smaller instance becomes easier. Indeed, in Section \ref{sec:decoupling} we will show that if $H_\ell:= \mathsf{IQR}(H, (z-\ax{\lambda})^\ell)$, then one is guaranteed to have $|(H_\ell)_{n, n-1}| = O(\beta)$ for some $\ell = O(n \log \kappa_V(M))$. This will allow us to decouple and then deflate the problem.
\begin{remark}[Comparison to Shifted QR]
One reason why our algorithm is not an actual shifted QR algorithm is that we have chosen to \emph{maintain} the same Hessenberg matrix $H$ throughout the computation of the shifts $s_1, s_2, \dots$ done by $\mathsf{OneEig}$, as opposed to \emph{updating} the Hessenberg matrix in each iteration to produce a sequence of Hessenberg matrices $H_0=H, H_1, \dots$ hand in hand with the computation of each $s_t$ (as a standard shifted QR algorithm would do). During this process we are using the Hessenberg structure merely as a device for a fast implementation of inverse iteration, and not any of its more subtle properties as in \cite{banks2021global}. Between calls to $\mathsf{OneEig}$ the Hessenberg structure is further used to deflate the matrix in a convenient manner.
More substantially, $\mathsf{OneEig}$ requires as input a Hessenberg matrix whose right eigenvectors all have {\em reasonably large} (say $1/\poly(n)$) inner products with the vector $e_n$; this is roughly because our analysis is based on the power method and not the more sophisticated potential-based arguments of \cite{banks2021global} which require no assumptions whatsoever. We guarantee the inner product condition by computing a Hessenberg form with respect to a {\em random} vector. Unfortunately this must be redone after each deflation, which inflicts a cost in the running time of $O(n^4)$, as opposed to the $O(n^3)$ achieved by algorithms that do not need to repeatedly recompute the Hessenberg form.
\end{remark}
\paragraph{Randomness in the Algorithm ($\mathsf{RHess}, \mathrm{Unif}(D(0, \eta_2)), G_n$).} Our algorithm uses randomness in three different ways. The first one is related to the inverse iteration described above when discussing $\mathsf{DistSpec}$. In the Hessenberg setting, the equivalent of running inverse iteration on a randomly chosen vector is to compute a \emph{random} Hessenberg matrix $H$ that is unitarily equivalent to the initial matrix $M$, where the randomness is uniform (in some suitable sense) among the set of Hessenberg matrices that are unitarily equivalent to $M$. The source of randomness in this case is also a unit vector distributed uniformly on the complex unit sphere $\bS_{\bC}^{n-1}$. We refer the reader to Section \ref{sec:randsampling} for the details on the sampling assumptions made in this paper, and to Section \ref{sec:rhess} for an analysis of the subroutine $\mathsf{RHess}$ which on an input matrix $M$ returns a Hessenberg matrix $H$ chosen at random from the unitary equivalence class (up to machine error) of $M$.
The second use of randomness is related to the forward stability of $\mathsf{IQR}(H, (z-s)^m)$, which as discussed in \cite{banks2022II}, is a function of $\ds{s}{H}$. As in \cite{banks2022II}, before every call to $\mathsf{IQR}$ we will add a small random perturbation to the desired shift $s$, i.e. we define $\check{s} := s+w$ with $w$ chosen uniformly at random from the disk centered at zero of radius $\eta_2$ --- henceforth denoted by $w\sim \mathrm{Unif}(D(0, \eta_2))$ --- and run $\mathsf{IQR}(H, (z-\check{s})^m)$ instead of $\mathsf{IQR}(H, (z-s)^m)$. The point of doing this is to ensure that with high probability $\ds{\check{s}}{H}\geq \eta_1$, for some appropriately chosen (as a function of the desired probability) tolerance parameter $\eta_1$ that will ultimately determine the precision required for $\mathsf{IQR}$ to be numerically forward stable, a necessary condition for our running time guarantees on $\mathsf{SmallEig}$ to hold.
Finally, the third way in which we use randomness is to randomly perturb the matrix that is given as input to $\mathsf{SmallEig}$, with the purpose of having high probability upper and lower bounds on $\kappa_V$ and $\mathrm{gap}$ (cf. \cite[Remark 1.4]{banks2021global} and \cite{banks2020pseudospectral}) when running the subroutines of $\mathsf{SmallEig}$. For this we assume access to a Gaussian sampler that allows us to generate (once) an $n\times n$ complex Ginibre matrix $G_n$.
\bigskip
To conclude this section we make some comments about our analysis and presentation.
\paragraph{Pseudospectrum vs $\mathrm{gap}$ and $\kappa_V$.} Although all of the requirements, actions, and guarantees of the subroutines used by the main algorithm can be phrased in terms of the minimum eigenvalue gap and eigenvector condition number of the matrices in question, in some cases we have decided to instead work with the notion of pseudospectrum. This treatment simplifies the analysis of the effects of roundoff error, since the perturbation theory for the pseudospectrum of a matrix is significantly simpler than that for the eigenvalue gap and eigenvector condition number. In Section \ref{sec:pseudospectrum} we include all the necessary preliminaries regarding the notion of pseudospectrum, and explain in what sense the eigenvector condition number and minimum eigenvalue gap of a matrix can be encoded (conversely recovered) in the pseudospectrum.
\paragraph{Use of Global Data.} As in \cite{banks2022II} we will use the notion of \emph{global data} when presenting the pseudocode of the algorithms. Here, the global data will be composed of four quantities that all of the subroutines can access if needed. More specifically, the global data will be given by $n$, the dimension of the original input matrix; $\scale$, an approximation of the norm of the matrix; and two parameters $\epsilon$ and $\zeta$ which will be used to control the pseudospectrum.
\subsection{Related Work and Discussion}
\label{sec:relatedwork}
Inverse iteration has been used since the 1940's \cite{wielandt1944iterationsverfahren} as a method for computing an eigenvector when an approximation of the corresponding eigenvalue is known; a detailed survey of its history and properties may be found in \cite{varah1968calculation,peters1971calculation,peters1979inverse,ipsen1997computing}. In contrast, this paper uses inverse iteration along with a simple shifting strategy to find the eigenvalues from scratch.
As discussed in the references above, two situations in which the behavior of inverse iteration in finite arithmetic is known to be tricky to analyze are: (1) matrices with tiny eigenvalue gaps (2) nonnormal matrices which exhibit transient behavior. We deal with these issues by assuming {\em a priori} bounds on the eigenvalue gaps and nonnormality of our input matrix (see Definition \ref{def:shat}) and always dealing with high enough powers of the inverse to dampen transient effects. Assuming such bounds is not restrictive because they may be guaranteed with high probability by adding a small random perturbation, as discussed above.
The algorithm in this paper is, at the time of writing, one of four known provable algorithms for computing backward approximations of the eigenvalues of an arbitrary complex matrix in floating point arithmetic, along with \cite{armentano2018stable,banks2020pseudospectral,banks2022II}. The strengths of the algorithm are its simplicity and use of $O(\log^2(n/\delta))$ bits of precision, which is better than \cite{banks2020pseudospectral} but worse than \cite{armentano2018stable} (however \cite{armentano2018stable} has the drawback of running in $O(n^{10}/\delta)$ arithmetic operations). The main weakness of this algorithm compared to \cite{banks2020pseudospectral, banks2022II} is its use of $O(n^4)$ arithmetic operations for repeatedly computing the Hessenberg form.
We do not know any example where this recomputation after deflation is actually needed, but are not able to prove that it is not (with high probability). Doing so would entirely remove the $O(n^4)$ factor from the running time in Theorem \ref{thm:main} and is worthy of further investigation.
\section{The Shifting Strategy}
\label{sec:shiftingstrategy}
\subsection{Analysis of $\mathsf{DistSpec}$}
\label{sec:dispec}
We define the subroutine $\mathsf{DistSpec}(H, s, m)$ as follows and prove its guarantees below.
\bigskip
\begin{boxedminipage}{\textwidth}
$$\mathsf{DistSpec}$$
\textbf{Input:} Hessenberg $H\in \bC^{n\times n}$, $s\in\bC$, $m\in \mathbb{N}$ \\
\textbf{Output:} $\tau \geq 0$ \\
\textbf{Ensures:} $ \frac{0.998}{\kappa_V(H)^{\frac{1}{m}}} \ds{s}{H} \leq \tau \leq \frac{1.003 \kappa_V(H)^{\frac{1}{m}} }{\P\big[|Z_H-s|=\ds{s}{H}\big]^{\frac{1}{2m}}} \ds{s}{H}$
\begin{enumerate}
\item $\ax{\tau^m} \gets \comptau{m} (H, (z-s)^m)$
\item $\tau \gets \mathsf{fl}\left( (\ax{\tau^m})^{\frac{1}{m}}\right)$
\end{enumerate}
\end{boxedminipage}
\bigskip
\begin{proposition}[Guarantees for $\mathsf{DistSpec}$]
\label{prop:guaranteefordispec}
Let $C >0$ and assume that $s \in D(0, C\|H\|)$. Then, the algorithm $\mathsf{DistSpec}$ runs in $$T_{\mathsf{DistSpec}}(n, m):= T_{\comptau{}}(n, m)+ T_{\mathsf{root}}(m, 10^{-3}) = O(mn^2+m\log m) $$
arithmetic operations and satisfies its guarantees provided that
\begin{align}
\label{assum:dispec}
\textbf{\textup{u}} & \leq \textbf{\textup{u}}_{\mathsf{DistSpec}}(n,m,C,\|H\|,\kappa_V(H),\mathrm{dist}(s,\Spec{H}))
\\ & := \frac{1}{c_{\mathsf{root}}} \textbf{\textup{u}}_{\comptau{}}(n,m,C,\|H\|,\kappa_V(H),\mathrm{dist}(s,\Spec{H})). \nonumber
\end{align}
\end{proposition}
\begin{proof} First note that
\begin{align}
\tau_{(z-s)^m}(H) & = \|e_n^* (H-s)^{-m}\|^{-\frac{1}{m}} \nonumber
\\ & \le \frac{\kappa_V(H)^{\frac{1}{m}}}{\mathbb{E}\left[|Z_H-s|^{-2m}\right]^{\frac{1}{2m}}} && \text{Lemma \ref{lem:spectral-measure-apx}} \nonumber \\ \label{eq:boundontau}
&\le \frac{\kappa_V(H)^{\frac{1}{m}}\ds{s}{H}}{\P\Big[|Z_H-s| = \ds{s}{H}\Big]^{\frac{1}{2m}}}.
\end{align}
Similarly, to lower bound $\tau_{(z-s)^m}(H)$ use Lemma \ref{lem:spectral-measure-apx} again to obtain
$$ \tau_{(z-s)^m}(H)= \|e_n^* (H-s)^{-m}\|^{-1/m} \geq \frac{1}{\kappa_V(H)^{\frac{1}{m}}\mathbb{E}[|Z_H-s|^{-2m}]^{\frac{1}{2m}}}\geq \frac{\ds{s}{H}}{\kappa_V(H)^{\frac{1}{m}}}.$$
So, it only remains to control $|\tau-\tau_{(z-s)^m}(H)|$, where $\tau$ is the output of $\mathsf{DistSpec}$. Since by assumption (\ref{assum:dispec}) holds, we can apply Lemma \ref{lem:guaranteetaum} to get
\begin{equation*}
\label{eq:intermediateestimation}
0.999 \tau_{(z-s)^m}(H)^m \leq \ax{\tau^m} \leq 1.001\tau_{(z-s)^m}(H)^m.
\end{equation*}
Similarly, we can apply Lemma \ref{lem:mthroot} to get that $\mathsf{fl}((\ax{\tau^m})^{\frac{1}{m}})$ can be computed to relative accuracy $\epsilon = 10^{-3}$, using at most $T_{\mathsf{root}}(m, 10^{-3})$ arithmetic operations. Hence
$$0.999 (\ax{\tau^m})^{\frac{1}{m}} \leq \mathsf{fl}((\ax{\tau^m})^{\frac{1}{m}}) \leq 1.001 (\ax{\tau^m})^{\frac{1}{m}},$$
which combined with all of the above yields the advertised guarantees. To compute the final running time, add to $T_{\mathsf{root}}(m, 10^{-3})$ the $T_{\comptau{}}(n,m)$ arithmetic operations needed to compute $\comptau{m}$.
\end{proof}
\subsection{Analysis of $\mathsf{OneEig}$}
\label{sec:oneig}
For every $s\in \bC$ and $\tau>0$, on the annulus $\calA_{s, \tau} =\{ z \in \bC : 0.9 \tau \leq |z-s| \leq 1.12 \tau \}$ we will define the set $\calN_{s, \tau}$ of six points given by
$$\calN_{s, \tau} := \left\{ s+ \tau e^{i \pi \ell/3} : \ell=1, \dots, 6 \right\}.$$
As explained in Section \ref{sec:introoverview}, at time $t$, $\mathsf{OneEig}$ will call $\mathsf{DistSpec}$ on the locations given by the points in a net on $\calA_{s_t, \tau_t}$ for some $s_t$ and $\tau_t$. So, to give accuracy guarantees on the output provided by $\mathsf{DistSpec}$, we will choose the net to be the randomly perturbed set $$\check{\mathcal{N}}_{s_t, \tau_t} := \{s+w: s\in \calN_{s_t, \tau_t} \}, \quad \text{where}\quad w\sim \mathrm{Unif}(D(0, \eta_2)),$$ (cf. the discussion on shift regularization in Section \ref{sec:imported}).
We begin by noting that for any $s\in \bC$ and $\tau>0$, $\check{\mathcal{N}}_{s, \tau}$ is a net on $\calA_{s, \tau}$ in the following sense.
\begin{observation}
\label{obs:scalereduction}
Using the above notation, if $\eta_2 \leq .03 \tau$ then for any realization of $\check{\mathcal{N}}_{s, \tau}$ we have $$\sup_{z\in \calA_{s, \tau}}\mathrm{dist}\big(z, \check{\mathcal{N}}_{s, \tau}\big)\leq 0.6 \tau.$$
\end{observation}
\begin{proof}
Basic trigonometry shows that for any $z\in \calA_{s, \tau}$ we have $\mathrm{dist}(z, \calN_{s, \tau})\leq .57 \tau.$ Then, because any realization of $w\sim \mathrm{Unif}(D(0, \eta_2))$ (which yields a realization of $\check{\mathcal{N}}_{s, \tau}$) satisfies $|w| \leq \eta_2 \leq .03 \tau$, the result follows from the triangle inequality.
\end{proof}
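The covering constant $0.57$ appearing in this proof can also be checked numerically. The following sketch (not part of the algorithm; it takes $s=0$ and $\tau=1$, which suffices by scaling) grids the annulus and verifies that no point is farther than $0.57\tau$ from the hexagonal net.

```python
import numpy as np

# Grid the annulus 0.9 <= |z| <= 1.12 (s = 0, tau = 1 by scaling) and
# check that every sample is within 0.57 of the hexagonal net, matching
# the constant used in the proof; the worst case is ~ 0.561.
net = np.exp(1j * np.pi * np.arange(1, 7) / 3)
radii = np.linspace(0.9, 1.12, 200)
angles = np.linspace(0.0, 2.0 * np.pi, 1000, endpoint=False)
z = radii[:, None] * np.exp(1j * angles[None, :])
worst = np.min(np.abs(z[..., None] - net), axis=-1).max()
assert worst <= 0.57
```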
We can now define the algorithm.
\bigskip
\begin{boxedminipage}{\textwidth}
$$\mathsf{OneEig}$$
\textbf{Input:} $H\in \bC^{n\times n}$ Hessenberg, accuracy $\beta>0$, failure probability tolerance $\varphi$, eigenvalue mass lower bound $p$ \\
\textbf{Global Data:} Norm bound $\scale$, pseudospectral parameter $\epsilon$, shattering parameter $\zeta$ \\
\textbf{Output:} $[\ax{\lambda}, \mathsf{correctness}]$ with $\ax{\lambda}\in \bC$ and $\mathsf{correctness} \in \{\texttt{true}, \texttt{false}\}$ \\
\textbf{Requires:} $\beta \leq 1/2$, $\Lambda_\epsilon(H)$ is $\zeta$-shattered, $\P[Z_{H} = \lambda ]\geq p$ for all $\lambda \in \Spec{H}$, $ 10 \beta \leq \|H\| \leq 2 \scale $ \\
\textbf{Ensures:} With probability at least $1-\varphi$, $\mathsf{OneEig}$ terminates successfully, that is $\mathsf{correctness} =\texttt{true}$ and $\ax{\lambda}$ satisfies $ \eta_1 \leq \ds{\ax{\lambda}}{H} \leq \beta $, where $\eta_1$ is defined in line \ref{line:mandeta}
\begin{enumerate}
\item \label{line:mandeta} $m \gets \left\lceil 12 \left( \log\left(\frac{n\zeta}{\epsilon}\right) + \frac{1}{2} \log\left(\frac{1}{p}\right) \right) \right\rceil$, $\eta_2\gets \frac{\beta}{5}\wedge \frac{\zeta}{3}$, $\eta_1 \gets \eta_2 \left( \frac{\varphi}{12 \log(3\scale/10\beta)} \right)^{1/2}$
\item \label{line:oneiginitialization} $w\sim \mathrm{Unif}(D(0, \eta_2 )), \, \check{s} \gets H_{nn}+w, \, \tau \gets \mathsf{DistSpec}(\check{s} , H, m)$
\item \label{line:oneigwhileloop} \textbf{While} $\tau > 0.9 \beta $
\begin{enumerate}
\item $w\sim \mathrm{Unif}(D(0, \eta_2 ))$, $\, \check{\mathcal{N}} \gets \{\check{s}^{(1)}, \dots, \check{s}^{(6)}\}= \calN_{\check{s}, \tau}+w$
\item $\tau' \gets \min_{j \in [6]} \mathsf{DistSpec}(\check{s}^{(j)},H,m)$
\item \textbf{If} $\tau' \le 0.66\tau$\\
$\check{s} \gets \check{s}^{(j)}$, where $j$ attains the minimum above, $\tau \gets \tau'$, $\mathsf{correctness} \gets \texttt{true}$
\item \textbf{Else} $\mathsf{correctness} \gets \texttt{false}$, terminate $\mathsf{OneEig}$ and output $[\check{s}, \texttt{false}]$.
\end{enumerate}
\item $\ax{\lambda} \gets \check{s}$, output $[\ax{\lambda}, \texttt{true}]$
\end{enumerate}
\end{boxedminipage}
\bigskip
\begin{remark}[About the $\mathsf{correctness}$ Flag]
Although small, there is a positive probability that, while running $\mathsf{OneEig}$, the subroutine $\mathsf{DistSpec}$ is called on a complex number $s\in \bC$ for which $\ds{s}{H} < \eta_1$. When this happens there is no guarantee that the output of $\mathsf{DistSpec}$ is relatively accurate, and the information it provides might be misleading, giving rise to an update of $\check{s}$ whose distance to $\Spec{H}$ might be even larger than that of its previous value. In view of this, the purpose of the flag $\mathsf{correctness}$
is to detect when, as a consequence of an inaccurate output of $\mathsf{DistSpec}$, it is no longer possible to decrease the variable $\tau$ at a geometric rate, in which case the algorithm halts and outputs $[\check{s}, \texttt{false}]$\footnote{Of course, one could try to formulate a dichotomy as in \cite{banks2022II}, which leverages that errors can only be made once the shifts being used are very close to $\Spec{H}$, and have a mechanism that outputs a forward approximate eigenvalue even when $\mathsf{DistSpec}$ provides inaccurate answers. Since this proved to be intricate, for the sake of clarity we have settled for this simpler, but efficient enough, version of the algorithm.}.
\end{remark}
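To illustrate the geometry of the iteration, here is a toy exact-arithmetic sketch of the while loop, in which the true distance to the spectrum stands in for $\mathsf{DistSpec}$ and the random $\eta_2$-perturbations are omitted. In this idealized setting the update always succeeds, since some net point on the circle of radius $\tau$ lies within $0.6\tau$ of the nearest eigenvalue.

```python
import numpy as np

def dist_spec(s, eigs):
    # idealized stand-in for DistSpec: the exact distance to Spec(H)
    return np.min(np.abs(eigs - s))

def one_eig_toy(eigs, s0, beta):
    # Sketch of the while loop of OneEig in exact arithmetic, with the
    # random eta_2-perturbations omitted.
    s = s0
    tau = dist_spec(s, eigs)
    while tau > 0.9 * beta:
        cand = [s + tau * np.exp(1j * np.pi * l / 3) for l in range(1, 7)]
        dists = [dist_spec(c, eigs) for c in cand]
        j = int(np.argmin(dists))
        if dists[j] <= 0.66 * tau:          # geometric progress: update
            s, tau = cand[j], dists[j]
        else:                               # would set correctness = false
            return s, False
    return s, True

eigs = np.array([1.0 + 1.0j, -2.0, 0.5j])
lam, ok = one_eig_toy(eigs, s0=3.0 + 0.0j, beta=1e-6)
assert ok and dist_spec(lam, eigs) <= 1e-6
```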
Before proving the main result about $\mathsf{OneEig}$, we observe that in line \ref{line:mandeta} of this algorithm, $m$ is set so that $\mathsf{DistSpec}(s, H, m)$ will yield an accurate approximation of $\ds{s}{H}$ all throughout the iteration (provided that $s$ is not too close to $\Spec{H}$).
\begin{observation}[$m$ is large enough]
\label{obs:gooddistapprox}
Let $C>0$, $s\in D(0, C \|H\|)$ and $m$ be as in line \ref{line:mandeta} of $\mathsf{OneEig}$. Assume that the requirements of $\mathsf{OneEig}$ are satisfied and that
\begin{equation}
\label{eq:assumdispecforoneig}
\textbf{\textup{u}} \leq \textbf{\textup{u}}_{\mathsf{DistSpec}}(n,m,C,\|H\|,\kappa_V(H),\mathrm{dist}(s,\Spec{H})).
\end{equation}
Then
$$0.9 \ds{s}{H} \leq \mathsf{DistSpec}(s, H, m) \leq 1.1 \ds{s}{H}.$$
\end{observation}
\begin{proof}
Let $\tau = \mathsf{DistSpec}(s, H, m)$. Since $\textbf{\textup{u}}\leq \textbf{\textup{u}}_{\mathsf{DistSpec}}$ we can apply Proposition \ref{prop:guaranteefordispec} to get
$$ \frac{0.998}{\kappa_V(H)^{\frac{1}{m}}} \ds{s}{H} \leq \tau \leq \frac{1.003 \kappa_V(H)^{\frac{1}{m}} \ds{s}{H}}{\P\big[|Z_H-s|=\ds{s}{H}\big]^{\frac{1}{2m}}} .$$
Then, it suffices to show that $$0.9 \leq \frac{0.998}{\kappa_V(H)^{\frac{1}{m}}} \quad \text{and} \quad \frac{1.003 \kappa_V(H)^{\frac{1}{m}} }{\P\big[|Z_H-s|=\ds{s}{H}\big]^{\frac{1}{2m}}}\leq 1.1,$$
or equivalently
$$m \geq \frac{\log(\kappa_V(H))}{\log\left(0.998/0.9\right)} \quad \text{ and } \quad m \geq \frac{\log(\kappa_V(H))+ \frac{1}{2} \log(1/ \P\big[|Z_H-s|=\ds{s}{H}\big])}{\log(1.1/1.003)}.$$
Finally, using that
$$\P\big[|Z_H-s|=\ds{s}{H}\big] \geq \min_{\lambda \in \Spec{H}} \P[Z_H=\lambda]\geq p$$ and $\kappa_V(H) \leq \frac{n\zeta}{\epsilon}$ (which follows from Lemma \ref{lem:kappavfromshattering}), it is clear that this $m$ satisfies the above inequalities.
\end{proof}
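The numerical content of the observation, namely that the prefactor $12$ in the definition of $m$ dominates both $1/\log(0.998/0.9)\approx 9.7$ and $1/\log(1.1/1.003)\approx 10.9$, can be spot-checked directly; the parameter values below are arbitrary illustrations.

```python
import numpy as np

# Spot-check: m = ceil(12 (log(n zeta / eps) + 0.5 log(1/p))) dominates
# both lower bounds from the proof, because 1/log(0.998/0.9) ~ 9.7 and
# 1/log(1.1/1.003) ~ 10.9 are both below 12.  (Illustrative parameters.)
for (n, zeta, eps, p) in [(100, 1e-3, 1e-9, 0.01), (10, 1.0, 1e-6, 0.5)]:
    kappa_bound = n * zeta / eps   # kappa_V(H) <= n zeta / eps by shattering
    m = int(np.ceil(12 * (np.log(n * zeta / eps) + 0.5 * np.log(1 / p))))
    assert m >= np.log(kappa_bound) / np.log(0.998 / 0.9)
    assert m >= (np.log(kappa_bound) + 0.5 * np.log(1 / p)) / np.log(1.1 / 1.003)
```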
Now we observe that in line \ref{line:mandeta} of $\mathsf{OneEig}$, the parameters $\eta_1$ and $\eta_2$ are set to be small enough that we can apply Lemma \ref{lem:fixguarantee1}.
\begin{observation}
\label{obs:etasaresmall}
Let $\eta_1, \eta_2$ be as in line \ref{line:mandeta} and assume that the requirements of $\mathsf{OneEig}$ are satisfied. Then
$$\eta_1+\eta_2 \leq \frac{\mathrm{gap}(H)}{2} \quad \text{and} \quad \eta_2 \leq 0.02 \|H\|. $$
\end{observation}
\begin{proof}
Since $\Lambda_\epsilon(H)$ is $\zeta$-shattered we have $\zeta \leq \mathrm{gap}(H)$, and by definition of the parameters we have $2\eta_1 \leq \eta_2 \leq \zeta/3 $, whence $\eta_1+\eta_2 \leq \mathrm{gap}(H)/2$. For the second assertion, note that the requirements of $\mathsf{OneEig}$ imply $\beta \leq 0.1 \|H\|$, while by definition $\eta_2 \leq \beta/5$, so combining both bounds concludes the proof.
\end{proof}
We now state the main result of this section.
\begin{proposition}[Guarantees for $\mathsf{OneEig}$]
\label{prop:findone}
Assume that the requirements of $\mathsf{OneEig}$ are satisfied, let $m$ and $\eta_1$ be as defined in line \ref{line:mandeta} of $\mathsf{OneEig}$ and assume that
\begin{align}
\label{assum:oneig} \textbf{\textup{u}} & \leq \textbf{\textup{u}}_{\mathsf{OneEig}}(n,\scale, \epsilon, \zeta, p, \beta, \varphi)
\\ & := \textbf{\textup{u}}_{\mathsf{DistSpec}}\big(n,m, 10,2 \scale, n\zeta/\epsilon,\eta_1\big). \nonumber
\end{align}
Then, with probability at least $1-\varphi$, $\mathsf{OneEig}$ outputs a $\ax{\lambda}\in \bC$ satisfying
\begin{equation}
\label{eq:lambdaguarantees}
\eta_1 \leq \ds{\ax{\lambda}}{H} \leq \beta,
\end{equation}
using at most
\begin{align*}
T_{\mathsf{OneEig}}(n, \scale, \epsilon, \zeta, p, \beta ) & := (6\lceil 2 \log(\scale/5\beta)\rceil+1) T_{\mathsf{DistSpec}}(n, m)+ \lceil 2 \log(\scale/5\beta)\rceil(C_{\mathsf{D}} + 16)+O(1)
\\ & = O\big( \log(\scale/\beta) \log(n\zeta/\epsilon p) (n^2+ \log \log(n\zeta/\epsilon p)) \big)
\end{align*}
arithmetic operations.
\end{proposition}
Since the proof of this proposition requires several steps we will present it in a separate subsection.
\subsubsection{Proof of Proposition \ref{prop:findone}}
It is clear that the exact arithmetic version of $\mathsf{OneEig}$ would satisfy the advertised guarantees. The challenge is in arguing that in finite arithmetic, with high probability, each call to $\mathsf{DistSpec}$ yields an accurate enough answer, and that the aggregate roundoff errors and failure probabilities are not too large. Since $\mathsf{DistSpec}$ is based on the subroutine $\mathsf{IQR}$, inaccuracies can only arise when the input $s\in \bC$ is either too close to $\Spec{H}$ or $|s|$ is too large. This is quantified in the following observation, which we will use repeatedly throughout the proof.
\begin{observation}[Conditions for accuracy]
\label{obs:accuracy}
For any $s\in D(0, 10\|H\|)$ with $\ds{s}{H}\geq \eta_1$ the following guarantee holds
$$0.9 \ds{s}{H} \leq \mathsf{DistSpec}(s, H, m) \leq 1.1 \ds{s}{H}.$$
\end{observation}
\begin{proof}
Since $\Lambda_\epsilon(H)$ is $\zeta$-shattered by assumption, Lemma \ref{lem:kappavfromshattering} shows that $\kappa_V(H) \leq \frac{n\zeta}{\epsilon}$, and using the assumption $\|H\|\leq 2\scale$, we get that (\ref{assum:oneig}) implies $$\textbf{\textup{u}} \leq \textbf{\textup{u}}_{\mathsf{DistSpec}}\Big(n,m, 10,\|H\|,\kappa_V(H),\eta_1\Big).$$ So, for any $s\in D(0, 10 \|H\|)$ with $\ds{s}{H}\geq \eta_1$, $\textbf{\textup{u}}$ will satisfy inequality \eqref{eq:assumdispecforoneig}, which by Observation \ref{obs:gooddistapprox} yields the desired inequalities.
\end{proof}
Let $s_0, s_1, \dots$ be the values acquired by the variable $\check{s}$ throughout the algorithm, $\tau_0, \tau_1, \dots $ be the values acquired by $\tau$, and $w_0, w_1, \dots $ be the values acquired by $w$. We will now show that, by the structure of the algorithm, the only real obstruction to obtaining accuracy is the possibility of the $s_i$ being too close to $\Spec{H}$.
\begin{lemma}[Accuracy of the $\tau_i$]
\label{lem:tauaccuracy}
Let $t\geq 0$ and assume that $\mathsf{OneEig}$ does not terminate in the first $t$ while loops\footnote{Here, terminating in the while loop $t=0$ means that the first while loop was never started.}, and that $\ds{s_i}{H}\geq \eta_1$ for all $i=0, \dots, t$. Then, for all $i=0, \dots, t$ we have that
\begin{equation}
\label{eq:guaranteefori}
0.9 \ds{s_i}{H} \leq \tau_i \leq 1.1 \ds{s_i}{H},
\end{equation}
$s_i\in D(0, 10\|H\|)$, and moreover $\check{\mathcal{N}}_{s_i, \tau_i}\subset D(0, 10\|H\|)$.
\end{lemma}
\begin{proof}
We proceed by induction. First we will prove the statement for $t=0$. In this case, because of the way $\check{s}$ is initialized (see line \ref{line:oneiginitialization} of $\mathsf{OneEig}$), $s_0=H_{nn}+w_0$ for $w_0 \sim D(0, \eta_2 )$. So, by definition, $|s_0|\leq \|H\|+\eta_2$, and by Observation \ref{obs:etasaresmall} we have $s_0\in D(0, C\|H\|)$ for $C=1.02$. It follows, by Observation \ref{obs:accuracy}, that $\tau_0$ satisfies the inequalities in (\ref{eq:guaranteefori}). Therefore $$\tau_0 \leq 1.1 \ds{s_0}{H} \leq 1.1\cdot 2.02 \|H\| \leq 2.3 \|H\|$$ which we record for later use.
Now take $k\leq t$ and assume that (\ref{eq:guaranteefori}) holds for $i=0, \dots,k$; we will show that it also holds for $k+1$. First note that, by the assumption that $\mathsf{OneEig}$ does not terminate in the first $t$ while loops, we have $ \tau_{i+1} \leq .66 \tau_i$ and $.9\beta \leq \tau_i$ for all $i=0, \dots, k$. Hence, by construction of the sequence $s_0, s_1, \dots $, for any $s\in \check{\mathcal{N}}_{s_k, \tau_k}$ we can obtain
\begin{align*}
\big|s\big| & \leq |s_0|+ |s_1-s_0|+ \cdots +|s_k-s_{k-1}| +|s-s_{k}|
\\ & \leq |s_0|+ \tau_0+|w_1|+ \cdots +\tau_{k}+|w_{k+1}| && \text{since }\, s_{i+1}\in \check{\mathcal{N}}_{s_{i}, \tau_{i}}, \, s\in \check{\mathcal{N}}_{s_k, \tau_k}
\\ & \leq |s_0| + 1.3(\tau_0+ \cdots + \tau_{k}) && \tau_i \geq 0.9 \beta\, \text{ and }\, \eta_2 \leq \frac{\beta}{5}
\\ & \leq |s_0| + 1.3 \cdot 2.3 \|H\| (1+ 0.66 + 0.66^2+\cdots ) && \tau_{i}\leq 0.66^i \tau_0 \leq 0.66^i\, 2.3 \|H\|
\\ & \leq |s_0| + 8.8\|H\|
\\ & \leq 10 \|H\| && |s_0| \leq 1.02 \|H\|.
\end{align*}
This proves that $\check{\mathcal{N}}_{s_k,\tau_k }\subset D(0, 10\|H\|)$. So, when $k\leq t-1$ we get that $s_{k+1}\in D(0, 10\|H\|)$, and because we also know that $\ds{s_{k+1}}{H}\geq \eta_1$, we can apply Observation \ref{obs:accuracy} to show that (\ref{eq:guaranteefori}) holds for $i=k+1$.
\end{proof}
In the above lemma we assumed that $\mathsf{OneEig}$ did not terminate in the first $t$ calls to the while loop, which tacitly assumes that the flag $\mathsf{correctness}$ was set back to $\texttt{true}$ in each of those loops. We now show that if $\tau_t$ is sufficiently accurate and the elements in $\check{\mathcal{N}}_{s_t, \tau_t}$ are far enough from $\Spec{H}$, then there is a guarantee that in the while loop $t+1$ the flag $\mathsf{correctness}$ will be set back to $\texttt{true}$.
\begin{lemma}[Guaranteeing $\mathsf{correctness}=\texttt{true}$]
\label{lem:guaranteeingcor}
Assume that $\ds{s_i}{H}\geq \eta_1$ for $i=0, \dots, t$ and moreover that each $s\in \check{\mathcal{N}}_{s_t, \tau_t}$ satisfies $\ds{s}{H}\geq \eta_1$. Then
$$\min_{s\in \check{\mathcal{N}}_{s_t, \tau_t}} \mathsf{DistSpec}(s, H, m) \leq .66 \tau_t,$$
where $m$ is defined as in line \ref{line:mandeta} of $\mathsf{OneEig}$.
\end{lemma}
\begin{proof}
Because $\tau_t$ satisfies (\ref{eq:guaranteefori}) we know that there is at least one eigenvalue of $H$ in $\calA_{s_t, \tau_t}$. By Observation \ref{obs:scalereduction} there is at least one $s\in \check{\mathcal{N}}_{s_t, \tau_t}$ for which $\ds{s}{H}\leq 0.6 \tau_t$. Moreover, by assumption, for such $s$ we know that $\ds{s}{H}\geq \eta_1$, and by Lemma \ref{lem:tauaccuracy} we also know that $s\in D(0, 10\|H\|)$. Hence Observation \ref{obs:accuracy} implies that
$$\mathsf{DistSpec}(s, H, m) \leq 1.1 \ds{s}{H} \leq 0.66 \tau_t,$$
as we wanted to show.
\end{proof}
Lemmas \ref{lem:tauaccuracy} and \ref{lem:guaranteeingcor} imply that as long as all of the values of $\check{s}$ and $\check{s}^{(j)}$ for $j=1, \dots, 6$ satisfy that $\ds{\check{s}}{H}\geq \eta_1$ and $\ds{\check{s}^{(j)}}{H}\geq \eta_1$, we will have accurate $\tau_i$ and the flag $\mathsf{correctness}$ will always be set back to $\texttt{true}$. We can now conclude the proof.
\paragraph{Probability of success.} Take $t= \lceil 2 \log(\scale/5\beta)\rceil$, which is set so that $ 4.6\cdot 0.66^t /0.9 \leq \beta/\scale. $
For $i=1, \dots, t$ and $j=1, \dots, 6$ let $s_i^{(j)}$ be the value acquired by the variable $\check{s}^{(j)}$ during the while loop $i$. Using Lemma \ref{lem:fixguarantee1} and taking a union bound we have that the probability that
$$\ds{s_0}{H}\geq \eta_1\quad \text{and}\quad \ds{s_i^{(j)}}{H}\geq \eta_1, \quad \forall i\in [t]\, \forall j\in [6]$$
is at least $1-(6t+1)(\eta_1/\eta_2)^2$. From the above discussion we know that on this event $\mathsf{OneEig}$ will not terminate in the first $t$ while loops with $\mathsf{correctness}=\texttt{false}$, and moreover $\tau_0\leq 2.3\|H\|$ and $\tau_{i+1}\leq .66 \tau_i$. Therefore, because $\|H\|\leq 2\scale$ and by our choice of $t$,
$$\tau_t \leq 0.66^{t} \tau_0\leq 0.66^{t} 2.3 \|H\| \leq 0.66^{t} \cdot 4.6 \scale \leq 0.9 \beta.$$
This ensures that the algorithm terminates with $\mathsf{correctness}=\texttt{true}$ sometime in the first $t$ while loops with probability at least $1-(6t+1)(\eta_1/\eta_2)^2$. Moreover, when it terminates, say at time $t_0$, we are guaranteed that $\ds{s_{t_0}}{H}\geq \eta_1$, and because $\tau_{t_0}$ is accurate we have that
$$.9 \ds{s_{t_0}}{H}\leq \tau_{t_0}\leq .9\beta,$$
which implies that $\ds{s_{t_0}}{H}\leq \beta$.
On the other hand
$$(6t+1)(\eta_1/\eta_2)^2 =(6\lceil 2 \log(\scale/5\beta)\rceil+1)(\eta_1/\eta_2)^2\leq 12 \log(3\scale/10\beta) (\eta_1/\eta_2)^2 = \varphi,$$
that is, the failure probability is upper bounded by $\varphi$.
\paragraph{Running time.} Finally, we give an upper bound on the running time. First note that each iteration of the while loop calls $\mathsf{DistSpec}$ six times, draws one sample from $\mathrm{Unif}(D(0, \eta_2))$, and performs at most 16 other arithmetic operations. Since, in the successful event, there are at most $\lceil 2 \log(\scale/5\beta)\rceil$ while loops, this gives a count of
$$\lceil 2 \log(\scale/5\beta)\rceil (6\, T_{\mathsf{DistSpec}}(n, m)+ C_{\mathsf{D}} + 16). $$
Before the while loops $\mathsf{DistSpec}$ is called once, and other than that at most $O(1)$ operations are performed. This yields the advertised result.
\section{Preliminaries}
\label{sec:preliminaries}
As in the previous two papers in this sequence, all vector/matrix norms are $\ell_2$/operator norms unless stated otherwise, and we use the notation
$$\mathrm{dist}(\calS_1, \calS_2) := \inf_{s_1 \in \calS_1, s_2\in \calS_2} |s_1-s_2| $$
for any sets $\calS_1, \calS_2\subset \bC$, and when $s\in \bC$ we use $\mathrm{dist}(s, \calS)$ as a shorthand for $\mathrm{dist}(\{s\}, \calS)$.
\subsection{Finite Precision Arithmetic}
We use the standard floating point axioms from \cite[Chapter 2]{higham2002accuracy} (ignoring overflow and underflow as is customary), and use $\textbf{\textup{u}}$ to denote the unit roundoff. Specifically, we will assume that we can add, subtract, multiply, and divide floating point numbers, and take square roots of positive floating point numbers, with relative error $\textbf{\textup{u}}$. We will use $\mathsf{fl}(\ast)$ to denote that the expression $\ast$ is computed in finite arithmetic.
As in \cite{banks2022II} we will have to compute $m$-th roots of positive numbers, for which we assume access to an algorithm satisfying the guarantees of the following lemma.
\begin{lemma}[Lemma 2.1 in \cite{banks2022II}]
\label{lem:mthroot}
There exist small universal constants $C_{\mathsf{root}}, c_{\mathsf{root}} \geq 1$, such that whenever $m c_{\mathsf{root}} \textbf{\textup{u}} \leq \epsilon \leq 1/2 $ and for any $a\in \bR^+$, there exists an algorithm that computes $a^{\frac{1}{m}}$ with relative error $\epsilon$ in at most $$T_{\mathsf{root}}(m,\epsilon):= C_{\mathsf{root}} m \log(m\log(1/\epsilon))$$ arithmetic operations.
\end{lemma}
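The lemma only assumes access to such a routine; as one concrete (hypothetical) instantiation, Newton's iteration for $x^m=a$, seeded from the binary exponent of $a$, meets a relative-accuracy guarantee of this kind. A sketch:

```python
import math

def mth_root(a, m, eps=1e-12):
    # Newton's iteration for x^m = a, seeded from the binary exponent
    # of a so that the starting point lies above the root.
    assert a > 0 and m >= 1
    x = 2.0 ** (math.frexp(a)[1] / m)
    for _ in range(200):
        x_new = ((m - 1) * x + a / x ** (m - 1)) / m
        if abs(x_new - x) <= eps * x_new:
            return x_new
        x = x_new
    return x

a, m = 7.3, 25
r = mth_root(a, m)
assert abs(r - a ** (1.0 / m)) <= 1e-10 * a ** (1.0 / m)
```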
\subsection{Random Sampling Assumptions.}
\label{sec:randsampling}
In Section \ref{sec:introoverview} we described the three different ways in which randomness is used in $\mathsf{SmallEig}$. Here we specify the assumptions we make about the algorithms used to generate the desired random objects.
\begin{definition}[Efficient $\mathrm{Unif}(\bS^{n-1}_{\bC})$ Sampler]
\label{def:usampler}
An efficient random vector algorithm takes as input a positive integer $n$, generates a random unit vector $u\in \bC^n$ distributed uniformly on the complex unit sphere $\bS^{n-1}_{\bC}$, and runs in $C_{\mathsf{U}} n$ arithmetic operations, for some universal constant $C_{\mathsf{U}}$.
\end{definition}
\begin{definition}[Efficient $\mathrm{Unif}(D(0, R))$ Sampler]
An efficient random perturbation algorithm takes as input an $R>0$, and generates a random $w\in \bC$ distributed uniformly in the disk $D(0, R)$, and runs in $C_{\mathsf{D}}$ arithmetic operations, for some universal constant $C_{\mathsf{D}}$.
\end{definition}
\begin{definition}[Efficient Ginibre Sampler]
An efficient Ginibre sampler takes as input a positive integer $n$ and generates a random matrix $G_n\in \bC^{n\times n}$, where the entries of $G_n$ are independent centered complex Gaussians of variance $1/n$, and runs in $C_{\mathsf{G}} n^2$ arithmetic operations.
\end{definition}
Note that the roundoff error in the algorithm coming from using finite precision when sampling any of these random objects only affects (in a negligible way) the failure probabilities reported in the analysis of the algorithm, and not the quantities handled by the algorithm itself. So, for simplicity we will assume that the samples can be drawn from their exact distribution.
\subsection{Definitions and Lemmas from \cite{banks2021global} and \cite{banks2022II}.}
\label{sec:imported}
\paragraph{Approximate Functional Calculus.} As in the first two parts of this series, we will exploit the notion of \emph{approximate functional calculus}. For a diagonalizable Hessenberg matrix $H\in \bC^{n\times n}$, with diagonalization $H=VDV^{-1}$ for $V$ chosen\footnote{If there are multiple such $V$, choose one arbitrarily.} to satisfy $\|V\|= \|V^{-1}\| = \sqrt{\kappa_V(H)}$, define $Z_H$ to be the random variable supported on $\Spec(H)$ with distribution
\begin{equation}\label{eqn:specmeasure}\P[Z_H = \lambda_i ] = \frac{|e_n^* V e_i|^2}{\|e_n^* V\|^2},\end{equation}
where $\lambda_i= D_{ii}$. As in the prequels, we will often use the following inequalities (see \cite[Lemma 2.4]{banks2021global} for a proof).
\begin{lemma}[Approximate Functional Calculus]
\label{lem:spectral-measure-apx}
For any upper Hessenberg $H$ and complex function $f$ whose domain includes the eigenvalues of $H$,
$$
\frac{\|e_n^\ast f(H)\|}{\kappa_V(H)} \le \mathbb{E}\left[|f(Z_H)|^2\right]^{\frac{1}{2}} \le \kappa_V(H)\|e_n^\ast f(H)\|.
$$
\end{lemma}
\paragraph{Implicit QR Algorithm.} For an invertible matrix $M$ we will use $[Q, R] = \mathrm{qr}(M)$ to denote that $M=QR$ is the unique QR decomposition of $M$ where the upper triangular part $R$ has positive diagonal entries.
We will assume access to a degree 1 implicit QR algorithm $\mathsf{IQR}(H, s)$, which is $\nu_{\mathsf{IQR}}(n)$-stable in the sense of \cite[Definition 3.4 ]{banks2022II} and we will implement higher degree shifts by composing this $\mathsf{IQR}$ algorithm, that is, for any polynomial $p(z) = (z-s_1)\cdots (z-s_m)$ we define
$$\mathsf{IQR}(H,p(z)):= \mathsf{IQR}(\mathsf{IQR}(\cdots \mathsf{IQR}(\mathsf{IQR}(H,s_1),s_2), \cdots), s_m),$$
and recall the following backward-stability guarantees given in \cite[Lemma 3.6]{banks2022II}.
\begin{lemma}[Backward Error Guarantees for $\mathsf{IQR}$]
\label{lem:iqr-multi-backward-guarantees}
Fix $C > 0$ and let $p(z) = \prod_{\ell \in [m]}(z - s_\ell)$, where $\calS = \{s_1,...,s_m\} \subset \mathbb{D}(0,C\|H\|)$. If $\ax{\next{H}} = \mathsf{IQR}(H,p(z))$, and
$$
\nu_{\mathsf{IQR}}(n)\textbf{\textup{u}} \le 1/4,
$$
there exists a unitary $\ax{Q}$ satisfying
\begin{align}
\left\|\ax{\next{H}} - \ax{Q}^\ast H \ax{Q} \right\| \le 1.4 m(1 + C)\|H\|\nu_{\mathsf{IQR}}(n)\textbf{\textup{u}}.
\end{align}
\end{lemma}
Using Givens rotations, $\mathsf{IQR}(H, p(z))$ can be executed in
$$T_{\mathsf{IQR}}(n, m) := 7mn^2$$
arithmetic operations and it is $\nu_{\mathsf{IQR}}(n)$-stable for $\nu_{\mathsf{IQR}}(n) = 32n^{3/2}$ (see \cite[Appendix A]{banks2022II} for details). Forward error guarantees for $\nu_{\mathsf{IQR}}(n)$-stable implicit QR algorithms on an input $H \in \bC^{n\times n}$ can also be given, this time in terms of the distance of the shifts to the spectrum of $H$. More precisely, the following part of Lemma 3.9 in \cite{banks2022II} will be used repeatedly below.
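As a quick illustration of the exact-arithmetic identity underlying $\mathsf{IQR}$ (using NumPy's dense QR as a stand-in for the Givens-based routine), a single shifted step $[Q,R]=\mathrm{qr}(H-s)$, $\next{H}=Q^*HQ=RQ+s$ is a unitary similarity, so it preserves both the spectrum and the Hessenberg structure:

```python
import numpy as np

# One exact-arithmetic step of the shifted QR iteration on a random
# Hessenberg matrix: qr(H - s I) = Q R, then H' = Q* H Q = R Q + s I.
# The step is a unitary similarity, so it preserves the spectrum,
# and it also preserves the Hessenberg structure.
rng = np.random.default_rng(0)
n, s = 6, 0.3 + 0.1j
H = np.triu(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)), k=-1)
Q, R = np.linalg.qr(H - s * np.eye(n))
H_next = Q.conj().T @ H @ Q

assert np.allclose(H_next, R @ Q + s * np.eye(n))          # H' = RQ + sI
assert np.allclose(np.tril(H_next, -2), 0, atol=1e-12)     # still Hessenberg
ev, ev_next = np.linalg.eigvals(H), np.linalg.eigvals(H_next)
assert max(np.min(np.abs(ev_next - lam)) for lam in ev) < 1e-8  # same spectrum
```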
\begin{lemma}[Forward Error Guarantees for $\mathsf{IQR}$]
\label{lem:multiiqrstability}
Let $H\in \bC^{n\times n}$ be a Hessenberg matrix and fix $C > 0$. Assume that $p(z) = \prod_{\ell \in [m]}(z - s_\ell)$, where $\mathcal{S} = \{s_1, \dots, s_m \} \subset D(0,C\|H\|)$. Furthermore, let $[Q, R] = \mathrm{qr}(p(H))$, $\next{H} = Q^\ast H Q$, and assume that
\begin{align}
\label{assum:machvsp}
\textbf{\textup{u}} \le \textbf{\textup{u}}_{\mathsf{IQR}}(n,m,\|H\|,\kappa_V(H),\mathrm{dist}(\calS,\Spec{H}))
&:= \frac{1}{8\kappa_V(H)\nu_{\mathsf{IQR}}(n)}\left(\frac{\mathrm{dist}(\calS,\Spec{H})}{\|H\|}\right)^m \\
&= 2^{-O\left(\log n\kappa_V(H) + m\log\frac{\|H\|}{\mathrm{dist}(\calS,\Spec{H})}\right)}. \nonumber
\end{align}
Then, we have the forward error guarantees:
$$
\left\|\ax{\next{H}} - \next{H}\right\|_F \le 32\kappa_V(H) \|H\|\left(\frac{(2 + 2C)\|H\|}{\mathrm{dist}(\calS,\Spec{H})}\right)^m n^{1/2}\nu_{\mathsf{IQR}}(n)\textbf{\textup{u}}.
$$
\end{lemma}
\paragraph{Computing $\tau^m$.} For a Hessenberg matrix $H\in \bC^{n\times n}$ and $s\in \bC$, our algorithm needs to estimate quantities of the form $\|e_n^* (s-H)^{-m}\|^{-1}$. For this task we will use the subroutine $\comptau{m}$ which was analyzed in \cite{banks2022II}.
\bigskip
\begin{boxedminipage}{\textwidth}
$$\comptau{m}$$
\textbf{Input:} Hessenberg $H\in \bC^{n\times n}$, polynomial $p(z)=(z-s_1)\cdots (z-s_m)$ \\
\textbf{Output:} $\ax{\tau^m} \geq 0$ \\
\textbf{Ensures:} $|\ax{\tau^m} - \tau_p (H)^m| \le 0.001 \tau_p(H)^m$
\begin{enumerate}
\item $[\ax{\hat{H}}, \ax{R}_1, \dots, \ax{R}_m] \gets \mathsf{IQR} (H, p(z))$
\item $\ax{\tau^m} \gets \mathsf{fl}\left( (\ax{R}_1)_{nn}\cdots (\ax{R}_m)_{nn} \right)$
\end{enumerate}
\end{boxedminipage}
\bigskip
\begin{lemma}[Lemma 3.9 in \cite{banks2022II}]
\label{lem:guaranteetaum}
If $\calS = \{s_1,...,s_m\} \subset \mathbb{D}(0,C\|H\|)$ and
\begin{align}
\label{assum:comptau}
\textbf{\textup{u}}
&\le \textbf{\textup{u}}_{\comptau{}}(n,m,C,\|H\|,\kappa_V(H),\mathrm{dist}(\calS,\Spec{H})) \\
&:= \frac{1}{6 \cdot 10^3 \kappa_V(H) \nu_{\mathsf{IQR}}(n)}\left(\frac{\mathrm{dist}(\calS,\Spec{H})}{(2 + 2C)\|H\|}\right)^{2m} \nonumber \\
&= 2^{-O\left(\log n\kappa_V(H) + m\log \frac{\|H\|}{\mathrm{dist}(\calS,\Spec{H})}\right)}, \nonumber
\end{align}
then $\comptau{m}$ satisfies its guarantees, and runs in $$T_{\comptau{}}(n,m):= T_{\mathsf{IQR}}(n, m) + m = O(m n^2)$$ arithmetic operations.
\end{lemma}
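The identity behind $\comptau{m}$ can be sanity-checked in floating point: for $p(z)=(z-s)^m$, running $m$ single-shift QR steps and multiplying the $(n,n)$ entries of the triangular factors recovers, up to phase, $\|e_n^*(s-H)^{-m}\|^{-1}$. The sketch below uses NumPy's QR in place of $\mathsf{IQR}$ and is a numerical check, not the finite-precision routine itself.

```python
import numpy as np

# Random Hessenberg H; run m single-shift QR steps with the fixed shift s
# (chosen away from the spectrum) and multiply the (n,n) entries of the
# triangular factors.
rng = np.random.default_rng(1)
n, m, s = 5, 3, 3.0 + 2.0j
H = np.triu(rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n)), k=-1)
A, prod = H.copy(), 1.0 + 0.0j
for _ in range(m):
    Q, R = np.linalg.qr(A - s * np.eye(n))
    prod *= R[-1, -1]
    A = Q.conj().T @ A @ Q

# In exact arithmetic |prod| = || e_n^* (s - H)^{-m} ||^{-1}.
row = np.linalg.matrix_power(np.linalg.inv(s * np.eye(n) - H), m)[-1, :]
direct = 1.0 / np.linalg.norm(row)
assert np.isclose(abs(prod), direct, rtol=1e-6)
```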
\paragraph{Shift Regularization.} In this paper we will only call $\comptau{m}$ on polynomials of the form $p(z)=(z-s)^m$ for some $s\in \bC$. So, proceeding as in \cite{banks2022II}, to have a control on the relative accuracy of $\comptau{m}$, we will randomly perturb $s$ to ensure that it is far enough from the spectrum of the input matrix. To be precise, we will use the following particular case of \cite[Lemma 3.10]{banks2022II}.
\begin{lemma}[Regularization of Shifts]
\label{lem:fixguarantee1}
Let $s \in \bC$ and $\eta_2\geq \eta_1 >0$, and assume that $ \eta_1 +\eta_2 \leq \frac{\mathrm{gap}(H)}{2}.$
Let $w \sim \text{Unif}(D(0, \eta_2))$ and $\check{s} := s+w$. Then with probability at least $1-\left(\eta_1/\eta_2\right)^2$, we have $\mathrm{dist}(\check{s},\Spec H) \ge \eta_1$.
\end{lemma}
\subsection{Pseudospectrum}
\label{sec:pseudospectrum}
Given $M\in \bC^{n\times n}$ and $\epsilon>0$ the $\epsilon$-pseudospectrum of $M$ is defined as
\begin{equation}
\label{eqn:pseudodef2}
\Lambda_\epsilon(M): = \left\{\lambda \in \bC : \big\|(\lambda - M)^{-1}\big\| \geq 1/\epsilon \right\}.
\end{equation}
In particular $\Spec(M)\subset \Lambda_\epsilon(M)$ for every $\epsilon>0$, and one can show (see \cite{trefethen2020spectra}) that
\begin{equation*}
\label{eq:characterizationofLambda}
\Lambda_\epsilon(M) = \{\lambda \in \bC : \lambda\in \mathrm{Spec}(M+E) \text{ for some } \|E\|\leq \epsilon\},
\end{equation*}
from which the following two standard properties follow.
\begin{lemma}
\label{lem:decrementeps}
For any $M, E, U\in \bC^{n\times n}$ with $\|E\|\leq \epsilon$ and $U$ unitary, the following are true
\begin{enumerate}[label=\roman*)]
\item $\Lambda_{\epsilon}(UM U^*) = \Lambda_\epsilon(M)$.
\item $\Lambda_\epsilon(M+E) \subset \Lambda_{\epsilon-\|E\|}(M)$.
\end{enumerate}
\end{lemma}
We refer the reader to the excellent book \cite{trefethen2020spectra} for a comprehensive treatment of the notion of pseudospectrum. For this paper we will only need the following basic lemmas relating the pseudospectrum to the notions of eigenvalue gap and eigenvector condition number. First, we recall that the pseudospectrum can be controlled in terms of the eigenvector condition number.
\begin{lemma}[\cite{trefethen2020spectra}]
\label{lem:pseudospectralbauerfike}
For every $M \in \C^{n\times n}$,
\begin{equation} \label{eqn:lambdakappa}
\bigcup_i D(\lambda_i,\eps)\subset \Lambda_\eps(M)\subset \bigcup_i D(\lambda_i, \eps \kappa_V(M)).
\end{equation}
\end{lemma}
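Definition \eqref{eqn:pseudodef2} translates into the computable membership test $\sigma_{\min}(z-M)\leq \epsilon$, since $\|(z-M)^{-1}\|=1/\sigma_{\min}(z-M)$. A small sketch, using a normal matrix for which both inclusions of the lemma are equalities (as $\kappa_V(M)=1$):

```python
import numpy as np

def in_pseudospectrum(z, M, eps):
    # z lies in Lambda_eps(M) iff ||(z - M)^{-1}|| >= 1/eps, i.e. iff
    # the smallest singular value of z I - M is at most eps.
    n = M.shape[0]
    return np.linalg.svd(z * np.eye(n) - M, compute_uv=False)[-1] <= eps

# For a normal matrix kappa_V(M) = 1, so both inclusions of the lemma
# are equalities: Lambda_eps(M) is exactly the union of eps-disks.
M = np.diag([0.0, 2.0, 1.0 + 1.0j])
eps = 0.25
assert in_pseudospectrum(0.2, M, eps)              # dist to spectrum 0.2 <= eps
assert not in_pseudospectrum(0.5 + 0.5j, M, eps)   # dist ~ 0.707 > eps
```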
When analyzing the algorithm in finite arithmetic it will be necessary to have some control on the eigenvector condition number and minimum eigenvalue gap of the matrices produced by the algorithm. For this, we will use the notion of $\zeta$-shattered pseudospectrum, which is very similar to the notion of shattered pseudospectra introduced in \cite{banks2020pseudospectral}, but without referencing a grid.
\begin{definition}[$\zeta$-shattered pseudospectrum]\label{def:shat}
Let $\epsilon, \zeta>0$ and $M \in \bC^{n\times n}$. We say that $\Lambda_\epsilon(M)$ is $\zeta$-shattered if there exist $n$ disjoint disks $D_1, \dots, D_n$ of radius $\zeta$ such that
\begin{enumerate}[label=\roman*)]
\item (Containment) $\Lambda_\epsilon(M)\subset \bigcup_{i=1}^n D_i.$
\item (Separation) Any two disks are at distance at least $\zeta$, that is, $\mathrm{dist}(D_i, D_j)\geq \zeta$ for all $i\neq j$.
\end{enumerate}
\end{definition}
In what can be thought of as a converse of Lemma \ref{lem:pseudospectralbauerfike}, the shattering parameter can be used to control the eigenvector condition number of a matrix and its minimum eigenvalue gap.
\begin{lemma}[$\kappa_V$ from $\zeta$ and $\epsilon$]
\label{lem:kappavfromshattering}
Let $\epsilon, \zeta>0$ and $M\in \bC^{n\times n}$. If $\Lambda_\epsilon(M)$ is $\zeta$-shattered, then
\begin{enumerate}[label=\roman*)]
\item $\kappa_V(M)\leq \frac{n \zeta}{\epsilon}$.
\item $\mathrm{gap}(M)\geq \zeta$.
\end{enumerate}
\end{lemma}
\begin{proof}
First note that ii) follows from the fact that $\Spec(M)\subset \Lambda_\epsilon(M)$ and the definition of $\zeta$-shattering. To show i), let $\lambda_1, \dots, \lambda_n$ be the eigenvalues of $M$, and for every $i$ let $\kappa(\lambda_i)$ denote the eigenvalue condition number of $\lambda_i$ (see \cite[Section 2.2]{banks2020pseudospectral} for a definition). A trivial modification of the proof of Lemma 3.11 in \cite{banks2020pseudospectral} yields that $\kappa(\lambda_i) \leq \frac{\zeta}{\epsilon}$. Then, by Lemma 3.1 in \cite{banks2021gaussian} we have
$$\kappa_V(M) \leq \sqrt{n\sum_{i=1}^n \kappa(\lambda_i)^2} \leq \frac{n\zeta}{\epsilon}.$$
\end{proof}
\section{Randomized Hessenberg Form}
\label{sec:rhess}
Some of the most common and well understood subroutines in numerical linear algebra are those used to put an arbitrary matrix $M\in \bC^{n\times n}$ into Hessenberg form $H$ (e.g. see \cite{demmel1997applied, higham2002accuracy}). The only reason we include this section in the present paper is that we were not able to find in the literature a rigorous result about the effect of randomizing the Hessenberg form $H$ that would allow us to conclude an explicit probabilistic lower bound on $\min_{\lambda\in \Spec(H)} \P[Z_H=\lambda]$. In our analysis we assume access to a deterministic algorithm that uses Householder reflectors to obtain the Hessenberg form (see Definition \ref{def:buh} below for details), and to a random unit vector generator satisfying the assumptions of Definition \ref{def:usampler} above.
\subsection{Householder Reflectors}
Computing Householder reflectors is essential to many numerical linear algebra algorithms and a thorough analysis of the numerical errors involved can be found in \cite[Section 19.3]{higham2002accuracy}. In short, Householder reflectors are matrices $P\in \bC^{n\times n}$ of the form $P=I-\beta v v^*$ with $v\in \bC^{n} \setminus \{0\}$ and $\beta:= \frac{2}{v^* v}$.\footnote{It is easy to see that $P$ is a reflection over the hyperplane $\{v\}^\perp$.} In practice, given $v$, instead of computing $P$ it is more convenient to simply store $v$, which, for any vector $x$, allows one to compute $P x$ as $x-\beta (v^* x) v$; this takes
$$T_{\mathsf{hous}}(n) = O(n)$$
arithmetic operations.
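For concreteness, the matrix-free application just described can be sketched as follows (a plain NumPy illustration, not the finite-arithmetic routine $\hous{v, x}$ analyzed below):

```python
import numpy as np

def apply_householder(v, x):
    """Compute P x for P = I - beta v v^*, beta = 2/(v^* v), in O(n) time,
    without ever forming the n-by-n reflector (sketch of hous(v, x))."""
    beta = 2.0 / np.vdot(v, v)            # np.vdot conjugates its first argument
    return x - beta * np.vdot(v, x) * v

# sanity check against the explicitly formed reflector on a small instance
rng = np.random.default_rng(0)
v = rng.standard_normal(5) + 1j * rng.standard_normal(5)
x = rng.standard_normal(5) + 1j * rng.standard_normal(5)
P = np.eye(5) - (2.0 / np.vdot(v, v)) * np.outer(v, np.conj(v))
```

Since $\beta$ is real, $P$ is Hermitian and involutive, so applying it twice returns the original vector.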
With this in mind, given $x, v \in \bC^{n}$ we will use $\hous{v, x}$ to denote the finite arithmetic computation of $Px$ following the procedure outlined above. Similarly, given $A\in \bC^{n\times n}$ we will use $\hous{v, A}$ to denote the finite arithmetic computation of $PA$, where the $i$-th column of $PA$ is computed as $\hous{v, A^{(i)}}$ where $A^{(i)}$ denotes the $i$-th column of $A$.
In \cite[Lemma 19.2]{higham2002accuracy} it was shown that there exists a small universal constant $c_{\mathsf{h}}$ for which, provided that $c_{\mathsf{h}} n\textbf{\textup{u}} < 1/2$, one has
\begin{equation}
\label{eq:errorhouseholder}
\hous{v, x} = (P+E)x \quad \text{for} \quad \|E\|_F \leq 2c_{\mathsf{h}} n \textbf{\textup{u}},
\end{equation}
for any $x\in \bC^{n}$. This will be used later in the analysis of $\mathsf{RHess}$.
\subsection{Hessenberg Form}
The standard way in which a matrix $M\in \bC^{n\times n}$ is put into Hessenberg form using Householder reflectors is by using a \emph{left-to-right} approach, where one generates a sequence of Householder reflectors $P_1, \dots, P_{n-2}$, that ensure that $H:= P_{n-2} \cdots P_1 M P_1 \cdots P_{n-2}$ is Hessenberg, and where each $P_i$ is used to set to zero the entries in \emph{column} $i$ of the working matrix that are \emph{below} the subdiagonal.
However, since we will be interested in randomizing the relative position of $e_n$ with respect to the eigenbasis of $H$, it will be convenient to instead use a \emph{bottom-up} approach, and choose each $P_i$ to set to zero the entries in \emph{row} $i$ that are to the \emph{left} of the corresponding subdiagonal. In this way, when acting on the left of the matrix, the $P_i$ leave the $n$-th row of the working matrix invariant and, in particular, we will have $e_n^* P_i = e_n^*$. Since the left-to-right and bottom-up approaches are essentially equivalent, the results from \cite[Theorem 2]{coise1996backward} and \cite[Section 4.4.6]{demmel1997applied} apply in both situations, and in particular imply the existence of an efficient and backward stable algorithm in the following sense.
\begin{definition}[Bottom-up Hessenberg Form Algorithm]
\label{def:buh}
A $c_{\mathsf{H}}$-stable bottom-up Hessenberg form algorithm $\mathsf{HessBU}$, is an algorithm that takes as input a matrix $M \in \bC^{n\times n}$ and outputs a Hessenberg matrix $H\in \bC^{n\times n}$ satisfying that there exists a unitary $Q$ with
$$\| H - Q^*M Q\|\leq c_{\mathsf{H}} \|M\| n^{5/2} \textbf{\textup{u}} $$
and such that $Q e_n = e_n$. We say that $\mathsf{HessBU}$ is efficient if it runs in at most
$$T_{\mathsf{HessBU}}(n) := \frac{10}{3} n^3 + O(n^2)$$
arithmetic operations.
\end{definition}
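As an illustration, the bottom-up reduction can be sketched as follows (an exact-arithmetic NumPy sketch of the idea, not the backward-stable $\mathsf{HessBU}$ of Definition \ref{def:buh}); the reflector used for row $i$ is supported on the first $i-1$ coordinates, so every $P_i$ fixes $e_n$ and the accumulated $Q$ satisfies $Qe_n=e_n$:

```python
import numpy as np

def householder_vec(x):
    """v, beta with (I - beta v v^*) x = alpha * e_last and |alpha| = ||x||."""
    sigma = np.linalg.norm(x)
    phase = x[-1] / abs(x[-1]) if x[-1] != 0 else 1.0
    alpha = -phase * sigma                 # sign chosen to avoid cancellation
    v = x.astype(complex)
    v[-1] -= alpha
    return v, 2.0 / np.vdot(v, v).real

def hess_bottom_up(M):
    """H = Q^* M Q upper Hessenberg with Q e_n = e_n: for each row i (bottom-up)
    a reflector supported on coordinates 0..i-1 (0-based) zeroes the entries of
    row i lying to the left of the subdiagonal."""
    H = np.array(M, dtype=complex)
    n = H.shape[0]
    Q = np.eye(n, dtype=complex)
    for i in range(n - 1, 1, -1):                 # rows n-1, ..., 2 (0-based)
        if np.linalg.norm(H[i, : i - 1]) == 0:
            continue                              # row already in Hessenberg form
        # conjugate the row so the right-multiplication below maps it to a
        # multiple of e_{i-1}
        v, beta = householder_vec(np.conj(H[i, :i]))
        # right-multiply by P = I - beta v v^*: zeroes H[i, :i-1] ...
        H[:, :i] -= beta * np.outer(H[:, :i] @ v, np.conj(v))
        # ... left-multiply by P: touches only rows 0..i-1, so e_n^* P = e_n^*
        H[:i, :] -= beta * np.outer(v, np.conj(v) @ H[:i, :])
        Q[:, :i] -= beta * np.outer(Q[:, :i] @ v, np.conj(v))
    return H, Q

# demo on a random complex matrix
rng = np.random.default_rng(1)
M = rng.standard_normal((6, 6)) + 1j * rng.standard_normal((6, 6))
H, Q = hess_bottom_up(M)
```

One can check that the output is upper Hessenberg, that $Q$ is unitary with $H=Q^*MQ$, and that the last column of $Q$ is $e_n$.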
\subsection{Analysis of $\mathsf{RHess}$}
As mentioned above, the only source of randomness for $\mathsf{RHess}$ is a random vector uniformly sampled from the complex unit sphere. Our main technical tool for the analysis will be the following standard anti-concentration result, whose proof we defer to the appendix.
\begin{lemma}[Anti-Concentration for Random Vectors]
\label{lem:anticoncentration}
Let $u\sim \mathrm{Unif}(\bS^{n-1}_\bC)$ and $v\in \bC^n$ with $\|v\|=1$. Then for all $t\in [0, 1]$
$$\P\left[ |u^* v| \leq \frac{t}{\sqrt{n-1}} \right] \leq t^2.$$
\end{lemma}
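As a sanity check, the bound can be verified numerically (a Monte Carlo sketch with hypothetical parameters; by unitary invariance it suffices to take $v=e_1$):

```python
import numpy as np

# Monte Carlo check of P[|u^* v| <= t/sqrt(n-1)] <= t^2 for u uniform on the
# complex unit sphere.  Taking v = e_1, |u(1)|^2 has a Beta(1, n-1) law, so
# the probability below is exactly 1 - (1 - t^2/(n-1))^(n-1).
rng = np.random.default_rng(0)
n, t, trials = 10, 0.5, 100_000
z = rng.standard_normal((trials, n)) + 1j * rng.standard_normal((trials, n))
u = z / np.linalg.norm(z, axis=1, keepdims=True)   # uniform on the sphere
frac = np.mean(np.abs(u[:, 0]) <= t / np.sqrt(n - 1))
exact = 1.0 - (1.0 - t**2 / (n - 1)) ** (n - 1)
print(frac, exact, t**2)    # empirical ~ exact ~ 0.224, below the bound 0.25
```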
We can now define the algorithm and prove its guarantees.
\bigskip
\begin{boxedminipage}{\textwidth}
$$\mathsf{RHess}$$
\textbf{Input:} $M\in \bC^{n\times n}$ \\
\textbf{Output:} $H \in \bC^{n\times n}$ \\
\textbf{Requires:} $\Lambda_\epsilon (M)$ is $\zeta$-shattered \\
\textbf{Ensures:} $H$ is Hessenberg, $\|H - Q^* M Q\|\leq c_{\mathsf{RH}} \|M\| n^{5/2}\textbf{\textup{u}} $ for some unitary $Q$, $\Lambda_{\epsilon'}(H)$ is \\ $\zeta$-shattered for $\epsilon' = \epsilon -c_{\mathsf{RH}} \|M\| n^{5/2}\textbf{\textup{u}} $. Moreover, for any $t$, with probability at least $1-nt^2$ it holds that $\P[Z_{H} = \lambda] \geq \left(\frac{\epsilon' t}{n^{3/2} \zeta} \right)^2$ for all $\lambda \in \Spec(H)$
\begin{enumerate}
\item $u \sim \mathrm{Unif}(\bS^{n-1}_{\bC})$
\item \label{line:firstconjugatedmatrix} $H \gets \hous{u-e_n, M}$
\item \label{line:secconjugatedmatrix} $H \gets \hous{u-e_n, H^*}^*$
\item $H \gets \mathsf{HessBU}\big(H\big)$
\end{enumerate}
\end{boxedminipage}
\bigskip
\begin{proposition}[Guarantees for randomized Hessenberg form]
\label{prop:rhformguarantees}
Assume that
\begin{equation}
\label{assum:RHess}
\textbf{\textup{u}} \leq \textbf{\textup{u}}_{\mathsf{RHess}}(n) := \frac{1}{20 c_{\mathsf{h}} n^{3/2}}.
\end{equation}
Then, $\mathsf{RHess}$ satisfies its guarantees for $c_{\mathsf{RH}} = 3(c_{\mathsf{H}}+c_{\mathsf{h}})$ and can be instantiated using at most
$$T_{\mathsf{RHess}}(n) := T_{\mathsf{HessBU}}(n)+ 2nT_{\mathsf{hous}}(n)+C_{\mathsf{U}} n = O(n^3)$$
arithmetic operations.
\end{proposition}
\begin{proof}
The case $n=1$ is trivial so we assume $n\geq 2$. Let $H$ be the output of $\mathsf{RHess}(M)$, $A_1$ and $A_2$ be the matrices computed in lines \ref{line:firstconjugatedmatrix} and \ref{line:secconjugatedmatrix} of $\mathsf{RHess}$, $P=I-\beta vv^*$ for $v=u-e_n$ (and $\beta=\frac{2}{v^*v}$), and define $E_1:= A_1-PM$ and $E_2 :=A_2-A_1P$. From (\ref{eq:errorhouseholder}) it is easy to see that
$$\|E_1\| \leq 2 c_{\mathsf{h}} \|M\| n^{3/2} \textbf{\textup{u}} \quad \text{and} \quad \|E_2\| \leq 2 c_{\mathsf{h}} \|A_1\| n^{3/2} \textbf{\textup{u}}.$$
Using the first inequality and (\ref{assum:RHess}) we get that $\|A_1\|\leq \|E_1\|+\|M\| \leq 1.1 \|M\|$. Then, combining this with the second inequality we get $\|E_2\| \leq 2.2 c_{\mathsf{h}} \|M\| n^{3/2} \textbf{\textup{u}}$. Hence
\begin{equation}
\label{eq:rhess1}
\|A_2-PMP\| \leq \|A_2-A_1P\|+ \|A_1P-PMP\| = \|E_1\|+\|E_2\|\leq 4.2 c_{\mathsf{h}} \|M\| n^{3/2} \textbf{\textup{u}}.
\end{equation}
Again because of (\ref{assum:RHess}) the above inequality implies that $\|A_2\| \leq 1.3 \|M\|$. So, by Definition \ref{def:buh} we get that $\|H-Q^* A_2 Q\|\leq 1.3 c_{\mathsf{H}} \|M\| n^{5/2}\textbf{\textup{u}},$ for some unitary $Q$ satisfying $Qe_n = e_n$, which combined with (\ref{eq:rhess1}) yields
$$\|H-Q^*PMPQ\| \leq (1.3c_{\mathsf{H}} n^{5/2}+4.2 c_{\mathsf{h}} n^{3/2})\|M\|\textbf{\textup{u}} \leq c_{\mathsf{RH}} \|M\| n^{5/2}\textbf{\textup{u}},$$
proving the first claim. Now, because $\Lambda_\epsilon(M)$ is $\zeta$-shattered, the above inequality and Lemma \ref{lem:decrementeps} imply that $\Lambda_{\epsilon'}(H)$ is $\zeta$-shattered for $\epsilon'=\epsilon -c_{\mathsf{RH}}\|M\|n^{5/2} \textbf{\textup{u}}$.
It remains to prove the anti-concentration statement for $Z_H$. To do this let $E\in \bC^{n\times n}$ be such that $H = Q^*P(M+E)PQ$, and let $M+E = VDV^{-1}$ with $D=\mathrm{diag}(\lambda_1, \dots, \lambda_n)$ and $V$ chosen so that $\|V\|=\|V^{-1}\| = \sqrt{\kappa_V(M+E)}$. Now note that $Q^*PV$ is an eigenvector matrix for $H$, and because $P$ and $Q$ are unitary $\|Q^*PV\| = \| V\| =\sqrt{\kappa_V(M+E)} = \sqrt{\kappa_V(H)}$. So
\begin{align*}
\P[Z_H = \lambda_i] & = \frac{|e_n^* Q^* P V e_i|^2}{\|e_n^* Q^* P V\|^2} && \text{definition of } \P[Z_{H} = \lambda_i]
\\ & = \frac{|e_n^* P V e_i |^2}{\|e_n^* P V\|^2} && e_n^* Q^* = e_n^*
\\ & = \frac{|u^* V e_i|^2 }{\|u^* V\|^2} && u = P e_n\text{ by definition of }P.
\end{align*}
To simplify notation define $v_i:= \frac{V e_i}{\|Ve_i\|}$. We then have
\begin{align*}
\frac{|u^* V e_i|^2 }{\|u^* V\|^2} & = \frac{|u^* v_i|^2 \|Ve_i\|^2 }{\|u^* V\|^2}
\\ & \geq \frac{|u^* v_i|^2 }{\| V\|^2\|V^{-1}\|^2} && \|Ve_i\| \geq \frac{1}{\|V^{-1}\|} \text{ and } \|u^* V\| \leq \|V\|
\\ &= \frac{|u^* v_i|^2 }{\kappa_V(H)^2} && \kappa_V(M+E) = \kappa_V(H)
\\ & \geq \left(\frac{\epsilon'|u^*v_i|}{n\zeta}\right)^2 && \Lambda_{\epsilon'}(H) \text{ is } \zeta\text{-shattered and Lemma \ref{lem:kappavfromshattering}}.
\end{align*}
Now, because $\|v_i\|=1$, we can apply Lemma \ref{lem:anticoncentration} to get that for any $t\geq 0$
$$\P\left[|u^* v_i| \geq \frac{t}{\sqrt{n-1}}\right] \geq 1- t^2.$$
This, in conjunction with the above, gives that
$$\P[Z_H=\lambda_i] \geq \left(\frac{\epsilon' t}{n^{3/2} \zeta} \right)^2, $$
with probability at least $1-t^2$. The advertised claim then follows from taking a union bound over all $v_i$. The claim about the running time follows trivially.
\end{proof}
\section{Anti-concentration for Random Vectors}
\label{sec:appendixproofs}
\begin{proof}[Proof of Lemma \ref{lem:anticoncentration} ]
Because the distribution of $u$ is unitarily invariant and $\|v\|=1$, we have $u^*v=_d u^*e_i=u(i)$\footnote{Given two random variables $X$ and $Y$, we use $X=_d Y$ to denote that they have the same distribution.} for every $i\in [n]$. So, for concreteness we will take $i=1$ and bound $\P[|u(1)|\leq t]$ for any $t\geq 0$.
Now recall that if $X_1, \dots, X_n, Y_1, \dots, Y_n$ are independent \emph{real} standard Gaussians, then $$u=_d \frac{(X_1+iY_1, \dots, X_n+iY_n)}{\sqrt{X_1^2+Y_1^2+\cdots+ X_n^2+ Y_n^2}} $$
and in particular $|u(1)|^2 = \frac{Z_1}{Z_1+Z_2}$ where $Z_1 \sim \chi^2(2)$ and $Z_2\sim \chi^2(2n-2)$ are independent. Then, we use the well-known fact that $\frac{Z_1}{Z_1+Z_2}$ has a $\mathrm{Beta}(1, n-1)$ distribution, and hence its probability density function is given by $f_{\mathrm{Beta}(1, n-1)}(s) = (n-1)(1-s)^{n-2}\cdot 1_{\{0\leq s\leq 1\}} $. It follows that, for $t\in [0, 1]$
$$ \P[|u(1)|\leq t] = \P[|u(1)|^2 \leq t^2] = (n-1)\int_0^{t^2} (1-s)^{n-2} ds = 1-(1-t^2)^{n-1} \leq (n-1)t^2, $$
where the last inequality follows from Bernoulli's inequality.
\end{proof}
\section{Pseudospectral Shattering}
\label{sec:shat}
Here we will use $G_n$ to denote a normalized complex Ginibre matrix. That a perturbation by $\gamma G_n$ leads to shattering of the pseudospectrum of the perturbed matrix follows easily from the following lemmas about $\mathrm{gap}$ and $\kappa_V$.
\begin{lemma}[Eigenvalue gap, Proposition D.5 in \cite{banks2020pseudospectral}]
\label{lem:gapbound}
For any $M\in \bC^{n\times n}$ and any $t, \gamma >0$
$$\P\Big[\mathrm{gap}(M+\gamma G_n) \leq t \Big] \leq \frac{n^3 t^2}{\gamma^2}.$$
\end{lemma}
\begin{lemma}[Eigenvector condition number]
\label{lem:boundonkappaV}
For any $M\in \bC^{n\times n}, \gamma \in (0, \|M\|)$ and $t>0$ satisfying
$$ t < \frac{\gamma}{\|M\| n^{3/2}}, $$
we have
$$\P\Big[\kappa_V(M+\gamma G_n)\geq \frac{ 1}{ t} \Big] \leq 2 \left( 2\sqrt{2}+ \frac{\|M\|}{\gamma} + \sqrt{\frac{4\log(1/t)}{n}} \right)^2 n^3 t^2.$$
\end{lemma}
All of the ideas needed to prove Lemma \ref{lem:boundonkappaV} already appeared in \cite{banks2021gaussian}, but for the convenience of the reader we quickly outline them below. First, we begin by recalling the following result.
\begin{lemma}[Theorem 1.5 in \cite{banks2021gaussian}]
\label{lem:daviesexpectation}
Let $M\in \bC^{n\times n}$, $\gamma \in (0, \|M\|)$, and let $\lambda_1, \dots, \lambda_n\in \bC$ be the random eigenvalues of $M+\gamma G_n$. Then for every measurable open set $B\subset \bC$
$$
\mathbb{E} \bigg[ \sum_{\lambda_i \in B} \kappa(\lambda_i)^2 \bigg] \leq \frac{n^2}{\pi \gamma^2} \mathrm{Area} (B).
$$
\end{lemma}
We can now proceed to the proof.
\begin{proof}[Proof of Lemma \ref{lem:boundonkappaV}]
To simplify notation put $X:= M+ \gamma G_n$ and let $\lambda_1, \dots, \lambda_n$ be its random eigenvalues. Then for any $s, t>0$
\begin{align*}
\P\bigg[\kappa_V(X)\geq \frac{1}{t}\bigg] & = \P\bigg[\kappa_V(X)^2\geq \frac{1}{t^2}\bigg]
\\ & \leq \P\bigg[ \sum_{i=1}^n \kappa(\lambda_i)^2 \geq \frac{1}{n t^2}\bigg] && \kappa_V(X) \leq \sqrt{n\sum_{i=1}^n \kappa(\lambda_i)^2}
\\ & \leq \P[\|G_n\|\geq s] + \P\bigg[\|G_n\| \leq s \text{ and } \sum_{i=1}^n \kappa(\lambda_i)^2 \geq \frac{1}{n t^2} \bigg].
\end{align*}
Moreover, from (\ref{eq:ginibrenormbound}) we have $\P[\|G_n\| \geq s]\leq 2\exp\big(-n(s-2\sqrt{2})^2\big)$. On the other hand
\begin{align*}
\P\left[\|G_n\| \leq s \text{ and } \sum_{i=1}^n \kappa(\lambda_i)^2 \geq \frac{1}{nt^2} \right] & \leq \P\left[ \sum_{\lambda_i\in D(0, \|M\|+s\gamma)} \kappa(\lambda_i)^2 \geq \frac{1}{n t^2} \right]
\\ & \leq \left(\frac{\|M\|}{\gamma}+s\right)^2 n^3t^2,
\end{align*}
where the last inequality follows from Lemma \ref{lem:daviesexpectation} and Markov's inequality. Putting everything together we get that
$$\P\bigg[\kappa_V(X)\geq \frac{1}{t}\bigg]\leq 2\exp\big(-n(s-2\sqrt{2})^2\big)+ \left(\frac{\|M\|}{\gamma}+s\right)^2 n^3t^2.$$
Now, to simplify notation define $\mathrm{P} := \frac{\|M\|}{\gamma} n^{3/2} t$. Then choose $s$ to be the solution of the equation $2\exp\big(-n(s-2\sqrt{2})^2\big) = \mathrm{P}^2$, and plug it into the above inequality to obtain
\begin{align*}
\P\bigg[\kappa_V(X)\geq \frac{1}{t}\bigg]& \leq \mathrm{P}^2 + \left( \frac{\|M\|}{\gamma}+ 2\sqrt{2} + \frac{1}{\sqrt{n}} \log(2/\mathrm{P}^2)^{1/2} \right)^2 n^3 t^2
\\ & \leq 2 \left( \frac{\|M\|}{\gamma}+ 2\sqrt{2} + \frac{1}{\sqrt{n}} \log(2/\mathrm{P}^2)^{1/2} \right)^2 n^3 t^2
\\ &\leq 2 \left( \frac{\|M\|}{\gamma}+ 2\sqrt{2} + \frac{2}{\sqrt{n}} \log(1/t)^{1/2} \right)^2 n^3 t^2 && 2 \mathrm{P}^{-2} \leq t^{-2}.
\end{align*}
\end{proof}
We can now prove the shattering result.
\begin{proof}[Proof of Lemma \ref{lem:shattering}]
First, if we take $t_1:= \frac{\varphi^{1/2}\gamma}{\sqrt{2}n^{3/2}}$ and apply Lemma \ref{lem:gapbound} we get that
$$\P[ \mathrm{gap}(M+\gamma G_n) \geq t_1 ] \geq 1-\varphi/2. $$
Then, taking $t_2 = \frac{\gamma \varphi^{1/2}}{60\|M\| \log(1/\varphi) n^{3/2}}$ and applying Lemma \ref{lem:boundonkappaV} we get
\begin{align*}
\P[\kappa_V(M+\gamma G_n)\geq 1/t_2] & \leq 2 \left( 2\sqrt{2}+ \frac{\|M\|}{\gamma} + \frac{2}{\sqrt{n}} \log(1/t_2)^{1/2} \right)^2 n^3 t^2_2
\\ &\leq 6 \left( 8 + \frac{\|M\|^2}{\gamma^2} + \frac{4}{n} \log(1/t_2) \right) n^3 t_2^2 && \text{AM-QM}
\\ & \leq \varphi/6+ \varphi/6+\varphi/6
\end{align*}
yielding
$$\P[ \kappa_V(M+\gamma G_n) \leq 1/t_2 ] \geq 1-\varphi/2. $$
Now define $\zeta = t_1/3$ and $\epsilon = t_1 t_2/3$. By the tail bounds obtained above, the event $\{ \mathrm{gap}(M+\gamma G_n) \geq t_1 \text{ and } \kappa_V(M+\gamma G_n)\leq 1/t_2\}$ occurs with probability at least $1-\varphi$, and, by Lemma \ref{lem:pseudospectralbauerfike}, under this event we have that $\Lambda_\epsilon(M+\gamma G_n)$ is $\zeta$-shattered, as we wanted to show.
\end{proof}
Inflation has been remarkably successful in solving some puzzles in
the standard hot big bang cosmology
\cite{Guth81,Linde82,Steinhardt82,Starobinsky-inf}. Inflation also
predicts that fluctuations of quantum origin were generated and
frozen to seed wrinkles in the cosmic microwave background (CMB)
\cite{CMBobserve,WMAP} and today's large scale structure
\cite{Mukhanov81,Guth82,Hawking82,Starobinsky82,Bardeen83}.
In spite of the success, inflation also faces some naturalness
problems. One of the problems is why the inflaton potential is so
flat, typically leading to $10^5$ e-folds instead of the $50\sim 60$
that are needed to solve the flatness and horizon problems.
Since the invention of inflation, a great number of inflation models have been
proposed. Selecting the correct inflation model has become one of
the key problems in cosmology. The currently observed quantities
such as the power spectrum and the spectral index are not adequate
to distinguish the inflation models. However, luckily some more
quantities are expected to be measured accurately in the forthcoming
experiments. For example, non-Gaussianity, isocurvature
perturbation, and primordial gravitational waves.
Non-Gaussianity characterizes the departure of perturbations from
the Gaussian distribution. To characterize this departure, the non-Gaussian
estimator $f_{NL}$ is often used. Using the WMAP convention,
$f_{NL}$ can be written as \cite{fnl}
\begin{equation}
\label{eq:fnl}
\zeta=\zeta_g +\frac{3}{5} f_{NL}\zeta_g^2 ~,
\end{equation}
where $\zeta$ is the curvature perturbation in the uniform density
slice, and $\zeta_g$ is the Gaussian part of $\zeta$. This
particular model of non-Gaussianity is called the local shape
non-Gaussianity. The simplest single field inflation models predict
that $f_{NL}<{\cal
O}(1)$. So large $f_{NL}$ indicates a departure from these
simplest models, such as the curvaton \cite{Lyth:2001nq} and (non-inflationary)
ekpyrotic \cite{ekpyrotic}
models. To
describe more general shapes, such as the k-type \cite{K} and the DBI shapes \cite{DBI}, one needs to
calculate the 3-point correlation functions.
Recently, there have been hints from experiments that the local shape
non-Gaussianity may be large. Ref.~\cite{Yadav:2007yy} claims that
$f_{NL}=0$ is excluded at above the $99.5\%$ confidence level. In the WMAP
5-year data analysis, it is shown that the expectation value of
$f_{NL}$ using the bi-spectrum method is $f_{NL}=51$, but
$f_{NL}=0$ still lies within the $2 \sigma$ range. If the
non-Gaussianity is confirmed in the WMAP 8-year or the Planck
experiment, it will be a very powerful tool to distinguish between
inflation models.
Another possibility beyond the simplest single field inflation model is
isocurvature perturbation. The existence of isocurvature
perturbation indicates that there is more than one scalar degree
of freedom during inflation. This can arise from multi-field
inflation \cite{Gordon:2000hv, Seery:2005gb}, modified gravity
\cite{Chen:2006wn}, or some exotic matter content during inflation
\cite{Chen:2006qy}. The WMAP 5-year +BAO+SN bound on
isocurvature perturbation is $\alpha_{-1}<0.0037$ (95\% CL). So no
evidence for isocurvature perturbation is shown. This result can be
used to constrain models such as the curvaton model.
The primordial gravitational waves also provide an important probe
for the early universe. The amplitude of gravitational waves varies
greatly in different inflation models. For example, chaotic
inflation predicts a tensor-to-scalar ratio $r\sim {\cal O}(0.1)$.
While most known stringy inflation models predict $r\lesssim {\cal
O} (10^{-3})$. The WMAP 5-year result, combined with BAO and SN
gives $r<0.2$ at $95\%$ confidence level. This has put a tight
constraint on chaotic inflation models. On the other hand, if future
experiments show that $r>{\cal O} (10^{-3})$, it will be a challenge
for string cosmology.
One attempt to solve the flat potential problem of inflation, and to
produce a large tensor-to-scalar ratio is proposed in
\cite{isocurvaton}. The idea is to suppress the perturbation outside
the inflationary horizon. This scenario looks like the curvaton
scenario, while the second scalar field only creates isocurvature
perturbation. So we call this scenario ``isocurvaton''. In this
paper, we show that if the isocurvaton scenario can be realized,
large non-Gaussianity can be produced, without producing observable
isocurvature perturbations.
However, it is not easy to realize the isocurvaton scenario. In
\cite{isocurvaton}, it is shown that in the simplest slow roll
inflation model, with $m^2\chi^2$ type isocurvaton, the above
scenario does not work. In \cite{Linde:2005he} it is argued that
the scenario does not work either for more general cases. In this
paper, we prove a no-go theorem that during slow roll inflation, if
the isocurvaton does not interact with the inflaton, then the
super-horizon perturbation can not be suppressed. The proof also
goes through for the k-type \cite{K} isocurvaton, where the
kinetic terms of the isocurvaton and inflaton are allowed to be
generalized. After proving the no-go theorem, we discuss some
possible ways to bypass the theorem.
The proof of the no-go theorem also provides some new insights into
the combined inflaton and curvaton fluctuation \cite{Langlois:2004nn}. It is shown that in
the uniform inflaton density slice, the curvaton propagates
freely after the quantum initial condition is provided. This provides
a simplified treatment for the combined inflaton and curvaton
fluctuation.
We also combine the results of non-Gaussianity, isocurvature
perturbation and gravitational waves to constrain the curvaton
model. It is shown that the inequality $f_{NL}<
\frac{5}{432} r_T \left(\frac{M_p}{T}\right)^{2/3}$ must
be satisfied by the non-Gaussianity, the gravitational waves and the
temperature $T$ of the universe when the CDM was created.
This paper is organized as follows. In Section 2, we show the
virtues of the isocurvaton model, which is why the model is worth
investigating. In Section 3, we prove a no-go theorem for the
isocurvaton scenario in the slow roll non-interacting multi-field
models. In Section 4, we extend the proof to the case with
generalized kinetic terms. In Section 5, we discuss the implications
for the curvaton model. In Section 6, we provide a simplified
analysis for the combined inflation and curvaton perturbations. In
Section 7, we discuss some other possibilities including large
negative $f_{NL}$ and large equilateral non-Gaussianity. We conclude
and discuss some possibilities to bypass the no-go theorem in
Section 8.
\section{Virtues of Isocurvaton}
In this section, we discuss the virtues of isocurvaton. We show that
some features of this scenario are rather attractive. That is why a
no-go theorem is valuable to mark the forbidden regions.
We use $\varphi$ to denote the inflaton
and use $\chi$ to denote the isocurvaton. It is shown in
\cite{Lyth:2004gb} that if there is no interaction between these two
components, the curvature perturbations for inflaton and isocurvaton on
their uniform density slices are separately conserved. The proof is
reviewed briefly in the appendix. These curvature
perturbations can be written in a gauge invariant form as
\begin{equation}
\label{eq:zetas}
\zeta_\varphi=-\psi-H\frac{\delta\rho_\varphi}{\dot\rho_\varphi}~,~~~
\zeta_\chi=-\psi-H\frac{\delta\rho_\chi}{\dot\rho_\chi}~,
\end{equation}
where $\psi$ is the metric perturbation. The explicit definitions for
$\psi$, $\delta\rho_\varphi$ and $\delta\rho_\chi$ are given in the
appendix. The total curvature perturbation takes the form
\begin{equation}
\label{eq:zeta}
\zeta = -\psi-H\frac{\delta\rho}{\dot\rho}= r
\zeta_\chi+(1-r)\zeta_\varphi~,~~~ r\equiv \frac{\dot\rho_\chi}{\dot\rho_\chi+\dot\rho_\varphi}~.
\end{equation}
Note that if the inflaton and the isocurvaton have different equations
of state, $r$ will vary with respect to time. In this case, $\zeta$
is not conserved. Especially, during the epoch that inflaton has
decayed to radiation and the isocurvaton oscillates around its
minimum, we have $\rho_\varphi\propto a^{-4}$ and $\rho_\chi\propto
a^{-3}$. In this case, $r$ increases with time until the isocurvaton
decays. After curvaton decays to radiation, $r$ is a constant, and
$\zeta$ is conserved. If we would further assume $r\zeta_\chi\ll
(1-r)\zeta_\varphi$ when the isocurvaton decays, then
\begin{equation}
\label{eq:isocurvaton}
\zeta=(1-r)\zeta_\varphi~.
\end{equation}
From (\ref{eq:isocurvaton}) we observe that if the isocurvaton decays
very late so that $1-r\ll 1$, then the super Hubble horizon
perturbation is suppressed.
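For instance, in the epoch described above ($\rho_\varphi\propto a^{-4}$, $\rho_\chi\propto a^{-3}$) one has $\dot\rho_\varphi=-4H\rho_\varphi$ and $\dot\rho_\chi=-3H\rho_\chi$, so that (normalizing $a=1$ at the start of the epoch)

```latex
\begin{equation*}
r=\frac{\dot\rho_\chi}{\dot\rho_\chi+\dot\rho_\varphi}
=\frac{3\rho_\chi}{3\rho_\chi+4\rho_\varphi}
=\frac{3}{3+4\,(\rho_{\varphi 0}/\rho_{\chi 0})\,a^{-1}}~,
\end{equation*}
```

which grows monotonically toward $1$ as $a$ increases; a sufficiently late isocurvaton decay therefore gives $1-r\ll 1$.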
The direct consequence of suppressing the super-horizon perturbation
is to provide a solution to the problem of the flatness of the potential.
This can make inflation more natural, because a large scalar
type perturbation usually implies a non-flat potential.
The isocurvaton also serves as an amplifier for non-Gaussianity:
since the initial inflaton fluctuation is larger, it
should generate a larger non-Gaussianity than in the standard scenario.
This can be seen explicitly by writing
\begin{equation}
\label{eq:fnlvarphi}
\zeta_\varphi=\zeta_{\varphi g}+\frac{3}{5}f_{NL\varphi} \zeta_{\varphi g}^2~.
\end{equation}
Combining (\ref{eq:fnl}), (\ref{eq:isocurvaton}) and
(\ref{eq:fnlvarphi}), for the observable non-Gaussianity, we get
\begin{equation}
\label{eq:fnlfinal}
f_{NL}=\frac{1}{1-r} f_{NL\varphi}~.
\end{equation}
When $1-r\ll 1$, $f_{NL}$ can be large in the isocurvaton model.
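Explicitly, substituting (\ref{eq:fnlvarphi}) into (\ref{eq:isocurvaton}) and identifying the Gaussian part as $\zeta_g=(1-r)\zeta_{\varphi g}$ gives

```latex
\begin{equation*}
\zeta=(1-r)\zeta_{\varphi g}+\frac{3}{5}\,(1-r)f_{NL\varphi}\,\zeta_{\varphi g}^2
=\zeta_g+\frac{3}{5}\,\frac{f_{NL\varphi}}{1-r}\,\zeta_g^2~,
\end{equation*}
```

which is precisely the enhancement in (\ref{eq:fnlfinal}).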
For more general shapes of non-Gaussianity, the 3-point function of
$\varphi$ is also amplified by isocurvaton. To see this, note that
\begin{equation}
\label{eq:generalshape}
\langle\zeta_{{\bf k}1}\zeta_{{\bf k}2}\zeta_{{\bf k}3}\rangle=
(1-r)^3 \langle\zeta_{\varphi {\bf k}1}\zeta_{\varphi {\bf
k}2}\zeta_{\varphi {\bf k}3}\rangle ~.
\end{equation}
Generally, one can rewrite the above 3-point functions using the
bi-spectrum expression
\begin{equation}
\label{eq:bispectrum}
\langle\zeta_{{\bf k}_1}\zeta_{{\bf k}_2}\zeta_{{\bf k}_3}\rangle
\propto \delta^3({\bf k}_1+{\bf k}_2+{\bf k}_3) {\cal P}_\zeta^2
f_{NL}^{\rm (nonlocal)} {\cal A}(k_1,k_2,k_3)
~,
\end{equation}
where ${\cal P}_\zeta$ is the dimensionless power spectrum of
$\zeta$, and ${\cal A}(k_1,k_2,k_3)$ describes the shape of the
non-Gaussianity.
So for general shape non-Gaussianity we have
\begin{equation}
\label{eq:nonlocal}
f_{NL}^{\rm (nonlocal)}=\frac{1}{1-r} f_{NL\varphi}^{\rm (nonlocal)}~.
\end{equation}
As there are two components in the model, it is natural to ask
whether the isocurvature perturbation is produced in the model. The
treatment for isocurvature perturbation is the same as for the curvaton
model. If dark matter is produced after the isocurvaton decays, the
model can be consistent with the experimental results on
isocurvature perturbation.
In the isocurvaton scenario, the scalar perturbation is suppressed,
however, the tensor perturbation is unaffected. This results in an
enhancement for the tensor-to-scalar ratio. In this scenario,
the observed tensor-to-scalar ratio becomes
\begin{equation}
r_T\equiv
\frac{P_T}{P_\zeta}=\frac{1}{(1-r)^2}\frac{P_T}{P_{\zeta_\varphi}}=\frac{r_{T0}}{(1-r)^2}~,
\end{equation}
where $r_{T0}$ is the tensor-to-scalar ratio without the isocurvaton
dilution. If the isocurvaton scenario works, and future
experiments detect gravitational waves, then an isocurvaton with $1-r\ll
1$ can be a way to save a large number of string inflation models,
which predict that the gravitational waves are too small
to be detected. This possibility is investigated in detail in
\cite{isocurvaton}.
However, unluckily, as we shall show in the following two sections,
under reasonable assumptions, the isocurvaton scenario can not be
realized.
\section{A No-Go Theorem for Isocurvaton}
As stated in the introduction, there are theoretical obstructions to
constructing the isocurvaton model. In this section, we prove a
no-go theorem that the isocurvaton model can not be realized in the
standard non-interacting slow roll double field models.
We shall prove that in the $\delta\rho_\varphi=0$ gauge, to good
approximation, $\delta\chi$ propagates freely, and does not feel the
inflaton fluctuation or the gravitational potential. With this
result, the gauge invariant curvature perturbation can be written as
\begin{equation}
\zeta_\varphi=-\psi~,~~~\zeta_\chi=-\psi-H\frac{\delta\rho_\chi}{\dot\rho_\chi}=-\psi+\frac{\dot{\delta\chi}}{3\dot\chi}+\frac{1}{3}\frac{V_\chi}{\dot\chi^2}\delta\chi~,
\end{equation}
where as we shall prove, $\delta\chi$ is an independent stochastic
source other than $\psi$. So the $-\psi$ term in $\zeta_\chi$ can
not be canceled without fine-tuning, and $\zeta_\chi \ll
\zeta_\varphi$ can not be naturally realized. This result indicates
that the model considered above can not realize the isocurvaton
scenario.
To prove the free propagation of $\delta\chi$ in the
$\delta\rho_\varphi=0$ gauge, we first show that outside the
horizon, $\delta\chi$ propagates freely without a gravitational source
term. After that, we show that the initial condition for $\delta
\chi$ is determined by the quantum fluctuation before horizon exit,
and the influence from the inflaton fluctuation and the
gravitational potential can be neglected.
We start with the familiar Newtonian gauge perturbation
equations. Before the isocurvaton dominates the energy density, the
perturbation equations take the form
\begin{equation}\label{n00}
-3H (H\psi^{\rm (n)}+\dot\psi^{\rm (n)})-\frac{k^2}{a^2}\psi^{\rm
(n)}=4\pi G \delta\rho_\varphi^{\rm (n)}~,
\end{equation}
\begin{equation}\label{n11}
(2\dot H+3H^2)\psi^{\rm (n)}+ 4H\dot\psi^{\rm (n)}+\ddot\psi^{\rm
(n)}=4\pi G\delta p_\varphi^{\rm (n)}~,
\end{equation}
\begin{equation}\label{nchi}
\ddot{\delta\chi}^{\rm (n)}+ 3H \dot{\delta\chi}^{\rm (n)}
+V_{\chi\chi}\delta\chi^{\rm (n)}=-2V_\chi\psi^{\rm (n)}+4\dot\chi\dot\psi^{\rm (n)}~,
\end{equation}
where the superscript ``(n)'' denotes the Newtonian gauge. The gauge
transformation from the Newtonian gauge to the $\delta\rho_\varphi=0$
gauge can be written as
\begin{equation}\label{gauge}
\psi^{\rm (n)}=\psi-H\beta~,~~~\delta x^{\rm (n)}=\delta x+\dot x
\beta~,~~~
\beta\equiv \frac{\delta\rho_\varphi^{\rm (n)}}{\dot\rho_\varphi}~,
\end{equation}
where $x=x(t)$ denotes a background scalar field, and $\delta x$
stands for its perturbation. We assume that
$p_\varphi=p_\varphi(\rho_\varphi)$, so that in the
$\delta\rho_\varphi=0$ gauge, we have $\delta p_\varphi=0$. This
assumption holds for the ideal fluid without intrinsic isocurvature
perturbation, as well as the scalar field outside the inflationary
horizon. The proof for the ideal fluid is straightforward, and the
proof for the scalar field is given in the appendix.
We first consider the $k\ll aH$ limit.
Changing the equations into the $\delta\rho_\varphi=0$
gauge, one can simplify Eqs. (\ref{n00}) and (\ref{n11}) as
\begin{equation}\label{0011}
\dot\psi=0~,~~~\dot\beta=\psi-H\beta~.
\end{equation}
Note that in this paper, all perturbation variables without the
superscript (n) denote perturbations in the $\delta\rho_\varphi=0$
gauge if not stated otherwise.
Writing Eq. (\ref{nchi}) in the $\delta\rho_\varphi=0$ gauge, and
using Eq. (\ref{0011}), we find
\begin{equation}
\label{eq:freechi}
\ddot{\delta\chi}+3H\dot{\delta\chi}+V_{\chi\chi}\delta\chi=0~.
\end{equation}
The $\psi$ and $\beta$ terms are canceled in this equation. In other
words, in this gauge, $\chi$ does not feel the gravitational
potential and propagates freely.
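As a numerical illustration (a sketch with hypothetical parameters: constant $H$ and constant $V_{\chi\chi}=m^2\ll H^2$), integrating (\ref{eq:freechi}) shows that the light-field perturbation simply freezes on super-horizon scales:

```python
import numpy as np

# Free super-horizon evolution of the isocurvaton perturbation in the
# delta rho_phi = 0 gauge:  d2(dchi)/dt2 + 3 H d(dchi)/dt + Vxx * dchi = 0.
# For a light field (hypothetical numbers: m^2/H^2 = 0.01, de Sitter H = const)
# the perturbation only decays on the slow timescale 3H/m^2.
H, m2 = 1.0, 0.01

def rhs(y):
    dchi, v = y
    return np.array([v, -3.0 * H * v - m2 * dchi])

y, t, dt = np.array([1.0, 0.0]), 0.0, 1e-3
while t < 10.0:                      # integrate 10 e-folds with classical RK4
    k1 = rhs(y)
    k2 = rhs(y + 0.5 * dt * k1)
    k3 = rhs(y + 0.5 * dt * k2)
    k4 = rhs(y + dt * k3)
    y, t = y + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4), t + dt
print(y[0])   # stays close to 1: dchi ~ exp(-m2 t / (3 H)), here about 0.97
```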
This result can be obtained in a simpler way. One can show
that $\delta\chi$ is proportional to the isocurvature perturbation.
Then from the well-known result in double field inflation that
isocurvature perturbation propagates without source outside the
horizon, we obtain that $\delta\chi$ propagates freely. However, we
still write down the derivation explicitly, because this derivation
is rather general, holds after $\varphi$ decays, and can be used to
simplify some calculations in the curvaton model.
Now let us consider the $k\gg aH$ and $k\sim aH$ case, and see whether
$\delta\chi$ can feel the gravitational potential. Note that the super
horizon analysis only requires $p_\varphi=p_\varphi(\rho_\varphi)$,
and does not require detailed information about the
inflaton. While to investigate the horizon crossing, we need to
focus on the standard single field inflaton plus the isocurvaton.
We employ the results in the double field inflation model
\cite{Gordon:2000hv} to rewrite $\varphi$ and $\chi$ into
the inflation direction $\sigma$ and the isocurvature direction $s$,
\begin{equation}
\label{eq:sigmas}
\delta\sigma\equiv\cos\theta\delta\varphi+\sin\theta\delta\chi~,~~~
\delta s\equiv -\sin\theta\delta\varphi+\cos\theta\delta\chi~,~~~
\sin\theta\equiv\frac{\dot\chi}{\sqrt{\dot\varphi^2+\dot\chi^2}}~.
\end{equation}
Note that $\delta s$ is automatically gauge invariant. If
$\dot\chi\ll \dot\varphi$, we have $\theta\simeq 0$
during inflation. The inflation direction then does not change, and
the isocurvature perturbation is obviously sourceless. However, we
do not limit to this case, because we only require
$\rho_\chi\ll\rho_\varphi$ during inflation.
The perturbation equation for the isocurvature direction can be written as
\begin{equation}\label{ds}
\ddot{\delta s}+3H\dot{\delta
s}+\left(\frac{k^2}{a^2}+V_{ss}+3\dot\theta^2\right)\delta s=
\frac{\dot\theta}{\dot\sigma}\frac{k^2}{2\pi G a^2}\psi^{\rm (n)}~.
\end{equation}
Now we shall prove that the RHS of Eq. (\ref{ds}) is much smaller
than a typical term in the LHS. To see this, we first estimate the
fluctuation amplitude of the inflation direction. From the
perturbation in the Newtonian gauge $\dot\psi^{\rm (n)}+H\psi^{\rm
(n)}=4\pi G\dot\sigma\delta\sigma^{\rm (n)}$ and the slow roll
condition, we have
\begin{equation}
\left|\frac{\dot\sigma}{H}\psi^{\rm (n)}\right|\leq \left|\frac{4\pi G\dot\sigma^2\delta\sigma^{\rm
(n)}}{H^2}\right| \ll \delta\sigma^{\rm (n)}~.
\end{equation}
From the amplitude for $\delta\sigma$ in the $\psi=0$ gauge
\cite{Gordon:2000hv}, we have
\begin{equation}
\delta\sigma^{\rm (n)}\simeq \delta\sigma^{\rm (n)}+\frac{\dot\sigma}{H}\psi^{\rm (n)}= \left(\delta\sigma\right)_{\psi=0~\rm
gauge}\sim a^{-1}k^{-1/2} e^{-ik\tau}~.
\end{equation}
Next, substituting this amplitude into $\dot\psi^{\rm (n)}+H\psi^{\rm
(n)}=4\pi G\dot\sigma\delta\sigma^{\rm (n)}$, we have
\begin{equation}
|\psi^{\rm (n)}|\sim k^{-3/2}\frac{\dot\sigma}{M_p^2}~,
\end{equation}
where we only want to count the orders in slow roll parameters, so the
numerical coefficients are neglected.
The source term in the RHS
of (\ref{ds}) takes the form
\begin{equation}
\left|\frac{\dot\theta}{\dot\sigma}\frac{k^2}{2\pi G a^2}\psi^{\rm (n)}\right|\sim
\left|\frac{\dot\chi\ddot\varphi-\ddot\chi\dot\varphi}{\dot\varphi^2+\dot\chi^2}\right|
\times\frac{k^2}{a^2} k^{-3/2}\ll H \frac{k^2}{a^2} k^{-3/2}~,
\end{equation}
where we have used the slow roll approximation
\begin{equation}
\left|\frac{\dot\chi\ddot\varphi-\ddot\chi\dot\varphi}{\dot\varphi^2+\dot\chi^2}\right|
\leq\left|\frac{\dot\chi\ddot\varphi-\ddot\chi\dot\varphi}{2\dot\varphi\dot\chi}\right|\leq\left|\frac{\ddot\varphi}{2\dot\varphi}\right|+\left|\frac{\ddot\chi}{2\dot\chi}\right|\ll H~.
\end{equation}
On the other hand, the quantum initial condition of $\delta s$ is $|\delta s| \sim
a^{-1}k^{-1/2}$, so when $k\geq aH$, a typical term on the LHS of (\ref{ds}) satisfies
\begin{equation}
\left|\frac{k^2}{a^2}\delta s\right| \geq H \frac{k^2}{a^2} k^{-3/2} \gg
\left|\frac{\dot\theta}{\dot\sigma}\frac{k^2}{2\pi G
a^2}\psi^{\rm (n)}\right|~.
\end{equation}
Since horizon exit does not take too many e-folds, the slow-roll
suppression implies that the initial condition of $\delta s$ is set
by its quantum fluctuation, and that the influence of the
gravitational potential can be ignored.
Physically, this result originates from the fact that the inflaton
and the isocurvaton fields couple weakly due to the slow roll
conditions.
Note that we have ignored the back-reaction of $\delta s$ on the
inflaton direction. This approximation can also be verified using the
slow roll approximation.
After horizon exit, in the $\delta\rho_\varphi=0$ gauge,
$\delta\varphi\simeq0$ \cite{Lyth:2004gb}, so in this case $\delta
s\simeq\cos\theta\,\delta\chi$. Combining this with the above
discussion of $\delta s$, we conclude that the initial condition for
$\delta\chi$ is set by its quantum fluctuation.
Finally, recalling Eq. (\ref{eq:freechi}), we conclude that
$\delta\chi$ has its own independent quantum initial condition,
evolves freely, and does not feel the gravitational potential.
This proof of the no-go theorem can be generalized directly to the
non-interacting multi-field isocurvaton case. So increasing the number
of fields does not make things better.
There are two exceptions where the above proof does not apply,
namely, the vacuum energy and a field with a completely flat
potential as the isocurvaton. However, neither can serve as an
isocurvaton. The vacuum energy can be thought of as a shift of
the inflaton potential, so it can not dilute the inflaton
perturbation. For a field with a completely flat potential, the
solutions for both $\sigma$ and $\delta\sigma$ have a constant mode
plus a decaying mode, and the constant mode does not contribute to
$\zeta_\sigma$. So up to the decaying mode, the flat potential case
is the same as the vacuum energy case, and can not serve as an
isocurvaton.
In this section, we have provided a direct and self-contained proof
of the no-go theorem for the isocurvaton. In order to link double
field inflation to observations, analyses similar to the above proof
have been performed in the literature. In a series of papers
\cite{doublefield}, the authors proved that the cross correlation
between the adiabatic and entropy modes is suppressed by the slow
roll parameters, and the primordial adiabatic mode is related to the
adiabatic mode at horizon crossing by
\begin{equation}
P_\zeta=\frac{P_{\zeta*}}{\sin^2\Theta}~,~~~\sin\Theta\equiv\frac{1}{\sqrt{1+{\cal
T}_{\cal RS}^2}}~,
\end{equation}
where ${\cal T}_{\cal RS}$ is the transfer function from entropy
mode to adiabatic mode. From this relation, one can see that the
super horizon perturbation can not be suppressed in the context of
slow roll inflation. This reasoning also extends to the interacting
double field theory with some additional slow roll assumptions.
\section{Proof for Generalized Kinetic Terms}
In this section, we try to generalize the no-go theorem in the last
section to the case of generalized kinetic terms. As done in the
last section, we first investigate the super horizon evolution, and
then study the horizon crossing.
Consider the isocurvaton Lagrangian $P=P(X(\chi),\chi)$, where
$X(\chi)=\frac{1}{2}g^{\mu\nu}\partial_\mu\chi\partial_\nu\chi$, and
a general dominant component originating from the inflaton, with
$p_\varphi=p_\varphi(\rho_\varphi)$. In the $k\ll aH$ limit, the
coupled equations (\ref{n00}), (\ref{n11}) for $\rho_\varphi$,
$p_\varphi$ and $\phi$ are unchanged, so we still have
(\ref{0011}).
The equation of motion for $\chi$ takes the form
\begin{equation}
P_X g^{\mu\nu}\nabla_\mu\nabla_\nu\chi+\partial_\mu P_X
g^{\mu\nu}\partial_\nu \chi-P_\chi=0 ~.
\end{equation}
Expanding this equation to the zeroth and first order in the
perturbation variables, we get the background and the leading order
perturbation equations in the Newtonian gauge,
\begin{equation}\label{generalizedbackground}
\partial_t (P_X\dot\chi) + 3HP_X\dot\chi-P_\chi=0~,
\end{equation}
\begin{equation}\label{generalizedpert}
\ddot\chi\delta P_X^{\rm (n)} + P_X\ddot{\delta\chi}^{\rm
(n)}+\dot\chi\dot{\delta P}_X^{\rm (n)}+\dot P_X\dot{\delta\chi}^{\rm
(n)} + 3H\dot\chi\delta P_X^{\rm (n)}+3HP_X\dot{\delta\chi}^{\rm
(n)} - 2P_\chi\phi^{\rm (n)}-4 P_X\dot\chi\dot\phi^{\rm
(n)}-\delta P_\chi^{\rm (n)}=0~.
\end{equation}
Terms such as $\delta P_\chi^{\rm (n)}$ in Eq.
(\ref{generalizedpert}) can be expanded into more explicit forms.
But we do not need this expansion for our purpose. Note that $X$,
$P_X$ and $P_\chi$ are scalars under the gauge transformation
(\ref{gauge}). Using the background equation of motion
(\ref{generalizedbackground}), it can be shown that in the
$\delta\rho_\varphi=0$ gauge, all the source terms in
(\ref{generalizedpert}) vanish,
\begin{equation}\label{generalizedpertgauge}
\ddot\chi\delta P_X + P_X\ddot{\delta\chi}+\dot\chi\dot{\delta P}_X
+\dot P_X\dot{\delta\chi} + 3H\dot\chi\delta P_X+3HP_X\dot{\delta\chi}
-\delta P_\chi=0~.
\end{equation}
Again, this result is not surprising, as the isocurvature
perturbation should be sourceless outside the horizon.
For horizon crossing, we focus on the model in which the inflaton and
the isocurvaton have a unified generalized kinetic term. The
Lagrangian of the model takes the form
\begin{equation}
P=P(X,\varphi,\chi)~,~~~
X=\frac{1}{2}G_{IJ}\nabla_{\mu}\varphi^I\nabla^\mu\varphi^J~,
\end{equation}
where $G_{IJ}$ is the metric in the field space, and $I,J=\{1,2\}$
such that $\varphi^1=\varphi$, $\varphi^2=\chi$.
Using the results obtained in \cite{Langlois:2008mn}, the
isocurvature direction perturbation (\ref{eq:sigmas}) can be written
as
\begin{equation}
\ddot{\delta s}+\left(3H+\frac{\dot P_X}{P_X}\right)\dot{\delta s}+
\left(\frac{k^2}{a^2}+\mu_s^2+\frac{\Xi^2}{c_s^2}\right)\delta s
=-\frac{\dot\sigma}{\dot H} \Xi \frac{k^2}{a^2}\psi^{\rm (n)}~,
\end{equation}
where
\begin{equation}
\Xi \equiv \frac{1}{\dot\sigma
P_X}\left((1+c_s^2)P_s-c_s^2P_{Xs}\dot\sigma^2\right)~,
\end{equation} and
\begin{equation}
\mu_s^2\equiv -\frac{P_{ss}}{P_X}+\frac{1}{2}\dot\sigma^2\bar R
-\frac{1}{2 c_s^2 X}\frac{P_s^2}{P_X^2}+2\frac{P_{Xs}P_s}{P_X^2}~,
~~~c_s^2\equiv \frac{P_X}{P_X+2X P_{XX}}~,
\end{equation}
where $\bar R$ is the scalar curvature in the field space. Note that
$\Xi$ plays the role of $\theta$ in the standard kinetic term case,
which characterizes the coupling between the inflaton direction and
the isocurvature perturbations.
When $\Xi=0$, the isocurvaton propagates freely at horizon crossing.
So we still find that $\zeta_\chi$ equals $\zeta_\varphi$ plus a
term originating from an independent random quantum initial
fluctuation, and the isocurvaton scenario does not work.
If $\Xi$ is large enough to provide a source for the isocurvature
perturbation, the above proof breaks down. However, the $\Xi\neq 0$
case has not been investigated analytically in the literature, with
only numerical results available (see, {\it e.g.},
\cite{Yang:2008ns}). We are not able to prove the no-go theorem in
this case.
Note that in the proof at horizon crossing, the generalized
kinetic term introduces interactions between $\varphi$ and
$\chi$. However, our proof makes no reference to conserved
quantities during inflation, so the interaction is not an
obstruction to our proof.
\section{Implication for the Curvaton Models}
In this section, we first show that the no-go theorem proved above
does not rule out the curvaton scenario. We then discuss some
physical constraints on the curvaton model from the non-Gaussianity,
isocurvature perturbation and gravitational wave experimental data.
Let us see why the curvaton scenario is not affected by the no-go
theorem. Most of the calculations in the above two sections apply
to the curvaton scenario upon setting $\psi^{\rm (n)}=0$. But in the
curvaton scenario, it is the curvaton field, not the inflaton field,
that produces the primordial perturbations: $\zeta_\varphi$ is
small, and does not need to be canceled by the curvaton field. So
the no-go theorem does no harm to the curvaton scenario.
However, the observables we consider, namely, non-Gaussianity,
isocurvature perturbation and gravitational waves, do put a tight
constraint on the curvaton scenario. In the remainder of this
section, we shall combine the ``non-Gaussianity + gravitational
waves'' \cite{Huang:2008ze} and the ``non-Gaussianity + isocurvature
perturbation'' constraints \cite{WMAP, Lyth:2002my, Beltran:2008ei} to provide a
more complete constraint on the curvaton model.
We first quickly review the results of \cite{Huang:2008ze}. Consider
the simplest curvaton model. To distinguish it from the inflaton
direction used in the previous sections, we denote the curvaton field
by $\chi$. The local shape non-Gaussianity is related to the ratio
of energy densities when the curvaton decays,
\begin{equation}
f_{NL}\simeq \frac{5}{4r}~,~~~r=\left(\frac{\rho_\chi}{\rho_{\rm
tot}}\right)_D~.
\end{equation}
Note that by writing this equation, we have assumed $f_{NL}>1$;
otherwise, the order-one and order-$r$ terms in $f_{NL}$ can dominate
the expression.
The curvaton starts to oscillate after the inflaton decays into
radiation. As the curvaton is much lighter than the Hubble scale
during inflation, the field value of the curvaton is practically
unchanged from the time of horizon exit to the time the curvaton
starts to oscillate. So when the curvaton starts to oscillate, we
have
\begin{equation}
H=m~,~~~\rho_\chi=\frac{1}{2}m^2\chi_*^2~,~~~\rho_\varphi=3m^2M_p^2~,
\end{equation}
where $m$ is the curvaton mass, and $\chi_*$ is the curvaton field
value at horizon exit.
Another important time scale in the curvaton scenario is the time of
curvaton decay. When the curvaton decays, we have
\begin{equation}
H=\Gamma~, ~~~\rho_\varphi=3M_p^2\Gamma^2~.
\end{equation}
From $\rho_\chi\propto a^{-3}$ and $\rho_\varphi\propto a^{-4}$, we
have
$\rho_\chi=\frac{\chi_*^2}{6M_p^2}\left(m/\Gamma\right)^{1/2}\rho_\varphi$
when the curvaton decays. So
\begin{equation}
r=\frac{\chi_*^2}{6M_p^2}\left(\frac{m}{\Gamma}\right)^{1/2}~.
\end{equation}
In terms of $f_{NL}$, we have
\begin{equation}\label{f1}
f_{NL}=\frac{15}{2}\frac{M_p^2}{\chi_*^2}\left(\frac{\Gamma}{m}\right)^{1/2}~.
\end{equation}
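As a sanity check of Eq. (\ref{f1}), the following short numerical sketch (not part of the original analysis; all field values are arbitrary test numbers in units of $M_p=1$) verifies that $f_{NL}=5/(4r)$ together with the expression for $r$ above reproduces the closed form (\ref{f1}):

```python
# Consistency check of Eq. (f1) (a sketch, not part of the original
# analysis): starting from f_NL = 5/(4r) with
# r = chi_*^2/(6 M_p^2) * (m/Gamma)^(1/2), we should recover
# f_NL = (15/2) * (M_p^2/chi_*^2) * (Gamma/m)^(1/2).
# All values are arbitrary test numbers in units of M_p = 1.
M_p = 1.0
chi_star = 1e-2    # curvaton field value at horizon exit (test value)
m = 1e-8           # curvaton mass (test value)
Gamma = 1e-12      # curvaton decay rate (test value)

r = chi_star**2 / (6.0 * M_p**2) * (m / Gamma)**0.5
f_NL_from_r = 5.0 / (4.0 * r)
f_NL_closed_form = 7.5 * (M_p**2 / chi_star**2) * (Gamma / m)**0.5

# The two expressions agree to machine precision (here both equal 750).
assert abs(f_NL_from_r - f_NL_closed_form) < 1e-9 * f_NL_closed_form
```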
On the other hand, from the power spectrum
\begin{equation}\label{f2}
P_\zeta^{1/2}=\frac{1}{3\pi} r \frac{H_*}{\chi_*}=\frac{5}{12\pi} \frac{1}{f_{NL}}
\frac{H_*}{\chi_*}~.
\end{equation}
Using (\ref{f1}) and (\ref{f2}) to eliminate the unknown $\chi_*$, we
have
\begin{equation}\label{f3}
f_{NL}=\frac{5}{432} r_T
\left(\frac{m}{\Gamma}\right)^{1/2}~,
\end{equation}
where $r_T\equiv {P_T}/{P_\zeta}$ is the tensor-to-scalar ratio, and
$P_T\equiv 2H_*^2/(\pi^2M_p^2)$ is the tensor mode power spectrum.
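The algebra leading to Eq. (\ref{f3}) can be checked numerically. In the sketch below, the values of $\chi_*$, $m$, $\Gamma$ and $H_*$ are arbitrary test numbers chosen only to exercise the formulas, not physical best fits; the two expressions for $f_{NL}$ should then agree identically:

```python
# A numerical check of Eq. (f3) (a sketch under arbitrary test values):
# with P_zeta^(1/2) = r H_* / (3 pi chi_*)  (Eq. (f2)) and
# P_T = 2 H_*^2 / (pi^2 M_p^2), the ratio r_T = P_T / P_zeta should
# turn f_NL = 5/(4r) into f_NL = (5/432) r_T (m/Gamma)^(1/2) exactly.
import math

M_p = 1.0
chi_star = 3e-3    # test value
m = 1e-8           # test value
Gamma = 1e-13      # test value
H_star = 1e-6      # Hubble scale at horizon exit (test value)

r = chi_star**2 / (6.0 * M_p**2) * (m / Gamma)**0.5
f_NL = 5.0 / (4.0 * r)

P_zeta = (r * H_star / (3.0 * math.pi * chi_star))**2
P_T = 2.0 * H_star**2 / (math.pi**2 * M_p**2)
r_T = P_T / P_zeta

f_NL_from_f3 = (5.0 / 432.0) * r_T * (m / Gamma)**0.5
assert abs(f_NL - f_NL_from_f3) < 1e-9 * f_NL
```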
Finally, note that the curvaton decay rate $\Gamma$ should be larger
than the decay rate via gravitational coupling,
\begin{equation}\label{gammag}
\Gamma>\Gamma_g\simeq\frac{m^3}{M_p^2}~.
\end{equation}
The results above have been presented in
\cite{Huang:2008ze}. We now use them together with the isocurvature
constraint to derive a new inequality. Using (\ref{gammag}) to eliminate
$m$ in (\ref{f3}), we have
\begin{equation}\label{f4}
f_{NL}<\frac{5}{432}r_T\left(\frac{M_p}{\Gamma}\right)^{1/3}~.
\end{equation}
It is shown in \cite{WMAP, Lyth:2002my, Beltran:2008ei} that to avoid a large
isocurvature perturbation, cold dark matter (CDM) should be produced
after the curvaton decays. So we have
\begin{equation}
\Gamma>H_{\rm CDM}~,
\end{equation}
where $H_{\rm CDM}$ denotes the Hubble parameter when CDM is
produced. $H_{\rm CDM}$ can be related to the temperature $T$ of the
universe at CDM production by
\begin{equation}
H_{\rm CDM}\simeq T^2/M_p~.
\end{equation}
Using Eq. (\ref{f4}), we have
\begin{equation}\label{f5}
f_{NL}< \frac{5}{432} r_T \left(\frac{M_p}{T}\right)^{2/3}~.
\end{equation}
This bound should be used in combination with the bound given in
\cite{Huang:2008ze},
\begin{equation}\label{qg}
f_{NL}< 522 r_T^{1/4}~.
\end{equation}
The more constraining of the two should be used to obtain the final constraint.
\begin{figure}
\center
\includegraphics[width=11cm]{r.eps}
\caption{In this figure, we show the constraints (\ref{f5}) and (\ref{qg}).
The upper panel shows the allowed region of $f_{NL}$ as
a function of $r_T$ from (\ref{qg}); the shaded region is the allowed region.
The lower panel shows the temperature at CDM production that saturates the
inequalities. In the shaded region, (\ref{qg}) is stricter than (\ref{f5});
in the unshaded region, (\ref{f5}) is stricter than (\ref{qg}). }
\end{figure}
For example, assume $f_{NL}$ is produced by the curvaton scenario.
If $f_{NL}=50$, then $r_T>10^{-4}$. If the lower bound $r_T=10^{-4}$
is saturated, we get $T< 10^{-11} M_p\sim 10^{7}$GeV. This
constraint is not very tight for $m_{\rm CDM}$. However, it already
rules out some CDM candidates within the curvaton scenario, such as
invisible axions, magnetic monopoles and pyrgons.
On the other hand, if $f_{NL}= 5$ then $r_T>10^{-8}$. This seems
more natural in the small field inflation models. In this case, if
the lower bound is saturated, we get $T<10^{-17}M_p\sim 10$GeV. This
further rules out some dark matter candidates such as primordial black
holes.
The inequality (\ref{f5}) is applicable down to $f_{NL}\sim 1$; below
that, other corrections to $f_{NL}$ begin to dominate. In this
limit, $r_T>10^{-11}$. If this bound is saturated, then
$T<10^{-19}M_p\sim 100$ MeV. This bound becomes borderline for LSP
and quark-nugget type dark matter.
Finally, we would like to compare our result with a recent paper
\cite{Beltran:2008ei}, whose author also aims
at a bound relating $f_{NL}$, $r_T$ and
$T$. The difference between our work and \cite{Beltran:2008ei} is that
we use different methods to constrain the curvaton mass
$m$: \cite{Beltran:2008ei} uses the spectral index, which gives
$m<0.1H_*$, while we use Eq. (\ref{gammag}). This
difference leads to different final results. In \cite{Beltran:2008ei},
the constraint takes the form
the constraint takes the form
\begin{equation}\label{Beltran}
T < 1.9\times 10^{-4} r_T^{5/4}f_{NL}^{-1}M_p~.
\end{equation}
When $r_T^{1/2}<0.023f_{NL}$, our bound Eq. (\ref{f5}) is tighter
than Eq. (\ref{Beltran}); when $r_T^{1/2}>0.023f_{NL}$,
Eq. (\ref{Beltran}) is tighter.
\section{Improved Treatment for Combined Inflaton and Curvaton
Perturbations}
Using the techniques developed in this paper, we can
simplify some calculations for the mixed inflaton and curvaton
scenario in the literature.
Note that the perturbation equations in the curvaton scenario (setting
$\psi^{\rm (n)}=0$), the isocurvaton scenario, and the mixed inflaton
and curvaton scenario are the same. The simplification arises because
we have chosen the $\delta\rho_\varphi=0$ gauge: this gauge choice,
or equivalently a rearrangement of the variables, diagonalizes the
perturbation equations outside the horizon.
Consider the scenario investigated in \cite{Langlois:2004nn}, with the curvaton
evolving in the radiation dominated era. Using the method of
\cite{Langlois:2004nn}, one is forced to solve a Bessel equation with a
source term
\begin{equation}
\ddot{\delta\chi}^{\rm (n)}+
3H\dot{\delta\chi}^{\rm (n)}+m^2\delta\chi^{\rm (n)}=4\dot\chi\dot\psi^{\rm (n)}-2m^2\chi\psi^{\rm
(n)}~, ~~~H=\frac{1}{2t}~.
\end{equation}
Although this equation can be solved analytically, it saves
some calculation, and is easier to generalize, if one rewrites
the equation in a sourceless form. This simplification is just
what we have done in Eq. (\ref{eq:freechi}), where we have
considered a general potential for the curvaton and a general equation
of state for the inflaton.
For the $m^2\chi^2$ type curvaton potential, Eq. (\ref{eq:freechi}) reads
\begin{equation}
\ddot{\delta\chi}+ 3H\dot{\delta\chi}+m^2\delta\chi=0~.
\end{equation}
Note that this equation has the same form as the background
evolution equation for $\chi$. So without solving any differential
equations, we know that the non-decaying solution is
\begin{equation}
\delta\chi=\frac{\chi}{\chi_0}\delta\chi_0~,
\end{equation}
where $\delta\chi_0$ is an integration constant. Using (\ref{gauge})
to translate this result into the Newtonian gauge, we have
\begin{equation}
\delta\chi^{\rm (n)}=\frac{\chi}{\chi_0}\left(\delta\chi_0^{\rm (n)}
-\dot\chi_0 t_0 \psi_0^{\rm (n)}\right)+t\dot\chi\psi^{\rm (n)}~.
\end{equation}
Note that $\delta\chi_0^{\rm (n)}
-\dot\chi_0 t_0 \psi_0^{\rm (n)}$ is just a constant, which can
be redefined as $\delta\chi_0^{\rm (n)}$. We then recover one of the
key results of \cite{Langlois:2004nn},
\begin{equation}
\delta\chi^{\rm (n)}=\frac{\chi}{\chi_0}\delta\chi_0^{\rm (n)}
+t\dot\chi\psi^{\rm (n)}~,
\end{equation}
where we have not made any reference to the equation of state of
the inflaton component. Other results for the $m^2\chi^2$ potential
in \cite{Langlois:2004nn} can be recovered similarly.
\section{Generalizations for the Curvaton Model}
In this section, we investigate the possibility of a large negative
$f_{NL}$ and of a nonlocal shape $f_{NL}$ in the curvaton model. These
possibilities can be realized by a phantom curvaton and a k-curvaton,
respectively. These models may seem exotic and are not supported by any
evidence so far; for example, the vacuum stability and the
quantization problems of the phantom model are not solved. However,
we investigate these models as purely phenomenological possibilities.
In reference \cite{WMAP}, the WMAP5 data have
been analyzed by two different methods. It is intriguing that the two
methods prefer central values
of $f_{NL}^{local}$ opposite in sign. In particular, both at the
$95\%$ confidence level, the bispectrum analysis gives the best
estimate $-9<f_{NL}^{local}<111$, while the analysis of Minkowski
functionals prefers $-178<f_{NL}^{local}<64$. It is
still unclear why they are so different. However, if one
naively disregards the bispectrum analysis for the moment, and takes
seriously the central value from the Minkowski functionals, one is
motivated to search for models with $f_{NL}^{local}\ll-1$. Let us
see what happens if the curvaton component is phantom-like
\cite{Caldwell:1999ew}. In that case, Eqs. (\ref{eq:zeta}) and
(\ref{eq:zetas}) are still valid. However, now $\dot\rho_\chi>0$, so
we have $r<0$, and $\zeta_\chi$ has the opposite sign to $\zeta$.
It is well known that a different sign in $\zeta$ produces a
different sign in $f_{NL}$. The calculation of $f_{NL}$ goes
through in the phantom model, so when $|r|\ll 1$, we have
\begin{equation}
f_{NL}\sim \frac{1}{r}\ll -1~.
\end{equation}
Although the bispectrum analysis of WMAP5 (which is widely taken as
the best estimate) does not prefer a large negative $f_{NL}$, its
lower bound $f_{NL}\simeq -9$ still leaves a narrow window for the
phantom-like curvaton. From the opposite viewpoint, this
can be regarded as another piece of evidence that nature disfavors phantom.
It is also worth noting that if the equation-of-state index of the
curvaton crosses $-1$ \cite{Feng:2004ad}, then the non-Gaussianity
produced by the curvaton model also crosses $-1$. To realize this
possibility, one usually needs more than one curvaton field
\cite{Xia:2007km}.
Now consider the k-curvaton possibility. If the curvaton has
generalized kinetic terms, then the equilateral non-Gaussianity for
the curvaton is also large. Similar to (\ref{eq:nonlocal}), we have
\begin{equation}
\label{nonlocal}
f_{NL}^{\rm (nonlocal)}=\frac{1}{r} f_{NL\chi}^{\rm (nonlocal)}~.
\end{equation}
This amplification can easily produce very large equilateral
non-Gaussianity. Note that $f_{NL}^{\rm (nonlocal)}\sim 1/c_s^2$.
For example, if $1/c_s^2\simeq 5$, and $f_{NL}\simeq 50$, then we
find $f_{NL}^{\rm (nonlocal)}\sim 250$. The experimental
bound is $-151<f_{NL}^{\rm (nonlocal)}<253$ (95\% CL). If both large
local and nonlocal non-Gaussianities are observed, the
k-curvaton provides a satisfying explanation.
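As a rough check of the quoted amplification (an illustration, not a full calculation): with $f_{NL}=5/(4r)$ for the local part, the enhancement factor in (\ref{nonlocal}) is $1/r=4f_{NL}/5$, and multiplying by the intrinsic equilateral non-Gaussianity $\sim 1/c_s^2$ gives a nonlocal signal of a few hundred, the same order as the $\sim 250$ quoted above:

```python
# Order-of-magnitude check of the k-curvaton amplification (an
# illustration using the text's example values): with f_NL = 5/(4r)
# for the local part, the enhancement factor in Eq. (nonlocal) is
# 1/r = 4 f_NL / 5, and the intrinsic equilateral non-Gaussianity is
# taken as f_NLchi ~ 1/c_s^2.
f_NL_local = 50.0
inv_r = 4.0 * f_NL_local / 5.0   # 1/r = 40 for f_NL = 50
inv_cs2 = 5.0                    # the text's example value of 1/c_s^2
f_NL_nonlocal = inv_r * inv_cs2  # a few hundred, the same order
                                 # as the ~250 quoted in the text
```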
\section{Conclusion and Discussion}
To conclude, in this paper, we have investigated the
isocurvaton scenario. We found that although the isocurvaton
scenario possesses attractive features such as the enhancement of
non-Gaussianity and gravitational waves, it can not be
realized in slow roll multi-field models. This no-go theorem can
be extended to generalized kinetic terms under the assumption $\Xi=0$.
The techniques used in this paper can simplify some calculations in
the mixed curvaton and inflaton scenario, allowing an easier
investigation of more general mixed perturbations.
We showed that the no-go result does no harm to the curvaton
scenario. However, the experimental bound on non-Gaussianity,
isocurvature perturbation, and gravitational waves provide a combined
constraint (Eq. (\ref{f5})) on the curvaton model.
We also investigated the phenomenology of phantom and kinetic
curvatons. We showed that the phantom curvaton provides
$f_{NL}\ll-1$, and the k-curvaton provides very large equilateral
non-Gaussianity as well as the local non-Gaussianity.
Finally, let us discuss some possibilities to bypass the no-go
theorem for isocurvaton. The following possibilities are not covered
by the no-go theorem:
\begin{enumerate}
\item Adding interactions. It has been reported from numerical
calculations that
interactions can suppress the super horizon perturbations
\cite{Multamaki:2006tb}. It would be interesting to investigate
whether similar mechanisms can realize the isocurvaton scenario.
\item Relaxing the slow roll condition for the isocurvaton field.
It is challenging to construct fast rolling isocurvaton
field which can fit the experimental results.
\item Other forms of generalized kinetic terms, including
separate generalized kinetic terms for the inflaton and the curvaton
during inflation, and higher derivative terms such as the box term. The
possibility $\Xi\neq 0$ is also worth investigating.
\end{enumerate}
\section*{Acknowledgment}
This work is supported by grants of NSFC. We thank Xian Gao for
discussions.
\section*{Appendix}
We explain the notation we use, and review some well-known facts in
the cosmological perturbation theory.
In the linear perturbation theory, assuming a flat universe ($K=0$),
and without choosing any gauge, the metric for the scalar
perturbation takes the form
\begin{equation}
g_{\mu\nu}=\left(
\begin{array}{cc}
1+2\phi & -\beta_{,i}\\
-\beta_{,i}& -a^2((1-2\psi)\delta_{ij}+2E_{,ij})
\end{array}
\right)~.
\end{equation}
The Newtonian gauge is defined by setting
\begin{equation}
\beta^{\rm (n)}=0~,~~~E^{\rm (n)}=0~.
\end{equation}
For the $\delta\rho_\varphi=0$ gauge, the equation
$\delta\rho_\varphi=0$ is just one gauge condition, and as in
\cite{NG}, we set the other gauge condition to be $E=0$. In
this notation, the gauge transformation takes the form of Eq.
(\ref{gauge}).
The conserved quantity can be introduced as follows. Assuming that there
is no energy change between $\varphi$ and $\chi$, the local energy
conservation equation for $\varphi$ takes the form
\begin{equation}\label{conserved}
H-\dot\psi=-\frac{1}{3}\frac{\partial_t(\rho_\varphi+\delta\rho_\varphi)}{\rho_\varphi+\delta\rho_\varphi+p_\varphi+\delta p_\varphi}+{\cal O}\left[\left(\frac{k}{aH}\right)^2\right]~.
\end{equation}
Note that $H$ is a background quantity, and does not change with the
spatial coordinates. If we assume the pressure is a
function of the energy density only, then in the $\delta\rho_\varphi=0$
gauge we have $\delta p_\varphi=0$. Thus the RHS of
(\ref{conserved}) is also independent of the spatial coordinates. In
order that (\ref{conserved}) holds, $\dot\psi$ must also be
independent of the spatial coordinates outside the horizon.
As a perturbation variable, $\psi$ has no zero mode, and neither does
$\dot\psi$. The only possibility is $\dot\psi=0$. We can
define a conserved quantity
\begin{equation}
\zeta_\varphi=-\psi \big|_{\delta\rho_\varphi=0}~,
\end{equation}
which is conserved after horizon crossing. This conserved quantity
can be rewritten in the gauge invariant form
\begin{equation}
\zeta_\varphi=-\psi-H\frac{\delta\rho_\varphi}{\dot\rho_\varphi}~.
\end{equation}
By the same reasoning, there is also a gauge invariant conserved
quantity for $\chi$,
\begin{equation}
\zeta_\chi=-\psi-H\frac{\delta\rho_\chi}{\dot\rho_\chi}~.
\end{equation}
Note that these conserved quantities can be defined beyond the
leading order perturbation theory. But we only need the leading
order result for our purpose.
The above proof rests on the assumption that in the
$\delta\rho_\varphi=0$ gauge, we have $\delta p_\varphi=0$. This
assumption is obviously true for fluids such as radiation and
matter. It is also worth noting that this assumption holds
for the inflaton after horizon crossing, because the above
statement can be rewritten as the adiabatic condition
\begin{equation}
\dot p_\varphi \delta\rho_\varphi = \dot \rho_\varphi \delta
p_\varphi~.
\end{equation}
This condition can be checked directly using the Einstein equations
in the $k\ll aH$ limit.
The unification model, invoking a dusty torus located in between the
broad-line region (BLR) and narrow-line region (NLR)
of active galactic nuclei (AGNs),
is well supported for Seyfert galaxies.
However, there are still some important open questions:
Does the subtending angle of the torus vary with nuclear luminosity
(e.g., the `QSO 2' problem)?
What is the extent (geometry) of the torus
(e.g., to what extent does it cover and obscure the NLR,
and how does the blocking/obscuration depend on the orientation)?
What is its dust-to-gas ratio?
To address these questions, we have conducted a series of investigations
\textit{via} the partially obscured NLR and BLR,
and the partially covered NLR.\footnote{
By ``partially obscured'', we mean that the obscuration is moderate,
as indicated by the presence of large Balmer decrement;
by ``partially covered'', we mean that the inner part of the NLR
is covered by the torus, either totally or partially obscured. }
\section{Partially obscured BLR}
Partially obscured (i.e. intermediate type) AGNs are
ideal targets to study some of these questions.
Dong et al. (2005) carried out a systematic search for partially obscured
quasars in the entire sample of $z<0.3$ broad-line AGNs in the SDSS EDR,
with the BLR extinction estimated from the large broad-line H\ensuremath{\alpha}/H\ensuremath{\beta}\ ratios.
The use of broad-line Balmer decrement as an extinction indicator
is further justified in Dong et al. (2008);
they showed that the broad-line H\ensuremath{\alpha}/H\ensuremath{\beta}\ ratios of blue Seyfert 1s and quasars
cluster around 3.06 with a tiny standard deviation of $\approx$0.03\,dex
and do not correlate with either the continuum slope, Eddington ratio, or luminosity.
According to their statistics, partially obscured quasars are at least as abundant
as normal quasars in the local Universe.
By a comparison of the Balmer decrements of the broad and narrow components,
they pointed out that the reddening of the NLR is much smaller than
that of the BLR in most of the partially obscured AGNs;
i.e., a dusty torus likely exists even in low-redshift quasars,
with large subtending angles (see Fig. 1).
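For concreteness, a minimal sketch of how a BLR color excess follows from the observed broad-line Balmer decrement, assuming the intrinsic ratio 3.06 quoted above and a Galactic-type (CCM-like, $R_V=3.1$) extinction curve with $k({\rm H}\beta)-k({\rm H}\alpha)\approx1.08$ (the extinction-curve difference is our assumption, not a value taken from the cited papers):

```python
# A minimal sketch of the BLR reddening estimate from the broad-line
# Balmer decrement. The intrinsic ratio 3.06 is the value quoted in
# the text; the extinction-curve difference dk = k(Hbeta) - k(Halpha)
# = 1.08 is an assumed Galactic-type (CCM-like, R_V = 3.1) value, not
# a number taken from the cited papers.
import math

def ebv_from_balmer(ratio_obs, ratio_int=3.06, dk=1.08):
    """Color excess E(B-V) in magnitudes from an observed broad-line
    Halpha/Hbeta ratio."""
    return 2.5 / dk * math.log10(ratio_obs / ratio_int)

# An unreddened blue quasar (ratio 3.06) gives E(B-V) = 0, while a
# partially obscured one with Halpha/Hbeta = 6 gives roughly 0.7 mag.
```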
Based on a larger homogeneous sample of about 9000 $z<0.35$ broad-line AGNs
culled from the SDSS DR4 according to our criteria (Dong et al., in preparation),
we have been deriving the internal \ensuremath{E_{\rm B-V}}\ distribution of the AGN BLR
in the local Universe, and hence obtaining
the fractions of obscured AGNs of various intrinsic luminosities
and various degrees of extinction (Zhang et al., in preparation).
The significant merits of partially obscured AGNs are that they
provide a demography of SMBHs in the obscured type/phase and yield
information on both the AGN and the host galaxy simultaneously
(thus allowing investigation of the SMBH--bulge and starburst--AGN
connections), etc.
Besides, partially obscured AGNs allow a reliable measurement of
the dust-to-gas ratio (e.g., Wang et al. 2005); a dust-to-gas ratio
as high as the Galactic value can be present in moderately
thick gas within a few parsecs of the AGN (e.g., Wang et al. 2009).
Moreover, by monitoring the variation of the dust extinction,
we can obtain information on the distribution, kinematics and even the origin
of the obscuring material (Wang et al. 2009).
\begin{figure}[tbp]
\centering
\includegraphics[width=1\textwidth]{zhang1_s.ps}
\caption{
Left:
Luminosity of broad H$\alpha $ versus broad-line H\ensuremath{\alpha}/H\ensuremath{\beta}\ ratio
for the broad-line AGN sample in the SDSS EDR.
Upper limits on the H\ensuremath{\alpha}/H\ensuremath{\beta}\ ratio are tagged
with a right-pointing arrow.
Blue Seyfert 1s and QSOs are denoted by solid circles.
The inclined dashed line corresponds to the estimated
M$_{g}^{nuc}=-$22$^{m}$.5 after internal extinction correction.
The partially obscured quasars scatter in the upper-right
region of the plot.
Right:
Broad-line H$\protect\alpha $/H$\protect\beta $ ratios versus narrow-line
H$\protect \alpha$/H$\protect\beta$ ratios of the intermediate type AGN in
the sample (Dong et al. 2005).
}
\end{figure}
\section{Partially covered NLR}
To investigate the extent of the torus,
we compare the narrow emission lines in type 1 and type 2 AGNs,
based on the entire AGN sample in the SDSS DR4 (Zhang et al. 2008).
We found that
(1) Seyfert 1 and Seyfert 2 galaxies have different distributions on the
[O\,III]/H\ensuremath{\beta}\ versus [N\,II]/H\ensuremath{\alpha}\ diagram (BPT diagram) for narrow lines;
(2) Among Seyfert 1 galaxies the distribution
varies with the extinction to broad lines;
and (3) The relationship between the [O\,III]\ and broad H\ensuremath{\alpha}\ luminosities
depends on the broad-line extinction in the way that
high-extinction objects have lower uncorrected [O\,III]\ luminosities
(see Fig. 2).
These results suggest that,
unlike low-ionization or low-critical density narrow lines such as [N\,II]\ and [S\,II],
a significant fraction of the [O\,III]\ $\lambda 5007$
and Balmer-line emissions is blocked by the torus.
The inner edge of the dusty torus is known to be on scales
of parsecs, while its outer extent is likely on scales of several tens of parsecs
(e.g., Schmitt et al. 2003, Jaffe et al. 2004).
The torus covers the inner, dense part of the NLR.
This kind of partial covering causes the apparent anisotropy
(including the dependence on AGN type) of some narrow emission lines.
These narrow lines either are of high-ionization or high-critical density
(such as coronal lines and [O\,III]; see also Nagao et al. 2001) or
favor high-density and high-column density clouds (such as Fe\,II;
Dong et al., 2009).
\begin{figure}
\centering
\includegraphics[width=1.\textwidth]{zhang2_s.ps}
\caption{
Left:
BPT diagram for Seyfert 2 galaxies.
The lower curve is the empirical line
separating AGNs from star-forming galaxies (Kewley et al. 2006).
Most Seyfert 2s lie to the right of the line S12.
Middle:
BPT diagram for Seyfert 1 galaxies.
Purple crosses represent objects with $E_{B-V}^{b}$\,$<$\,0.2
and green triangles those with $E_{B-V}^{b}$\,$\in$\,$[0.6,1]$.
Right:
The uncorrected luminosity of [O\,III]\,$\lambda 5007$
versus extinction corrected luminosity of broad H$\alpha$
for Seyfert 1 galaxies.
The blue line shows the best linear fit to the whole sample
(Zhang et al. 2008). }
\end{figure}
\subsubsection{The ATLAS and CMS detectors}
ATLAS and CMS are general-purpose detectors with a cylindrical geometry and a nearly hermetic coverage
in $\theta$ and $\phi$\footnote{Both ATLAS and CMS use right-handed coordinate systems with their origin placed
at the nominal interaction point and the $z$-axis running along the beam direction. The $x$- and $y$-axes point to the
centre of the LHC ring and upward, respectively. Cylindrical coordinates $(r,\theta,\phi)$ are used in this
coordinate system. The subscript $T$ refers to quantities measured in the $(x,y)$, or transverse plane, while
the pseudorapidity is defined as $\eta = -\log[\tan(\theta/2)]$.}. Figure~\ref{fig:spaccati} shows longitudinal views of
a quadrant of the ATLAS and CMS detectors.
\begin{figure*}[t]
\centering
\includegraphics[width=0.75\textwidth]{Figures/Cross-section-of-a-quadrant-of-the-ATLAS-Muon-Spectrometer-in-the-r-z-plane-left-and.jpg} \\
\includegraphics[width=0.70\textwidth]{Figures/cms_quadrant_run_ii.pdf}
\caption{Longitudinal views of a quadrant of the ATLAS (top) and CMS (bottom) detectors. Sub-apparati used for
the detection of different particles and their positioning are shown.}
\label{fig:spaccati}
\end{figure*}
In both ATLAS and CMS the interaction point is surrounded by tracking detectors. For both the innermost system
consists of a silicon pixel detector, providing precise estimation of track impact parameters and vertices, and
is complemented by outer layers of silicon microstrip detectors. In ATLAS tracking information is also provided
by a transition radiation tracker. In both experiments, these inner detectors provide precise measurements
of charged-particle tracks in the pseudorapidity range $|\eta| < 2.5$.
Electromagnetic and hadronic calorimeters cover the region $|\eta| < 3.2$ in ATLAS and $|\eta| < 3.0$ in CMS.
In ATLAS the electromagnetic calorimeter is based on high-granularity, lead/liquid-argon (LAr) sampling
technology, while in CMS it consists of high-resolution lead tungstate crystals.
The ATLAS hadronic calorimeter consists of a steel/scintillator-tile sampling detector in the central
region and a copper/LAr detector in the region $1.5 < |\eta| < 3.2$, while CMS uses a brass/scintillator detector.
For VBS signatures, it is of paramount importance to collect electromagnetic and hadronic energies at larger
values of pseudorapidity. To achieve this goal, both ATLAS and CMS are equipped with forward calorimeters,
which have lower granularity but must satisfy stringent radiation hardness requirements. In ATLAS the region of
the detector $3.1 < |\eta| < 4.9$ features a forward calorimeter (FCal), measuring electromagnetic and
hadronic energies in copper/LAr and tungsten/LAr modules. In CMS the forward calorimeter (HF) covers the
region up to $|\eta| \lesssim 5.0$ and consists
of a steel absorber equipped with quartz fibres of two different lengths which distinguish the electromagnetic
and hadronic components.
The magnet arrangement is different in the two experiments. CMS features a large superconducting solenoid
with a 6-m inner diameter providing an axial magnetic field of 3.8~T. In ATLAS, a smaller solenoid providing a
magnetic field of 2~T surrounds the inner tracker, while three large superconducting toroidal magnets are
placed with an eightfold coil symmetry outside the calorimeters.
In both ATLAS and CMS the muon spectrometer comprises trigger and high-precision tracking chambers to measure the
trajectory of muons. Detector technologies include drift tubes, cathode strip chambers in
the forward regions, resistive-plate chambers, and thin-gap chambers. The CMS muon coverage is $|\eta| < 2.4$
while in ATLAS it is $|\eta| < 2.7$ for tracking and $|\eta| < 2.4$ for trigger chambers.
Events of interest are selected in real time using two-tiered trigger systems in both experiments~\cite{Aad:2020wji,Khachatryan:2016bia}.
The first level is composed of specialized hardware processors and uses information from the calorimeters and muon
detectors. The second level (high-level)
consists of farms of processors running a fast, optimized version of the event reconstruction software
that reduce the event rate before data storage. Data are stored in different streams according to which
high-level trigger path(s) find compatibility between an event and a specific particle hypothesis (single electron,
double muon, etc.).
\subsubsection{Tagging-jet reconstruction}
\label{sec:tagjet}
All VBS processes have in common a pair of jets in the final state, originating from the hard scattering process.
These have been precisely defined in Sec.~\ref{sec:sob} and we refer to those jets here as \emph{VBS-tagging jets}.
In ATLAS, jet constituents are topologically-grouped clusters (``topo-clusters'') of electromagnetic and hadronic calorimeter cells~\cite{Aad:2016upy}.
In CMS events are reconstructed using a more detailed particle-flow algorithm~\cite{Sirunyan:2017ulk} that identifies each individual particle with an optimized combination of all subdetector information.
However, at the very high pseudorapidity values covered only by the HF, a hadron or electromagnetic particle-flow candidate is defined
solely by its energy release in an $\eta$-$\phi$ HF cell, since no information from other subdetectors is available.
In both experiments jets are reconstructed from either particle-flow candidates or topo-clusters using the anti-$k_{\rm T}$ clustering algorithm~\cite{Cacciari:2008gp},
as implemented in the FastJet package~\cite{Cacciari:2011ma}, with typically a distance parameter of $0.4$.
This value ensures a good particle containment while reducing the instrumental background, as well as the contamination from pileup. In both ATLAS and CMS identification criteria for jets are very loose and retain
almost all physically meaningful jets. Since reconstructed jets can include leptons together with their surrounding QED activity
(such leptons are sometimes referred to as \emph{dressed} leptons), analyses with leptonic final states always require a minimum
tagging-jet/lepton $\Delta R$ separation when defining their fiducial regions, usually taken equal to the jet distance parameter.
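Although FastJet is of course used in practice, the anti-$k_{\rm T}$ distance measure itself is simple enough to sketch. The toy implementation below (plain Python, E-scheme recombination, $d_{ij}=\min(p_{{\rm T},i}^{-2},p_{{\rm T},j}^{-2})\,\Delta R_{ij}^2/R^2$ and beam distance $d_{iB}=p_{{\rm T},i}^{-2}$) illustrates the clustering logic only and is no substitute for the optimized FastJet implementation.

```python
import math

def antikt_cluster(particles, R=0.4):
    """Toy anti-kT clustering with E-scheme recombination, for illustration only.
    `particles` is a list of (px, py, pz, E) tuples; returns the final jets
    as 4-tuples of the same form."""
    def pt2(p):
        return p[0] ** 2 + p[1] ** 2
    def rap(p):  # rapidity (assumes E > |pz|)
        return 0.5 * math.log((p[3] + p[2]) / (p[3] - p[2]))
    def phi(p):
        return math.atan2(p[1], p[0])
    def dij(a, b):
        dphi = abs(phi(a) - phi(b))
        if dphi > math.pi:
            dphi = 2.0 * math.pi - dphi
        dr2 = (rap(a) - rap(b)) ** 2 + dphi ** 2
        return min(1.0 / pt2(a), 1.0 / pt2(b)) * dr2 / R ** 2

    pseudojets = [tuple(p) for p in particles]
    jets = []
    while pseudojets:
        # minimum beam distance belongs to the hardest pseudojet
        best = min(range(len(pseudojets)), key=lambda i: 1.0 / pt2(pseudojets[i]))
        dmin, pair = 1.0 / pt2(pseudojets[best]), None
        for i in range(len(pseudojets)):
            for j in range(i + 1, len(pseudojets)):
                d = dij(pseudojets[i], pseudojets[j])
                if d < dmin:
                    dmin, pair = d, (i, j)
        if pair is None:
            jets.append(pseudojets.pop(best))  # promote to a final jet
        else:
            i, j = pair  # merge the closest pair by 4-momentum addition
            merged = tuple(x + y for x, y in zip(pseudojets[i], pseudojets[j]))
            pseudojets = [p for k, p in enumerate(pseudojets) if k not in (i, j)]
            pseudojets.append(merged)
    return jets
```

Two nearby hard particles are merged into one jet, while a particle far away in $\eta$-$\phi$ survives as its own jet.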
Pileup effects, which are particularly relevant in the forward regions, affect jet measurement in two ways:
by adding entire jets that do not originate from the hardest-scattering event (thus requiring \emph{pileup rejection} techniques)
and by adding particles in the jet that do not belong to signal jets but overlap in space (requiring \emph{pileup subtraction} techniques).
In evaluating both effects, there is an important difference between charged and neutral particles. For charged particles, the hypothesis of being originated in the hardest-scattering \emph {primary vertex} can be evaluated track by track and is based on impact-parameter compatibility\footnote{In both ATLAS and CMS, the hardest-scattering vertex is defined as the primary vertex in the event for which the scalar sum of the $\ensuremath{p_\text{T}}\xspace^2$ of the associated tracks is maximum.}.
For neutral particles, this association is not possible and subtraction or rejection must be done on a statistical basis. It has to be noted that outside the tracker coverage ($|\eta| < 2.5$ for both ATLAS and CMS) all particles must be treated as neutral.
ATLAS and CMS apply pileup subtraction as part of the jet energy corrections~\cite{Aaboud:2017jcu,Khachatryan:2016kdb}.
The subtraction has the analytical form suggested in Ref.~\cite{Cacciari:2007fd}
\begin{equation}
p^{corr}_{\rm T} = \ensuremath{p_\text{T}}\xspace - \rho \times A_{jet},
\end{equation}
where $\rho$ is the estimated average pileup $\ensuremath{p_\text{T}}\xspace$ density in specific regions of the detector and
$A_{jet}$ is the jet area. Depending on the experiment and specific analysis, this subtraction can be performed on the original jet, or in combination with charged-hadron subtraction based on vertex compatibility.
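Numerically the correction is straightforward; a minimal sketch, with purely illustrative values:

```python
def pileup_corrected_pt(pt_jet, rho, area):
    """Area-based pileup subtraction, pT_corr = pT - rho * A_jet."""
    return pt_jet - rho * area

# e.g. a 60 GeV jet of catchment area 0.5 in an event with an
# estimated pileup pT density rho = 20 GeV per unit area:
pileup_corrected_pt(60.0, 20.0, 0.5)  # -> 50.0
```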
ATLAS~\cite{Aad:2015ina,Aaboud:2017pou} and CMS~\cite{Sirunyan:2020foa} have dedicated pileup rejection methods. Within
the tracking volume, a jet-vertex combined compatibility is computed from the charged tracks found inside a jet,
while in the forward regions different jet shapes are employed to build discriminating variables that isolate signal
jets from pileup jets. Not all VBS analyses apply selections based on these very recent methods.
Techniques enriching
the selected sample with quark-initiated jets over the more abundant gluon-initiated background were studied both in
ATLAS~\cite{ATLAS:2016wzt} and CMS~\cite{CMS:2013kfa} and could be useful in VBS searches. Their limitations, however,
reside in the limited resolution and granularity of the subdetectors covering the very forward regions. In ATLAS,
performance outside the tracking volume is not
even reported, while in CMS it was found to be poor in the forward region.
Requirements on the transverse momentum ($\ensuremath{p_\text{T}}\xspace$) of tagging jets, applied after jet energy corrections, vary among the different analyses and can be symmetric or not between the two.
Pseudorapidity requirements need to be as loose as possible because of the particular spatial distribution of tagging jets: typically all jets with $|\eta| < 4.5$ (4.7) are selected in ATLAS (CMS).
Variables describing kinematics of the jet pair are the most discriminating between VBS and various sources of
background.
As mentioned previously, typical selections include a minimum rapidity or pseudorapidity difference (\ensuremath{\Delta y_{\Pj\Pj}}\ or \ensuremath{\Delta \eta_{\Pj\Pj}}, usually
taken as unsigned quantities), a minimum invariant mass of the jet pair (\ensuremath{m_{\Pj\Pj}}) and, in some cases, selection on
more complex quantities, like the Zeppenfeld variables that we shall define in the following.
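The two basic pair variables can be computed from the tagging jets as in the sketch below, which treats the jets as massless and takes $(\ensuremath{p_\text{T}}\xspace, \eta, \phi)$ triples as input; the numbers in the usage example are illustrative only.

```python
import math

def tagging_jet_variables(j1, j2):
    """Return (m_jj, |Delta eta_jj|) for two jets given as (pT, eta, phi)
    triples, in the massless-jet approximation."""
    def four_vector(pt, eta, phi):
        return (pt * math.cos(phi), pt * math.sin(phi),
                pt * math.sinh(eta), pt * math.cosh(eta))
    p1, p2 = four_vector(*j1), four_vector(*j2)
    px, py, pz, e = (a + b for a, b in zip(p1, p2))
    mjj = math.sqrt(max(e * e - px * px - py * py - pz * pz, 0.0))
    return mjj, abs(j1[1] - j2[1])

# two 100 GeV jets, well separated in eta and back-to-back in phi:
mjj, deta = tagging_jet_variables((100.0, 2.0, 0.0), (100.0, -2.0, math.pi))
# deta = 4.0 and m_jj ~ 750 GeV: a typical VBS-like configuration
```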
A common choice when more than two jets are reconstructed in an event is to retain the event if it fulfils the selection, choosing the two jets with the largest $\ensuremath{p_\text{T}}\xspace$ or energy as the VBS-tagging jets.
Both choices have non-trivial implications on the analyses.
First of all, a third-jet veto is in principle a powerful handle for background rejection, since the VBS topology implies a rapidity gap between the tagging jets with very little hadronic activity.
However, besides possible pileup contributions occurring in the gap, a long-standing theory problem was the observation of large differences in the third-jet kinematics when comparing predictions obtained with some commonly-used parton-shower programs~\cite{Ballestrero:2018anz}. The origin of these differences has recently been understood (see Sec.~\ref{sec:theory}), thus in principle making it possible to
include veto techniques in future analyses without being hampered by large theoretical systematic uncertainties.
Second, the definition of the tagging jets can be particularly relevant for phase-space regions including heavy-gauge boson resonances decaying into quarks.
Such an example is given in Sec.~\ref{sec:zz} when discussing the case of $\ensuremath{\text{Z}}\xspace\PZ$ scattering. Finally, it has to
be noted that, as detailed in the next section, in the presence of jets originating from heavy gauge-boson decays in the selection,
the choice of the tagging jets is not obvious and varies between different analyses.
\subsubsection{Vector-boson reconstruction}
The analysis techniques to reconstruct and select vector bosons in VBS processes depend strongly on the final state under investigation.
High-energy photons ($\gamma$) are reconstructed in electromagnetic calorimeters with very high efficiencies.
On the other hand, when one or both vector bosons are heavy ($\ensuremath{\text{W}}\xspace$ or $\ensuremath{\text{Z}}\xspace$) the reconstruction is based on their decay products and three classes of analyses can be distinguished:
\paragraph{1) Fully leptonic channels:}
Heavy gauge bosons decay into the $\ensuremath{\text{W}}\xspace^\pm \rightarrow \ell^\pm \nu_\ell$ and $\ensuremath{\text{Z}}\xspace \rightarrow \ell^+ \ell^-$ final states, where $\ell$ denotes either an electron or a muon.
Even though the branching fractions are approximately only $20\%$ and $6.7\%$, respectively, these final states are the cleanest and can satisfactorily cover the phase spaces of all VBS processes.
For this reason, all evidence for or observations of SM VBS processes so far rely on fully leptonic (or lepton $+\,\gamma$) channels.
The decays of $\ensuremath{\text{W}}\xspace$ and $\ensuremath{\text{Z}}\xspace$ bosons into $\tau$ leptons are not considered in existing analyses because, while having branching fractions identical to those into electrons and muons,
they are much more challenging to reconstruct due to the missing neutrinos in the secondary $\tau$ decays.
Nevertheless, events where the $\tau$ decays leptonically can enter the selected samples in the fully leptonic channels.
This contamination is much smaller in size than the $\tau$ leptonic branching fraction, since secondary leptons from $\tau$ decays have smaller transverse momenta and/or fail invariant mass requirements.
However, all analyses do (or should in principle) state if this small contribution is considered or not in their definition of fiducial analysis volumes.
In the $\ensuremath{\text{W}}\xspace^\pm \rightarrow \ell^\pm \nu_\ell$ case, due to the presence of neutrinos, the process cannot be fully reconstructed.
As opposed to the $\ensuremath{\text{Z}}\xspace\to\ell^+\ell^-$ case, this implies that non-resonant contributions also enter the selected sample, and therefore must also be included in the simulation.
\paragraph{2) Semi-leptonic channels:}
One heavy gauge boson is reconstructed from the $\ensuremath{\text{W}}\xspace^\pm \rightarrow \ell^\pm \nu_\ell$ or $\ensuremath{\text{Z}}\xspace \rightarrow \ell^+ \ell^-$ final state,
and the other one from the $\ensuremath{\text{W}}\xspace^\pm \rightarrow q{\bar q}'$ or $\ensuremath{\text{Z}}\xspace \rightarrow q{\bar q}$ final state.
These final states exhibit larger cross sections because of the higher branching ratios.
However, performing a standard reconstruction of the jets from $\ensuremath{\text{W}}\xspace$ or $\ensuremath{\text{Z}}\xspace$ hadronic decays, as described in Sec.~\ref{sec:tagjet},
results in samples overwhelmingly dominated by single-boson production in association with jets, in which the sensitivity to SM VBS is negligible compared to fully leptonic channels.
Special reconstruction techniques, however, apply in the case of \emph{boosted} vector bosons, \emph{i.e.}\ when their Lorentz $\gamma$-factor is large~\cite{jetsatLHC}.
In particular if the aperture angle of the quark-antiquark pair is $\Delta R \simeq 2/\gamma \simeq 2M(q{\bar q})/\ensuremath{p_\text{T}}\xspace(q{\bar q}) < 0.8$, that is for $\ensuremath{p_\text{T}}\xspace(q{\bar q}) \gtrsim 220$ GeV,
the hadronic decay products of the gauge boson do not cluster into two separated $R =0.4$ jets but are instead merged into a larger-area jet.
In this case, hadronic $\ensuremath{\text{W}}\xspace$ and $\ensuremath{\text{Z}}\xspace$ decays are identified by anti-$k_{\rm T}$ jets with $R =0.8$ which contain all the products of the decay.
As opposed to standard jets, these \emph{merged} jets have two important characteristics that help distinguishing them from regular-jet background:
after removing soft QCD radiation, the invariant mass of all jet constituents peaks at the $\ensuremath{\text{W}}\xspace$ or $\ensuremath{\text{Z}}\xspace$ mass;
and the inner structure of the jet is such that two \emph{subjets} with smaller radii can be identified.
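The quoted $\ensuremath{p_\text{T}}\xspace$ threshold follows directly from the $\Delta R \simeq 2M/\ensuremath{p_\text{T}}\xspace$ rule of thumb; a one-line numerical check (boson masses in GeV):

```python
M_W, M_Z = 80.4, 91.2   # W and Z masses in GeV
R_MERGED = 0.8          # distance parameter of the merged jet

def min_pt_for_merged_jet(mass, radius=R_MERGED):
    """pT above which Delta R ~ 2*M/pT of the q-qbar pair drops below
    `radius`, so the decay products fit inside a single large-area jet."""
    return 2.0 * mass / radius

min_pt_for_merged_jet(M_W)  # -> 201 GeV
min_pt_for_merged_jet(M_Z)  # -> 228 GeV, hence the ~220 GeV quoted above
```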
There are two main characteristics in analyses employing boosted vector bosons: first, they usually address mixed final
states because jet-mass resolutions are such that $\ensuremath{\text{W}}\xspace$ and $\ensuremath{\text{Z}}\xspace$ cannot be easily separated. Therefore these final
states are indicated by V (indicating generically a vector boson, either $\ensuremath{\text{W}}\xspace$ or $\ensuremath{\text{Z}}\xspace$). Secondly, the requirement
$\ensuremath{p_\text{T}}\xspace({\rm V}) \gtrsim 220$ GeV implies that only a small part of the SM VBS phase space is accessible, which offsets
the gain from the higher branching fraction. On the other hand, BSM effects in EFT approaches produce cross sections with
larger components in the boosted phase space as stated in Sec.~\ref{sec:bsm}.
Hence, the most stringent limits on the Wilson coefficients of EFT operators are obtained from semi-leptonic channels.
\paragraph{3) Fully hadronic channels:}
Because of the dominant multijet background, these analyses can only be
performed for final states with two boosted gauge bosons, VV. While potentially they could have even better
sensitivities than semi-leptonic channels on EFT operators, there are no public LHC analyses to date employing these
final states.
\paragraph{Lepton and missing energy reconstruction}
A brief review of charged-lepton reconstruction follows, which is common to many VBS analyses.
Photon and merged-jet reconstruction and selection are specific to some analyses and will be discussed in Sec. 3.
In ATLAS and CMS, muons are reconstructed by combining information from the inner tracking system with the signals
in the muon chambers and finding matches between reconstructed tracks in the two detectors. In most analyses, muons
must satisfy identification criteria which are called
\emph{medium} in ATLAS~\cite{Aad:2016jkr} and \emph{tight} in CMS~\cite{Sirunyan:2018fpa} but in both
cases correspond to efficiencies exceeding 90\% after fiducial selections. A minimum number of
hits in the related subdetectors is required, which rejects fake matchings, as well as kaon and pion decays in flight.
Tight primary-vertex compatibility criteria select only \emph{prompt} muons, rejecting those
originating from long-lived particle decays. CMS utilizes the transverse and longitudinal track impact
parameters $d_{xy}$ and $d_{z}$ as selection variables for primary-vertex track compatibility, while ATLAS uses the
significance of the impact parameter in the transverse plane $|d_{xy}/\sigma_{d_{xy}}|$ and the
vertex-track distance computed from the longitudinal impact parameter, $d_{z} \sin \theta$, with tighter
requirements. Isolation requirements, further reducing the $B$- and $D$-meson decay background, are in general
loose for both experiments.
Similarly, in both ATLAS and CMS, electron reconstruction combines information from inner-detector tracks
and electromagnetic energy clusters. Primary vertex compatibility of the electron track is evaluated in a similar way
as for muons, with equal or tighter requirements. However, identification criteria, again defined as
\emph{medium} in ATLAS~\cite{Aaboud:2019ynx,Aad:2019tso} and \emph{tight} in CMS~\cite{Khachatryan:2015hwa},
are more complex in order to cope with potentially large backgrounds of misidentified photons and jets. These criteria
involve many aspects of the reconstruction, including: angular and energy-momentum matching between track and cluster,
electromagnetic shower shape variables, energy ratios between the central cluster cell and the surrounding ones,
and maximum energy released in hadronic calorimeters in close-by regions of the detector. In ATLAS, these criteria
are combined using a likelihood-ratio method, while in CMS selections are either applied sequentially or combined
in a Boosted-Decision Tree (BDT)\footnote{A decision tree is an algorithm which takes a set of input features and splits input data recursively based on those.
Boosting is a machine-learning method which combines several decision trees to make a stronger signal-background classifier.
BDT algorithms are coded in commonly used programs like TMVA~\cite{Speckmayer:2010zz} or Keras (\texttt{https://keras.io}).}. Electron isolation considers separately the energy/momentum reconstructed
around the electron direction in trackers, electromagnetic calorimeters, and hadronic calorimeters, taking into
account possible \emph{bremsstrahlung} effects in the traversed detectors: isolation requirements are part of the
CMS identification criteria, while they are applied separately in ATLAS. Total efficiencies are lower than for muons,
around 80--85\% for typical electrons from $\ensuremath{\text{W}}\xspace$ or $\ensuremath{\text{Z}}\xspace$ decays.
The only analyses significantly departing from the above choices are those having $4\ell$ in the final state,
because the simultaneous presence of four charged leptons removes many types of backgrounds and looser selections
can be used to recover efficiency.
Another important ingredient in the case of final states involving one or more $\ensuremath{\text{W}}\xspace^\pm$ boson(s) is the
reconstruction of the missing transverse momentum $p^{\mathrm{miss}}_{{\mathrm{T}}}$, that can be identified as the (total) transverse momentum of
the undetected high-energy neutrino(s). In both ATLAS and CMS this is defined as the opposite of the vector sum of all
reconstructed particle momenta, so its precise determination depends on all energy/momentum corrections applied
to visible particles and in particular to jets~\cite{Aaboud:2018tkc,Sirunyan:2019kia}.
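In terms of reconstructed objects, the definition amounts to the sketch below, with inputs as $(p_x, p_y)$ pairs and purely illustrative values:

```python
import math

def missing_transverse_momentum(particles):
    """p_T^miss as minus the vector sum of all reconstructed transverse
    momenta; `particles` is an iterable of (px, py) pairs in GeV.
    Returns (magnitude, azimuthal angle)."""
    mpx = -sum(px for px, _ in particles)
    mpy = -sum(py for _, py in particles)
    return math.hypot(mpx, mpy), math.atan2(mpy, mpx)

# two visible objects whose transverse momenta do not balance:
met, phi = missing_transverse_momentum([(30.0, 0.0), (-10.0, 20.0)])
# met = sqrt(20**2 + 20**2) ~ 28.3 GeV, attributed to the neutrino(s)
```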
\subsubsection{Monte Carlo simulation}
\label{sec:mcsim}
Monte Carlo (MC) simulation of VBS signals and backgrounds which cannot be estimated from data-driven techniques (\emph{e.g.}\ QCD background, which has inherently the
same signature) is an essential ingredient of experimental analyses.
Regarding simulations at NLO QCD, matched to parton shower or merged with higher parton multiplicities, the most used generator tools for physics events are
{\sc\small MG5\_aMC}\xspace~\cite{Alwall:2014hca}, version 2.3 (v2.3) and above, {\sc POWHEG}\xspace~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd} v2, and {\sc Sherpa}\xspace~\cite{Bothmann:2019yzt} v2.1 and above.
Generation parameters can vary in different processes and experimental analyses,
but there are some common choices. In general, the central renormalization and factorization scales are set automatically
by {\sc\small MG5\_aMC}\xspace to the central $m_{\rm T}^2$ scale after $k_{\rm T}$-clustering of
the event, while in {\sc POWHEG}\xspace\ and {\sc Sherpa}\xspace\ the default choice is
process-dependent (for diboson and VBS processes a common choice is to use the
diboson invariant mass). Uncertainties from renormalization and factorization scales are mostly derived from the \emph{7-fold scale variation scheme}, where both are
varied independently by a factor of two up and down, but avoiding the cases
where the two vary in opposite directions (that is, differ by a factor four).
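The 7-point set can be generated mechanically; a short sketch (the central scale value is arbitrary):

```python
from itertools import product

def seven_point_scale_variations(mu0):
    """(muR, muF) combinations of the 7-point scheme: both scales varied
    independently by factors 1/2 and 2 around mu0, dropping the two
    opposite-direction combinations where muR/muF would be 4 or 1/4."""
    factors = (0.5, 1.0, 2.0)
    return [(fr * mu0, ff * mu0)
            for fr, ff in product(factors, repeat=2)
            if 0.5 <= fr / ff <= 2.0]

scales = seven_point_scale_variations(91.2)  # arbitrary central scale in GeV
len(scales)  # -> 7 of the 9 naive combinations survive
```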
Quite peculiarly, in CMS, the standard sets of parton distribution functions (PDFs) used are different for the simulation of the 2016 detector conditions (NNPDF3.0 NLO) and for the 2017-18 conditions (NNPDF3.1 NNLO).
In ATLAS, the NNPDF3.0 NNLO PDF set is used in most cases.
The estimation of PDF uncertainties follows prescriptions from the NNPDF collaboration~\cite{Ball:2014uwa}.
While {\sc Sherpa}\xspace\ has an internal parton shower (PS) and underlying-event
program, other Monte Carlo generators need an external tool providing PS,
which are usually {\sc Pythia8}~\cite{Sjostrand:2007gs} or {\sc HERWIG}~\cite{Bellm:2015jjp}. Underlying-event tuning is slightly different in the two experiments and tunes have also
been updated in some cases during the course of Run-2~\cite{Khachatryan:2015pea,Sirunyan:2019dfx}.
Detector simulation is obtained through the {\sc GEANT4} software~\cite{GEANT,GEANT2} in both experiments.
\subsubsection{Theoretical calculations}
Compared to standard LHC measurements, the corresponding theoretical predictions are not as advanced.
In particular, for the elastic part \emph{i.e.}\ $\gamma\gamma\to\ensuremath{\text{W}}\xspace^+\ensuremath{\text{W}}\xspace^-$, these are usually LO predictions matched to parton shower.
They can be obtained from standard Monte Carlo generators and have to be combined with the corresponding photon flux (see for example Ref.~\cite{Budnev:1974de}).
The main background for such a measurement is $q\bar q \to \ensuremath{\text{W}}\xspace^+\ensuremath{\text{W}}\xspace^-$ which is very well known from a theoretical point of view with
NLO QCD \cite{Ohnemus:1991kk,Baur:1995uv,Campbell:1999ah},
NLO QCD + PS \cite{Frixione:2002ik,Hamilton:2010mb,Nason:2013ydw},
NNLO QCD \cite{Gehrmann:2014fva,Grazzini:2016ctr},
gluon--gluon loop-induced \cite{Caola:2015rqy,Grazzini:2020stb},
resummed \cite{Grazzini:2015wpa},
and NLO EW \cite{Kuhn:2011mh,Bierweiler:2012kw,Baglio:2013toa,Gieseke:2014gka,Biedermann:2016guo,Kallweit:2017khh} predictions.
Further predictions even feature combinations of the above mentioned calculations \cite{Re:2018vac,Kallweit:2019zez,Kallweit:2020gva,Brauer:2020kfv,Chiesa:2020ttl}.
\subsubsection{Experimental approaches}
ATLAS and CMS studies so far employ the technique which does not require proton tagging. ATLAS
recently reported observation of \ensuremath{\gamma \gamma \rightarrow \PW^+\PW^-}\ in Run-2 data~\cite{ATLAS:ggWW} while CMS has issued only results
on less recent data sets at 7 and 8 TeV, finding evidence for this process and stringent
aQGC limits on the operators $f_{M,0}$ and $f_{M,1}$~\cite{CMS:ggWW}.
Since this work deals with LHC Run-2 results, we will just briefly review the ATLAS analysis in the following.
The analysis proceeds by requiring an electron and a muon of opposite signs with large dilepton transverse momentum and exactly zero additional charged particles in the event.\footnote{The same-flavor case is not considered because of the dominant $\gamma\gamma \rightarrow \ell^+ \ell^-$ background.}
Simulation of the signal events
and of the $\gamma\gamma \rightarrow \ell^+ \ell^-$ background without intermediate \ensuremath{\text{W}}\xspace\ bosons proceeds through
the {\sc HERWIG}\xspace generator~\cite{Bellm:2015jjp}, interfaced to a suitable photon-flux provider and QED-dedicated
PDFs. The main background, which is $\ensuremath{\text{W}}\xspace^+\ensuremath{\text{W}}\xspace^-$ production through quark-antiquark annihilation, is simulated
using {\sc POWHEG}\xspace~\cite{Nason:2013ydw}.
The fiducial region is defined by events with an electron and a muon with
$p_{\rm T,\ell} > 27/20\ensuremath{\,\text{GeV}}\xspace$ (after QED FSR recovery), $|\eta_{\ell}| < 2.5$,
$m_{e \mu} > 20\ensuremath{\,\text{GeV}}\xspace$, $p_{\rm T,e \mu} > 30\ensuremath{\,\text{GeV}}\xspace$ and $n_{trk} = 0$, where $n_{trk}$ counts
charged particles with $p_{\rm T} > 0.5\ensuremath{\,\text{GeV}}\xspace$ and $|\eta| < 2.5$. Further offline
requirements are applied for the leptons to be compatible with a common vertex.
Background estimation is quite complex because of the multiple sources that can feed additional charged particles
in the event and therefore fail the $n_{trk} = 0$ requirement. To improve the pileup description, data-driven
methods are used to derive corrections to the simulated events, targeting the
density of proton-proton interactions and the number of tracks per interaction separately. Underlying-event modelling
of non-exclusive $\ensuremath{\text{W}}\xspace^+\ensuremath{\text{W}}\xspace^-$ production is also corrected from a Drell-Yan control sample in data, while a
correction factor
for the proton-rescattering effect is derived from a $\gamma\gamma \rightarrow \ell^+ \ell^-$ control sample.
Signal systematic uncertainties are dominated by the mass-dependent transfer
factors between \ensuremath{\gamma \gamma \rightarrow \PW^+\PW^-}\ and the $\gamma\gamma \rightarrow \ell^+ \ell^-$ control sample, while the main systematic background uncertainties arise from
variations from different theory predictions of non-exclusive $\ensuremath{\text{W}}\xspace^+\ensuremath{\text{W}}\xspace^-$ production. Overall, the largest impact comes from simulated statistics in
background samples.
The fiducial cross-section is determined by fitting the signal region
($p_{\rm T,e \mu} > 30$ GeV and $n_{trk} = 0$) after checking data/prediction
agreement in control regions where either $p_{\rm T,e \mu} < 30$ GeV or $1 \leq n_{trk} \leq 4$. The significance of the observation is 8.4$\sigma$ and the
corresponding result is: $\sigma_{\mathrm{fid}} = 3.1 \pm 0.4{\ensuremath\unskip\,\text{fb}}\xspace$, where the
statistical and systematic uncertainties are of the same order. Comparing
with the corresponding CMS analysis, it can be inferred that constraints
on relevant dim-8 operators could be competitive with or better than those obtained from deep-inelastic $\ensuremath{\text{p}}\xspace\Pp$ collisions.
\subsubsection{Theoretical calculations}
Finally, the \ensuremath{\PW^+ \PW^- \Pj\Pj}\ channel has raised considerable theoretical interest in recent years.
The QCD corrections to the EW signal are known in the VBS approximation \cite{Jager:2006zc} and have been matched to parton shower \cite{Jager:2013mu}.
The QCD background is also known at the same accuracy, namely NLO QCD \cite{Melia:2011dw,Greiner:2012im} and matched to parton shower \cite{Rauch:2016upa}.
A summary of the available predictions is provided in Table~\ref{tab:osWWTH}.
It is reasonable to think that the theoretical accuracy for this final state will be comparable to the one of the other channels in the next few years.
Care has to be taken not to include resonant top-quark contributions:
one should either use the 4-flavour scheme, or use the 5-flavour scheme and carefully remove top-quark contributions (see for example Ref.~\cite{Melia:2011dw}).
\begin{table}[htb]
\caption{Summary of higher-order predictions currently available for the os-WW channel: at fixed order and matched to parton shower.
The symbols {\bf \color{green} \checkmark}, {\bf \color{green} \checkmark$^*$}, and {\bf \color{red} X}
mean that the corresponding predictions are available, available only in the VBS approximation, or not yet available, respectively.
\center
{\begin{tabular}{l|cccc}
Order & $\mathcal{O}\left(\alpha^7 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}} {\alpha}^6 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}}^2 {\alpha}^5 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}}^3 {\alpha}^4 \right)$\\
\hline
NLO & {\bf \color{red} X} & {\bf \color{green} \checkmark$^*$} & {\bf \color{red} X} & {\bf \color{green} \checkmark} \\
NLO+PS & {\bf \color{red} X} & {\bf \color{green} \checkmark$^*$} & {\bf \color{red} X} & {\bf \color{green} \checkmark}
\end{tabular} \label{tab:osWWTH}}
\end{table}
\subsubsection{Experimental approaches}
There are so far no published results from ATLAS and CMS on the \ensuremath{\PW^+ \PW^- \Pj\Pj}\ channel. Compared with other channels,
the top-pair background with dileptonic $\ensuremath{\text{W}}\xspace^\pm$ decays is dominant in this case and requires a very efficient
b-quark jet veto, bringing larger experimental uncertainties. The analysis method and preliminary results are
available in Ref.~\cite{tesiCardini}.
\subsubsection{Definition of polarised cross section}
Several aspects need to be considered when one tries to define the production cross section for a specific polarisation state of a vector boson. In this
discussion, we follow Ref.~\cite{Ballestrero:2017bxn}, which also documents the implementation of the polarised cross
section for \ensuremath{\PW^+ \PW^- \Pj\Pj}\ in the {\sc Phantom}\xspace code~\cite{Ballestrero:2007xq}. Extensions to the cases of WZ and ZZ production are documented in Ref.~\cite{Ballestrero:2019qoy}.
The first and most immediate aspect is that vector bosons are unstable particles which undergo a decay. Hence any information on their polarisation must be kept through the decay
processes. If we consider the case of a single vector boson (the generalisation to the case of multiple vector bosons is trivial) which is produced from
an initial state $I$ and decays into a final state $F$:
\begin{equation}
I \to V \to F,
\end{equation}
the corresponding matrix element can be written as:
\begin{equation}
\mathcal M _{I \to V \to F} = \mathcal M_{I \to V}^\mu \frac{-g_{\mu\nu} + \frac{p_\mu p_\nu}{M_V^2}}{p^2-M_V^2 + \mathrm{i}\, M_V\Gamma_V}
\mathcal M_{V \to F}^\nu .
\label{eq:polme}
\end{equation}
Here, $p$ is the momentum of the intermediate vector boson. The projector $-g_{\mu\nu} + \frac{p_\mu p_\nu}{M_V^2}$ can be expressed as the sum over
the polarisations of the intermediate vector bosons:
\begin{equation}
-g_{\mu\nu} + \frac{p_\mu p_\nu}{M_V^2} = \sum_\lambda \epsilon^\lambda _{\mu} {\epsilon^\lambda _{\nu}}^*.
\end{equation}
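This completeness relation can be verified numerically for an on-shell momentum. The short sketch below uses one common phase convention for the explicit polarisation vectors (an assumption of this illustration, not fixed by the text):

```python
import numpy as np

# Numerical check of the completeness relation
#   sum_lambda eps^mu eps*^nu = -g^{mu nu} + p^mu p^nu / M^2
# for an on-shell massive vector boson moving along the z axis.

M = 80.4        # GeV, illustrative W-boson mass
pz = 250.0      # GeV, momentum along z
E = np.sqrt(M**2 + pz**2)
p = np.array([E, 0.0, 0.0, pz])

g = np.diag([1.0, -1.0, -1.0, -1.0])          # metric (+,-,-,-)

# Helicity eigenvectors (one common phase convention):
eps_plus = np.array([0.0, -1.0, -1.0j, 0.0]) / np.sqrt(2.0)
eps_minus = np.array([0.0, 1.0, -1.0j, 0.0]) / np.sqrt(2.0)
eps_zero = np.array([pz, 0.0, 0.0, E]) / M    # longitudinal mode

lhs = sum(np.outer(e, e.conj()) for e in (eps_plus, eps_minus, eps_zero))
rhs = -g + np.outer(p, p) / M**2

assert np.allclose(lhs, rhs)
print("completeness relation verified")
```

The transverse modes saturate the spatial components orthogonal to the momentum, while the longitudinal mode supplies the $p^\mu p^\nu/M_V^2$ piece that grows with energy.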
Now, when the amplitude in Eq.~\eqref{eq:polme} is squared, one obtains
\begin{eqnarray}
\left|\mathcal M _{I \to V \to F}\right|^2 & = &
\mathcal M_{I \to V}^\mu \frac{ \sum_\lambda \epsilon^\lambda _{\mu} {\epsilon^\lambda _{\nu}}^*} {p^2-M_V^2 + \mathrm{i}\, M_V\Gamma_V}
\mathcal M_{V \to F}^\nu \nonumber\\
& & \times \left(\mathcal M_{I \to V}^\rho \frac{ \sum_{\lambda'} \epsilon^{\lambda'} _{\rho} {\epsilon^{\lambda'} _{\sigma}}^*} {p^2-M_V^2 + \mathrm{i}\, M_V\Gamma_V}
\mathcal M_{V \to F}^\sigma \right)^* \nonumber\\
&\ne&
\sum_\lambda
\left|
\mathcal M_{I \to V}^\mu \frac{ \epsilon^\lambda _{\mu} {\epsilon^\lambda _{\nu}}^*} {p^2-M_V^2 + \mathrm{i}\, M_V\Gamma_V}
\mathcal M_{V \to F}^\nu \right|^2 . \label{eq:polinterf}
\end{eqnarray}
The meaning of Eq.~\eqref{eq:polinterf} is that, since the vector bosons are not external particles, their polarisation states interfere with each other. This, in principle,
jeopardises the definition of a polarised cross section. However, interference terms integrate to zero over the whole range of the decay azimuth angle,
and this makes it possible, at least in principle, to have a well-defined polarised cross section.
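The origin of this cancellation can be made explicit (a standard helicity argument, sketched here for completeness): in the vector-boson rest frame the decay amplitude for helicity $\lambda$ depends on the decay azimuth $\phi$ only through a phase, $\mathcal M_{V\to F}(\lambda)\propto e^{\mathrm{i}\lambda\phi}$, so that an interference term between helicities $\lambda$ and $\lambda'$ is proportional to

```latex
\begin{equation}
\int_0^{2\pi} e^{\mathrm{i}(\lambda-\lambda')\phi}\, d\phi = 2\pi\, \delta_{\lambda\lambda'} ,
\end{equation}
```

which vanishes unless $\lambda=\lambda'$.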
It has to be stressed that, whenever cuts are imposed (as is the case in any realistic setup) or when
one is interested in observables sensitive to the decay degrees of freedom, particularly to the azimuth angle, the cancellation of interferences is no longer guaranteed. This can
be observed in Fig.~\ref{fig:vbspol1} for \ensuremath{\PW^+ \PW^- \Pj\Pj}\ VBS, where singly-polarised cross sections (the positively-charged \ensuremath{\text{W}}\xspace\ remains unpolarised) and their incoherent sum are compared with the full cross sections:
the upper left plot shows the dijet invariant mass, which has no dependence on the lepton azimuth angle. The upper right plot shows the lepton transverse momentum, which has an indirect dependence on the azimuth angle.
Finally, the bottom plot shows the lepton azimuth angle. It can be observed that, while for the dijet invariant mass the incoherent sum of the polarisations and the full cross section
are indistinguishable, small but visible differences appear for the electron transverse momentum, and obvious effects appear for the azimuth angle. Observables
which display good agreement between the incoherent sum of the different polarisation states and the full result can be employed to extract the polarisation fractions for
the different states. While, as mentioned above, cuts on the leptons can in principle spoil such an agreement, in practice their effect is generally mild on most observables.
\begin{figure*}[t]
\centering
\includegraphics[width=0.45\linewidth,clip=true,trim={0.2cm 0cm 1.5cm 0.cm}]{Figures/Mjj_polOSPvsFUL_nolepcut.pdf}
\includegraphics[width=0.45\linewidth,clip=true,trim={0.2cm 0cm 1.5cm 0.cm}]{Figures/Pte_polOSPvsFUL_nolepcut.pdf} \\
\includegraphics[width=0.45\linewidth,clip=true,trim={0.2cm 0cm 1.5cm 0.cm}]{Figures/phiBern_nolepcut.pdf}
%
\caption{Differential singly-polarised cross sections for opposite-sign W scattering at the LHC. Polarisations are defined in the laboratory frame. The unpolarised prediction (black)
is compared with the incoherent sum (purple) of the polarised
ones (blue, green, and red; these use the on-shell projection and take into account only the resonant diagrams). No cut is imposed on leptonic
variables. These figures are taken from Ref.~\cite{Ballestrero:2017bxn}.}
\label{fig:vbspol1}
\end{figure*}
There are other aspects which need to be considered in the definition of polarised cross section. First, vector bosons may be produced off-shell, \emph{i.e.}\ far from the resonance peak.
This issue is typically addressed by using the so-called on-shell projection (OSP), in which the external momenta are transformed such that the intermediate vector boson is on its mass shell.
This is similar to the so-called pole-scheme approximation, usually employed in the computation of higher-order corrections (see for example
Refs.~\cite{Aeppli:1993cb,Aeppli:1993rs,Beenakker:1998gr,Billoni:2013aba,Denner:2000bj,Denner:2016jyo,Denner:2020bcz} and references therein).
Second, non-resonant diagrams may exist, \emph{i.e.}\ diagrams for the process $I\to F$ that do not feature an intermediate vector boson in the $s$-channel (\emph{e.g.}\ the left diagram in the second row of Fig.~\ref{fig:diag}); these are usually assumed not to contribute significantly to the cross section.
The last relevant aspect to be considered is that polarisation vectors are not Lorentz-covariant, hence a given reference frame must be chosen. Typical choices
are the partonic centre-of-mass frame, the laboratory frame or the diboson centre-of-mass frame. In particular the latter has been used in the analysis of
Ref.~\cite{Aaboud:2019gxl}. The choice of a given frame is mostly dictated by practical reasons, like the experimental capability to reconstruct the frame.\footnote{For a recent study on different reconstruction techniques, see \emph{e.g.}\ Ref.~\cite{Grossi:2020orx}.}
Studies available to date, see \emph{e.g.}\ Ref.~\cite{Ballestrero:2020qgv}, show that no frame choice has a particular advantage over the others.
To summarize, the definition of a polarised cross section relies on the following assumptions: interferences cancel in
Eq.~\eqref{eq:polinterf} (strictly true only for integrated quantities); non-resonant diagrams are neglected; and an OSP is introduced to reshuffle the external
momenta onto the vector-boson mass shell. In this context, the polarisation fractions $f_{L/R}$ and $f_0$ can be introduced (with polarisation vectors in a frame of choice)
for a specific kinematic variable $X$. If one considers the case of a $\ensuremath{\text{W}}\xspace^\pm$ boson decaying into lepton and neutrino, where $\theta$ is the polar angle
in the $\ensuremath{\text{W}}\xspace$ rest frame (and the azimuth angle $\phi$ is integrated over), one obtains
\begin{equation}
\frac{1}{d\sigma/dX}\, \frac{d^2\sigma}{d\cos \theta\, dX} =
\frac{3}{8}\left(1\mp\cos\theta\right)^2 f_{\rm L}(X)+ \frac{3}{8}\left(1\pm\cos\theta\right)^2 f_{\rm R}(X)+ \frac{3}{4}\sin^2\theta\, f_0(X).
\end{equation}
Using this equation, one could extract the polarisation fractions from data by fitting the angular distributions.
In practice, the experimental analysis is more complicated, since selections alter the shape as a function of $\cos\theta$, and angular-dependent acceptance and efficiency factors must be taken into account.
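As an illustration, the fractions can be obtained from a binned $\cos\theta$ distribution by a linear least-squares fit of the three angular shapes. The sketch below assumes an idealised, background-free distribution for a $\ensuremath{\text{W}}\xspace^+$ boson (upper signs in the equation above) and hypothetical input fractions; it is not the experimental procedure, which must also fold in acceptance and efficiency:

```python
import numpy as np

def basis_shapes(c):
    """Normalised angular shapes for left, right, longitudinal polarisation
    of a W+ boson (charged-lepton polar angle in the W rest frame)."""
    return np.vstack([
        3.0 / 8.0 * (1.0 - c) ** 2,   # f_L term
        3.0 / 8.0 * (1.0 + c) ** 2,   # f_R term
        3.0 / 4.0 * (1.0 - c ** 2),   # f_0 term (sin^2 theta)
    ]).T

def fit_fractions(cos_theta_centres, density):
    """Linear least-squares fit of the normalised density to the shapes."""
    A = basis_shapes(cos_theta_centres)
    coeffs, *_ = np.linalg.lstsq(A, density, rcond=None)
    return coeffs / coeffs.sum()  # enforce f_L + f_R + f_0 = 1

# Closure test with hypothetical input fractions:
true_f = np.array([0.3, 0.5, 0.2])
c = np.linspace(-0.99, 0.99, 50)
density = basis_shapes(c) @ true_f
print(fit_fractions(c, density))  # recovers approximately [0.3, 0.5, 0.2]
```

Since each basis shape integrates to unity over $\cos\theta \in [-1,1]$, the fitted coefficients can be read directly as polarisation fractions.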
The method discussed above is general, in the sense that it can be applied to any process featuring intermediate vector bosons. While the discussion
has been carried out assuming a single polarised vector boson, it can easily be extended to the case where more vector bosons appear, such as VBS. Besides the case mentioned
above (single vector boson, and top production), the method of Ref.~\cite{Ballestrero:2017bxn} has been also applied to vector-boson pair production in Refs.~\cite{Baglio:2019nmc,Denner:2020bcz,Denner:2020eck},
and in Ref.~\cite{BuarqueFranzosi:2019boy} it has been automatised using {\sc\small MadGraph5\_aMC@NLO}\xspace~\cite{Alwall:2014hca} (henceforth denoted as {\sc\small MG5\_aMC}\xspace) and {\sc MadSpin}~\cite{Artoisenet:2012st},
paving the way to the possibility of including NLO QCD corrections in VBS analyses.
\subsubsection{Phenomenological results}
Having introduced the polarisation fractions and the prerequisites for them to be well defined, we now show some phenomenological results
which highlight how these fractions can be employed to probe beyond-the-SM effects.
The first case we consider is discussed in Ref.~\cite{Ballestrero:2017bxn}: a Higgs-less SM, \emph{i.e.}\ the limit in which the Higgs-boson mass is pushed to infinity.
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\linewidth,clip=true]{Figures/Mww300_noHvsSM.pdf}
%
\caption{Polarisation fractions in the SM and in the Higgs-less SM, in opposite-sign W scattering at the LHC, for the invariant mass of the W
pair. Polarisations are defined in the laboratory frame. The (negatively-charged) lepton from
the polarised W boson is required
to satisfy the cuts $p_T(\ell) > 20 \ensuremath{\,\text{GeV}}\xspace$, $|\eta(\ell)| < 2.5$. This figure is taken from Ref.~\cite{Ballestrero:2017bxn}.}
\label{fig:vbspolnoH}
\end{figure*}
In Fig.~\ref{fig:vbspolnoH}, the $\ensuremath{\text{W}}\xspace\PW$ invariant-mass distribution is shown, for the unpolarised process as well as for the different polarisation states of the negatively-charged vector boson, in both the SM and the Higgs-less SM (dubbed \emph{NoH} in the figure). Since the Higgs boson unitarises the scattering of longitudinal
vector bosons, one expects the longitudinally-polarised component in the Higgs-less SM to display a harder behaviour with respect to the SM case, as can be seen
in the plot of Fig.~\ref{fig:vbspolnoH}. While the left and right polarisations display an identical behaviour in the two models, the behaviour of the longitudinal polarisation
is radically different at high energies. Indeed, in the SM, $21\%$ of events feature a longitudinally-polarised vector boson when a minimum cut $M_{\ensuremath{\text{W}}\xspace\PW} > 300\ensuremath{\,\text{GeV}}\xspace$ is required
on their invariant mass. The fraction decreases to $15\%$ when the invariant-mass cut is raised to $1\ensuremath{\,\text{TeV}}\xspace$. In the Higgs-less case, the two fractions become respectively
$27\%$ and $35\%$, with more than a factor-2 effect when the hardest cut is imposed.
The second example, from Ref.~\cite{BuarqueFranzosi:2019boy}, compares the SM case with a composite-Higgs
model~\cite{Kaplan:1983fs,Kaplan:1983sm,Georgi:1984af,Dugan:1984hq,Contino:2003ve,Agashe:2004rs,Contino:2006qr,Agashe:2006at}. In this class of models, or at least in their most recent versions, the interaction of the Higgs
boson and the weak gauge bosons is rescaled with a common factor $a$, and can be described by the following effective Lagrangian~\cite{Bellazzini:2014yua,Panico:2015jxa}
\begin{equation}
L \supset \left(\frac{M_\ensuremath{\text{Z}}\xspace^2}{2} \ensuremath{\text{Z}}\xspace^\mu \ensuremath{\text{Z}}\xspace_\mu + M_\ensuremath{\text{W}}\xspace^2 \ensuremath{\text{W}}\xspace^{+\mu} \ensuremath{\text{W}}\xspace^-_\mu\right)\left(1 + 2 a \frac{h}{v} + \ldots \right)\,.
\end{equation}
The SM case, in which the scattering of longitudinal vector bosons is unitarised, is recovered for $a=1$, while values of $a$ different from unity lead to a unitarity-violating
behaviour. As in the previous case, the process at hand is \ensuremath{\PW^+ \PW^- \Pj\Pj}\ production, and the $\ensuremath{\text{W}}\xspace\PW$ invariant-mass distribution shown in Fig.~\ref{fig:vbspolCH} is examined,
where the polarisation fractions of both vector bosons are given. The upper plot in the figure
shows the polarisation fractions in the SM ($a=1$), while the lower inset shows how they are affected when one sets $a=0.8$ (dashed) or $a=0.9$ (solid), by plotting the ratio:
\begin{equation}
\mathcal R(M_{\ensuremath{\text{W}}\xspace\PW})
\equiv
\left. \frac{d \sigma(a)} {d M_{\ensuremath{\text{W}}\xspace\PW}} \middle/ \frac{d \sigma(a=1)}{d M_{\ensuremath{\text{W}}\xspace\PW}} \right.\,.
\label{eq:vbspolR}
\end{equation}
\begin{figure*}[t]
\centering
\includegraphics[width=0.7\linewidth,clip=true]{Figures/cmww_binvar_rf12_cut5_2.pdf}
%
\caption{Polarisation fractions in the SM in opposite-sign W scattering at the LHC, for the invariant mass of the W pair. Polarisations are defined in the partonic centre-of-mass frame.
The labels 0 and T denote the longitudinal and transverse polarisations, respectively.
The inset displays the ratios $\mathcal R (M_{WW})$ defined in Eq.~\protect\eqref{eq:vbspolR} for $a=0.8$ (dashed) and $a=0.9$ (solid).
This figure is taken from Ref.~\cite{BuarqueFranzosi:2019boy}.}
\label{fig:vbspolCH}
\end{figure*}
The most striking behaviour can be observed in the ratio for the longitudinal-longitudinal scattering fraction:
it shows effects of the order of $30\%$ in the largest-mass bin considered for $a=0.9$, growing up to $100\%$ for $a=0.8$.
\subsubsection{Theoretical calculations}
As seen in the previous sections, theoretical predictions for VBS signatures with fully leptonic final states are numerous and rather complete.
Unfortunately, to date, there are no public predictions beyond LO accuracy for any of the semileptonic signatures.
Such processes include $\ensuremath{\text{p}}\xspace\Pp\to\ell^+\ell^-\ensuremath{\text{j}}\xspace\Pj\ensuremath{\text{j}}\xspace\Pj$ and $\ensuremath{\text{p}}\xspace\Pp\to\ell\nu_\ell\ensuremath{\text{j}}\xspace\Pj\ensuremath{\text{j}}\xspace\Pj$, \emph{i.e.}\ final states with four jets and either two charged leptons or a charged lepton and a neutrino.
This is currently at the edge of the state of the art, as only two such computations have been performed \cite{Denner:2017kzu,Anger:2017glm} and both describe processes with
bottom quarks in the final state, meaning that in a VBS analysis they would probably be discarded because of b-jet vetoes.
While the necessary technology is already there, the two aforementioned semileptonic processes require the combination of two (\ensuremath{\PW^\pm \PZ \Pj\Pj}\ and \ensuremath{\PZ\PZ\Pj\Pj}) or three (\ensuremath{\PW^\pm \PW^\pm \Pj\Pj}, \ensuremath{\PW^\pm \PZ \Pj\Pj}, and \ensuremath{\PW^+ \PW^- \Pj\Pj}) VBS processes, respectively, implying a significant computing burden for such signatures.
Nonetheless, generating the on-shell gauge bosons at NLO QCD and then decaying them either leptonically or hadronically would be an option.
This would lead to neglecting some non-resonant and interference contributions which are expected to be small, or even negligible, but it would considerably reduce the computing time.
For example, for the $\ell\nu_\ell\ensuremath{\text{j}}\xspace\Pj\ensuremath{\text{j}}\xspace\Pj$ final state, this would imply computation at NLO QCD of the on-shell processes
$\ensuremath{\text{p}}\xspace\Pp\to\ensuremath{\text{W}}\xspace^+\ensuremath{\text{W}}\xspace^-\ensuremath{\text{j}}\xspace\Pj$, $\ensuremath{\text{p}}\xspace\Pp\to\ensuremath{\text{W}}\xspace^\pm\ensuremath{\text{Z}}\xspace\ensuremath{\text{j}}\xspace\Pj$, and $\ensuremath{\text{p}}\xspace\Pp\to\ensuremath{\text{W}}\xspace^\pm\ensuremath{\text{W}}\xspace^\pm\ensuremath{\text{j}}\xspace\Pj$, before adding the corresponding decays of the heavy gauge bosons.
This can be done for both the background and the signal, provided care is taken regarding the VBS approximations.
\subsubsection{Experimental approaches}
Both ATLAS and CMS have reported studies of semileptonic VBS final states in the 2016 subset of the
13-TeV LHC data, including $\ensuremath{\text{W}}\xspace^\pm V$ and $\ensuremath{\text{Z}}\xspace V$. However,
the two analyses are rather different in their principles and scopes.
The ATLAS study~\cite{ATLAS:VVsemilep} is targeting a SM VBS measurement, therefore combining a
larger set of experimental signatures to cover the largest possible phase space at the price of very high
backgrounds. The study employs a complex multivariate analysis based on boosted decision trees (BDTs) in a total of nine event categories.
The categories differ in the number of charged leptons used to
identify the leptonic vector-boson decay (0 leptons for $\ensuremath{\text{Z}}\xspace \rightarrow \nu {\bar \nu}$, 1 lepton for
$\ensuremath{\text{W}}\xspace^\pm \rightarrow \ell^\pm \nu$, 2 leptons for $\ensuremath{\text{Z}}\xspace \rightarrow \ell^+ \ell^-$) and in the way the hadronic
vector-boson decay (denoted as $V$) is identified: two categories are used for \emph{merged} single-jet
reconstruction, which differ
in the purity of the selection working point, and one for the \emph{resolved} two-jet reconstruction. The
reconstruction of the merged jet employs the anti-$k_{\rm T}$ algorithm with a large distance parameter $R = 1.0$ and
its identification is based on the \emph{jet trimming} technique~\cite{Krohn:2009th}, which looks for candidate
subjets inside the larger-area jet and discards constituents not associated to those subjets. The invariant
mass of the merged jet $m_{\rm J}$ is computed after trimming and the jet is re-calibrated.
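Schematically, the trimming step amounts to discarding soft subjets. The sketch below is a toy illustration, not the actual implementation: subjets are represented only by their transverse momenta, and the threshold fraction of $0.05$ is a hypothetical choice (real analyses recluster the jet constituents, \emph{e.g.}\ with FastJet):

```python
def trim_subjets(subjet_pts, jet_pt, f_cut=0.05):
    """Jet-trimming sketch: keep only subjets carrying at least a
    fraction f_cut of the original jet transverse momentum; the
    constituents of the discarded subjets are dropped from the jet."""
    return [pt for pt in subjet_pts if pt >= f_cut * jet_pt]

# Toy example: a 200 GeV jet with four candidate subjets;
# the two soft subjets fall below the 10 GeV threshold and are discarded.
print(trim_subjets([150.0, 40.0, 8.0, 3.0], 200.0))  # [150.0, 40.0]
```

The merged-jet mass is then recomputed from the surviving constituents, which is what makes $m_{\rm J}$ robust against pile-up and underlying-event contamination.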
The CMS study~\cite{CMS:VVsemilep} is
devoted exclusively to BSM searches and is therefore a cut-based analysis, optimized for high-invariant-mass $VV$
regions. Only two categories are
considered, $\ensuremath{\text{W}}\xspace^\pm \rightarrow \ell^\pm \nu$ and $\ensuremath{\text{Z}}\xspace \rightarrow \ell^+ \ell^-$ plus a merged jet.
Sensitivity to SM VBS is not evaluated, while the absence of excesses at high mass is interpreted in terms
of constraints on aQGC in EFT or on a complete BSM theory, the Georgi-Machacek model~\cite{Georgi:1985nv}. The merged
jet reconstruction in CMS uses the anti-$k_{\rm T}$ algorithm with $R = 0.8$: jets are identified by the more
recent modified mass-drop algorithm~\cite{Larkoski:2014wba}, providing a cleaner estimate of the invariant mass.
The $N$-subjettiness variables $\tau_N$, related to the compatibility of the large-area jet with being composed
of $N$ subjets, can be used to select jet candidates compatible with hadronic $V$ decays.
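Put together, the merged-jet identification described above amounts, schematically, to a jet-mass window plus an $N$-subjettiness ratio requirement. The helper below is hypothetical (not the CMS code), with the mass window and $\tau_2/\tau_1$ threshold taken from the CMS selection summarised later in the text:

```python
def is_v_candidate(m_jet, tau2, tau1, m_window=(65.0, 105.0), tau21_max=0.55):
    """Sketch of a merged hadronic-V candidate selection: trimmed/groomed
    jet mass inside the V mass window, and N-subjettiness ratio
    tau_21 = tau2/tau1 small enough for a two-pronged jet."""
    tau21 = tau2 / tau1
    return m_window[0] < m_jet < m_window[1] and tau21 < tau21_max

print(is_v_candidate(85.0, 0.20, 0.50))  # True: mass in window, tau21 = 0.4
print(is_v_candidate(85.0, 0.40, 0.50))  # False: tau21 = 0.8, not two-pronged
```

A genuine two-prong $V\to q\bar q'$ jet has small $\tau_2$ relative to $\tau_1$, while a single-prong QCD jet does not, which is why the ratio discriminates.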
\paragraph{Monte Carlo simulation}
CMS uses {\sc\small MG5\_aMC}\xspace at LO without additional partons to simulate the EW, strong, and interference components for all the
possible final states considered in the analysis, as well as for all BSM samples.
An important feature of semileptonic searches is that
there are sizeable contributions from the single-$\ensuremath{\text{W}}\xspace$ and Drell-Yan processes, where additional jets
from QCD radiation are misreconstructed as the hadronic $V$ decay, as well as from processes with top quarks.
The processes $V$+jets with up to four outgoing partons at Born level are simulated
at QCD LO accuracy using {\sc\small MG5\_aMC}\xspace and merged using the MLM matching scheme.
The $\ensuremath{\text{t}}\xspace{\bar \ensuremath{\text{t}}\xspace}$, $\ensuremath{\text{t}}\xspace{\bar \ensuremath{\text{t}}\xspace}V$, and
single-top processes are generated at NLO accuracy using {\sc POWHEG}. The simulated samples
of background processes are normalized to the best predictions available for the total cross sections.
PDF and scale choices follow those made in other 13-TeV CMS analyses for 2016 simulations.
The ATLAS simulation choices are very similar. The $\ensuremath{\text{W}}\xspace$+jets and Drell-Yan processes are generated using {\sc Sherpa}\xspace 2.2.1,
and an alternative NLO description with up to two extra partons is also used. Strong $VV$ production is also simulated with {\sc Sherpa}\xspace,
in a more advanced setup than in CMS, with up to one additional parton at NLO and up to three additional partons at LO.
Matching and merging for {\sc Sherpa}\xspace samples are performed in the MEPS scheme. Interference between EW and strong
$VV$ production is neglected and is instead accounted for as a systematic uncertainty in the measurement, varying between
5 and 10\% at different values of the BDT score (see below).
\paragraph{Fiducial region definitions and/or reconstruction-level selections}
Fiducial regions considered in the ATLAS analysis are shown in Table~\ref{tab:semilepfr}. Since CMS only targets
BSM constraints, there is no corresponding fiducial region, so offline event selection requirements are shown
instead.
\begin{table}[htb]
\caption{Fiducial region definitions and related EW (VBS) cross-section values in the ATLAS semileptonic VBS measurement~\cite{ATLAS:VVsemilep}, and
the reconstruction-level selections in the analogous CMS analysis~\cite{CMS:VVsemilep}. The symbol $J$ (capitalized) stands for a
merged jet. The subscripts $V$ and \emph{tag} stand for jets from $V$ decays or VBS-tagging jets, respectively.}
\center
{\begin{tabular}{@{}ccc@{}} \toprule
Variable & ATLAS & CMS (reconstruction level) \\
\midrule
$p_{\rm T}(\ell)$ & $> 27$ GeV (1$\ell$), $> 28/20$ GeV (2$\ell$) & $> 50$ GeV (1$\ell$), $> 50/30$ GeV (2$\ell$) \\
$|\eta(\ell)|$ & $< 2.5$ & $< 2.4/2.5$ \\
$m_{\ell\ell}$ & $[83,99]$ GeV (2$\ell$) & $[76,106]$ GeV (2$\ell$)\\
$p_{\rm T,miss}$ & $> 200$ GeV (0$\ell$), $> 80$ GeV (1$\ell$) & $> 50/80$ GeV (1$e$/1$\mu$) \\
$p_{\rm T}({\rm J})$ & $> 200$ GeV (merged) & $> 200$ GeV \\
$|\eta({\rm J})|$ & $< 2.0$ (merged) & $< 2.4$ \\
$\tau_2/\tau_1({\rm J})$ & - & $< 0.55$ \\
$|\eta(\ensuremath{\text{j}}\xspace)|$ & $< 4.5$ & $< 5.0 $\\
$p_{\rm T}(\ensuremath{\text{j}}\xspace_{V})$ & $> 40/20$ GeV (resolved) & - \\
$m_{\ensuremath{\text{j}}\xspace\Pj,V}$ or $m_{\rm J}$ & $[64,106]$ GeV & $[65,105]$ GeV \\
$p_{\rm T}(\ensuremath{\text{j}}\xspace\Pj_{\rm tag})$ & $> 30$ GeV & $> 30$ GeV \\
$m_{\ensuremath{\text{j}}\xspace\Pj, \rm tag}$ & $> 400$ GeV & $> 800$ GeV \\
JRS & $\eta_{\rm \ensuremath{\text{j}}\xspace_1,tag} \cdot \eta_{\rm \ensuremath{\text{j}}\xspace_2,tag} < 0$ & $ \Delta\eta_{\rm \ensuremath{\text{j}}\xspace\Pj,tag} > 4.0$ \\
\midrule
$\sigma$ LO & $43.0 \pm 2.4$ fb & - \\
\bottomrule
\end{tabular} \label{tab:semilepfr}}
\end{table}
In both ATLAS and CMS the 2$\ell$- and 1$\ell$-channel events are selected with single-electron
or single-muon triggers, while events for the ATLAS 0$\ell$ channel are recorded with triggers requiring large
$p_{\rm T,miss}$. Both experiments require strictly zero $\ensuremath{\text{b}}\xspace$-tagged jets in channels with a reconstructed $\ensuremath{\text{W}}\xspace^\pm$,
to strongly suppress the top-quark background.
In ATLAS, reconstruction-level selections follow the fiducial regions requirements very closely with minor
additions (tightened \ensuremath{\text{Z}}\xspace-mass requirement, jet-lepton and jet-jet angular separation, multijet suppression
in the 0$\ell$ channel). Enhancement of the BSM EW components in the 1$\ell$ channel is performed in CMS using the Zeppenfeld variables
previously defined and the boson \emph{centrality}, defined as
$\Theta = \min[\min(\eta_\ensuremath{\text{W}}\xspace, \eta_V) - \min(\eta_{\rm \ensuremath{\text{j}}\xspace_1,tag}, \eta_{\rm \ensuremath{\text{j}}\xspace_2,tag}), \max(\eta_{\rm \ensuremath{\text{j}}\xspace_1,tag}, \eta_{\rm \ensuremath{\text{j}}\xspace_2,tag}) - \max(\eta_\ensuremath{\text{W}}\xspace, \eta_V)]$,
which is positive when both bosons lie between the tagging jets in pseudorapidity.
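A minimal sketch of this variable (a hypothetical helper, not the CMS implementation), arranged so that $\Theta$ is positive when both bosons lie in pseudorapidity between the two tagging jets:

```python
def boson_centrality(eta_w, eta_v, eta_j1, eta_j2):
    """Zeppenfeld-like boson centrality: positive when both bosons lie
    between the two VBS-tagging jets in pseudorapidity, negative otherwise."""
    lo, hi = min(eta_j1, eta_j2), max(eta_j1, eta_j2)
    return min(min(eta_w, eta_v) - lo, hi - max(eta_w, eta_v))

print(boson_centrality(0.5, -0.5, -4.0, 4.0))  # 3.5: both bosons central
print(boson_centrality(5.0, 0.0, -4.0, 4.0))   # -1.0: one boson outside the jets
```

A cut $\Theta > 0$ thus selects the characteristic VBS topology of central bosons between forward tagging jets.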
\paragraph{Analysis strategy and background estimation}
The ATLAS analysis uses different BDTs for the resolved- and merged-jet categories, comprising many variables
(16 and 23, respectively, when considering all lepton channels) to isolate the SM VBS signal over the large
backgrounds. The list of variables includes not only kinematic and jet-identification quantities, but also
variables sensitive to the quark or gluon origin of small-area jets, such as the jet width, the number of
charged tracks inside the jet, and the number of ``track jets'', which are built, as an alternative to calorimeter-based
jets, from only the charged tracks compatible with the hardest event vertex.
CMS uses the transverse mass $m_{\rm T}(\ensuremath{\text{W}}\xspace V)$ or the invariant mass $m(\ensuremath{\text{Z}}\xspace V)$ for the final fits.
The background estimation is obtained by analytical fits with suitable empirical functions. Fit template
shapes for the various background components are obtained using the $m_{\rm J}$ sidebands and corrected with
sideband-to-signal transfer factors obtained from simulation. In ATLAS, the estimation of the two main
backgrounds $\ensuremath{\text{W}}\xspace/\ensuremath{\text{Z}}\xspace$+jets and $\ensuremath{\text{t}}\xspace{\bar \ensuremath{\text{t}}\xspace}$ is performed in dedicated control regions, obtained by using the
$m_{\rm J}$ sidebands or inverting the $\ensuremath{\text{b}}\xspace$-tag requirement, respectively. From these regions, simulation-to-data
correction factors are obtained as a function of $m_{\rm \ensuremath{\text{j}}\xspace\Pj,tag}$ and applied to simulated BDT shapes in the
signal regions. The nine BDT output
distributions, as well as the $m_{\rm \ensuremath{\text{j}}\xspace\Pj,tag}$ shapes in the control regions, are used in the statistical analysis.
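The sideband-to-signal extrapolation common to both strategies can be sketched as follows. This is a toy illustration with single yields and hypothetical numbers; the real analyses transfer full template shapes rather than event counts:

```python
def background_estimate(n_data_sideband, n_mc_sideband, n_mc_signal_region):
    """Data-driven background estimate: scale the sideband data yield by a
    simulation-derived sideband-to-signal-region transfer factor."""
    transfer_factor = n_mc_signal_region / n_mc_sideband
    return n_data_sideband * transfer_factor

# Toy yields (hypothetical numbers):
print(background_estimate(1000.0, 800.0, 400.0))  # 500.0 events expected
```

The appeal of the method is that the simulation only needs to model the sideband-to-signal ratio, not the absolute normalisation, which is taken from data.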
The ATLAS and CMS $\ensuremath{\text{W}}\xspace V$ data with superimposed signal and background components are shown in Figure~\ref{fig:semilep}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{Figures/CMS-SMP-18-006_Figure_003-a.pdf}
\includegraphics[width=0.41\textwidth]{Figures/fig_05c_semilep.pdf}
\caption{Distribution of $m_{\rm T}(\ensuremath{\text{W}}\xspace\ensuremath{\text{Z}}\xspace)$ in the 1$\ell$ CMS signal region (left) and of the BDT score in the 1$\ell$
ATLAS signal region with resolved jets (right). The dominance of the $\ensuremath{\text{W}}\xspace$+jets background is evident in both cases. In the
CMS plot, the continuous line indicates the analytical fit.}
\label{fig:semilep}
\end{figure*}
\paragraph{Systematic uncertainties}
In ATLAS, many systematic uncertainties contribute to a total uncertainty which is larger than the statistical one.
Among the theoretical and modelling uncertainties, the main contributions are the limited size of the simulated
samples and the modelling of the $\ensuremath{\text{Z}}\xspace$+jet component; while among the experimental uncertainties, the largest
are the uncertainties on the $\ensuremath{\text{b}}\xspace$-tag veto and the identification and energy calibration of merged jets.
CMS uncertainties are defined by variation of single components, and their impact on the results is not provided.
In general, theory uncertainties larger than in ATLAS appear to contribute for the EW, QCD, and BSM components alike. Other significant contributions come from the limited size of the simulated samples and from the merged-jet description,
in agreement with the ATLAS study.
\paragraph{Results}
ATLAS reports a measured fiducial cross-section of $\sigma_{\mathrm{EW}} = 45 \pm 18{\ensuremath\unskip\,\text{fb}}\xspace$, where the
systematic uncertainty is about twice the statistical one.
This corresponds to a rejection of the background-only hypothesis with a significance of 2.7$\sigma$, with 2.5$\sigma$ expected.
The merged analysis has a much better systematic-to-statistical uncertainty ratio than the resolved one,
and gives the largest contribution to the significance.
In the CMS analysis, limits on aQGC are set by fitting the mass distributions mentioned before. The list of
constrained Wilson coefficients is the same as in the combined leptonic \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ and \ensuremath{\PW^\pm \PZ \Pj\Pj}\ CMS analysis. While obtained
with about one fourth of the data, constraints using the semileptonic analysis are comparable or better. In some
cases the limits improve on those obtained in the leptonic analysis by a factor of 3--5, in particular for the
$f_{S,1}$ and the mixed operators.
\subsubsection{Theoretical calculations}
From a theoretical point of view, the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ channel is without a doubt the most accessible one, because of the reduced number of partonic channels and Feynman diagrams.
Calculations started already ten years ago with the computation of the QCD corrections to the EW process at order $\mathcal{O}\left(\alpha_{\rm s} \alpha^6 \right)$ in the VBS
approximation~\cite{Jager:2009xx,Denner:2012dz}.
Such corrections have been then matched to parton shower in programs such as {\sc POWHEG} \cite{Jager:2011ms} or {\sc VBFNLO} \cite{Arnold:2008rz,Arnold:2011wj,Baglio:2014uba}.
The QCD background has also been known at NLO for some time \cite{Melia:2010bm,Campanario:2013gea} and has been matched to PS \cite{Melia:2011gk}.
Only recently have the NLO EW corrections of order $\mathcal{O}\left(\alpha^7 \right)$ been computed \cite{Biedermann:2016yds} and found to be large.
The full tower of NLO corrections has been computed a few months later in Ref.~\cite{Biedermann:2017bss}.
Given the size of the EW corrections, these have been implemented in the computer program {\sc POWHEG} \cite{Chiesa:2019ulk} so that they can be combined with other matched predictions.
As \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ is a representative channel for all VBS processes, in Ref.~\cite{Ballestrero:2018anz} several computer programs
\cite{Ballestrero:2007xq,Kilian:2007gr,Moretti:2001zz,Schwan:2018nhl,Nason:2004rx,Frixione:2007vw,Alioli:2010xd,Arnold:2008rz,Arnold:2011wj,Baglio:2014uba,Alwall:2014hca}
have been used for a comparative study of fixed and matched predictions.
One of the main findings of this study is that different NLO QCD predictions matched to parton shower can vastly differ for observables involving the third jet
(\emph{i.e.}\ a non-tagging jet).
This is particularly apparent on the right-hand side of Fig.~\ref{fig:ssWWTH}.
Nonetheless, we would like to emphasize that, upon using {\sc Pythia8} with the correct recoil scheme or {\sc Herwig},
reliable predictions can be obtained even for the third-jet observables.
This has been discussed in Sec.~\ref{sec:theory} and studied in detail in Ref.~\cite{Jager:2020hkz} for Higgs production via VBF.
It implies that a jet veto in central regions can be used in experimental analyses, provided that care is taken to use appropriate theoretical predictions.
The second main finding of this study is that the VBS approximation is accurate to within a few per cent for typical VBS event selections.
This implies that the VBS approximations at fixed order or used in combination with parton shower are reliable as long as selection cuts are able to suppress non-VBS contributions such as tri-boson contributions
(see further discussion in Sec.~\ref{sec:zz}).
After the publication of Ref.~\cite{Ballestrero:2018anz}, other comparative studies have appeared~\cite{ATLAS:WWMC,ATLAS:2020ryx}.
It should be made clear that, while the former study was a tuned comparison among the different event generators, where all
the parameters relevant for the partonic cross sections were identical, this is not always the case for the latter studies. In the
first case, discrepancies among predictions are unambiguously due to the different shower and hadronisation models of the Monte Carlo programs.
In the second case a certain degree of ambiguity remains in the origin of the discrepancies, which may jeopardise a proper understanding of these effects.
\begin{figure*}[t]
\centering
\includegraphics[width=0.42\linewidth,height=0.35\textheight,clip=true,trim={0.cm 0.cm 0.cm 0.cm}]{Figures/ptj1}
\includegraphics[width=0.54\linewidth,height=0.34\textheight,clip=true,trim={0.cm 0.cm 0.cm 0.cm}]{Figures/z_j3}
%
\caption{Differential distributions for the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ channel.
Left: transverse momentum of the hardest jet with LO, NLO EW, and NLO EW + PS accuracy.
Right: normalised average rapidity of the third jet at NLO QCD + PS accuracy for different predictions.
These figures are taken from Ref.~\cite{Chiesa:2019ulk} and Ref.~\cite{Ballestrero:2018anz}, respectively.}
\label{fig:ssWWTH}
\end{figure*}
A summary of the available predictions is provided in Table~\ref{tab:ssWWTH}.
It is fair to say that the theoretical status is rather good, given the experimental accuracy available now and expected in the next ten years.
In particular, almost all NLO orders matched to parton shower are known.
The only exception is the order $\mathcal{O}\left(\alpha_{\rm s}^2 \alpha^5 \right)$, which has been shown in Ref.~\cite{Biedermann:2017bss} to be suppressed.
Also, the order $\mathcal{O}\left(\alpha_{\rm s} \alpha^6 \right)$ matched to PS is only known in the VBS approximation.
Nonetheless, provided that typical VBS phase-space regions are considered, this approximation is a very good one.
Going beyond it would require a method to match mixed-type corrections, which does not currently exist.
\begin{table}[htb]
\caption{Summary of higher-order predictions currently available for the ss-WW channel: at fixed order and matched to parton shower.
The symbols {\bf \color{green} \checkmark}, {\bf \color{green} \checkmark$^*$}, and {\bf \color{red} X}
mean that the corresponding predictions are available, available only in the VBS approximation, or not available yet, respectively.
}
\center
{\begin{tabular}{l|cccc}
Order & $\mathcal{O}\left(\alpha^7 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}} {\alpha}^6 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}}^2 {\alpha}^5 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}}^3 {\alpha}^4 \right)$\\
\hline
NLO & {\bf \color{green} \checkmark} & {\bf \color{green} \checkmark} & {\bf \color{green} \checkmark} & {\bf \color{green} \checkmark} \\
NLO+PS & {\bf \color{green} \checkmark} & {\bf \color{green} \checkmark$^*$} & {\bf \color{red} X} & {\bf \color{green} \checkmark}
\end{tabular} \label{tab:ssWWTH}}
\end{table}
\subsubsection{Experimental approaches}
Both ATLAS and CMS have reported the observation of electroweak \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ production using a partial 13 TeV data set~\cite{ATLAS:ssWW,CMS:ssWW}.
CMS has already published the same search on the full data set, in combination with the \ensuremath{\PW^\pm \PZ \Pj\Pj}\ final state~\cite{CMS:ssWWandWZ}.
\paragraph{Monte Carlo simulation}
ATLAS and CMS use Monte Carlo simulations to evaluate the signal and several background contributions to the selected
data samples.
CMS uses the {\sc\small MG5\_aMC}\xspace generator~\cite{Alwall:2014hca}, version 2.4.2 to simulate the electroweak,
strong, and interference components separately at LO. All samples have no extra partons beyond the two quarks in the
simulated process and are hence inclusive in the number of extra jets. The interference is estimated to be about $4\%$
of the signal and is included in the signal yield.
Since the CMS analysis is more recent, it could benefit from the complete application of the
NLO QCD+EW corrections computed in Ref.~\cite{Biedermann:2017bss} (see previous section), which decrease the cross sections by
10--15\%, the correction being larger at higher values of \ensuremath{m_{\Pj\Pj}}\ and $m_{\ell\ell'}$. Similar settings are used to
simulate the \ensuremath{\PW^\pm \PZ \Pj\Pj}\ component, which is analyzed together in a single study. Other minor backgrounds, including
tribosons, processes with at least a top quark ($\ensuremath{\text{t}}\xspace {\bar \ensuremath{\text{t}}\xspace}$, $\ensuremath{\text{t}}\xspace {\bar \ensuremath{\text{t}}\xspace}\ensuremath{\text{W}}\xspace$, $\ensuremath{\text{t}}\xspace {\bar \ensuremath{\text{t}}\xspace}\ensuremath{\text{Z}}\xspace$, $\ensuremath{\text{t}}\xspace\ensuremath{\text{W}}\xspace$, $\ensuremath{\text{t}}\xspace\ensuremath{\text{Z}}\xspace q$ etc.)
as well as other diboson processes, are simulated with either {\sc POWHEG}\xspace~\cite{Nason:2004rx,Frixione:2007vw,Alioli:2010xd} or {\sc\small MG5\_aMC}\xspace, mostly
with inclusive NLO QCD accuracy.
ATLAS uses the {\sc Sherpa}\xspace generator, version 2.2.2~\cite{Bothmann:2019yzt} to simulate the electroweak, strong and interference processes at LO. All samples are simulated with up to 1 extra parton beyond the two quarks and the 0- and 1-parton processes are merged using the MEPS matching scheme included in {\sc Sherpa}\xspace. The interference is estimated to be about 6\% of the signal.
An alternative description of the VBS signal process is obtained using {\sc POWHEG}\xspace at NLO in QCD~\cite{Jager:2009xx}.
A large difference between the two theoretical fiducial cross sections is found\footnote{In Ref.~\cite{Hoeche} it has been documented that there was an issue in {\sc Sherpa}\xspace regarding the colour-flow setup when using parton shower for VBS-like processes.
To our knowledge this issue has been resolved, but the corresponding results have not yet appeared in any further publication.}. We believe that these differences should be investigated beyond the work done in Ref.~\cite{ATLAS:WWMC}
(see remarks in the previous section, as well as in the corresponding part of Sec.~\ref{sec:wz} for \ensuremath{\PW^\pm \PZ \Pj\Pj}).
Other backgrounds considered (in general the same as CMS, but with more emphasis
on $V\gamma$ and electroweak $V\gamma \ensuremath{\text{j}}\xspace\Pj$, which are found to be an important contribution to the background) are generated
using different tools, perturbative accuracies, and extra-parton multiplicities.
\paragraph{Fiducial region definitions and reconstruction-level selections}
Fiducial regions considered in the ATLAS and CMS analyses are compared in Table~\ref{tab:ssWWfr}. In both analyses
leptons from $\tau$ decays are not considered in the fiducial region, and lepton momenta are corrected by adding
possible final-state photon radiation in a cone of $\Delta R < 0.1$ around the lepton direction.
\begin{table}[htb]
\caption{Comparison of \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ fiducial region definitions and related EW (VBS) cross-section values in the ATLAS and CMS measurements~\cite{ATLAS:ssWW,CMS:ssWWandWZ}.
\emph{JRS} stands for generic Jet-Rapidity Separation selections. ATLAS $\sigma_\mathrm{LO}$ has the issues reported in the text.}
\center
{\begin{tabular}{@{}ccc@{}} \toprule
Variable & ATLAS & CMS \\
\midrule
$p_{\rm T}(\ell)$ & $> 27$ GeV & $> 20$ GeV \\
$|\eta(\ell)|$ & $< 2.5$ & $< 2.5$ \\
$\Delta R(\ell\ell')$ & $> 0.3$ & - \\
$m_{\ell\ell'}$ & $> 20$ GeV & $> 20$ GeV \\
$p_{\rm T,miss}$ & $> 30$ GeV & - \\
$p_{\rm T}(\ensuremath{\text{j}}\xspace)$ & $> 65/35$ GeV & $> 50$ GeV \\
$|\eta(j)|$ & $< 4.5$ & $< 4.7$ \\
\ensuremath{m_{\Pj\Pj}} & $>500$ GeV & $> 500$ GeV \\
JRS & $ \ensuremath{\Delta y_{\Pj\Pj}} > 2$ & $\ensuremath{\Delta \eta_{\Pj\Pj}} > 2.5$ \\
\midrule
$\sigma_\mathrm{LO} $ & $2.0^{+0.3}_{-0.2}{\ensuremath\unskip\,\text{fb}}\xspace$ & $3.9 \pm 0.6{\ensuremath\unskip\,\text{fb}}\xspace$ \\
$\sigma_\mathrm{NLO}$ & $3.1^{+0.4}_{-0.5}{\ensuremath\unskip\,\text{fb}}\xspace$ (NLO QCD) & $3.3 \pm 0.5{\ensuremath\unskip\,\text{fb}}\xspace$ (NLO QCD+EW) \\
\bottomrule
\end{tabular} \label{tab:ssWWfr}}
\end{table}
Reconstruction-level selections follow closely the definition of the fiducial regions. In both analyses, events
are selected at the trigger level by the presence of just one electron or muon, in order to increase efficiency.
Both experiments veto events with jets likely originating from a bottom quark, in order to reject backgrounds featuring
top-quark decays. CMS requires that the leading lepton has $p_{\rm T} > 25\ensuremath{\,\text{GeV}}\xspace$, and that $p_{\rm T,miss} > 30$ GeV. In both ATLAS and CMS, background
from wrong charge reconstruction in $\ensuremath{\text{e}}\xspace^\pm \ensuremath{\text{e}}\xspace^\pm jj$ events is reduced by requiring $|m_{\ensuremath{\text{e}}\xspace\Pe} - m_\ensuremath{\text{Z}}\xspace| > 15$ GeV, and in
ATLAS dielectron events with $|\eta(\ensuremath{\text{e}}\xspace)| > 1.4$ are also rejected. In CMS the maximum Zeppenfeld variable $z^*_\ell$ of the
two leptons must be smaller than 0.75, where~\cite{Rainwater:1996ud}:
\begin{equation}
z^*_\ell = \left|\frac{\eta(\ell) - [\eta(\ensuremath{\text{j}}\xspace_1)+\eta(\ensuremath{\text{j}}\xspace_2)]/2}{\ensuremath{\Delta \eta_{\Pj\Pj}}}\right| .
\end{equation}
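As an illustration, the definition above can be coded directly; the following Python sketch (ours, not taken from the CMS software) evaluates $z^*_\ell$ from the lepton and tagging-jet pseudorapidities.

```python
# Illustrative sketch (not from the CMS software): the Zeppenfeld
# variable z*_l of a lepton relative to the two tagging jets.
def zeppenfeld(eta_lep, eta_j1, eta_j2):
    """z*_l = |eta_l - (eta_j1 + eta_j2)/2| / Delta(eta_jj)."""
    delta_eta_jj = abs(eta_j1 - eta_j2)
    return abs(eta_lep - 0.5 * (eta_j1 + eta_j2)) / delta_eta_jj
```

A lepton midway between the jets has $z^*_\ell = 0$, while values near $0.5$ correspond to a lepton aligned with one of the jets; the selection keeps events with $\max(z^*_{\ell_1}, z^*_{\ell_2}) < 0.75$.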
\paragraph{Analysis strategy and background estimation}
Both experiments fit the observed data after estimating backgrounds from either simulation or control
regions. In ATLAS, the contribution from~\emph{non-prompt leptons} is estimated in different control regions, depending
on whether they originate from heavy-flavored meson decays or from $V \gamma$ events with photon conversions (only for
final states with electrons). Such regions are enriched in $\ensuremath{\text{b}}\xspace {\bar \ensuremath{\text{b}}\xspace}$ events and $\gamma$ from final-state radiation
in $\ensuremath{\text{Z}}\xspace$ decays, respectively. CMS uses events which pass the final selection except for one rejected lepton, which
is selected with looser requirements to enter the control region and further validates non-prompt leptons from
heavy-flavor decays by inverting the $\ensuremath{\text{b}}\xspace$-jet veto. $\ensuremath{\text{W}}\xspace\ensuremath{\text{Z}}\xspace$ events are fit together in the CMS analysis, while
in ATLAS the normalization of a control region with $3\ell$ selected events is floated in the fit. Both analyses
use fully selected events with $200 < \ensuremath{m_{\Pj\Pj}} < 500\ensuremath{\,\text{GeV}}\xspace$ to constrain background-component normalizations.
In the ATLAS analysis the signal-region data in four \ensuremath{m_{\Pj\Pj}}\ bins with different signal purities are fit together
with the $3\ell$ and the low-\ensuremath{m_{\Pj\Pj}}\ regions, in order to optimize significance. In CMS, which analyzes a
larger dataset, a similar technique is used, but with two-dimensional distributions in bins of
\ensuremath{m_{\Pj\Pj}}\ and $m_{\ell\ell'}$ in the signal region and three control regions, leaving the normalization
of two of them free in the fit, in addition to the EW and strong cross sections.
The ATLAS and CMS data with superimposed signal and background components are shown in Figure~\ref{fig:ssww}. While
the labelling of the process composition is different, its relative amount is similar between the two analyses,
with CMS exhibiting more non-prompt background because of the softer lepton selections.
\begin{figure*}[t]
\centering
\includegraphics[width=0.4\textwidth]{Figures/CMS-SMP-19-012_Figure_003-a.pdf}
\includegraphics[width=0.5\textwidth]{Figures/fig_02b_ssww.pdf}
\caption{Post-fit \ensuremath{m_{\Pj\Pj}}\ distributions in CMS (left) and ATLAS (right) in the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ analysis. \emph{Nonprompt} in CMS
includes the photon conversion background. In ATLAS, \emph{e/$\gamma$ conversions} includes events with wrong-sign electron reconstruction.}
\label{fig:ssww}
\end{figure*}
\paragraph{Systematic uncertainties}
Both ATLAS and CMS list systematic uncertainties according to their impact on the measured cross sections. In CMS the
dominant uncertainties come from the estimate of the non-prompt background, the limited size of simulated samples
in the two-dimensional distributions, and theoretical errors on the various simulated components. In ATLAS,
similar uncertainties are considered, but a larger impact from jet-energy corrections and the related $p_{\rm T,miss}$
measurement is present. No single contribution has an impact larger than $4\%$ in either analysis.
\paragraph{Results}
ATLAS reports a measured VBS fiducial cross-section of $\sigma_{\mathrm{EW}} = 2.89^{+0.59}_{-0.55}{\ensuremath\unskip\,\text{fb}}\xspace$,
where the total uncertainty is dominated by the statistical one,
in agreement with the NLO QCD estimate of the SM cross section.
It corresponds to a background-only hypothesis rejection with a significance of 6.5$\sigma$.
CMS similarly reports $\sigma_{\mathrm{EW}} = 3.98 \pm 0.45{\ensuremath\unskip\,\text{fb}}\xspace$ , also statistically dominated and in agreement with the NLO QCD+EW
estimation in the respective fiducial region. It corresponds to a background-only hypothesis rejection with a significance much larger than 5$\sigma$. The total \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ cross section
including EW and strong components is also measured to be $\sigma_{\mathrm{tot}} = 4.42 \pm 0.47{\ensuremath\unskip\,\text{fb}}\xspace$.
Differential cross-sections in four bins of \ensuremath{m_{\Pj\Pj}}, $m_{\ell\ell'}$ and the leading lepton $p_{\rm T}$ are obtained by fitting simultaneously
the corresponding regions of the phase space, with negligible bin-migration effects, and when needed replacing the fitted observables
with the ones under study. All the results are in agreement with SM expectations, although the
experimental uncertainties are of the order of $20\%$ because of limited statistics. Figure~\ref{fig:ssww2} (left) shows the EW differential cross sections as a function of $m_{\ell\ell'}$.
\begin{figure*}[t]
\centering
\includegraphics[width=0.47\textwidth]{Figures/CMS-SMP-19-012_Figure_004-c.pdf}
\includegraphics[width=0.42\textwidth]{Figures/CMS-SMP-20-006_Figure_005.pdf}
\caption{CMS \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ analysis. Left: EW differential cross sections as a function of $m_{\ell\ell'}$ measured in data, in LO and NLO-corrected simulation.
Right: likelihood scan as a function of the cross section of $\ensuremath{\text{W}}\xspace_L^\pm \ensuremath{\text{W}}\xspace_L^\pm$ events.}
\label{fig:ssww2}
\end{figure*}
Constraints on aQGC are set by fitting the diboson transverse mass distribution\footnote{The diboson transverse mass is defined as
$m_{\rm T}(\ensuremath{\text{W}}\xspace\PW) = \sqrt{(\Sigma E_i)^2 - (\Sigma p_{z,i})^2}$, where $E_i$ and $p_{z,i}$ are the energies and
$z$ components of the momenta of all particles from the decay of the $\ensuremath{\text{W}}\xspace$ in the event. The four-momentum of
the di-neutrino system is defined using the $p_{\rm T,miss}$ vector and
assuming that the values of the longitudinal component of the momentum and the invariant
mass are zero.} in the signal region: no BSM excess is found and
the limits set on $f_{T,0}$, $f_{T,1}$, $f_{T,2}$, $f_{M,0}$, $f_{M,1}$,
$f_{M,7}$, and $f_{S,0}$ are the world's second-best limits, after the CMS semi-leptonic VBS analysis.
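The transverse-mass definition given in the footnote above can be written compactly. The sketch below is an illustration under the stated convention (the input format, a list of lepton $(E, p_z)$ pairs, is our own choice): the di-neutrino system is assigned $E = |p_{\rm T,miss}|$, $p_z = 0$, and zero mass.

```python
import math

# Illustrative sketch of m_T(WW) as defined in the text: the di-neutrino
# system is assigned E = |p_T,miss|, p_z = 0, m = 0, and the charged
# leptons enter through their energies and longitudinal momenta.
def mt_ww(leptons, met_x, met_y):
    """leptons: iterable of (E, p_z) pairs in GeV; returns m_T(WW) in GeV."""
    e_nunu = math.hypot(met_x, met_y)          # E of the di-neutrino system
    sum_e = e_nunu + sum(e for e, _ in leptons)
    sum_pz = sum(pz for _, pz in leptons)      # di-neutrino p_z is zero
    return math.sqrt(max(sum_e**2 - sum_pz**2, 0.0))
```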
\paragraph{Polarisation measurement}
In a separate analysis, CMS~\cite{CMS:WWpolar} examines the same selected dataset in order to measure
the polarisation of $\ensuremath{\text{W}}\xspace$ bosons in \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ events. The analysis is identical to the previously described CMS
one as far as the simulation and estimation of the backgrounds, event selection,
systematic uncertainties, and fitting techniques are concerned.
Polarised signals for the three possible combinations $\ensuremath{\text{W}}\xspace_{\rm L}^\pm \ensuremath{\text{W}}\xspace_{\rm L}^\pm$, $\ensuremath{\text{W}}\xspace_{\rm L}^\pm \ensuremath{\text{W}}\xspace_{\rm T}^\pm$, and $\ensuremath{\text{W}}\xspace_{\rm T}^\pm \ensuremath{\text{W}}\xspace_{\rm T}^\pm$
are generated using a new version of {\sc\small MG5\_aMC}\xspace~\cite{BuarqueFranzosi:2019boy}. The two-dimensional fits use different
variables than in the original analysis: both are output scores of BDT algorithms, an
\emph{inclusive} one optimized to select EW \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ over backgrounds, and a \emph{signal} BDT defined in two versions, alternatively
optimized to select purely longitudinal or longitudinal-unpolarised ($\ensuremath{\text{W}}\xspace_{\rm L}^\pm \ensuremath{\text{W}}\xspace_X^\pm$) signals over
other polarisation combinations. Training variables, besides those already used in the event selection, include
the transverse mass, $p_{\rm T}$, angular or $\Delta R$ separations between leptons and jets, and the ratio of $p_{\rm T}$ products
between leptons and jets.
The resulting cross-sections of $\sigma_{\mathrm{fid}} = 1.2^{+0.6}_{-0.5}{\ensuremath\unskip\,\text{fb}}\xspace$ for the $\ensuremath{\text{W}}\xspace_{\rm L}^\pm \ensuremath{\text{W}}\xspace_X^\pm$ process and
$\sigma_{\mathrm{fid}} < 1.17{\ensuremath\unskip\,\text{fb}}\xspace$ at the 95\% Confidence Level (CL) for the $\ensuremath{\text{W}}\xspace_{\rm L}^\pm \ensuremath{\text{W}}\xspace_{\rm L}^\pm$ process
are in agreement with the SM. There is not yet evidence even for a single-boson polarisation state, the significance
of the $\ensuremath{\text{W}}\xspace_L^\pm \ensuremath{\text{W}}\xspace_X^\pm$ background-hypothesis rejection being only 2.3$\sigma$.
Figure~\ref{fig:ssww2} (right) shows the likelihood scan as a function of the cross section of $\ensuremath{\text{W}}\xspace_{\rm L}^\pm \ensuremath{\text{W}}\xspace_{\rm L}^\pm$ events.
Results are also extracted by considering polarisation eigenstates not in the default helicity frame, but in the colliding-parton frame.
Results in both frames have been found to be comparable.
\subsubsection{Effects of QCD origin}
\paragraph{NLO QCD corrections}\mbox{}\\
The anatomy of QCD corrections in VBS processes is quite peculiar, and it is dictated by the underlying structure of
the process. A typical $t$-channel VBS diagram at tree level, such as those in the top row of Fig.~\ref{fig:diag}, features two quark
lines that exchange electroweak bosons. Since no color charge is exchanged between the two quarks, QCD corrections tend to factorise, in
the sense that they affect one quark line at a time.
While non-factorisable corrections exist, for example in the case of the scattering of identical quarks,
they are suppressed by color considerations and by kinematics. If one neglects non-VBS diagrams, the situation is completely analogous to Higgs production in vector-boson fusion (VBF), where NLO QCD corrections were
first computed in the factorised approximation using the so-called structure-function approach~\cite{Han:1992hr}. Indeed, also for VBS, the first results including NLO QCD
corrections were obtained discarding non-factorisable corrections~\cite{Jager:2006zc,Jager:2006cp,Bozzi:2007ur,Jager:2009xx}. Within this approximation, NLO QCD corrections to VBS are rather mild, and their exact impact depends
on the cuts employed to define the VBS signal, on the choice of renormalisation and factorisation scales and, of course, on the specific process.
Going beyond this approximation, \emph{i.e.}\ including non-factorisable corrections, entails a major step in computational complexity.
On the one hand, loops with
many (six or more) external legs and high-rank numerators, due to the presence of vector bosons, have to be evaluated.
On the other hand, non-factorisable corrections are in general not Infra-Red (IR)-finite, and hence possibly dependent on the specific IR regulator.
This could be avoided only by considering all contributions of $\mathcal O (\alpha_s \alpha^6)$, including those which classify as EW corrections to the LO QCD-EW interference of VVjj production, as depicted in Fig.~\ref{fig:orders} (in that figure, the NLO QCD corrections to VBS signal correspond to the $\mathcal O (\alpha_s \alpha^6)$ contribution). Only recently, with advanced techniques having paved
the way to the automation of EW and mixed QCD-EW corrections, non-factorisable contributions have been included in the NLO QCD corrections
to VBS~\cite{Biedermann:2017bss,Denner:2019tmn,Denner:2020zit}. When they are compared to
the approximation that assumes factorisation, the impact of non-factorisable QCD corrections is found to be small in typical VBS phase spaces~\cite{Ballestrero:2018anz}, exceeding
$5\%$ only in more inclusive phase spaces. This can be seen in Fig.~\ref{fig:mjjnlo} which shows two differential distributions in the invariant mass of the two tagging jets.
The left-hand side shows a comparison of NLO predictions in an inclusive phase space,
while the right-hand one is in a more exclusive phase space, typical of experimental analyses.
The full predictions which include non-factorisable corrections
are denoted by either \emph{full} or \textsc{MoCaNLO}+\textsc{Recola}.
The differences among other predictions are due either to the inclusion or not of non-VBS contributions, or to the details of the definition
of non-factorisable corrections, or both.
Once NLO QCD corrections are included, theoretical uncertainties estimated through the variation of the renormalisation
and factorisation scales are of the order of a few per cent for NLO-accurate observables. In the case of \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ production,
this can be observed in Fig.~\ref{fig:mjjnlo} for the \ensuremath{m_{\Pj\Pj}}\ observable,
where the theory uncertainty is shown as a blue band around the {\textsc{MoCaNLO}+\textsc{Recola}} prediction.
The scale uncertainty is obtained through 7-fold scale variation (see Sec.~\ref{sec:mcsim}).
The inclusion of non-factorisable corrections does not significantly impact the size of the scale uncertainty.
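A minimal sketch of the 7-point prescription mentioned above, assuming a generic function $\sigma(\mu_{\rm R}, \mu_{\rm F})$ is available; the toy scale dependence used here is invented for demonstration and is not a physical cross section.

```python
import math

# 7-point scale variation: mu_R and mu_F are varied by factors of 2 around
# the central scale mu0, dropping the opposite-direction pairs (2, 1/2) and
# (1/2, 2). The envelope of the results is quoted as the scale uncertainty.
def seven_point_envelope(sigma, mu0):
    factors = [(1.0, 1.0), (2.0, 1.0), (0.5, 1.0),
               (1.0, 2.0), (1.0, 0.5), (2.0, 2.0), (0.5, 0.5)]
    values = [sigma(kr * mu0, kf * mu0) for kr, kf in factors]
    central = values[0]
    return central, min(values) - central, max(values) - central

# Toy (unphysical) scale dependence, for demonstration only:
toy = lambda mur, muf: 3.0 - 0.2 * math.log(mur / 100.0) - 0.1 * math.log(muf / 100.0)
```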
\begin{figure*}[t]
\centering
\includegraphics[width=0.45\linewidth,height=0.35\textheight,clip=true,trim={0.cm -2cm 1.cm 0.1cm}]{Figures/mjj_nlo_scan}
\includegraphics[width=0.45\linewidth,height=0.35\textheight,clip=true,trim={0.4cm 2.15cm 0.cm 0.7cm}]{Figures/mjj_NLO_comparison}
%
\caption{Differential distributions in the invariant mass of the two tagging jets at the NLO (fixed order).
Various theoretical predictions with different approximations are here compared in an inclusive set-up and a more exclusive one (as in the experimental analysis).
The best prediction is denoted by either \emph{full} or \textsc{MoCaNLO}+\textsc{Recola}.
These figures are taken from Ref.~\cite{Ballestrero:2018anz}.}
\label{fig:mjjnlo}
\end{figure*}
\paragraph{Beyond NLO-QCD: NNLO and parton-shower effects}\mbox{}\\
For what concerns QCD effects beyond NLO, it is a fair statement that NNLO QCD corrections, even in the factorised approximation, are extremely challenging to compute,
and will likely not be available on a short-term timescale. Indeed, at variance with the case of single- or even double-Higgs production in
VBF, where corrections up to NNNLO in QCD have been computed within a factorised approach~\cite{Bolzoni:2010xr,Bolzoni:2011cu,Cacciari:2015jma,Dreyer:2016oyx,Dreyer:2018qbw,Dreyer:2018rfu,Dreyer:2020xaj}, VBS processes
present more complex topologies, since the outgoing vector bosons can couple to the quark lines.\footnote{Some recent achievements towards the computation
of non-factorisable corrections to Higgs production in VBF~\cite{Liu:2019tuy,Dreyer:2020urf} are worth citing, as they may also be relevant for VBS.}
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\linewidth]{Figures/amc-yj3-2jw3.pdf}
\includegraphics[width=0.48\linewidth]{Figures/amc-yj3-3jw2.pdf}
%
\caption{The rapidity of the third jet in Higgs production via VBF, obtained using {\sc\small MG5\_aMC}\xspace + {\sc Pythia8} or {\sc Herwig7}. The left panel shows
predictions for the production of a Higgs boson plus two jets at NLO+PS matched with {\sc Pythia8} (blue) or {\sc Herwig7} (red),
including renormalisation and factorisation scale uncertainties (blue band) as well as variations of the shower starting scale (dashed lines),
together with the prediction for Higgs plus three jets at NLO+PS matched with {\sc Herwig7} (orange). The right panel shows
predictions for the Higgs boson plus three jets at NLO, matched with {\sc Herwig7} (orange) or {\sc Pythia8} (green),
together with the prediction
for Higgs plus two jets at NLO+PS matched with {\sc Herwig7} (red solid).
These figures are taken from Ref.~\cite{Jager:2020hkz}.}
\label{fig:VBFj3}
\end{figure*}
However, for almost all processes, NLO QCD corrections have been matched to parton-showers (PS) within different matching
schemes~\cite{Jager:2011ms, Jager:2013mu, Jager:2013iza, Ballestrero:2018anz, Jager:2018cyo}, and this kind of process is now within the reach of automated frameworks. In general, when compared
with NLO predictions at fixed order, NLO+PS results show quite small (10-15\%) effects on the shape and normalisation of distributions. The spread of predictions
obtained with different matching schemes and/or different PS programs is also found to be at the 10\% level, and it has been investigated in detail for \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ production
in Ref.~\cite{Ballestrero:2018anz}.
A relevant exception, worth discussing, is related to the modelling of the third-jet kinematics. It has been observed that, for
predictions matched with {\sc Pythia8}~\cite{Sjostrand:2007gs,Sjostrand:2014zea}, the global-recoil scheme leads to a large unphysical enhancement of the third-jet activity in the mid-rapidity region, related
to a wrong assignment of the phase-space boundaries for processes with initial-final color connections. Such an enhancement, absent in predictions obtained with
other parton showers such as {\sc Herwig7}~\cite{Bahr:2008pv,Bellm:2015jjp,Bellm:2019zci}, is observed both with the {\sc Powheg}~\cite{Nason:2004rx, Frixione:2007vw} and
the {\sc MC@NLO}~\cite{Frixione:2002ik}
matching schemes, although it is larger with the latter, owing to the fact that {\sc Powheg} generates the first emission with
an internal Sudakov factor (and thus shower effects only enter from the second emission on). This effect is discussed in detail for
same-sign $\ensuremath{\text{W}}\xspace\PW$ production in Ref.~\cite{Ballestrero:2018anz}, but it is in fact a general issue affecting processes with VBF/VBS-type topologies. Indeed, a similar enhancement has been observed also
in the measurement of electroweak single-$\ensuremath{\text{Z}}\xspace$ production~\cite{Sirunyan:2017jej}, and for Higgs production in VBF~\cite{Jager:2020hkz}.
While the unphysical enhancement disappears when a new recoil scheme, developed for Deep-Inelastic-Scattering processes (dipole recoil~\cite{Cabouat:2017rzi}),
is employed, the way Monte Carlo counterterms are currently implemented \emph{e.g.}\ in {\sc\small MG5\_aMC}\xspace\ prevents the user from employing a recoil scheme different from
the global recoil.\footnote{A new implementation of the Monte Carlo counterterms has been recently presented in Ref.~\cite{Frederix:2020trv}, and future
developments allowing a more flexible choice of shower parameters are in progress.} This does not apply to a {\sc Powheg}-type matching, where the user can instead
change the shower parameters more freely.
In Ref.~\cite{Jager:2020hkz}
it has been shown that, even within the global-recoil scheme, this effect disappears when a NLO-accurate description of the third jet at the matrix-element level is employed, as can be observed in
Fig.~\ref{fig:VBFj3}. This is a further demonstration that the central-rapidity enhancement observed for predictions matched with {\sc Pythia8} is unphysical and, as such, it should not
be considered as an uncertainty source for the third-jet description. Given the similarities
between VBS and VBF from the QCD point of view, these conclusions can be extended from the latter to the former.
They could also be verified explicitly using an NLO prediction for VBS with three jets, which is not available at the moment but should
not be beyond the reach of modern event generators and matrix-element providers.
\paragraph{PDF uncertainties}\mbox{}\\
Accounting for all sources of uncertainty stemming from QCD requires also the inclusion of those coming from parton distribution functions (PDFs).
At LO, only quarks appear in the initial state of VBS processes,
regardless of the specific final state. Within typical VBS cuts, they mostly feature intermediate values
of the Bjorken $x$'s and scales $Q=\mathcal O (100)\ensuremath{\,\text{GeV}}\xspace$.
From Fig.~\ref{fig:vbsx12} one can appreciate that
the bulk of the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ cross-section comes from $x\simeq0.2$, a region where quark densities, especially valence ones,
are quite well constrained nowadays, with uncertainties below $5\%$~\cite{Butterworth:2015oua}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.58\linewidth]{Figures/VBS_x1x2_heatmap.pdf}
%
\caption{The Bjorken $x$ regions contributing to same-sign $\ensuremath{\text{W}}\xspace\PW$ production via VBS, at the $13\ensuremath{\,\text{TeV}}\xspace$ LHC.
The colour coding is as follows: black corresponds to the maximum, white to the minimum.
The set-up used for this plot is the same as in Ref.~\cite{Ballestrero:2018anz}.}
\label{fig:vbsx12}
\end{figure*}
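The mapping between the kinematics of the produced system and the probed momentum fractions follows from standard LO collider kinematics (a generic textbook relation, not specific to the set-up of the figure): a system of invariant mass $m$ and rapidity $y$ is produced from partons with $x_{1,2} = (m/\sqrt{s})\,e^{\pm y}$.

```python
import math

# Standard LO relation: momentum fractions of the initial-state partons
# producing a system of invariant mass m (GeV) and rapidity y at a
# collider of centre-of-mass energy sqrt_s (GeV).
def bjorken_x(m, y, sqrt_s=13000.0):
    tau = m / sqrt_s
    return tau * math.exp(y), tau * math.exp(-y)
```

For instance, a central ($y=0$) system of mass $2.6\ensuremath{\,\text{TeV}}\xspace$ at the $13\ensuremath{\,\text{TeV}}\xspace$ LHC corresponds to $x_1 = x_2 = 0.2$.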
As gluons only enter at NLO, they give a subleading contribution to the
cross section, considering the small size of the NLO corrections discussed above. The produced final state,
in particular the charges of the gauge bosons, affects the combination of flavours that can initiate the process: positively charged final states, such as $\ensuremath{\text{W}}\xspace^+\ensuremath{\text{W}}\xspace^+\ensuremath{\text{j}}\xspace\Pj$ production,
are mostly sensitive to valence quarks, while for neutral or negatively charged ones the contribution of sea quarks becomes more important. Hence, PDF uncertainties are
expected to be quite process-specific.
If we consider again, as an example, the case of same-sign $\ensuremath{\text{W}}\xspace$ boson production, and
specifically the rapidity separation of the two jets, one can
appreciate from Fig.~\ref{fig:dyjj_pdfunc} that the PDF uncertainties, evaluated with the PDF4LHC15 set~\cite{Butterworth:2015oua},
are at the level of $\pm 2\%$ for a large part of the range, up to $4\%$ only for extreme separations. For the same observable,
uncertainties due to $\alpha_s$ are (as expected) totally negligible, below $1\%$ across almost the whole considered range.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\linewidth]{Figures/dyjj_pdf.pdf}
\includegraphics[width=0.48\linewidth]{Figures/dyjj_alphas.pdf}
%
\caption{PDF (left) and $\alpha_s$ (right) uncertainty for the rapidity separation of the two tagging jets,
evaluated with the PDF4LHC15 set. These figures are taken from Ref.~\cite{Bellan:2019xpr}.}
\label{fig:dyjj_pdfunc}
\end{figure*}
As mentioned above, these numbers are expected to be rather process specific
and are calculated in experimental analyses targeting the corresponding final states. In order to have an idea of how the charge of the final state can
affect their size, one can consider the case of charged-Higgs production via VBF,
for which these studies are available~\cite{Zaro:2010fc,Zaro:2015ika,deFlorian:2016spz}. In this case, the Higgs mass plays the
role of the invariant mass of the vector-boson pair. Estimations in Ref.~\cite{Zaro:2015ika} show that PDF uncertainties never exceed a few percent,
being smaller for lighter final states and when valence-quark contributions are mostly probed (Fig.~\ref{fig:dyjj_pdfunc}).
\subsubsection{Effects of Electroweak origin}
Given the magnitude of the strong and electroweak couplings, for typical LHC processes NLO EW corrections are generally of the order of NNLO QCD corrections, that is a few percent.
This power-counting argument is usually valid at the level of the total cross sections, but the situation is rather different when considering differential distributions, as EW and QCD corrections exhibit a rather different behaviour and are relevant in different phase-space regions.
In general, they become negative and large (typically several $10\%$) in the high-energy limit because of Sudakov logarithms~\cite{Denner:2019vbn}.
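Schematically, the leading one-loop Sudakov contribution to a partonic cross section at an energy scale $\sqrt{s}\gg M_\ensuremath{\text{W}}\xspace$ takes the form
\[
\delta_{\rm EW} \sim -\frac{\alpha}{4\pi}\, C^{\rm ew}\, \log^2\frac{s}{M_\ensuremath{\text{W}}\xspace^2} \;+\; \mathcal{O}\!\left(\log\frac{s}{M_\ensuremath{\text{W}}\xspace^2}\right),
\]
where $C^{\rm ew}$ is a process-dependent combination of the EW Casimir operators of the external particles. The negative double logarithm is what drives the corrections to large negative values in the tails of distributions.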
In the case of VBS, the global picture is quite different, namely the
NLO EW corrections are large relative to QCD corrections of the same order.
As shown in Ref.~\cite{Biedermann:2016yds}, large EW corrections are an
intrinsic feature of VBS at the LHC. At the level of the total cross section, they can be of the order of
$-20\%$ and reach up to $-40\%$ in tails of differential distributions.
Their origin can be attributed to the $t$-channel exchange of massive vector bosons, which enhances the typical scale of the process~\cite{Denner:1997kq},
as well as to the fact that the EW Casimir operators are larger for bosons than for fermions~\cite{Denner:2000jv}.
For same-sign WW scattering, where all NLO corrections are known, the EW corrections to the VBS process, of order $\mathcal{O}(\alpha^7)$,
are the largest corrections~\cite{Biedermann:2017bss}. Such a pattern has been confirmed for the $\ensuremath{\text{W}}\xspace\ensuremath{\text{Z}}\xspace$~\cite{Denner:2019tmn} and $\ensuremath{\text{Z}}\xspace\PZ$~\cite{Denner:2020zit} signatures.
It is also worth mentioning that such EW corrections are largely independent of the charge of the final state as shown in Ref.~\cite{Chiesa:2019ulk} for \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}.
Moreover, the leading-logarithmic approximations derived in Refs.~\cite{Biedermann:2016yds,Denner:2019tmn,Denner:2020zit} are rather universal, owing to the identical $\rm SU(2)_w$ couplings occurring in all the scattering processes.
In Fig.~\ref{fig:vbsew}, the differential distribution in the rapidity of the two tagging jets is displayed.
In the lower plot, the yellow band represents the expected statistical experimental uncertainty in each bin for the high-luminosity LHC collecting $3000{\ensuremath\unskip\,\text{fb}}\xspace^{-1}$.
Given their magnitude, one can thus expect that high-luminosity measurements will be sensitive to such EW corrections.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\linewidth,height=0.35\textheight,clip=true,trim={2.3cm 0.8cm 0.cm 0.cm}]{Figures/histogram_rapidity_j1j2}
\includegraphics[width=0.48\linewidth,height=0.35\textheight,clip=true,trim={2.3cm -0.2cm 0.cm 0.cm}]{Figures/histogram_invariant_mass_mjj12}
%
\caption{
NLO EW corrections to same-sign WW scattering for the rapidity distribution of the two tagging jets (left).
The yellow band in the lower plot represents the expected statistical experimental uncertainty at $3000{\ensuremath\unskip\,\text{fb}}\xspace^{-1}$.
Full NLO corrections to same-sign WW scattering for the invariant mass of the two tagging jets (right).
The left figure is taken from Ref.~\cite{Biedermann:2016yds} while the right one is taken from Ref.~\cite{Biedermann:2017bss}.}
\label{fig:vbsew}
\end{figure*}
Another source of EW effects is the inclusion of the photon PDF.
The determination of the photon PDF witnessed a complete change of paradigm in 2016, when the LUXqed methodology was introduced~\cite{Manohar:2016nzj,Manohar:2017eqh}, which provides a more robust, first-principles determination of the photonic density. Thanks
to these works, the photon density can now be constrained at a level comparable to that of the strongly-interacting partons. This was made possible
by relating photon-induced processes to their non-photon-induced counterparts, and using this relation to extract the photon density. Before 2016, the only available approaches
either relied on some a-priori parametrization of the photon density~\cite{Martin:2004dh,Schmidt:2015zda}, or left it completely free to be fitted~\cite{Ball:2013hta}.
In the first case, this made a quantification of the theoretical uncertainties very difficult, if not impossible; in the second, it led to huge uncertainties, often
of the order of $100\%$, on the impact of the photon density.
The LUXqed methodology is now employed by all major PDF providers, such as NNPDF~\cite{Bertone:2017bme} and MMHT~\cite{Harland-Lang:2019pla} as well as the PDF4LHC working group~\cite{Butterworth:2015oua}.
For example, in same-sign WW scattering \cite{Biedermann:2017bss}, the photon-induced contributions are of the order of $2.7\%$ with NNPDF-3.0 QED \cite{Ball:2013hta}
while they go down to $1.5\%$ when using the LUXqed\_plus\_PDF4LHC15\_nnlo\_100 set.
Due to charge conservation, LO photon-induced contributions are present for the $\ensuremath{\text{W}}\xspace\ensuremath{\text{Z}}\xspace$, $\ensuremath{\text{Z}}\xspace\PZ$, and $\ensuremath{\text{W}}\xspace^+\ensuremath{\text{W}}\xspace^-$ channels as well.
They involve one or two initial-state photons and contribute at the orders $\mathcal{O}(\alpha^6)$ and $\mathcal{O}(\ensuremath{\alpha_\text{s}}\xspace\alpha^5)$.
They amount to about $0.4\%$ with respect to the LO of order $\mathcal{O}(\alpha^6)$ for $\ensuremath{\text{W}}\xspace\ensuremath{\text{Z}}\xspace$~\cite{Denner:2019tmn}.
When referring to NLO EW corrections, it was so far implied that only real-photon radiation is included.
The radiation of heavy gauge bosons occurs at the same perturbative order and can in principle also be accounted for.
To date, this effect is relatively unexplored in the context of VBS, while studies exist for other processes~\cite{Frixione:2014qaa,Frixione:2015zaa,Czakon:2017wor}.
In Ref.~\cite{Azzi:2019yne}, the related correction has been estimated to be of the order of a few percent for the High-Luminosity LHC at the level of the total cross section.
Finally, for signatures other than same-sign WW scattering, there also exists a photon-to-jet conversion function,
which is necessary to cancel IR divergences associated with photons splitting into a quark-antiquark pair~\cite{Denner:2019zfp}.
While it ensures a proper treatment of this non-perturbative contribution,
its numerical impact is rather small and has been evaluated to be of the order of $0.01\%$ for $\ensuremath{\text{W}}\xspace\ensuremath{\text{Z}}\xspace$~\cite{Denner:2019tmn}.
\section{Introduction}
\label{sec:intro}
\input{intro}
\section{General aspects of vector-boson scattering at the LHC}
\label{sec:vbs}
\input{introduction}
\subsection{Definition of vector-boson scattering}
\label{sec:sob}
\input{definition}
\subsection{Polarised vector-boson scattering}
\label{sec:polar}
\input{polarisation}
\subsection{Theoretical predictions}
\label{sec:theory}
\input{theory}
\subsection{Experimental techniques}
\label{sec:experim}
\input{experim}
\subsection{Impact on Beyond-the-Standard-Model theories}
\label{sec:bsm}
\input{bsm}
\section{Vector-boson scattering processes at the LHC}
\label{sec:processes}
\subsection{The $\ensuremath{\PW^\pm \PW^\pm \Pj\Pj} \rightarrow \ell^\pm \nu \ell'^\pm \nu \ensuremath{\text{j}}\xspace\Pj$ final state}\label{sec:ssww}
\input{ssww}
\subsection{The $\ensuremath{\PW^\pm \PZ \Pj\Pj} \rightarrow \ell^+ \ell^- \ell'^\pm \nu \ensuremath{\text{j}}\xspace\Pj$ final state}
\label{sec:wz}
\input{wz}
\subsection{The $\ensuremath{\PZ\PZ\Pj\Pj} \rightarrow \ell^+ \ell^- \ell'^+ \ell'^- \ensuremath{\text{j}}\xspace\Pj$ and $\rightarrow \ell^+ \ell^- \nu {\bar \nu} \ensuremath{\text{j}}\xspace\Pj$ final states}
\label{sec:zz}
\input{zz}
\subsection{The $\ensuremath{\PW^+ \PW^- \Pj\Pj} \rightarrow \ell^\pm \nu \ell'^\mp \nu \ensuremath{\text{j}}\xspace\Pj$ final state}
\label{sec:osww}
\input{osww}
\subsection{The semileptonic final states}
\label{sec:semilep}
\input{semilep}
\subsection{The $\ensuremath{\PW^\pm \gamma \Pj\Pj}$ and $\ensuremath{\PZ \gamma \Pj\Pj}$ final states}
\label{sec:wzgamma}
\input{wzgamma}
\subsection{The exclusive $\ensuremath{\gamma \gamma \rightarrow \PW^+\PW^-}$ final state}
\label{sec:ggww}
\input{ggww}
\section{Prospects for High-Luminosity and High-Energy LHC scenarios}
\label{sec:perspectives}
\input{perspectives}
\section{Conclusion}
\label{sec:conclusion}
\input{conclusion}
\section*{Acknowledgements}
We are grateful to the ``VBSCan'' COST action network CA16108 for offering a stimulating and dynamic atmosphere over the past few years.
Without this action, this review would probably not have seen birth. Also, we are deeply indebted to our colleagues and collaborators for the numerous discussions on the topic of VBS.
In particular, we would like to thank Pietro Govoni for his commitment in the VBSCan action as well as for feedback on the present manuscript.
RC acknowledges support from the Italian and Serbian Ministries for Foreign Affairs through the Researcher Mobility Program RS19MO06.
MP acknowledges support from the German Research Foundation (DFG) through the Research Training Group RTG2044.
MZ is supported by the ``Programma per Giovani Ricercatori Rita Levi Montalcini'' granted by the Italian Ministero dell'Universit\`a e della Ricerca (MUR).
\bibliographystyle{utphys.bst}
\subsubsection{Theoretical calculations}
With respect to the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ channel, \ensuremath{\PW^\pm \PZ \Pj\Pj}\ is more complicated as it has more partonic channels and more involved Feynman diagrams.
Therefore, the state of the art of the theoretical predictions is not as advanced as for \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}.
Nonetheless, QCD corrections have been known for some time in the VBS approximation \cite{Bozzi:2007ur} and have also been matched to parton shower \cite{Jager:2018cyo}.
The QCD background has also been known at NLO QCD accuracy for some time \cite{Campanario:2013qba}, and NLO predictions matched to parton shower can nowadays be obtained from automated tools.
Recently the full NLO computations for the orders $\mathcal{O}\left(\alpha^7 \right)$ and $\mathcal{O}\left(\alpha_{\rm s} \alpha^6 \right)$ have been obtained \cite{Denner:2019tmn}.
It is worth noting that this computation confirmed that EW corrections are in general large for VBS processes at the LHC.
Such results can for example be seen in the left-hand side of Fig.~\ref{fig:WZTH}.
Along the lines of Ref.~\cite{Ballestrero:2018anz}, Ref.~\cite{Jager:2018cyo} showed that the details of the parton shower can be particularly relevant for phenomenological studies.
In addition, the authors showed that the effect of hadronisation and multiple-parton interaction can be significant, especially for observables involving the third jet.
This is exemplified in the right hand-side of Fig.~\ref{fig:WZTH}.
Finally, in the proceedings of Les Houches 2017 \cite{Bendavid:2018nar}, a tuned comparison of theoretical predictions up to LO+PS accuracy has been performed.
It highlights the same features as the NLO comparative study of \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ \cite{Ballestrero:2018anz}.
In particular, all differential distributions are in very good agreement at LO and LO+PS accuracy,
the only exception being, as in Ref.~\cite{Ballestrero:2018anz}, observables involving the third jet.
It is worth emphasising that the {\sc Sherpa}\xspace predictions of Ref.~\cite{Bendavid:2018nar} are LO or LO+PS predictions and
are thus different from the {\sc Sherpa}\xspace predictions usually adopted by the ATLAS collaboration, for example in Ref.~\cite{ATLAS:2019hoc}. The latter are merged predictions with 2 and 3 jets matched to PS, and they differ significantly from the other predictions.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\linewidth,height=0.35\textheight,clip=true,trim={0.3cm 0.cm 0.cm 0.cm},page=14]{Figures/nlos_wz}
\includegraphics[width=0.48\linewidth,height=0.34\textheight,clip=true,trim={0.cm 0.cm 0.cm 0.cm}]{Figures/pythia8-cms-Yj3}
%
\caption{Various differential distributions for the WZ channel.
Left: invariant mass of the two tagging jets with NLO EW + QCD corrections.
Right: Zeppenfeld variable for the third jet at NLO QCD + PS accuracy including hadronisation or multiple-parton interaction.
These figures are taken from Ref.~\cite{Denner:2019tmn} and Ref.~\cite{Jager:2018cyo}, respectively.}
\label{fig:WZTH}
\end{figure*}
A summary of the available predictions is provided in Table~\ref{tab:WZTH}.
It is perfectly reasonable to expect that the accuracy obtained for the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ channel can also be achieved in the next few years for the \ensuremath{\PW^\pm \PZ \Pj\Pj}\ channel.
\begin{table}[htb]
\caption{Summary of higher-order predictions currently available for the WZ channel: at fixed order and matched to parton shower.
The symbols {\bf \color{green} \checkmark}, {\bf \color{green} \checkmark$^*$}, and {\bf \color{red} X}
indicate that the corresponding predictions are available, available only in the VBS approximation, or not yet available, respectively.}
\center
{\begin{tabular}{l|cccc}
Order & $\mathcal{O}\left(\alpha^7 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}} {\alpha}^6 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}}^2 {\alpha}^5 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}}^3 {\alpha}^4 \right)$\\
\hline
NLO & {\bf \color{green} \checkmark} & {\bf \color{green} \checkmark} & {\bf \color{red} X} & {\bf \color{green} \checkmark} \\
NLO+PS & {\bf \color{red} X} & {\bf \color{green} \checkmark$^*$} & {\bf \color{red} X} & {\bf \color{green} \checkmark}
\end{tabular} \label{tab:WZTH}}
\end{table}
\subsubsection{Experimental approaches}
ATLAS has reported observation of electroweak \ensuremath{\PW^\pm \PZ \Pj\Pj}\ production using a partial Run-2 data set~\cite{ATLAS:WZ}.
CMS performed the same search on the full data set, in combination with the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ final state~\cite{CMS:ssWWandWZ}, also leading to an observation of this process.
\paragraph{Monte Carlo simulation}
Since the CMS study combines the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ and \ensuremath{\PW^\pm \PZ \Pj\Pj}\ final states, the simulation uses the same settings as in the previous section.
The EW-QCD interference is positive and estimated to be $1\%$ of the EW signal in the fiducial region.
The QCD-induced $\ensuremath{\text{W}}\xspace^\pm \ensuremath{\text{Z}}\xspace+{\rm jets}$ background is simulated at LO with up to 3 additional partons using {\sc\small MG5\_aMC}\xspace,
merging the jet multiplicities according to the MLM scheme~\cite{Alwall:2007fs}, and normalizing the total cross-section to diboson NNLO QCD predictions~\cite{Grazzini:2017ckn}\footnote{In the cases where QCD background is the dominant component, it is a common experimental practice to simulate all parton multiplicities starting from zero, even if only 2-jet events should in principle pass selections, provided the PS-matching scale is lower than the jet $p_{\rm T}$ thresholds. This ensures that events with non-signal (fake or pileup) jets are correctly taken into account in the simulation}.
ATLAS uses {\sc Sherpa}\xspace, version 2.2.2, to simulate the electroweak process at LO, inclusive in the number of jets, while the strong process is simulated at NLO QCD with up to one extra jet.
The interference is estimated separately with {\sc\small MG5\_aMC}\xspace at LO to be about $10\%$ of the EW signal, in noticeable disagreement with the CMS estimation.
Nevertheless, its full amount is conservatively used as an uncertainty.
Alternative descriptions of the EW and strong processes are obtained using {\sc\small MG5\_aMC}\xspace at LO or {\sc POWHEG}\xspace at NLO in QCD~\cite{Melia:2011tj}.
Other backgrounds, such as $\ensuremath{\text{Z}}\xspace\PZ$, tribosons, $\ensuremath{\text{t}}\xspace{\bar \ensuremath{\text{t}}\xspace} \ensuremath{\text{W}}\xspace^\pm$, and $\ensuremath{\text{t}}\xspace{\bar \ensuremath{\text{t}}\xspace} \ensuremath{\text{Z}}\xspace$ are generated using different tools, QCD accuracies, and extra-parton multiplicities.
\paragraph{Fiducial region definitions and reconstruction-level selections}
Fiducial regions considered in the ATLAS\footnote{All ATLAS cross sections are reported for a single lepton flavor and are therefore multiplied by four to compare to CMS.} and CMS analyses are compared in Table~\ref{tab:WZfr}.
The same assumptions as in \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ are used for $\tau$ decays and lepton ``dressing''.
\begin{table}[htb]
\caption{Comparison of \ensuremath{\PW^\pm \PZ \Pj\Pj}\ fiducial region definitions and related EW (VBS) cross-section values in the ATLAS and CMS measurements~\cite{ATLAS:WZ,CMS:ssWWandWZ}.
\emph{JRS} stands for generic Jet-Rapidity Separation selections.}
\center
{\begin{tabular}{@{}ccc@{}} \toprule
Variable & ATLAS & CMS \\
\midrule
$p_{\rm T}(\ell)$ & $> 20/15/15$ GeV & $> 20$ GeV \\
$|\eta(\ell)|$ & $< 2.5$ & $< 2.5$ \\
$\Delta R(\ell\ell')$ & $> 0.3/0.2$ & - \\
$m_{\ell\ell}$ & $[81,101]$ GeV & $[76,106]$ GeV \\
$m_{\rm T}(\ensuremath{\text{W}}\xspace^\pm)$ & $> 30$ GeV & - \\
$p_{\rm T}(\ensuremath{\text{j}}\xspace)$ & $> 40$ GeV & $> 50$ GeV \\
$|\eta(\ensuremath{\text{j}}\xspace)|$ & $< 4.5$ & $< 4.7$ \\
\ensuremath{m_{\Pj\Pj}} & $>500$ GeV & $> 500$ GeV \\
JRS & $ \eta(\ensuremath{\text{j}}\xspace_1)\cdot\eta(\ensuremath{\text{j}}\xspace_2) < 0$ & $\ensuremath{\Delta \eta_{\Pj\Pj}} > 2.5$ \\
\midrule
$\sigma$ LO & $1.28 \pm 0.12$ fb & $1.41 \pm 0.21$ fb \\
$\sigma$ NLO QCD+EW & - & $1.24 \pm 0.18{\ensuremath\unskip\,\text{fb}}\xspace$ \\
\bottomrule
\end{tabular} \label{tab:WZfr}}
\end{table}
Reconstruction-level selections follow the fiducial-region requirements. Both analyses use
single-lepton triggers and apply a $\ensuremath{\text{b}}\xspace$-jet veto.
In ATLAS (CMS) the leading lepton is required to have $p_{\rm T} > 27 (25) \ensuremath{\,\text{GeV}}\xspace$, and both experiments require $p_{\rm T,miss} > 30\ensuremath{\,\text{GeV}}\xspace$.
In ATLAS 4$\ell$ events are explicitly vetoed, while CMS requires that $m_{\ell\ell\ell'} > 100\ensuremath{\,\text{GeV}}\xspace$.
In CMS the maximum $z^*_\ell$ of the three leptons must be smaller than 1.
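The Zeppenfeld variable used here follows a common convention (normalisation conventions differ between analyses), defined for an object $x$ relative to the two tagging jets as
\[
z^*_x \;=\; \frac{y_x - \frac{1}{2}\left(y_{\ensuremath{\text{j}}\xspace_1}+y_{\ensuremath{\text{j}}\xspace_2}\right)}{\left|y_{\ensuremath{\text{j}}\xspace_1}-y_{\ensuremath{\text{j}}\xspace_2}\right|}\,,
\]
so that small values select objects produced centrally between the two tagging jets, as expected for the VBS signal.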
\paragraph{Analysis strategy and background estimation}
Both ATLAS and CMS use BDT algorithms to isolate the EW signal over the large QCD background, combining 11 to 12
variables related to jet kinematics, vector-boson kinematics, or to both jet and lepton kinematics
at the same time. It is worth noting that variables such as the $\ensuremath{\text{W}}\xspace^\pm$ rapidity or $m_{\rm T}(\ensuremath{\text{W}}\xspace\ensuremath{\text{Z}}\xspace)$ are computed
with much looser assumptions than in the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ analysis, since only one neutrino is missing in this case
and its longitudinal momentum can be inferred from a $\ensuremath{\text{W}}\xspace$-mass constraint on the $\ell$-neutrino pair ($\ell$
being the lepton not associated with the $\ensuremath{\text{Z}}\xspace$ decay).
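As an illustration of this $\ensuremath{\text{W}}\xspace$-mass constraint, the following sketch (an illustrative helper, not the experiments' actual code) solves the resulting quadratic equation for the longitudinal neutrino momentum, taking the solution with smaller $|p_z|$ and the real part when the discriminant is negative:

```python
import math

MW = 80.379  # GeV, W-boson mass imposed in the constraint

def neutrino_pz(lep, met):
    """Longitudinal neutrino momentum from an on-shell W-mass constraint.

    lep: (px, py, pz, E) of the charged lepton, treated as massless
    met: (px, py) missing transverse momentum, identified with the neutrino pT
    Returns the quadratic solution with smaller |pz| (a common convention);
    for a negative discriminant the real part is returned.
    """
    lpx, lpy, lpz, le = lep
    npx, npy = met
    pt2 = lpx**2 + lpy**2
    # From MW^2 = 2 (E_l E_nu - p_l . p_nu) with massless lepton and neutrino:
    mu = 0.5 * MW**2 + lpx * npx + lpy * npy
    a = mu * lpz / pt2
    disc = a**2 - (le**2 * (npx**2 + npy**2) - mu**2) / pt2
    if disc < 0.0:
        return a  # complex solutions (resolution effects): keep the real part
    root = math.sqrt(disc)
    return a - root if abs(a - root) < abs(a + root) else a + root
```

The two-fold ambiguity, and the negative-discriminant case arising from resolution effects, are the main caveats of this reconstruction.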
Background contributions from other SM processes or from non-prompt leptons are sub-dominant
and are estimated in a similar way as for the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ analyses.
The ATLAS and CMS data with superimposed signal and background components are shown in Figure~\ref{fig:wzjj}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.5\linewidth,height=0.35\textheight,clip=true,trim={1.5cm 0.cm 0.cm 0.cm}]{Figures/CMS-SMP-19-012_Figure_003-d.pdf}
\includegraphics[width=0.48\linewidth,height=0.35\textheight,clip=true,trim={0.2cm 0.cm 0.cm 0.cm}]{Figures/fig_01d_wz.pdf}
\caption{Post-fit BDT-score distributions in CMS (left) and ATLAS (right) in the \ensuremath{\PW^\pm \PZ \Pj\Pj}\ analysis. Process labeling
is consistent, with the exception of events containing top quarks and vector bosons, which are merged in one
category in the CMS plot.}
\label{fig:wzjj}
\end{figure*}
\paragraph{Systematic uncertainties}
In both ATLAS and CMS, two of the dominant uncertainties are the limited size of the simulated samples and the theory errors on \ensuremath{\PW^\pm \PZ \Pj\Pj}\ production
via two strong vertices. Among the experimental ones, lepton momentum and efficiency determinations bring the largest uncertainty in CMS,
while in ATLAS jet energy scale and resolution uncertainties have a larger impact.
No single contribution has an impact larger than $6\%$ in either analysis, the EW search analysis being
statistically dominated in both experiments.
\paragraph{Results}
ATLAS reports a measured fiducial cross-section of $\sigma_{\mathrm{EW}} = 2.28^{+0.48}_{-0.42}{\ensuremath\unskip\,\text{fb}}\xspace$, where the total
uncertainty is dominated by the statistical one, finding a fairly large measured-to-SM ratio of 1.77.
It corresponds to a background-only hypothesis rejection with a significance of 5.3$\sigma$, while only 3.2$\sigma$ was expected. CMS similarly reports $\sigma_{\mathrm{EW}} = 1.81 \pm 0.41{\ensuremath\unskip\,\text{fb}}\xspace$,
also larger than, but more compatible with, the NLO QCD+EW estimate in the respective fiducial region. It corresponds to a background-only hypothesis rejection with a significance of 6.8$\sigma$.
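Significances such as those quoted above are obtained from profile-likelihood fits over many bins; as a rough single-bin orientation, one can use the well-known Asimov approximation of the median discovery significance (an illustration with hypothetical yields, not the experiments' procedure):

```python
import math

def asimov_significance(s, b):
    """Median expected significance for rejecting the background-only
    hypothesis, for signal yield s on background b in a single bin:
    Z = sqrt(2 * ((s + b) * ln(1 + s/b) - s)).
    For s << b this reduces to the familiar s / sqrt(b)."""
    return math.sqrt(2.0 * ((s + b) * math.log(1.0 + s / b) - s))

# Hypothetical yields: 25 EW-signal events on 60 background events
z = asimov_significance(25.0, 60.0)
```

Scaling both yields with the integrated luminosity exhibits the familiar $\sqrt{L}$ growth of the expected significance.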
The total \ensuremath{\PW^\pm \PZ \Pj\Pj}\ cross section
including EW and strong components is also measured to be $\sigma_{\mathrm{tot}} = 6.7 \pm 1.0$ fb in ATLAS and $\sigma_{\mathrm{tot}} = 4.97 \pm 0.46$ fb in CMS.
Differential cross-sections are reported only for the EW+strong case,
as a function of \ensuremath{m_{\Pj\Pj}}\ in CMS and as a function of many variables in the ATLAS analysis. In CMS, agreement is found
with the SM predictions.
In ATLAS the same conclusion is reached, but only after scaling up these predictions
by the quite large measured-to-SM cross-section ratio.
In the CMS study, limits on aQGC are set by fitting the diboson transverse-mass distribution in the signal region, and
the results are statistically combined with those of the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ analysis to obtain more stringent limits.
\subsubsection{Theoretical calculations}
At the moment, the theory predictions for the production of a heavy gauge boson in association with a photon via VBS are rather limited.
For \ensuremath{\PW^\pm \gamma \Pj\Pj}, they consist of the NLO QCD corrections to the EW signal \cite{Campanario:2013eta} and to the QCD background \cite{Campanario:2014dpa}.
The same holds true for \ensuremath{\PZ \gamma \Pj\Pj}, with NLO QCD corrections available in the VBS approximation \cite{Campanario:2017ffz} and fully for the background \cite{Campanario:2014wga}.
We note that the $\gamma\gamma$ case, which has two jets and two photons in the final state, has also been studied theoretically at NLO QCD for the signal \cite{Campanario:2020xaf}
and the background \cite{Gehrmann:2013bga,Badger:2013ava,Bern:2014vza}.
Unfortunately, as opposed to \ensuremath{\PW^\pm \gamma \Pj\Pj}\ and \ensuremath{\PZ \gamma \Pj\Pj}, this EW production has not yet been measured at the LHC, due to the overwhelming multijet background.
\subsubsection{Experimental approaches}
ATLAS and CMS have reported evidence of electroweak \ensuremath{\PZ \gamma \Pj\Pj}\ production using a partial Run-2 data set~\cite{ATLAS:Zgamma,CMS:Zgamma}.
The CMS result is used in combination with $8\ensuremath{\,\text{TeV}}\xspace$ data to obtain the observed evidence. Only CMS performed the search for the \ensuremath{\PW^\pm \gamma \Pj\Pj}\ final state~\cite{CMS:Wgamma} in the same data set,
leading to the observation of this process.
Photon reconstruction and identification proceed in similar ways in ATLAS and CMS~\cite{Khachatryan:2015iwa,Aaboud:2018yqu} and have many points in common
with the corresponding procedures for electrons. The electromagnetic shower reconstruction in the calorimeter
is analogous and the
identification criteria include the shower shape (in the ATLAS case, information from each individual LAr layer is used)
and a small energy leakage in the hadronic calorimeter. Isolation requirements in trackers and calorimeters
are also similar to those used to select electrons, although for tracker isolation no charged particle is excluded
from the cone. In ATLAS, identification is performed separately for photons
converted into $\ensuremath{\text{e}}\xspace^+\ensuremath{\text{e}}\xspace^-$ pairs in the detector material, while in CMS this contribution is not considered. Selection
requirements are in general tighter than for electrons and differ between the barrel and endcap regions of the
detectors, with barrel photons showing higher purity because of smaller jet backgrounds\footnote{Both ATLAS
and CMS exclude from the photon acceptance a small $|\eta|$ region corresponding to the transition between
barrel and endcap calorimetry.}.
\paragraph{Monte Carlo simulation}
The \ensuremath{\PW^\pm \gamma \Pj\Pj}\ and \ensuremath{\PZ \gamma \Pj\Pj}\ analyses adopt similar simulation strategies as other VBS analyses for signal and background.
VBS signals are simulated at LO with {\sc\small MG5\_aMC}\xspace or {\sc Sherpa}\xspace, with no additional partons besides the two tagging jets.
The EW-QCD interferences are estimated to be 1--3\% (3--8\%) of the EW \ensuremath{\PW^\pm \gamma \Pj\Pj}\ (\ensuremath{\PZ \gamma \Pj\Pj}) signal in the fiducial
regions.
The corresponding QCD-induced processes are simulated either at LO with more than
two extra partons or at NLO with at least two extra partons, using the {\sc\small MG5\_aMC}\xspace and {\sc Sherpa}\xspace generators. In general,
the Monte Carlo generator results are used to normalize the cross sections of the simulated samples, since the available higher-order
QCD calculations do not match the analysis phase spaces~\cite{Campanario:2014wga,Campanario:2014ioa}.
A unique background to these searches is the $\ensuremath{\text{t}}\xspace {\bar \ensuremath{\text{t}}\xspace} \gamma$ SM process, with at least one top quark
decaying leptonically, which is simulated using {\sc\small MG5\_aMC}\xspace at LO (NLO) in ATLAS (CMS).
\paragraph{Fiducial region definitions and reconstruction-level selections}
Fiducial regions considered in the ATLAS and CMS analyses are compared in Table~\ref{tab:gammafr}. The same assumptions as in \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ are used for $\tau$ decays and lepton ``dressing''.
\begin{table}[tb]
\caption{Comparison of \ensuremath{\PW^\pm \gamma \Pj\Pj}\ and \ensuremath{\PZ \gamma \Pj\Pj}\ fiducial region definitions and related EW (VBS) cross-section values in the ATLAS and CMS measurements~\cite{ATLAS:Zgamma,CMS:Zgamma,CMS:Wgamma}.
\emph{JRS} stands for generic Jet-Rapidity Separation selections.
The large differences in the \ensuremath{\PZ \gamma \Pj\Pj}\ fiducial-region choices of the ATLAS and CMS searches are noticeable.}
\center
{\begin{tabular}{@{}cccc@{}} \toprule
Variable & ATLAS \ensuremath{\PZ \gamma \Pj\Pj} & CMS \ensuremath{\PZ \gamma \Pj\Pj} & CMS \ensuremath{\PW^\pm \gamma \Pj\Pj} \\
\midrule
$p_{\rm T}(\ell)$ & $> 20$ GeV & $> 25/20$ GeV & $> 30$ GeV \\
$|\eta(\ell)|$ & $< 2.5$ & $< 2.5/2.4$ & $< 2.4$ \\
$\Delta R(\ell\gamma)$ & $> 0.4$ & $> 0.7$ & - \\
$m_{\ell\ell}/m_{\rm T}(\ensuremath{\text{W}}\xspace)$ & $> 40$ GeV & $[70,110]$ GeV & $> 30$ GeV \\
$p_{\rm T}(\gamma)$ & $> 15$ GeV & $> 20$ GeV & $> 25$ GeV \\
$|\eta(\gamma)|$ & $< 2.37$ & $< 2.5$ & $< 2.5$ \\
$m(\ell\ell\gamma) + m(\ell\ell)$ & $> 182$ GeV & - & - \\
$p_{\rm T,miss}$ & - & - & $> 30$ GeV \\
$p_{\rm T}(\ensuremath{\text{j}}\xspace)$ & $> 50$ GeV & $> 30$ GeV & $> 50/40$ GeV \\
$|\eta(\ensuremath{\text{j}}\xspace)|$ & $< 4.5$ & $< 4.7$ & $< 4.7$ \\
\ensuremath{m_{\Pj\Pj}} & $> 150$ GeV & $> 500$ GeV & $> 500$ GeV \\
JRS & $ \ensuremath{\Delta y_{\Pj\Pj}} > 1$ & $\ensuremath{\Delta \eta_{\Pj\Pj}} > 2.5$ & $\ensuremath{\Delta \eta_{\Pj\Pj}} > 2.5$ \\
centrality & $< 5$ & - & - \\
\midrule
$\sigma$ LO & $7.8 \pm 0.5{\ensuremath\unskip\,\text{fb}}\xspace$ & $5.0 \pm 0.3{\ensuremath\unskip\,\text{fb}}\xspace$ & $17.03{\ensuremath\unskip\,\text{fb}}\xspace$ \\
\bottomrule
\end{tabular} \label{tab:gammafr}}
\end{table}
Reconstruction-level selections follow the fiducial-region requirements. All analyses use
single- or double-lepton triggers to select events online.
In the ATLAS \ensuremath{\PZ \gamma \Pj\Pj}\ analysis, where particle-flow categorization is not
employed, a complex procedure is used to remove potential overlaps between detector signals identified as lepton
QED radiation, photons and jets.
In CMS, a selection on $m(V\gamma) > 100$ GeV reduces the contribution from
final-state radiation in Z-boson decays. For the \ensuremath{\text{W}}\xspace\ case, the invariant mass is computed by imposing a \ensuremath{\text{W}}\xspace-mass
constraint to determine the longitudinal momentum of the missing neutrino. Further VBS enrichment is obtained
using the Zeppenfeld variable of the $V\gamma$ system and its azimuthal-angle difference with the dijet system.
\paragraph{Analysis strategy and background estimation}
The ATLAS \ensuremath{\PZ \gamma \Pj\Pj}\ analysis uses a BDT algorithm to isolate the EW signal over the backgrounds, where the 13 input
variables are related to the kinematic properties of the two tagging jets, the photon, and the reconstructed Z boson.
CMS uses two-dimensional fits of the most discriminating variables, which are (\ensuremath{m_{\Pj\Pj}}, \ensuremath{\Delta \eta_{\Pj\Pj}}) in the \ensuremath{\PZ \gamma \Pj\Pj}\
case and (\ensuremath{m_{\Pj\Pj}}, $m(\ell\gamma)$) in the \ensuremath{\PW^\pm \gamma \Pj\Pj}\ case. In both analyses, events are first separated by
lepton flavor and central or forward rapidity regions, and control regions with small VBS yields are fit together
with the signal regions to constrain background normalizations from data.
In all cases, the
largest background component is the QCD \ensuremath{\PW^\pm \gamma \Pj\Pj}\ or \ensuremath{\PZ \gamma \Pj\Pj}\ production, and the next-to-largest comes from $V$+jets
events where the photon is misidentified or does not come from the hard-scattering event.
The former is estimated from simulation, while the latter is obtained using control regions in data with relaxed photon selections.
The ATLAS and CMS data with superimposed signal and background components are shown in Figure~\ref{fig:wzgamma}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.45\textwidth]{Figures/CMS-SMP-19-008_Figure_003-c.pdf}
\includegraphics[width=0.44\textwidth]{Figures/fig_03a_zgammajj.pdf}
\caption{Post-fit \ensuremath{m_{\Pj\Pj}}\ distributions in $m(\ell\gamma)$ intervals in the CMS \ensuremath{\PW^\pm \gamma \Pj\Pj}\ analysis
(central-rapidity $\mu\gamma$ channel, left) and ATLAS output BDT score in the \ensuremath{\PZ \gamma \Pj\Pj}\ analysis (right).
The ``MisID photon'' event category in CMS is dominated by \ensuremath{\text{W}}\xspace+jets events.}
\label{fig:wzgamma}
\end{figure*}
\paragraph{Systematic uncertainties}
In ATLAS the largest systematic impacts are from the limited size of simulated samples, theory errors
on the \ensuremath{\PZ \gamma \Pj\Pj}\ EW production, and non-prompt-photon background modelling from data.
In CMS, the uncertainties are given as variations on a specific process and not as
total impacts on the measurements. From their absolute size, it can be inferred that a larger uncertainty
than in ATLAS is considered for the theory modelling of the QCD production, at least for \ensuremath{\PW^\pm \gamma \Pj\Pj}.
Among the experimental uncertainties, those on the jet energy scale and resolution have a larger impact than
those on lepton and photon identification, in both experiments.
\paragraph{Results}
ATLAS reports a measured fiducial cross-section of $\sigma_{\mathrm{EW}} = 7.8 \pm 2.0{\ensuremath\unskip\,\text{fb}}\xspace$, where the total
uncertainty is almost equally shared between statistical and systematic+modelling uncertainties.
It corresponds to a background-only hypothesis rejection with a significance of 4.1$\sigma$, in perfect agreement with expectations according to the {\sc\small MG5\_aMC}\xspace simulation,
while {\sc Sherpa}\xspace predicts a somewhat larger cross section.
For \ensuremath{\PZ \gamma \Pj\Pj}\ CMS similarly reports $\sigma_{\mathrm{EW}} = 3.2 \pm 1.2{\ensuremath\unskip\,\text{fb}}\xspace$, where the ratio to the expectation
is $0.65$. It corresponds to a background-only hypothesis rejection with a significance of 3.9$\sigma$, while 5.2$\sigma$ was expected,
and the compatibility with the SM is approximately at the 1.5$\sigma$ level. In the CMS case, the fiducial region is more restrictive, hence the statistical uncertainty is dominant.
For \ensuremath{\PW^\pm \gamma \Pj\Pj}, the corresponding CMS results are $\sigma_{\mathrm{EW}} = 20 \pm 5{\ensuremath\unskip\,\text{fb}}\xspace$, where the ratio to the expectation is $1.2$, in agreement with the SM.
It corresponds to a background-only hypothesis rejection with a significance of 5.3$\sigma$, while 4.8$\sigma$ was expected.
The total cross sections
including EW and strong components are measured to be $\sigma_{\mathrm{\ensuremath{\PZ \gamma \Pj\Pj},tot}} = 71^{+23}_{-19}{\ensuremath\unskip\,\text{fb}}\xspace$ in ATLAS,
$\sigma_{\mathrm{\ensuremath{\PZ \gamma \Pj\Pj}, tot}} = 14 \pm 3{\ensuremath\unskip\,\text{fb}}\xspace$ and $\sigma_{\mathrm{\ensuremath{\PW^\pm \gamma \Pj\Pj}, tot}} = 108 \pm 16{\ensuremath\unskip\,\text{fb}}\xspace$ in CMS,
all in good agreement with the SM.
In the CMS analyses, limits on aQGC are set by fitting the diboson-mass distributions in the signal region with
additional requirements on the minimum transverse momentum of the photon, and/or \ensuremath{m_{\Pj\Pj}}. The range of explored
aQGC includes both mixed and transverse operators. The \ensuremath{\PW^\pm \gamma \Pj\Pj}\ analysis is the most sensitive to the operators
which do not affect anomalous production of heavy vector bosons (where the semileptonic or exclusive
analyses give the most stringent limits), namely $f_{M,2}$, $f_{M,3}$, $f_{M,4}$, $f_{M,5}$, $f_{T,5}$, $f_{T,6}$, and $f_{T,7}$.
\subsubsection{Theoretical calculations}
The theoretical status of the \ensuremath{\PZ\PZ\Pj\Pj}\ channel is rather similar to the \ensuremath{\PW^\pm \PZ \Pj\Pj}\ one.
In particular, the QCD corrections to the EW contributions have been known for some time \cite{Jager:2006cp} and were subsequently matched to PS \cite{Jager:2013iza}.
NLO QCD corrections have also been computed for the QCD background in Ref.~\cite{Campanario:2014ioa}, and results matched to parton showers can be obtained from modern Monte Carlo generators.
The full NLO contributions of orders $\mathcal{O}\left(\alpha^7 \right)$ and $\mathcal{O}\left(\alpha_{\rm s} \alpha^6 \right)$
and all contributing leading orders along with the loop-induced contribution of order $\mathcal{O}\left(\alpha_{\rm s}^4 \alpha^4 \right)$ have been computed in Ref.~\cite{Denner:2020zit}.
Results show the same hierarchy as for \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ and \ensuremath{\PW^\pm \PZ \Pj\Pj}\ and, furthermore, that loop-induced channels can contribute significantly in typical fiducial regions.
This can be seen for example in Fig.~\ref{fig:ZZTH} where, beyond a di-jet invariant mass of $500\ensuremath{\,\text{GeV}}\xspace$, QCD corrections are small while the EW ones are substantial.
In Ref.~\cite{Denner:2020zit} the loop-induced contribution simply amounts to including the $\ensuremath{\text{g}}\xspace\Pg\to\ensuremath{\text{e}}\xspace^+\ensuremath{\text{e}}\xspace^-\mu^+\mu^-\ensuremath{\text{g}}\xspace\Pg$ channel.
Reference~\cite{Li:2020nmi} went beyond this by studying loop-induced ZZ diboson production with up to two jets, merged and matched to parton showers.\footnote{Very recently, NLO QCD corrections matched to parton showers for the loop-induced process $\ensuremath{\text{g}}\xspace\Pg\to\ensuremath{\text{e}}\xspace^+\ensuremath{\text{e}}\xspace^-\mu^+\mu^-$ were presented in Refs.~\cite{Alioli:2021wpn,Grazzini:2021iae}.}
In particular, a proper description of loop-induced contributions can have a significant impact in relevant phase-space regions (see Fig.~\ref{fig:ZZTH}).
\begin{figure*}[t]
\centering
\hspace{-0.4cm}
\includegraphics[width=0.48\linewidth,height=0.34\textheight,clip=true,trim={2.1cm 0.5cm 0.cm 0.cm}]{Figures/histogram_invariant_mass_mj1j2}
\includegraphics[width=0.53\linewidth,height=0.34\textheight,clip=true,trim={0.cm 0.cm 0.5cm 0.6cm}]{Figures/valVBS_mjj}
%
\caption{Various differential distributions for the ZZ channel.
Left: invariant mass of the two tagging jets with NLO EW + QCD corrections.
Right: invariant mass of the two tagging jets with matching and merging of different jet multiplicities.
These figures are taken from Ref.~\cite{Denner:2020zit} and Ref.~\cite{Li:2020nmi}, respectively.}
\label{fig:ZZTH}
\end{figure*}
A summary of the available predictions is provided in Table~\ref{tab:ZZTH}.
Note that the loop-induced contributions are not indicated there as they are only known at LO.
\begin{table}[htb]
\caption{Summary of higher-order predictions currently available for the ZZ channel: at fixed order and matched to parton shower.
The symbols {\bf \color{green} \checkmark}, {\bf \color{green} \checkmark$^*$}, and {\bf \color{red} X}
mean that the corresponding predictions are available, available only in the VBS approximation, or not yet available, respectively.}
\center
{\begin{tabular}{l|cccc}
Order & $\mathcal{O}\left(\alpha^7 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}} {\alpha}^6 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}}^2 {\alpha}^5 \right)$ &
$\mathcal{O}\left({\alpha_{\rm s}}^3 {\alpha}^4 \right)$ \\
\hline
NLO & {\bf \color{green} \checkmark} & {\bf \color{green} \checkmark} & {\bf \color{red} X} & {\bf \color{green} \checkmark} \\
NLO+PS & {\bf \color{red} X} & {\bf \color{green} \checkmark$^*$} & {\bf \color{red} X} & {\bf \color{green} \checkmark}
\end{tabular} \label{tab:ZZTH}}
\end{table}
In Ref.~\cite{Ballestrero:2018anz}, it has been pointed out that for \ensuremath{\PZ\PZ\Pj\Pj}\ the VBS approximation is less accurate at LO than at NLO.
The main reason is that performing the full computation implies including also tri-boson contributions, where one of the gauge bosons decays hadronically.
This means that, when including real QCD radiation,
the two tagging jets could be the gluon radiation and a jet from the aforementioned heavy gauge boson. The gauge boson can be resonant if the two quarks originating from its decay recombine in a single jet.
While rather suppressed, such configurations can be significant when low di-jet invariant masses are used.
This is particularly explicit in the left-hand side of Fig.~\ref{fig:ZZTH}, where the QCD corrections are large at low invariant masses and become small at larger ones.
While such corrections are perfectly legitimate, as they describe a physical effect due to the interplay between the nature of the process and the event selection, in experimental analyses the tri-boson contributions are often subtracted using LO Monte-Carlo simulations.
First, as argued previously, this makes the measurement more dependent on theoretical input.
Second, when low di-jet invariant masses are used, the physical effect described above will then never be described by theoretical predictions that do not include tri-boson contributions.
This example illustrates perfectly the need to perform measurements in as theory-independent a way as possible and to maintain a close exchange between experiment and theory.
\subsubsection{Experimental approaches}
ATLAS has reported observation of electroweak \ensuremath{\PZ\PZ\Pj\Pj}\ production using the full Run-2 data set~\cite{ATLAS:ZZ}
and combining the two $ZZ$ decay channels. CMS only analyzed events with four charged leptons and, after
an earlier publication with limited sensitivity~\cite{CMS:ZZ}, has recently reported strong evidence for the
EW production with the entire 13-TeV data set~\cite{CMS:ZZ2}.
\paragraph{Monte Carlo simulation}
CMS uses {\sc\small MG5\_aMC}\xspace at LO without additional partons to simulate the EW and interference components. The EW-QCD
interference is positive and estimated to be 3--9\% of the EW signal in the various fiducial regions considered
in the analysis. An alternative estimation at NLO QCD using {\sc POWHEG}\xspace~is also considered~\cite{Jager:2013iza}.
The QCD-induced $\ensuremath{\text{Z}}\xspace\PZ+{\rm jets}$ process (without loop-induced contributions) is simulated at NLO with up to one extra parton using {\sc\small MG5\_aMC}\xspace,
merging the jet multiplicities according to the FxFx scheme~\cite{Frederix:2020trv}, and normalizing the
total cross section to NNLO QCD predictions for diboson production~\cite{Grazzini:2018owa}.
While the loop-induced contribution only appears at NNLO in QCD, this contribution is significant because of the $\ensuremath{\text{g}}\xspace\Pg$ initial state,
which is dominant at the LHC.
This contribution is included via a dedicated simulation at LO with up to two extra partons,
using advanced {\sc\small MG5\_aMC}\xspace settings as described in Ref.~\cite{Li:2020nmi}.
Parton Distribution Functions and scale choices follow those made in the \ensuremath{\PW^\pm \PW^\pm \Pj\Pj}\ analysis.
ATLAS uses the same simulation as CMS for EW and interference, but with LO PDFs, and the full size of the
interference component is conservatively used as an uncertainty. {\sc Sherpa}\xspace 2.2.2 is used instead to simulate both
the loop-induced and tree-level strong processes, at LO (NLO) and with up to one (three) extra partons,
depending on the final state. In the $2\ell 2\nu \ensuremath{\text{j}}\xspace\Pj$ channel, a dedicated Monte Carlo generator, {\sc gg2VV},
is used for the inclusive diboson loop-induced component~\cite{Kauer:2013qba}.
Other backgrounds, such as tribosons, $\ensuremath{\text{t}}\xspace{\bar \ensuremath{\text{t}}\xspace} \ensuremath{\text{W}}\xspace^\pm$, and $\ensuremath{\text{t}}\xspace{\bar \ensuremath{\text{t}}\xspace} \ensuremath{\text{Z}}\xspace$, are generated using different
tools, QCD accuracies, and extra-parton multiplicities in the two experiments.
These are in general a minor contribution
to the selected data samples. The only exception is the most inclusive fiducial region of the CMS analysis,
where the triboson contribution is significant at low \ensuremath{m_{\Pj\Pj}}\ values and is subtracted using simulation.
\paragraph{Fiducial region definitions and reconstruction-level selections}
Fiducial regions considered in the ATLAS and CMS analyses are compared in Table~\ref{tab:ZZfr}. In ATLAS
a single value for the combined 4$\ell jj$ and $2\ell2\nu jj$ phase space is given. In CMS, results are given
in three different fiducial regions corresponding to increasing degrees of VBS enrichment.
\begin{table}[htb]
\caption{Comparison of \ensuremath{\PZ\PZ\Pj\Pj}\ fiducial region definitions and related EW (VBS) cross-section values in the ATLAS and CMS
measurements ~\cite{ATLAS:ZZ,CMS:ZZ2}. \emph{JRS} stands for generic Jet-Rapidity Separation selections.}
\center
{\begin{tabular}{@{}cccc@{}} \toprule
Variable & ATLAS $4\ell \ensuremath{\text{j}}\xspace\Pj$ & ATLAS $2\ell 2\nu \ensuremath{\text{j}}\xspace\Pj$ & CMS (inclusive/loose/tight) \\
\midrule
$p_{\rm T}(\ell)$ & $> 20/20/10/7$ GeV & $> 30/20$ GeV & $> 20/10/5/5$ GeV \\
$|\eta(\ell)|$ & $< 2.7/2.5$ & $< 2.5$ & $< 2.5$ \\
$\Delta R(\ell\ell')$ & $> 0.2$ & - & - \\
$m_{\ell\ell}$ & $[60,120]$ GeV & $[80,100]$ GeV & $[60,120]$ GeV \\
$m_{4\ell}$ & - & - & $> 180$ GeV \\
$p_{\rm T,miss}$ & - & $> 130$ GeV & - \\
$p_{\rm T}(\ensuremath{\text{j}}\xspace)$ & $> 40/30$ GeV & $> 60/40$ GeV & $> 30$ GeV \\
$|\eta(\ensuremath{\text{j}}\xspace)|$ & $< 4.5$ & $<4.5$ & $< 4.7$ \\
\ensuremath{m_{\Pj\Pj}} & $>300$ GeV & $> 400$ GeV & $>100/400/1000$ GeV \\
JRS & $ \ensuremath{\Delta y_{\Pj\Pj}} > 2$ & $ \ensuremath{\Delta y_{\Pj\Pj}} > 2$ & $\ensuremath{\Delta \eta_{\Pj\Pj}} > 0/2.4/2.4$ \\
\midrule
$\sigma$ LO & \multicolumn{2}{c}{$0.61 \pm 0.03$ fb} & $0.28\pm 0.02\,/\,0.19 \pm 0.02\,/\,0.10\pm 0.01$ fb \\
$\sigma$ NLO QCD & \multicolumn{2}{c}{-} & $0.28\pm 0.02\,/\,0.20 \pm 0.02\,/\,0.11\pm 0.01$ fb \\
\bottomrule
\end{tabular} \label{tab:ZZfr}}
\end{table}
Reconstruction-level selections follow the fiducial-region requirements very closely with minor
additions (in CMS electron $p_{\rm T}$ thresholds are raised to 7 GeV, in ATLAS the $\ensuremath{\text{Z}}\xspace$-mass requirement is
$[66,116]$ GeV and the $p_{\rm T,miss}$ variable is replaced by its significance). Double or single-lepton
triggers are used to select data, keeping in general a very high efficiency.
\paragraph{Analysis strategy and background estimation}
Both ATLAS and CMS use multivariate analyses to isolate the EW signal over the large QCD background.
In the 4$\ell \ensuremath{\text{j}}\xspace\Pj$ channel, ATLAS uses a BDT comprised of 12 variables: \ensuremath{m_{\Pj\Pj}}, \ensuremath{\Delta y_{\Pj\Pj}}, jet $p_{\rm T}$ and rapidities,
$\ensuremath{\text{Z}}\xspace$ candidate $p_{\rm T}$ and rapidities, and combinations of full-event variables. A similar choice is made for the
$2\ell2\nu \ensuremath{\text{j}}\xspace\Pj$ channel, replacing the quantities related to the undetected $\ensuremath{\text{Z}}\xspace$ with $p_{\rm T,miss}$, its
significance, and related variables. In CMS, the kinematic EW discriminant $K_D$ is instead built from analytical
matrix elements of the EW and strong processes at LO, obtained from the {\sc MCFM} generator and using
the {\sc MELA} event-probability calculator~\cite{Gao:2010qx,Bolognesi:2012mm,Gritsan:2020pib}.
Both ATLAS and CMS use QCD-enriched control regions to validate background estimates in the 4$\ell$ channel,
while the background from one $\ensuremath{\text{Z}}\xspace$ boson and non-prompt leptons is sub-dominant. In the ATLAS $2\ell2\nu \ensuremath{\text{j}}\xspace\Pj$ channel,
the evaluation of the QCD contribution is instead simulation-based while control regions are used to estimate
other important background processes such as \ensuremath{\PW^\pm \PZ \Pj\Pj}, $\ensuremath{\text{W}}\xspace^+\ensuremath{\text{W}}\xspace^-\ensuremath{\text{j}}\xspace\Pj$, and $\ensuremath{\text{t}}\xspace{\bar \ensuremath{\text{t}}\xspace}$.
The ATLAS and CMS $4\ell \ensuremath{\text{j}}\xspace\Pj$ data with superimposed signal and background components are shown in Figure~\ref{fig:zzjj}.
\begin{figure*}[t]
\centering
\includegraphics[width=0.48\textwidth]{Figures/CMS-SMP-20-001_Figure_003.pdf}
\includegraphics[width=0.41\textwidth]{Figures/fig_03b_zzjj.pdf}
\caption{Post-fit $K_D$ distribution in CMS (left) and BDT score distribution in ATLAS (right) in the \ensuremath{\PZ\PZ\Pj\Pj}\ analysis.
The very different signal-to-background ratio is due to the different VBS-enrichment level in the two figures.}
\label{fig:zzjj}
\end{figure*}
\paragraph{Systematic uncertainties}
In both ATLAS and CMS, uncertainties are given as fractions of predicted yields and not as impacts on the
cross-section measurements, which makes comparisons more difficult. The ATLAS uncertainties appear to be
dominated by a very large ($30\%$) theory uncertainty on the strong \ensuremath{\PZ\PZ\Pj\Pj}\ production simulated with {\sc Sherpa}\xspace.
In CMS this uncertainty, EW uncertainties, as well as uncertainties on lepton and jet measurements contribute
in similar amounts.
\paragraph{Results}
ATLAS reports a measured fiducial cross-section of $\sigma_{\mathrm{EW}} = 0.82 \pm 0.21{\ensuremath\unskip\,\text{fb}}\xspace$, where the total
uncertainty is dominated by the statistical one.
It corresponds to a background-only hypothesis rejection with a significance of 5.5$\sigma$, while 4.3$\sigma$ is expected, with the 4$\ell jj$ channel exhibiting a much larger sensitivity. In the three fiducial regions
CMS reports $\sigma_{\mathrm{EW,incl}} = 0.33^{+0.12}_{-0.11}{\ensuremath\unskip\,\text{fb}}\xspace$, $\sigma_{\mathrm{EW,loose}} = 0.18^{+0.09}_{-0.08}{\ensuremath\unskip\,\text{fb}}\xspace$, and $\sigma_{\mathrm{EW,tight}} = 0.09^{+0.05}_{-0.04}{\ensuremath\unskip\,\text{fb}}\xspace$,
all in agreement with SM expectations at both LO and NLO in QCD. The background-only hypothesis is rejected with a significance of 4.0$\sigma$.
In ATLAS, the total \ensuremath{\PZ\PZ\Pj\Pj}\ cross section
including EW and strong components is also measured to be $\sigma_{\mathrm{tot}} = 1.27 \pm 0.14{\ensuremath\unskip\,\text{fb}}\xspace$ in the
4$\ell \ensuremath{\text{j}}\xspace\Pj$ channel and $\sigma_{\mathrm{tot}} = 1.2 \pm 0.3{\ensuremath\unskip\,\text{fb}}\xspace$ in the 2$\ell 2\nu \ensuremath{\text{j}}\xspace\Pj$ channel.
In CMS the measured value is $\sigma_{\mathrm{tot}} = 5.3 \pm 0.6{\ensuremath\unskip\,\text{fb}}\xspace$, in the most inclusive fiducial region.
In the CMS analysis, limits on aQGC are set by fitting the $\ensuremath{\text{Z}}\xspace\PZ$ invariant mass distribution in the most inclusive region.
These are particularly constraining for the T8 and T9 operators, which involve only neutral fields and to which this channel therefore has the greatest sensitivity.
\section{Introduction}
Entity resolution (ER) is the process of identifying records that refer
to the same real-world entity.
Accurate and efficient ER is needed in various data-intensive
applications, including but not limited to health studies, fraud
detection, and national censuses~\cite{Christen:2012:DMC:2344108}.
More specifically, ER plays a pivotal role in the context of
Australia's whole-of-government approach to tackle our most pressing
social issues -- including terrorism and welfare fraud -- by combining
and analysing datasets from multiple government agencies.
Two typical challenges in entity resolution are imperfect
data quality and large data size.
Common data quality issues that can introduce ambiguity in the ER process include:
\begin{itemize}\itemsep1.5mm\parskip0mm
\item \textbf{Incompleteness}: Records with incomplete attribute
values or even missing attribute values.
\item \textbf{Incompatible formats}: The formats of names,
addresses, dates, numbers, etc., can be different between
countries, regions, and languages.
\item \textbf{Errors}: Records containing wrong information due to
either user or system errors, or deliberate attempts at
obfuscation are widely seen in databases.
\item \textbf{Timeliness}: Records have become outdated due to poor maintenance or
data refresh practices, such as people changing their name or
address.
\end{itemize}
In databases containing upwards of tens to hundreds of millions of records, ER can also be challenging because exhaustively comparing records in a
pairwise manner is computationally infeasible~\cite{Chr12b}.
In fact, any algorithm with time complexity worse than
linear is prohibitive on large databases.
In this paper, we present a simple and scalable ER algorithm that addresses the challenges of performing ER on poor quality and high volume data.
The key ideas behind our proposed approach are described next.
\subsubsection*{Using Redundancy to Overcome Data Quality Issues}
The most common way to tackle data quality issues is to standardise
and cleanse raw data before the linking operation
\cite{Christen:2012:DMC:2344108}. Standardisation and cleansing are
umbrella terms covering operations which can fill in incomplete data,
unify inconsistent formats, and remove errors in data.
The problem with standardisation and cleansing is that it is in
itself a challenging problem. For example, \emph{01/02/2000} can be
parsed as either \emph{1st of Feb 2000} or \emph{2nd of Jan 2000}.
\emph{St} can mean either \emph{Street} or \emph{Saint} in addresses.
If a mistake is made during standardisation and cleansing, it is
usually difficult to recover from it to perform linkage correctly.
Instead of standardising and cleansing data into canonical forms, we
rely on redundancy in data to overcome data quality issues. We say a
record contains redundancy if one of its subrecords can uniquely
identify the same entity. For example, if there is only one
\emph{John Smith} living in \emph{Elizabeth Street}, then \emph{John
Smith, 45 Elizabeth Street} as a record of a person contains redundancy,
because specifying street number \emph{45} is not really necessary.
Redundancy exists widely in data. Not every country has a city named
\emph{Canberra}. Not every bank has a branch in \emph{Bungendore}. As
an extreme case, three numbers \emph{23 24 5600} can be sufficient
to specify an address globally, if there is only one address in the
world containing these three numbers at the same time. In this case,
we do not even need to know if \emph{23} is a unit number or the first
part of a street number.
Such seemingly extreme examples are actually quite common in practice.
For example, $1,374,998$ of the $13.9$ million Australian addresses in the Open Address
\cite{openaddress} database can be uniquely identified by just three numbers in them.
Redundancy simplifies ER. If two records share a common subrecord that
can be used to uniquely identify an entity, then these two
records can be linked no matter what data quality issues they each have.
We call such a subrecord a \emph{signature} of its entity.
Probabilistic identification of signatures in data and linking records using such probabilistic signatures is the first key idea of our algorithm.
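As a toy illustration of this idea, the sketch below tests whether a token subset behaves as a signature. The records and token sets are invented, and uniqueness over records is used as a crude stand-in for uniqueness over entities; it is not the paper's probabilistic method.

```python
# Hypothetical illustration: a token set is treated as a signature iff
# exactly one record in the database contains all of its tokens.
records = {
    "r1": "john smith 45 elizabeth street",
    "r2": "jane doe 12 elizabeth street",
    "r3": "john smith 7 canberra avenue",
}

def is_signature(tokens, records):
    # count the records that contain every token in the candidate subrecord
    hits = [rid for rid, text in records.items()
            if set(tokens) <= set(text.split())]
    return len(hits) == 1

print(is_signature(("john", "elizabeth"), records))    # True: only r1 matches
print(is_signature(("elizabeth", "street"), records))  # False: r1 and r2 match
```

Two records sharing such a uniquely identifying subset could then be linked regardless of their other quality issues.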
\subsubsection*{Data-Driven Blocking using Signatures}
Blocking is a widely used technique to improve ER
efficiency~\cite{Christen:2012:DMC:2344108}. Na{\"i}vely, linking two
databases containing $m$ and $n$ records respectively requires
$O(mn)$ record pair comparisons. Most of these comparisons lead to
non-matches, i.e.\ they correspond to two records that refer to
different entities. To reject these non-matches with a lower cost,
one may first partition the raw records according to criteria
selected by a user. These criteria are called \emph{blocking
keys}~\cite{Chr12b}. Examples of blocking keys include attributes
such as first and last name, postcode, and so on. During linkage,
comparison is only carried out between records that fall into the
same partition, based on the assumption that records sharing no
blocking keys do not match with each other.
The efficiency and completeness of ER is largely determined by
blocking-key selection, which again is challenging in itself. If the
keys are not distinctive between disparate entities, many irrelevant
records will be placed into the same block, which gains little
improvement in efficiency. If the keys are not invariant with
respect to records of the same entities, records of the same entity
will be inserted into different blocks and many true matching record
pairs will be missed. If the key values do not distribute evenly
among the records, the largest few blocks will form the bottleneck of
ER efficiency. When dealing with a large dataset, it is challenging
to balance all these concerns. Moreover, the performance of blocking
keys also depends on the accuracy of any data standardisation and
cleansing performed~\cite{Christen:2012:DMC:2344108}.
In an ideal world, we would like to use signatures as the blocking key and
place only records of the same entity into the same block.
In practice, we do not know which subrecords are signatures but we can still approximate the strategy by blocking on probabilistically identified signatures, as we describe in
Section~\ref{sec-signatures}.
These probabilistic signatures tend to be empirically distinctive and to exhibit low frequency in the database, which allows small and accurate blocks to be constructed.
The only risk is these blocking keys may not be invariant with respect to records of the same entities.
To address this, we introduce an inter-block connected component algorithm,
which is explained next.
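As a minimal illustration of the blocking step itself (a hypothetical sketch with invented record IDs and signature strings, not the paper's implementation), an inverted index from signatures to records yields the blocks directly:

```python
from collections import defaultdict

def build_blocks(record_signatures):
    # record_signatures: record id -> set of (probabilistic) signature strings
    index = defaultdict(set)          # inverted index: signature -> record ids
    for rid, sigs in record_signatures.items():
        for s in sigs:                # a record enters one block per signature
            index[s].add(rid)
    # only blocks where a signature is shared by >= 2 records need comparing
    return {s: ids for s, ids in index.items() if len(ids) > 1}

blocks = build_blocks({
    "r1": {"smith|elizabeth", "45|elizabeth"},
    "r2": {"smith|elizabeth"},
    "r3": {"23|24|5600"},
})
# r1 and r2 share the signature "smith|elizabeth" and fall into one block
```

Because a record can carry several signatures, it may enter several blocks; records of the same entity that share no signature are then linked transitively by the connected-component step.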
\subsubsection*{Connected Components for Transitive Linkage}
As discussed above, the blocking-by-probabilistic-signature technique
leads to quite targeted blocking of records, with high precision but
possibly low recall. This is in contrast to standard blocking
techniques that tend to have low precision but high
recall~\cite{Chr12b}.
To compensate for the loss in recall, we allow each record to be inserted
into multiple blocks, using the fact that each record may contain
multiple distinct signatures. Moreover, to link records of the same
entity that do not share the same signature, we allow two records in
two different blocks to be linked if they are linked to the same
third record in their own blocks.
To implement such an indirect (transitive) link, we run a connected
component algorithm that assigns the same label (entity identifier) to all
records connected directly or indirectly.
A particular challenge in our context is the size of the graphs we
have to deal with. There are as many nodes as the number of records.
Such a graph can be too large to fit into main memory. Random
access to nodes in the graph, which is required by traditional
depth/breadth-first search algorithms, might therefore not be
feasible.
To address this, we propose a connected-component labelling algorithm that operates on large
graphs stored in a distributed database. The algorithm uses
standard relational database operations, such as grouping and join,
in an iterative way and converges in linear time. This connected
component operation allows us not only to use small-sized data
blocks, but also to link highly inconsistent records of the same
entity transitively.
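The iterative labelling described above can be sketched in a few lines. This is a simplified in-memory version, not the distributed SQL implementation: it repeatedly replaces each edge endpoint's label with the minimum label across the edge until no label changes, which is the fixed point at which every component carries a single label.

```python
def connected_components(edges):
    """Label each node with the smallest node id in its component."""
    edges = list(edges)
    label = {}
    for u, v in edges:                # initialise each node's label to itself
        label.setdefault(u, u)
        label.setdefault(v, v)
    changed = True
    while changed:                    # each pass mimics one join-and-aggregate
        changed = False
        for u, v in edges:            # propagate the minimum label across edges
            m = min(label[u], label[v])
            if label[u] != m or label[v] != m:
                label[u] = label[v] = m
                changed = True
    return label

print(connected_components([(1, 2), (2, 3), (4, 5)]))
# {1: 1, 2: 1, 3: 1, 4: 4, 5: 4}
```

Each pass corresponds to joining the edge table with the current label table and grouping by node, which is what makes the method expressible in standard relational operations.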
\subsubsection*{Implementation on Parallel Databases}
Massively parallel processing databases like Teradata and Greenplum
have long supported parallelised SQL that scales to large datasets.
Recent advances in large-scale in-database analytics platforms
\cite{DBLP:journals/pvldb/HellersteinRSWFGNWFLK12,
Zaharia:2010:SCC:1863103.1863113} have shown how sophisticated
machine learning algorithms can be implemented on top of a
declarative language like SQL or MapReduce to scale to
Petabyte-sized datasets on cluster computing.
One merit of our proposed method is it can be implemented on
parallelised SQL using around ten SQL statements. As our experiments
presented in Section~\ref{sec-experiments} show, our algorithm can
link datasets containing thousands of records in seconds, millions of
records in minutes, and billions of records in hours on medium-sized clusters built using inexpensive commodity hardware.
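To illustrate how such a step maps onto parallelised SQL-style operations, the following hedged pandas sketch (table and column names are invented for illustration) expresses a single label-propagation iteration as a join followed by a group-by; a full implementation would also propagate along the reverse edge direction and iterate to convergence:

```python
import pandas as pd

edges  = pd.DataFrame({"src": [1, 2, 4], "dst": [2, 3, 5]})
labels = pd.DataFrame({"node": [1, 2, 3, 4, 5], "label": [1, 2, 3, 4, 5]})

# JOIN the edge table with the current labels of the source nodes ...
step = edges.merge(labels, left_on="src", right_on="node")
# ... then GROUP BY destination node and take the minimum incoming label
incoming = step.groupby("dst", as_index=False)["label"].min()
incoming = incoming.rename(columns={"dst": "node", "label": "incoming"})

# update: each node keeps the smaller of its own and its incoming label
new = labels.merge(incoming, on="node", how="left")
new["label"] = new[["label", "incoming"]].min(axis=1)
print(new[["node", "label"]])
```

Each such statement runs unchanged on a parallel engine, which is what allows the whole algorithm to be written in roughly ten declarative statements.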
\subsubsection*{Paper Contributions}
The contribution of this paper is a novel ER algorithm that
\begin{enumerate}\itemsep1mm\parskip0mm
\item introduces a probabilistic technique to identify, from
unlabelled data, entity signatures derived from a first-principles
formulation of the ER problem;
\item introduces a new and effective data-driven blocking technique based on the occurrence of common probabilistic signatures in two records;
\item incorporates a scalable connected-component labelling
algorithm that uses inverted-index data structures and
parallel databases to compute transitive linkages in large
graphs (tens to hundreds of millions of nodes);
\item is simple and scalable, allowing the whole algorithm to be written in
$\sim$10 standard SQL statements on
modern parallel data platforms like Greenplum and Spark;
\item achieves state-of-the-art performance on several benchmark
datasets and pushes the scalability boundary of existing ER algorithms.
\end{enumerate}
Our paper also provides a positive answer to an open research problem
raised by \cite{papadakis17} about the
existence of scalable and accurate data-driven blocking algorithms.
The paper is organised as follows.
In Section~\ref{sec:problem formulation} we formulate the ER problem precisely.
In Section~\ref{sec-signatures} we describe how entity signatures can be identified in a probabilistic way.
In Section~\ref{sec:cc} we propose a scalable graph-labelling algorithm for identifying transitive links.
We present the overall algorithm for signature-based ER in Section~\ref{sec:signature-er}.
Experimental results are presented in Section~\ref{sec-experiments}, followed by a literature review and discussion in Section~\ref{sec:rel-work} and conclusion in Section~\ref{sec:conclusion}.
\section{Problem Formulation}\label{sec:problem formulation}
The ER problem is usually loosely defined as the problem of
determining which records in a database refer to the same entities.
This informal definition can hide many assumptions, especially on the
meaning of the term ``same entities''. To avoid confusion, we now
define our ER setting in a more precise manner.
\begin{definition}
A {\bf possible world} is a tuple $(W, R, E, D)$, where $W$ denotes a
set of words; $R$ denotes the set of all records, where a record $r
\in R$ is a sequence of words from $W$ (i.e.\ order matters);
$E=\{e_1, e_2, \dots\}$ denotes a set of entity identifiers; and
$D: E \times R$ is a subset of the Cartesian product between $E$ and
$R$.
\end{definition}
We say record $r\in R$ refers to entity $e \in E$, if
$(e,r) \in D$. Note that an entity may be referred to by multiple (possibly
inconsistent) records, and each record may refer to multiple
entities, i.e., there are ambiguous records. Some records may belong
to no entities in $E$. For example, \emph{John Smith, Sydney} is
likely referring to several individuals named \emph{John Smith} who
live in \emph{Sydney}, and therefore this record is ambiguous as it
can refer to any of them. On the other hand, in real-world databases
there are often records that contain randomly generated, faked, or
corrupted values, such as those used to test a system or that were
intentionally modified (for example \emph{John Doe} or
\emph{(123) 456-7890}) by a user who does not want to provide their
actual personal details~\cite{Chr16}.
In practice, a possible world is only `knowable' through a (finite)
set of observations sampled from it.
\begin{definition}\label{def:observations}
Given a possible world $(W,R,E,D)$, we can sample an $(e,r)$ pair
using some (usually unknown) probability distribution on $D$. By
repeating the sampling $n$ times, we obtain a set of {\bf labelled
observations} of the possible world, $\{ (e_i, r_i) \}_{i=1 \ldots
n}$. From labelled observations, we can derive {\bf unlabelled
observations} by removing all the $e_i$'s.
\end{definition}
Roughly speaking, ER is the problem of reconstructing labelled
observations from unlabelled observations.
\begin{definition}\label{def:er}
Given a set of unlabelled observations $O$
sampled from a possible world $(W,R,E,D)$, {\bf entity resolution}
is the problem of constructing a partition of $O$
\[ O = \bigcup_{i} O_i \] satisfying the following two properties:
(1) for each $O_i$, there exists an $e \in E$ such that
$\{ (e,r) \,|\, r \in O_i \} \subseteq D$; and
(2) the number of partitions is minimised.
\end{definition}
A trivial way to satisfy the first condition of Definition~\ref{def:er} is to assign each record in $O$ to its own partition.
The second condition is needed to make sure records of the same underlying entity are assigned to the same partition.
ER as defined above is an underconstrained optimisation problem. For example,
there could be multiple ways of partitioning a set of unlabelled
observations that all satisfy Definition~\ref{def:er} because of the
existence of ambiguous records that refer to multiple entities.
We need further assumptions on the structure of possible worlds, in
particular the structure of $D$, to be able to distinguish between
possible solutions.
The following are some common ways of refining the ER problem, each
with its own assumptions on $D$.
\begin{enumerate}
\item \textbf{Supervised Learning Methods}:
The first class of methods assume that a set of labelled
observations is available with which we can apply supervised
learning techniques to label a much larger set of unlabelled
observations~\cite{Christen:2012:DMC:2344108,
Kopcke:2010:EER:1920841.1920904}. In particular, these methods
assume the joint probability distribution of entities and
records $P:E \times R \rightarrow [0,1]$ induced by the
unknown $D$ and the observations' sampling process have enough
structure, in the learning-theoretic sense
of~\cite{anthony-bartlett99, bartlett-mendelson02}, to be
learnable from finite sample sizes and suitable model
classes. Note the probability of learning good models is with
respect to a probability distribution on the possible worlds
that are consistent with a set of labelled observations.
\item \textbf{Distance Based Methods}:
The second class of methods work only with unlabelled
observations and assume records can be embedded into a metric
space, where records of an entity fall in a compact
region~\cite{Jin03}. One first finds such a metric space in
the form of a suitable distance function that incorporates
domain knowledge on what constitutes records that
likely belong to the same entity. Records are then clustered,
either exactly through a nearest-neighbour algorithm or
approximately using blocking or clustering
techniques~\cite{Chr12b}, and then labelled based on some
linkage rule. This is by far the most common approach to ER
and has a long history going back nearly fifty
years~\cite{fellegi-sunter69}. Distance based methods are
sometimes used in conjunction with supervised learning
algorithms to determine the linkage rule or clustering
thresholds~\cite{Christen:2012:DMC:2344108}.
\end{enumerate}
\subsubsection*{Signature-Based Entity Resolution}
We consider in this paper a family of signature-based methods, where
we assume each entity has distinctive signatures that can be detected
from a set of unlabelled observations (sampled from a possible world)
and that the signatures so-detected can be used to link records of the
same entities. Compared to the other two types of methods described
above, signature-based methods make a number of alternative
assumptions on the structure of possible worlds which we now describe.
A sufficient condition for a record to be a signature is that it belongs to one and only one entity in a possible world.
However, the condition is not a necessary one because a signature of an entity $e$ does not have to be a record of $e$, but merely one computationally derivable from a record belonging to $e$.
We now formalise the idea.
\begin{definition}\label{def:sig3}
Given a possible world $(W,R,E,D)$ and a computable relation $T: R\times R$,
record $s$ is a \textbf{signature} of an entity $e$ subject to $T$ iff
\begin{enumerate}
\item $\exists r \in R$ such that $(s,r) \in T$ and $(e,r)
\in D$; and
\item $\forall f \in E$, $\forall r \in R$, if $(f,r) \in D$ and $(s,r) \in T$, then $e = f$. \label{def:sig3:part2}
\end{enumerate}
\end{definition}
One way to understand Definition~\ref{def:sig3} is that $T$ defines a computable
transform of a record $s$ into all its variants $\{ r \,|\, (s,r) \in T \}$,
and $s$ is a signature of $e$ if all its variants obtained via $T$
contain and only contain records belonging to $e$.
A signature provides a sufficient condition for linking two records.
\begin{proposition}\label{prop:sufficient}
Let $P$ be a possible world, $T$ a relation, and $s$ a signature of an entity subject to $T$.
Two unlabelled observations $r,t$ sampled from $P$ belong to the same entity if $(s,r) \in T$ and $(s,t) \in T$.
\end{proposition}
\begin{proof}
By Definition~\ref{def:observations}, there exist entities $e_r, e_t \in E$ such that $(e_r,r) \in D$ and $(e_t,t) \in D$.
By Definition~\ref{def:sig3} part~\ref{def:sig3:part2}, we can infer $e_r = e$ from $(s,r) \in T$ and $(e_r,r) \in D$, and $e_t = e$ from $(s,t) \in T$ and $(e_t,t) \in D$. Hence $e_r = e_t$.
\end{proof}
\begin{figure}[t!]
\centering
\includegraphics[width=0.47\textwidth]{street-example.pdf}
\caption{A possible world of addresses of three entities ($e_1, e_2, e_3$) and their signatures as described in Example~\ref{ex:subrecord}, where
records of different entities are shown in different colours, and
thick outlines show records/subrecords which are signatures
subject to the subrecord relation. \label{fig:st} }
\end{figure}
To familiarise readers with our formulation, we now describe some traditional ER algorithms in terms of our concepts.
\begin{example}
Rule-based ER: link two records if they share patterns
predefined by some regular expressions, i.e., link $r$ and $s$ if
$\mathtt{regexp}(r) = \mathtt{regexp}(s)$:
\[ T=\{(s,r) \mid \mathtt{regexp}(s) = \mathtt{regexp}(r)\} \]
\end{example}
\begin{example}
Distance-based ER: link two records if their distance is
below a threshold $\tau$ according to a selected/learned distance
function $d$~\cite{Christen:2012:DMC:2344108}:
\[ T=\{(s,r) \mid d(s,r)<\tau\} \]
\end{example}
A common design in traditional ER algorithms is to find a relation $T$ that contains all pairs of records $s$ and $r$ referring to the same entity. Two records $s$ and $r$ are then linked using the facts $(s,s) \in T$ and $(s,r) \in T$ together with Proposition~\ref{prop:sufficient}. The concept of signature is not used explicitly in this design because every unambiguous record in a dataset is then a signature. The challenge lies entirely in finding the relation $T$.
In this paper, we follow a different strategy. Instead of searching
for an unknown relation $T$, we start with one (or more) known relation(s) $T$ and then search for records which are signatures subject to this known $T$.
A trivial example is when $T$ is defined by equality: $T=\{(s,r) \mid s=r\}$.
Signatures subject to equality are those records that belong to one and only one entity.
These signatures are not particularly interesting, as they can only be used to find exact duplicate records in a database.
\subsubsection*{Signatures based on the Subrecord Relation}
Consider now the more powerful $T$ defined by the \emph{subrecord} relation.
Given a record $r$, we say $s$ is a subrecord of $r$, denoted
$s \preceq r$, if $s$ is a subsequence of $r$, i.e.\ $s$ can be
derived from $r$ by deleting some words without changing the order of
the remaining words.
Equivalently, we sometimes say $r$ is a superrecord of $s$ to mean $s \preceq r$.
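The subrecord relation is simply the word-level subsequence relation. As an illustration (the function name is ours, not part of the paper's implementation), a minimal check in Python:

```python
def is_subrecord(s: str, r: str) -> bool:
    """Return True iff s is a subrecord of r, i.e. the words of s
    form a subsequence of the words of r (order preserved)."""
    words = iter(r.split())
    # Each word of s must be found, in order, among the words of r;
    # `w in words` advances the iterator until it sees w.
    return all(w in words for w in s.split())
```

Note that matching is at the word level, so \emph{Victoria St} is not a subrecord of \emph{Victoria Street}, even though it is a character-level subsequence.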
\begin{example}\label{ex:subrecord}
Define $T=\{(s,r) \mid s \preceq r\}$. Suppose we have the
possible world shown in Figure~\ref{fig:st}, in which
\begin{itemize}
\item $W$=\{Victoria, Street, St, George\}
\item $E$=\{$e_1$, $e_2$, $e_3$\}
\item $D$=\{($e_1$, ``Victoria Street''), \-\hspace{8pt}($e_1$,
``Victoria St''), \\
\-\hspace{17pt} ($e_2$, ``George Street''),
\-\hspace{12pt}($e_2$, ``George St''), \\
\-\hspace{17pt} ($e_3$, ``St George Street''), ($e_3$,
``St George St'')\}.
\end{itemize}
Figure~\ref{fig:st} shows the six records in $D$ as well as their
subrecords. Records of different entities are shown in different
colours. We add thick outlines to records/subrecords which are
signatures subject to the subrecord relation. For example, the word
\emph{Victoria} is a signature because all records in $D$ containing
\emph{Victoria} as a subrecord belong to the same entity $e_1$. We
can therefore link these records during ER despite their inconsistency. In contrast,
\emph{Street} is not a signature because it has three superrecords in
$D$ that belong to three distinct entities. Since a record is a
subrecord of itself, some of the records appearing in $D$ are
signatures as well. A special case is entity $e_2$, which does not
have any signature subject to the subrecord relation because all
its records, \emph{George Street} and \emph{George St}, are
subrecords of another entity's records as well. Therefore all their
subrecords are shared by at least two entities. However, entities
like this, whose records are all subrecords of other entities, are
rare in practice, especially when multiple attributes are considered.
\end{example}
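The signatures in Figure~\ref{fig:st} can be enumerated mechanically: generate every word-level subsequence of every record in $D$ and keep those whose superrecords in $D$ all belong to a single entity. The following sketch does this for the toy possible world of Example~\ref{ex:subrecord} (helper names are ours; this exhaustive enumeration is only feasible for toy data):

```python
from itertools import combinations

# The toy possible world from the example: (entity, record) pairs.
D = [("e1", "Victoria Street"),  ("e1", "Victoria St"),
     ("e2", "George Street"),    ("e2", "George St"),
     ("e3", "St George Street"), ("e3", "St George St")]

def is_subrecord(s, r):
    it = iter(r.split())
    return all(w in it for w in s.split())

def subrecords(r):
    """Yield every word-level subsequence of r (including r itself)."""
    words = r.split()
    for n in range(1, len(words) + 1):
        for idx in combinations(range(len(words)), n):
            yield " ".join(words[i] for i in idx)

# A subrecord is a signature iff all its superrecords in D
# belong to exactly one entity (signature subject to the
# subrecord relation, as defined above).
candidates = {s for _, r in D for s in subrecords(r)}
signatures = {}
for s in candidates:
    owners = {e for e, r in D if is_subrecord(s, r)}
    if len(owners) == 1:
        signatures[s] = owners.pop()
```

Running this recovers the thick-outlined nodes of Figure~\ref{fig:st}: \emph{Victoria} is a signature of $e_1$, \emph{St George} of $e_3$, while \emph{Street}, \emph{St}, and every subrecord of $e_2$'s records are shared by multiple entities.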
From the example above, we can also see the following distinction between our method and traditional ER methods. By explicitly introducing the concept of signatures, we no longer deal with pairwise linkage between records in $O$, but the linkage between records in $O$ and signatures.
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{compare.png}
\caption{\label{fig:distinction} Left: linkage to be established by traditional ER methods; right: linkage to be established by the proposed method.}
\end{figure}
This distinction is illustrated in Figure~\ref{fig:distinction}, where records are variants of the same address.
Although both graphs depict the same linkage solution, the one used by our method (right-hand side) contains fewer links due to the use of signatures. This distinction partly explains why the proposed method is more efficient.
\begin{definition}\label{def:tsubseteq}
Given a set $O$ of unlabelled observations,
we define $\sqsubseteq$ to be the restriction of $\preceq$ to only terms that are subrecords of observations in $O$.
\[ \sqsubseteq \,= \, \{ (s,r) \mid \exists o\in O.(s \preceq r \wedge r \preceq o) \} \]
\end{definition}
\begin{definition}
Given a set $O$ of unlabelled observations, we call a signature subject to $\sqsubseteq$ a $\sqsubseteq$-signature.
\end{definition}
\begin{proposition}\label{prop:superrecord signature}
Given a set $O$ of unlabelled observations,
if $s$ is a $\sqsubseteq$-signature of an entity $e$ and
$s \sqsubseteq r$, then $r$ is also a $\sqsubseteq$-signature of $e$.
\end{proposition}
\begin{proof}
Part 1 of Definition~\ref{def:sig3}:
Since $s \sqsubseteq r$, there exists $o \in O$ such that $s \preceq r$ and $r \preceq o$.
We have $s \sqsubseteq o$ since $\preceq$ is transitive and $o \in O$.
To show $(e,o) \in D$, observe that $o \in O$ implies there exists $f$ such that $(f,o) \in D$.
Since $s$ is a $\sqsubseteq$-signature of $e$ and $s \sqsubseteq o$, we have $e = f$.
Part 2 of Definition~\ref{def:sig3}:
Consider any $t$ and $f$ such that $r \sqsubseteq t$ and $(f,t) \in D$.
We have $s \sqsubseteq t$ since $\sqsubseteq$ is transitive and $s \sqsubseteq r$.
Since $s$ is a $\sqsubseteq$-signature of $e$,
we have $f=e$.
\end{proof}
In practical applications of ER, $\sqsubseteq$-signatures are common.
For example, in a database where entities have unique identifiers such as passport numbers, driver's licenses or tax file numbers,
each unique ID is a signature of its entity. (Recall that the
subrecord relation captures the equality relation as a subset.) Even
in the absence of such unique IDs, countries like Australia have
identity verification systems like the 100-point check~\cite{ftract} that allow a
combination of possibly non-unique attributes to be used as a person's
signature.
Given a set of unlabelled observations sampled from an unknown possible world, in the following section we provide an algorithm that
can resolve, with high probability, those entities
that have (one or more) $\sqsubseteq$-signatures.
In the rest of this paper, signatures always refer to
$\sqsubseteq$-signature unless otherwise indicated.
\section{Identification of Signatures}
\label{sec-signatures}
Our general strategy for ER is to {\em probabilistically} identify
signatures from unlabelled observations and then transitively link
records via the identified signatures.
Given a set of unlabelled observations $O$, our first step is to
remove all exact duplicate records to arrive at a deduplicated set
of records. In a deduplicated dataset containing $n$ records, a
subrecord recurs $m$ times if $m$ out of the $n$ records are its
superrecords. By definition, a signature is unique to an entity.
Further, a signature may not appear in every record of the entity to
which it belongs. A non-signature, in contrast, can appear in many
distinct records of many distinct entities. Thus, as more and more records are added to a deduplicated dataset, the
recurrence frequency of a signature is upper bounded by the number of
distinct records of its entity. The recurrence frequency of a
non-signature, however, may keep on growing.
This is intuitively
clear from Figure~\ref{fig:st}, where the recurrence frequencies of
non-signature records like \emph{Street} and \emph{St} increase much more quickly, bounded only by the size of the database, as more street names are added to the database.
This difference in recurrence frequency between signatures and non-signatures is the major clue behind our technique to (probabilistically) separate them.
\subsection{Probability of Observing a Signature}
Empirically, it appears sufficient to let the probability of a subrecord being a signature decrease as its recurrence grows, for example via a Poisson distribution with a low mean or a power-law distribution.
In the following, we attempt to derive such a distribution from first principles, which at least will provide an understanding of the inherent assumptions we are making in using such a distribution.
Given a record $s$, the probability of a randomly sampled record $r$ satisfying $s \sqsubseteq r$ is given by a Bernoulli distribution with parameter $p_s$.
The probability that the given $s$ recurs $k$ times as a subrecord in a deduplicated dataset of size $n$ is therefore governed by a binomial distribution $\mathit{binom}(k; n, p_s)$.
Now consider the probability that a randomly sampled subrecord recurs $k$ times in a deduplicated dataset of size $n$, which is given by
\begin{equation}
P(k) = \sum_{s} P(s) \cdot \mathit{binom}(k; n, p_s).
\end{equation}
If the $p_s$'s are mostly small, which is true in our case, then one can show through empirical simulations that this mixture of binomial distributions with different parameters can be approximated by a Poisson distribution
\begin{equation}
P(k)\approx e^{-\lambda}\frac{\lambda^k}{k!}
\end{equation}
for a suitable $\lambda$ that can be estimated from data.
Therefore, the recurrence of a subrecord, whether a signature or not, follows a Poisson distribution; the difference between signatures and non-signatures lies in the average recurrence frequency.
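The quality of this Poisson approximation is easy to check numerically. The sketch below builds the exact mixture $\sum_s P(s)\,\mathit{binom}(k;n,p_s)$ for a handful of small, equally likely $p_s$ values (invented for illustration) and compares it with a Poisson distribution of matching mean:

```python
import math

def binom_pmf(k, n, p):
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

def poisson_pmf(k, lam):
    return math.exp(-lam) * lam**k / math.factorial(k)

n = 1000
ps = [0.0018, 0.0020, 0.0022]      # small p_s values, equally likely
lam = n * sum(ps) / len(ps)        # matching mean: lam = 2.0

# Largest pointwise gap between the exact binomial mixture
# and the Poisson approximation over the relevant range of k.
max_gap = max(
    abs(sum(binom_pmf(k, n, p) for p in ps) / len(ps) - poisson_pmf(k, lam))
    for k in range(15)
)
```

For these parameters the gap stays well below $0.01$ at every $k$, consistent with the approximation used above.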
Denote the set of signatures with $S$. Let $\lambda$ and $\mu$ be
the expected recurrence frequency of a signature and a non-signature,
respectively. The probability of observing a signature or a
non-signature $k$ times is therefore
\begin{equation}
P(k \mid r\in S)=e^{-\lambda}\frac{\lambda^k}{k!} ~\;\mathtt{and}\;~ P(k\mid r\notin S)=e^{-\mu}\frac{\mu^k}{k!}.
\end{equation}
By Bayes rule, when we observe a subrecord $k$ times in a dataset,
the probability for this subrecord to be a signature is given by
\begin{equation}\label{eq:poisson}
P(r\in S\mid k)=\frac{P(k \mid r\in S)P(r\in S)}{P(k)},
\end{equation}
where
\begin{align}
P\big(k\big) =&P\big(k\wedge r\in S\big)+P\big(k\wedge r\notin S\big)\\
=&P(k \mid r\in S)P(r\in S)+P(k \mid r\notin S)P(r\notin S).
\end{align}
We also assume $P(r \in S)$
follows a Bernoulli distribution with parameter $c$:
\begin{equation}
P(r\in S)=c.
\end{equation}
Substituting these into Equation~(\ref{eq:poisson}), we have
\begin{align}
&~P(r\in S\mid k)\\
=&~\frac{P(k \mid r\in S)P(r\in S)}{P(k \mid r\in
S)P(r\in S)+P(k \mid r\notin S)P(r\notin S)}\\
=&~\frac{1}{1+\frac{P(k \mid r\notin S)(1-P(r\in S))}{P(k \mid r\in
S)P(r\in S)}}\\
=&~\frac{1}{1+e^{\lambda-\mu}(\frac{\mu}{\lambda})^k\frac{1-c}{c}}
\end{align}
Letting $a=\frac{\mu}{\lambda}$ and $b=e^{\lambda-\mu}\frac{1-c}{c}$,
we can state the result as
\begin{equation}\label{eq:result}
P(r\in S\mid k)=\frac{1}{1+a^k b}.
\end{equation}
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{plot.pdf}
\caption{Plot of Equation~(\ref{eq:result})\label{fig:ab} by fixing either $a$ or $b$ and changing the other.}
\end{figure}
In practice, since there are more distinct signatures than
non-signatures, i.e.\ $c>1-c$, and a non-signature appears more
frequently than a signature, i.e.\ $\mu>\lambda$, we usually have
$b<1<a$. We can understand the parameters of $P(r \in S \mid k)$
by noting that $a$ controls how fast $P(r \in S \mid k)$ decays as
$k$ increases, and $b$ controls the maximum of $P(r \in S \mid k)$,
as shown in Figure~\ref{fig:ab}.
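The resulting posterior is a two-parameter family that is cheap to evaluate at scale. A minimal sketch (the parameter values in the test are illustrative, not fitted):

```python
def signature_posterior(k: int, a: float, b: float) -> float:
    """P(r in S | recurrence k) = 1 / (1 + a^k * b),
    the closed form derived above."""
    return 1.0 / (1.0 + a**k * b)

# With b < 1 < a, the posterior starts near 1/(1+b) at k = 0
# and decays monotonically towards 0 as the recurrence k grows.
```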
\subsection{Record Linkage via Common Signatures}
\label{subsec:record linkage}
So far we have worked out how to compute the probability for a single
record to be a signature given its recurrence. In practice, computing
the common subrecords between every pair of records, checking the
recurrence of these subrecords in the database, and then computing
the signature probabilities is prohibitively expensive.
We now show how these probabilities can be approximated efficiently
in a large database. The main idea is to pre-compute a set of
subrecords -- call them \emph{candidate signatures} --
from each record in the database, as well as the probability for each
of these subrecords to be a signature. Given two records $r_i$ and $r_j$, we
approximate the probability for them to share a signature with the
probability of at least one candidate signature shared by both
records being a signature. This approximation can be accelerated by
inverted indices.
More specifically, let $I=\{(s,R_s,p_s)\}$ denote the inverted index
of a database, where each $s$ (inverted index key) denotes a subrecord, $R_s$ denotes the
set of records that contain $s$ as a subrecord, and $p_s=P(s\in S
\mid k=|R_s|)$ is the probability of $s$ being a signature. Computing
linkage probabilities consists of the following steps:
\begin{enumerate}
\item \textbf{Generation}: From each $(s,R_s,p_s)\in I$, generate
all tuples of the form of $(r_i,r_j,s,p_s)$ such that
$r_i,r_j\in R_s$.
\item \textbf{Elimination}: From all tuples $(r_i,r_j,s,p_s)$
containing the same $r_i$ and $r_j$, we eliminate those
tuples whose $s$ appears as a subrecord in another tuple.
Following Proposition~\ref{prop:superrecord signature}, this is because if a subrecord is a
signature, then all its superrecords must be signatures.
We therefore only need to assess the superrecords.
\item \textbf{Product}: We assume the probability for two
subrecords being signatures to be independent if they are
not a subrecord of each other. The probability of $r_i$ and
$r_j$ sharing a signature can then be computed as
$1-\prod_s(1-p_s)$ over all $s$ in the remaining tuples
$(r_i,r_j,s,p_s)$ for the record pair $r_i$ and $r_j$.
\end{enumerate}
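The Generation, Elimination, and Product steps map directly onto a few dictionary operations. A sketch (function names and the toy index in the test are ours; a production system would run these steps over an inverted index in a parallel database):

```python
from collections import defaultdict
from itertools import combinations

def is_subrecord(s, r):
    it = iter(r.split())
    return all(w in it for w in s.split())

def link_probabilities(index):
    """index: iterable of (s, R_s, p_s) triples.  Returns a dict
    {(r_i, r_j): P(r_i and r_j share a signature)}."""
    # 1. Generation: one tuple per record pair sharing the key s.
    tuples = defaultdict(list)            # (r_i, r_j) -> [(s, p_s)]
    for s, records, p in index:
        for ri, rj in combinations(sorted(records), 2):
            tuples[(ri, rj)].append((s, p))
    probs = {}
    for pair, sp in tuples.items():
        # 2. Elimination: keep only keys that are maximal w.r.t. the
        #    subrecord relation (superrecords subsume their subrecords).
        kept = [(s, p) for s, p in sp
                if not any(s != t and is_subrecord(s, t) for t, _ in sp)]
        # 3. Product: combine the surviving p_s, assumed independent.
        q = 1.0
        for _, p in kept:
            q *= 1.0 - p
        probs[pair] = 1.0 - q
    return probs
```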
We can further improve the efficiency by setting a probability
threshold during generation. That is, we only generate tuples
$(r_i,r_j,s,p_s)$ whose $p_s > \rho$. In other words, when
generating tuples we only consider subrecords whose probability of
being a signature exceeds the threshold $\rho$. This filtering allows
us to remove a large number of subrecords with low probability of
being signatures at an early stage.
The Elimination step above can be skipped if the precomputed
subrecords from each raw record by design do not contain each other
as subrecords.
After obtaining the probability for a pair of records to share a
signature, we can place the two in a block if this probability exceeds the threshold $\tau$. Note that blocks built this way contain two and only two records each.
One can then employ any similarity function, such as Jaccard similarity, edit distances like Levenshtein and Jaro, or other domain-specific functions~\cite{Christen:2012:DMC:2344108}, to decide whether to link them at all.
When the parameters $a$, $b$, and $\tau$ are carefully tuned using training data, one can simply link all pairs of records whose linkage probability exceeds the threshold $\tau$.
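As one concrete, hypothetical instance of such a post-linkage check, a thresholded word-level Jaccard similarity (the threshold value is illustrative):

```python
def jaccard_verify(r_i: str, r_j: str, threshold: float = 0.5) -> bool:
    """Accept a candidate link only if the word sets of the two
    records overlap enough (word-level Jaccard similarity)."""
    a, b = set(r_i.split()), set(r_j.split())
    if not (a or b):
        return False
    return len(a & b) / len(a | b) >= threshold
```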
\section{Connected Components: A Scalable In-Database Algorithm}\label{sec:cc}
The previous section describes how pairs of records can be linked via
probabilistic identification of common signatures.
In this section, we present a scalable algorithm to assign a
consistent label (entity identifier) to records which are linked
either directly or indirectly. The problem is equivalent to the
problem of finding connected components in a general graph \cite{Cormen:2009}, except that the graph in our case is too large to allow random access.
In the following, we propose a
connected-component labelling algorithm that works on large graphs
stored in a distributed, parallel database.
Without loss of generality, we will label each connected component
with the smallest node (record) identifier of the component. Our
algorithm contains two iterative steps. We first transform the input
graph into equivalent trees (a forest) such that nodes on each
connected component are in the same tree, and that the identifier of
a descendant is always larger than that of its ancestors. We then
transform the forest into an equivalent forest in which the height of
all the trees is one. Upon convergence, all nodes in the same
connected component will be connected directly to the root node,
which can then be used as the consistent identifier for all records
in the tree.
Figure~\ref{fig:cc} shows an example. The input (left) is a set of
node-pairs ($e_1$,$e_2$), ($e_1$,$e_4$), ($e_2$,$e_3$), ($e_2$,$e_4$),
($e_2$,$e_5$), and ($e_3$,$e_5$). Without loss of generality, we
always use the smaller entity identifier as the first element in
each pair. We know this is not yet a forest because some nodes, such
as nodes $e_4$ and $e_5$, have more than one parent. When a node has
more than one parent, namely when the node-pairs contain patterns
like ($e_1$,$e_j$), ($e_2$,$e_j$), $\dots$, and ($e_i$,$e_j$), we do
the following replacement:
\begin{align}\label{eq:replacement 1}
(e_1,e_j),(e_2,e_j),\dots,(e_i,e_j) &\Rightarrow \nonumber\\
(e^\star,e_j),(e^\star,e_1),&(e^\star,e_2),\dots,(e^\star,e_i)
\end{align}
where
\begin{equation}
e^\star=\min(e_1,e_2,\dots,e_i)\quad.
\end{equation}
This is a grouping operation per node $e_j$ that can be implemented
efficiently in a parallel database. During the replacement we drop
duplicated edges and self-loops (an edge connecting a node to the
node itself).
\begin{figure}[t!]
\centering
\includegraphics[width=0.45\textwidth]{conn-component.pdf}
\caption{\label{fig:cc}Left: input graph; middle: transformed to
trees; right: reduce tree height to one.}
\end{figure}
Through such a replacement, we guarantee that
\begin{enumerate}\itemsep0mm\parskip0mm
\item the connections between $e_1$, $e_2$, $\dots$, $e_i$, $e_j$
are preserved; and
\item $e_j$ ends up with a single parent.
\end{enumerate}
The newly added node pairs may introduce new parents to existing
nodes in the graph. We therefore apply the replacement step
(Equation~(\ref{eq:replacement 1})) recursively until every node has a
single parent. Convergence is guaranteed because the sum of node
identifiers in the list is non-negative and each replacement always
reduces this sum by a positive integer. Upon convergence of the first
replacement step, we obtain the second graph (middle) in Figure~\ref{fig:cc}
which is a forest with node-pairs ($e_1$,$e_2$), ($e_1$,$e_4$),
($e_2$,$e_3$), and ($e_2$,$e_5$).
A tree's height is larger than one if its node-pairs contain
patterns like ($e_i$,$e_j$) and ($e_j$,$e_k$), namely a node exists
as a parent and a child at the same time. For a tree whose height is
larger than one, we iteratively do the following replacement
\begin{equation}
(e_i,e_j),(e_j,e_k)\Rightarrow(e_i,e_j),(e_i,e_k)
\end{equation}
until the height of all trees becomes one, as shown on the right side of Figure~\ref{fig:cc}. This is a join operation
that can be implemented efficiently in a parallel database. If we
denote by $h$ the height of the highest tree in the forest, then
the above
converges in $\log_2(h)$ rounds.
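The two replacement rules can be sketched as a small in-memory prototype; a production system would express them as the group-by and join operations described above over a distributed edge table. Names and data layout here are illustrative. Edges are kept with the smaller node identifier first, so every parent is smaller than its child:

```python
def connected_components(edge_list):
    """Label each node with the smallest node id of its component,
    mimicking the two iterative steps described above."""
    edges = {(min(a, b), max(a, b)) for a, b in edge_list if a != b}
    # Step 1: apply the replacement rule until every child has
    # exactly one parent (the smallest of its former parents).
    while True:
        parents = {}
        for a, b in edges:
            parents.setdefault(b, set()).add(a)
        multi = {b: ps for b, ps in parents.items() if len(ps) > 1}
        if not multi:
            break
        for b, ps in multi.items():
            star = min(ps)
            edges -= {(p, b) for p in ps}
            edges |= {(star, b)} | {(star, p) for p in ps if p != star}
    # Step 2: pointer jumping until every tree has height one.
    parent = {b: a for a, b in edges}
    changed = True
    while changed:
        changed = False
        for b, a in list(parent.items()):
            if a in parent:            # a grandparent exists: shortcut
                parent[b] = parent[a]
                changed = True
    return parent
```

On the edge list of Figure~\ref{fig:cc}, step 1 produces exactly the middle forest, and step 2 flattens it so that $e_2, \dots, e_5$ all point to $e_1$.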
\section{The p-Signature Algorithm}
\label{sec:signature-er}
We are now ready to present our signature-based algorithm for ER, which is given in Algorithm~\ref{alg:1}.
The algorithm requires these inputs:
\begin{itemize}
\item $O=\{r\}$, a set of unlabelled observations;
\item $U=\{s\}$, a set of subrecords selected by users as candidate signatures based on domain knowledge;
\item $\rho$ and $\tau$, thresholds: we consider a
subrecord only if its probability of being a signature
exceeds $\rho$; and we adopt a link if the probability for two
records to share a signature exceeds $\tau$; and
\item $v$, an optional similarity function.
\end{itemize}
The first four steps of the algorithm are as described in Section~\ref{subsec:record linkage}.
In the algorithm, $\gets$ denotes the operation of adding an element to a set and $\backslash$ denotes the operation of removing an element from a set.
In Step~\ref{algo1:step1}, $I=\{(s,R_s,p_s)\}$ denotes the inverted index of $O$ with respect to $U$, where $s\in U$, $R_s\subseteq O$ denotes the set of records all containing $s$ as a subrecord, and each $p_s=P(s\in S\mid k=|R_s|)$ is the probability of $s$ being a signature given that $s$ appears in $|R_s|$ different records in the database (see Equation~(\ref{eq:result})).
In Step~\ref{algo1:step2}, the condition $r_i < r_j$ ensures that we do not generate symmetric entries.
Step~\ref{algo1:step3} can be done because of Proposition~\ref{prop:superrecord signature}.
Step~\ref{algo1:pairwise} selects the final pairwise linkages based on the potential linkages computed earlier.
The first three steps can be thought of as the blocking/indexing step in a standard ER framework, and Step~\ref{algo1:pairwise} can be thought of as the record comparison step.
At the end of Step~\ref{algo1:pairwise}, $L=\{(r_i,r_j)\}$
holds all the detected links between records in $O$.
In Step~\ref{algo1:last step}, $c$ denotes the connected components algorithm described in Section~\ref{sec:cc}.
\begin{figure}[t!]
\begin{algorithm}[H]
\begin{algorithmic}[1]
\REQUIRE $O=\{r\}$, $U=\{s\}$, $\rho$, $\tau$, $v$ \medskip
\STATE\label{algo1:step1} Build inverted index:
$$I \gets (s,R_s,p_s) ~,\; \forall s \in U, R_s \subseteq O, p_s
> \rho$$
\STATE\label{algo1:step2} Generate potential linkages:
$$K \gets (r_i,r_j,s,p_s)~, \forall (s,R_s,p_s) \in I,~ r_i,r_j
\in R_s, r_i<r_j$$
\STATE\label{algo1:step3} Eliminate redundant linkages:
$$K=K \backslash (r_i,r_j,s,p_s),~ \forall r_i, r_j, s,p_s $$
if $(r_i,r_j,\hat{s},p_{\hat{s}})\in K$ and
$s$ is a subrecord of $\hat{s}$.\medskip
\STATE\label{algo1:pairwise} Finalise pairwise linkages:
$$L\gets(r_i,r_j)$$
for all $r_i$ and $r_j$ such that
$$ 1-\prod_{(r_i,r_j,s,p_s)\in K}(1-p_s) >\tau\quad $$
and $v(r_i,r_j)=\text{true}$ \medskip
\RETURN $c(L)$.\label{algo1:last step}
\end{algorithmic}
\caption{\label{alg:1} Signature-based Entity Resolution}
\end{algorithm}
\end{figure}
\subsubsection*{Candidate Signatures}
The ER algorithm above requires a user to specify a set of
candidate signatures as input.
These candidate signatures affect both the accuracy and the computational complexity of the algorithm; they should be chosen based on domain knowledge about the database and can differ from case to case.
In Section~\ref{sec-experiments}, we will provide some concrete examples of candidate signature specifications and discuss the issue of how to construct good candidate signatures.
\subsubsection*{Post-Verification Rules}
An important but optional parameter in Algorithm~\ref{alg:1} is $v$, the post-verification rules.
It is largely an optional parameter when training data is available to tune the other parameters.
But when training data is not available, $v$ is a mechanism for the user to supply additional domain knowledge to improve ER accuracy.
The post-verification rules can be as simple as a suitably thresholded distance function like Jaccard or Jaro-Winkler.
However, it is more commonly used to resolve prickly and context-dependent cases like family members that live in the same address, a person and his company (e.g. John Smith and John Smith Pty Ltd), and distinct franchisees that use a common bank account.
\subsubsection*{Computational Complexity and Implementation}
The computational complexity of Algorithm~\ref{alg:1} is dominated by the first two steps, which have time and space complexity $O(m)$, where $m$ is the number of distinct candidate signatures extracted.
Most natural choices of candidate signatures lead to $m \sim O(n)$, where $n$ is the size of the (deduplicated) dataset. The scalability of the algorithm is studied empirically in Section~\ref{sec-experiments}.
We have two implementations of the algorithm, one in SQL running on Greenplum, and one in Scala running on Spark.
The SQL code is similar in structure to that in \cite{zhang17} and
involves only joins (all efficiently executable using hash-join \cite{zeller90}) and straightforward group-by operations.
The Spark version has fewer than 100 lines of code and is the one we use in a production system.
Both the parallelised SQL and Spark code are undergoing due diligence to be made open-source and available on GitHub.
\section{Experimental Evaluation}
\label{sec-experiments}
\newcolumntype{C}[1]{>{\centering\let\newline\\\arraybackslash
\hspace{0pt}}m{#1}}
\newcolumntype{M}[1]{>{\centering\let\newline\\\arraybackslash
\hspace{0pt}\columncolor[gray]{0.9}}m{#1}}
\begin{table*}[th!]
\caption{\label{tab:statistics}A summary of the results of applying the proposed
algorithm on benchmark datasets.}
\centering
\begin{small}
\begin{tabular}{|c|c|c|c|c|c|c|c|c|c|c|} \hline
~ & DBLP & Scholar & DBLP & ACM & Abt & Buy & Amazon &
Google & NCVR-2014 & NCVR-2017 \\ \hline \hline
Records & 2,616 & 64,263 & 2,616 & 2,294 & 1,081 & 1,092
& 1,363 & 3,226 & 5,616,368 & 7,861,249 \\ \hline
Subrecords & 547,722 & 6,052,597 & 742,952 & 558,731 &
24,348 & 25,179 & 21,037 & 77,787 & 131,218,277 &
162,115,747 \\ \hline
Ground truth & \multicolumn{2}{c|}{5,347} & \multicolumn{2}
{c|}{2,224} & \multicolumn{2}{c|}{1,097} & \multicolumn{2}
{c|}{1,300} & \multicolumn{2}{c|}{5,015,915} \\\hline
Precision & \multicolumn{2}{c|}{91.0\%} & \multicolumn{2}
{c|}{97.7\%} & \multicolumn{2}{c|}{87.9\%} & \multicolumn{2}
{c|}{60.2\%} & \multicolumn{2}{c|}{96.3\%} \\ \hline
Recall & \multicolumn{2}{c|}{89.5\%} & \multicolumn{2}{c|}{97.4\%}
& \multicolumn{2}{c|}{60.4\%} & \multicolumn{2}{c|}{66.1\%} &
\multicolumn{2}{c|}{89.5\%}\\ \hline
F-measure & \multicolumn{2}{c|}{90.2\%} & \multicolumn{2}
{c|}{97.6\%} & \multicolumn{2}{c|}{71.6\%} & \multicolumn{2}
{c|}{63.0\%} & \multicolumn{2}{c|}{92.8\%} \\ \hline
Time & \multicolumn{2}{c|}{10 sec} & \multicolumn{2}{c|}{6 sec} &
\multicolumn{2}{c|}{6 sec} & \multicolumn{2}{c|}{10 sec} &
\multicolumn{2}{c|}{307 sec}\\\hline
\end{tabular}
\end{small}
\end{table*}
We use six different ER problems to empirically evaluate the proposed algorithm.
The entities in these six
problems range from academic publications and commercial products, to
individuals and organisations. The datasets range from thousands
to billions of records in size. There is also a large
diversity of data quality issues, including incompleteness,
incompatible formats, errors, and temporal inconsistency. We use
these datasets to benchmark the accuracy as well as scalability of
our proposed algorithm.
All the experiments are done using the open-source Greenplum Database running on 8 servers (1 master + 7 slaves), each with 20 cores, 320 GB, and 4.5 TB usable RAID10 space.
The results are summarised in Table~\ref{tab:statistics}.
\subsection{Entity Resolution Quality}
In the first experiment, we apply our algorithm to the four publicly
available datasets evaluated in~\cite{Kopcke:2010:EER:1920841.1920904}
where ground truth is available:
(1) DBLP-ACM, (2) DBLP-Google Scholar, (3) Abt-Buy, and (4)
Amazon-Google-Products.
The entities in the first two datasets are academic publications,
and each record contains title, authors, venue, and year of
publication. The entities in the third and fourth datasets are
consumer products, and each record contains name, description,
manufacturer, and price.
\begin{table*}[th!]
\caption{\label{tab:f} F-measure of our proposed method
($\sqsubseteq$-signature) as well as five existing methods on
four benchmark datasets, as reported
by~\protect\cite{Kopcke:2010:EER:1920841.1920904}. The top
performer of each dataset is presented in bold font.}
\centering
\begin{small}
\begin{tabular}{|l|C{0.1\textwidth}|C{0.1\textwidth}|C{0.1
\textwidth}|C{0.1\textwidth}|C{0.1\textwidth}|C{0.1\textwidth}|}
\hline
~ & COZY & FEBRL \newline FellegiSunter & FEBRL SVM & MARLIN
ADTree & MARLIN SVM & $\sqsubseteq$-Signature\\ \hline \hline
DBLP-Scholar & 82.9 & 81.9 & 87.6 & 82.9 & 89.4 & \textbf{90.2}
\\ \hline
DBLP-ACM & 93.8 & 96.2 & \textbf{97.6} & 96.4 & 97.4 &
\textbf{97.6} \\ \hline
Abt-Buy & 65.8 & 36.7 & 71.3 & 54.8 & 70.8 &\textbf{71.6} \\
\hline
Amazon-Google & 62.2 & 53.8 & 60.1 & 50.5 & 59.9 &
\textbf{63.0}\\\hline
\end{tabular}
\end{small}
\end{table*}
For academic publications, we use the following two types of subrecords as candidate signatures:
\begin{enumerate}\itemsep1mm\parskip0mm
\item three consecutive words in title; and
\item two consecutive words in title, plus two random words in
authors.
\end{enumerate}
For commercial products, we use the following three types of candidate signatures:
\begin{enumerate}\itemsep1mm\parskip0mm
\item one word from name;
\item two consecutive words from name; and
\item three consecutive words from name.
\end{enumerate}
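For concreteness, the following Python sketch (function names are ours, not from the implementation) generates the two candidate-signature types used for publications:

```python
import random

def consecutive_ngrams(words, n):
    """All runs of n consecutive words, e.g. the word trigrams of a title."""
    return [" ".join(words[i:i + n]) for i in range(len(words) - n + 1)]

def publication_signatures(title, authors, rng=random):
    """Candidate signatures of types (1) and (2) for a publication record."""
    t, a = title.lower().split(), authors.lower().split()
    sigs = set(consecutive_ngrams(t, 3))       # type 1: three consecutive title words
    for bigram in consecutive_ngrams(t, 2):    # type 2: title bigram plus
        if len(a) >= 2:                        #   two random author words
            sigs.add(bigram + " " + " ".join(rng.sample(a, 2)))
    return sigs
```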
Following previous evaluation
work~\cite{Kopcke:2010:EER:1920841.1920904}, we run our algorithm
multiple times with varying parameters
and then pick the best-performing model. We use F-measure to quantify the
performance, which is defined as the harmonic mean of precision and recall
\cite{Christen:2012:DMC:2344108}.
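As a worked check, the standard F-measure formula applied to the DBLP-Scholar row of Table~\ref{tab:statistics} (precision 91.0\%, recall 89.5\%) recovers the reported 90.2\%:

```python
def f_measure(precision, recall):
    """F-measure: harmonic mean of precision and recall."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# DBLP-Scholar row: P = 91.0%, R = 89.5%  ->  F = 90.2%
dblp_scholar_f = f_measure(0.910, 0.895)
```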
We note that the legitimacy of using the F-measure to evaluate ER algorithms is questioned in a recent paper~\cite{Hand2017}.
However, we use the F-measure here because it allows direct
comparisons with the earlier evaluation presented
in~\cite{Kopcke:2010:EER:1920841.1920904} (which does not include
precision and recall results).
The results of our method and five other algorithms, three of which
are supervised machine-learning-based classification algorithms, are
presented in Table~\ref{tab:f}. The performance of the other
algorithms is taken from \cite{Kopcke:2010:EER:1920841.1920904}.
The top performer for each dataset is highlighted in bold.
Our proposed method achieves state-of-the-art results
on all four datasets (tied for first in one case).
Although the winning margin may not always be statistically significant, the consistently
good performance across the four diverse datasets is noteworthy.
Note also that the performance of our method is achieved by fixed
subrecord types as described above. It is possible to further
improve the current performance with other types of subrecords that
are customised for each dataset.
\subsection{Entity Resolution Scalability}
To test the scalability of our method, we employ it to link records across
two snapshots of the North Carolina Voter Registration (NCVR)
database (\url{http://dl.ncsbe.gov/}). We used a snapshot from
October 2014 and linked it with a snapshot from October 2017. We
used the following information of each voter for the linkage:
\begin{itemize} \itemsep1mm\parskip0mm
\item full name (first, middle, and last name);
\item residential address (street, city, zipcode and state);
\item mail address (street, city, zipcode and state);
\item phone number;
\item birth state; and
\item birth age.
\end{itemize}
Note that there is a temporal aspect to this particular ER problem,
in that each attribute above for the same voter may change over
the three years, except birth state and birth age (with age being
increased by three from 2014 to 2017). Among the 5,015,915 voters
who appear in both datasets, the percentage of voters who changed
their name, residential address, mail address, or phone number are
$5\%$, $33\%$, $33\%$, and $48\%$, respectively. Moreover, $3\%$ of
the voters changed their birth state, and $6\%$ of the voters have
inconsistent age (not differing by 3 years) in the two datasets.
Each voter also has an identifier (NCID), which is used to generate
the ground truth for ER.
We used the following subrecords as candidate signatures:
\begin{itemize} \itemsep1mm\parskip0mm
\item two random words from name, two consecutive words from
residential address;
\item two random words from name, two consecutive words from mail
address;
\item two random words from name, last six digits from phone number;
and
\item full name, birth state, birth age;
\end{itemize}
where birth age from NCVR-2014 is incremented by 3 to align with
that in NCVR-2017.
As Table~\ref{tab:statistics} shows,
while the size of the NCVR dataset is
about 1,000 times larger than the other benchmark datasets,
the total time used for ER only increased 30 to 50 times.
No previous ER work has been applied to the NCVR dataset at this scale, which makes direct comparison difficult.
A relevant previous work is \cite{Hu2017}, which randomly sampled subsets of
size 5,000, 10,000, 50,000, and 100,000 from the NCVR dataset to
implement temporal ER. As \cite{Hu2017} shows, the performance of
the considered algorithms monotonically declines as the size of the
sampled dataset increases. The top performer on the largest subset,
which contains 100,000 records, achieved an F-measure of $92\%$ per
\cite{Hu2017}. Our method is applied to the complete datasets
between two time points and achieved comparable accuracy.
\subsection{Large Scale Transitive ER}
So far we have only considered scenarios where ER is performed between two
different datasets in a pairwise manner. Now we consider ER
within a single dataset (deduplication). The considered dataset is maintained by
an Australian Government agency, containing over 690 million reports
submitted by over 10,000 organisations over 10 years. More than 3.9
billion individuals and organisations appear in these
reports. Our aim is to identify records by the same individuals and
organisations and link them together.
When an entity appears in a report, some or all of the following
information may be provided: name, proof of ID (such as driver's
license or passport), address, date of birth, company number,
telephone number, email, bank account number.
The format of each type of information differs from report to report.
In most reports, one or more attributes are not available. Since we
have no ground truth for this dataset, we report only the
scalability of our proposed algorithm.
After removing exact duplicate records, the number of distinct
records was reduced to around 300 million. To handle the poor
data quality, we generated 13 types of candidate signatures from each record. In particular, the first 7 types of candidate signatures contain two random words from name followed by any of the following:
\begin{enumerate}\itemsep1mm\parskip0mm
\item two consecutive address words;
\item last six digits of ID number;
\item date of birth;
\item last six digits of company
number;
\item last six digits of telephone
number;
\item email; and
\item last six digits of account number.
\end{enumerate}
The other 6 types of candidate signatures contain two consecutive address words followed by any of items 2--7 above.
We do not require two name words to be consecutive to allow names in
inconsistent formats to be compared. We however require address words
to maintain their input order because the order of address words is
more consistent than that of name, and an address is usually much
longer than a name, and there would be too many unordered
combinations to consider. We use the last six digits of account
number, telephone number, and proof of ID, because these attributes
are usually longer than six digits, the ending parts of these
attributes usually have more consistent format than their starting
parts, and being identical in the last six digits rarely leads to
false matches especially when they are concatenated with name.
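The extraction of such suffix-based signatures can be sketched as follows (illustrative Python; function names are ours, and for brevity the sketch enumerates all unordered name-word pairs rather than sampling two random ones). Shown here is the name-plus-telephone type:

```python
import re
from itertools import combinations

def last_six_digits(value):
    """Keep only the digits of a value and return the final six, or None."""
    digits = re.sub(r"\D", "", value or "")
    return digits[-6:] if len(digits) >= 6 else None

def name_phone_signatures(name, phone):
    """Candidate signatures: two name words plus the phone-number suffix."""
    suffix = last_six_digits(phone)
    if suffix is None:
        return set()
    words = sorted(set(name.lower().split()))
    return {w1 + " " + w2 + " " + suffix for w1, w2 in combinations(words, 2)}
```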
\begin{table}
\caption{\label{tab:huge} Large-scale Transitive ER: size of each intermediate output and the time taken}
\centering
\begin{small}
\begin{tabular}{|l|r|r|} \hline
& Size~~~~~~ & Time~~~ \\ \hline \hline
Records & 3,989,630,008 & \\ \hline
Distinct records & 268,657,406 & 1,585 sec \\ \hline
Candidate signatures & 4,084,438,114 & 626 sec \\ \hline
Pairwise links & 1,002,675,163 & 6,839 sec \\ \hline
Verified links & 623,498,453 & 3,083 sec \\ \hline
Connected components & 148,163,665 & 496 sec \\ \hline
Overall on Greenplum & & 12,629 sec \\ \hline\hline
Overall on SparkSQL && 5,044 sec\\ \hline
\end{tabular}
\end{small}
\end{table}
One practical difficulty in applying the proposed algorithm to a real and large
dataset is that we have no labelled data to tune our parameters.
In our business context, a false link usually has a much higher cost than a
missing link.
We therefore adopted some post-verification rules such as Jaccard distance on
linked entities to further improve precision at the cost of lower recall.
Some statistics of our proposed method on this large dataset are
given in Table \ref{tab:huge}. As can be seen, resolving over 3.9
billion records with the proposed method takes around three and a
half hours. Compared to resolving 12 million records in the NCVR
datasets in 307 seconds, our algorithm scales in sublinear time.
Besides Greenplum, we also implemented our algorithm with SparkSQL and
resolved the same 3.9 billion records on a server about four times as large as the Greenplum cluster.
The processing time was reduced to 5,044 seconds. Note that the 5,044 seconds include the time needed to save the output of each step to HDFS for debugging purposes.
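The sublinear-scaling observation can be checked directly against the figures reported in Tables~\ref{tab:statistics} and~\ref{tab:huge} (a few lines of arithmetic over the reported numbers; roughly 300 times more records take only about 41 times longer):

```python
# Figures as reported in the tables (records and total ER time on Greenplum).
records_ncvr = 5_616_368 + 7_861_249   # NCVR-2014 + NCVR-2017 snapshots
time_ncvr = 307                        # seconds
records_large = 3_989_630_008          # transaction-report dataset
time_large = 12_629                    # seconds

size_ratio = records_large / records_ncvr  # ~296x more records
time_ratio = time_large / time_ncvr        # ~41x more time
```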
\subsection{Practical Considerations}
We now discuss several important practical considerations of our approach.
\medskip
\noindent {\bf Choice of Candidate Signatures:}
As stated earlier, the choice of candidate signatures depends on domain knowledge and has an impact on both the accuracy and computational complexity of the ER algorithm.
Here are some general guidelines on setting this parameter.
\begin{enumerate}\itemsep1mm\parskip0mm
\item A candidate signature should be short so that it has a good
chance of recurring in multiple records.
\item A candidate signature should be distinctive enough so that
it has a good chance to be a signature.
\item All (unambiguous) records should have at least one non-empty signature.
\end{enumerate}
The three guidelines can pull us in opposite directions.
As can be seen in Section~\ref{sec-experiments}, we usually want to extract small subrecords from key attributes in a record as candidate signatures, but these subrecords may not be sufficiently distinctive on their own.
An effective way to improve the distinctive power of such short candidate signatures
is to concatenate subrecords from multiple attributes, such as using
\emph{name}+\emph{address}, \emph{name}+\emph{phone number}, and so on.
To the extent possible, we want to ensure that each record in the dataset has at least one signature,
by making sure that at least one candidate signature with sufficiently high probability can be extracted from each record.
This is not always possible when there exist inherently ambiguous records like (John, Sydney NSW) that cannot be adequately resolved by any method.
But there are plenty of interesting cases in the spectrum of distinctiveness that we would need to handle. Examples of difficult cases include names like John James Duncan (all common first names), names from certain ethnicity like Arabic and Vietnamese names, and addresses in certain countries like India.
In such situations, we should take longer candidate signatures into consideration.
When prior knowledge is not available or inadequate, we can generate candidate
signatures randomly. Because of our probabilistic formulation,
randomly generated subrecords are unlikely to cause false links
but to fully link all relevant records, we may need
to generate a large number of candidate signatures.
In such cases, we may resort to the use of grammars \cite{cohen94,lloyd03logic-learning} to concisely define a search space of candidate signatures that can be enumerated in a systematic and exhaustive way for testing.
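A minimal sketch of such random candidate-signature generation (illustrative Python, not the grammar-based enumeration of \cite{cohen94,lloyd03logic-learning}):

```python
import random

def random_subrecords(record, k, length, rng=random):
    """Draw up to k random word subsets of a given length as candidate signatures."""
    words = record.lower().split()
    if len(words) < length:
        return set()
    # sorted() canonicalises word order so the same subset always yields
    # the same signature string across records
    return {" ".join(sorted(rng.sample(words, length))) for _ in range(k)}
```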
The sensitivity of our ER algorithm to the choice of candidate signatures is both a strength and a weakness.
It is a strength in that when good domain knowledge is present, the candidate signatures provide a natural mechanism to capture and exploit that domain knowledge to achieve good accuracy and scalability.
Many existing ER algorithms do not have such a natural mechanism to exploit available domain knowledge.
On the other hand, the sensitivity to the choice of candidate signatures is obviously a weakness in ER applications where no good domain knowledge is available, in which case other ``parameter-free'' algorithms may be more suitable. \medskip
\noindent {\bf Efficiency Overkill?}
Do we really need an ER algorithm that can process millions of records in a few hours?
Ideally, data volumes at that scale are processed once using a batch algorithm and then an incremental algorithm is used to incorporate new data points as they appear.
In practice, many ER algorithms do not have an incremental version.
Even when they do, the results obtained from the batch and incremental algorithms are usually not perfectly consistent with each other.
In our actual target application, up to 1 million new records are added to the database every day.
Incrementally resolving such large numbers of new records in a manner that maintains consistency with the batch algorithm -- a key requirement in the intelligence domain where analytical results can be used as evidence in court proceedings -- is as hard as the problem of performing batch ER on the new full dataset.
Having an ER algorithm that can be rerun on the full dataset every day in a couple of hours is thus important in our setting.
Further, such an efficient algorithm gives us the agility to make changes and experiment with parameters during the development of the algorithm, something impossible to do if the algorithm takes days or weeks to process the entire dataset. \medskip
\noindent {\bf Limitations of P-Signature:}
For efficiency, we choose not to compute all the common subrecords between a pair of records, but to approximate them with a set of precomputed subrecords, typically of limited length. When the precomputed subrecords of a record are all non-distinctive, we will not be able to link this record distinctively to other records of the same entity. To improve the situation, one may consider more diversified and longer candidate signatures at the price of lower efficiency. In addition, the granularity of our token set $W$ affects how robust our signatures are against inconsistency. Currently, words are the finest granularity of our algorithm. That means we will not be able to link a record if it contains typos in every word. To tackle this challenge, we would need to define our vocabulary on q-grams (character substrings of length $q$) or even individual characters instead. In return, however, the distinctiveness of each candidate signature would be weaker. The challenge is that, in its current design, P-Signature can hardly link `Smith' with `Smithh' while at the same time not linking `Julie' with `Juliet'.
\section{Related Work and Discussion}\label{sec:rel-work}
We will start with a review of related work followed by a discussion of key connections between our signature ER framework and some existing ER techniques.
\subsubsection*{Related Work in Entity Resolution}
Entity resolution (ER), also known as record linkage and data
matching~\cite{Christen:2012:DMC:2344108}, has a long history with
first computer based techniques being developed over five decades
ago~\cite{fellegi-sunter69,New59}. The major challenges of linkage
quality and scalability have been ongoing as databases continue to
grow in size and complexity, and more diverse databases have to be
linked~\cite{Don15}. ER is a topic of research in a variety of
domains, ranging from computer
science~\cite{Christen:2012:DMC:2344108,Don15,Nau10} and
statistics~\cite{Her07} to the health and social
sciences~\cite{Har15}. While traditionally ER has been applied on
relational databases, more recently the resolution of entities in
Web data~\cite{DBLP:series/synthesis/2015Christophides} has become
an important topic where the aim is, for example, to facilitate
entity-centric search. The lack of well-defined schemas and data
heterogeneity~\cite{Has13}, as well as dynamic data and the sheer size of Web
data, are challenging traditional ER approaches in this
domain~\cite{DBLP:series/synthesis/2015Christophides}.
The general ER process can be viewed to consist of three major
steps~\cite{Christen:2012:DMC:2344108}: blocking/indexing, record
comparison, and classification, which is sometimes followed by a
merging step \cite{Ben09,DBLP:series/synthesis/2015Christophides}
where the records identified to refer to the same entity are
combined into a new, consistent, single record.
In the first step, as discussed earlier, the databases are split
into blocks (or clusters), and in the second step pairs of records
within the same blocks are compared with each other. Even after data
cleansing and standardisation of the
input databases (if applied) there can still be variations of and
errors in the attribute values to be compared, and therefore
approximate string comparison functions (such as edit distance, the
Jaro-Winkler comparator, or Jaccard
similarity~\cite{Christen:2012:DMC:2344108,Nau10}) are employed to
compare pairs of records. Each compared record pair results in a
vector of similarities (one similarity per attribute compared), and
these similarity vectors are then used to classify record pairs into
\emph{matches} (where it is assumed the two records in a pair
correspond to the same entity) and \emph{non-matches} (where the
records are assumed to correspond to two different entities).
Various classification methods have been employed in
ER~\cite{Christen:2012:DMC:2344108,Don15,Nau10}, ranging from
simple threshold-based to sophisticated clustering and supervised
classification techniques, as well as active learning approaches.
Traditional blocking~\cite{Chr12b} uses one or more attributes as
\emph{blocking key} to insert records that share the same value in
their blocking key into the same block. Only records within the same
block are then compared with each other. To overcome variations and
misspellings, the attribute values used in blocking keys are often
phonetically encoded using functions such as Soundex or
Double-Metaphone~\cite{Christen:2012:DMC:2344108} which convert a
string into a code according to how the string is pronounced. The
same code is assigned to similar sounding names (such as `Dickson'
and `Dixon'). Multiple blocking keys may also be used to deal with
the problem of missing attribute values~\cite{Chr12b}.
An alternative to traditional blocking is the sorted neighbourhood
method~\cite{Dra12,Mon96,Nau10}, where the databases to be
linked are sorted according to a \emph{sorting key} (usually a
concatenation of the values from several attributes), and a sliding
window is moved over the databases. Only records within the window
are then compared with each other. Another way to block databases is
using canopy clustering~\cite{Mcc00a}, where a computationally
efficient similarity measure (such as Jaccard similarity based on
character q-grams as generated from attribute values~\cite{Nau10})
is used to insert records into one or more overlapping clusters,
and records that are in the same cluster (block) are then compared
with each other.
While these existing blocking techniques are \emph{schema-based} and
require a user to decide which attributes(s) to use for blocking,
sorting or clustering, more recent work has investigated
\emph{schema-agnostic} approaches that generate some form of
signature for each record automatically from all attribute
values~\cite{DBLP:series/synthesis/2015Christophides, papadakis2015schema,papadakis13tkde,War06}. While schema
agnostic approaches can be attractive as they do not
require manual selection of blocking or sorting keys by domain
experts, they can lead to sub-optimal blocking performance and might
require additional meta-blocking
steps~\cite{DBLP:series/synthesis/2015Christophides,
Efthymiou:2017:PMS:3050918.3050949,papadakis14tkde} to
achieve both high effectiveness and efficiency by for example
removing blocks that are too large or that have a high overlap with
other blocks.
One schema-agnostic approach to blocking is Locality Sensitive
Hashing (LSH), as originally developed for efficient
nearest-neighbour search in high-dimensional spaces~\cite{Ind98}.
LSH has been employed for blocking in ER by hashing attribute
values multiple times and comparing records that share some hash
values. One ER approach based on MinHash~\cite{Bro1997} and LSH is
\emph{HARRA}~\cite{Kim10}, which iteratively blocks, compares, and
then merges records, where merged records are re-hashed to improve
the overall ER quality.
However, as a recent evaluation of blocking
techniques has found~\cite{Ste14}, blocking based on LSH needs to be
carefully tuned to a specific database in order to achieve both high
effectiveness and efficiency. This requires high quality training
data which is not available in many real-world ER applications.
With the increasing sizes of databases to be linked, there have been
various efforts to parallelize ER algorithms, where both
algorithmic~\cite{Kaw06,Kim07} as well as platform dependent
(assuming for example Map Reduce)~\cite{Efthymiou:2017:PMS:3050918.3050949,Kol13} solutions
have been proposed. A major challenge for parallel ER is load
balancing due to the irregular distribution of the data, resulting
for example in blocks of very different sizes.
Compared to existing approaches to ER, the distinguishing feature of our ER algorithm is a data-driven blocking-by-signature technique that deliberately trades off recall in favour of high precision.
This is in contrast to the standard practice of trading off precision in favour of high recall in most existing blocking algorithms.
To compensate for potential low-recall resulting from our blocking technique, we introduce an additional Global Connected Component step into the ER process, which turns out to be efficiently computable.
As shown in Section~\ref{sec-experiments}, this slightly unusual combination of ideas yielded a new, simple algorithm that achieves state-of-the-art ER results on a range of datasets, both in terms of accuracy and scalability.
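The Global Connected Component step can be implemented with a standard union-find over the verified pairwise links; the following is a minimal sequential Python sketch (the production versions run in parallel on Greenplum/Spark):

```python
def connected_components(edges):
    """Union-find over verified pairwise links; returns a root label per node."""
    parent = {}

    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for a, b in edges:
        ra, rb = find(a), find(b)
        if ra != rb:
            parent[rb] = ra  # union the two components
    return {node: find(node) for node in parent}
```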
\subsubsection*{Connections to the Signature ER Framework}
Perhaps unsurprisingly, many existing ER techniques can be understood in the signature framework described in Section~\ref{sec:problem formulation}. We now point out a few such connections. \medskip
\noindent {\bf Standardisation and Cleansing:}
The most common question about our ER algorithm from industry practitioners concerns the (deliberate) avoidance of an explicit up-front data standardisation and cleansing step.
We now address this.
The canonical form of each record obtained from standardisation and cleansing is actually a type of signature.
Whereas in our algorithm the transform from a record to signatures is the generation of subrecords, in traditional methods the transforms are context- and data-dependent and usually implemented using business rules that can become complicated and hard to maintain over time.
The main benefit of using standardisation and cleansing transforms is that the derived canonical form is almost guaranteed to be a signature.
In our method, by contrast, a derived subrecord is only a signature
with a certain probability.
To compensate, we generate many subrecords for each
database record.
An important benefit of generating many subrecords or signatures
is that two records will be linked if any of these signatures are
shared. In contrast, data standardisation methods produce only one
signature from each record, and the signature/canonical form for
two records of the same entity may be quite different.
This issue is then (partially) addressed by allowing the signatures to be matched in a non-exact way using similarity measures that capture different criteria.
To summarise, our method generates low-cost signatures and matches signatures exactly.
The uncertainty lies in whether a generated subrecord is a signature, a risk we
mitigate by generating a number of candidate signatures.
Traditional ER methods that rely on data standardisation generate high-cost signatures and match signatures approximately.
Their uncertainty lies in whether two signatures match or not.\medskip
\noindent {\bf MinHash and LSH:}
There are several connections between $\sqsubseteq$-signatures and the signatures generated by MinHash:
\begin{enumerate}
\item Two records have an identical MinHash band (an array of
MinHash values) only if they both contain some words, and both do not
contain some other words, at the same time. In our design, two records
have an identical candidate $\sqsubseteq$-signature as long as
they both contain some words. MinHash signatures
therefore better fit scenarios where global consistency between
two records matters, such as Jaccard similarity over large documents~\cite{Christen:2012:DMC:2344108}. Our $\sqsubseteq$-signatures better fit scenarios where partial similarity matters, for example where records of the same entity can contain significant inconsistency.
\item Both methods generate multiple signatures from each record.
Each MinHash band and candidate signature only captures partial (but important) information in the original
records. Therefore both methods allow inconsistent records to
be linked together.
\item To achieve good balance between accuracy and efficiency,
one can vary the length of signatures and the length
of each band in MinHash. As shown in \cite{Ste14}, finding suitable
values of these two parameters that lead to high quality ER
results is context and data-dependent and requires ground truth data to tune.
In our method, the choice of candidate signatures
and probability thresholds are parameters that can be similarly tuned to achieve the same balance.
\item Record linkage is probabilistic in both cases. MinHash has a probabilistic explanation with respect to the
Jaccard similarity between two records~\cite{Bro1997}. Our
method has a probabilistic explanation with respect to the
co-occurrence of probable signatures in two records.
\end{enumerate}
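To make the first point concrete, here is a minimal MinHash band computation (illustrative Python; the hash function and the band width of four rows are arbitrary choices). Two records share a band only when the same words minimise every row hash, whereas a candidate $\sqsubseteq$-signature matches as soon as both records contain the few words it is built from:

```python
import hashlib

def minhash_band(words, band, rows=4):
    """One MinHash band: a tuple of `rows` min-hash values over a word set."""
    def h(word, seed):
        data = f"{seed}:{word}".encode()
        return int.from_bytes(hashlib.sha1(data).digest()[:8], "big")
    # each row of the band uses its own seeded hash function
    return tuple(min(h(w, band * rows + r) for w in words) for r in range(rows))
```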
\noindent {\bf Optimal Blocking:}
Our blocking approach is also related to the blocking framework described in \cite{papadakis14tkde}.
In particular, our method can be perceived as providing an approximate
way to construct ideal blocks. The blocks generated by our method
always contain only two records, and the number of blocks a record
may appear in is also upper-bounded. These criteria correspond to the
optimal Comparison Cardinality (CC) as discussed by Papadakis et
al.~\cite{papadakis13tkde}.
Papadakis et al. argue the optimal value for CC equals 2, which is when every block contains exactly two records, and each record appears in one and only one block.
However, high CC is only a necessary but not sufficient condition for high-quality blocking. In practice, the higher CC is, the higher the risk is of missing a true match.
In our algorithm, blocks always contain only two records, but a record can belong to multiple blocks to minimise the risk of missing matches.
\iffalse
\section{Discussion}\label{sec:discussion}
As discussed in Section~\ref{sec:rel-work}, the distinguishing feature of our ER algorithm is a data-driven blocking-by-signature technique that achieves high precision at the price of possible low recall, which is compensated by allowing records to belong to multiple blocks and performing connected components to link records \emph{across} blocks.
We now compare and relate this algorithmic strategy to a few representative ER approaches.
\fi
\iffalse
\section{Linkage Confidence}
After pairwise linkage, each group of records that are linked together either directly or indirectly give birth to an entity. We now compute the confidence in assigning a record to this entity.
Let $P(r\in G)$ denote the probability of record $r$ belonging to group $G$. Let $N(r)$ denote the records which are directly linked to $r$. We therefore has
\fi
\section{Conclusion}\label{sec:conclusion}
We have presented and evaluated in this paper a novel Entity Resolution algorithm that
\begin{itemize}
\item introduces a data-driven blocking and record-linkage technique based on the probabilistic identification of $\sqsubseteq$-signatures in data;
\item incorporates an efficient connected-components algorithm to link records \emph{across} blocks;
\item is scalable and robust against data-quality issues.
\end{itemize}
The simplicity and practicality of the algorithm allows it to be implemented simply on modern parallel databases and deployed easily in large-scale industrial applications, which we have done in the financial intelligence domain.
\balance
\bibliographystyle{abbrv}
\section{Conclusion}\label{sec:conclusion}
The discovery of ALPs using helioscopes requires high-efficiency, low-background X-ray detectors. The possibility to study the properties of ALPs implies the use of high-resolution, low-energy-threshold detectors. Metallic magnetic calorimeters can be optimized to fulfill all these requirements. We have presented the development and characterization of the first MMC-based detector system designed to be mounted on the BabyIAXO helioscope. The detector consists of a two-dimensional 64-pixel MMC array with a fill factor of \SI{93}{\percent} covering an area of \SI{16}{\milli\metre\squared}. The absorbers of the detector are made out of \SI{20}{\micro\metre} thick gold, each covering a surface of $\SI{500}{\micro\metre} \times \SI{500}{\micro\metre}$, and ensure a quantum efficiency of more than \SI{99}{\percent} for photons up to \SI{10}{\kilo\electronvolt}. A first characterization of the MMC array showed an average FWHM energy resolution of \SI{6.1}{\electronvolt} at \SI{0}{\electronvolt} and \SI{7.2}{\electronvolt} at \SI{5.9}{\kilo\electronvolt} while reaching energy thresholds below \SI{100}{\electronvolt}. The analysis of the background measured for an unshielded detector provided a background rate of \SI{3.2(1)e-4}{\per\kilo\electronvolt\per\centi\metre\squared\per\second} between \SI{1}{\kilo\electronvolt} and \SI{10}{\kilo\electronvolt}. We could attribute this background partially to fluorescence in the material surrounding the detector, induced mainly by cosmic muons and radioactive impurities of our material. We have identified the possibility to reduce the background by adding a shield made of a material with a low atomic number directly above the detector. This was tested in a second characterization, which showed the positive effect of the polytetrafluoroethylene piece used.
The background was reduced by \SI{58(3)}{\percent} to \SI{1.20(8)e-4}{\per\kilo\electronvolt\per\centi\metre\squared\per\second} which matches the expected background reduction by the effective shielded solid angle seen by the detector. This demonstrates that a polytetrafluoroethylene shield plays already an important role to reduce the background significantly. This implies that the background can be even further reduced by the presence of active and passive shielding surrounding the detector, as already demonstrated for other detector technologies~\cite{Garza_2015}. With the results obtained in the discussed measurements we can conclude that MMCs are suitable detectors to be used in helioscopes.
\section{System design}\label{sec:prototype}
The detector system developed in this work was designed to be suitable for installation as a focal plane detector in the BabyIAXO helioscope. The detector platform is dimensioned to host MMC-based detector chips with a size of up to $\SI{24}{\milli\metre} \times \SI{24}{\milli\metre}$. This gives flexibility to choose a detector geometry optimized for the focal plane defined by the X-ray optics \cite{Abeln_2020}. In addition, we have chosen a simple and modular design which allows individual components to be easily improved and exchanged, as well as active and passive shields to be added in the future. For the fabrication of the setup, we selected high purity materials to reduce the presence of radioactive contamination near the detector.
Figure~\ref{fig:solidworks} shows a rendered image of the designed metal components of the platform, consisting of several copper parts and a niobium cover acting as a superconducting shield when cooled below \SI{9.3}{\kelvin}. All copper parts are made out of oxygen-free high thermal conductivity (OFHC) copper\footnote{Allmeson GmbH, Ottostraße 9-11, 63150 Heusenstamm, Germany} with a purity of at least \SI{99.99}{\percent} and have been annealed after manufacturing to achieve better heat conductivity at low temperatures and thus a lower operating temperature of the detector. We have chosen niobium\footnote{Haines \& Maassen Metallhandelsgesellschaft mbH, Pützchens Chaussee 60, 53227 Bonn, Germany} with a purity of at least \SI{99.9}{\percent} as the material of the superconducting shield due to its very high critical temperature. The detector and SQUID chips were glued onto the dedicated copper parts with a bicomponent epoxy\footnote{Huntsman International LLC, 10003 Woodloch Forest Drive, The Woodlands, Texas 77380, United States}. This type of glue is also used in the Cryogenic Underground Observatory for Rare Events (CUORE) experiment and was tested to have low radioactive contamination~\cite{Alduino_2016}. The electrical connections from the detector module to the amplifier module are realized by flexible polyimide circuit boards\footnote{Multi Leiterplatten GmbH, Brunnthaler Straße 2, 85649 Brunnthal, Germany} with low radioactivity. To further reduce potential radioactivity, the circuit boards were manufactured without a stiffener layer or a surface finish.
\begin{figure}[htbp]
\centering
\footnotesize\input{gfx/gfx_4.pdf_tex}
\caption{Rendered image of the detector platform consisting of several copper parts and a niobium cover. The main part hosting the detector is the detector module assembled out of three octagonal copper parts.}
\label{fig:solidworks}
\end{figure}
The main component of the system is the detector module, which consists of three copper parts. On the lower copper part of the detector module shown in figure~\ref{fig:solidworks}, the detector and eight first-stage SQUID chips are glued onto a raised area in the center. Eight polyimide circuit boards are glued onto the second copper part, which has a hole in the center matching the raised area of the first part. Both parts are afterwards screwed together. The chips and circuit boards are then electrically connected with aluminum bonding wires, as shown in figure~\ref{fig:maXs30}. The third part of the detector module is a collimator which is fixed on top of the other two parts. The complete detector module is shown in figure~\ref{fig:sandwich}.
\begin{figure}[htbp]
\centering
\includegraphics[width=4in]{gfx/gfx_5.jpg}
\caption{Octagonal detector module consisting of three copper parts. Polyimide circuit boards are used to connect the detector module to a SQUID amplifier module.}
\label{fig:sandwich}
\end{figure}
The octagonal detector module, with a distance between parallel sides of \SI{6}{\centi\metre} and a height of \SI{1.5}{\centi\metre}, is mounted with four triangle-shaped copper support structures to a copper adapter plate which can be screwed to the mixing chamber plate of a cryostat. The triangle structures prevent vibrations and rotations of the detector module, whereas the adapter plate is designed to match the mounting holes of one of our dilution refrigerators\footnote{Bluefors Oy, Arinatie 10, 00370 Helsinki, Finland}. We use a tiny amount of vacuum grease between the copper parts, except for the detector module, to increase the thermal conductance. The niobium cover, acting as a superconducting shield, is screwed to the adapter plate to protect the SQUIDs and MMCs from magnetic field fluctuations. The complete system mounted inside a dilution refrigerator is shown in figure~\ref{fig:setup}. The niobium shield has a height of \SI{18}{\centi\metre} and a diameter of \SI{9}{\centi\metre}. Holes in the copper collimator and the niobium shield allow the use of external X-ray sources for characterization. For the discussed measurements, the source is positioned outside the cryostat at room temperature in front of an X-ray window. Further X-ray windows are present in each of the thermal shields.
\begin{figure}[htbp]
\centering
\includegraphics[width=3in]{gfx/gfx_6.jpg}
\caption{Complete system, mounted inside a cryostat and covered by the cylindrical superconducting niobium shield. The amplifier module is placed inside a rectangular cryoperm shield and is mounted on top of the mixing chamber plate. Ribbon cables connect the installation to room temperature electronics.}
\label{fig:setup}
\end{figure}
The wide polyimide circuit boards for the SQUID operation have standardized 16-pin connectors at the end, which are connected to the SQUID amplifier module with cables as shown in figure~\ref{fig:setup}. The amplifier module as well as the narrow polyimide circuit boards are connected to ribbon cables. These cables, each consisting of \num{30} copper wires with \SI{2}{\percent} nickel, \SI{200}{\micro\metre} in diameter and about \SI{2}{\metre} long, are thermalized at each temperature stage of the dilution refrigerator and are connected at room temperature to 24-pin connectors\footnote{LEMO S.A., Chemin de Champs-Courbes 28, 1024 Ecublens, Switzerland} positioned in a vacuum tight aluminum box mounted on top of the cryostat~\cite{Mantegazzini_2020}. There, SQUID electronics\footnote{Magnicon GmbH, Barkhausenweg 11, 22339 Hamburg, Germany} for the flux-locked-loop readout are connected. Two ADCs\footnote{Struck Innovative Systeme GmbH, Harksheider Straße 102A, 22399 Hamburg, Germany} with 16 channels each are used for data acquisition. The signals from the 32 SQUID channels are acquired by triggering each channel individually and are passed from the ADCs to a dedicated PC via Ethernet.
\section{Introduction}\label{sec:introduction}
Axions are hypothetical particles, originally predicted by the Peccei–Quinn theory as a possible solution to the strong $\mathrm{CP}$ problem~\cite{Peccei_1977a,Peccei_1977b,Weinberg_1978,Wilczek_1978}. Via a fermion loop, they couple very weakly to photons. Both the coupling to photons and the axion mass are proportional to the inverse of the energy scale related to the spontaneous breaking of the Peccei–Quinn symmetry, so axions are characterized by only one parameter. Particles with a similar two-photon interaction, but whose mass and photon coupling are not necessarily related, are called axion-like particles (ALPs) and are proposed in several theories beyond the standard model~\cite{Jaeckel_2010}. ALPs are of particular interest as they are also considered a well-motivated dark matter candidate~\cite{Preskill_1983,Abbott_1983,Arias_2012}. Their existence could also explain astrophysical observations like the $\gamma$-ray transparency of the Universe and stellar cooling anomalies~\cite{De_Angelis_2007,Mirizzi_2007,Isern_2008,Giannotti_2017}. Several experiments are looking for them using different methods, differentiated by the investigated ALP source: light-shining-through-a-wall experiments are designed to produce and convert ALPs in laboratories~\cite{Baehre_2013}, haloscopes look for relic ALPs as part of the local dark matter halo~\cite{Du_2018}, whereas helioscopes search for ALPs generated in the Sun~\cite{Lazarus_1992,Ohta_2012,Anastassopoulos_2017}.
The Sun is potentially the strongest ALP source in our vicinity. The expected solar ALP flux can be described by two components, originating from ALP-photon and ALP-electron interactions respectively. Figure~\ref{fig:axion} shows the expected solar ALP spectrum on Earth, assuming an ALP-photon coupling $g_{\mathrm{a}\gamma}$ of $\SI{e-11}{\per\giga\electronvolt}$ and an ALP-electron coupling $g_\mathrm{ae}$ of $\num{e-13}$ as suggested by stellar cooling anomalies~\cite{Giannotti_2017}. ALPs from Primakoff conversion (orange, dashed) are generated by the interaction of black-body photons with virtual photons of the dense plasma in the interior of the Sun. The spectrum has a maximum at about \SI{3}{\kilo\electronvolt}, corresponding to the inner solar temperature. The spectrum from electron processes (blue, solid) has a smooth constituent with a maximum at about \SI{1}{\kilo\electronvolt} due to Bremsstrahlung and Compton scattering with outgoing ALPs. The resonances are due to ALP-recombination and ALP-deexcitation, which depend on the metal composition of the Sun~\cite{Redondo_2013}. The possibility to determine the relative intensity of the flux components will be important to identify the underlying ALP theory.
\begin{figure}[htb]
\centering
\input{gfx/gfx_1.pgf}
\vspace{-0.1in}
\caption{Expected solar ALP flux on Earth from electron processes (blue, solid) and Primakoff conversion (orange, dashed) assuming $g_{\mathrm{a}\gamma} = \SI{e-11}{\per\giga\electronvolt}$ and $g_\mathrm{ae} = \num{e-13}$~\cite{Jaeckel_2019b}.}
\label{fig:axion}
\end{figure}
Helioscopes look for solar ALPs on Earth. In a helioscope, a long evacuated volume permeated by a strong magnetic field can be rotated and tilted to point towards the Sun for a large fraction of the day. The magnetic field is used to convert solar ALPs to more easily detectable X-rays via the generic ALP coupling to two photons~\cite{Sikivie_1983}. Three helioscopes have been built: the helioscope in Brookhaven~\cite{Lazarus_1992}, the Tokyo Axion Helioscope~\cite{Ohta_2012} and the CERN Axion Solar Telescope (CAST)~\cite{Anastassopoulos_2017}. So far, the most powerful helioscope is CAST, which has set the current limit on $g_{\mathrm{a}\gamma}$ of \SI{6.6e-11}{\per\giga\electronvolt} for ALP masses $m_\mathrm{a}$ below \SI{0.02}{\electronvolt}~\cite{Anastassopoulos_2017}. The successor of CAST will be the International Axion Observatory (IAXO) with an expected sensitivity on $g_{\mathrm{a}\gamma}$ of a few \SI{e-12}{\per\giga\electronvolt} for $m_\mathrm{a}$ up to \SI{0.01}{\electronvolt}~\cite{Armengaud_2014}. IAXO will have the potential to probe axion models in the \SI{1}{\milli\electronvolt} to \SI{1}{\electronvolt} mass range as well as an unexplored fraction of the ALP parameter space of particular interest, where ALPs could be part of the cold dark matter and explain stellar cooling anomalies~\cite{Armengaud_2019}. This is technologically a very big step with respect to CAST; therefore, the intermediate experiment BabyIAXO is currently under development to test major components like the magnet, optics and X-ray detectors required for IAXO~\cite{Abeln_2020}. It will also be able to probe the existence of ALPs with $g_{\mathrm{a}\gamma}$ down to \SI{1.5e-11}{\per\giga\electronvolt} for $m_\mathrm{a}$ below \SI{0.02}{\electronvolt}.
Given the expected very low ALP-to-photon conversion rate, ultra-low background X-ray detectors with a high efficiency up to \SI{10}{\kilo\electronvolt} and an active area of the order of \SI{0.2}{\centi\metre\squared} are required for IAXO. Gaseous time projection chambers (TPCs) equipped with Micromegas, as used in CAST, achieve background rates below \SI{e-6}{\per\kilo\electronvolt\per\centi\metre\squared\per\second} with a combination of active and passive shields and are considered the baseline technology for BabyIAXO~\cite{Garza_2015}. However, different detector technologies with comparable efficiency and low background are essential to reduce systematic uncertainties in the interpretation of the data. At the same time, detectors with good energy resolution and low energy threshold are desired to study the solar ALP spectrum after a discovery. In this case, the coupling strengths of ALPs to photons and electrons, as well as the underlying ALP model, could be identified by studying the spectrum in detail~\cite{Jaeckel_2019a}. Moreover, ALP masses $m_\mathrm{a}$ between \SI{3}{\milli\electronvolt} and \SI{100}{\milli\electronvolt} could be investigated via decoherence effects in ALP-photon oscillations~\cite{Dafni_2019}. Also, information on the interior of the Sun, like the metal composition and the solar magnetic field, could be extracted~\cite{Jaeckel_2019b, O_Hare_2020}. Detectors based on low temperature metallic magnetic calorimeters (MMCs) feature good energy resolution and a low energy threshold besides low intrinsic background and high quantum efficiency~\cite{Fleischmann_2005,Fleischmann_2009,Kempf_2018}. Therefore, MMCs are an excellent candidate to search for ALPs with helioscopes and, in particular, to study them beyond discovery.
We present the first MMC-based X-ray detector system developed for IAXO. In section~\ref{sec:maXs30}, we introduce the detector used for this system and describe the expected performance of the array. The design and the integration of the detector platform are described in section~\ref{sec:prototype}. In section~\ref{sec:results}, we show the results of the characterization, in particular the energy resolution and the background rate of the unshielded system. Finally, we review the achieved performance in section~\ref{sec:conclusion}.
\section{MaXs30 detector}\label{sec:maXs30}
Metallic magnetic calorimeters (MMCs) are operated at very low temperatures, usually below \SI{30}{\milli\kelvin}, and can reach remarkable energy resolution over a wide energy range~\cite{Fleischmann_2009}. They are used in various experiments due to their high resolving power $\frac{E}{\Delta E}$ of up to \num{6000} and fast intrinsic response time, of the order of \SI{100}{\nano\second}, besides excellent linearity, high efficiency and low energy threshold~\cite{Pies_2012,Gastaldo_2017}. For example, a full width at half maximum (FWHM) energy resolution of \SI{1.6}{\electronvolt} was obtained for \SI{5.9}{\kilo\electronvolt} photons with a quantum efficiency of nearly \SI{100}{\percent}~\cite{Kempf_2018}. These properties, in combination with low intrinsic background, make MMC arrays a promising technology for helioscopes.
\begin{figure}[htb]
\centering
\footnotesize\input{gfx/gfx_2.pdf_tex}
\caption{Schematic drawing of a planar double meander geometry. Superconducting coils (black) connect two MMCs in parallel to a readout circuit. Each MMC consists of a particle absorber (blue) well thermally coupled to a paramagnetic sensor (orange) and is weakly coupled to a thermal bath (green).}
\label{fig:mmc}
\end{figure}
The detection principle of MMCs is based on calorimetry. A typical design for MMCs is the so-called double meander geometry, shown in figure~\ref{fig:mmc}. This planar design allows for the operation of two pixels using one readout channel and the microfabrication of large and dense MMC arrays~\cite{Fleischmann_2005}. A single MMC pixel is composed of a particle absorber well thermally coupled to a paramagnetic temperature sensor sitting in a static magnetic field. When a particle interacts with the absorber, it deposits energy causing a small temperature increase. The temperature increase $\Delta T$ of absorber and sensor is approximately given by $\frac{E}{C}$, where $E$ is the energy deposited by the particle and $C$ is the total heat capacity of the MMC. The temperature increase of the sensor leads to a decrease of the magnetization $\Delta M$ given by $\frac{\partial M}{\partial T} \Delta T$ and creates a magnetic flux $\Delta \Phi$, proportional to $\Delta M$, in a superconducting pick-up coil directly underneath the sensor. The change of flux $\Delta \Phi$ is therefore proportional to $\frac{\partial M}{\partial T} \frac{E}{C}$ and thus proportional to the deposited energy of the particle. The flux change can be converted to a change of voltage using superconducting quantum interference devices (SQUIDs)~\cite{Clarke_2006}. A weak thermal link to a heat bath allows the MMC to slowly cool down to its operating temperature after the interaction of a particle.
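The calorimetric response chain described above, $\Delta T = E/C$ followed by $\Delta \Phi \propto \frac{\partial M}{\partial T} \Delta T$, can be illustrated numerically. The following is only a back-of-the-envelope sketch; the heat capacity, magnetization slope and flux coupling values are illustrative assumptions, not maXs30 specifications.

```python
# Minimal numerical sketch of the MMC detection principle described above.
# All parameter values below are illustrative assumptions, not maXs30 values.

EV = 1.602176634e-19  # joules per electronvolt

def mmc_response(E_eV, C_total, dM_dT, coupling):
    """Return (dT, dPhi) for an energy E_eV deposited in the absorber.

    dT   = E / C                    temperature rise of absorber and sensor
    dPhi = coupling * |dM/dT| * dT  flux change in the pick-up coil
    """
    dT = E_eV * EV / C_total
    dPhi = coupling * abs(dM_dT) * dT
    return dT, dPhi

# assumed example parameters for a single pixel
C_total = 50e-12   # J/K, total heat capacity of absorber + sensor (assumed)
dM_dT = -5.0       # A m^-1 K^-1, magnetization slope of the sensor (assumed)
coupling = 2e-10   # Wb m A^-1, geometric flux coupling of the coil (assumed)

dT, dPhi = mmc_response(5.9e3, C_total, dM_dT, coupling)  # 5.9 keV photon
```

With these assumed numbers a \SI{5.9}{\kilo\electronvolt} photon raises the pixel temperature by roughly \SI{20}{\micro\kelvin}; the essential point is that $\Delta \Phi$ is linear in the deposited energy, which is what makes the signal amplitude a direct energy measure.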
In the case of the depicted double meander geometry, the superconducting pick-up coils underneath the two pixels are connected in parallel to the input coil of a {dc-SQUID} as indicated in figure~\ref{fig:mmc}. As a result, the two pick-up coils form a first order gradiometer which allows for distinguishing events in the two pixels by the polarity of the pulses and, in addition, this configuration reduces the effect of temperature fluctuations of the substrate on the output signal. The weak static magnetic field necessary to operate MMCs can be produced by a persistent current in the superconducting loop formed by the two meanders while the connection to the SQUID input coil is in its normal conducting state. The double meander geometry is also the basic design of the 32 channels of the maXs30 (micro-calorimeter array for X-ray spectroscopy) chip we chose for the first MMC-based detector system for BabyIAXO due to its relatively large active area with a high stopping power even above \SI{10}{\kilo\electronvolt} in combination with a good energy resolution~\cite{Hengstler_2015}.
\begin{figure}[htbp]
\centering
{\footnotesize\color{white}\input{gfx/gfx_3.pdf_tex}}
\caption{Photograph of the maXs30 chip glued on the copper platform together with eight first-stage SQUID chips. Electrical connections between the chips as well as to the polyimide circuit boards are provided by aluminum bonding wires, which become superconducting at the operating temperature.}
\label{fig:maXs30}
\end{figure}
Figure~\ref{fig:maXs30} shows the maXs30 detector chip mounted on the newly developed copper platform together with eight first-stage SQUID chips, each hosting four SQUID channels, optimized for the readout of the MMCs. The detector and the SQUID chips were microfabricated in the cleanroom at the Kirchhoff Institute for Physics at Heidelberg University \cite{Kempf_2015}. The detector is a \num{64}-pixel two-dimensional MMC array, originally designed for experiments at the heavy ion storage ring ESR at the GSI and optimized for high-resolution X-ray spectroscopy up to \SI{30}{\kilo\electronvolt}~\cite{Hengstler_2015,Sikorsky_2020}. The maXs30 arrays are fabricated on three inch silicon wafers of about \SI{0.4}{\milli\metre} thickness. Each wafer contains \num{36} maXs30 chips with a size of $\SI{8}{\milli\metre} \times \SI{8}{\milli\metre}$ each. The absorbers are arranged in an eight by eight array with an area of \SI{16}{\milli\metre\squared}. Each absorber, made out of gold, has an area of $\SI{500}{\micro\metre} \times \SI{500}{\micro\metre}$ and a thickness of \SI{20}{\micro\metre} which guarantees a quantum efficiency higher than \SI{99}{\percent} for X-rays up to \SI{10}{\kilo\electronvolt}. For a small focal spot, the efficiency of the detector is limited by the fill factor of the absorbers and is given by \SI{93}{\percent}. The granularity of the array allows for a position sensitivity determined by the area of a single absorber. The temperature sensors with an area of $\SI{300}{\micro\metre} \times \SI{300}{\micro\metre}$ and a height of \SI{1.5}{\micro\metre} are made out of a dilute paramagnetic alloy of \SI{430}{\ppm} rare-earth metal erbium in the host material silver. The niobium meander-shaped pick-up coils have a line width of \SI{5}{\micro\metre}, a pitch of \SI{10}{\micro\metre} and a height of \SI{250}{\nano\metre}. 
The four double meanders at the corners of the array have a non-gradiometric design, obtained by reducing the area of one of the two sensors to $\SI{250}{\micro\metre} \times \SI{250}{\micro\metre}$. Due to this artificial asymmetry, the signal of these channels is sensitive to temperature fluctuations of the substrate and can be used to obtain the temperature of the detector chip.
The detector is optimized to operate at a temperature of \SI{20}{\milli\kelvin} with a persistent current of roughly \SI{70}{\milli\ampere}, which corresponds to an average magnetic field in the sensors of \SI{5}{\milli\tesla}. Under these conditions, the expected energy resolution $\Delta E_{\mathrm{FWHM}}$ is about $\SI{6}{\electronvolt}$. The voltage signal is completely characterized by an amplitude and the time constants of the exponential rise and decay. The amplitude is proportional to the energy deposited in the absorber during an event. The rise time, which would otherwise be limited by the electron-spin coupling to about \SI{100}{\nano\second}, is artificially increased to about \SI{10}{\micro\second} by a thermal bottleneck between absorber and sensor. Increasing the rise time is necessary to guarantee a position-independent signal shape for particle interactions over the complete volume of the relatively large absorber. The decay time of about \SI{3}{\milli\second} is determined by the ratio of the total heat capacity of the MMC and the thermal conductance to the thermal bath, defined by the geometry of the gold thermal link. The pulse shape as well as the rise and decay times of different pixels vary slightly, by a few percent, due to inhomogeneities within the micro-structured layers and geometrical effects of the chip boundaries. Therefore, we perform the data analysis independently for each pixel.
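A pulse with the two time constants quoted above can be modeled as a difference of two exponentials. The sketch below is only an illustration of the described signal shape, not the analysis code used for the detector; the peak normalization is a common convention assumed here.

```python
import numpy as np

def pulse(t, amp, tau_r=10e-6, tau_d=3e-3):
    """Two-exponential pulse model with rise time tau_r and decay time tau_d,
    normalized so that the pulse peak equals `amp` (t in seconds, t >= 0)."""
    t = np.asarray(t, dtype=float)
    # analytic peak position and height of exp(-t/tau_d) - exp(-t/tau_r)
    t_peak = tau_r * tau_d / (tau_d - tau_r) * np.log(tau_d / tau_r)
    peak = np.exp(-t_peak / tau_d) - np.exp(-t_peak / tau_r)
    return amp / peak * (np.exp(-t / tau_d) - np.exp(-t / tau_r))

t = np.arange(0.0, 20e-3, 1e-6)  # 20 ms trace sampled at 1 MHz
p = pulse(t, 1.0)                # unit-amplitude pulse
```

With $\tau_r = \SI{10}{\micro\second}$ and $\tau_d = \SI{3}{\milli\second}$ the modeled pulse peaks a few tens of microseconds after onset and has largely decayed after several decay times, consistent with the time constants stated in the text.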
Aluminum bonding wires, which are superconducting at the operating temperature, connect the double meander in parallel to input coils of {dc-SQUIDs} located on different chips. The MMCs generate signals in the SQUIDs of roughly \SI{10}{\milli\phiz\per\kilo\electronvolt} where $\si{\phiz} = \frac{h}{2 e}$ is the magnetic flux quantum. The signals from these first-stage SQUIDs are then amplified at \si{\milli\kelvin} temperatures using second-stage SQUID series-arrays~\cite{Mantegazzini_2020}. This two-stage SQUID readout scheme allows for reducing the noise contribution from the room temperature electronics. In this configuration, the first-stage SQUIDs are voltage biased via bias resistors fabricated on the same chips as the second-stage SQUIDs which reduces the power dissipation on the first-stage chips and, in turn, near the detector chip. The SQUID signal is linearized by room temperature electronics using a flux-locked-loop readout scheme~\cite{Drung_2006}.
\section{Results}\label{sec:results}
We have characterized the detector at different temperatures and with different persistent currents to operate the MMCs at different magnetic fields. The dilution refrigerator used reaches a temperature below \SI{7}{\milli\kelvin} at the mixing chamber plate. Comparing the amplitude of the acquired signals with amplitudes obtained from calculations based on the thermodynamical properties of the MMCs, we find that the base temperature of the cryostat corresponds to a detector temperature of \SI{15(1)}{\milli\kelvin}. The temperature difference originates from the heat produced by the first-stage SQUIDs near the detector.
\subsection{Detector performance}
For the calibration of the detector system we used an $^{55}\mathrm{Fe}$~source\footnote{Eckert \& Ziegler, Robert-Rössle-Straße 10, 13125 Berlin, Germany}\addtocounter{footnote}{-1}\addtocounter{Hfootnote}{-1} as well as an $^{241}\mathrm{Am}$~source{\footnotemark} for the characterization at higher energies. Both are closed sources, such that only X-rays can leave the housing. The radioactive sources were periodically positioned in front of the outer X-ray window of the cryostat. The response of the detector upon the absorption of $\mathrm{K}_\alpha$ photons at about \SI{5.9}{\kilo\electronvolt} from the $^{55}\mathrm{Fe}$ source is used to characterize the performance of the detector. To obtain the characteristic pulse shape, a few thousand pulses of this energy were averaged for each pixel. The averaged pulse is then scaled and fitted to all acquired signals from the same pixel. This allows for the derivation of several parameters, in particular the signal amplitude and variables related to the pulse shape. Since the amplitude of the signal depends on the detector temperature, for each acquired trace we also record the output voltage of the non-gradiometric detector channels, which provide information on the chip temperature at the time the signal was triggered. As a result, we can study the correlation between the temperature information and the amplitude of the signal and correct for temperature fluctuations of the detector chip. In fact, slow temperature variations of the chip of the order of \SI{10}{\micro\kelvin}, which induce variations of the signal amplitude of the order of \SI{0.5}{\percent}, would otherwise degrade the resolving power. To calibrate the signal amplitudes, we use the known energies of the $\mathrm{K}_\alpha$ lines as well as of the $\mathrm{K}_\beta$ lines at about \SI{6.5}{\kilo\electronvolt} and apply a quadratic fit to match the temperature corrected amplitudes to the corresponding energies for each channel. 
We obtain a nonlinearity of roughly \SI{0.1}{\percent} at \SI{6}{\kilo\electronvolt}.
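The two calibration steps described above, a temperature correction followed by a quadratic amplitude-to-energy calibration, can be illustrated on synthetic data. This is only a sketch: the size of the temperature drift and the line amplitudes are assumed values chosen to mimic the numbers quoted in the text, and a plain least-squares polynomial replaces the actual analysis code.

```python
import numpy as np

rng = np.random.default_rng(0)

# synthetic K_alpha events: ~10 uK chip-temperature drifts cause ~0.5%
# amplitude variations (values from the text), plus a small intrinsic noise
T = rng.normal(0.0, 10e-6, 5000)                  # temperature fluctuation [K]
A = (1.0 - 500.0 * T) + rng.normal(0.0, 1e-4, T.size)

# 1) temperature correction: remove the fitted linear amplitude-T correlation
slope, _ = np.polyfit(T, A, 1)
A_corr = A - slope * T

# 2) quadratic calibration through zero, K_alpha (~5.9 keV) and K_beta
#    (~6.5 keV); the slightly nonlinear K_beta amplitude is an assumed value
amps = np.array([0.0, 1.0, 1.101])
energies = np.array([0.0, 5895.0, 6490.0])        # eV
cal = np.polyfit(amps, energies, 2)

E = np.polyval(cal, A_corr)                       # calibrated energies in eV
```

After the correction the amplitude scatter induced by the temperature drift is removed, and the quadratic calibration maps the corrected $\mathrm{K}_\alpha$ amplitude back to its known energy.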
\begin{figure}[htb]
\centering
\input{gfx/gfx_7.pgf}
\vspace{-0.1in}
\caption{Histogram obtained in one of the calibration measurements with an $^{55}\mathrm{Fe}$ source (blue, solid) for a single pixel. The Poisson uncertainty drawn on the bins is given by $\sqrt{N}$ where $N$ is the number of counts in the respective bin. The histogram has 100 bins with a bin width of \SI{0.5}{\electronvolt}. The FWHM energy resolution of \SI{6.4(2)}{\electronvolt} is determined by a calibration fit (orange, dashed). The natural line shape (green, dotted) shown for comparison is scaled to the maximum of the calibration fit.}
\label{fig:iron}
\end{figure}
As an example, the histogram of the $\mathrm{K}_\alpha$ multiplet from the $^{55}\mathrm{Fe}$ source acquired for a single pixel during multiple calibration measurements is shown in figure \ref{fig:iron}. We fit the convolution of the intrinsic shape of the $\mathrm{K}_\alpha$ lines, based on~\cite{Hoelzer_1997}, with a Gaussian detector response of variable width to the histogram. The obtained Gaussian full width at half maximum (FWHM) of \SI{6.4(2)}{\electronvolt} represents the energy resolution of the MMC. Figure~\ref{fig:eres} shows, on a map representing the 64 pixels of the maXs30 chip, the FWHM energy resolution for the channels which were operated during the discussed characterization run. Three of the 32 channels could not be operated: two of them had a missing electrical connection at the SQUID amplifier level, while for the third one the first-stage {dc-SQUID} showed visible damage. All three channels can be repaired for future experiments. Excluding the channel C8/D8, which showed significantly higher noise, we obtained an average FWHM energy resolution of \SI{7.2}{\electronvolt} in this run. An evaluation of the energy resolution at \SI{0}{\electronvolt} via a baseline analysis yielded an average baseline energy resolution of \SI{6.1}{\electronvolt} FWHM across \num{27} channels, in very good agreement with the expected \SI{6}{\electronvolt}. The baseline energy resolution was analyzed at a mixing chamber temperature of \SI{12}{\milli\kelvin}, which corresponds to a slightly increased detector temperature of \SI{17(1)}{\milli\kelvin}. The very good energy resolution allows us to define very low trigger thresholds below \SI{100}{\electronvolt}.
\begin{figure}[htb]
\centering
\input{gfx/gfx_8.pgf}
\vspace{-0.1in}
\caption{Distribution of the FWHM energy resolution given by the Gaussian detector response evaluated at \SI{5.9}{\kilo\electronvolt} for the pixels operated during the first characterization run. The uncertainty is about \SI{0.2}{\electronvolt}. The average FWHM energy resolution is \SI{7.2}{\electronvolt} excluding the channel C8/D8 with a significantly higher noise.}
\label{fig:eres}
\end{figure}
\subsection{Background rate}
To determine the background of the detector it is important to distinguish events related to actual X-ray absorption within the absorber from other sources. As already mentioned, particles depositing their full energy in the MMC absorber lead to signals with characteristic rise and decay times which are independent of the deposited energy in the energy range of interest. The typical shape of a single, AC filtered \SI{5.9}{\kilo\electronvolt} pulse is shown in figure~\ref{fig:ellipse} on the left hand side. Charged particles passing through the absorber may additionally release some energy via ionization in the sensor, in the substrate close to the sensor, or in both. This leads to modifications of the signal shape which can be recognized through pulse shape analysis. Furthermore, such particles can produce coincidence events in neighboring pixels.
We use the two parameters $\chi^2$ and $\zeta$ to select events whose full energy was deposited within the absorber. Both parameters quantify how closely the shape of a single trace matches the characteristic pulse shape. Here, $\chi^2$ is given by the sum over all samples of the quadratic difference between a pulse and the fitted average pulse, divided by the number of samples, corresponding to the well-known reduced $\chi^2$. The parameter $\zeta$ is based on a matched filter. To calculate $\zeta$ for a given pulse, two cross-correlations are performed: the pulse with the average pulse as well as the average pulse with itself. The parameter $\zeta$ is then an amplitude ratio given by the ratio of the two maxima divided by the ratio of the two integrals over the convolution \cite{Goeggelmann_2021}. On the right hand side of figure \ref{fig:ellipse} the $\chi^2$ - $\zeta$ plot corresponding to the calibration data from the histogram in figure \ref{fig:iron} is shown. Based on the analysis of the calibration measurements with external sources, we define an area with the shape of an ellipse in the $\chi^2$ - $\zeta$ plane. The semi-axes of the ellipse are determined by Gaussian fits evaluating the form of the $\chi^2$ and $\zeta$ distributions for each pixel. For the discussed data analysis we set them to multiples of the Gaussian widths such that roughly \SI{1}{\percent} of the calibration events lie outside of this region and are rejected. We apply the same cut to background measurements performed over a period of several days between two calibration runs. For events with energies below \SI{500}{\electronvolt}, however, the ellipse cut exhibits an energy dependent efficiency, leading to a loss of rejection power. For the background analysis we therefore consider the energy range between \SI{1}{\kilo\electronvolt} and \SI{10}{\kilo\electronvolt}, which is the range most interesting for IAXO. 
Improved algorithms for the data analysis are currently under development, promising a reliable pulse shape cut also at energies below \SI{500}{\electronvolt} with an efficiency loss of less than \SI{1}{\percent} \cite{Barth_2020}. Typical signals removed with this approach are triggered noise, pile-up events and events related to massive particles releasing some of their energy in the sensor or in the substrate.
\begin{figure}[htb]
\begin{minipage}{.5\textwidth}\centering\input{gfx/gfx_10.pgf}\end{minipage}\hfill%
\begin{minipage}{.5\textwidth}\centering\input{gfx/gfx_11.pgf}\end{minipage}
\vspace{-0.1in}
\caption{Single, AC-filtered pulse at \SI{5.9}{\kilo\electronvolt} from the $^{55}\mathrm{Fe}$ source (left). Pulses from the same pixel have a characteristic shape with the same rise and decay time constants. The distribution in the $\chi^2$ - $\zeta$ plane, representing variations from the characteristic shape, has an elliptical form (right). This distribution can be used to define a cut identifying pulses with even tiny distortions from the characteristic shape.}
\label{fig:ellipse}
\end{figure}
Triggered noise traces very often occur as bursts of signals. To remove them during the background measurement, we discarded all traces acquired within any one-minute interval in which a threshold of 30 events per minute was exceeded in one of the two ADCs; one additional minute was removed before and after such a burst. The threshold was chosen such that signals induced by communication devices like mobile phones, which create many triggers per minute, are easily detected, while random background coincidences are very unlikely to be affected. This cut reduces the effective measurement time by only \SI{5}{\percent} while reducing the number of events by nearly two orders of magnitude. Noise traces which are not part of a burst, and thus not removed by the burst cut, usually contain instantaneous jumps, which by design cannot be produced by MMCs, and are easily identified with the ellipse cut. To remove fluorescence and particle showers that could, for example, be generated by muons interacting in the surrounding materials, we also removed all signals triggered within \SI{1}{\micro\second} by more than one channel.
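A minimal version of the burst cut on a list of trigger time stamps might look as follows (illustrative only; the fixed one-minute binning and the single-channel treatment are simplifications of the procedure described above):

```python
import numpy as np

def burst_cut(times, max_per_minute=30):
    """Veto every minute with more than `max_per_minute` triggers,
    plus one guard minute before and after; `times` in seconds."""
    times = np.asarray(times, dtype=float)
    minute = np.floor(times / 60.0).astype(int)
    counts = np.bincount(minute - minute.min())
    bursts = np.flatnonzero(counts > max_per_minute) + minute.min()
    veto = {m + d for m in bursts for d in (-1, 0, 1)}
    keep = ~np.isin(minute, sorted(veto))
    return times[keep], 60.0 * len(veto)  # kept events, vetoed live time
```

Applied to burst-like noise, such a cut removes only a small fraction of the measurement time while rejecting the bulk of the spurious triggers.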
During the first background analysis, we acquired about one month of raw background data with multiple calibration measurements in between to verify the stable operation of the system. Figure~\ref{fig:background} shows the background spectrum for the unshielded detector obtained after applying the described cuts (blue, solid). Between \SI{1}{\kilo\electronvolt} and \SI{10}{\kilo\electronvolt} the background rate is \SI{3.2(1)e-4}{\per\kilo\electronvolt\per\centi\metre\squared\per\second}. One can clearly identify the copper $\mathrm{K}_\alpha$ line at \SI{8.0}{\kilo\electronvolt} and the niobium $\mathrm{K}_\alpha$ line at \SI{16.6}{\kilo\electronvolt}. Both fluorescence lines potentially originate from muon interactions or, with small probability, from natural radioactivity. Minimal radioactive contamination of the materials used for the detector system might also contribute to the fluorescence in copper and niobium as well as to the energy-independent background spectrum.
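For reference, the conversion of raw counts into the quoted rate unit is simply counts/($\Delta E \cdot A \cdot t$); the numbers below are hypothetical placeholders, since the sensitive area and live time entering the quoted rate are not restated here:

```python
# Hypothetical placeholder values; only the normalisation formula is the point.
n_counts  = 500            # events between 1 and 10 keV (assumed)
delta_e   = 9.0            # keV, width of the energy window
area      = 0.16           # cm^2, sensitive detector area (assumed)
live_time = 20 * 86400.0   # s, live measurement time (assumed, ~20 days)

rate = n_counts / (delta_e * area * live_time)
print(f"background rate: {rate:.2e} /keV/cm^2/s")
```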
At the Canfranc Underground Laboratory the intrinsic radioactive contamination of samples of the used copper, niobium and polyimide parts was analyzed with low-background germanium detectors~\cite{Aznar_2013}. For the copper sample only upper activity limits were obtained. In the \SI{490}{\gram} niobium shield, $^{94}\mathrm{Nb}$ with an activity of \SI{33(3)}{\milli\becquerel\per\kilo\gram} was detected. From the $^{232}\mathrm{Th}$ chain, activities of \SI{8.7(24)}{\milli\becquerel\per\kilo\gram} from $^{228}\mathrm{Ac}$ and \SI{8.8(23)}{\milli\becquerel\per\kilo\gram} from $^{228}\mathrm{Th}$ were found, hinting at a secular equilibrium. For the polyimide circuit boards, activities of \SI{30(11)}{\milli\becquerel\per\kilo\gram} of $^{212}\mathrm{Pb}$, originating from the $^{232}\mathrm{Th}$ chain, and \SI{40(12)}{\milli\becquerel\per\kilo\gram} of $^{226}\mathrm{Ra}$, from the $^{238}\mathrm{U}$ chain, were found. For the system described in this work, polyimide circuit boards with a total mass of roughly \SI{11}{\gram} are used. A detailed simulation, which is beyond the scope of this publication, is required to determine the effect of the material contamination on the acquired background spectrum. Nevertheless, we are currently designing a new superconducting shield based on copper plated with a superconducting film such as tin~\cite{Mantegazzini_2020}.
\begin{figure}[htb]
\centering
\input{gfx/gfx_9.pgf}
\vspace{-0.1in}
\caption{Comparison between the background flux after pulse shape analysis, burst cut and coincidence cut without shielding (blue, solid) and with an additional PTFE shield (orange, dashed). The histogram has 100 bins with a bin width of \SI{200}{\electronvolt}. The Poisson uncertainty is given by $\sqrt{N}$.}
\label{fig:background}
\end{figure}
Some of the detected fluorescence events have a relatively low energy and could be screened by materials with low atomic number placed between the collimator and the detector. In the second characterization run we studied the effect on the background spectrum of a polytetrafluoroethylene (PTFE) piece with a diameter of \SI{43}{\milli\metre} and a thickness of \SI{4.5}{\milli\metre}. The PTFE piece has a large square $\SI{1}{\centi\metre} \times \SI{1}{\centi\metre}$ inner hole, since it was designed for a new, larger MMC array for the BabyIAXO experiment which is still in production. We were able to repair two of the three broken channels by replacing two second-stage SQUID chips of the amplifier module. We acquired roughly 20 days of background events and performed the same data analysis as described previously to compare the two measurements. The resulting background spectrum is also shown in figure~\ref{fig:background} (orange, dashed). Between \SI{1}{\kilo\electronvolt} and \SI{10}{\kilo\electronvolt} we observed a background rate of \SI{1.20(8)e-4}{\per\kilo\electronvolt\per\centi\metre\squared\per\second}. The PTFE shield reduces the intensity of the copper K$_\alpha$ line by \SI{85(4)}{\percent}, while the white background between \SI{1}{\kilo\electronvolt} and \SI{10}{\kilo\electronvolt} is reduced by \SI{58(3)}{\percent}. This reduction matches very well the estimate of the effectively shielded solid angle seen by the detector, assuming a shield efficiency of \SI{100}{\percent} in the respective energy range.
\section{Introduction}
One of the main objectives of linear collider experiments is the precision spectroscopy
of new particles predicted in theories of physics beyond the Standard Model (SM), such as
Supersymmetry (SUSY). Since some, or most, of these particles may have masses of
$\cal{O}\mathrm{(1~TeV)}$, these studies may be central to the physics program of a
multi-TeV $e^+e^-$ linear collider, such as CLIC.
In this note, we study the production of the supersymmetric partners of the muon in a
specific SUSY scenario, where we assume R--parity conservation within the so-called
constrained Minimal Supersymmetric extension of the SM (cMSSM).
In this model the neutralino (\neutralino{1}) is the lightest supersymmetric particle and the specific
parameters of the benchmark point~\cite{Battaglia:2003ab} are chosen to make it compatible
with current collider and cosmology data. In particular, the properties of the lightest neutralino
are such that it generates the correct amount of relic dark matter density in the universe, as obtained
from the analysis of the WMAP data~\cite{Komatsu:2008hk}.
Scalar muons ($\smuon{\pm}_R$ and $\smuon{\pm}_L$) are the supersymmetric partners of the right- and
left-handed charged muons. Smuons are produced in pairs through $s$-channel $\gamma/\mathrm{Z}$ exchange
in the process $e^+e^- \to \tilde \mu_R^+ \tilde \mu_R^-$, and each decays into an ordinary muon and a
neutralino, \neutralino{1}. The neutralino, being weakly interacting, escapes detection.
Therefore, the experimental signature of the process is two oppositely charged muons plus missing energy.
This study concentrates on the lightest smuon, $\smuon{\pm}_R$, which, for the chosen model parameters,
has a mass of 1108.8~GeV, while the mass of the lightest neutralino is 554.3~GeV.
The accurate determination of their masses is an essential part of the spectroscopy study of a high
mass SUSY scenario at CLIC. A study of the variation of the predicted relic dark matter density in the
universe, $\Omega h^2$, with the lightest neutralino mass in the cMSSM shows that a $\pm$1.0~GeV uncertainty
on its mass corresponds to a $\pm$0.05 relative uncertainty on $\Omega h^2$, i.e.\ the current accuracy
from cosmic microwave background observations~\cite{Komatsu:2010fb}.
The main aim of this study is to assess the requirements for a detector at CLIC operating at a
centre-of-mass energy, $\sqrt{s}$, of 3~TeV as a function of the track
momentum resolution, luminosity spectrum and beam polarisation. The reconstruction of the particle masses
through the endpoints of the muon momentum spectrum is a good example for these requirements, since the
analysis is particularly simple and can be carried out applying a parametric momentum smearing to generator-level
observables. Results are validated using full simulation and reconstruction with the CLIC-ILD detector model.
\section{Simulation data sample} \label{dtsmcs}
The simulation is performed for the cMSSM parameters of point K' of ref.~\cite{Battaglia:2003ab}.
In the cMSSM the mass parameters are defined at the GUT scale. The subsequent evolution to the
electro-weak scale is performed using the renormalisation group equations of
{\sc Isasugra~7.69}~\cite{Paige:2003mg}. Signal events are generated using
{\sc Pythia~6.125}~\cite{Sjostrand:2006za}. At 3 TeV, the production cross section for the
process $\ee \rightarrow \tilde \mu_R^+ \tilde \mu_R^-$, for unpolarised beams, is 0.71~fb.
Beamstrahlung effects on the luminosity spectrum are included using results of the CLIC beam
simulation for the 2008 accelerator parameters~\cite{Braun:2008zzb}.
Initial state radiation (ISR) is included in the event generation in {\sc Pythia}.
The following background processes have been included in the background calculation:
\begin{table}[h]
\begin{center}
\begin{tabular}{ll}
Process & Cross section \\
$\ee \rightarrow \mathrm{W^+ W^-} \rightarrow \mu^+ \mu^- \nu_{\mu} \nu_{\mu}$ &~~ $\sigma$=10.5 fb \\
$\ee \rightarrow \mathrm{Z^0 Z^0} \rightarrow \mu^+ \mu^- \nu \nu$ &~~ $\sigma$=0.5 fb \\
$\ee \rightarrow \mathrm{Inclusive~SUSY} \rightarrow \mu^+ \mu^- X$ &~~ $\sigma$=0.4 fb \\
$\ee \rightarrow \mu \nu_{e} \mu \nu_{e}$ (inclusive SM) &~~ $\sigma$=135 fb \\
\end{tabular}
\end{center}
\end{table}
The first three processes have been simulated with {\sc Pythia}.
In addition, the inclusive SM process $e^+e^- \to \mu^+ \mu^- \nu_{e} \nu_{e}$ is
generated using CompHep~\cite{Comphep}, removing the contributions from the $W^+W^-$ and $Z^0Z^0$
diagrams, to avoid double counting. The estimated cross section is 135~fb. In the background study
we neglect the $e^+e^- \to \mu^+ \mu^- \nu_{\mu} \nu_{\mu}$ contribution not due to $W^+W^-$ and
$Z^0Z^0$ decays, since its cross section is only $\simeq$0.2~fb. We assume a data sample corresponding
to an integrated luminosity of 2~ab$^{-1}$ taken at a nominal $\sqrt{s}$ energy of 3~TeV, corresponding
to $\simeq$3.5 years (1 year = $10^{7}$~s) of run at the nominal CLIC luminosity of
6$\times$10$^{34}$~cm$^{-2}$s$^{-1}$. Beam polarisation is in general extremely helpful in the
study of SUSY processes both to improve the signal-to-background ratio and as an
analyser~\cite{MoortgatPick:2005cw}. We consider here three options for beam polarisation:
i) unpolarised beams, ii) P($e^-$)=+80~\% and P($e^+$)=0~\% and iii) P($e^-$) = +80~\% and
P($e^+$)=-60~\%.
The main benefits of beam polarisation for this analysis are the suppression of the $W^+W^-$
background (by a factor of five for ii) and ten for iii)), the enhancement of the smuon
cross section (by a factor of 1.5 for ii) and 2.3 for iii)) and the possibility to disentangle
$\tilde \mu_R \tilde \mu_R$ from $\tilde \mu_R \tilde \mu_L$ and $\tilde \mu_L \tilde \mu_L$
production. In this analysis we use observables at the generator level applying a track momentum
smearing. Results are validated by comparing with fully simulated and reconstructed events in
section 3.3.7.
\section{Analysis Procedure}
\subsection{Signal topologies and event pre-selection}
The signal process has two undetected \neutralino{1}'s in the final state.
Therefore, the main characteristics of signal events are large
missing transverse momentum, missing energy and acoplanarity (see Figure~\ref{fig:evt}).
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.0cm,clip]{smuon_evt01.eps}
\end{center}
\caption{Display of a simulated $e^+e^- \to \tilde \mu_R^+ \tilde \mu_R^- \to \mu^+ \mu^- \tilde \chi^0_1 \tilde \chi^0_1$ event.}
\label{fig:evt}
\end{figure}
Despite the striking signature of two muons and large missing energy, the small
anticipated signal production cross section at the K' benchmark point, makes this
analysis rather challenging. In our analysis, the signal selection proceeds as follows.
First, we apply an event pre-selection, which requires two oppositely-charged
muons with $p_{t} \ge$ 5~GeV and $|\cos \theta| < 0.985$, where $\theta$ is the
particle polar angle. Next, we combine the values of the signal probabilities for
the following discriminating variables into a global likelihood variable {\tt Prob}:
\begin{itemize}
\item visible energy $E_{\rm vis}$,
\item missing transverse energy $E_{\perp \rm miss}$,
\item sum of the transverse momenta of the muons $\sum |p_{t}|$,
\item maximum acollinearity and acoplanarity,
\item polar angle of the missing energy vector $\theta_{miss}$,
\item invariant mass of the two muons,
\item thrust of the two muons,
\item unbalance of the muon momenta $\Delta$,
\item missing mass $M_{\rm mis}$,
\end{itemize}
where $\Delta= \left(1-\frac{ ( P_{\mu1}-P_{\mu2} )^{2} } {( P_{\mu1}+P_{\mu2})^{2} } \right)^{1/2}$
and $M_{\rm mis}=(s+M_{\rm vis}^2-2\sqrt{s} E_{\rm vis})^{1/2}$
with the missing mass calculated from the visible energy $E_{\rm vis}$ and momentum $P_{\rm vis}$,
and $M_{\rm vis}=(E_{\rm vis}^2-P_{\rm vis}^2)^{1/2}$. Figures~\ref{fig:c2mu-p1} and~\ref{fig:c2mu-p2}
show the distributions of some of the discriminating observables for signal and
background samples after pre-selection and requiring $\sqrt{s} \ge 2500$~GeV.
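The two derived observables can be written compactly; the following sketch (with hypothetical scalar inputs for the muon momenta and the visible energy and momentum) reproduces the definitions above:

```python
import math

def momentum_unbalance(p1, p2):
    """Unbalance Delta of the two muon momentum magnitudes (GeV)."""
    return math.sqrt(1.0 - ((p1 - p2) / (p1 + p2)) ** 2)

def missing_mass(sqrt_s, e_vis, p_vis):
    """Missing mass from visible energy and momentum at energy sqrt(s)."""
    m_vis_sq = e_vis ** 2 - p_vis ** 2
    return math.sqrt(sqrt_s ** 2 + m_vis_sq - 2.0 * sqrt_s * e_vis)
```

Equal muon momenta give $\Delta = 1$, while a fully unbalanced pair gives $\Delta \to 0$.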
\begin{figure}[h]
\begin{center}
\includegraphics[width=15.0cm,clip]{psel_H1EVIS205.epsi}
\end{center}
\caption{Discriminating variables used in the combined likelihood function:
(upper left) visible energy $E_{\rm vis}$,
(upper right) missing transverse energy $E_{\rm t\,miss}$,
(lower left) sum $\sum \rm p_{t}$ of the transverse momenta of the muons
and (lower right) invariant mass $M_{(\mu\mu)}$ of the two muons.
}
\label{fig:c2mu-p1}
\end{figure}
\newpage
\begin{figure}[h]
\begin{center}
\begin{tabular}{c}
\includegraphics[width=15.0cm,clip]{psel_H1ACOL205.epsi}
\end{tabular}
\end{center}
\caption{Discriminating variables used in the combined likelihood function:
(upper left) acollinearity,
(upper right) $\sin\theta_{\rm Emiss}$ of the missing energy direction,
(lower left) thrust of the two-muon system and
(lower right) distribution of the variable $\Delta$ (see text).}
\label{fig:c2mu-p2}
\end{figure}
\subsection{Final selection efficiency and background estimate}
The normalised signal-to-background ratio, S/B, values of these variables, as well as the
combined probability {\tt Prob} are computed for different detector resolution assumptions:
$\delta \rm p_{t}/p_{t}^{2}$= $2 \times 10^{-5}$, $4 \times 10^{-5}$, $6 \times 10^{-5}$,
$8 \times 10^{-5}$ and $2 \times 10^{-4}$~GeV$^{-1}$.
Fig.~\ref{fig:h1prob1} shows the
distribution of the combined probability for signal and background events, the selection efficiency
and the signal-over-background ratio as a function of the combined probability value, as well as
the signal selection efficiency as a function of muon momentum.
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccc}
\subfloat[]{\includegraphics[width=5.0cm,clip]{fsel_SandBPROBpol0smear2e-5.epsi}} &
\subfloat[]{\includegraphics[width=5.0cm,clip]{fsel_EFFIPROBpol0smear2e-5.epsi}} &
\subfloat[]{\includegraphics[width=5.0cm,clip]{fsel_psel_H1LPA205_smear2e-5_pol0_0BX_e2500C80_EFC.epsi}} \\
\end{tabular}
\end{center}
\caption{(a) Distribution of the combined probability variable for signal events (blue)
and background events (red) for $\delta \rm p_{t}/p_{t}^2$ = $2 \times 10^{-5}$~GeV$^{-1}$;
(b) efficiency and S/B as a function of the probability cut value for unpolarised beams;
(c) selection efficiency, for a cut {\tt Prob} $>$ 0.8, as a function of the muon momentum.}
\label{fig:h1prob1}
\end{figure}
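The per-variable signal probabilities are combined into the global variable {\tt Prob}; one standard choice, shown here as an assumption rather than the exact scheme used in the analysis, is a naive (uncorrelated) likelihood ratio:

```python
import numpy as np

def combined_probability(pdf_sig, pdf_bkg):
    """Naive likelihood ratio from per-variable signal and background
    probability densities evaluated for a single event."""
    l_sig = float(np.prod(pdf_sig))
    l_bkg = float(np.prod(pdf_bkg))
    return l_sig / (l_sig + l_bkg)
```

Events with densities favouring the signal hypothesis accumulate near {\tt Prob} = 1, background-like events near 0, as in Figure~\ref{fig:h1prob1}(a).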
There are two main effects on the muon momentum distribution of selected events. First, the efficiency of
the selection on the combined probability is not flat as a function of the muon momentum. Therefore, a cut on this
variable introduces an inefficiency at the lower edge of the distribution and a subsequent bias towards
higher momenta, see Figure~\ref{fig:h1prob1}(c). This inefficiency increases with the value of the
probability cut applied. The inefficiency and the bias also increase when the momentum resolution
degrades: Fig.~\ref{fig:h1prob2} shows the same distributions for
$\delta \rm p_{t}/p_{t}^2$ = $2 \times 10^{-4}$~GeV$^{-1}$, and Fig.~\ref{fig:h1prob2}(c) shows a
bias towards higher momenta. This effect is accounted for and corrected in the fits performed for
signal+background (see Figure~\ref{fig:H1SPB1}(b)).
Second, both beamstrahlung and the momentum resolution introduce a smearing of the upper momentum edge.
Both effects have a potential impact on the statistical accuracy and the bias in extracting
the SUSY particle masses from a fit to the reconstructed momentum distribution, as discussed below.
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccc}
\subfloat[]{\includegraphics[width=5.0cm,clip]{fsel_SandBPROBpol0smear2e-4.epsi}} &
\subfloat[]{\includegraphics[width=5.0cm,clip]{fsel_EFFIPROBpol0smear2e-4.epsi}} &
\subfloat[]{\includegraphics[width=5.0cm,clip]{fsel_psel_H1LPA205_smear2e-4_pol0_0BX_e2500C80_EFC.epsi}} \\
\end{tabular}
\end{center}
\caption{Same as Fig.~\ref{fig:h1prob1} for $\delta \rm p_{t}/p_{t}^2$ = $2 \times 10^{-4}$~GeV$^{-1}$.
In (c) the deformation of both the lower and the upper end of the spectrum after selection cuts is visible.}
\label{fig:h1prob2}
\end{figure}
The signal-over-background ratio also depends on the beam polarisation.
Fig.~\ref{fig:hpol} shows the efficiency and S/B as a function of the probability value for
different polarisation options.
\begin{figure}[h]
\begin{center}
\begin{tabular}{ccc}
\subfloat[]{\includegraphics[width=5.0cm,clip]{fsel_205_smear2e-5_pol0_0BX_e2500_EFFI.epsi}} &
\subfloat[]{\includegraphics[width=5.0cm,clip]{fsel_205_smear2e-5_pol5_0BX_e2500_EFFI.epsi}} &
\subfloat[]{\includegraphics[width=5.0cm,clip]{fsel_205_smear2e-5_pol10_0BX_e2500_EFFI.epsi}} \\
\end{tabular}
\end{center}
\caption{Efficiency and S/B as a function of the probability cut value for
different polarisation options: (a) no polarisation,
(b) $80\%~\mathrm{e^{-}}$ polarisation and (c) $80\%~\mathrm{e^{-}}~+~60\%~\mathrm{e^{+}}$ polarisation.
}
\label{fig:hpol}
\end{figure}
Table~\ref{tab2} lists the number of signal (S) and background (B) events, the selection efficiency
$\epsilon$ and the S/B ratio for different values of the probability cut, momentum resolution,
polarisation and time stamping values.
\begin{table} [h]
\begin{center}
\begin{tabular}{|l|l|c|c|c|c|c|c|} \hline
Prob cut & $\delta \rm {p_{t}} / {p_{t}}^{2}$ & pol & BX & $\mathrm N_{\rm sig}$ & $\mathrm N_{\rm bkg}$
& $\epsilon$ & $\mathrm N_{\rm sig} / \mathrm N_{\rm bkg}$ \\
& & ($e^{-}/e^{+})$ & & & & & \\ \hline
0.80 & 0 & 0/0 & 0 & 1315 & 2937 & 0.93 & 0.45 \\ \hline
0.80 & 2 $\times$ 10$^{-5}$ & 0/0 & 0 & 1319 & 2984 & 0.93 & 0.44 \\ \hline
0.80 & 4 $\times$ 10$^{-5}$ & 0/0 & 0 & 1319 & 2953 & 0.93 & 0.45 \\ \hline
0.80 & 6 $\times$ 10$^{-5}$ & 0/0 & 0 & 1318 & 3098 & 0.93 & 0.43 \\ \hline
0.80 & 8 $\times$ 10$^{-5}$ & 0/0 & 0 & 1317 & 3316 & 0.93 & 0.40 \\ \hline
0.80 & 2 $\times$ 10$^{-4}$ & 0/0 & 0 & 1318 & 4033 & 0.93 & 0.33 \\ \hline
0.80 & 2 $\times$ 10$^{-5}$ & 80/0 & 0 & 1319 & 1381 & 0.93 & 0.96 \\ \hline
0.80 & 2 $\times$ 10$^{-5}$ & 80/60 & 0 & 1319 & 1180 & 0.93 & 1.11 \\ \hline
0.80 & 2 $\times$ 10$^{-5}$ & 80/60 & 5 & 1317 & 1271 & 0.93 & 1.04 \\ \hline
0.80 & 2 $\times$ 10$^{-5}$ & 80/60 & 20 & 1299 & 1301 & 0.91 & 1.0 \\ \hline
0.90 & 2 $\times$ 10$^{-5}$ & 0/0 & 0 & 1285 & 2619 & 0.91 & 0.49 \\ \hline
0.90 & 2 $\times$ 10$^{-5}$ & 80/0 & 0 & 1285 & 1179 & 0.91 & 1.09 \\ \hline
\end{tabular}
\end{center}
\caption{Scalar muon selection: number of signal, $\mathrm N_{\rm sig}$, and background, $\mathrm N_{\rm bkg}$,
events for 2~ab$^{-1}$ of integrated luminosity, selection efficiency, $\epsilon$, and signal over
background ratio, $\mathrm N_{\rm sig}/ \mathrm N_{\rm bkg}$, for different probability
cut, momentum resolution, polarisation and time stamping values.}
\label{tab2}
\end{table}
\newpage
\subsection{Smuon and neutralino mass determination}
The smuon and neutralino masses are extracted from the position of the kinematic edges of the
muon momentum distribution, a technique first proposed for squarks~\cite{Feng:1993sd}, then
extensively applied to sleptons~\cite{Martyn:1999tc}:
\begin{eqnarray}
\mathrm {E_{H,L}}=\frac{\sqrt{s}}{4}\left( 1- \frac { m_{\neutralino{1}}^{2} } { m_{\smuon{\pm}_R}^{2} } \right)
\left( 1 \pm \sqrt{1 - 4 \frac {m_{\smuon{\pm}_R}^{2}} {s}} \right)
\label{formula:eleh}
\end{eqnarray}
The smuon and neutralino masses depend on the beam energy $\sqrt{s}/2$ and the kinematic edges
$E_{H,L}$ as:
\begin{eqnarray}
m_{\smuon{\pm}_R}=\frac{\sqrt{s}}{2} \left(1-\frac{( E_{H}-E_{L} )^{2}}{( E_{H}+E_{L})^{2}} \right)^{1/2}
\hspace{0.2cm} \mathrm {and} \hspace{0.2cm}
m_{\neutralino{1}}=m_{\smuon{\pm}_R} \left( 1-\frac{ 2 (E_{H}+E_{L})}{\sqrt{s}} \right)^{1/2}
\label{formula:m1m2}
\end{eqnarray}
where $E_{H}$ and $E_{L}$ are the high and low momentum edges of the muon momentum distribution.
This shows that an accurate knowledge of the shape of the luminosity spectrum must be achieved
and that the values of the masses extracted from the momentum spectrum are correlated.
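The endpoint relations and their inversion can be cross-checked numerically; the following round trip is a sketch using the generated benchmark masses quoted above:

```python
import math

SQRT_S = 3000.0  # nominal centre-of-mass energy in GeV

def endpoints(m_smu, m_chi, sqrt_s=SQRT_S):
    """Kinematic endpoints (E_H, E_L) of the muon momentum spectrum."""
    beta = math.sqrt(1.0 - 4.0 * m_smu ** 2 / sqrt_s ** 2)
    pref = 0.25 * sqrt_s * (1.0 - m_chi ** 2 / m_smu ** 2)
    return pref * (1.0 + beta), pref * (1.0 - beta)

def masses(e_h, e_l, sqrt_s=SQRT_S):
    """Invert the endpoints to (m_smuon, m_neutralino)."""
    m_smu = 0.5 * sqrt_s * math.sqrt(1.0 - ((e_h - e_l) / (e_h + e_l)) ** 2)
    m_chi = m_smu * math.sqrt(1.0 - 2.0 * (e_h + e_l) / sqrt_s)
    return m_smu, m_chi
```

With $m_{\smuon{\pm}_R} = 1108.8$~GeV and $m_{\neutralino{1}} = 554.3$~GeV the endpoints come out near 941~GeV and 184~GeV, and inverting them returns the input masses.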
We extract the $\tilde \mu_R$ and \neutralino{1} masses from a two-parameter $\chi^2$ fit to the
reconstructed momentum distribution. The fit is performed with the {\sc Minuit} minimisation
package~\cite{James:1975dr}. We model the momentum spectrum according to (\ref{formula:eleh}),
where $\sqrt{s}$ accounts for beamstrahlung and ISR effects, as discussed below. Momentum resolution
is included through a parametric smearing of the $\rm p_{t}$ distribution for the analysis performed
at generator level or full tracking for simulated and reconstructed events.
The fit also accounts for
the correlations between the $\tilde \mu_R$ and $\neutralino{1}$ masses.
To investigate the different contributions to the statistical uncertainty on the
smuon and neutralino masses, several fits are performed by changing the
input conditions.
\subsubsection{Energy spread and ISR}
We study the contribution of the centre-of-mass energy spread to the statistical accuracy
of the fit. There are three sources of energy spread: the momentum spread in the linac, which gives
a $\simeq$7.5~GeV Gaussian smearing of $\sqrt{s}$ for the CLIC parameters; beamstrahlung, which
contributes a long tail; and initial state radiation (ISR). The first two are induced by the machine
and we shall refer to them collectively as ``luminosity spectrum''. We estimate the contribution of
the luminosity spectrum to the statistical accuracy on the masses and of the knowledge of its shape
to the mass accuracy and bias. We use the luminosity spectrum obtained from the
{\sc GuineaPig}~\cite{c:thesis} beam simulation for the 2008 CLIC parameters.
First, we compare the results of the fit for i) events generated without luminosity spectrum spread
at $\sqrt{s}$ = 3~TeV, ii) events in the main peak of the luminosity spectrum, 2950$< \sqrt{s} <$ 3020~GeV
and iii) all events with $\sqrt{s} >$ 2500~GeV. In all these cases we apply a loose signal selection and
we assume no resolution smearing for the muon momentum. Even without the luminosity spectrum
contribution, the sum of the energies of the colliding electrons extends to energies significantly below
the nominal $\sqrt{s}$ due to QED effects. We model the ISR spectrum by an approximate solution to the
Gribov-Lipatov equation, proposed in~\cite{Skrzypek:1990qs}. In this formula we leave the parameter $\eta$
and the fraction of events off the full-energy peak free, and determine both by a fit to the ISR
spectrum of {\sc Pythia} signal events (see Figure~\ref{fig:isr_circe}).
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\subfloat[ISR]{\includegraphics[width=7.0cm,clip]{isr_fit.eps}} &
\subfloat[Beamstrahlung]{\includegraphics[width=7.0cm,clip]{circe_fit.eps}} \\
\end{tabular}
\end{center}
\caption{Centre-of-mass energy distribution including (a) ISR and (b) ISR and beamstrahlung.
The points represent the simulation
and the lines the functions used for describing their distribution in the mass fit.}
\label{fig:isr_circe}
\end{figure}
The resulting function is used to fold the ISR contribution in the shape of the muon momentum spectrum
used in the mass fits. Fig.~\ref{fig:isrbs} shows the effect of ISR and ISR + beamstrahlung on the signal
muon momentum spectrum.
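A toy version of the ISR energy loss can be generated from the leading $(1-x)^{\eta/2-1}$ behaviour of the approximate radiator (both the functional form and the value of $\eta$ used below are assumptions; in the analysis $\eta$ is obtained from the fit described above):

```python
import numpy as np

def sample_isr_sqrt_s(n, sqrt_s=3000.0, eta=0.1, seed=0):
    """Draw n values of the effective sqrt(s') after ISR from the
    leading-order radiator shape f(x) ~ (eta/2) (1-x)^(eta/2-1),
    with x = s'/s, via inverse-transform sampling."""
    rng = np.random.default_rng(seed)
    x = 1.0 - rng.random(n) ** (2.0 / eta)   # CDF inversion, peaked at x -> 1
    return sqrt_s * np.sqrt(x)
```

Most events stay close to the nominal energy, with a long tail towards lower $\sqrt{s'}$, qualitatively reproducing Figure~\ref{fig:isr_circe}(a).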
\begin{figure}[h]
\begin{center}
\includegraphics[width=7.5cm,clip]{smuonL_ISR_BS.eps} \\
\end{center}
\caption{Signal muon momentum spectrum with neither ISR/FSR nor beamstrahlung effects (light grey),
with ISR and FSR only (grey) and with beamstrahlung effects added (black), showing the progressive smearing
of the upper kinematic edge.}
\label{fig:isrbs}
\end{figure}
In order to assess the effect of the knowledge of the luminosity spectrum on the mass measurement
accuracy, we consider the luminosity spectrum obtained from {\sc Calypso} for simulated signal
events and we model it using the parametrisation proposed in~\cite{Ohl:1996fi}. This parametrisation
has two components: a core, which we assume to be Gaussian, and a tail. We perform a $\chi^2$ fit to the
luminosity spectrum with five free parameters: the width of the Gaussian core, two parameters describing
the tail shape and two normalisation coefficients. The result of the fit is shown in Figure~\ref{fig:isr_circe}.
Then, we compare the results of the mass fit when we use the fitted parameters of the luminosity
spectrum parametrisation to those we obtain by varying these by $\pm$15\% of their values in a fully
correlated way. This change of parameters corresponds to a change of the average $\sqrt{s}$ value by
$\pm$2$\times$10$^{-3}$. The mass and statistical uncertainty of the smuon change by $\pm$0.8~GeV and
$\pm$15\%, respectively, and those of the neutralino by $\pm$1.6~GeV and 10\%, respectively. The actual
accuracy of the determination of the shape of the luminosity spectrum will need to be assessed from a
detailed study of observables such as the electron acollinearity in Bhabha events~\cite{cliclum}, but
the resulting uncertainties are expected not to exceed those assumed here.
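The two-component parametrisation of the luminosity spectrum can be sketched as follows (the concrete functional form of the tail is an assumption in the spirit of~\cite{Ohl:1996fi}, not the exact parametrisation used in the fit):

```python
import numpy as np
from scipy.optimize import curve_fit

def lumi_model(x, n_core, sigma, n_tail, b, c):
    """Gaussian core at x = sqrt(s')/sqrt(s_nom) = 1 plus a beta-like
    low-energy tail; five free parameters as in the fit described above."""
    core = n_core * np.exp(-0.5 * ((x - 1.0) / sigma) ** 2)
    tail = n_tail * (1.0 - x + 1e-6) ** (b - 1.0) * x ** c
    return core + tail

def fit_lumi(x, y, p0):
    """Least-squares fit of the five-parameter model to a spectrum."""
    popt, _ = curve_fit(lumi_model, x, y, p0=p0, maxfev=20000)
    return popt
```

The fitted core width then provides the effective Gaussian spread entering the mass fit.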
Fig.~\ref{fig:H1AAA}(a) shows the fitted muon momentum distribution for events with
2950~GeV $\le \sqrt{s} \le$ 3000~GeV and Fig.~\ref{fig:H1AAA}(b) for events with
2500~GeV $\le \sqrt{s} \le$ 3000~GeV. Results are summarised in Table~\ref{tab:summary}.
The fitted masses are in agreement with the generated values, $m_{\smuon{\pm}_R}$ = 1109~GeV and
$m_{\neutralino{1}}$ = 554~GeV, within the statistical uncertainties.
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\subfloat[2950~GeV $\le \sqrt{s} \le$ 3000~GeV]
{\includegraphics[width=6.0cm,clip]{MASS_H1LPG205_205_selall_nosmear_pol0_0BX_e2950C50.epsi}} &
\subfloat[2500~GeV $\le \sqrt{s} \le$ 3000~GeV]
{\includegraphics[width=6.0cm,clip]{MASS_H1LPG205_205_selall_nosmear_pol0_0BX_e2500C50.epsi}} \\
\end{tabular}
\end{center}
\caption{Fits to the signal muon momentum spectrum for two selections on $\sqrt{s}$.}
\label{fig:H1AAA}
\end{figure}
\subsubsection{Muon photon radiation (FSR)}
A further source of resolution loss is photon radiation off the muons. At 3~TeV, a muon radiates a photon
in about 15\% of the events.
A fit to the muon momentum distribution for signal events, applying only a loose selection with a
probability cut of 0.5 and a cut on the centre-of-mass energy, and without momentum resolution smearing,
leads to a small increase of the uncertainty on the neutralino mass, but to a shift of the fitted mass value
(see Table~\ref{tab:summary}).
\subsubsection{Event selection systematics}
The signal selection cut may introduce a bias in the muon momentum distribution which
propagates to the fitted smuon and neutralino masses. In order to
study this effect, we fit the muon momentum distribution for signal events
with momentum resolution smearing applied and two different probability cuts,
0.8 and 0.99.
For a cut at 0.99 we obtain $m_{\smuon{\pm}_R}=1127.6 \pm 3.5$~GeV and $m_{\neutralino{1}}=557.6 \pm 1.7$~GeV;
for a cut at 0.8, $m_{\smuon{\pm}_R}=1104.6 \pm 3.0$~GeV and $m_{\neutralino{1}}=560.0 \pm 1.6$~GeV.
For the events selected with a cut at 0.8 the fitted masses are in agreement with
the generated values, while for the tighter cut at 0.99 the results are significantly biased.
The bias could be eliminated by applying an efficiency correction, which, however, could introduce
systematic uncertainties. Therefore, for this analysis we adopt a selection cut at 0.8, which appears
safe both in terms of signal-to-background ratio and signal bias.
\subsubsection{Muon momentum resolution}
Next, we estimate the contribution of the muon momentum resolution
to the accuracy of the masses obtained from the fit. In multi-TeV collisions
there is no equivalent of the Higgsstrahlung process $e^+e^- \to H^0Z^0 \to X \ell^+ \ell^-$
($\ell$ = $e$, $\mu$), which sets a strict requirement on the momentum
resolution at lower $\sqrt{s}$ values. Reactions such as smuon production in SUSY
and $H^0 \to \mu^+ \mu^-$ in the SM~\cite{Battaglia:2008aa} can provide useful
guidance on the track momentum resolution requirements at high energies. We express
the resolution in terms of $\delta \rm p_{t}/p_{t}^2$, where $\rm p_{t}$ is the
momentum component in the plane normal to the beam axis.
We perform the mass fit for signal events fulfilling a loose selection and
2500~GeV $\le \sqrt{s} \le$ 3000~GeV assuming different momentum resolution
values: $\delta \rm {p_t} / {p_t}^{2}$ = 0, $2 \times 10^{-5}$, $4 \times 10^{-5}$,
$6 \times 10^{-5}$, $8 \times 10^{-5}$ and $2 \times 10^{-4}$~GeV$^{-1}$.
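In this generator-level study the detector response enters only through a smearing of the transverse momentum with $\sigma(p_t) = a\,p_t^2$; a minimal implementation (the Gaussian shape is an assumption consistent with the parametric smearing described above) is:

```python
import numpy as np

def smear_pt(pt, a, seed=0):
    """Gaussian smearing with constant delta(pt)/pt^2 = a (GeV^-1),
    i.e. sigma(pt) = a * pt^2 for pt in GeV."""
    pt = np.asarray(pt, dtype=float)
    rng = np.random.default_rng(seed)
    return pt + rng.normal(0.0, a * pt ** 2)
```

For a 500~GeV muon, $a = 2\times10^{-5}$~GeV$^{-1}$ corresponds to a 5~GeV (1\%) smearing, while $a = 2\times10^{-4}$~GeV$^{-1}$ gives 50~GeV (10\%), which visibly washes out the upper endpoint.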
Fig.~\ref{fig:H1DP1} and \ref{fig:H1DP2} show the fits to the signal muon momentum distribution
for various momentum resolution values.
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\subfloat[$\delta \rm {p_t} / {p_t}^{2}$ = $2 \times 10^{-5}$~GeV$^{-1}$]
{\includegraphics[width=6.0cm,clip]{MASS_H1LPA205_205_selall_smear2e-5_pol0_0BX_e2500C50.epsi}} &
\subfloat[$\delta \rm {p_t} / {p_t}^{2}$ = $4 \times 10^{-5}$~GeV$^{-1}$]
{\includegraphics[width=6.0cm,clip]{MASS_H1LPA205_205_selall_smear4e-5_pol0_0BX_e2500C50.epsi}} \\
\end{tabular}
\end{center}
\caption{Fits to the signal muon momentum spectrum for momentum smearing of
(a) $\delta \rm {p_t} / {p_t}^{2}$ = $2 \times 10^{-5}$~GeV$^{-1}$ and
(b) $\delta \rm {p_t} / {p_t}^{2}$ = $4 \times 10^{-5}$~GeV$^{-1}$.}
\label{fig:H1DP1}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\subfloat[$\delta \rm {p_t} / {p_t}^{2}$ = $8 \times 10^{-5}$~GeV$^{-1}$]
{\includegraphics[width=6.0cm,clip]{MASS_H1LPA205_205_selall_smear8e-5_pol0_0BX_e2500C50.epsi}} &
\subfloat[$\delta \rm {p_t} / {p_t}^{2}$ = $2 \times 10^{-4}$~GeV$^{-1}$]
{\includegraphics[width=6.0cm,clip]{MASS_H1LPA205_205_selall_smear2e-4_pol0_0BX_e2500C50.epsi}} \\
\end{tabular}
\end{center}
\caption{Fits to the signal muon momentum spectrum for momentum smearing of
(a) $\delta \rm p_{t} / p_{t}^{2}$ = $8 \times 10^{-5}$~GeV$^{-1}$ and
(b) $\delta \rm p_{t} / p_{t}^{2}$ = $2 \times 10^{-4}$~GeV$^{-1}$.}
\label{fig:H1DP2}
\end{figure}
The smuon and neutralino masses are in good agreement with the generated masses.
The uncertainty on the masses starts being significantly impacted from the momentum
resolution when $\delta \rm {p_t} / {p_t}^{2}$ is larger than $5 \times 10^{-5}$~GeV$^{-1}$
(see Table~\ref{tab:summary}).
\subsubsection{Background subtraction}
The cross sections for the SM processes which can lead to the same final state as the signal
are one to two orders of magnitude larger than that of the $\tilde \mu_R^+ \tilde \mu_R^-$
signal, in the absence of beam polarisation. In order to assess the impact of the background on the
statistical accuracy of the extraction of the $\tilde \mu_R$ and $\tilde \chi^0_1$ masses we
repeat the analysis of the momentum distribution with both signal and background events.
\begin{figure}[h!]
\begin{center}
\begin{tabular}{cc}
\subfloat[]
{\includegraphics[width=6.0cm,clip]{SPB_fsel_H1LPA_205_selall_smear2e-5_pol5_0BX_e2500_C80.epsi}} &
\subfloat[]
{\includegraphics[width=6.0cm,clip]{FITSPB_fsel_H1LPA_205_selall_smear2e-5_pol5_0BX_e2500_C80.epsi}}\\
\end{tabular}
\end{center}
\caption{(a) Muon momentum spectrum for signal + background events, with the
different components and the fitted background shape highlighted; (b) fit to the
muon momentum distribution for background-subtracted events. The simulation assumes 80~\% electron
polarisation, a momentum resolution $\delta \rm {p_t} / {p_t}^{2}$ = $2 \times 10^{-5}$~GeV$^{-1}$
and a selection cut value of 0.8.}
\label{fig:H1SPB1}
\end{figure}
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\subfloat[]
{\includegraphics[width=6.0cm,clip]{FITSPB_fsel_H1LPA_205_selall_smear2e-5_pol0_0BX_e2500_C80.epsi}} &
\subfloat[]
{\includegraphics[width=6.0cm,clip]{FITSPB_fsel_H1LPA_205_selall_smear2e-5_pol10_0BX_e2500_C80.epsi}} \\
\end{tabular}
\end{center}
\caption{Fit to the muon momentum distribution for background-subtracted events. The simulation assumes
(a) no beam polarisation and (b) 80~\% electron and 60~\% positron polarisation, with a momentum resolution
$\delta \rm {p_t} / {p_t}^{2}$ = $2 \times 10^{-5}$~GeV$^{-1}$ and a selection cut value of 0.8.}
\label{fig:H1SPB2}
\end{figure}
The $W^+W^-$
background is modelled using an ``ARGUS'' function~\cite{argus} in the range $p_{\mu} > 200$~GeV
and a first-order polynomial in the range 100~GeV $< p_{\mu} <$ 200~GeV. The other backgrounds are
modelled using a polynomial distribution. These functions are fitted
to the momentum distribution of background events passing all the selection cuts and used
to subtract the estimated background contribution from the signal + background momentum
distribution. After background subtraction the signal distribution is corrected to
take into account the momentum dependent selection efficiency.
The fit is performed on the background-subtracted momentum spectrum.
Fig.~\ref{fig:H1SPB1} shows the muon momentum distribution for signal and background
events before (a) and after (b) background subtraction. Events are selected with a probability cut
of 0.8 and the background is scaled assuming an 80~\% electron beam polarisation.
Fig.~\ref{fig:H1SPB2} shows the muon momentum distribution for background-subtracted events assuming
(a) no polarisation and (b) both electron and positron polarisation.
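The background-subtraction step described above can be sketched as follows. The exact ARGUS parametrisation used in the note is not reproduced here, so the form below (a common variant with kinematic cut-off $c$ and curvature $\chi$) and the function names are our assumptions:

```python
import numpy as np

def argus(x, c, chi, norm=1.0):
    """A common variant of the ARGUS background shape with kinematic
    cut-off c and curvature chi; identically zero outside (0, c)."""
    x = np.asarray(x, dtype=float)
    u = np.clip(1.0 - (x / c) ** 2, 0.0, None)
    return np.where((x > 0.0) & (x < c),
                    norm * x * np.sqrt(u) * np.exp(-chi * u),
                    0.0)

def subtract_background(counts, bin_centres, bkg_model):
    """Subtract a fitted background model from a binned momentum
    spectrum, bin by bin (efficiency correction would follow)."""
    counts = np.asarray(counts, dtype=float)
    return counts - bkg_model(np.asarray(bin_centres, dtype=float))
```

In practice the shape parameters would be fitted to the background-only sample before the subtraction, as done in the analysis.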
The polarisation of the electron beam only (option ii)) allows us to improve the measurement of the smuon and
neutralino masses by 44~\% and 59~\% to a relative statistical accuracy of 0.8\%. Adding positron beam
polarisation (option iii)) further reduces these uncertainties to 0.6\% and 0.5\%, respectively
(see Table~\ref{tab:summary}). Background rejection through the use of polarised beams is far
superior to what can be achieved using tighter cuts in the absence of polarisation, as shown by a comparison
of the results obtained with a 0.8 probability cut and electron polarisation to those for a tighter cut at 0.9
for unpolarised beams in Table~\ref{tab:summary}. A dedicated energy scan of the smuon pair production threshold
can further improve the measurements of these masses, also reducing their correlation.
\subsubsection{$\gamma\gamma \to {\mathrm{hadrons}}$ Background}
In $e^+e^-$ collisions a high rate of $\gamma\gamma$ collisions arises from photons radiated in the
electromagnetic interactions of the colliding beams.
On average there are about 3.3 $\gamma\gamma \rightarrow \mathrm{hadrons}$ events per bunch crossing (BX).
The products of the $\gamma\gamma$ interactions overlap with those from the interactions under
study. At CLIC, the 312 bunches of a train, separated by 0.5~ns, generate a significant number
of extra particles which are superimposed on the products of the main $e^+e^-$ event
and degrade the quality of the measurement of its properties~\cite{gghad}.
To estimate the contribution of this background to the uncertainty on the smuon and
neutralino masses, particles from the $\gamma\gamma \rightarrow \mathrm{hadrons}$ background are overlaid
on signal and SM events, assuming a detector time stamping capability corresponding to the integration
of 5~BX and 20~BX. In this analysis the main effect is the change in the efficiency of the signal
selection. The normalised signal-to-background (S/B) probabilities of the discriminating variables,
as well as the combined probability {\tt Prob}, are computed for a detector resolution of
$\delta \rm p_{t}/p_{t}^{2}$ = 2 $\times$ 10$^{-5}$~GeV$^{-1}$. We find that for the integration
of 5~BX, the selection efficiency remains virtually unchanged at 0.93, while for 20~BX it becomes 0.91.
\subsubsection{Full Simulation and Reconstruction}
Finally, we repeat the analysis using fully simulated and reconstructed signal events.
The beamstrahlung effects on the luminosity spectrum are included.
The simulation is performed using the {\sc Geant-4}-based~\cite{Agostinelli:2002hh}
{\sc Mokka} program~\cite{MoradeFreitas:2004sq} with the CLIC01-ILD detector geometry,
which is based on the ILD detector concept being developed for the ILC.
Events are subsequently reconstructed using the {\sc Marlin} reconstruction
program~\cite{Gaede:2006pj}. Figure~\ref{fig:ptres} shows the measured momentum resolution
$\delta \rm p_{t}/p_{t}^2$ obtained for muons in signal events. The masses and accuracies from the fit to
the fully simulated and reconstructed events, (1118.4$\pm$~3.0)~GeV and (569.1 $\pm$~1.5)~GeV,
agree with those obtained at generation level with 2$\times$10$^{-5}$~GeV$^{-1}$ momentum smearing
(see Table~\ref{tab:summary}).
\begin{figure}[h]
\begin{center}
\begin{tabular}{cc}
\subfloat[$\delta \rm {p_t} / {p_t}^2$ Resolution from Fully Simulated and Reconstructed Events]
{\includegraphics[width=9.0cm,clip]{ptRes.eps}} &
\subfloat[Momentum Spectrum from Fully Simulated and Reconstructed Events]
{\includegraphics[width=6.0cm,clip]{MASS_H1LPA205_205_selall_smear2e-5_pol0_0BX_e2500DST.epsi}}
\end{tabular}
\end{center}
\caption{Validation using fully simulated and reconstructed events for the CLIC01-ILD detector.
(Left) Distribution of the difference between the generated and reconstructed $\rm p_{t}$
of muons normalised to the squared $\rm p_{t}$ ($\delta \rm p_{t}/p_{t}^2$), after full simulation
and reconstruction. The width of the fitted Gaussian curve is 1.8 $\times$ 10$^{-5}$~GeV$^{-1}$.
(Right) Fit to the signal muon momentum spectrum.}
\label{fig:ptres}
\end{figure}
\begin{table}[h]
\caption{Summary of the results of the fits to the smuon and neutralino masses for various assumptions on
track momentum resolution, beamstrahlung, polarisation and the number of bunch crossings integrated in one
event. The results obtained on signal only (S) at generator level are also compared to those from full
simulation and reconstruction and to signal+background (S+B) fits.}
\begin{center}
\begin{tabular}{|c|c|c|c|c|c|c|}
\hline
$\delta \rm p_{t}/p_{t}^2$ & $\sqrt{s}>$ & Data & Pol & BX &\multicolumn{2}{c|}{(M$\pm \sigma_M$) (GeV)} \\
($\times 10^{-5}$ GeV$^{-1}$) & (GeV) & Set & (e$^-$/e$^+$) & & $\tilde \mu_R^{\pm}$ & $\tilde \chi_1^0$ \\ \hline
~0. & 2950 & S & ~0/~0 & ~0 & 1106.3$\pm$~2.9 & 558.8 $\pm$~1.3 \\
~0. & 2500 & S & ~0/~0 & ~0 & 1098.8$\pm$~2.6 & 555.4 $\pm$~1.2 \\
~0. & 2500 (ISR only) & S & ~0/~0 & ~0 & 1109.2$\pm$~3.2 & 555.4 $\pm$~1.2 \\
\hline
~0. & 2500 & S (No FSR Cor) & ~0/~0 & ~0 & 1095.3$\pm$~3.2 &
557.7 $\pm$~1.3 \\
\hline
~2. & 2500 & S & ~0/~0 & ~0 & 1104.6$\pm$~2.9 & 560.0 $\pm$~1.7 \\
~2. & 2500 & S (G4+Reco) & ~0/~0 & ~0 & 1107.1$\pm$~2.8 & 560.1 $\pm$~1.5 \\
~4. & 2500 & S & ~0/~0 & ~0 & 1102.8$\pm$~2.9 & 557.2 $\pm$~2.8 \\
~6. & 2500 & S & ~0/~0 & ~0 & 1098.8$\pm$~3.1 & 559.1 $\pm$~3.6 \\
~8. & 2500 & S & ~0/~0 & ~0 & 1101.0$\pm$~3.4 & 564.2 $\pm$~4.0 \\
20. & 2500 & S & ~0/~0 & ~0 & 1107.5$\pm$~4.2 & 575.7 $\pm$~5.3 \\ \hline
~2. & 2500 & S+B (0.8) & ~0/~0 & ~0 & 1107.5$\pm$15.5 & 542.5 $\pm$~11.3 \\
~2. & 2500 & S+B (0.9) & ~0/~0 & ~0 & 1107.5$\pm$14.4 & 551.2 $\pm$~12.0 \\
~2. & 2500 & S+B (0.8) & 80/~0 & ~0 & 1107.7$\pm$~8.7 & 542.6 $\pm$~4.6 \\
~2. & 2500 & S+B (0.8) & 80/60 & ~0 & 1118.5$\pm$~6.1 & 551.3 $\pm$~3.0 \\
\hline
~2. & 2500 & S+B (0.8) & 80/60 & ~5 & 1105.7$\pm$~6.3 & 549.4 $\pm$~3.9 \\
~2. & 2500 & S+B (0.8) & 80/60 & 20 & 1113.2$\pm$~6.8 & 550.3 $\pm$~3.4 \\
\hline
\end{tabular}
\end{center}
\label{tab:summary}
\end{table}
\subsection{Summary}
This study allows us to draw some conclusions on the potential of a 3~TeV
CLIC collider in SUSY spectroscopic measurements and some of the requirements on the
detector and the beams. Because of the tiny production cross section in the chosen high-mass
scenario, background subtraction is the dominant source of statistical uncertainty.
Electron beam polarisation at $\simeq$~80~\% gives an equivalent luminosity gain of a factor
of six and is essential to recover precision. Positron polarisation is desirable, since it
gives an additional gain of a factor of two in equivalent luminosity and it also allows us to
disentangle the contributions of $\tilde \mu_L$ and $\tilde \mu_R$. Smuon and neutralino masses
of 1108.8~GeV and 554.3~GeV, respectively, can be extracted from the muon kinematics, in events
with two oppositely charged muons and missing energy, with a relative statistical accuracy
$\sim$~0.5~\% with 2~ab$^{-1}$ of integrated luminosity and both beams polarised.
In addition, the signal production cross section of 0.7 fb can be determined
with a relative statistical uncertainty of 2.0~\%.
Since the major sources of smearing of the kinematic edges of the muon momentum spectrum
are beamstrahlung and ISR, the track momentum resolution does not appear to be critical for
the measurement of the smuon mass, as long as a resolution
$\delta \rm p_{t}/p_{t}^2 \le 5 \times 10^{-5}$~GeV$^{-1}$ can be achieved, though it remains
important for the neutralino mass. It is important to have good control of the luminosity
spectrum and desirable to keep the beamstrahlung not significantly beyond the level corresponding
to the 2008 CLIC parameters.
Finally, the effect of the overlay of $\gamma \gamma \to {\mathrm{hadrons}}$ events from machine-induced
background does not lead to any significant degradation of the signal selection efficiency for a detector
with time stamping capability of 10~ns.
\section{Acknowledgements}
We are grateful to Daniel~Schulte for making the luminosity spectrum and
generated $\gamma \gamma \to \mathrm{hadrons}$ events available and to Dieter Schlatter
for his careful reading of this note.
\newpage
\section{Hitchin representations}\label{sectionHitchin}
In this section we give a short overview of surface and orbifold Hitchin representations.
\subsection{Spaces of representations} \label{section_irrep}
\sloppy Let $G$ be a Lie group and let $\Gamma$ be a group with a finite presentation $\langle \alpha_1, \ldots, \alpha_k \ | \ r_1, \ldots , r_m \rangle$.
Then every relator $r_i$ defines a map $R_i \colon\thinspace G^k \to G$.
If we let $\text{Hom}(\Gamma, G) = \cap_{i=1}^m R_i^{-1}(Id)$, then the map $\phi \mapsto (\phi(\alpha_1), \ldots, \phi(\alpha_k))$ is a bijection between the set of all group homomorphisms from $\Gamma$ to $G$ and $\text{Hom}(\Gamma, G)$.
We will regard $\text{Hom}(\Gamma, G)$ as having the subspace topology from $G^k$.
Let $\text{Hom}^+(\Gamma, G)$ be the subset of representations in $\text{Hom}(\Gamma,G)$ which decompose as a direct sum of irreducible representations and let
$\text{Rep}^+(\Gamma, G) = \text{Hom}^+(\Gamma,G)/G$
be the quotient space by the conjugation action.
With the quotient topology $\text{Rep}^+(\Gamma, G)$ has the structure of an algebraic variety (\cite{BGGW_07_RepsNotes} sec. 5.2).
In the following we will frequently use the representation
\begin{eqnarray}\label{SLirrep}
\tilde\omega_n \colon\thinspace SL(2,\mathbb{R}) \to SL(n,\mathbb{R})
\end{eqnarray}
given by the action of $SL(2,\mathbb{R})$ on the vector space $\mathcal{P}$ of homogeneous polynomials in 2 variables of degree $n-1$.
If $n = 2k$ is even, the image of $\tilde\omega_n$ is contained in the symplectic group $Sp(2k, \mathbb{R})$, and if $n = 2k + 1$ is odd, it is contained in a group isomorphic to $SO(k, k + 1)$.
It is well known that the representation $\tilde\omega_n$ is absolutely irreducible and is, up to conjugation, the unique irreducible representation from $SL(2,\mathbb{R})$ into $SL(n,\mathbb{R})$.
This representation induces a \textit{projective representation} $\omega_n \colon\thinspace PSL(2,\mathbb{R}) \to PSL(n,\mathbb{R})$ which is also irreducible and unique up to conjugation.
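For $n=3$ the invariant structure of $\omega_3$ can be made explicit and checked numerically: on the basis $(x^2, xy, y^2)$ of binary quadratics, the discriminant $\beta^2 - 4\alpha\gamma$ is preserved by the $SL(2,\mathbb{R})$-action, so the image of $\tilde\omega_3$ lies in the orthogonal group of a signature-$(1,2)$ form. This is a sketch; the basis ordering, substitution convention and helper name are our choices:

```python
import numpy as np

def sym_square(g):
    """omega_3: action of g in SL(2,R) on binary quadratics with basis
    (x^2, xy, y^2), via the substitution x -> a x + c y, y -> b x + d y."""
    a, b, c, d = g[0, 0], g[0, 1], g[1, 0], g[1, 1]
    return np.array([[a * a,     a * b,         b * b],
                     [2 * a * c, a * d + b * c, 2 * b * d],
                     [c * c,     c * d,         d * d]])

# the discriminant beta^2 - 4*alpha*gamma corresponds to the form J below
J = np.array([[0.0, 0.0, -2.0],
              [0.0, 1.0,  0.0],
              [-2.0, 0.0, 0.0]])

g = np.array([[2.0, 1.0], [3.0, 2.0]])   # det g = 1
M = sym_square(g)                        # det M = 1 and M^T J M = J
```

The same computation for even $n$ exhibits an invariant alternating form instead, matching the $Sp$/$SO$ dichotomy stated above.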
\subsection{Hitchin representations of surface groups}
Let $S$ be a closed surface of genus $g>1$.
In 1988 Goldman proved that $\text{Rep}^+(\pi_1(S),PSL(2,\mathbb{R}))$ has $4g-3$ connected components,
two of which are diffeomorphic to $\mathbb{R}^{6g-6}$; these two components are called \textit{Teichm\"uller spaces} (\cite{Goldman_88_TopComponents} thm. A, see also the note at the end of thm. 10.2 in \cite{Hitchin92}).
The two Teichm\"uller spaces $\mathcal{T}^\pm(S)$ are precisely the sets of conjugacy classes by $PSL(2,\mathbb{R})$ of \textit{Fuchsian representations}, which are discrete and faithful representations $\rho\colon\thinspace \pi_1(S) \to PSL(2,\mathbb{R})\equiv\text{Isom}^+(\mathbb{H}^2 )$.
\begin{definition}\label{fuchsian_def}\em
For $n>2$ a representation $r\colon\thinspace \pi_1(S) \to PSL(n,\mathbb{R})$ is called \textit{Fuchsian} if it can be decomposed as $r = \omega_n\circ r_0$ where $r_0\colon\thinspace \pi_1(S)\to PSL(2,\mathbb{R})$ is discrete and faithful, and $\omega_n\colon\thinspace PSL(2,\mathbb{R})\to PSL(n,\mathbb{R})$ is the unique irreducible representation introduced in section \ref{section_irrep}.
\end{definition}
\begin{definition}\em
The \textit{Fuchsian locus} is the set of all $PSL(n,\mathbb{R})$ conjugacy classes of Fuchsian representations, namely the set $\omega_n(\mathcal{T}^\pm(S))$.
\end{definition}
The space $\text{Rep}^+(\pi_1(S), PSL(n,\mathbb{R}))$ has three connected components if $n$ is odd and six if $n$ is even (\cite{Hitchin92}, thm. 10.2).
The Fuchsian locus is contained in one component in the odd case and in two components in the even case.
Each of these distinguished components, called \textit{Hitchin components}, is diffeomorphic to $\mathbb{R}^{(2g-2)(n^2-1)}$.
When $n>2$ is even, both Hitchin components are related by an inner automorphism of $PSL(n,\mathbb{R})$.
In the odd case, where there is only one component, we will denote the Hitchin component by $\text{Hit}(\pi_1(S),PSL(n,\mathbb{R}))$.
\begin{definition}\em
Let $S$ be a closed surface of genus greater than one.
A representation $r \colon\thinspace \pi_1(S) \to PSL(n,\mathbb{R})$ is a \textit{surface Hitchin representation} if its $PSL(n,\mathbb{R})$-conjugacy class belongs to a Hitchin component of $\text{Rep}^+(\pi_1(S),PSL(n,\mathbb{R}))$.
\end{definition}
In \cite{Labourie_06_AnosovReps}, Labourie introduces \textit{Anosov representations} and proves that surface Hitchin representations are $B$-Anosov where $B$ is any Borel subgroup of $PSL(n,\mathbb{R})$.
This endows surface Hitchin representations with important algebraic properties, of which we will use theorem \ref{Hitchin_reps} below.
\begin{definition}[\cite{bridgeman2020simple} sec. 2.2]\label{loxo_def}\em
A matrix $A\in SL(n,\mathbb{R})$ is \textit{purely loxodromic} if it is diagonalizable over $\mathbb{R}$ with eigenvalues of distinct modulus. If $A \in PSL(n,\mathbb{R})$ then we say $A$ is \textit{purely loxodromic} if any lift of $A$ to an element of $SL(n,\mathbb{R})$ is purely loxodromic.
\end{definition}
\begin{theorem}[\cite{Labourie_06_AnosovReps} thm. 1.5, lemma 10.1]\label{Hitchin_reps}
A surface Hitchin representation $r\colon\thinspace \pi_1(S) \to PSL(n,\mathbb{R})$ is discrete, faithful and strongly irreducible.
Moreover, the image of every non-trivial element of $\pi_1(S)$ under $r$ is purely loxodromic.
\end{theorem}
\subsection{Hitchin representations of orbifold groups}
Let $\mathcal{O}$ be a 2-dimensional closed orbifold of negative orbifold Euler characteristic $\chi(\mathcal{O})$ and let $\pi_1(\mathcal{O})$ be its orbifold fundamental group.
In \cite{Thurston_78_GeometryNotes} Thurston proves there is a connected component of the representation space $\text{Rep}(\pi_1(\mathcal{O}), PGL(2,\mathbb{R}))$
that parametrizes hyperbolic structures on $\mathcal{O}$.
This component is called the \textit{Teichm\"uller space} of the orbifold $\mathcal{O}$; we will denote it by $\mathcal{T}(\mathcal{O})$.
As with surfaces, the orbifold Teichm\"uller space consists of conjugacy classes of discrete and faithful representations of $\pi_1(\mathcal{O})$ into $PGL(2,\mathbb{R}) \equiv \text{Isom}(\mathbb{H}^2 )$, which we will call \textit{Fuchsian representations} too.
More recently, Alessandrini, Lee, and Schaffhauser used the irreducible representation $\omega_n$ to define the \textit{Hitchin component} $\text{Hit}(\pi_1(\mathcal{O}),PGL(n,\mathbb{R}))$ of $\text{Rep}(\pi_1(\mathcal{O}),PGL(n,\mathbb{R}))$ as the unique connected component of this representation space which contains the connected Fuchsian locus $\omega_n(\mathcal{T}(\mathcal{O}))$ (\cite{ALS_19_orbifoldHitchin} def. 2.3), and they prove that $\text{Hit}(\pi_1(\mathcal{O}),PGL(n,\mathbb{R}))$ is homeomorphic to an open ball (\cite{ALS_19_orbifoldHitchin} thm. 1.2).
\begin{definition}[\cite{ALS_19_orbifoldHitchin} def. 2.4]\em
Let $\mathcal{O}$ be a 2-dimensional connected closed orbifold with negative orbifold Euler characteristic.
A representation $r\colon\thinspace \pi_1(\mathcal{O}) \to PGL(n,\mathbb{R})$ is an \textit{orbifold Hitchin representation} if its $PGL(n,\mathbb{R})$-conjugacy class belongs to the Hitchin component $\text{Hit}(\pi_1(\mathcal{O}), PGL(n,\mathbb{R}))$ of $\text{Rep}(\pi_1(\mathcal{O}),PGL(n,\mathbb{R}))$.
\end{definition}
The definition of Anosov representations has been generalized by Guichard and Wienhard (\cite{GuichardWienhard_12_Anosov} def. 2.10) to include representations of word hyperbolic groups into semisimple Lie groups.
With this more general definition, and just as their surface counterparts, orbifold Hitchin representations are also $B$-Anosov where $B$ is a Borel subgroup of $PGL(n,\mathbb{R})$ (\cite{ALS_19_orbifoldHitchin} prop. 2.16) and thus share some strong algebraic properties.
\begin{theorem}[\cite{ALS_19_orbifoldHitchin} thm. 1.1]\label{orbifoldHitchin}
An orbifold Hitchin representation $r\colon\thinspace \pi_1(\mathcal{O}) \to PGL(n,\mathbb{R})$ is discrete, faithful and strongly irreducible.
Moreover, the image of every infinite order element of $\pi_1(\mathcal{O})$ under $r$ is purely loxodromic.
\end{theorem}
\section{Zariski dense Hitchin representations}\label{ZdenseSection}
In this section we focus on Zariski density of Hitchin representations and prove proposition \ref{corollary_Zdense}, which gives a criterion to determine when the image of a finite index subgroup of an orbifold group under a Hitchin representation is Zariski dense.
\subsection{Zariski closures of Hitchin representations}\label{subsec_irrep}
Let $G$ be an algebraic matrix Lie group, then $G$ has both its standard topology as a subset of some $\mathbb{R}^N$ and the Zariski topology.
If $X$ is a subset of $G$ then its \textit{Zariski closure} is the closure of $X$ in $G$ with respect to the Zariski topology.
We say a subgroup $H<G$ is \textit{Zariski dense} in $G$ if its Zariski closure equals $G$.
A representation $r\colon\thinspace \Gamma \to G$ is \textit{Zariski dense} if $r(\Gamma)$ is Zariski dense in $G$.
The image of the irreducible representation $\omega_n\colon\thinspace PSL(2,\mathbb{R}) \to PSL(n,\mathbb{R})$ is contained, if $n$ is even, in a conjugate of $PSp(n, \mathbb{R})$, which is the projectivization of the symplectic group $Sp(n, \mathbb{R})$.
If $n = 2k + 1$ is odd, the image of $\omega_n$ is contained in a conjugate of the orthogonal group $SO(k, k + 1) = PSO(k,k+1)$.
This implies that the images of Fuchsian representations are contained in (a conjugate of) $PSp(n,\mathbb{R})$ or in $SO(k,k+1)$ and, in particular, they are not Zariski dense.
More generally, for surface Hitchin representations Guichard \cite{Guichard_ZariskiClosure} has announced a classification of Zariski closures of their lifts.
An alternative proof of this result has been given recently by Sambarino (\cite{Sambarino_closures} cor. 1.5).
The version of this result we cite here comes from theorem 11.7 in \cite{BCLS_15_PressureMetric}.
\begin{theorem}[\cite{Guichard_ZariskiClosure}, \cite{Sambarino_closures}] \label{GuichardResult}
If $r \colon\thinspace \pi_1(S) \to SL(n,\mathbb{R})$ is the lift of a surface Hitchin representation and $H$ is the Zariski closure of $r(\pi_1(S))$, then
\begin{itemize}
\item If $n=2k$ is even, $H$ is conjugate to either $\omega_n(SL(2,\mathbb{R}))$, $Sp(2k,\mathbb{R})$ or $SL(2k,\mathbb{R})$.
\item If $n=2k+1$ is odd and $n\neq7$, then $H$ is conjugate to either $\omega_n(SL(2,\mathbb{R}))$, $SO(k,k+1)$ or $SL(2k+1,\mathbb{R})$.
\item If $n=7$, then $H$ is conjugate to either $\omega_7(SL(2,\mathbb{R}))$, $G_2$, $SO(3,4)$ or $SL(7,\mathbb{R})$.
\end{itemize}
\end{theorem}
\subsection{A criterion for Zariski density}
Here we prove proposition \ref{prop1.2} which gives us a criterion to find Zariski dense Hitchin representations.
\begin{lemma}\label{lift_even}
Let $\rho \colon\thinspace \pi_1(\mathcal{O}) \to PSL(n,\mathbb{R})$ with $n$ even be an orbifold Hitchin representation.
Then for every $[\alpha] \in \pi_1(\mathcal{O})$ of infinite order there is a lift $A \in SL(n,\mathbb{R})$ of $\rho([\alpha])$ which has $n$ positive distinct eigenvalues.
\end{lemma}
\textit{Proof.}
First consider a Fuchsian representation $\sigma \colon\thinspace \pi_1(\mathcal{O}) \to PSL(2,\mathbb{R})$ and $[\alpha]$ an infinite order element of $\pi_1(\mathcal{O})$.
Since $\mathcal{O}$ is a hyperbolic orbifold, $\sigma([\alpha])$ is conjugate to a hyperbolic element
$\begin{bmatrix} \lambda & 0 \\ 0 & \frac{1}{\lambda} \end{bmatrix} \in PSL(2,\mathbb{R})$.
We can lift this element to a matrix
$\begin{pmatrix} \lambda & 0 \\ 0 & \frac{1}{\lambda} \end{pmatrix} \in SL(2,\mathbb{R})$ with $\lambda >0$.
Let $\tilde\omega_n \colon\thinspace SL(2,\mathbb{R}) \to SL(n,\mathbb{R})$ be the unique irreducible representation in (\ref{SLirrep}),
then $\tilde\omega_n\begin{pmatrix} \lambda & 0 \\ 0 & \frac{1}{\lambda} \end{pmatrix} \in SL(n,\mathbb{R})$ has $n$ distinct positive eigenvalues $\lambda^{n-1}, \lambda^{n-3}, \ldots, \lambda^{-(n-3)}, \lambda^{-(n-1)}$ and is a lift of $\omega_n\circ\sigma([\alpha])\in PSL(n,\mathbb{R})$.
Now consider a Hitchin representation $\rho\colon\thinspace \pi_1(\mathcal{O}) \to PSL(n,\mathbb{R})$.
Let $\rho_t$ be a path of Hitchin representations such that $\rho_0$ is Fuchsian and $\rho_1=\rho$.
This induces a path $\rho_t([\alpha]) \subset PSL(n,\mathbb{R})$.
By the previous argument we may lift $\rho_t([\alpha])$ to a path $\tilde{A}_t \in SL(n,\mathbb{R})$ such that $\tilde{A}_0$ has $n$ distinct positive eigenvalues.
By theorem \ref{orbifoldHitchin} each $\rho_t([\alpha])$ is purely loxodromic, so the eigenvalues of $\tilde{A}_t$ are real and have distinct absolute values.
Since each eigenvalue of $\tilde{A}_t$ is real, varies continuously and never vanishes (because $\det\tilde{A}_t \neq 0$), all eigenvalues of $\tilde{A}_t$ remain positive.
Being positive with pairwise distinct absolute values, the eigenvalues of $\tilde{A}_t$ are therefore distinct.
Therefore $\tilde{A}_1 \in SL(n,\mathbb{R})$ is a lift of $\rho([\alpha])$ with $n$ positive distinct eigenvalues.
\hfill{$\Box$}
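The eigenvalue computation in the Fuchsian case of the proof can be verified numerically: on the monomial basis $x^a y^{\,n-1-a}$ the matrix $\tilde\omega_n(\mathrm{diag}(\lambda, 1/\lambda))$ is diagonal. A small sketch (the helper name \texttt{omega\_n\_diag} is ours):

```python
import numpy as np

def omega_n_diag(lam, n):
    """Image of diag(lam, 1/lam) under the irreducible representation of
    SL(2,R) on degree-(n-1) homogeneous polynomials: the monomial
    x^a y^(n-1-a) is an eigenvector with eigenvalue lam^(2a-(n-1))."""
    return np.diag([lam ** (2 * a - (n - 1)) for a in range(n)])

A = omega_n_diag(2.0, 5)
# eigenvalues 2^-4, 2^-2, 2^0, 2^2, 2^4; determinant 1
```

The listed eigenvalues are exactly $\lambda^{n-1}, \lambda^{n-3}, \ldots, \lambda^{-(n-1)}$: distinct and positive, as used in the proof.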
To prove our criterion for Zariski density (propositions \ref{Zdense_even} and \ref{Zdense}) we will make use of the following theorem by Culver.
\begin{theorem}[\cite{Culver} thm. 2]\label{Culver_thm}
Let $C$ be a real square matrix. Then the equation $C=\exp(X)$ has a unique real solution $X$ if and only if all the eigenvalues of $C$ are positive real and no elementary divisor (Jordan block) of C belonging to any eigenvalue appears more than once.
\end{theorem}
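Culver's criterion can be illustrated numerically: for a matrix with distinct positive real eigenvalues, the (unique) real logarithm is obtained from the real eigendecomposition. A sketch, with \texttt{real\_log} our name:

```python
import numpy as np

def real_log(C):
    """Real logarithm of a matrix with distinct positive real eigenvalues;
    by Culver's theorem it is the unique real solution of C = exp(X)."""
    w, V = np.linalg.eig(C)
    if np.iscomplexobj(w):              # all eigenvalues must be real
        raise ValueError("matrix has non-real eigenvalues")
    if np.any(w <= 0.0):
        raise ValueError("matrix has non-positive eigenvalues")
    return V @ np.diag(np.log(w)) @ np.linalg.inv(V)
```

Uniqueness is what drives the argument below: any identity satisfied by $\exp(kX)$ can be pushed down to $kX$ itself.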
\begin{proposition}\label{Zdense_even}
Let $\rho\colon\thinspace \pi_1(\mathcal{O}) \to PSL(n,\mathbb{R})$ with $n$ even be an orbifold Hitchin representation so that $\rho(\pi_1(\mathcal{O}))$ is not conjugate to a subgroup of $PSp(n,\mathbb{R})$.
If $S$ is a surface finitely covering $\mathcal{O}$ then $\rho(\pi_1(S))$ is Zariski dense.
\end{proposition}
\textit{Proof.}
Let $S$ be a surface finitely covering $\mathcal{O}$ and
suppose that $\rho(\pi_1(S))$ is conjugate to a subgroup of $PSp(n,\mathbb{R})$.
Then there exists an alternating form $\Omega\in SL(n,\mathbb{R})$ such that
$Sp(\Omega) = \{ g \in SL(n,\mathbb{R}) \ | \ g^T\Omega g = \Omega \}$
and $\rho(\pi_1(S)) \subset PSp(\Omega) = Sp(\Omega)/\pm I$.
Let $[\alpha] \in \pi_1(\mathcal{O})$ be an infinite order element.
By lemma \ref{lift_even} we can lift $\rho([\alpha]) \in PSL(n,\mathbb{R})$ to a matrix $A\in SL(n,\mathbb{R})$ with $n$ positive distinct eigenvalues.
Since $\pi_1(S)$ has finite index in $\pi_1(\mathcal{O})$ there exists a $k\in \mathbb{N}$ such that $\rho([\alpha])^k \in \rho(\pi_1(S))$.
Then $A^k$ is a lift of $\rho([\alpha])^k$ and $A^k \in Sp(\Omega)$.
Given that $A$ has $n$ positive distinct eigenvalues, by theorem \ref{Culver_thm} there is a unique $X\in M_{n\times n}(\mathbb{R})$ such that $\exp(X) = A$.
Then using that $\exp(kX) = A^k$ preserves $\Omega$ we get that
\begin{eqnarray*}
\exp(kX)^T\Omega \exp(kX) = \Omega &\Rightarrow& \Omega^{-1}\exp(kX)^T\Omega = \exp(kX)^{-1}\\
&\Rightarrow& \exp(\Omega^{-1}(kX)^T\Omega) = \Omega^{-1}\exp(kX)^T\Omega =\exp(-kX).
\end{eqnarray*}
Applying theorem \ref{Culver_thm} now to $\Omega^{-1}\exp(kX)^T\Omega = \exp(-kX)$ we obtain that \begin{eqnarray*}
\Omega^{-1}(kX)^T\Omega = -kX
&\Rightarrow& (kX)^T\Omega = -\Omega (kX)\\
&\Rightarrow& (kX)^T\Omega + \Omega (kX) = 0.
\end{eqnarray*}
This implies that $kX \in \mathfrak{sp}(\Omega)$ and thus $A = \exp(X)\in Sp(\Omega)$.
Given that $A$ is a lift of $\rho([\alpha])$, we have that $\rho([\alpha]) \in PSp(\Omega)$.
Since $\pi_1(\mathcal{O})$ is generated by its infinite order elements we get that $\rho(\pi_1(\mathcal{O})) \subset PSp(\Omega)$, a contradiction.
So it cannot be that $\rho(\pi_1(S))$ is conjugate to a subgroup of $PSp(n,\mathbb{R})$.
In particular, if $r$ is a lift of the Hitchin surface representation $\rho|_{\pi_1(S)}$ then the Zariski closure of $r(\pi_1(S))$ cannot be conjugate to a subgroup of $Sp(n,\mathbb{R})$.
By theorem \ref{GuichardResult} it must be that the Zariski closure of $r(\pi_1(S))$ is $SL(n,\mathbb{R})$.
Therefore the Zariski closure of $\rho(\pi_1(S))$ is $PSL(n,\mathbb{R})$.
\hfill{$\Box$}
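The mechanism of the proof can be checked in the smallest case $n=2$ with the standard symplectic form; the choices of $\Omega$, $A$ below are ours for illustration:

```python
import numpy as np

Omega = np.array([[0.0, 1.0], [-1.0, 0.0]])   # standard symplectic form, n = 2
A = np.diag([3.0, 1.0 / 3.0])                 # symplectic, distinct positive eigenvalues
X = np.diag(np.log(np.diag(A)))               # the unique real logarithm of A (Culver)
# X satisfies X^T Omega + Omega X = 0, i.e. X lies in sp(Omega),
# hence A = exp(X) lies in Sp(Omega), as in the proof
```

The same check with any power $A^k$ in place of $A$ reflects the passage from the finite-index subgroup back to the whole group.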
In the case when $n=2k+1$ is odd, by theorem \ref{GuichardResult} the Zariski closure of $\rho(\pi_1(S))$, where $\rho$ is a surface Hitchin representation, is either conjugate to a subgroup of $SO(k,k+1)$ or equals $SL(n,\mathbb{R})$.
Assuming there exists a symmetric bilinear form $J$ such that $\rho(\pi_1(S)) \subset SO(J)$, an argument analogous to the proof of proposition \ref{Zdense_even} yields a criterion for Zariski density of surface Hitchin representations in the odd case.
\begin{proposition}\label{Zdense}
Let $\rho\colon\thinspace \pi_1(\mathcal{O}) \to SL(n,\mathbb{R})$ with $n$ odd be an orbifold Hitchin representation such that there is no real quadratic form $J$ for which $\rho(\pi_1(\mathcal{O}))\subset SO(J)$.
If $S$ is a surface finitely covering $\mathcal{O}$ then $\rho(\pi_1(S))$ is Zariski dense.
\end{proposition}
Given that any finite index subgroup of $\pi_1(\mathcal{O})$ contains a surface subgroup which has finite index in $\pi_1(\mathcal{O})$ we obtain the following result.
\begin{proposition}\label{corollary_Zdense}
Let $\rho \colon\thinspace \pi_1(\mathcal{O}) \to PSL(n,\mathbb{R})$ be an orbifold Hitchin representation such that
\begin{itemize}
\item if $n=2k$ is even then $\rho(\pi_1(\mathcal{O}))$ is not conjugate to a subgroup of $PSp(2k,\mathbb{R})$ or,
\item if $n=2k+1$ is odd then $\rho(\pi_1(\mathcal{O}))$ is not conjugate to a subgroup of $PSO(k,k+1)$.
\end{itemize}
Then for every finite index subgroup $H$ of $\pi_1(\mathcal{O})$ the image $\rho(H)$ is Zariski dense in $PSL(n,\mathbb{R})$.
\end{proposition}
\section{Bending representations of orbifold groups }\label{sectionBending}
Theorem \ref{Zdense_surfacegrps} in this section gives a general construction of a path $\rho_t$ of Zariski dense Hitchin surface representations into $SL(n,\mathbb{R})$ for odd $n$.
By requiring that the initial representation $\rho_0$ has image inside $SL(n,\mathbb{Q})$ we obtain corollary \ref{rational_reps}, in which every representation $\rho_t$ with $t\in \mathbb{Q}$ also has image in $SL(n,\mathbb{Q})$.
\subsection{Bending representations}
Let $\mathcal{O}$ be a 2-dimensional orientable connected closed orbifold of negative orbifold Euler characteristic and $\mathcal{O}_L$, $\mathcal{O}_R$ be open connected suborbifolds with connected intersection $\mathcal{O}_L\cap \mathcal{O}_R$.
Given a representation $\rho \colon\thinspace \pi_1(\mathcal{O}) \to G$ there is a standard way of ``bending'' $\rho$ by an element $\delta$ of the centralizer in $G$ of $\rho(\pi_1(\mathcal{O}_L\cap \mathcal{O}_R))$ to obtain a representation $\rho_\delta \colon\thinspace \pi_1(\mathcal{O}) \simeq \pi_1(\mathcal{O}_L)\ast_{\pi_1(\mathcal{O}_L\cap\mathcal{O}_R)}\pi_1(\mathcal{O}_R)\to G$ so that
$\rho_\delta(\pi_1(\mathcal{O})) = \langle \rho(\pi_1(\mathcal{O}_L)), \delta \rho(\pi_1(\mathcal{O}_R))\delta^{-1}\rangle$ (see for example \cite{Goldman_87_geometricstructures} sec. 5).
From now onwards we will consider the case where there is a simple closed curve $\gamma \subset \mathcal{O}$, not parallel to a cone point, that divides $\mathcal{O}$ into two orbifolds $\mathcal{O}_L$ and $\mathcal{O}_R$ which share $\gamma$ as their common boundary, so that
$\pi_1(\mathcal{O}) \simeq \pi_1(\mathcal{O}_L)\ast_{\langle [\gamma] \rangle}\pi_1(\mathcal{O}_R).$
\begin{proposition}\label{path_rho1}
Let $\rho \colon\thinspace \pi_1(\mathcal{O})\simeq \pi_1(\mathcal{O}_L)\ast_{\langle [\gamma] \rangle}\pi_1(\mathcal{O}_R) \to SL(n,\mathbb{Q})$ be a representation for which $\rho([\gamma])$ has $n$ distinct positive eigenvalues.
Then there exists a path of representations $\rho_t \colon\thinspace \pi_1(\mathcal{O}) \to SL(n,\mathbb{R})$ with $t\geq0$ such that
\begin{enumerate}
\item $\rho_0 = \rho$,
\item $\rho_t(\pi_1(\mathcal{O})) = \langle \rho(\pi_1(\mathcal{O}_L)),\delta_t \rho(\pi_1(\mathcal{O}_R)) \delta_t^{-1}\rangle$ for some $\delta_t\in SL(n,\mathbb{R})$ which commutes with $\rho([\gamma])$, and
\item $\rho_t$ has image in $SL(n,\mathbb{Q})$ for every $t\in \mathbb{Q}$.
\end{enumerate}
\end{proposition}
\textit{Proof.} The matrix $\rho([\gamma])$ is conjugate to a diagonal matrix $D$ with entries $\lambda_1, \ldots, \lambda_n >0$ along its diagonal.
Now for every $t\geq 0$ define
\begin{equation}\label{deltat}
\delta_t = (t\rho([\gamma]) + I)\det(t\rho([\gamma]) + I)^{-\frac{1}{n}}.
\end{equation}
Notice that $\det(t\rho([\gamma]) + I) = \det(tD + I) = \prod_{k=1}^{n}(t\lambda_k +1)>0$,
so $t\rho([\gamma]) +I$ is invertible for all $t\geq 0$.
Then each $\delta_t$ is in $SL(n,\mathbb{R})$ and we can check that $\delta_t$ commutes with $\rho([\gamma])$.
Since $\rho$ is a rational representation,
whenever $t\in \mathbb{Q}$ the matrix $t\rho([\gamma]) + I$ has rational entries and non-zero determinant.
Let $\rho_t \colon\thinspace \pi_1(\mathcal{O}) \to SL(n,\mathbb{R})$ be the representation such that $\rho_t(\pi_1(\mathcal{O})) = \langle \rho(\pi_1(\mathcal{O}_L)),\delta_t \rho(\pi_1(\mathcal{O}_R)) \delta_t^{-1}\rangle$.
Notice that $\rho_0 = \rho$ and that for every $t\in\mathbb{Q}$ the representation $\rho_t$ has image in $SL(n,\mathbb{Q})$.
\hfill{$\Box$}
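As a sanity check, the defining properties of $\delta_t$ in (\ref{deltat}) can be verified numerically. The sketch below uses a hypothetical matrix $A$ standing in for $\rho([\gamma])$ (distinct positive eigenvalues with product one, conjugated away from diagonal form) and checks that each $\delta_t$ lies in $SL(3,\mathbb{R})$ and commutes with $A$:

```python
import numpy as np

# hypothetical stand-in for rho([gamma]): eigenvalues 2, 3, 1/6 (product 1),
# conjugated away from diagonal form by an invertible P
P = np.array([[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0]])
A = P @ np.diag([2.0, 3.0, 1.0 / 6.0]) @ np.linalg.inv(P)

def delta(t, A):
    """delta_t = (t A + I) det(t A + I)^(-1/n), as in the definition above."""
    M = t * A + np.eye(len(A))
    return M * np.linalg.det(M) ** (-1.0 / len(A))

for t in [0.0, 0.5, 2.0]:
    d = delta(t, A)
    assert abs(np.linalg.det(d) - 1.0) < 1e-9   # delta_t is in SL(3,R)
    assert np.allclose(d @ A, A @ d)            # delta_t commutes with A
```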
\subsection{Discarding Zariski closures}
For the rest of section \ref{sectionBending} we focus on the case where $n=2k+1$ is odd.
Recall that in this case $SL(n,\mathbb{R}) \equiv PSL(n,\mathbb{R})$.
\begin{lemma}\label{Junique}
Let $\rho\colon\thinspace\Gamma \to SL(n,\mathbb{R})$ be an irreducible representation and suppose there is a quadratic form $J$ such that $\rho(\Gamma )\subset SO(J)$.
Then $J$ is unique up to scaling.
\end{lemma}
\textit{Proof.}
Suppose $\rho(\Gamma)< SO(J_1)\cap SO(J_2)$. Then for any $\rho(\gamma) \in \rho(\Gamma)$ we have that
$$J_1^{-1} \rho(\gamma)J_1 = \rho(\gamma)^{-T} = J_2^{-1}\rho(\gamma)J_2,$$
which implies that
$\rho(\gamma)J_1J_2^{-1} = J_1J_2^{-1}\rho(\gamma).$
Since $n$ is odd, $J_1J_2^{-1}$ has a real eigenvalue $\lambda$.
Then $\text{Ker} (J_1J_2^{-1} - \lambda I)$ is a non-zero invariant subspace for the irreducible representation $\rho$, which implies $J_1 = \lambda J_2$.
\hfill{$\Box$}
\begin{proposition}\label{notZclosure}
Let $\rho \colon\thinspace \pi_1(\mathcal{O})\simeq \pi_1(\mathcal{O}_L)\ast_{\langle [\gamma] \rangle}\pi_1(\mathcal{O}_R) \to SL(n,\mathbb{R})$ be a representation in which the restrictions $\rho|_{\pi_1(\mathcal{O}_L)}$ and $\rho|_{\pi_1(\mathcal{O}_R)}$ are irreducible and $\rho([\gamma])$ has $n$ positive distinct eigenvalues.
Suppose there is a quadratic form $J$ such that $\rho(\pi_1(\mathcal{O}))\subset SO(J)$.
Then there exists a path of representations $\rho_t \colon\thinspace \pi_1(\mathcal{O}) \to SL(n,\mathbb{R})$ such that
\begin{enumerate}
\item $\rho_0 = \rho$ and
\item for each $t>0$ there is no quadratic form $\tilde{J}$ such that $\rho_t(\pi_1(\mathcal{O}))\subset SO(\tilde{J})$.
\end{enumerate}
\end{proposition}
\textit{Proof.} By proposition \ref{path_rho1} there are $\delta_t\in SL(n,\mathbb{R})$ that commute with $\rho([\gamma])$, with which we can construct a path of representations $\rho_t\colon\thinspace \pi_1(\mathcal{O})\to SL(n,\mathbb{R})$ such that $\rho_0=\rho$ and $\rho_t(\pi_1(\mathcal{O})) = \langle \rho(\pi_1(\mathcal{O}_L)),\delta_t \rho(\pi_1(\mathcal{O}_R)) \delta_t^{-1}\rangle$.
Now fix $t>0$. Suppose there exists a quadratic form $\tilde{J}$ such that $\rho_t(\pi_1(\mathcal{O})) \subset SO(\tilde{J})$.
Since $\rho(\pi_1(\mathcal{O})) \subset SO(J)$, in particular $\rho_t(\pi_1(\mathcal{O}_L)) = \rho_0(\pi_1(\mathcal{O}_L)) \subset SO(J)\cap SO(\tilde{J})$.
The restriction $\rho_t|_{\pi_1(\mathcal{O}_L)}$ is irreducible, so by lemma \ref{Junique} $J$ is a real multiple of $\tilde{J}$.
Similarly, by construction $\rho_t (\pi_1(\mathcal{O}_R)) \subset SO(\delta_t J \delta_t^T)\cap SO(\tilde{J})$ and $\rho_t|_{\pi_1(\mathcal{O}_R)}$ is irreducible too.
Thus $\delta_t J \delta_t^T$ is also a multiple of $\tilde{J}$.
This implies there is a $\lambda \in \mathbb{R}$ such that $\lambda J = \delta_t J \delta_t^T$ and then $\lambda^n = \det(\delta_t)^2 =1.$
Since $n$ is odd it must be that $\lambda =1 $ and we obtain $\delta_t \in SO(J)$.
Given that
$$
(t\rho([\gamma]) + I) J (t\rho([\gamma])^T + I)
\ =\ t^2J \ +\ tJ(\rho([\gamma])^{T})^{-1} \ +\ tJ\rho([\gamma])^T\ +\ J,
$$
having $J = \delta_tJ\delta_t^T$ would imply that $ \mu I = \rho([\gamma])^{-1} + \rho([\gamma])$ for some $\mu\in \mathbb{R}$.
Recall that $\rho([\gamma])$ is conjugate to a diagonal matrix $D$ whose eigenvalues are all distinct.
If $\mu I = \rho([\gamma])^{-1} + \rho([\gamma])$ then by conjugating we would obtain that $\mu I = D^{-1} + D$, which is not the case given that $n>2$.
\hfill{$\Box$}
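The final step can be made concrete with a small numerical sketch (the eigenvalues below are hypothetical): for a diagonal $D$ with $n=3$ distinct positive entries, $D + D^{-1}$ is never a multiple of the identity, since $\lambda + 1/\lambda = \mu$ has at most two solutions $\lambda$:

```python
import numpy as np

# hypothetical diagonal matrix with three distinct positive eigenvalues
D = np.diag([2.0, 3.0, 1.0 / 6.0])
S = D + np.linalg.inv(D)
# S is diagonal with three distinct entries, hence not a scalar matrix
assert not np.allclose(S, S[0, 0] * np.eye(3))
```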
\subsection{Representations of surface groups}
Recall we are assuming that $\mathcal{O}$ is a 2-dimensional orientable connected closed orbifold of negative orbifold Euler characteristic.
Such orbifolds are always finitely covered by a surface $S$ of genus greater than one, so $\pi_1(S)$ is a finite index subgroup of $\pi_1(\mathcal{O})$.
Given a representation $\rho \colon\thinspace \pi_1(\mathcal{O}) \to G$ we will denote the restriction of $\rho$ to $\pi_1(S)$ by $\rho^S$.
\begin{theorem}\label{Zdense_surfacegrps}
Suppose $\pi_1(\mathcal{O})\simeq \pi_1(\mathcal{O}_L)\ast_{\langle [\gamma] \rangle}\pi_1(\mathcal{O}_R)$ with $[\gamma]$ an infinite order element.
Let $\rho \colon\thinspace \pi_1(\mathcal{O}) \to SL(n,\mathbb{R})$ be an orbifold Fuchsian representation such that the restrictions $\rho|_{\pi_1(\mathcal{O}_L)}$ and $\rho|_{\pi_1(\mathcal{O}_R)}$ are irreducible.
If $S$ is a surface finitely covering $\mathcal{O}$ then there exists a path of representations $\rho^S_t \colon\thinspace \pi_1(S) \to SL(n,\mathbb{R})$ such that $\rho^S_0 = \rho^S$ and $\rho_t^S$ is a Zariski dense surface Hitchin representation for each $t>0$.
\end{theorem}
\textit{Proof.}
Since $\rho\colon\thinspace \pi_1(\mathcal{O}) \to SL(n,\mathbb{R})$ is an orbifold Hitchin representation with odd $n=2k+1$ and $[\gamma]$ has infinite order, then $\rho([\gamma])$ has $n$ positive distinct real eigenvalues.
Moreover, since $\rho$ is Fuchsian its image is contained in a conjugate of $SO(k,k+1)$.
Using proposition \ref{notZclosure} we obtain a path of representations $\rho_t \colon\thinspace \pi_1(\mathcal{O}) \to SL(n,\mathbb{R})$ such that $\rho_0 = \rho$ and for each $t>0$ there is no real quadratic form $J$ such that $\rho_t(\pi_1(\mathcal{O}))\subset SO(J)$.
By proposition \ref{Zdense} each $\rho_t(\pi_1(S))$ is Zariski dense in $SL(n,\mathbb{R})$.
Now consider the continuous path $[\rho_t] \in \text{Rep}(\pi_1(\mathcal{O}),PGL(n,\mathbb{R}))$ for $t\geq0$.
Its image is connected so all $PGL(n,\mathbb{R})$-conjugacy classes $[\rho_t]$ are contained in the same connected component of $\text{Rep}(\pi_1(\mathcal{O}),PGL(n,\mathbb{R}))$.
Because the representation $\rho_0 = \rho$ is Fuchsian, $[\rho_0]$ is in the Hitchin component $\text{Hit}(\pi_1(\mathcal{O}), PGL(n,\mathbb{R}))$ and so is every $[\rho_t]$.
Thus, by theorem \ref{orbifoldHitchin}, each $\rho_t$ is discrete, faithful and strongly irreducible.
Since $\pi_1(S)$ has finite index in $\pi_1(\mathcal{O})$, each restriction $\rho^S_t \colon\thinspace \pi_1(S) \to SL(n,\mathbb{R})$ is irreducible.
In particular $\rho_0^S$ is a surface Fuchsian representation.
Then $[\rho^S_t]$ is a continuous path in $\text{Rep}^+(\pi_1(S),SL(n,\mathbb{R}))$ with $[\rho_0^S] \in \text{Hit}(\pi_1(S),SL(n,\mathbb{R}))$.
Since the Hitchin component is path connected $[\rho^S_t] \in \text{Hit}(\pi_1(S),SL(n,\mathbb{R}))$ for all $t\geq0$.
\hfill{$\Box$}
To finish this section notice that the construction of the path of Zariski dense representations in the previous theorem is based on proposition \ref{path_rho1}, so we may add the assumption of $\rho(\pi_1(\mathcal{O}))\subset SL(n,\mathbb{Q})$ to obtain that the image of every $\rho_t$ is in $SL(n,\mathbb{Q})$ for every $t\in \mathbb{Q}$.
\begin{corollary}\label{rational_reps}
Let $\rho \colon\thinspace \pi_1(\mathcal{O})\to PSL(n,\mathbb{Q})$ be a representation satisfying the assumptions of theorem \ref{Zdense_surfacegrps}.
If $S$ is a surface finitely covering $\mathcal{O}$ then there exists a path $\rho^S_t \colon\thinspace \pi_1(S) \to SL(n,\mathbb{R})$ of Hitchin representations such that $\rho^S_0 = \rho^S$, $\rho_t^S$ is Zariski dense for each $t>0$ and $\rho_t^S$ has image in $SL(n,\mathbb{Q})$ for every $t\in \mathbb{Q}$.
\end{corollary}
\section{Representations of $\pi_1(\mathcal{O}_{3,3,3,3})$}\label{sectionO3333}
In this section we look at the orbifold $\mathcal{O}_{3,3,3,3}$ and find a Fuchsian representation $\rho \colon\thinspace \pi_1(\mathcal{O}_{3,3,3,3})\to SL(n,\mathbb{Z})$ satisfying the assumptions of corollary \ref{rational_reps}.
\subsection{The orbifold $\mathcal{O}_{3,3,3,3}$}
In what follows we focus on the triangle group $\Delta(3,4,4) \subset PSL(2,\mathbb{R})$.
If we let $T$ be the hyperbolic triangle with angles $\{\frac{\pi}{3}$, $\frac{\pi}{4}$, $\frac{\pi}{4}\}$, then the
generators of $\Delta(3,4,4)$ are the rotations $x$ and $y$ by $\frac{2\pi}{3}$ and $\frac{\pi}{2}$ around the corresponding vertices of $T$.
This group has presentation
\begin{equation}\label{triangle_presentation}
\Delta(3,4,4) = \langle x, y \ | \ x^3 = y^4 = (xy)^4 = 1 \rangle.
\end{equation}
The fundamental domain for the action of $\Delta(3,4,4)$ on $\mathbb{H}^2 $ is a quadrilateral with angles
$\{\frac{\pi}{2},\frac{\pi}{3},\frac{\pi}{2},\frac{\pi}{3}\}$.
The quotient $\mathbb{H}^2 /\Delta(3,4,4)$ is homeomorphic to the orbifold $S^2(3,4,4)$ whose underlying topological space is $S^2$ and has three cone points of orders $3,\ 4$ and $4$.
This defines, up to conjugation, an isomorphism $\pi_1(S^2(3,4,4))\to \Delta(3,4,4)\subset PSL(2,\mathbb{R})$.
\begin{figure}[H]
\begin{center}
\includegraphics[scale=0.3]{orbifold}
\caption{Orbifold $S^2(3,4,4)$}
\end{center}
\label{fig:flagsapt3}
\end{figure}
Let
$\theta_1 = x $ and $ \theta_i = y\theta_{i-1}y^{-1} $ for $ i = 2,3,4$,
then the quotient of $\mathbb{H}^2 $ by the action of $\langle \theta_1, \ldots, \theta_4 \rangle$ is homeomorphic to the orbifold $\mathcal{O}_{3,3,3,3}$ with underlying topological space $S^2$ and 4 cone points of order 3.
By construction, we obtain that $\mathcal{O}_{3,3,3,3}$ is an index four orbifold covering of $S^2(3,4,4)$.
If $ \gamma_1, \ldots, \gamma_4$ are loops around the cone points of $\mathcal{O}_{3,3,3,3}$, then the orbifold fundamental group has the presentation
$$
\pi_1(\mathcal{O}_{3,3,3,3}) = \langle \gamma_1, \ldots, \gamma_4 \ | \ \gamma_1^3 = \ldots = \gamma_4^3 = \gamma_1 \gamma_2 \gamma_3 \gamma_4 = 1\rangle.
$$
Identifying each $ \gamma_i$ with the rotation $\theta_i$ gives an isomorphism $\pi_1(\mathcal{O}_{3,3,3,3})\cong \langle \theta_1, \ldots, \theta_4\rangle$ which defines (up to conjugation) a discrete and faithful representation
\begin{equation}\label{init_rep}
\sigma \colon\thinspace \pi_1(\mathcal{O}_{3,3,3,3}) \to \Delta(3,4,4) < PSL(2,\mathbb{R}).
\end{equation}
\begin{lemma}\label{sigma_Zdense}
The representation $\sigma\colon\thinspace\pi_1(\mathcal{O}_{3,3,3,3})\to PSL(2,\mathbb{R})$ defined in (\ref{init_rep}) is Zariski dense.
\end{lemma}
\textit{Proof.}
We will check that the group $ \sigma(\pi_1(\mathcal{O}_{3,3,3,3})) = \langle \theta_1, \ldots, \theta_4 \rangle < \Delta(3,4,4)$ is Zariski dense.
Hyperbolic triangles with the same angles are isometric,
so we can fix the hyperbolic triangle with angles $\{\frac{\pi}{3}, \frac{\pi}{4},\frac{\pi}{4}\}$ by placing it symmetrically along the $y$-axis in the upper-half plane.
By having the generators $x,y$ of $\Delta(3,4,4)$ defined in (\ref{triangle_presentation}) in rational canonical form we obtain that:
\begin{eqnarray}\label{gens}
x = \begin{bmatrix} 0 & -1 \\ 1 & 1 \end{bmatrix} \ \text{ and } \
y = \begin{bmatrix} 0 & -1 - \sqrt{2} \\ -1 +\sqrt{2} & \sqrt{2} \end{bmatrix}.
\end{eqnarray}
This choice of generators fixes a representative in the conjugacy class of the representation $\sigma$.
Notice that $\theta_2\theta_1 = yxy^{-1}x$ is an infinite order element in $\Delta(3,4,4)$ and is therefore hyperbolic.
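As a numerical sanity check (a sketch, not part of the original argument), one can verify that the matrices in (\ref{gens}) satisfy the relations of (\ref{triangle_presentation}) projectively, i.e. up to sign in $SL(2,\mathbb{R})$, and that $\theta_2\theta_1$ is hyperbolic ($|\mathrm{tr}| > 2$):

```python
import numpy as np
from numpy.linalg import matrix_power, inv

s = np.sqrt(2.0)
x = np.array([[0.0, -1.0], [1.0, 1.0]])
y = np.array([[0.0, -1.0 - s], [-1.0 + s, s]])

I = np.eye(2)
# the relations x^3 = y^4 = (xy)^4 = 1 hold projectively (up to sign)
assert np.allclose(matrix_power(x, 3), -I)
assert np.allclose(matrix_power(y, 4), -I)
assert np.allclose(matrix_power(x @ y, 4), -I)

# theta_2 theta_1 = y x y^{-1} x is hyperbolic: |trace| > 2
t21 = y @ x @ inv(y) @ x
assert abs(np.trace(t21)) > 2.0
```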
By using the matrices in (\ref{gens}) we can explicitly find $P,D \in PGL(2,\mathbb{R})$ with $D$ diagonal so that $P^{-1}(\theta_2\theta_1)P = D$.
It suffices then to see that the conjugated representation $P^{-1}\sigma P$ is Zariski dense.
Let $H$ be the Zariski closure of $P^{-1}\sigma(\pi_1(\mathcal{O}_{3,3,3,3}))P$ in $PSL(2,\mathbb{R})$ and $\mathfrak{h}$ its Lie algebra.
First notice that the Zariski closure of $\langle D \rangle$ is the algebraic torus
whose Lie algebra is the span of $X_1 = \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}$.
Taking
$ X_2 = \text{Ad}_{P^{-1}\theta_1\theta_2P}(X_1)$ and $X_3 = \text{Ad}_{P^{-1}\theta_1^2\theta_2P}(X_1)$ we obtain three linearly independent vectors in $\mathfrak{h}$. Then $\dim(\mathfrak{h})=3 = \dim(\mathfrak{sl}(2,\mathbb{R}))$ so the two algebras must coincide and so $H=PSL(2,\mathbb{R})$.
\hfill{$\Box$}
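The dimension count at the end of the proof can also be checked numerically. The sketch below (floating-point linear algebra standing in for the exact computation) diagonalizes $\theta_2\theta_1$ and verifies that $X_1$, $X_2$, $X_3$ span a three-dimensional subspace, so $\mathfrak{h} = \mathfrak{sl}(2,\mathbb{R})$:

```python
import numpy as np
from numpy.linalg import eig, inv, matrix_rank

s = np.sqrt(2.0)
x = np.array([[0.0, -1.0], [1.0, 1.0]])
y = np.array([[0.0, -1.0 - s], [-1.0 + s, s]])
t1, t2 = x, y @ x @ inv(y)        # theta_1 and theta_2 = y theta_1 y^{-1}

_, P = eig(t2 @ t1)               # theta_2 theta_1 is hyperbolic, so the
P = P.real                        # eigenbasis is real: P^{-1}(t2 t1)P = D

X1 = np.diag([1.0, -1.0])
def Ad(g, X):                     # adjoint action Ad_g(X) = g X g^{-1}
    return g @ X @ inv(g)

X2 = Ad(inv(P) @ t1 @ t2 @ P, X1)
X3 = Ad(inv(P) @ t1 @ t1 @ t2 @ P, X1)
V = np.array([X.flatten() for X in (X1, X2, X3)])
assert matrix_rank(V) == 3        # dim of span = 3 = dim sl(2,R)
```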
\subsection{Rational representations of $\pi_1(\mathcal{O}_{3,3,3,3})$}
We will now focus on the case $n=2k+1$ and the representation
$
\omega_n \circ \sigma \colon\thinspace \pi_1(\mathcal{O}_{3,3,3,3}) \to SL(n,\mathbb{R}),
$
where $\sigma$ is the representation defined in (\ref{init_rep}) and $\omega_n\colon\thinspace PSL(2,\mathbb{R}) \to PSL(n,\mathbb{R}) =SL(n,\mathbb{R})$ is the irreducible representation introduced in section~\ref{subsec_irrep}.
Since $\omega_n\circ \sigma$ is an orbifold Fuchsian representation, it is irreducible.
The following result implies that we can conjugate $\omega_n\circ \sigma$ to obtain an integral representation
\begin{eqnarray}\label{rho_0}
\rho \colon\thinspace \pi_1(\mathcal{O}_{3,3,3,3}) \to SL(n,\mathbb{Z}) < SL(n,\mathbb{R}).
\end{eqnarray}
\begin{proposition}[\cite{Long_This_20_SLodd} thm. 2.1 ]\label{conjugateZ}
Let $\omega_n\colon\thinspace PSL(2,\mathbb{R}) \to PSL(n,\mathbb{R})$ be the unique irreducible representation between these groups.
Then for every odd $n$ the restriction $\phi_n = \omega_n |_{\Delta(3,4,4)}$ is conjugate to a representation $\rho_n \colon\thinspace \Delta(3,4,4) \to PSL(n,\mathbb{Z})$.
\end{proposition}
Now let $\gamma \subset \mathcal{O}_{3,3,3,3}$ be a simple closed loop dividing $\mathcal{O}_{3,3,3,3}$ into two orbifolds $\mathcal{O}_L$ and $\mathcal{O}_R$ which share $\gamma$ as their common boundary and have two cone points of order 3 each.
Then $[\gamma] \in \pi_1(\mathcal{O}_{3,3,3,3})$ is an infinite order element and
$\pi_1(\mathcal{O}_{3,3,3,3}) \simeq \pi_1(\mathcal{O}_L)\ast_{\langle [\gamma] \rangle}\pi_1(\mathcal{O}_R).$
\begin{proposition}\label{t_abs_irred}
Let $\rho\colon\thinspace \pi_1(\mathcal{O}_{3,3,3,3}) \simeq \pi_1(\mathcal{O}_L)\ast_{\langle [\gamma] \rangle}\pi_1(\mathcal{O}_R) \to PSL(n,\mathbb{Z})$ be the representation defined in (\ref{rho_0}). Then the restrictions of $\rho$ to $\pi_1(\mathcal{O}_L)$ and $\pi_1(\mathcal{O}_R)$ are irreducible.
\end{proposition}
\textit{Proof.}
To see that $\rho|_{\pi_1(\mathcal{O}_L)}$ is irreducible it suffices to see that the restriction of $\omega_n\circ \sigma$ to $\pi_1(\mathcal{O}_L)$ is irreducible.
By the proof of lemma \ref{sigma_Zdense} we have that $\sigma(\pi_1(\mathcal{O}_L))$ is Zariski dense in $PSL(2,\mathbb{R})$.
To see that the representation $\omega_n\colon\thinspace\sigma(\pi_1(\mathcal{O}_L)) \to PSL(n,\mathbb{R})$ is irreducible, it is enough to check that the Zariski closure of its image is irreducible.
This holds since $\omega_n \colon\thinspace PSL(2,\mathbb{R}) \to PSL(n,\mathbb{R})$ is an irreducible representation and a morphism of algebraic groups, so
$\omega_n(PSL(2,\mathbb{R})) = \omega_n(\overline{\sigma(\pi_1(\mathcal{O}_L))}) \subseteq \overline{\omega_n\circ\sigma(\pi_1(\mathcal{O}_L))}$.
To see $\rho|_{\pi_1(\mathcal{O}_R)}$ is irreducible it is enough to notice that the proof of lemma \ref{sigma_Zdense} also holds for $\pi_1(\mathcal{O}_R)$ by using the generators $\theta_3$ and $\theta_4$ instead of $\theta_1$ and $\theta_2$.
\hfill{$\Box$}
Knowing that $\rho$ is an integral orbifold Fuchsian representation, the previous proposition shows $\rho$ satisfies the assumptions of theorem \ref{Zdense_surfacegrps}.
Thus we obtain the following application of corollary \ref{rational_reps}.
\begin{theorem}
For every surface $S$ finitely covering the orbifold $\mathcal{O}_{3,3,3,3}$ and every odd $n>1$ there exists a path of Hitchin representations $\rho_t \colon\thinspace \pi_1(S) \to SL(n,\mathbb{R})$, so that
\begin{enumerate}
\item $\rho_0(\pi_1(S)) \subset SL(n,\mathbb{Z})$,
\item $\rho_t$ is Zariski dense for every $t>0$ and
\item $\rho_t(\pi_1(S)) \subset SL(n,\mathbb{Q})$ for every $t\in \mathbb{Q}$.
\end{enumerate}
\end{theorem}
\bibliographystyle{ieeetr}
\section{Introduction}\label{sec:intro}
Supersolidity, where superfluid and crystalline orders coexist, has
fascinated physicists since it was first theoretically
proposed\cite{Andreev69}. Recent experimental results in
$^4$He\cite{MosesChan} that are still under active debate have led
to renewed interest. Experimental developments on a different front,
in the realization of optical lattices in ultracold atomic systems,
motivated a search for a lattice supersolid. One of the more
promising candidates is a model of strongly interacting hard-core
bosons on a triangular lattice. The model Hamiltonian reads \be
H=-t\sum_{\langle ij\rangle}\left(b_i^{\dag}b_j^{\vphantom{\dag}}+\text{H.c.}\right) + V\sum_{\langle ij\rangle}\left( n_i-\frac12\right)\left(n_j-\frac12 \right),
\label{equ:Hbosons}
\ee where $b_i^{\vphantom{\dag}}$ ($b_i^{\dag}$) annihilates
(creates) a hard-core boson on site $i$ and
$n_i=b^{\dag}_ib_i^{\vphantom{\dag}}$ are density operators. The
model is equivalent to the XXZ
spin-1/2 Hamiltonian on the
triangular lattice
\be
H=\sum_{\langle ij\rangle}\left[\frac{J_{\perp}}{2}
\left (s_i^+ s_j^- + s_i^- s_j^+\right)+ \frac{J_z}{4} s_i^z s_j^z\right],
\label{equ:H}
\ee {\modified where $s^{\pm}=(1/2)(s^x\pm \im s^y)$, $s^{x,y,z}$
are the Pauli matrices and are related to bosons by
$s_i^z=(2n_i-1),\,s^+=b^\dagger,\,s^-=b^\nd$, and $J_z =V$,
$J_\perp=-2t$.} The discussion below will be largely in terms of the
bosons, although we will sometimes switch to the equivalent spin
description, when that is more natural.
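The dictionary between the two descriptions is easy to verify on $2\times 2$ matrices; the sketch below (using the convention that $s^z=+1$ corresponds to an occupied site) checks the hard-core constraint and the density relation $s^z = 2n-1$:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)

bdag = 0.5 * (sx + 1j * sy)    # s^+ = b^dagger
b = 0.5 * (sx - 1j * sy)       # s^- = b
n = bdag @ b                   # boson number operator

assert np.allclose(bdag @ bdag, 0)           # hard-core: (b^dagger)^2 = 0
assert np.allclose(sz, 2 * n - np.eye(2))    # s^z = 2 n - 1
```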
The
$t>0$ case corresponds to the unfrustrated hard-core boson model
with repulsive nearest-neighbor interactions.
For this case, a variety of studies including large scale quantum
Monte Carlo simulations \cite{Heidarian05, Melko05,Wessel05}
indicate a supersolid phase for all $V/t\ge8.9$, stabilized by an
`order by disorder' mechanism. The solid order is of the three
sublattice (++-) type, where two sublattices have the same boson
density.
The case of {\em frustrated} hopping ($t<0$) suffers from a sign
problem in the occupation number basis, and its ground state has
been a subject of conjecture for the last three decades. The
corresponding spin model is just the XXZ antiferromagnet, which,
in the large $J_z$ limit was at the center of the RVB spin liquid
proposal of Fazekas and Anderson \cite{FazekasAnderson}. Later
semiclassical and small cluster numerical studies suggested magnetic
order \cite{Fazekas}, and general arguments which apply to the phase
structure of bipartite dimer models, to which this model can be
mapped in the large $J_z$ limit, also indicate the same
result\cite{ReadSachdev,Fradkin}. However, the precise nature of
ordering has not been conclusively established. In this letter, we
show how this problem can be tackled,
which is summarized briefly in boson language below. Due to
frustration, the ground states in the $V\rightarrow \infty$ limit is
extensively degenerate.
Within this ground state manifold, we demonstrate that the
frustrated problem with $t<0$ can be mapped, via a nontrivial
unitary transformation, onto the unfrustrated one with $t>0$. Since
the latter is well understood\cite{Heidarian05,
Melko05,Wessel05,Prokofiev}, many properties of the frustrated case
can be immediately derived. Such a generalized `Marshall sign' was
conjectured earlier based on state enumeration and numerics
\cite{FazekasAnderson,ShengPrivate}. Here we construct the explicit
transformation which proves this conjecture, and moreover utilize it
to deduce properties of the frustrated model. Our unitary
transformation is diagonal in the occupation number basis, which,
combined with our knowledge of the unfrustrated model, allows us to
argue that a supersolid state is realized for $t<0$ as well. The
precise details of the superfluid phase ordering require further
calculation. This is carried out using a variational wavefunction
approach \cite{Sen08} recently introduced for the unfrustrated $t/V
= 0^+$ limit, which captures the essential aspects of supersolid
order very well and has good variational energy as compared to the
quantum Monte Carlo results. Applying the unitary transformation, we
obtain a variational wavefunction for the frustrated problem.
Properties of this wavefunction, in particular the phase
correlations, are then calculated. The state is found to be a
supersolid and the resulting structure of the long range order (LRO)
is shown in \fig{fig:order}a. Surprisingly, the superfluid amplitude
vanishes on one of the sublattices and hence superfluidity lives
exclusively on the honeycomb lattice formed by the remaining two
sublattices, on which the amplitude alternates in sign. Contrary to
naive expectations, the superfluid amplitude on these sites {\it
exceeds} the maximum superfluid amplitude of the unfrustrated case.
Finally, with this information in hand, we propose a phase diagram
for the entire $t/V$ parameter range. Note the point
$t/V=-1/2$
corresponds to the spin-isotropic triangular antiferromagnet, where
the 120$^\circ$ state is established. This can be smoothly connected
to
the large $V$ supersolid state derived here as shown in
\fig{fig:phasediagramXXZ}. Supersolid order would then
naturally be preserved over the wide parameter range
$0<-t<V/2$,
in contrast to the unfrustrated case, where it is only present for
$t<V/10$. The frustrated triangular lattice boson model therefore
appears to be an appealing candidate for the realization of the
elusive supersolid phase - experimental prospects are discussed at
the end. Note, this is also a phase diagram for the spin 1/2 XXZ
magnet, and the regime of proposed RVB phase of
Fazekas-Anderson \cite{FazekasAnderson} is actually a particular
spin ordered state.
\begin{figure}
\includegraphics[scale=0.9]{XXZPhaseDiagram.eps}
\caption{ Schematic phase diagram for both unfrustrated ($t>0$) and
frustrated hopping ($t<0$) with repulsive interactions ($V>0$). The
three arrows are order parameters
$\vec{s}=(b^\dagger+b,\im b-\im b^\dagger,2n-1)$
on the three sub-lattices. For $t/V < -1/2$ or $t/V > 0.1$ there is
only superfluid LRO (XY spin order). The thick line $-1/2 < t/V <
0.1$ is the region of supersolid order. $t/V=-1/2$ is the SU(2)
symmetric antiferromagnet.} \label{fig:phasediagramXXZ}
\end{figure}
{\em Strong Repulsion Limit and Generalized Marshall Sign:} In the
limit of $V \gg |t|$, we can restrict the Hilbert space to a
manifold of states which correspond to classical Ising ground states
of the triangular antiferromagnet \cite{Wannier50}. Every such Ising
configuration $\ising$ can be represented by a close-packed dimer
configuration $\dimer$ on the dual honeycomb lattice. This is a
two-to-one mapping because of the Ising Z$_2$ symmetry
(particle-hole symmetry in the boson language).
The Hamiltonian (\ref{equ:H}), projected into this
degenerate subspace, introduces dynamics which splits the
degeneracy. Note, to first order in degenerate perturbation theory,
only the hopping term
$H_{t}=-t\sum_{\langle ij\rangle}(b^{\dag}_i b_j^{\vphantom{\dag}}+ H.c.)$
plays a role, leading to the double-hexagon
resonance in \fig{fig:doublehexagon}~(a) with amplitude $-t$. The
problem of the large repulsion limit is therefore related to finding
the ground state of a quantum dimer model with such dimer
resonances. We have already noted that the $t>0$ case is tractable
by Quantum Monte Carlo methods since there is no sign problem.
However, the problem of interest here is the case $t<0$. {\em If}
there is a unitary transformation which changes the sign of every
matrix element of $H_{t}$, the problem can be mapped to unfrustrated
case.
This is generically not possible but, {\em within} the restricted
Hilbert space, this indeed happens, and the required unitary
transformation is the following.
Consider the lattice in \fig{fig:doublehexagon}~(c), in which $1/4$ of the
edges are marked as special (thick, green). One can check by inspection
that any double-hexagon resonance will change the number of covered
special edges by $\pm 2$. Therefore, if we define a unitary
transformation on the dimer basis \be
|\dimer'\ket = U|\dimer\ket = \im^{N_s(\dimer)}|\dimer\ket,
\label{equ:U}
\ee where $\im=\sqrt{-1}$, and $N_s(\dimer)$ is the number of special
(green) edges covered by a dimer in the dimer configuration
$\dimer$, the sign of the Hamiltonian will be changed. The unitary
transformation changes neither the energy spectrum nor correlations
that are diagonal in boson density. Hence thermodynamic properties,
which depend only on the energy eigenvalues (e.g. the transition
temperature and the nature of the transitions), are unchanged. However, off-diagonal
correlations are affected. We can therefore immediately conclude
that the ground state has the same three sublattice density
modulation as the supersolid phase in the unfrustrated model.
Moreover, it also has a finite compressibility, identical to that in
the unfrustrated problem, since this can also be expressed as a
density-density correlation function. The latter strongly suggests
superfluid long range order (a 2D bosonic phase with finite
compressibility at zero temperature), and taken all together this points
towards supersolid order for $t/V = 0^-$ as well. In order to
directly establish off diagonal long range order, and obtain more
detailed quantitative information, we turn to a variational
wavefunction approach.
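The mechanism behind the sign change can be sketched in a few lines (the particular values of $N_s$ below are hypothetical): whenever a resonance changes the number of covered special edges by $\pm 2$, the ratio of phases $\im^{N_s}$ between the two configurations is $\im^{\pm 2} = -1$, so every matrix element of $H_t$ flips sign:

```python
# phase assigned to a dimer configuration with n_s covered special edges
phase = lambda n_s: 1j ** n_s

# a resonance changes n_s by +2 or -2, so the relative phase is always -1
for n_s, dn in [(0, 2), (3, -2), (5, 2)]:
    assert phase(n_s + dn) / phase(n_s) == -1
```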
{\em Variational Wavefunction:} We denote the two
Ising states related to the dimer state $\dimer$ as
$\ising[\dimer]$ and $\bar{\ising}[\dimer]$ and consider the
following kind of wavefunctions,
\be
|\Psi\ket=\sum_{\dimer}\phi(\dimer)|\dimer\ket=\sum_{\dimer}\phi(\dimer)\cdot
\left (|\ising[\dimer]\ket+|\bar{\ising}[\dimer]\ket \right )/\sqrt{2}
\label{equ:psi}
\ee
where $\phi(\dimer)$ is the (complex) amplitude.
\begin{figure}
\includegraphics[scale=0.8]{Fig2.eps}
\caption{(Color online)
(a): two double-hexagon resonance configurations $c_{ij}$ and
$\bar{c}_{ij}=c_{ji}$. Red thick bars denote dimers.
(b): Kasteleyn orientation and edge weights of the honeycomb
lattice. Thick blue edges have weight $z$, others have weight $1$.
The green dash-line rhombus encloses the enlarged unit cell.
$x$,$y$ are the principal axis. We use the six sites on a
thick-edge hexagon as the basis, labeled as $1,\dots ,6$ as shown
in the right-bottom corner. (c): special edges (thick green on the
honeycomb) for the unitary transformation relating the unfrustrated
and frustrated case. Thin solid green bonds on the triangular
lattice are dual to the special edges. }
\label{fig:doublehexagon}
\end{figure}
In the dimer representation, the projected $H_{t}$ corresponds to
the double-hexagon resonance in \fig{fig:doublehexagon}~(a). Only
those dimer configurations with `resonatable' double hexagons appear
in the Hamiltonian matrix elements. We write $c_{ij}$ to indicate that a
particular dimer covering has a resonatable double hexagon at the
pair of adjacent plaquettes $i,\,j$, where $i$ is the plaquette with
two dimers. Under resonance $c_{ij}\rightarrow \bar{c}_{ij}=c_{ji}$.
However, the rest of the dimer configuration with this pair of
plaquettes removed $d_{ij}$ remains unchanged. Hence, the entire
dimer configuration may be denoted as $c_{ij}+d_{ij}$. Note, a
single dimer configuration may have many representations in this
notation - one for each resonatable hexagon pair.
The variational energy $E=\bra\Psi|H_{t}|\Psi\ket$ is
\be
E= -t\sum_{<ij>}\sum_{d_{ij}}\left [ \phi^*(c_{ij}+d_{ij})
\phi(\bar{c}_{ij}+d_{ij}) + c.c.\right ]
\label{equ:E-as-phi}
\ee
where $c.c.$ is the complex conjugate. Before considering the frustrated case in detail, we briefly review
the variational wavefunction for the unfrustrated case\cite{Sen08}. There, $t > 0$, so
the matrix elements of $H_{t}$ are all non-positive in the dimer basis.
Thus the Perron-Frobenius theorem applies and the ground state can be taken to be everywhere positive.
Hence, we get a normalizable wavefunction if $\phi(\dimer)=\sqrt{P(\dimer)}$
with $P(\dimer)$ taken as the probability of the dimer configuration $\dimer$. Equivalently, one can assign positive weights $W(\dimer)$ to each dimer configuration $\dimer$, then the probability $P(\dimer)=W(\dimer)/Z$, where $Z=\sum_{\dimer}W(\dimer)$.
\par
The central assumption that leads to tractable wavefunctions is the following. We assign edge weights $w_{ab}$ to all honeycomb lattice edges $ab$, and write the weight of a dimer covering $\dimer$ as $W(\dimer)=\prod_{{\rm covered\ } \langle ab\rangle} w_{ab}$. Interpreting $W$ as a fictitious Gibbs weight, this corresponds to a problem of hardcore dimers in an external potential. Powerful Grassmann variable techniques have been developed for this problem, which will allow us to calculate properties of these wavefunctions.
Plugging the ansatz $\phi(\dimer)=\sqrt{P(\dimer)}$ into
(\ref{equ:E-as-phi}), and using the fact that the ratio $ \frac{ P(\bar{c}_{ij}+d_{ij}) }{
P(c_{ij}+d_{ij}) } $ is independent of the configuration $d_{ij}$, we obtain
\be
E=-t\sum_{<ij>}\sqrt{P(c_{ij}) P(\bar{c}_{ij})}
\label{equ:E}
\ee where $P(c_{ij})=\sum_{d_{ij}}P(c_{ij}+d_{ij})$ is the net probability of the local configuration.
The dimer number operator is
$n_{ab}=(1+s_i^z s_j^z)/2$
where $ab$ is the honeycomb lattice edge dual to the
triangular lattice edge $ij$.
The probabilities $P(c_{ij})$ and $P(\bar{c}_{ij})$
are the expectation values $\bra n_{12}n_{34}n_{56}n_{78}n_{9,10}\ket$,
$\bra n_{23}n_{45}n_{67}n_{89}n_{10,1}\ket$, respectively.
This can be evaluated analytically by the Grassmannian
integral method \cite{Samuel}.
In the Grassmannian formulation, the dimer partition function is represented as
an integral over Grassmannian variables $\eta_a$ defined on
the honeycomb lattice sites, $Z=\int \exp(\sum_{a,b}\eta_a A_{ab} \eta_b/2) \prod_{a}\eta_a={\rm Pf}[A]$,
where
${\rm Pf}[A]$ is the Pfaffian of the Kasteleyn matrix $A$ \cite{Kasteleyn1963},
and $A_{ab}=+ w_{ab}$ if the Kasteleyn orientation is from $a$ to $b$,
or $=- w_{ab}$ if otherwise (see FIG.~\ref{fig:doublehexagon}~(b)).
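As a minimal illustration of the Kasteleyn construction (a toy example on a 4-cycle, not the honeycomb lattice of the text): with a valid Kasteleyn orientation, $|{\rm Pf}[A]|$ counts the perfect matchings, here $2$:

```python
import numpy as np

# Toy Kasteleyn matrix for a 4-cycle (vertices 1..4), with edges oriented
# 1->2, 2->3, 3->4, 1->4: the single face sees an odd number (3) of
# co-oriented edges, so the orientation is Kasteleyn.
A = np.array([[ 0.0,  1.0,  0.0,  1.0],
              [-1.0,  0.0,  1.0,  0.0],
              [ 0.0, -1.0,  0.0,  1.0],
              [-1.0,  0.0, -1.0,  0.0]])
# C4 has exactly 2 perfect matchings ({12,34} and {23,14}),
# and det A = Pf(A)^2 should equal 2^2 = 4
assert np.isclose(np.linalg.det(A), 4.0)
```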
The probability $P(c_{ij})$ is calculated as an expectation value in the Grassmannian theory;
the rule is to replace $n_{ab}$ by $A_{ab}\eta_a\eta_b$, which gives
$
P(c_{ij})
= w_{12}w_{34}w_{56}w_{78}w_{9,10}
\left | \bra\prod_{a=1}^{10}\eta_a\ket \right |.
$
Thus the variational energy (\ref{equ:E}) can be written as
$
E=-t\sum_{<ij>}{\sqrt{\prod_{i=1}^{10} w_{i,i+1} } }
\left |\bra\prod_{a=1}^{10}\eta_a\ket \right |
$
with $w_{10,11} =w_{10,1}$.
The ten-point correlator of anticommuting $\eta$ can be Wick-expanded into a
Pfaffian of a $10\times 10$ antisymmetric matrix,
$ \left | \bra\prod_{a=1}^{10}\eta_a\ket \right |
= {\rm Pf}[\bra\eta_a\eta_b\ket]
= \sqrt{\det[\bra\eta_a\eta_b\ket]},\ a,b=1\dots 10.
$
The above formula can be further simplified to the determinant of
a $5\times 5$ matrix exploiting the bipartiteness of the honeycomb lattice:
$
\left | \bra\prod_{a=1}^{10}\eta_a\ket \right | =
|\det[\bra\eta_a\eta_b\ket]|,\ a=1,3,\dots ,9;\ b=2,4,\dots ,10
$.
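The reduction from a $10\times 10$ Pfaffian to a $5\times 5$ determinant reflects only the bipartite block structure of the correlator matrix, as the following generic sketch (with a random $5\times 5$ block $B$) confirms:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.normal(size=(5, 5))
Z = np.zeros((5, 5))
M = np.block([[Z, B], [-B.T, Z]])   # antisymmetric, bipartite block structure

# Pf(M) = +- det(B), hence det(M) = Pf(M)^2 = det(B)^2
assert np.isclose(np.linalg.det(M), np.linalg.det(B) ** 2)
```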
{\modified This is much more efficient than the brute-force Wick
expansion used by Sen {\it et al.} \cite{Sen08}, which allows us to
evaluate more complicated correlation functions later in this
paper.}
The two-point correlator $\bra\eta_a\eta_b\ket=(A^{-1})_{ba}$
can now be evaluated by a Fourier transformation since the Kasteleyn
matrix $A$ has 2D translational symmetry. For the chosen Kasteleyn
orientation and basis shown in FIG.~\ref{fig:doublehexagon}~(b),
in the thermodynamic limit, the two-point correlator of the site
$a$ in unit cell $(0,0)$ and the site $b$ in unit cell $(x,y)$ is
\bes
\bra\eta_{a,(0,0)}\eta_{b,(x,y)}\ket=\int_{0}^{2\pi}\int_{0}^{2\pi}
[\tilde{A}^{-1}(\vec{k})]_{ba} e^{\im(k_x x+k_y y)} \frac{\dif k_x \dif k_y}{4\pi^2}
\ees
where
$a,b=1,\dots,6$, and $\tilde{A}^{-1}(\vec{k})$ is the inverse of
the $6\times 6$ anti-hermitian matrix $\tilde{A}(\vec{k})$,
\bes
\tilde{A}(\vec{k}) = \begin{pmatrix} 0_{3\times 3} & R(\vec{k}) \\
-R^\dagger(\vec{k}) & 0_{3\times 3}
\end{pmatrix},\,\text{with}\,
R(\vec{k}) = \begin{pmatrix}
\frac{1}{\epsilon_x \epsilon_y} & z & z \\
z & \epsilon_y & z \\
z & z & \epsilon_x
\end{pmatrix}
\ees
where $\epsilon_x=e^{\im k_x},\,\epsilon_y=e^{\im k_y}$.
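As an illustrative sketch (our own code, not the original computation), the Brillouin-zone integral can be discretized directly; the weight $z=0.9258$ and the grid size are illustrative choices. The block structure of $\tilde{A}$ implies that same-sublattice correlators vanish identically, which the code confirms:

```python
import numpy as np

z = 0.9258                                  # variational weight (illustrative)
n = 64                                      # BZ grid size (assumed fine enough)
ks = 2 * np.pi * (np.arange(n) + 0.5) / n   # midpoint grid

def A_tilde(kx, ky):
    ex, ey = np.exp(1j * kx), np.exp(1j * ky)
    R = np.array([[1 / (ex * ey), z, z],
                  [z, ey, z],
                  [z, z, ex]])
    Z = np.zeros((3, 3))
    return np.block([[Z, R], [-R.conj().T, Z]])

def correlator(a, b, x, y):
    """<eta_{a,(0,0)} eta_{b,(x,y)}> by discretizing the BZ integral."""
    acc = 0.0j
    for kx in ks:
        for ky in ks:
            acc += np.linalg.inv(A_tilde(kx, ky))[b - 1, a - 1] \
                   * np.exp(1j * (kx * x + ky * y))
    return acc / n ** 2

c_same = correlator(1, 2, 0, 0)    # sites 1,2 lie on the same sublattice
c_inter = correlator(1, 4, 0, 0)   # sites on opposite sublattices
print(abs(c_same), abs(c_inter))   # ~0 versus a finite dimer amplitude
```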
As is shown in \inlinecite{Sen08}, for $t>0$ this variational wavefunction has
two local minima at $z\approx 0.9258$ with energy per site $E=-0.13774t$,
and $z\approx 1.073$ with energy per site $E=-0.13762t$,
corresponding to the two supersolid states, $(+--)$ and $(0+-)$
of the triangular lattice boson model \cite{Wessel05, Heidarian05, Melko05}.
For the frustrated case, the wavefunction is obtained by unitary
transformation,
$|\Psi'\ket=U|\Psi\ket=\sum_{\dimer}\phi'(\dimer)|\dimer\ket$, hence
the variational wavefunction is
$\phi'(\dimer)=\im^{N_s(\dimer)} \sqrt{P(\dimer)}$. The variational energy $E=\bra\Psi'|H_{t}|\Psi'\ket$ is of course the same as in the unfrustrated case.
In order to understand the two variational wavefunctions better,
we shall calculate two point correlation functions, assuming for simplicity that the two points $i$, $j$ are on the same horizontal line,
{\modified and $j$ is on the right.}
{\em Diagonal Correlations:} Consider first the density-density
correlation function $\bra s^z_i s^z_j\ket$. Draw a line from $i$ to
$j$ and it will cut through an set of honeycomb lattice edges
$<ab>$. If the number of edges with no dimers cut by this line is
even, then $s^z_i s^z_j = +1$ and otherwise $= -1$. In terms of the
dimer number operator $n_{ab}$, $s^z_i s^z_j$ becomes a non-local
string operator,
\be
\bra s^z_i s^z_j \ket
= \bra \prod_{<ab>{\rm\ cut\ by\ }ij}(2n_{ab}-1)\ket
\ee
Expanding the product, we get $2^{|j-i|}$ terms ($|j-i|$ is the distance between
$j$ and $i$ measured in units of the triangular lattice constant),
each of which is the type of correlation functions evaluated before.
Because these operators are diagonal in the dimer basis,
$\bra\Psi|s^z_i s^z_j|\Psi\ket = \bra\Psi'|s^z_i s^z_j|\Psi'\ket$.
{\em Off diagonal Correlations:}
The square of the off-diagonal long range order (ODLRO) parameter
$\bra b^\dagger_i b^\nd_j \ket$ is slightly more complicated.
In the dimer basis it describes the simultaneous resonances of two hexagons
(if $i$ and $j$ are not neighbors).
However for this process to happen $s^z_i$ and $s^z_j$ must be opposite.
Label the two local resonating configurations on hexagon $i$ ($j$) by
$\dimer_{i(j)}$ and $\bar{\dimer}_{i(j)}$;
there are two possibilities for this simultaneous double resonance,
shown in \fig{fig:doubleresonance},
with opposite conditions for the edges cut by the line $i+\hat{x},j-\hat{x}$,
where $\hat{x}$ is the horizontal triangular lattice vector.
\begin{figure}
\includegraphics{DoubleResonance.eps}
\caption{(Color online) The two possible simultaneous double resonances
needed for calculating $\bra b^\dagger_i b_j\ket$:
$\dimer_i,\dimer_j \leftrightarrow \bar{\dimer}_i,\bar{\dimer}_j$,
and $\dimer_i,\bar{\dimer}_j \leftrightarrow \bar{\dimer}_i,\dimer_j$,
with even(odd) number of no-dimer edges cut by
the line $i+\hat{x},j-\hat{x}$(dash line).
}
\label{fig:doubleresonance}
\end{figure}
The even(odd) requirement can be enforced using the dimer number operators as
$ [1\pm\prod(2n_{ab}-1)]/2 $, where the product is over all edges
$<ab>$ cut by the line $i+\hat{x},j-\hat{x}$ (see \fig{fig:doubleresonance} for an example).
Consider the $t>0$ case first; we have
\be
\begin{split}
& \bra\Psi|b^\dagger_i b^\nd_j|\Psi\ket =
\frac{ w_{23}w_{45}w_{61} w_{89}w_{10,11}w_{12,7} }
{ w_{12}w_{34}w_{56} w_{78}w_{9,10}w_{11,12} } \\
& \ \times \Big \{
\left \bra n_{12}n_{34}n_{56} n_{78}n_{9,10}n_{11,12}\cdot
[ 1+\prod(2n_{ab}-1) ]/{ 2 } \right \ket \\
& \ \ \ + \left \bra n_{12}n_{34}n_{56} n_{78}n_{9,10}n_{11,12}\cdot
[ 1-\prod(2n_{ab}-1) ]/{ 2 } \right \ket
\Big \} \\
& = \sqrt{ \prod w } \left |\bra \prod_{a=1}^{12}\eta_a\ket \right |
\end{split}
\label{equ:spsm}
\ee
where
$\prod w$ is the product of the edge weights of the twelve edges around
hexagons $i$ and $j$. Note, this simple form arises because the string $\prod(2n_{ab}-1)$ cancels out. We will see shortly that in the frustrated hopping case, this does not happen.
If distance between $i$ and $j$ is large, the 12-point correlator
$\bra\prod_{a=1}^{12}\eta_a\ket$ can be factorized into two 6-point
correlators
$\bra\prod_{a=1}^{6}\eta_a\ket\cdot\bra\prod_{a=7}^{12}\eta_a\ket$.
And we have the relation
$\sqrt{ w_{12}w_{34}w_{56} w_{23}w_{45}w_{61} }|\bra
\prod_{a=1}^{6}\eta_a\ket| = \bra\Psi|b^\dagger_i|\Psi\ket $.
Thus we have the factorization property
\bes
\bra\Psi| b^\dagger_i b^\nd_j|\Psi\ket \to
\bra\Psi | b^\dagger_i|\Psi\ket \bra\Psi| b^\nd_j|\Psi\ket,\quad
|j-i| \to \infty
\ees
For the $t < 0$ case we need to take care of the phases of $\phi'$.
From \fig{fig:doublehexagon}~(c) we can see that $\bra\Psi'|b^\dagger_i b^\nd_j|\Psi'\ket$
has a similar form to the first line of \eqtn{equ:spsm}; only the first term
inside $\big \{ \cdot\big \}$ acquires a minus sign. Therefore we get
\be
\begin{split}
& \bra\Psi'|b^\dagger_i b^\nd_j|\Psi'\ket =
-\frac{ w_{23}w_{45}w_{61} w_{89}w_{10,11}w_{12,7} }
{ w_{12}w_{34}w_{56} w_{78}w_{9,10}w_{11,12} } \\
& \quad \times \left \bra n_{12}n_{34}n_{56} n_{78}n_{9,10}n_{11,12}
\prod(2n_{ab}-1) \right\ket
\end{split}
\ee
The product can be expanded into $2^{|j-i|-2}$ terms,
each of which can be evaluated as before.
Note, this correlation function cannot be factorized as in
the unfrustrated case and one necessarily needs to evaluate a string correlator.
\begin{figure}
\includegraphics[width=7cm]{order_af_f.eps}
\caption{Supersolid LRO from the variational wavefunction in the
strong repulsion limit. Greyscale shows density order $\langle
2n_i-1\rangle$ while arrows denote superfluid order $\langle
b_i^\dagger\rangle$, for (a): frustrated hopping ($t<0$) -- note the
sign structure of the superfluid order; and (b): unfrustrated hopping
($t>0$).}
\label{fig:order}
\end{figure}
{\it Results:}
We evaluate the above-mentioned correlators up to distance $|j-i|=18$ and
extrapolate to the infinite-distance limit to determine the long range order.
At the global energy minimum $z=0.9258$ and for the {\em
unfrustrated} ($t>0$) case, the long range order parameter $\bra
\vec{s}\ket=(b^\dagger+b^\nd,\im b^\nd-\im b^\dagger,2n-1)=
(0.163,0,0.764),\ (0.372,0,-0.412),\ (0.372,0,-0.412)$ for the three
sublattices A,B,C, respectively (we have set the superfluid phase to
zero and sublattice A is surrounded by weight $z$ hexagon in
\fig{fig:doublehexagon}~(b)). These numbers are in agreement with
Quantum Monte Carlo (QMC) results. The average density deviation
from 1/2 is $|0.764-0.412-0.412|/2/3=0.010$, which is about $2\%$,
in good agreement with QMC \cite{Heidarian05}. The solid order
parameter is $ |n_A+n_B e^{2\pi\im/3}+n_C e^{4\pi\im/3}|^2/9 =
0.0384$ ($n_{A,B,C}$ are boson densities on the three sublattices),
which is about $15\%$ smaller than the QMC result of $0.045$
\cite{Heidarian05, Prokofiev}, but in good agreement with classical
Monte Carlo calculations result $0.0389$ of the same type of
variational wavefunctions \cite{Sen08}.
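The quoted numbers can be reproduced by elementary arithmetic; the snippet below (sublattice values taken from the text) checks the density deviation and the solid order parameter:

```python
from cmath import exp, pi

# Three-sublattice order <s^z> in the unfrustrated case (values from the text).
sz = [0.764, -0.412, -0.412]
n = [(s + 1) / 2 for s in sz]          # boson densities n = (s^z + 1)/2

dev = abs(sum(sz)) / 2 / 3             # average density deviation from 1/2
w = exp(2j * pi / 3)
solid = abs(n[0] + n[1] * w + n[2] * w ** 2) ** 2 / 9
print(round(dev, 3), round(solid, 4))  # 0.01 and 0.0384
```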
In the {\em frustrated} ($t<0$) case the three sublattice order is
$(0,0,0.764),\ (0.389,0,-0.412),\ (-0.389,0,-0.412)$, as shown in
\fig{fig:order}. The average density deviates from 1/2 by the
same amount as in the unfrustrated case. In spin language this means a
non-zero average z-component of spin,
$|\sum_{i}S^z_i/N|=|\sum_{i}s^z_i/(2N)|=0.01$, which is about $50\%$
smaller than the harmonic spin-wave result $0.02$, which has the same
symmetry~\cite{Fazekas}. Note that, surprisingly, the superfluid
amplitude (the $XY$-component of $\vec{s}$) on the B,C sublattices is
{\em larger} than in the unfrustrated case, while it vanishes
on the A sublattice. Note that this quantity can be directly measured in
Quantum Monte Carlo simulations of the unfrustrated system in the
large repulsion limit, by calculating correlations of the unitarily
transformed operator. For example, for $O=s^+_i s^-_j$, with $j$ to
the right of $i$ in the same horizontal line and $|j-i| > 2$, the
correlator to be measured is $U^\dagger O^\nd U^\nd=-s^+_i
s^z_{i+\hat{x}} s^z_{j-\hat{x}} s^-_j$. Finally, we combine the
present results in the large repulsion (or $V\gg -t$) limit with
known $120^\circ$ order in the isotropic $V=-2t$ ($J_z=J_\perp$)
limit. These can be connected without a phase transition, if we
assume that the supersolid phase persists with no change in symmetry
over the entire range $V>-2t>0$. This scenario, which is also a
phase diagram for the spin 1/2 XXZ antiferromagnet, is depicted in
Figure \ref{fig:phasediagramXXZ}. \footnote{ At the other local
minimum $z=1.073$, the three sublattice order for unfrustrated and
frustrated cases are $(0.438,0,0),\ (0.231,0,0.661),\
(0.231,0,-0.661)$ and $(0.340,0,0),\ (-0.147,0,0.661),\
(-0.147,0,-0.661)$.}
{\it Experimental Realization:} How can the frustrated boson
hoppings be experimentally realized? In lattice cold atom systems, a
recent experiment \cite{Winkler} demonstrated that `repulsively'
bound molecular bosons have frustrated hoppings. Consider preparing
an initial state composed of molecules of pairs of atoms (either
bosons or fermions) with one or zero molecules per site. If the
interactions between atoms are now made repulsive, the effective
molecular hopping is readily seen to be frustrated, since the singly
occupied sites are lower in energy. If this metastable state is
sufficiently long lived, the equilibrium properties of the
frustrated boson system can be accessed. In Josephson Junction
Arrays, external magnetic fields can generate frustrated
hopping\cite{Chaikin}.
\begin{acknowledgments}
We acknowledge funding from NSF-DMR 0645691 and ARO Grant No.
W911NF-07-1-0576, and useful discussions with Ehud Altman and Arnab Sen.
\end{acknowledgments}
\section{Appendix A: Proof of Observation 1}
Let us first recapitulate the algorithm and fix the notation. Let $\vert \psi \rangle$
be the randomly drawn initial state and denote by $\vert \pi \rangle $ its best product
state approximation (BPA), that is,
\begin{align} \label{min}
\vert \pi \rangle = \text{argmin} \lbrace 1- \vert \langle \psi \vert \pi \rangle \vert^{2} \,
: \, \vert \pi \rangle \, \, \text{product state} \rbrace.
\end{align}
In particular, we can choose $\vert \pi \rangle$ such that $\lambda := \langle \pi \vert \psi \rangle > 0$.
This allows for the computation of the update direction,
$\vert \eta \rangle := (\Pi \vert \psi \rangle)/\mathcal{M}$, where
$\Pi:= \openone - \vert \pi \rangle \langle \pi \vert$ is the projector
onto the orthocomplement of the span of the BPA $\vert \pi \rangle$ and $\mathcal{M}$
a normalization such that $\langle \eta \vert \eta \rangle = 1$. It follows directly
that $\mathcal{M}^2 = \langle \psi \vert \Pi \vert \psi \rangle = 1 - \lambda^{2}$, hence
$\mathcal{M} = \sqrt{1-\lambda^{2}}$. For a fixed step size $\theta >0$, the update
rule is given by
\begin{align}
\label{app:nice_update}
\vert \psi \rangle \mapsto \vert \tilde{\psi} \rangle
:= \frac{1}{\mathcal{N}} (\vert \psi \rangle + \theta \vert \eta \rangle).
\end{align}
where $\mathcal{N}$ is again a normalization. In the following, we will frequently
make statements about generic states. In these cases, we require the assumptions
that a state is not a product state and that the BPA $\ket{\pi}$ is unique up to
a phase.
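To illustrate the recapitulated update, a minimal numerical sketch for two qubits is given below; for a bipartite state the BPA follows from the leading singular vectors of the reshaped state vector. The (real) example state and step size are illustrative choices, not from the text:

```python
import numpy as np

theta = 0.05
psi = np.sqrt([0.8, 0.0, 0.0, 0.2])    # sqrt(0.8)|00> + sqrt(0.2)|11>

def bpa(psi):
    """BPA of a real 2-qubit state via the SVD of its 2x2 reshaping."""
    u, s, vh = np.linalg.svd(psi.reshape(2, 2))
    return np.kron(u[:, 0], vh[0, :]), s[0]    # |pi>, lambda

pi, lam = bpa(psi)
eta = psi - pi * (pi @ psi)            # Pi|psi>, then normalize
eta /= np.linalg.norm(eta)
psi_new = psi + theta * eta
psi_new /= np.linalg.norm(psi_new)
_, lam_new = bpa(psi_new)
print(lam, lam_new)                    # the overlap with the BPA decreases
```

The decrease of the product overlap (equivalently, the increase of the geometric measure) is exactly the monotonicity statement proved below.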
The first step in order to prove monotonicity of the algorithm is to show that
small variations in the initial state can only lead to small variations in the BPA.
As we will see, this follows from the more general observation that under certain
conditions the value $y_0$, where a function $f(x_0,y)$ assumes its minimum
(for a given $x_0$), depends continuously on $x_0$. Further, by virtue of the
canonical embedding, we can identify any $\vert \varphi \rangle \in \mathbb{C}^{n} $
with a $\vert \Tilde{\varphi} \rangle \in \mathbb{R}^{2n}$ and consequently
we can omit the absolute value in Eq.~\eqref{min}. Then we have:
\begin{lemma}
\label{app:lemma_1}
Let $X,Y$ be compact and $f : X \times Y \rightarrow \mathbb{R}$
be uniformly continuous. Further, suppose that for
$x_{0} \in X$ the value $y_{0} := {\text{argmin}}_{y \in Y} f (x_{0},y)$
is unique. Then for all $\varepsilon > 0$ there exists $\delta >0$ such that
for all $x \in U_{\delta} (x_{0} )$ we have
${\text{argmin}}_{y \in Y} \, f (x,y) \subset U_{\varepsilon}(y_{0})$, where
$U_{\delta}(x_{0})$ and $U_{\varepsilon}(y_{0})$ denote vicinities of $x_0$ and
$y_0$, respectively. In other words, the function $\text{argmin}$ is
continuous in $x_0$.
\end{lemma}
\begin{proof}
For the given $\varepsilon$ we can split the set $Y$ in the vicinity
$U_{\varepsilon} (y_{0})$ and its complement $\overline{U}_{\varepsilon} (y_{0})$.
In particular we have
\begin{align}
f(x_{0}, y_{0} ) &=
\min_{y \in U_{\varepsilon}(y_{0})}{f (x_{0} ,y )}
\nonumber
\\
& <
\min_{y \in \overline{U}_{\varepsilon} (y_{0})}{f (x_{0} , y )}
=: f ( x_{0}, \tilde{y}_{0}),
\end{align}
that is, $\tilde{y}_{0}$ denotes the value where the minimum in
$\overline{U}_{\varepsilon} (y_{0})$ is assumed. Let us denote the
difference between the function values as
\begin{align}
\xi = f ( x_{0}, \Tilde{y}_{0}) - f(x_{0}, y_{0} ) > 0.
\end{align}
By the uniform continuity we can choose $\delta>0$ such that for all
$\tilde{x} \in X$ with $\Vert \tilde{x}- x_{0} \Vert < \delta$
and for all $y$ we have
\begin{align}
\vert f ( \tilde{x} , y) - f( {x}_0, y )\vert < \frac{\xi}{2}.
\end{align}
Then we have
\begin{align}
f(\tilde{x}, y_0) < f(x_0, y_0) + \frac{\xi}{2},
\end{align}
but for all $y\in \overline{U}_{\varepsilon} (y_{0})$
\begin{align}
f(\tilde{x}, y) > f(x_0, \tilde{y}_0) - \frac{\xi}{2} > f(x_0, y_0) + \frac{\xi}{2},
\end{align}
which implies that the minimum of $f(\tilde{x}, y)$ lies in the vicinity
$U_{\varepsilon} (y_{0})$.
\end{proof}
\begin{corollary}\label{app:corr_1}
Let $\vert \varphi \rangle $ be a pure quantum state and suppose that
its BPA $\vert \pi \rangle $ is unique. Then, for all $\tau >0$, there
exists a $\xi >0$ such that the BPA $\vert \Tilde{\pi} \rangle $ of
$\vert \Tilde{\varphi} \rangle \in \mathcal{U}_{\xi} ( \vert \varphi \rangle ) $
lies in $\mathcal{U}_{\tau} (\vert \pi \rangle )$.
\end{corollary}
\begin{proof}
The function $f(x,y) := \vert \langle x,y \rangle \vert^{2}$ is
continuous on $\mathbb{R}^{2n} \times \mathbb{R}^{2n}$. Further,
the space $B_{1} := \lbrace x \in \mathbb{R}^{2n} \, : \, \vert \vert x \vert \vert =1 \rbrace$
is compact and thus also $M:=B_{1} \times B_{1}$. Then, by the Heine-Cantor
theorem~\cite{continuity}, $f$ is uniformly continuous on $M$. Since we
assume $\vert \pi \rangle$ to be unique, we can apply Lemma~\ref{app:lemma_1},
which guarantees for all $\tau > 0$ the existence of $\xi >0$ such that
$\vert \tilde{\pi} \rangle \in U_{\tau}(\vert \pi \rangle)$ for
$\vert \tilde{\psi} \rangle \in U_{\xi}(\vert \psi \rangle)$.
\end{proof}
According to Eq.~\eqref{app:nice_update}, the updated state needs a renormalization given by $\mathcal{N}= \mathcal{N}(\vert \psi \rangle , \theta)$. The next ingredient for the proof of the main result is a Lemma
that will later provide an upper bound on the function $1/\mathcal{N}$.
\begin{lemma} \label{app:lemma_2}
There exists $C>0$ such that for all $q \in [0,1]$ and $x>0$ we have
\begin{align} \label{eq:norm_estimate}
\frac{1}{\sqrt{1+2qx+x^2}} < 1- qx + Cx^{2}.
\end{align}
More precisely, the above inequality holds for all $C \geq 3$.
\end{lemma}
\begin{proof}
As both sides of Eq.~\eqref{eq:norm_estimate} are positive, we can square them
such that the inequality remains true.
This yields the equivalent inequality
\begin{align}
\begin{split}
0 < & [2C-3q^2 +1]x^2 + 2q[C-1+q^2]x^3 + [C^2 - C(4q^2-2) +q^2]x^4 + 2q[C^2-C]x^5 + C^2x^6 \\
& =: f_{2}x^2 + f_{3}x^3+f_{4}x^4+f_{5}x^5+f_{6}x^6
\end{split}
\end{align}
Now observe that for each of the constants $f_{k} = f_{k}(C)$ there is a $C_{k} >0$ such that
$f_{k}(C) \geq 0$ for all $C \geq C_{k}$. Indeed, we have
$C_{2} := \text{max} \lbrace (1/2)(3q^2-1), 0 \rbrace$,
$C_{3} := 1-q^2$,
$C_{5} = C_{6} := 1$.
The choice of $C_{4}$ depends on whether $q^2 \geq 1/2$ or not. If we denote $\alpha = \vert 4 q^2-2 \vert$, we obtain for the case $q^2<1/2$ that $C^2+ \alpha C + q^2 >0$, which is trivially fulfilled for any $C\geq 1$. If $q^2 > 1/2$, we need $C^2-\alpha C +q^2 >0$. Since $C^2-\alpha C +q^2 \geq C^2 - \alpha C = C(C-\alpha)$, it suffices to take $C >\alpha$.
In general, we have $C_{4} := \text{max} \lbrace 1, \vert 4q^2-2 \vert \rbrace$. This implies that
for $C\geq \tilde{C} := \text{max} \, \lbrace C_{k} \, \vert \, k=2,...,6 \rbrace$ all coefficients are positive. Hence $0< f_{2}x^2 $ implies $0 <f_{2}x^2 + f_{3}x^3+f_{4}x^4+f_{5}x^5+f_{6}x^6 $ if $x>0$. Consequently, it is sufficient to consider only the problem $0 < f_{2}x^{2}$, which is true for $C \geq C_{7} := (1/2)(3q^2-1)$. Hence, choosing $C \geq \text{max} \lbrace \Tilde{C},C_{7} \rbrace$ yields the claim. Taking the maximum over all $C_{k}$ with respect to $q \in [0,1]$ shows that any $C > 2$ suffices.
\end{proof}
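The Lemma can also be spot-checked numerically (a check, not a proof; the grid ranges below are arbitrary choices) with $C=3$:

```python
import numpy as np

C = 3.0
q = np.linspace(0.0, 1.0, 101)[:, None]
x = np.linspace(1e-4, 10.0, 1000)[None, :]

lhs = 1.0 / np.sqrt(1.0 + 2.0 * q * x + x ** 2)
rhs = 1.0 - q * x + C * x ** 2
print(bool(np.all(lhs < rhs)))   # True on the sampled grid
```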
\begin{theorem}
Let $\vert \psi \rangle$ be a generic pure quantum state. Then there exists $\Theta >0$
such that for the updated state $\vert \tilde{\psi} \rangle$ according to
Eq.~\eqref{app:nice_update} with step-size $\Theta > \theta > 0$ we have
$G(\vert \psi \rangle ) < G(\vert \tilde{\psi} \rangle)$.
\end{theorem}
\begin{proof}
Let us start with some step-size $\theta_{0} > 0$ that we will choose in the end
appropriately and consider
\begin{align}
\vert \tilde{\psi} \rangle = \frac{1}{\mathcal{N}}
(\vert \psi \rangle + \theta_{0} \vert \eta \rangle).
\label{eq-app-1}
\end{align}
It is important to note that $\vert \eta \rangle$ is a normalized state, that is,
$\vert \eta \rangle = 1/(\sqrt{1-\lambda^2}) (\openone - \vert \pi \rangle \langle \pi \vert)
\vert \psi \rangle$. This yields $\langle \psi \vert \eta \rangle = \sqrt{1-\lambda^2}$.
The BPA $\vert \tilde{\pi} \rangle $ of $\vert \tilde{\psi}\rangle$ can be parameterized
using the old product state, i.e., $\vert \tilde{\pi} \rangle = \sqrt{1-\delta^2} \vert \pi \rangle + \delta \vert \chi \rangle$, for a normalized, appropriately chosen $\vert \chi \rangle$ and $\delta > 0.$
Using $\langle \pi \vert \eta \rangle =0$
and
$|\langle \chi \vert \eta \rangle| \leq 1$
we obtain
\begin{align}
\tilde{\lambda} :=
| \langle \tilde{\pi} \vert \tilde \psi \rangle|
= \frac{1}{\mathcal{N}}
\big|
\braket{\tilde{\pi}}{\psi} + \theta_0 \delta \braket{\chi}{\eta}
\big|
\leq
\frac{1}{\mathcal{N}}
\big(
|\braket{\tilde{\pi}}{\psi}|
+
|\theta_0 \delta \braket{\chi}{\eta}|
\big)
\leq
\frac{1}{\mathcal{N}} (\lambda + \delta \theta_{0}).
\end{align}
Using that $\mathcal{N}=\sqrt{1+2\theta_0 \sqrt{1-\lambda^2}+ \theta_0^2}$ and
Lemma~\ref{app:lemma_2} there exists $C>0$ such that
\begin{align}
\Tilde{\lambda} & < (1- \theta_{0} \sqrt{1-\lambda^2} + C \theta_{0}^{2}) (\lambda + \delta \theta_{0})
\nonumber \\
& = \lambda + \delta \theta_{0} - \lambda \sqrt{1-\lambda^2} \theta_{0}
+ \theta_0^2\big[C (\lambda + \delta \theta_0) - \delta \sqrt{1-\lambda^2}\big]
\nonumber \\
& = \lambda + \theta_0 (\delta - \lambda \sqrt{1-\lambda^2}) + \mathcal{O}(\theta_0^2).
\label{eq-app-2}
\end{align}
Note that $\lambda \sqrt{1-\lambda^2}>0$, since $0\neq\lambda\neq1$. This comes from the fact that
a generic state is not a product state (which would give $\lambda=1$), while any state has at least some overlap
with some product state (so $\lambda \neq 0$). So, if $\delta < \lambda \sqrt{1-\lambda^2}$ we have
$\tilde{\lambda} < \lambda$ for suitably small $\theta_0$.
It remains to show that we can guarantee that $\delta$ obeys this condition. We start with a given
value of $\lambda$ and consider a number $0 < \delta_1 < \lambda \sqrt{1-\lambda^2}$. According to
Corollary~\ref{app:corr_1}, we can find a $\theta_1>0$ such that
$\vert \tilde{\pi} \rangle \in U_{\delta_{1}} (\vert \pi \rangle)$
if $\vert \tilde{\psi} \rangle \in U_{\theta_1} (\vert \psi \rangle)$.
Then, this gives us an upper bound on $\theta_0$ for Eq.~(\ref{eq-app-1}),
so that the resulting $\delta<\delta_1$ in Eq.~(\ref{eq-app-2}) is small
enough to guarantee a negative slope for the linear term. Still,
$\tilde{\lambda} < \lambda$ is not guaranteed, due to the
$\mathcal{O}(\theta_0^2)$ term in Eq.~(\ref{eq-app-2}). But,
for the given values of $\lambda, \delta_1$ and $C$, we can
also compute from Eq.~(\ref{eq-app-2}) a second threshold $\theta_2$,
which guarantees $\tilde{\lambda} < \lambda$. Then we can take finally
in the statement of the Theorem $\Theta = \min\{\theta_1, \theta_2\}$
and the proof is complete.
\end{proof}
It is remarkable that in this proof the fact that $\ket{\pi}$ is a product
state was never used. So, the algorithm can also be used if the overlap with
states from some other subset of the (pure) state space is to be minimized.
\section{Appendix B: Performance of the algorithm}
In the following we explain our choice of the step-size $\theta$, the number
of iterations and the duration of the computation. First, it should be noticed
that the algorithm contains the computation of the closest product state as a
subroutine. Because the see-saw is prone to local maxima, we randomize the
algorithm, i.e., we run the iteration for many different initial states. The
number of iterations as well as the number of initial states depends on the
number of parties and the local dimension. The iteration typically converges
fast, e.g. for three qubits $10$ iterations are sufficient and for five ququads
$30$ iterations. For small systems the number of initial states can be
chosen small, e.g., for three qubits $10$ different initial points make the
largest overlap robust, while for larger systems more initial states are
necessary, e.g., for five ququads $100$ points were taken.
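The closest-product-state subroutine can be sketched as alternating exact local optimizations; the code below is an illustrative reconstruction for three qubits, not the authors' implementation. As a sanity check, for the GHZ state the maximal product overlap is known to be $1/\sqrt{2}$:

```python
import numpy as np

rng = np.random.default_rng(1)

def seesaw_overlap(psi, n_starts=20, n_iter=100):
    """Max product overlap of a 3-qubit state by alternating local updates."""
    psi = psi.reshape(2, 2, 2)
    best = 0.0
    for _ in range(n_starts):
        v = [rng.normal(size=2) + 1j * rng.normal(size=2) for _ in range(3)]
        v = [x / np.linalg.norm(x) for x in v]
        for _ in range(n_iter):
            # The optimal local vector is the contraction with the other two.
            v[0] = np.einsum('abc,b,c->a', psi, v[1].conj(), v[2].conj())
            v[0] /= np.linalg.norm(v[0])
            v[1] = np.einsum('abc,a,c->b', psi, v[0].conj(), v[2].conj())
            v[1] /= np.linalg.norm(v[1])
            v[2] = np.einsum('abc,a,b->c', psi, v[0].conj(), v[1].conj())
            v[2] /= np.linalg.norm(v[2])
        overlap = abs(np.einsum('abc,a,b,c->', psi,
                                v[0].conj(), v[1].conj(), v[2].conj()))
        best = max(best, overlap)
    return best

ghz = np.zeros(8); ghz[0] = ghz[7] = 2 ** -0.5
ov_ghz = seesaw_overlap(ghz)
print(ov_ghz)   # ~0.7071 = 1/sqrt(2)
```

Each local update is an exact maximization, so the overlap increases monotonically within one run; the random restarts address the local maxima mentioned above.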
The step-size $\theta$ used in the update rule Eq.~\eqref{app:nice_update}
depends on the size of the system and on the variation of the measure of the
iterates. For systems of small and moderate size, we initially choose
$\theta=0.01$. After a certain number of iterations (mostly around $400$)
the measure of the iterates is not increasing anymore, but fluctuates around
a certain value where the amount of fluctuation depends on the step-size.
In this case the step-size is reduced via $\theta \to \theta/2$ and one
proceeds with the new step-size. However, if $\theta$ becomes small
it is also useful to improve the precision in the computation of the best
product state approximation.
\begin{figure}[t]
\centering
\includegraphics[width=0.55\textwidth]{Appendix_01.png}
\caption{Convergence of the algorithm for bipartite systems of different local dimension $d \in \lbrace 2,3,4,5,10 \rbrace$. For local dimension $d \leq 5$ we have chosen the step-size as $\theta=0.01$. For $d=10$ the step-size was chosen as $\theta=0.1$. After $350$ iterations, the iterates had a very high fidelity with the qudit Bell state.}
\label{fig:my_label}
\end{figure}
Because the update rule resembles the idea of a gradient descent (GD), we can
make use of advanced gradient descent techniques as the momentum method
(CM)~\cite{gradient_polyak_1964} or Nesterov's accelerated gradient
(NAG)~\cite{nesterov1983}. In the general setting one aims to minimize
a (typically complicated and high-dimensional) real-valued function $f(\omega)$.
Suppose that one initializes the parameters to $\omega_{0}$. In the GD,
the parameters are iteratively updated according to
$\omega_{t+1} = \omega_{t} - \kappa_{t}$, where $\kappa_{t} = \theta_{t} (\nabla_{\omega} f) (\omega_t)$
with step-size $\theta_{t} >0$, and $(\nabla_{\omega}f)(\omega_{t})$ is the gradient of $f$
evaluated at $\omega_{t}$. The idea behind CM is to keep track of the direction we are moving
in parameter space. For a given momentum parameter $\gamma \in [0,1]$ the update is given
by $\kappa_{t} = \gamma \kappa_{t-1} + \theta_{t} (\nabla_{\omega}f) (\omega_{t})$.
Clearly, $\kappa_{t}$ is a running average of previously calculated gradients.
The advantage of using CM is that the momentum enhances the decrease in directions with small
gradients while suppressing oscillations in high-curvature directions.
For NAG, the idea is to calculate the gradient not at the current parameter
$\omega_{t}$, that is $(\nabla_{\omega}f)(\omega_{t})$, but at the expected value of the
parameters given the recent momentum, that is, at $\omega_{t} - \gamma \kappa_{t-1}$.
Accordingly, the NAG update rule is given by $\omega_{t+1} = \omega_{t} - \kappa_{t}$ where $\kappa_{t}= \gamma
\kappa_{t-1} + \theta_{t} (\nabla_{\omega}f)(\omega_{t} - \gamma \kappa_{t-1})$.
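The three update rules can be compared on a toy quadratic. Everything here (function, learning rate, momentum parameter) is an illustrative choice, not from the text; the NAG look-ahead is written as $\omega_t - \gamma\kappa_{t-1}$, the sign consistent with the descent update $\omega_{t+1}=\omega_t-\kappa_t$:

```python
import numpy as np

A = np.diag([1.0, 25.0])                 # ill-conditioned toy quadratic f = w.A.w/2
grad = lambda w: A @ w

def run(method, steps=200, lr=0.03, gamma=0.9):
    w = np.array([1.0, 1.0])
    kappa = np.zeros(2)
    for _ in range(steps):
        if method == 'gd':
            kappa = lr * grad(w)
        elif method == 'cm':             # classical momentum
            kappa = gamma * kappa + lr * grad(w)
        elif method == 'nag':            # gradient at the look-ahead point
            kappa = gamma * kappa + lr * grad(w - gamma * kappa)
        w = w - kappa
    return np.linalg.norm(w)

for m in ('gd', 'cm', 'nag'):
    print(m, run(m))                     # momentum variants converge faster here
```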
The update rule in Eq.~\eqref{app:nice_update} naturally invites two different variants.
First, as introduced in the main text, the update direction
$\vert \eta \rangle = (1/\sqrt{\langle \psi \vert \Pi \vert \psi \rangle}) \Pi \vert \psi \rangle$
can be re-normalized. This has the consequence that the amount of the shift stays
constant for all iterations. However, a second option would be to take
just the projected (un-normalized) state $\Pi \vert \psi \rangle$ as the update.
Here, the size of the shift changes within each iteration, depending on how strongly
the state $\vert \psi \rangle$ is supported within the subspace $\text{im}(\Pi)$.
\section{Appendix C: Unitary optimization} \label{app:uniopt}
After a sufficient number of iterations, the algorithm yields a state given by
coordinates with respect to a random basis. Hence, generically each component
of the tensor is nonzero.
However, in order to understand the structure of the state, we seek a concise
representation in which most of the coefficients vanish. As we consider two states
to be equal if there is a LU-transformation connecting them, this requires a parametrization
of the set of unitary matrices. First notice that $\text{U}(d)$ is the semidirect product
of $\text{U}(1)$ with $\text{SU}(d)$ and hence we can restrict to parametrizations of $\text{SU}(d)$. For
qubits, one can make use of the fact that $\text{SU}(2)$ is diffeomorphic to the $3$-sphere
$S^{3}$. In particular, an arbitrary $\text{SU}(2)$ matrix can be written as
\begin{align}
U =
\begin{pmatrix}
\alpha & - \beta^{*} \\
\beta & \alpha^{*}
\end{pmatrix},
\quad \alpha, \beta \in \mathbb{C} \, \, \text{with} \, \, \vert \alpha \vert^{2} + \vert \beta \vert^{2} =1
\end{align}
Consequently, the parametrization involves four real parameters and one quadratic constraint.
However, for $d \geq 3$ the $\text{SU}(d)$ is not even homeomorphic to a sphere, or a product of
them, e.g., $\text{SU}(3)$ is \textit{not} homeomorphic to $S^{8}$. Consequently, for systems of
higher dimensions different approaches exist \cite{tilma2002,jarlskog2005,spengler2010}.
In our work, we have used the Jarlskog parametrization~\cite{jarlskog2005}, which is a
simple recursive scheme that can be easily implemented
numerically. First, notice that any $X \in \text{U}(d)$ can be written as
$X = \Phi_{\alpha} Y \Phi_{\beta}$ where
$\Phi_{\alpha} = \text{diag}(e^{i \alpha_{1}},...,e^{i \alpha_{d}})$, $\Phi_{\beta}$
similar and $Y$ a unitary $d\times d$ matrix. Now, $Y$ is decomposed into a product of unitaries, that is,
$Y = \prod_{k=2}^{d} A_{d,k} $ with
\begin{align}
A_{d,k} =
\begin{pmatrix}
A^{(k)} & 0 \\
0 & \openone_{d-k}
\end{pmatrix},
\quad
\text{U}(k) \ni A^{(k)} =
\begin{pmatrix}
\openone_{k-1} - (1-\cos(\theta_{k})) \vert a_{k} \rangle \langle a_{k} \vert & \sin(\theta_{k}) \vert a_{k}
\rangle \\
- \sin(\theta_{k}) \langle a_{k} \vert & \cos(\theta_{k})
\end{pmatrix},
\end{align}
where $\vert a_{k} \rangle \in \mathbb{C}^{k-1}$ is normalized to one, i.e.,
$\langle a_{k} \vert a_{k} \rangle =1$, and $\theta_{k} \in [0,2\pi)$ is an arbitrary angle.
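A small numerical check of this construction (our own sketch; the elementary block $A^{(k)}$ is taken as a $k\times k$ matrix built from a unit vector $\vert a_k\rangle$ with $k-1$ complex components) confirms unitarity:

```python
import numpy as np

rng = np.random.default_rng(2)

def jarlskog_block(k, theta, a):
    """Elementary k x k block A^(k) from a unit vector a in C^(k-1)."""
    P = np.outer(a, a.conj())
    A = np.zeros((k, k), dtype=complex)
    A[:k - 1, :k - 1] = np.eye(k - 1) - (1 - np.cos(theta)) * P
    A[:k - 1, k - 1] = np.sin(theta) * a
    A[k - 1, :k - 1] = -np.sin(theta) * a.conj()
    A[k - 1, k - 1] = np.cos(theta)
    return A

k = 4
a = rng.normal(size=k - 1) + 1j * rng.normal(size=k - 1)
a /= np.linalg.norm(a)
A = jarlskog_block(k, 0.7, a)
print(np.allclose(A.conj().T @ A, np.eye(k)))   # True: the block is unitary
```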
We now describe how this parametrization can be used to bring the numerically found
states into a concise form. Here, two different cases can be considered. If one has
a guess for the possible state (e.g., if the marginals are all maximally mixed, one
expects an AME state), one can compute the fidelity between the numerical state
$\vert \psi \rangle$ and the guess $\vert \varphi_{\text{guess}}\rangle$,
i.e., $\sup \vert \langle \psi \vert U_{1} \otimes \cdots \otimes U_{n} \vert \varphi_{\text{guess}} \rangle \vert$.
If there is no possible candidate, the idea is to minimize a function
$f: \text{U}(d) \times \cdots \times \text{U}(d) \rightarrow \mathbb{R}$
depending on the state, which becomes minimal if many entries of the state vanish.
For instance, given the state $\vert \psi \rangle$ a natural candidate
would be $f(U_{1},...,U_{n}) = \sum \vert ( U_{1} \otimes \cdots \otimes U_{n} \vert \psi \rangle)_{i_{1},...,i_{n}} \vert $. Given two states $\vert \phi \rangle, \vert \psi \rangle$ we regard them as equal, if $\mathcal{F} (\vert \psi \rangle, \vert \phi \rangle) \geq 1-\epsilon$ with $\epsilon <10^{-6}$, where $\mathcal{F}$ denotes the fidelity.
\section{Appendix D: Known AME graph states and their graphs}
In this section we present the graphs of the corresponding graph states mentioned
in the main text and explain their construction. There are two equivalent methods
how a graph state could be defined, namely via a quantum circuit in terms of a
sequence of commuting unitary two-qubit operations or alternatively using
the stabilizer formalism~\cite{graph_states_hein_2004}. Here we use the
first approach. A graph $G=(V,E)$ is a set of $\vert V \vert$ vertices and
some edges $E \subset V\times V$ connecting them. A graph state $\vert G \rangle $
associated to a graph $G$ is a pure quantum state on $(\mathbb{C}^{2})^{\otimes \vert V \vert}$
built up of Ising type interactions according to
\begin{align}\label{eq:graphstate}
\vert G \rangle = \prod_{(a,b) \in E} \textsf{CZ} _{a,b} \vert + \rangle^{\otimes \vert V \vert}
\quad \text{where}\quad
\textsf{CZ} _{a,b} = \sum_{k=0}^{1} \vert k \rangle \langle k \vert_{a} \otimes Z_{b}^{k}
\end{align}
where $Z_{b}$ denotes $\sigma_{z}$ acting only on system $b$, and similarly
for $\openone_{a}$. The $\text{AME}(4,4)$ state can be built up out of $8$
qubits that are grouped together as $A= \lbrace 1,2 \rbrace, B= \lbrace 3,4 \rbrace, C= \lbrace 5,6 \rbrace, D= \lbrace 7,8 \rbrace$, while the Ising type interaction is implemented
as described in Eq.~\eqref{eq:graphstate}.
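The construction in Eq.~\eqref{eq:graphstate} is easy to verify numerically. As an illustrative sketch (our own code), the five-qubit ring graph yields the known $\text{AME}(5,2)$ state, i.e., all two-body marginals are maximally mixed:

```python
import numpy as np
from itertools import combinations

def graph_state(n, edges):
    """|G> = prod_{(a,b)} CZ_ab |+>^n as a 2^n vector (qubit 0 = leftmost bit)."""
    psi = np.full(2 ** n, 2 ** (-n / 2))
    for a, b in edges:
        for idx in range(2 ** n):
            if (idx >> (n - 1 - a)) & 1 and (idx >> (n - 1 - b)) & 1:
                psi[idx] *= -1          # CZ phase when both qubits are |1>
    return psi

def marginal(psi, n, keep):
    """Reduced density matrix on the qubits in `keep`."""
    perm = list(keep) + [q for q in range(n) if q not in keep]
    t = psi.reshape([2] * n).transpose(perm).reshape(2 ** len(keep), -1)
    return t @ t.conj().T

ring = [(i, (i + 1) % 5) for i in range(5)]
psi = graph_state(5, ring)
ok = all(np.allclose(marginal(psi, 5, pair), np.eye(4) / 4)
         for pair in combinations(range(5), 2))
print(ok)   # True: every 2-body marginal is I/4
```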
\begin{figure}[!htb]
\centering
\begin{minipage}{.3\textwidth}
\centering
\includegraphics[width=0.75\linewidth]{AME44.png}
\caption{Graph of the known $\text{AME}(4,4)$ state~\cite{AME_qudit_helwig_2013}.}
\label{fig:prob1_6_2}
\end{minipage}%
\begin{minipage}{.05\textwidth}
\mbox{ }
\end{minipage}
\begin{minipage}{0.64\textwidth}
\centering
\includegraphics[width=\linewidth]{graphs.png}
\caption{Highly entangled qubit graph states. The first four graphs yield the known AME states on two/three/five/six qubits, respectively. The last graph state, the Fano graph state~\cite{huber2017}, is a $2$-uniform state on seven qubits where $32$ of its $35$ three-body marginals are maximally mixed.}
\label{fig:prob1_6_1}
\end{minipage}
\end{figure}
\twocolumngrid
\section{Introduction}
In deep inelastic scattering (DIS) the incident lepton is scattered on
a coloured quark. Normally this results in a colour field between the struck
quark and the proton remnant, such that hadrons are produced in the whole
rapidity region in between.
In electron-proton collisions at HERA, this leads to particles being produced
also close to the proton beam direction. The recent discovery by
the ZEUS \cite{lrg_ZEUS} and H1 \cite{lrg_H1} collaborations at HERA
of large rapidity gap events has attracted much interest.
This new class of DIS events has a large region of forward rapidity
(i.e. close to the proton beam) where no particles or energy depositions
are observed.
The most forward hadronic activity being observed is then actually in the
central part of the detectors.
These large rapidity gap events cannot be described by standard models
for DIS and hadronization \cite{LEPTO,Lund,Werner}.
Therefore the observation of a surprisingly large fraction ($\sim 10\%$) of
events with a large rapidity gap strongly suggests the presence of a final
proton close to the beam momentum.
These events have, therefore, been primarily interpreted in terms of
pomeron exchange, although alternative models have recently been proposed
\cite{SCI}.
In this interpretation, the lepton interacts with a colourless object having
the quantum numbers of the vacuum, i.e. the pomeron. The experimental
signature is then a quasi-elastically scattered proton well separated in
rapidity from the other produced particles. The leading proton escapes
undetected by the main detector, but may be observed in leading proton
spectrometers that are coming into operation in both ZEUS and H1.
In the last few years, experiments on DIS have demonstrated that
the internal structure of the nucleon is more complicated than
expected. The polarized DIS experiments performed by EMC and SMC at CERN have shown that
only a small fraction of the proton spin is carried by the valence quarks
\cite{A89}. In addition, the strong violation of the Gottfried sum rule
observed by NMC \cite{A91} strongly suggests a $\bar d - \bar u$
asymmetry of the nucleon sea. The new fits of the parton distributions
\cite{MSR94} to the world deep inelastic and Drell--Yan data (including
the dedicated NA51 experiment \cite{B94}) seem to confirm the asymmetry.
Both the violation of the Gottfried sum rule and the asymmetry measured
in the Drell--Yan processes can be naturally accounted for by
the presence of pions in the nucleon, as formulated in the pion cloud
model \cite{pioncloud}.
In view of these successes of the pion cloud model, it is mandatory to consider
its role in other phenomena.
The presence of such pions leads to an additional mechanism for nucleon
production in DIS. In fixed target experiments, as
(anti)neutrino deep inelastic scattering \cite{neutrinoDIS} for instance,
it leads to the production of slow protons. The pion-exchange model
describes the proton production on a neutron target \cite{SBD95}
(extracted from deuteron data \cite{BDT94}
obtained in bubble chamber experiments at CERN). With HERA kinematics
the pion cloud induced mechanism leads to the production of rather
fast forward protons and neutrons. The mechanism is shown schematically in
Fig.~1. The virtual photon `smashes' the virtual colourless pion
into debris and the nucleon (proton or neutron) or an isobar is produced
as a spectator of the reaction. In this respect there is full analogy
to the reaction on the pomeron. Therefore, the pion cloud
induced mechanism could also lead to rapidity gap events.
In this paper, we investigate these processes and present quantitative
results not previously available in the literature.
To understand these processes, not only protons but also neutrons
in the forward directions are interesting \cite{LF92,HLNSS94}.
Recently the ZEUS collaboration has installed a forward neutron
calorimeter (FNCAL) \cite{FNC} which will provide additional experimental
information. In analogy to the hadronic reaction
$pp\to nX$, the pion-exchange is expected to be the dominant mechanism of
the fast neutron production also at HERA
\cite{LF92,HLNSS94}. Thus, HERA opens new possibilities to test the concept
of pion exchange and the pionic structure in the nucleon.
In the present paper we study several quantities which could be analyzed
in HERA experiments; in particular using the main calorimeter, the
leading proton spectrometer \cite{LPS} (LPS) and the forward neutron
calorimeter \cite{FNC} in ZEUS. The main aim of the study is to find the best
signal to identify the discussed mechanism of scattering on a pion in
the proton.
\begin{figure}[t]
\epsfig{file=fig_1.eps,
width=21cm,bbllx=15pt,bblly=280pt,bburx=700pt,bbury=480pt,clip=}
\caption{\it Fast forward nucleon production at HERA:
(a) direct production through pion-exchange,
(b) indirect production via a $\Delta$ resonance in pion-exchange,
(c) pomeron-exchange.}
\end{figure}
\section{Pion--exchange mechanism of fast nucleon production}
In the meson cloud model \cite{HSS94} the nucleon is viewed as a quark
core (called a bare nucleon) accompanied by the mesonic cloud.
Restricting to the most important $\pi N$ component, the Fock
state decomposition of the light-cone proton is
\begin{eqnarray}
|p\rangle={\sqrt{Z}}\Big[|(3q)\rangle
+ \int dz\,d^2\vec p_T\,\phi(z,\vec p_T)\Big(
\sqrt{1\over 3} |p\pi^0,z,\vec p_T\rangle
+ \sqrt{2\over 3} |n\pi^+,z,\vec p_T\rangle\Big)+\,... \,\Big],
\label{WFp}
\end{eqnarray}
with $Z$ being the wave function renormalization constant which can be
calculated by imposing the normalization condition $\langle p|p\rangle=1$.
$\phi(z,\vec p_T)$ is the light cone wave function of the $\pi N$ Fock state,
where $z$ is the longitudinal momentum fraction of the bare nucleon and
$\vec p_T$ its transverse momentum.
The presence of virtual pions in the nucleon leads to an additional
mechanism for nucleon production referred to as `direct spectator' (Fig.~1a)
and `sequential spectator' (Fig.~1b) processes. The pion
in the nucleon interacts with a virtual $\gamma$ producing
a system $X$. For comparison we show the pomeron-exchange
mechanism in Fig.~1c. The cross section for the semi-inclusive spectator
process $ep\to e'NX$ can be written as
\begin{equation}
\frac{d^4 \sigma^{\rm sp} \bigl( ep\to e'NX \bigr) }
{dx dQ^{2} dz dp_{T}^2}
= \frac{1}{z} f_{\pi N}(1-z,t) \frac{d \sigma^{e \pi}(x/(1-z)) }
{d(x/(1-z))\,dQ^{2}}
\; .
\end{equation}
The presence of the $\pi \Delta$ Fock component in the proton leads to
the production of a spectator $\Delta$ which decays into a pion and
a nucleon. The one-pion exchange contribution to the inclusive cross
section can be obtained by integrating over unmeasured quantities
\begin{equation}
\frac{ d\sigma^{e p}(x,Q^{2}) } {dx \, dQ^2} =
\int_{0}^{1-x} {dz} \int_{-\infty}^{t(0,z)} dt
\; f_{\pi N}(1-z,t)
\frac{d\sigma^{e \pi}(x/(1-z),Q^2)} {d(x/(1-z)) \, dQ^2} ,
\end{equation}
where $\sigma^{e \pi}$ is the cross section for the inclusive deep
inelastic scattering of the electron from the virtual pion. In practical
calculations the on-mass-shell $e \pi$ cross section can be used.
The probability density to find a meson with light-cone momentum fraction
$x_{\pi} = (1-z)$ and four-momentum squared $t$
(or alternatively, for the $\pi N$ case, transverse momentum $p_{T}^2= -t(1-x_\pi)-m_N^2x_{\pi}^2$)
is referred to as the splitting function, which quantifies the presence
of virtual mesons in the nucleon.
The splitting function $f(x_\pi,t)$ to the $\pi N$ Fock state (Fig.~1a) is
\begin{equation}
f_{\pi N} (x_\pi,t) = \frac{3 g_{p \pi^0 p}^2}{16\pi^2}
x_\pi \frac{ (-t) |F_{\pi N}(x_\pi,t)|^2 } {(t - m_{\pi}^2)^2} ,
\label{splitpiN}
\end{equation}
and to the $\pi \Delta$ Fock state (Fig.~1b) is
\begin{equation}
f_{\pi \Delta} (x_\pi,t) = \frac{2 g_{p \pi^- \Delta^{++}}^2}{16\pi^2}
x_\pi \frac{ (M_{+}^2 - t)^2 (M_{-}^2 - t) |F_{\pi \Delta}(x_\pi,t)|^2 }
{ 6 m_N^2 m_{\Delta}^2 (t - m_{\pi}^2)^2 } ,
\label{splitpiDelta}
\end{equation}
where $M_{+} = m_{\Delta} + m_N$ and $M_{-} = m_{\Delta} - m_N$ .
The couplings $g^2$ depend on the process, but via the isospin relations
$g^2_{p\to \pi^+ n} : g^2_{p\to \pi^0 p}=2:1$ and $g^2_{p\to \pi^+ \Delta^0} :
g^2_{p\to \pi^0 \Delta^+} : g^2_{p\to \pi^- \Delta^{++}}=1:2:3$
there are only two independent couplings which we take as
$g^2_{p\to \pi^0 p}/4\pi = 13.6$ \cite{trs91}
and $g^2_{p\to \pi^- \Delta^{++}}/4\pi = 12.3$ GeV$^{-2}$ \cite{hhs89}.
The $F_{MB}(x_\pi,t)$ are vertex form factors, which account for the extended
nature of the hadrons involved. The form factors used in meson exchange
models are usually taken to be functions of $t$ only.
As discussed in ref.~\cite{HSS94}
such form factors are a source of momentum sum rule violation and
it was therefore suggested to use form factors which are
functions of the invariant mass of the intermediate meson-baryon system,
i.e. $M_{MB}^2(x_\pi,p_{T}^2)= \frac{m^2_\pi +p_{T}^2}{x_\pi}
+\frac{m_B^2 +p_{T}^2}{1-x_\pi}. $
It can be shown that such a vertex function arises naturally
if one computes the splitting function $f(x_\pi,t)$ in time-ordered
perturbation theory in the infinite momentum frame \cite{HSS93}.
This functional form is typical for parameterizing
the light-cone wave function of composite systems (see e.g. \cite{lfcqm}).
In all calculations discussed below
the vertex form factors have been assumed in the exponential form
\begin{equation}
F_{MB}(x_\pi,p_{T}^2) =
\exp \left[- \frac{M_{MB}^2(x_\pi,p_{T}^2) - m_N^2}
{2\Lambda_{MB}^2} \right].
\label{formfac}
\end{equation}
By using the kinematical relation \cite{S72}
\begin{equation}
t = {-p_{T}^2 \over 1-x_\pi} - x_\pi
({m_{B}^2 \over 1-x_\pi} - m_{N}^{2})
\end{equation}
the form factor given by Eq.(\ref{formfac}) can be equivalently
expressed in terms of $x_\pi$ and $t$ in the simple form:
\begin{equation}
F_{MB}(x_\pi,t) =
\exp \left[ - {m_{\pi}^{2} - t \over 2 \Lambda^{2}_{MB} x_\pi} \right] \; .
\label{formfact}
\end{equation}
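The equivalence of the two representations Eq.~(\ref{formfac}) and Eq.~(\ref{formfact}) can be checked numerically. The Python sketch below (our own illustration; the masses and the cut-off $\Lambda_{\pi N}=1.10$~GeV are taken from the text) evaluates the form factor both ways:

```python
import math

M_N, M_PI = 0.938, 0.140   # GeV; assumed nucleon and pion masses
LAMBDA = 1.10              # GeV; cut-off Lambda_{pi N} quoted in the text

def t_of(x_pi, pT2, m_B=M_N):
    """Kinematical relation: four-momentum transfer t from x_pi and pT^2."""
    return -pT2 / (1.0 - x_pi) - x_pi * (m_B**2 / (1.0 - x_pi) - M_N**2)

def formfac_pT(x_pi, pT2, m_B=M_N):
    """Eq. (formfac): form factor via the invariant mass M_MB^2(x_pi, pT^2)."""
    M2 = (M_PI**2 + pT2) / x_pi + (m_B**2 + pT2) / (1.0 - x_pi)
    return math.exp(-(M2 - M_N**2) / (2.0 * LAMBDA**2))

def formfac_t(x_pi, t):
    """Eq. (formfact): the same form factor expressed through x_pi and t."""
    return math.exp(-(M_PI**2 - t) / (2.0 * LAMBDA**2 * x_pi))
```

Agreement of the two functions for any $(x_\pi, p_T^2)$ confirms that Eq.~(\ref{formfact}) is just Eq.~(\ref{formfac}) rewritten with the kinematical relation for $t$.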
The cut-off parameters used in the present calculation
($\Lambda_{\pi N}$ = 1.10 GeV and $\Lambda_{\pi \Delta}$ = 0.98 GeV)
have been determined from the analysis of the particle spectra
for high-energy neutron and $\Delta$ production \cite{HSS94}, i.e.
$pp\to nX$ and $pp\to \Delta^{++}X$.
With these cut-off parameters the NMC result for the Gottfried sum rule
\cite{A91} which depends sensitively on $\Lambda_{MB}$, has been
reproduced \cite{HSS94}. Furthermore the model describes the
$\overline{u}-\overline{d}$ asymmetry extracted recently from
the Drell-Yan experiment NA51 at CERN \cite{DYMCM}.
We note, however, that all results of this paper would be quite similar
if traditional dipole form factors with cut-off parameter of
1.0--1.2 GeV had been used instead of Eq.~(\ref{formfac}).
In hadronic reactions the Regge approach has often been used rather
than the light--cone approach. In order to obtain the flux factor in
the Regge approach it is sufficient to replace in Eq.(\ref{splitpiN})
$x_{\pi}$ by $x_{\pi}^{1-2\alpha_{\pi}(t)}$, where the pion's Regge
trajectory $\alpha_{\pi}(t)=\alpha_{\pi}^{'}(t-m_{\pi}^{2})$. The
reggeization is important for small $x_{\pi}$ and/or large $t$.
This is a kinematical region where the flux factor, especially with the
vertex form factor Eq.~(\ref{formfac}), is rather small.
Furthermore in the Regge approach, in contrast to the light--cone
approach, it is not clear whether it would be fully consistent in the
lepton DIS to use the on--shell pion structure function.
However, since the difference is important only in a very limited region
of the phase space, in practice both approaches lead to almost identical
flux factors.
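The size of the reggeization effect can be gauged from the ratio of the reggeized to the light--cone flux, which according to the replacement above is $x_\pi^{-2\alpha_\pi(t)}$. A minimal sketch (our own illustration; the slope $\alpha'_\pi \approx 0.9$~GeV$^{-2}$ is a standard value assumed by us, not quoted in the text):

```python
import math

ALPHA_PRIME = 0.9   # GeV^-2; assumed slope of the pion Regge trajectory
M_PI = 0.140        # GeV

def regge_weight(x_pi, t):
    """Ratio of the reggeized to the light-cone flux factor,
    from x_pi -> x_pi^(1 - 2 alpha_pi(t)) with alpha_pi(t) = alpha'(t - m_pi^2)."""
    alpha = ALPHA_PRIME * (t - M_PI**2)
    return x_pi ** (-2.0 * alpha)

# on the pion pole (t = m_pi^2) the two flux factors coincide, while at
# small x_pi and large |t| the reggeized flux is strongly suppressed
```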
\section{Results and Discussion}
The formalism presented above has been implemented in the Monte
Carlo program {\sc Pompyt} version 2.3 \cite{Pompyt}. This program, which
was originally developed for diffractive interactions via pomeron exchange,
simulates the interaction dynamics resulting in the complete final
state of particles.
The basic hard scattering and perturbative QCD parton emission processes
are treated based on the program {\sc Pythia} \cite{Pythia} and the
subsequent hadronization is according to the Lund string model \cite{Lund}
in its
Monte Carlo implementation {\sc Jetset} \cite{Pythia} which also handles
particle decays.
The main difference in comparison to the pomeron case is the replacement
of the pomeron flux factor by the pion flux factors given by
Eqs.~(\ref{splitpiN},\ref{splitpiDelta}) and the pomeron structure
function by the pion structure function.
The pion case is better constrained than the pomeron case, since the
pion structure function is better known and the parton densities of
the on-shell pion can be used. We therefore use the pion parton
densities from the GRV-P HO ($\overline{MS}$) parametrisation \cite{GRV92_pi}.
It is important to mention in
this context that the absolute normalization of the cross section for
the production of the spectator nucleon via pion-exchange mechanism
depends on the absolute value of the pion structure function.
At the small $x$ values relevant at HERA, the structure function is
completely dominated by the pion sea contribution which is not very well
known. Experimentally the pion structure function can be determined from
the Drell--Yan processes only for $x > 0.1$ \cite{B83,SMRS92}.
If the pion-exchange mechanism is the dominant mechanism of fast neutron
production, the coincidence measurement of scattered electrons and
forward neutrons may allow
the determination of the pion deep inelastic structure function
\cite{HLNSS94}. When considering the event structure, however, the
precise value of $F_{2}^{\pi}$ is not required.
When the deep inelastic scattering is on a valence quark (antiquark)
the pion remnant is simply the remaining antiquark (quark).
A colour triplet string is then stretched between them and hadronization
described with the Lund string model \cite{Lund}.
In case it is a sea quark (antiquark) that was struck,
the pion remnant contains the associated sea antiquark (quark) in
addition to the valence quark and antiquark. A string is then stretched
between the struck quark (antiquark) and a valence antiquark (quark), whereas
the remaining valence quark (antiquark) forms a meson together with
the spectator sea antiquark (quark).
For the results presented below we have made simulations corresponding to
the HERA conditions, i.e. $26.7\: GeV$ electrons on $820\: GeV$ protons.
The results for the above pion exchange mechanism are compared with normal
DIS on the proton, which is simulated with {\sc Lepto} 6.3 \cite{LEPTO}
using the MRS(D-') parton distributions \cite{MRS93}.
In all cases, events are simulated according to the cross section formulae
and are constrained to be in the kinematical region
$x>10^{-5}$, $Q^2>4\: GeV^2$.
In Fig.~2 we show the resulting energy spectra of nucleons
($p,\bar{p},n,\bar{n}$) in the lab frame of HERA. This is of direct interest
for measurements in the leading proton spectrometer \cite{LPS} and
forward neutron calorimeter \cite{FNC}.
Neutrons from the pion exchange mechanism have large energies giving a
spectrum with a broad peak around $E\approx 0.7E_{beam}$,
i.e. around 500 $GeV$,
whereas the corresponding spectrum from DIS on the proton decreases
monotonically with increasing neutron energy. In the region of interest,
say 400--700 $GeV$, the two processes have a similar absolute magnitude.
An observable effect from DIS on a pion should therefore be possible.
\begin{figure}[t]
\epsfig{file=fig_2.eps,width=16.5cm,bbllx=30pt,
bblly=300pt,bburx=550pt,bbury=500pt,clip=}
\caption{{\it
Energy spectra in the HERA lab frame for nucleons ($p,\bar{p},n,\bar{n}$) from
(a,b) conventional DIS on a proton (obtained with {\sc Lepto})
and (c) DIS on an exchanged $\pi^+$ (obtained with {\sc Pompyt}).}}
\end{figure}
While the energy distribution of primary $\Delta$'s is very similar
to that of the direct neutron production \cite{HSS94},
after the $\Delta \rightarrow n \pi$ decay the energy distribution of
the secondary nucleons becomes peaked at smaller energies of about
400 GeV \cite{HNSSZ95}.
The two-step mechanism is, however, much less important for the
production of neutrons. First of all the probability of the $\pi \Delta$
Fock states in the light-cone nucleon wave function is much smaller than
the probability of the $\pi N$ component:
$P_{\pi \Delta} \approx 0.3 P_{\pi N}$ \cite{HSS94}.
Secondly, the isospin Clebsch-Gordan coefficients favour
the decay of the $\Delta$ into the proton over the decay into
the neutron channel with the proton/neutron branching ratio
${7 \over 9} : {2 \over 9}$. The analogous branching ratio for the
direct component is ${1 \over 3} : {2 \over 3}$.
All this implies that both the 1-step and 2-step
mechanisms produce comparable amounts of protons. In contrast,
the two--step mechanism produces about 10 times fewer neutrons than the
1-step mechanism. This means that to a first approximation the
two-step process may be neglected for the spectrum of neutrons.
Therefore we concentrate on the comparison of DIS on $\pi^{+}$,
having a neutron as spectator, with standard DIS on the proton.
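The suppression quoted above follows from simple counting: weighting the $\Delta$ decay branchings by the relative Fock-state probability and comparing with the direct component reproduces the factor of ten for neutrons and the near-equality for protons. A minimal arithmetic sketch (our own illustration of the numbers given in the text):

```python
# relative normalisation of the Fock components (from the text)
P_ratio = 0.3                                  # P(pi Delta) / P(pi N)

# isospin (Clebsch-Gordan) branchings quoted in the text
br_delta_to_p, br_delta_to_n = 7 / 9, 2 / 9    # Delta -> p / n channels
br_direct_p, br_direct_n = 1 / 3, 2 / 3        # direct pi N component

# neutron yields of the two mechanisms, relative to the pi N probability
one_step_n = br_direct_n
two_step_n = P_ratio * br_delta_to_n
neutron_ratio = two_step_n / one_step_n        # 0.1, i.e. "10 times fewer"

# proton yields: the two mechanisms come out comparable
one_step_p = br_direct_p
two_step_p = P_ratio * br_delta_to_p           # ratio 0.7
```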
The calculated transverse momentum ($p_T$) distributions are shown in Fig.~3.
As can be seen, the distribution of the spectator neutrons
falls faster with increasing $p_T^2$ than that from standard DIS.
It can be expected that the distribution of neutrons produced in
the two-step process in Fig.~1b is less steep than that produced
in the direct process in Fig.~1a.
The higher overall level of DIS on the proton can be reduced by a cut in
neutron energy, as is obvious from Fig.~2. Still, the difference in shape
of the $p_T^2$-spectra in Fig.~3 is presumably too small to be exploited
experimentally. A safe conclusion does, however, require further analysis
including, e.g., finite angular acceptance of FNCAL.
\begin{figure}[t]
\epsfig{file=fig_3.eps,width=15cm,bbllx=-20pt,
bblly=150pt,bburx=550pt,bbury=650pt,clip=}
\caption{{\it
Distribution in transverse momentum of neutrons from DIS on
the proton (dashed histogram from {\sc Lepto}) and from DIS on
a $\pi^{+}$ with a neutron as a spectator
(solid histogram from {\sc Pompyt}).}}
\end{figure}
\begin{figure}[t]
\epsfig{file=fig_4.eps,width=15cm,
bbllx=0pt,bblly=280pt,bburx=550pt,bbury=520pt,clip=}
\caption{\it
Rapidity distributions of (a) all stable particles, charged
pions and $\gamma$'s and (b) protons and neutrons from DIS on the proton
as obtained from {\sc Lepto}.}
\end{figure}
To study other characteristics of events arising through DIS
on a virtual pion and compare with standard DIS on the proton,
we consider spectra of different quantities normalized as
\begin{equation}
f(\kappa) \equiv {1 \over N_{event}} {dN \over d\kappa} \; ,
\end{equation}
where $\kappa$ can be any kinematical variable
and $N_{event}$ is the number of events.
This gives emphasis to shapes irrespective of normalisation and statistics
(of data and Monte Carlo samples).
A quantity with especially nice transformation properties under
longitudinal boosts is rapidity defined as
\begin{equation}
y = \frac{1}{2} \ln \left( {E + p_{z} \over E - p_{z}} \right) \; ,
\label{rapidity}
\end{equation}
where $E$ is the energy and $p_{z}$ the longitudinal momentum along the proton
beam axis. For massless particles this quantity is identical to the
pseudo-rapidity defined by
\begin{equation}
\eta = -\ln \tan({\theta/2}) \;
\label{pseudo}
\end{equation}
where $\theta$ is the angle of a particle with respect to the proton beam,
i.e. $\eta >0$ is the proton hemisphere in the HERA lab frame.
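The statement that rapidity and pseudo-rapidity coincide for massless particles can be verified directly from Eqs.~(\ref{rapidity}) and (\ref{pseudo}); a minimal Python sketch (our own illustration):

```python
import math

def rapidity(E, pz):
    """Eq. (rapidity): y = (1/2) ln[(E + pz)/(E - pz)]."""
    return 0.5 * math.log((E + pz) / (E - pz))

def pseudo_rapidity(theta):
    """Eq. (pseudo): eta = -ln tan(theta/2), theta w.r.t. the proton beam."""
    return -math.log(math.tan(theta / 2.0))

# a massless particle with momentum p at polar angle theta has
# E = p and pz = p cos(theta), so y and eta coincide exactly
```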
In Figs.~4 and 5 we show the pseudo-rapidity distributions of different
particle species produced in DIS on the proton and DIS on a $\pi^+$,
respectively.
In Fig.~5a spectator neutrons are not included, but shown separately in
Fig.~5b.
For example, the size of the beam pipe hole in FCAL ($\theta=1.5^{\circ}$)
ensures that almost 100\% of the spectator nucleons (protons/neutrons)
leave the main ZEUS detector without any energy loss.
As seen by comparing Fig.~4a and Fig.~5a the pseudo-rapidity spectra
of $\pi^\pm$ and $\gamma$ are rather similar in the two cases.
The pseudo-rapidity spectrum of spectator neutrons (Fig.~5b) has a
maximum at only a slightly higher value compared to the peak of neutrons
from non-diffractive DIS on the proton (Fig.~4b).
These predicted neutron distributions should be considered in the
context of the pseudo-rapidity coverage of the forward neutron calorimeter.
In general, the neutron acceptance is a complicated function of both polar
and azimuthal angle. The ZEUS FNCAL geometry limits pseudo-rapidity coverage
approximately to $7 \lesssim \eta \lesssim 10$.
The Lund hadronization model
predicts a small number of nucleon-antinucleon pairs produced
in DIS on the pion (Figs.~5c,d).
\begin{figure}[tb]
\epsfig{file=fig_5.eps,width=16.5cm,bbllx=20pt,
bblly=150pt,bburx=550pt,bbury=650pt,clip=}
\caption{\it
Rapidity distributions of the specified particles produced in DIS on a
$\pi^+$ (neutron spectator) as obtained from {\sc Pompyt}.}
\end{figure}
The pseudo-rapidity variable is of particular interest in the context of large
rapidity gap events. These have been defined by $\eta_{max}$ giving,
in each event, the maximum pseudo-rapidity where an energy deposition is
observed. Based on our Monte Carlo simulated events using {\sc Lepto} and
{\sc Pompyt} we extract this
$\eta_{max}$-variable and show its distribution in Fig.~6 for conventional
non-diffractive DIS on the proton, DIS on an exchanged $\pi^+$ and diffractive
DIS on a pomeron.
Since our aim here is to demonstrate the genuine physics effects
of the models, we have not included any experimental acceptance effects
or rapidity gap requirements in this study. Doing this will severely distort
the distributions at large $\eta_{max}$ and, therefore, one cannot make
direct comparisons with the available measured distributions.
Thus, from this model study we find a shift of about one unit towards
smaller $\eta_{max}$ in case of DIS on the pion as compared to normal DIS.
For $\eta_{max} \lesssim 6$, these two processes contribute about equally to
the rate.
\begin{figure}[tb]
\epsfig{file=fig_6_log.eps,width=15cm,bbllx=-30pt,
bblly=150pt,bburx=550pt,bbury=630pt,clip=}
\caption{\it
Distribution of $\eta_{max}$ (see text) in non-diffractive DIS on
the proton (solid), in DIS on the virtual $\pi^{+}$ (dashed)
and in DIS on the pomeron (dotted); pure physics of the models without
experimental acceptance effects.}
\end{figure}
For the spectrum of $\eta_{max}$ for DIS on the pomeron, we have taken
a set of parameters which is usually called `hard pomeron' in the literature.
The pomeron is assumed to contain equal amounts of the light quarks, i.e.
$u = \bar{u} = d = \bar{d}$, each with a density distribution
\begin{equation}
z q(z) = {6 \over 4} z (1-z) \; ,
\end{equation}
with the normalization chosen such that the parton distributions
fulfill the momentum sum rule.
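That this normalization saturates the momentum sum rule is a one-line check: each of the four flavours carries $\int_0^1 \frac{6}{4}\,z(1-z)\,dz = \frac{1}{4}$ of the pomeron momentum. A quick numerical sketch (our own illustration):

```python
def zq(z):
    """Pomeron quark momentum density, z q(z) = (6/4) z (1 - z)."""
    return 1.5 * z * (1.0 - z)

# midpoint-rule integral of z q(z) over z in [0, 1]
N = 100000
per_flavour = sum(zq((i + 0.5) / N) for i in range(N)) / N

# u, ubar, d, dbar each carry 1/4, so the momentum sum rule is saturated
total_momentum = 4.0 * per_flavour
```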
The pomeron flux factor is here taken as the ratio of the
single diffractive cross section and the pomeron-proton total cross
section \cite{IS85}
\begin{equation}
f_{{I\!\!P}/p}(x_{{I\!\!P}},t) =
\frac{d \sigma / dx_{{I\!\!P}} dt}{\sigma({I\!\!P} p \rightarrow X)} =
{1 \over 2.3} {1 \over x_{{I\!\!P}}} (3.19 e^{8t} + 0.212 e^{3t}) \; ,
\end{equation}
where the simple parameterization is obtained by fitting the numerator
to single diffractive cross section and the denominator is taken as
$\sigma ({I\!\!P} p \rightarrow X) = 2.3 \; mb$ obtained from a Regge
analysis \cite{BCSS87} of elastic and single diffractive scattering.
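For orientation, the $t$-integrated flux at fixed $x_{{I\!\!P}}$ follows analytically from the two exponentials; the sketch below (our own illustration, not from the original analysis) evaluates it both analytically and by crude numerical integration:

```python
import math

def pomeron_flux(x_pom, t):
    """f_{P/p}(x_P, t) = (1/2.3) (1/x_P) (3.19 e^{8t} + 0.212 e^{3t})."""
    return (1.0 / 2.3) / x_pom * (3.19 * math.exp(8.0 * t)
                                  + 0.212 * math.exp(3.0 * t))

def integrated_flux(x_pom):
    """Analytic integral of the flux over t in (-inf, 0]."""
    return (3.19 / 8.0 + 0.212 / 3.0) / 2.3 / x_pom

def numeric_integral(x_pom, t_min=-10.0, n=50000):
    """Midpoint-rule cross-check; the integrand is negligible below t_min."""
    h = -t_min / n
    return sum(pomeron_flux(x_pom, t_min + (i + 0.5) * h)
               for i in range(n)) * h
```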
The resulting $\eta_{max}$ distribution from DIS on the pomeron is considerably
different from the other two cases.
From Fig.~6 one may conclude that the pion--exchange induced DIS
leads to events with intermediate size rapidity gaps rather than to
those with large gaps.
Nonetheless it is important to verify experimentally the effect of the
pion exchange by, e.g., correlating $\eta_{max}$ with fast forward
neutrons measured in FNCAL (for technical details see \cite{Brk95}).
\begin{figure}[tb]
\epsfig{file=fig_7.eps,width=16.5cm,bbllx=20pt,
bblly=150pt,bburx=550pt,bbury=650pt,clip=}
\caption{\it Multiplicity distributions of (a) all stable particles,
(b) charged pions from DIS on a $\pi^+$ (full curves) and DIS on a proton
(dashed curves). In (c) $\gamma$'s from DIS on the proton and
(d) $\gamma$'s from DIS on $\pi^{+}$.}
\end{figure}
The flux factor given by Eq.~(\ref{splitpiN}) with a cut-off parameter
of the vertex form factor extracted
from the high-energy neutron production data \cite{HSS94} predicts
that the pion carries, on average, a fraction 0.3 of the proton beam
momentum \cite{HSS93}. This implies that as a first approximation
the pion-induced DIS processes can be viewed as an electron
scattering on the pion with effective energy
$E_{eff} \approx 0.3 \cdot E_{beam}$.
Because of the smaller energy of the pion one could expect a smaller
multiplicity in electron-pion DIS events in
comparison to electron-proton DIS. In Fig.~7 we compare
the model predictions (without experimental acceptance effects) of the
multiplicity spectra for DIS on the proton with those on $\pi^{+}$.
The multiplicities in DIS events on the pion are
noticeably smaller than in DIS events on the proton;
the average multiplicity is about 20 and 30, respectively.
The dominant contribution to the multiplicity spectra comes from charged pions
(11.6 on the proton vs. 7.5 on the
$\pi^{+}$) and $\gamma$'s (14.1 on the proton vs. 8.0 on $\pi^{+}$).
The even-odd fluctuations of the multiplicity spectrum of photons
are not statistical, but are caused mainly by the decay
$\pi^{0} \rightarrow \gamma \gamma$.
Thus, as expected the multiplicity of standard DIS events is typically
larger than in pion-induced DIS events. However, due to the large fluctuations
in multiplicity and the overlap between the distributions for the two cases,
as well as the distortions that limited experimental acceptance will create,
it is not clear whether this difference can be used as a
discriminator. This needs further considerations.
\section{Conclusions}
The concept of a pion cloud in the nucleon was recently found to
be very useful \cite{HSS94,DYMCM} in understanding the Gottfried sum
rule violation observed by the New Muon Collaboration \cite{A91} and
the Drell-Yan asymmetry measured recently in the NA51 Drell-Yan
experiment at CERN \cite{B94}.
In the present paper we have investigated several quantities in
order to find useful observables which would help to verify this
concept using deep inelastic electron-proton scattering at HERA.
We have therefore analyzed the structure of deep inelastic
events induced by the pion-exchange mechanism. In particular,
we have studied distributions of final nucleons
as well as rapidity and multiplicity spectra.
Most of the event characteristics do not provide a direct possibility
to distinguish the events from DIS on a pion from
the ordinary events with DIS on a proton.
A clear difference is, however, found in the energy
spectrum of outgoing neutrons.
We find that the pion cloud model predicts an energy distribution of
neutrons which substantially differs from the standard
hadronization models. While the pion-exchange mechanism leads to
an energy spectrum which peaks at an energy of about
$0.7 E_{beam}$, i.e. at about 500 GeV, the spectrum of neutrons
produced in the standard hadronization process following
DIS on the proton decreases monotonically with increasing
neutron energy.
This should make it possible to discriminate between the two processes,
in particular since they have cross sections of similar magnitude in this
energy region. Therefore, the experiments with forward neutron
calorimeters should shed new light on the nucleon structure in terms
of a pion content.
We have shown that the pion cloud
induced mechanism practically does not contribute to the large rapidity
gap events observed recently by the ZEUS and H1 collaborations
\cite{lrg_ZEUS,lrg_H1}
and cannot seriously compete with the pomeron-exchange mechanism.
The multiplicity of the pion cloud induced events is about 60-70\%
of that for standard hadronization on the proton, but given the large
fluctuations it is not clear to what extent this difference can be
exploited.
Our results on the pion exchange mechanism are more general than the
detailed formulation of the pion cloud model. Since essentially the
same pion flux factor is obtained in Regge phenomenology, our results
may also be taken as a representation of Regge-based expectations.
In this study we have omitted experimental effects due to
finite apertures, clustering effects in the main detector,
finite energy thresholds, detector efficiencies, etc., which may distort
the observed spectra. Many of them are quite important in order to
understand and interpret the observed spectra and we plan a
future study \cite{PS96} of such effects.
\newpage
\section{Introduction}
Low flux states in active galactic nuclei (AGN) are of particular interest as they can show very extreme behaviour, such as drops by factors of 10 or more in flux over the 2--10~keV band on time scales of months to days \citep[e.g.][]{Grupe07}. The nature of these low states may have important implications for our understanding of the behaviour of AGN in general, and may be relevant to measurements of spin.
One of the key questions for any such low state is whether the drop in flux is intrinsic to the source \citep[as in 1H~0707-495;][]{Fabian12} or can be attributed to intervening material \citep[as in NGC~1365;][]{Risaliti13}. In the former case, observations of low states in AGN may allow us to probe the very inner edge of the accretion disk and make very accurate measurements of the spin \citep[Parker et al., submitted]{Fabian14}, while in the latter these observations can be used to observe the environment surrounding AGN, including outflows \citep{Kaastra14}, broad line region (BLR) clouds and the edge of the torus \citep{Maiolino10,Lohfink12}, and constrain the size of the X-ray emitting corona \citep{Sanfrutos13}. Unfortunately, these two different causes for low-flux states can be hard to distinguish \citep[e.g.][]{Schartel07}, due to low count spectra and intrinsically similar spectral shapes. Interestingly, a dramatic but more gradual decline in the luminosity (including the X-ray luminosity) from Mrk~509 appears to have coincided with a change in the Seyfert classification of the source \citep{Denney14}.
Mrk~1048 (also known as NGC~985) is a Seyfert~1 galaxy \citep{VeronCetty83} at redshift 0.0427. The galaxy itself has a clear ring structure, which suggests that it is going through a merger \citep{deVauc75}. The presence of warm absorption in the X-ray spectrum of this source is well known, and was first suggested based on \emph{ROSAT} observations by \citet{Brandt94}. These features were then confirmed using \emph{ASCA} by \citet{Nicastro99}, and have also been seen with grating spectra by \emph{Chandra} and \emph{XMM-Newton} \citep{Krongold05,Krongold09}. In this work we present \emph{XMM-Newton} observations of an unusual low flux state in Mrk~1048, detected using \emph{Swift} monitoring.
\section{Observations and Data Reduction}
In an ongoing search for AGN in deep low-states \citep[e.g.][]{Schartel10}, we noticed the low Swift flux of Mrk 1048, and then triggered our dedicated \emph{XMM-Newton} follow-ups. We use an archival \emph{XMM-Newton} observation from 2003, when the source was not in a low state, for comparison with the new data.
\subsection{XMM-Newton}
Mrk~1048 was observed three times with \emph{XMM-Newton} \citep{Jansen01}, once in 2003 and twice in 2013. The details of the observations are given in Table~\ref{obstable}.
We used the \emph{XMM-Newton} Science Analysis System (SAS) v13.5.0 for all data reduction, using the \textsc{epproc} task to produce cleaned \emph{EPIC}-pn \citep{Struder01} event files. \textsc{evselect} was then used to extract spectra, filtering for background flares and using only single and double events. Response matrices were generated using the \textsc{rmfgen} and \textsc{arfgen} tasks. Source and background spectra were extracted from 40$''$ circular regions for all observations, and all spectra are source-dominated up to 10~keV. The background regions are selected to avoid contaminating sources and copper lines from the detector. The \emph{RGS} data were reduced using the \textsc{rgsproc} task, and filtered for background flares.
All spectral fitting is performed using Xspec version 12.8.1l \citep{Arnaud96}, and the spectra are binned to a minimum of 30 counts per bin using \textsc{grppha}. We use $\chi^2$ statistics for the fits presented in this work, but we also checked the validity of our fits using Cash statistics \citep[\textsc{cstat} in \textsc{xspec};][]{Cash79} and find no significant differences in the acceptability of the fits or in the values of model parameters. The Optical Monitor (OM) data were extracted using \textsc{omichain}, and corrected for Galactic reddening using the reddening curve of \citet{Cardelli89}.
\begin{table*}
\centering
\begin{tabular}{c c c c c c}
\hline
Obs. ID & Start Date & Exposure time & Count Rate & 0.3--2~keV Flux & 2--10~keV Flux\\
& & (ks) & (s$^{-1}$) & erg~cm$^{-2}$~s$^{-1}$ & erg~cm$^{-2}$~s$^{-1}$\\
\hline
0150470601 & 2003-07-15 & 31.14 & $3.94\pm0.01$ & $5.68\times10^{-12}$ & $9.17\times10^{-12}$ \\
0690870101 & 2013-07-20 & 15.92 & $1.64\pm0.01$ & $1.57\times10^{-12}$ & $8.12\times10^{-12}$ \\
0690870501 & 2013-08-10 & 71.62 & $1.76\pm0.05$ & $1.91\times10^{-12}$ & $7.96\times10^{-12}$ \\
\hline
\end{tabular}
\caption{Details of the three \emph{XMM-Newton} observations used in this analysis. The exposure time and count rate are for the \emph{EPIC}-pn detector after filtering for background flares and background subtraction. Fluxes are corrected for Galactic absorption.}
\label{obstable}
\end{table*}
In Fig.~\ref{lightcurves} we show the \emph{EPIC}-pn lightcurves for Mrk~1048. There is a large drop (a factor of $\sim3$) in flux between the 2003 observation and the 2013 observations, which is concentrated in the 0.5--2~keV band, with almost no changes in flux over the 5--10~keV band. We show the three spectra in Fig.~\ref{powerlawratios} as ratios to a powerlaw, modified by Galactic absorption, fit from 5--10~keV. It is clear from this figure that almost all of the variability takes place at low energies, and a narrow iron line and warm absorption are clearly visible.
\begin{figure*}
\includegraphics[width=14cm]{alllightcurves.eps}
\caption{\emph{EPIC}-pn lightcurves for the three \emph{XMM-Newton} observations of Mrk~1048. The first column shows the data from the 2003 `normal' flux state observation, and the other two columns show the short and long 2013 low state observations. The three rows show the full band, 0.5--2 and 5--10~keV lightcurves, respectively. Data are binned into 1~ks intervals. Gaps in the first observation are due to background flares.}
\label{lightcurves}
\end{figure*}
\begin{figure}
\includegraphics[width=\linewidth]{powerlawratios.eps}
\caption{Ratios of the \emph{EPIC}-pn spectra of the three \emph{XMM-Newton} observations of Mrk~1048 to the same (in both normalisation and index) absorbed powerlaw, fit from 5--10~keV, where the spectra are very similar. Data are rebinned in Xspec for clarity.}
\label{powerlawratios}
\end{figure}
\subsection{Swift}
Mrk~1048 was a target of a {\it Swift}\ fill-in program between 2007 July and 2008 June \citep{Grupe10} and was observed 5 times during that period. When {\it Swift}\ observed Mrk~1048 again on 2013 July 7, it found the source in a low X-ray flux state. Consequently, we requested further {\it Swift}\ observations between 2013 July 12 and August 11. These observations are listed in Table\,\ref{swift_log}.
All observations with the {\it Swift}\ X-ray Telescope \citep[XRT,][]{Burrows05} were performed in photon counting mode (Hill et al., 2005). XRT data were reduced with the task {\it xrtpipeline} version 0.12.6, which is included in the HEASOFT package 6.12. Source counts were collected in a circular region with a radius of 94.3$^{''}$ and the background in a nearby source-free region with a radius of 282.9$^{''}$.
The auxiliary response files (ARFs) were created by the FTOOL {\it xrtmkarf} and we used the most recent response file {\it swxpc0to12s6\_20010101v013.rmf}. All spectra were rebinned with 20 counts per bin, except for the data in segments 008 and 012 which were binned with one count per bin, due to the low number of total counts in these observations. These spectra were fitted in {\it XSPEC} using Cash statistics, and all others were fitted with $\chi^2$ statistics.
The photometric data taken by the UV-Optical Telescope \citep[UVOT,][]{Roming05} were analyzed with the task {\it uvotsource}. Source counts were selected in a circular region with a radius of 5$^{''}$, and background counts in a nearby source-free region with a radius of 20$^{''}$. Count rates were converted into fluxes and magnitudes based on the most recent UVOT calibration as described in \citet{Poole08} and \citet{Breeveld10}. The UVOT data were corrected for Galactic reddening of $E_{\rm B-V}=0.033$ \citep{Schlegel98}. The correction factor in each filter was calculated with equation (2) in \citet{Roming09}, who used the standard reddening correction curves of \citet{Cardelli89}.
\begin{table}
\centering
\begin{tabular}{ccccc}
\hline
& & & \multicolumn{2}{c}{Exposure time (s)}\\
\cmidrule(r){4-5}
ID & Segment & Start Date & XRT & UVM2\\
\hline
91616 & 001 & 2013-07-07 & 2148 & 526\\
36530 & 006 & 2013-07-12 & 2400 & 582\\
& 007 & 2013-07-20 & 1051 & 247\\
& 008 & 2013-07-25 & 1049 & 1038\\
& 009 & 2013-07-29 & 952 & 962\\
& 010 & 2013-08-02 & 967 & 954\\
& 012 & 2013-08-10 & 974 & 981\\
& 013 & 2013-08-11 & 849 & 199\\
\hline
\end{tabular}
\caption{{\it Swift}\ observation log of Mrk~1048. From the \emph{UVOT} data we state only the UVM2 exposure times, as the other filters were not used for some observations.}
\label{swift_log}
\end{table}
In Fig.~\ref{longtermlcurve} we show the long term lightcurve of Mrk~1048, which has been observed by every major X-ray mission. The source flux has never been observed to drop as low as in the 2013 low state, which is around 20 times fainter than Mrk~1048 at its brightest. The additional data used in this plot come from \emph{Einstein} \citep{Fabbiano92}; \emph{ASCA} \citep{Nicastro99}; \emph{Suzaku} \citep{Winter12}; \emph{Swift} \citep{Grupe10} and \emph{Chandra} \citep{Krongold05}.
\begin{figure}
\includegraphics[width=\linewidth]{longtermlcurve.eps}
\caption{Long term 0.2--2~keV light curve of Mrk~1048, showing the 2013 low state. The three \emph{XMM-Newton} observations are plotted as red diamonds. The left panel shows all the data up to 2013, and the right panel shows the measurements from \emph{XMM-Newton} and \emph{Swift} in July and August 2013.}
\label{longtermlcurve}
\end{figure}
\section{Spectral Analysis}
\subsection{2003 XMM-Newton data}
\label{2003section}
We initially focus on the data from the 2003 observation, with the aim of establishing a baseline model from which to examine the changes in the low flux state observations of 2013.
In previous work analysing the 2003 data, the \emph{EPIC}-pn data are first fit from 2--10~keV, and the fit is then extrapolated down to lower energies, showing clear evidence of warm absorption and a soft excess. However, as we show in Fig.~\ref{gammaestimates}, the estimated power law index from such a fit is highly dependent on the energy band chosen, and this can have a very large effect on the extrapolated low energy ($<2$~keV) power law flux. For the fitting in Fig.~\ref{gammaestimates}, we use a simple power law plus distant reflection model, including Galactic absorption, using \textsc{xillver} \citep{Garcia13} with $\log(\xi)=0$ to model the distant reflection. The full Xspec model used is \textsc{tbabs*(powerlaw+xillver)}, and we fix the column density of the Galactic absorption to $3\times 10^{20}$~cm$^{-2}$ \citep{Kalberla05}.
As shown in the figure, a much harder powerlaw index is obtained when this model is fit from 2--10~keV than from 5--10~keV, indicating that spectral curvature from the warm absorption is still present above 2~keV.
Based on the flattening of the plots above $\sim3$~keV and the consistency between the different observations, it seems likely that the powerlaw index should be steeper than that found from the 2--10~keV band, around 1.9. Extrapolating such a power law model to lower energies does not show a low energy excess, rather it leaves a deficit which could potentially be explained by the warm absorption (Fig.~\ref{gammaestimates}, right panel). This plot shows the impact of extrapolating the continuum from two different energy bands. The model extrapolated from the 2--10~keV band shows a very strong soft excess, whereas no such feature is present when the model is extrapolated from the 5--10~keV band. The indices for these fits are $1.5\pm0.1$ and $1.9\pm0.2$, respectively. Because of this result, we do not include a separate soft excess component in our fits.
\begin{figure*}
\includegraphics[width=8.5cm]{gammaestimates.eps}
\includegraphics[width=8.5cm]{gammaextrapolations.eps}
\caption{Left: Estimated photon index $\Gamma$ as a function of the energy band used to fit the data, for the three different observations. The upper limit is set to 10~keV in all cases, while the lower energy limit is varied. The dashed line shows the conventional 2--10~keV band. Right: Data to model ratio for absorbed power law plus distant reflection fits over the 2--10~keV (black) and 5--10~keV (red) energy bands, extrapolated to lower energies for the 2003 \emph{EPIC}-pn data. The soft excess is replaced with a deficit at low energies when the energy band is restricted to higher energies.}
\label{gammaestimates}
\end{figure*}
To investigate the full spectrum (from 0.3--10~keV) we use an \textsc{xstar} grid to fit the warm absorption. A fit with a single warm absorption zone (\textsc{tbabs*(xstar*powerlaw+xillver+zgauss)}) gives an excellent fit ($\chi^2_\nu=1099/1094$), and this is not significantly improved by the addition of a second absorbing zone. The parameters of this fit are shown in the first row of Table~\ref{fitpars}. An additional narrow line is included (modelled with \textsc{zgauss}) at 6.55~keV, as a significant excess is found in the fit to all observations (see \S~\ref{section_allfits}), with its parameters fixed at the best fit values from the joint fit to all 3 observations.
\begin{figure}
\includegraphics[width=\linewidth]{obs1_fits.eps}
\caption{Top: Data from the 2003 high flux state observation of Mrk~1048, fit with a power law plus distant reflection model, modified by warm absorption and galactic absorption. Bottom: Ratio of the data to the model.}
\label{2003fits}
\end{figure}
\subsection{Differences between observations}
For the purposes of comparing the observations, we first plot the 2013 low state observations as a ratio to the best-fit model of the 2003 data (Fig.~\ref{ratiotoobs1}). For this plot, the 2003 model can be effectively regarded as a phenomenological fit. The figure clearly shows that the spectral changes take place almost exclusively below 5~keV, with a sharp drop down to $\sim1$~keV where it flattens again. This spectral shape, with low and high energy breaks and no strong features, is exactly that predicted by partial covering neutral absorption, and we therefore focus on modelling this interpretation.
\begin{figure}
\includegraphics[width=\linewidth]{ratioto1stobs.eps}
\caption{Ratio of the \emph{EPIC}-pn spectra of the two 2013 observations to the best fit model of the 2003 data. This variability is dominated by changes below $\sim4$~keV, consistent with absorption from neutral material. There are also two potential absorption lines visible around 0.8 and 1~keV in the 2013 long observation. Data are rebinned in Xspec for clarity.}
\label{ratiotoobs1}
\end{figure}
To investigate potential changes in the warm absorber(s), we simultaneously fit the 2003 and 2013 long observations with the best fit model discussed in \S~\ref{2003section}. To this model, we add a partial covering neutral absorber, modelled with \textsc{pcfabs} in Xspec. The covering fraction and column density are fixed at zero in the 2003 spectrum, and free to vary in the 2013 observation. We exclude the 0.5--1~keV band from the 2013 data, where there are small absorption features visible in Fig.~\ref{ratiotoobs1}, then jointly fit the data. The resulting residuals are shown in the lower panel of Fig.~\ref{wabschanges}, showing clear evidence of changes in the ionised absorption.
\begin{figure}
\includegraphics[width=\linewidth]{wabs_changes.eps}
\caption{Data and ratio for a joint fit to the 2003 and 2013 long observations, excluding the 0.5--1.5~keV band in 2013. For this fit we allow only changes in neutral absorption between the two observations. Strong residual features are visible at $\sim 0.95$ and $0.75$~keV.}
\label{wabschanges}
\end{figure}
\begin{figure}
\includegraphics[width=\linewidth]{rgs_justratios.eps}
\caption{Emission lines remaining after fitting a powerlaw continuum modified by warm absorption and cold partially covering absorption, for the 2003 and 2013 \emph{RGS} data. The strongest lines are the \textsc{Ovii} triplet from 0.56--0.57~keV, which is the strongest line feature predicted by the distant reflection model (see Fig.~\ref{bestfit}).}
\label{rgsplot}
\end{figure}
\begin{table}
\centering
\begin{tabular}{l c c}
\hline
Line & Energy (eV)& Flux ($10^{-14}$~erg~s$^{-1}$~cm$^{-2}$)\\
\hline
\textsc{Ovii} triplet & 561 & $3.3^{+1.2}_{-0.1}$\\
& 569 & $6.6^{+2.4}_{-1.9}$\\
& 574 & $5.0^{+2.4}_{-1.6}$\\
\textsc{Oviii} K$_\alpha$ & 653 & $1.8^{+1.0}_{-0.5}$ \\
\hline
\end{tabular}
\caption{Fluxes of the narrow lines found in the \emph{RGS} data from the low flux state observation.}
\label{linefluxes}
\end{table}
While detailed modelling of the \emph{RGS} data is beyond the scope of this work, we can use the spectra to investigate potential changes in the absorption and emission between the observations. In Fig.~\ref{rgsplot} we show the data/model ratios for the 2003 observation and the 2013 long observation, where both spectra are fit with the full model including partial covering and warm absorption (see \S~\ref{section_allfits}), but with the distant reflection component removed. Strong narrow emission lines are clearly visible in the 2013 data, and are either not detected or much weaker in the 2003 spectrum. The strongest emission feature here is the \textsc{Ovii} triplet, which is also the strongest feature predicted in this band by the distant reflection model (Fig.~\ref{bestfit}, right panel). These lines do not correspond to strong absorption features in the model (with the exception of the feature above 0.7~keV in the 2003 data, which corresponds to the iron UTA), and therefore are genuine emission features. This suggests that while the primary continuum is highly absorbed, the reflection component is unaffected, leaving much more prominent emission lines.
Fitting narrow Gaussian lines to the low flux spectrum at the energies of the four lines identified by \citet{Krongold09} in the high state returns line fluxes which are consistent with those found by \citeauthor{Krongold09}, although the errors are large (Table~\ref{linefluxes}). This supports the conclusion that the distant reflection has not changed between the observations, and certainly has changed much less than the continuum.
\subsection{Fits to all observations}
\label{section_allfits}
To investigate the spectral changes in detail, we simultaneously fit the \emph{EPIC}-pn spectra from all three observations. In all the models discussed here, we find that an additional narrow iron line is required to properly model the high energy spectrum. The typical energy for this feature (from the best fit model) is $6.55\pm0.03$~keV, and the equivalent width is 38~eV. We do not find significant variability in the strength or energy of the line either between observations or between models.
As expected, based on the large drop in flux (Fig.~\ref{ratiotoobs1}) and only small changes in the absorption lines (Fig.~\ref{wabschanges}), a model without an additional cold absorber (allowing changes in the ionisation and column density of the warm absorber only) gives a poor fit ($\chi^2_\nu=6048/3120=1.93$). Similarly, allowing only the continuum and reflection to vary (both in spectral index and normalisation) between the observations also gives a poor fit ($\chi^2_\nu=4850/3123=1.55$). Allowing both the underlying spectrum and warm absorption to change gives a much better fit ($\chi^2_\nu=3692/3122=1.18$), although significant residuals remain around 0.8--0.9~keV, where a strong oxygen absorption edge is present in the ionised absorption model, but absent in the data. Adding additional warm absorption zones does not significantly improve this fit, as the (very small) drop in $\chi^2$ is offset by the drop in degrees of freedom.
\begin{table*}
\centering
\begin{tabular}{c c c c c c c c c c c}
\hline
&\multicolumn{2}{c}{Neutral absorber}&\multicolumn{2}{c}{Warm absorber}& Powerlaw & \multicolumn{2}{c}{Distant reflection}\\
\cmidrule(r){2-3} \cmidrule(r){4-5}\cmidrule(r){6-6} \cmidrule(r){7-8}
Model & $n_\textrm{H}$ & Covering Fraction& $n_\textrm{H}$ & $\log(\xi)$& $\Gamma$ & $A_\textrm{Fe}$ & $\theta$ &$\chi^2/$d.o.f.\\
& $(10^{22}$cm$^{-2})$&& $(10^{22}$cm$^{-2})$ & (log[erg~cm~s$^{-1}$])& &(solar)& (degrees) \\
\hline
1 &&& $2.36\pm0.07$ &$1.951\pm0.005$ & $1.96\pm0.01$ & $<0.53$ & $73\pm5$ & 1099/1094\\
\\
2a & & & $2.17\pm0.06$ & $1.952\pm0.006$ & $1.94\pm0.01$ & $<0.585$ & $82\pm1$ & 3629/3122\\
2b & & & $4.78\pm0.12$ & $1.942\pm0.006$ & $1.50\pm0.02$ &-&-&-&\\
2c & & & $4.72\pm0.06$ & $1.940\pm0.003$ & $1.677\pm0.007$ &-&-&-&\\
\\
3a & 0* & 0* & $2.32\pm0.05$ & $1.937\pm0.004$ & $1.961^{+0.003}_{-0.004}$ & $<0.51$ & $64\pm2$ & 3423/3127\\
3b & $2.47\pm0.07$ & $0.788\pm0.003$ & - & -&-&-&-&-&\\
3c & $3.02\pm0.06$ & $0.718\pm0.002$& - & -&-&-&-&-&\\
\\
4a & 0* & 0* & $2.35\pm0.07$ & $1.952\pm0.005$ & $1.959\pm0.006$ & $<0.51$ & $75_{-3}^{+1}$ & 3165/3118\\
4b & $2.82_{-0.2}^{+0.3}$ & $0.66\pm0.03$ & $2.6\pm0.3$ & $1.91\pm0.01$&$1.82\pm0.02$&-&-&-&\\
4c & $4.2\pm0.2$ & $0.59\pm0.02$& $2.4\pm0.1$ & $1.78\pm0.02$& $1.89\pm0.01$&-&-&-&\\
\hline
\end{tabular}
\caption{Fit parameters for the models discussed in \S~\ref{2003section} and \S~\ref{section_allfits}. Values left blank are not used in a given model, and those marked with `-' are fixed between observations. Model 1 is the best fit to the 2003 (high state) observation only. Model 2 is the fit to all three spectra, allowing for changes in the photon index and warm absorption between observations. Model 3 ties the warm absorption and underlying spectrum between observations, but allows for a variable cold absorber, and model 4 allows the warm absorber and powerlaw to change as well. a,b and c correspond to the three observations in Table~\ref{obstable}. Errors are not shown for the covering fraction and column density in 3a and 4a, as their low values and degeneracy mean that they cannot be independently constrained (see text).}
\label{fitpars}
\end{table*}
We next investigate the addition of a cold absorber, modelled using \textsc{zpcfabs} (full model: \textsc{tbabs * (zpcfabs * xstar * powerlaw + xillver + zgauss)}). Keeping all parameters apart from the covering fraction and column density tied between the observations, we find a much improved fit over the spectral pivoting models ($\chi^2_\nu=3423/3127=1.09$), with a $\Delta\chi^2$ of $-206$ for 5 more degrees of freedom. However, as shown in Fig.~\ref{wabschanges} there do appear to be changes in the ionised absorption between the 2003 and 2013 observations, once cold absorption is taken into account. For our final best-fit model we allow the column density and ionisation of the warm absorber to change between observations, along with the photon index and flux of the power law. This gives a very good fit ($\chi^2_\nu=3165/3118=1.02$), with no significant residual features. The bulk of the spectral changes are caused by the cold absorption, with only minor changes needed in the continuum and warm absorption. We show the fits to all three datasets in the left panel of Fig.~\ref{bestfit}, both with and without the cold absorber. The best fit model is shown in the right panel of Fig.~\ref{bestfit}. Based on this model, after correcting for both Galactic and intrinsic absorption, we find X-ray luminosities of $L_\textrm{X, 0.3--2}=3.85\times10^{43}$~erg~s$^{-1}$ and $L_\textrm{X, 2--10}=4.21\times10^{43}$~erg~s$^{-1}$.
\begin{figure*}
\includegraphics[width=8.5cm]{bestfit2.eps}
\includegraphics[width=8.5cm]{bestfit_model2.eps}
\caption{Left: The data and residuals to the best fit model for all three observations. Data have been binned in Xspec for clarity. Right: The models fit to the 2003 and 2013 long observations, showing the absorbed powerlaw and distant reflection components, including the narrow 6.55~keV line. In all cases, the upper line corresponds to the 2003 observation. The models have been smoothed slightly using a moving average for clarity.}
\label{bestfit}
\end{figure*}
\section{Optical Monitor and Swift Data}
\label{sec_omuvot}
The X-ray and UV fluxes from {\it Swift}\ monitoring are shown in Table~\ref{swift_res}. We show only the UVM2 flux, as the other filters were not used for half of the 2013 observations. The XRT flux shows a very large (a factor of $\sim$17) drop from 2008 to 2013, but the UV flux actually increases over that interval. Over the course of the 2013 monitoring, the UV flux is very stable, with only small changes in flux, whereas the X-ray flux varies considerably (a factor of $\sim$2.5).
\begin{table}
\begin{center}
\begin{tabular}{clrc}
\hline
ID & Segment & $F_{\rm 0.2-2.0 keV}$ & $F_\textrm{UVM2}$\\
\hline
36530 & 001$^{1}$ & 3.68$^{+0.21}_{-0.16}$ & ---\\
& 003$^{1}$ & 17.00$^{+0.55}_{-0.80}$ & 57.1\plm1.5 \\
91616 & 001 & 0.99\plm0.10 & 39.0\plm1.2 \\
36530 & 006 & 1.43$^{+0.12}_{-0.10}$ & 40.8\plm1.2 \\
& 007 & 0.99$^{+0.16}_{-0.20}$ & 38.1\plm1.2 \\
& 008 & 1.26$^{+0.14}_{-0.44}$ & 38.1\plm1.2 \\
& 009 & 1.72$^{+0.21}_{-0.44}$ & 41.1\plm1.2 \\
& 010 & 2.51$^{+0.28}_{-0.50}$ & 43.5\plm1.2 \\
& 012 & 1.95$^{+0.66}_{-0.55}$ & 41.4\plm1.2 \\
& 013 & 1.70$^{+1.03}_{-0.40}$ & 40.8\plm1.2 \\
\hline
\end{tabular}
\caption{{\it Swift}\ observed XRT 0.2--2.0~keV and UVOT UVM2 fluxes. All fluxes are given in units of $10^{-12}$~erg~s$^{-1}$~cm$^{-2}$ and the UV fluxes are corrected for Galactic reddening.
$^{1}$These observations were previously published in \citet{Grupe10}, and are from 23 July and 17 December 2007. We list these here for comparison purposes. Note that the UVOT magnitudes differ from those given in \citet{Grupe10} due to changes in the calibration.}
\label{swift_res}
\end{center}
\end{table}
This is confirmed by the OM data (Table~\ref{OMdata}), which show a significant increase in UV flux in the 2013 observations over the 2003 observation.
\begin{table*}
\centering
\begin{tabular}{c c c cc c c}
\hline
& \multicolumn{3}{c}{UV Flux} & \multicolumn{3}{c}{Optical Flux}\\
\cmidrule(r){2-4} \cmidrule(r){5-7}
Obs. & UVW1 & UVW2 & UVM2 & U & B & V\\
\hline
0150470601 & $37.31\pm0.06$ & $38.1\pm0.8$ & $36.65\pm0.13$ & -& - &-\\
0690870101 & $43.10\pm0.03$ & $46.4\pm0.7$ & $38.86\pm0.08$ & $35.12\pm0.03$ & $25.25\pm0.05$ & $32.8\pm0.1$\\
0690870501 & $45.137\pm0.009$ & $48.5\pm0.7$ & $40.49\pm0.08$ & $37.87\pm0.03$ & $26.69\pm0.05$ & $33.9\pm0.1$\\
\hline
\end{tabular}
\caption{\emph{XMM-Newton} OM data for Mrk~1048. All fluxes are in units of $10^{-12}$~erg~s$^{-1}$~cm$^{-2}$, and are corrected for Galactic reddening.}
\label{OMdata}
\end{table*}
\section{Discussion}
While it is clear that multiple models can fit the underlying spectrum of this source (e.g. with and without a separate soft excess), this does not change our conclusions about the dominant cause of spectral variability between 2003 and 2013. The model used for the underlying spectrum can effectively be treated as a phenomenological fit, until more detailed data or modelling can distinguish the different models. The spectral differences will still be well modelled with partial covering neutral absorption, regardless of the intrinsic spectrum.
While the high-state \emph{RGS} data from 2003 were well described by a two-component warm absorber model \citep{Krongold09}, the low state data do not provide additional constraints on the absorption features, but reveal strong emission features. These lines appear to be consistent in flux with those found by \citeauthor{Krongold09}, and appear much more prominent due to the reduced continuum flux. The constancy of these features implies that they are caused by reflection outside the region being obscured, potentially from a dusty torus or gas in the broad or narrow line regions.
Unlike other observations of AGN low states, Mrk~1048 does not show any features of relativistic reflection. Several sources show strong, broad iron lines in these low states \citep[e.g. PG~2112+059 and 1H~0707-495:][]{Schartel10, Fabian12}, and a recent \emph{NuSTAR} observation of Mrk~335 in a low flux state shows both a strong broad iron line and Compton hump (Parker et al., submitted). We do not see any evidence for relativistic reflection in any of the observations presented here, and unlike the other sources the drop in flux in Mrk~1048 is strictly confined to low energies.
The OM and UVOT data presented in \S~\ref{sec_omuvot} do not show the large drop in flux seen in the X-ray spectrum of Mrk~1048 in 2013. This suggests that only the inner regions of the disk are affected by this event. Similarly, the {\it Swift}\ monitoring in 2013 reveals that the X-ray flux is much more variable than the UV throughout this period. As the majority of the flux variability in the X-ray band appears to be driven by neutral absorption, this suggests that the cold absorption identified here is not affecting the UV or optical flux. There are two possible explanations for this: firstly, the absorption could be caused by a small cloud or clouds passing in front of the disk, blocking the emission from the compact X-ray emitting region \citep{Dai10} but not impacting flux from the much larger UV and optical emitting regions; alternatively, if the absorber is dust-free there will be no extinction of the UV continuum, and the absorber could be much larger.
In the following, we use the results of the analysis shown in this paper to better constrain the location and size of the absorber. With this aim, we use the observed duration of the obscuration event (at least 40 days), the limited variations of the absorption parameters during the event (Table~\ref{fitpars} fit 4b and 4c), and the values of those parameters. In particular, we use the non-unity covering fraction (0.6--0.7) and the column density (2.5--4.4$\times10^{22}$~cm$^{-2}$). We also use the fact that the UV emission is not affected by the absorption event, and the low ionization (consistent with being neutral) of the obscuring material.
The absorption event in 2013 clearly lasted for at least 40 days, based on the long-term light curve (Fig.~\ref{longtermlcurve}), with only small variations in the covering fraction and column density. This suggests a single large absorber or multiple small clouds passing over the disk, as a single small cloud should cause more rapid variability. However, the effect of a large cloud or multiple small clouds passing over the disk might be expected to be noticeable in the UV band, unless they were dust free.
Another issue is the covering fraction. A large absorber (i.e. one significantly larger than the X-ray emitting region) would be expected to fully cover the source.
If we assume, based on the non-unity covering fraction, that the absorbing cloud is around the same size as the X-ray emitting region, we can calculate a lower limit on the orbital radius of the cloud. We assume a black hole mass of $2\times10^8 M_\odot$ \citep{Kim08,Vasudevan09} and a diameter of 10~$R_\textrm{G}=3\times10^{14}$~cm for the corona \citep[based on microlensing constraints, e.g.][]{Dai10} and the same for the cloud diameter, $D_\textrm{c}$. Using the equation for the total orbit duration in terms of the cloud size and Keplerian velocity from \citet{Risaliti07}, we find that in order to have an eclipse that lasts longer than 1 month a small cloud must be located at a radius $r_\textrm{c}\gtrsim8\times10^{17}$~cm from the source. This corresponds to $\sim3\times10^4R_\textrm{G}$, which is consistent with a cloud in the broad line region.
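As a consistency check, this estimate can be reproduced numerically (a minimal sketch: all input values are those adopted above, and the eclipse duration is approximated by the simple Keplerian crossing time $t=(D_\textrm{s}+D_\textrm{c})/v_\textrm{K}$, a special case of the full expression in \citealt{Risaliti07}):

```python
import math

G = 6.674e-8      # gravitational constant, cm^3 g^-1 s^-2
c = 2.998e10      # speed of light, cm s^-1
M_sun = 1.989e33  # solar mass, g

M_bh = 2e8 * M_sun           # adopted black hole mass
R_g = G * M_bh / c**2        # gravitational radius, ~3e13 cm
D_src = 10 * R_g             # assumed coronal diameter, ~3e14 cm
D_cloud = D_src              # cloud assumed comparable in size to the source

def eclipse_duration(r_orbit):
    """Crossing time (s) for a cloud on a Keplerian orbit of radius r_orbit (cm)."""
    v_kep = math.sqrt(G * M_bh / r_orbit)
    return (D_src + D_cloud) / v_kep

r_c = 8e17  # cm, the inferred lower limit on the orbital radius
print(eclipse_duration(r_c) / 86400.0)  # ~38 days, i.e. just over a month
print(r_c / R_g)                        # ~3e4 gravitational radii
```

This confirms that a cloud of coronal size at $8\times10^{17}$~cm produces an eclipse of just over one month, at a distance of a few $\times10^4R_\textrm{G}$.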
For a larger cloud, where the cloud diameter is significantly greater than that of the source, the equation from \citet{Risaliti07} simplifies such that the eclipse time scales as $t\propto r_\textrm{c}^{1/2}D_\textrm{c}$. We can calculate an upper limit on where such a cloud could lie based on the dust sublimation radius\footnote{We note that this estimate assumes that the cloud originated at larger radii -- if the cloud had come from inside the sublimation radius, it could have formed dust-free and moved outwards.}, as the cloud must be dust free in order to not obscure the UV emission. Assuming a bolometric correction factor $K=20$ \citep{Vasudevan07} based on the low Eddington ratio \citep[$\lambda_\textrm{Edd}=0.02$,][]{Vasudevan09} and an ionizing flux equal to half the bolometric flux \citep[$L_\textrm{bol}=10^{44.8}$~erg s$^{-1}$,][]{Vasudevan09}, we find a dust sublimation radius of $r_\textrm{sub}\sim6\times10^{17}$~cm using equation 5 from \citet{Barvainis87}. This radius is only marginally smaller than the radius inferred for a cloud of size similar to the X-ray emitting region. Therefore, for a cloud at this radius, the required size to produce an eclipse with a duration greater than 40 days is only marginally larger than the X-ray source, and partial covering of the source is plausible.
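The sublimation radius follows from the \citet{Barvainis87} scaling $r_\textrm{sub}\simeq1.3\,(L_\textrm{UV}/10^{46}\,\textrm{erg~s}^{-1})^{1/2}\,(T/1500\,\textrm{K})^{-2.8}$~pc (a sketch; the grain sublimation temperature of 1500~K is an assumed value, and the exact prefactor depends on grain composition):

```python
import math

pc = 3.086e18            # cm per parsec
L_bol = 10**44.8         # erg/s, bolometric luminosity adopted in the text
L_uv = 0.5 * L_bol       # ionizing/UV luminosity taken as half of bolometric

# Barvainis (1987), eq. 5: r_sub = 1.3 (L_UV/1e46)^0.5 (T/1500 K)^-2.8 pc
T_sub = 1500.0           # assumed grain sublimation temperature (K)
r_sub = 1.3 * math.sqrt(L_uv / 1e46) * (T_sub / 1500.0)**-2.8 * pc
print(r_sub)  # ~7e17 cm, of order the ~6e17 cm quoted in the text
```

The small difference from the quoted value reflects rounding and the choice of sublimation temperature; the conclusion that $r_\textrm{sub}$ is comparable to the orbital radius inferred above is unchanged.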
A final constraint on the location of the obscuring medium comes from the ionization of the gas \citep[e.g.][]{Risaliti07}. By replacing the neutral partial covering absorber with an ionized one \citep[modelled using \textsc{zxipcf},][]{Reeves08}, we calculate an upper limit on the ionization of $\log(\xi)<-0.62$. From the definition of $\xi$, this implies a minimum radius of $r_\textrm{c}=(L/n\xi)^{1/2}\sim3\times10^{18}$~cm for a cloud with $n$, the density of the gas, given by $n=n_\textrm{H}/D_\textrm{c}$ (see Table~\ref{fitpars}). This distance scales as $D_\textrm{c}^{1/2}$, so a larger absorber would have to be further out. However, this constraint is inconsistent with the requirement that a larger cloud lies within the dust sublimation radius, so we conclude that the cloud is most likely to be small (around 10$^{14}$--10$^{15}$~cm) and at a radius $r_\textrm{c}\gtrsim10^{18}$~cm from the source. We note that the density inferred for such a cloud, $n= n_\textrm{H}/D_\textrm{c}\sim 3\times10^{22}/3\times10^{14}=10^{8}$~cm$^{-3}$, is low compared to that generally assumed for BLR clouds (10$^9$--10$^{11}$~cm$^{-3}$) or observed in other sources \citep[e.g.][]{Risaliti11}. One possible explanation for this is that we are not viewing the source through the dense core of the cloud, rather through the more diffuse outer regions \citep[see e.g.][]{Maiolino10}. This should not greatly affect our estimate of the cloud location or size, as the dust sublimation constraint is independent of the cloud size, the ionization constraint will still apply to the outer regions of the cloud, and partial covering becomes implausible if the cloud is a great deal larger.
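The ionization constraint can likewise be checked numerically (a sketch using the fit values quoted above; the cloud diameter of $3\times10^{14}$~cm is the assumed coronal size, and the ionizing luminosity is again taken as half of $L_\textrm{bol}$):

```python
import math

L_ion = 0.5 * 10**44.8   # erg/s, ionizing luminosity
xi_max = 10**-0.62       # erg cm s^-1, upper limit from the ionized-absorber fit
n_H = 3e22               # cm^-2, column density of the partial coverer
D_c = 3e14               # cm, assumed cloud diameter (~ coronal size)

n = n_H / D_c                            # cm^-3, implied gas density
r_min = math.sqrt(L_ion / (n * xi_max))  # cm, from xi = L / (n r^2)
print(n)      # ~1e8 cm^-3
print(r_min)  # ~3e18 cm
```

Since $n\propto D_\textrm{c}^{-1}$, the minimum radius indeed scales as $r_\textrm{min}\propto D_\textrm{c}^{1/2}$, which is why a larger absorber would have to lie further out.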
Mrk~1048 has a very unusual morphology, with a prominent ring structure \citep{deVauc75} and a gas-rich double nucleus \citep{PerezGarcia96}. It is interesting to speculate that this remarkable morphology and recent merger may be responsible for absorption of the AGN, by allowing cold gas to fall in towards the nucleus.
We note that there appears to be a small emission line at around 9~keV in the spectrum of the 2003 observation (Figs.~\ref{2003fits} and~\ref{gammaestimates}). This feature also appears to be present in the spectra shown in \citet{Krongold09}, although it is difficult to see as the residuals are plotted in standard deviations, rather than as a ratio. The origin of this feature is unclear, as we have selected our source and background regions to avoid contamination from the copper emission in the detector. Regardless, this feature is too small to have a significant impact on any of the results and conclusions based on the data from this observation.
\section{Conclusions}
We have presented new \emph{XMM-Newton} observations of a low flux state in Mrk~1048. The spectra show a drop in flux by a factor of $\sim5$ below 1~keV compared to an archival observation from 2003, while above 5~keV the spectrum is unaffected.
We find that the source spectrum can be well modelled with a single warm absorption zone, applied to a power law continuum and distant reflection. Based on fits to different energy bands we find that the large soft excess identified in previous work is a model dependent result. The soft excess can be well modelled by warm absorption and distant reflection when the continuum is carefully extrapolated from bands unaffected by absorption.
The drop in flux can be described simply by the addition of a partial covering neutral absorber, with some additional minor changes in the warm absorption. Reflection dominated models can be discounted by the lack of relativistic reflection features and the total absence of variability above 5~keV, and low states driven by intrinsic spectral changes, warm absorption, or both are ruled out by spectral fitting.
We conclude that the low flux state in Mrk~1048 is caused by obscuration of the inner disk by cold gas and find that this is most likely due to an obscuring cloud at a radius of a few $10^{18}$~cm from the source.
\section*{Acknowledgements}
The authors would like to thank the anonymous referee for their detailed and helpful comments, and Dom Walton for the \textsc{xstar} grids used in this work.
MLP acknowledges financial support from the Science and Technology Facilities Council (STFC), and would like to thank Ranjan Vasudevan and Matt Middleton for helpful discussions.
This work is based on observations obtained with XMM-Newton, an ESA science mission with instruments and contributions directly funded by ESA Member States and NASA.
At Penn State we acknowledge support from the NASA Swift program
through contract NAS5-00136.
\bibliographystyle{mnras}
\section{Introduction} \label{s:intro}
\begin{notation}\label{not:leqct}
Equality, inequality and strict inequality up to a constant
between total functions $D\to\bbbn$, where $D$ is any set,
are denoted as follows:
\begin{eqnarray*}
f\ \leq_{\rm ct}\ g &\Leftrightarrow&
\exists c\in\bbbn\ \forall x\in D\ f(x)\leq g(x)+c
\\
f\ =_{\rm ct}\ g &\Leftrightarrow &
f\leq_{\rm ct} g\ \wedge\ g\leq_{\rm ct} f\\
&\Leftrightarrow &
\exists c\in\bbbn\ \forall x\in D\ |f(x)-g(x)|\leq c
\\
f\ <_{\rm ct}\ g &\Leftrightarrow &
f \leq_{\rm ct} g\ \wedge\ \neg(g \leq_{\rm ct} f)\\
&\Leftrightarrow &
f \leq_{\rm ct} g\ \wedge\ \forall c\in\bbbn\ \exists x\in D\ g(x)>f(x)+c
\end{eqnarray*}
\end{notation}
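A concrete (illustrative) instance of $=_{\rm ct}$: for $x\geq1$ the length of the binary representation of $x$ is $\lfloor\log_2 x\rfloor+1$, so the two functions below differ by exactly $1$ and are therefore equal up to a constant. The following Python check is ours, not part of the formal development:

```python
import math

def f(x):
    return math.floor(math.log2(x))   # floor of log_2(x)

def g(x):
    return x.bit_length()             # length of the binary representation of x

# For every x >= 1, g(x) = f(x) + 1, hence |f(x) - g(x)| <= 1 and f =_ct g.
diffs = {g(x) - f(x) for x in range(1, 10_000)}
print(diffs)   # {1}
```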
As we shall consider $\bbbn$-valued partial functions
with domain $\bbbn$, $\bbbz$, ${\{0,1\}}^{<\omega}$, $\bbbn^2$,..., the following
definition is convenient.
\begin{definition}\label{def:basic}
A basic set $\bbbx$ is any non-empty finite product of sets
among $\bbbn,\bbbz$ or the set ${\{0,1\}}^{<\omega}$ of finite binary words
or the set $\Sigma^*$ of finite words in some finite or countable
alphabet $\Sigma$.
\end{definition}
Let's also introduce some notations for partial recursive
functions.
\begin{notation}\label{not:PR}
Let $\bbbx,\mathbb{Y}$ be basic sets.
We denote $\PR[\bbbx\to\mathbb{Y}]$ (resp. $\PR[A,\bbbx\to\mathbb{Y}]$)
the family of partial recursive (resp. partial $A$-recursive)
functions $\bbbx\to\mathbb{Y}$.
In case $\bbbx=\mathbb{Y}=\bbbn$, we simply write $\PR[]$ and $\PR[A]$.
\end{notation}
\subsection{Kolmogorov complexity and representations of
$\bbbn$, $\bbbz$}
\label{ss:mainResults}
Kolmogorov complexity $K:\bbbn\to\bbbn$ maps an integer $n$ onto
the length of any shortest binary program $\ttp\in{\{0,1\}}^{<\omega}$ which
outputs $n$.
The invariance theorem asserts that, up to an additive constant,
$K$ does not depend on the program semantics $\ \ttp\mapsto n\ $,
provided it is a universal partial recursive function.
\\
As a straightforward corollary of the invariance theorem, $K$ does
not depend (again up to a constant) on the representation of
integers, i.e. whether the program output $n$ is really in $\bbbn$
or is a word in some alphabet $\{1\}$ or $\{0,...,k-1\}$, for some
$k\geq2$, which gives the unary or base $k$ representation of $n$.
This result is easily extended to all partial recursive
representations of integers, cf. Thm.\ref{thm:recrep}.
\medskip\\
{\em In this paper, we show that this is no longer the case when (suitably
effectivized) classical set theoretical representations are
considered.}
We particularly consider representations of integers via
\begin{itemize}
\item
Church iterators (Church \cite{church33}, 1933),
\item
cardinal equivalence classes
(Russell \cite{russell08} \S IX, 1908, cf. \cite{heijenoort} p.178),
\item
index equivalence classes.
\end{itemize}
Following the usual way to define $\bbbz$ from $\bbbn$, we also
consider representations of a relative integer $z\in\bbbz$
as pairs of representations of non negative integers $x,y$
satisfying $z=x-y$.
In the particular case of Church iterators, restricting to injective
functions and considering negative iterations, leads to another
direct way of representing relative integers.
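As a concrete illustration of the iterator representation (an informal Python sketch, not part of the formal development), the integer $n$ is represented by the functional $f\mapsto f^{(n)}$ and is recovered by applying this iterator to the successor function at $0$:

```python
def church(n):
    """Return the iterator Psi : f |-> f^(n) representing the integer n."""
    def iterate(f):
        def f_n(x):
            for _ in range(n):
                x = f(x)
            return x
        return f_n
    return iterate

def decode(psi):
    """Recover n from an iterator by applying it to successor at 0."""
    return psi(lambda x: x + 1)(0)

print(decode(church(7)))   # 7
```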
\medskip\\
Programs are at the core of Kolmogorov theory. They do not work
on abstract entities but require formal representations of objects.
Thus, we have to define effectivizations of the above abstract set
theoretical notions in order to allow their elements to be computed
by programs.
To do so, we use computable functions and functionals
and recursively enumerable sets.
\medskip\\
Effectivized representations of integers constitute particular
instances of {\em self-enumerated representation systems}
(cf. Def.\ref{def:self}).
This is a notion of family ${\cal F}$ of partial functions
from ${\{0,1\}}^{<\omega}$ to some fixed set $D$ for which an invariance theorem
can be proved by a straightforward adaptation of Kolmogorov's
original proof.
This leads to a notion of Kolmogorov complexity
$K_{\cal F}^D:D\to\bbbn$, cf. Def.\ref{def:Kself}.
The ones considered in this paper are
$$K_{\mathit{Church}}^\bbbn\ ,\ K_{\mathit{Church}}^\bbbz\ ,
\ K_{\Delta \mathit{Church}}^\bbbz\ ,\
K_{\mathit{card}}^\bbbn\ ,\ K_{\Delta \mathit{card}}^\bbbz\ ,\
K_{\mathit{index}}^\bbbn\ ,\ K_{\Delta \mathit{index}}^\bbbz$$
associated to the systems obtained by effectivization of the
Church, cardinal and index representations of $\bbbn$ and
the passage to $\bbbz$ representations as outlined above.
\medskip\\
The main result of this paper states that the above Kolmogorov
complexities coincide (up to an additive constant) with those
obtained via oracles and infinite computations as introduced in
\cite{becherchaitindaicz}, 2001, and
our paper \cite{ferbusgrigoKmaxKmin}, 2004.
\begin{theorem}[Main result]\label{thm:A}$\medskip\\ $
\centerline{$\begin{array}{rclcrcl}
K_{\mathit{Church}}^\bbbn
&=_{\rm ct}& K_{\mathit{Church}}^\bbbz\!\upharpoonright \!\bbbn
&=_{\rm ct}& K_{\Delta \mathit{Church}}^\bbbz\!\upharpoonright \!\bbbn
&=_{\rm ct}& K
\medskip\\
K_{\mathit{card}}^\bbbn &=_{\rm ct}& \kmax[]
&& K_{\Delta \mathit{card}}^\bbbz\!\upharpoonright \!\bbbn &=_{\rm ct}& K^{\emptyset'}
\medskip\\
K_{\mathit{index}}^\bbbn &=_{\rm ct}& \kmax[\emptyset']
&& K_{\Delta \mathit{index}}^\bbbz\!\upharpoonright \!\bbbn
&=_{\rm ct}& K^{\emptyset''}
\end{array}$}
\end{theorem}
Thm.\ref{thm:A} gathers the contents of
Thms.\ref{thm:card}, \ref{thm:DeltaCard},
\ref{thm:index}, \ref{thm:DeltaIndex},
\ref{thm:Church}, \ref{thm:DeltaChurch} and \S\ref{ss:churchZ}.
\\
A preliminary ``light" version of this result was presented in
\cite{ferbusgrigoClermont}, 2002.
\medskip\\
The strict ordering result $K>_{\rm ct}\kmax[]>_{\rm ct} K^{\emptyset'}$
(cf. Notations \ref{not:leqct})
proved in \cite{becherchaitindaicz,ferbusgrigoKmaxKmin}
and its obvious relativization (cf. Prop.\ref{p:degrees})
yield the following hierarchy theorem.
\begin{theorem}\label{thm:B}
$$\log >_{\rm ct}
\begin{array}{c}
K_{\mathit{Church}}^\bbbn\\
=_{\rm ct}\\
K_{\mathit{Church}}^\bbbz\!\upharpoonright \!\bbbn\\
=_{\rm ct}\\
K_{\Delta \mathit{Church}}^\bbbz\!\upharpoonright \!\bbbn
\end{array}
>_{\rm ct} K_{\mathit{card}}^\bbbn
>_{\rm ct} K_{\Delta \mathit{card}}^\bbbz\!\upharpoonright \!\bbbn
>_{\rm ct} K_{\mathit{index}}^\bbbn
>_{\rm ct} K_{\Delta \mathit{index}}^\bbbz\!\upharpoonright \!\bbbn$$
\end{theorem}
This hierarchy result for set theoretical representations somewhat
reflects their {\em degrees of abstraction}.
\medskip\\
Though Church representation via iteration functionals can be
considered as somewhat complex, we see that, surprisingly, the
associated Kolmogorov complexities collapse to the simplest
possible one.
\medskip\\
Also, it turns out that, for cardinal and index representations,
the passage from $\bbbn$ to $\bbbz$, i.e. from $K_\mathit{card}^\bbbn$ to
$K_{\Delta \mathit{card}}^\bbbz$ and from $K_\mathit{index}^\bbbn$ to
$K_{\Delta \mathit{index}}^\bbbz$ does add complexity.
However, for Church iterators, the passage to $\bbbz$ does not modify
Kolmogorov complexity, be it via the $\Delta$ operation
(for $K_{\Delta \mathit{Church}}^\bbbz$) or restricting iterators to
injective functions (for $K_{\mathit{Church}}^\bbbz$).
\medskip\\
The results about the $\Delta \mathit{card}$ and $\Delta \mathit{index}$ classes
are corollaries of those about the $\mathit{card}$ and $\mathit{index}$ classes and of
the following result (Thm.\ref{thm:Deltamax}) which gives a simple
normal form to functions computable relative to a jump oracle,
and is interesting on its own.
\begin{theorem}\label{thm:ADelta}
Let $A\subseteq\bbbn$.
A function $G:{\{0,1\}}^{<\omega}\to\bbbz$ is partial $A'$-recursive if and only
if there exist total $A$-recursive functions
$f,g:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$ such that, for all $\ttp$,
$$G(\ttp)=\max\{f(\ttp,t):t\in\bbbn\}-\max\{g(\ttp,t):t\in\bbbn\}$$
(in particular, $G(\ttp)$ is defined if and only if both $\max$'s
are finite).
\end{theorem}
\subsection{Kolmogorov complexities and families of functions}
\label{ss:thmC}
The equalities in Thm.\ref{thm:A} are, in fact, corollaries of
equalities between families of functions ${\{0,1\}}^{<\omega}\to\bbbn$
(namely, the associated self-enumerated representation systems,
cf. \S\ref{ss:self}) which are interesting on their own.
For instance (cf. Thms.\ref{thm:card}, \ref{thm:DeltaCard},
\ref{thm:index}, \ref{thm:DeltaIndex},
\ref{thm:Church}, \ref{thm:DeltaChurch} and \S\ref{ss:churchZ}),
\begin{theorem}\label{thm:C}
Denote $X\to Y$ the class of {\em partial} functions from $X$ to $Y$.
\\
{\bf 1.}
A function $f:{\{0,1\}}^{<\omega}\to\bbbn$ is the restriction to a $\Pi^0_2$ set
of a partial recursive function if and only if
it is of the form $f=\mathit{Church}\circ \Phi$ where
\\ - $\Phi:{\{0,1\}}^{<\omega}\to (\bbbn\to\bbbn)^{(\bbbn\to\bbbn)}$
is a computable functional,
\\ - $\mathit{Church}:(\bbbn\to\bbbn)^{(\bbbn\to\bbbn)}\to\bbbn$ is the
functional such that
\begin{eqnarray*}
\mathit{Church}(\Psi)&=&\left\{
\begin{array}{ll}
n&\mbox{if $\Psi$ is the iterator }f\mapsto f^{(n)}\\
\mbox{undefined}&\mbox{otherwise}
\end{array}\right.
\end{eqnarray*}
{\bf 2.}
A function $f:{\{0,1\}}^{<\omega}\to\bbbn$ is the $\max$ of a total recursive
(resp. total $\emptyset'$-recursive) sequence of functions
(cf. Def.\ref{not:maxpr})
if and only if it is of the form
$$\ttp\mapsto \mathit{card}(W_{\varphi(\ttp)}^\bbbn)\ \ \
\mbox{ (resp. }
\ttp\mapsto
\mathit{index}(W_{\varphi(\ttp)}^{\bbbn^2})\mbox{, up to $1$)}$$
for some total recursive $\varphi:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$, where
\\ - $W_{\tt q}^\bbbn$ (resp. $W_{\tt q}^{\bbbn^2}$) is the r.e.
subset of $\bbbn$ (resp. $\bbbn^2$) with code ${\tt q}$,
\\ - $\mathit{card}:P(\bbbn)\to\bbbn$ is the cardinal function
(defined on the sole finite sets),
\\ - $\mathit{index}:P(\bbbn^2)\to\bbbn$ is defined on equivalence relations
with finitely many classes and gives the index (i.e. the number
of equivalence classes).
\medskip\\
{\bf 3.}
A function $f:{\{0,1\}}^{<\omega}\to\bbbn$ is partial $\emptyset'$-recursive
(resp. $\emptyset''$-recursive) if and only if it is of the form
$$\ttp\mapsto \mathit{card}(W_{\varphi_1(\ttp)}^\bbbn)
-\mathit{card}(W_{\varphi_2(\ttp)}^\bbbn)
\
\mbox{ (resp. } \ttp\mapsto
\mathit{index}(W_{\varphi_1(\ttp)}^{\bbbn^2})
-\mathit{index}(W_{\varphi_2(\ttp)}^{\bbbn^2})\mbox{)}$$
for some total recursive $\varphi_1,\varphi_2:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$.
\end{theorem}
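The idea behind point 2 can be made concrete on a toy example (a Python sketch; the finite list below stands in for an r.e. enumeration of a finite set): the cardinality of $W$ is the $\max$ over stages $t$ of the number of distinct elements enumerated by stage $t$.

```python
def card_as_max(enumeration, stages):
    """card(W) computed as the max over t of the count of distinct
    elements enumerated by stage t (meaningful for finite W only)."""
    seen, counts = set(), []
    it = iter(enumeration)
    for _ in range(stages):
        try:
            seen.add(next(it))
        except StopIteration:
            pass                      # enumeration exhausted: count stays put
        counts.append(len(seen))
    return max(counts)

# A toy enumeration of a finite set, with repetitions and arbitrary order
W = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]
print(card_as_max(W, stages=20))      # 7 distinct elements
```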
\subsection{Road map of the paper}
\label{ss:road}
\S\ref{s:Kabstract} introduces the notion of self-enumerated
representation system with its associated Kolmogorov complexity.
\\
\S\ref{s:operations} introduces simple operations
on self-enumerated systems.
\\
\S\ref{s:Delta} sets up some connections between self-enumerated
representation systems for $\bbbn$ and $\bbbz$.
\\
\S\ref{s:RE} considers a self-enumerated representation system for
the set of recursively enumerable subsets of $\bbbn$.
\\
\S\ref{s:InfiniteComp} recalls material from
Becher \& Chaitin \& Daicz, 2001 \cite{becherchaitindaicz}
and our paper \cite{ferbusgrigoKmaxKmin}, 2004,
about some extensions of Kolmogorov complexity involving infinite
computations.
This is to make the paper self-contained.
\\
\S\ref{s:abstract} introduces abstract representations and their
effectivizations.
\\
\S\ref{s:card}, \ref{s:index}, \ref{s:church} develop the
set-theoretical representations mentioned in \S\ref{ss:mainResults}
and prove all the mentioned theorems and some more results
related to the associated self-enumerated systems, in particular
the syntactical complexity of universal functions for such systems.
\section{An abstract setting for Kolmogorov complexity:
self-enumerated representation systems}
\label{s:Kabstract}
\subsection{Classical Kolmogorov complexity}
\label{ss:K}
Classical Kolmogorov complexity of elements of a basic set $\bbbx$
is defined as follows (cf. Kolmogorov, 1965 \cite{kolmo65}):
\begin{enumerate}
\item
To every $\varphi:{\{0,1\}}^{<\omega}\to\bbbx$ is associated
$K^\bbbx_\varphi:\bbbx\to\bbbn$
such that
\\\centerline{$K^\bbbx_\varphi(\ttx)
=\min\{|\ttp|:\varphi(\ttp)=\ttx\}$}
i.e. $K^\bbbx_\varphi(\ttx)$ is the shortest length of a ``program"
$\ttp\in{\{0,1\}}^{<\omega}$ which is mapped onto $\ttx$ by $\varphi$.
\item
Kolmogorov Invariance Theorem asserts that, letting $\varphi$
vary in $\PR[{\{0,1\}}^{<\omega}\to\bbbx]$ (cf. Notation \ref{not:PR}),
there is a least $K^\bbbx_\varphi$, up to an additive constant:
\\\centerline{$\exists\varphi\in \PR[{\{0,1\}}^{<\omega}\to\bbbx]\ \
\forall\psi\in \PR[{\{0,1\}}^{<\omega}\to\bbbx]\ \
\ K^\bbbx_\varphi\leq_{\rm ct} K^\bbbx_\psi$}
Kolmogorov complexity $\ K_\bbbx:\bbbx\to\bbbn\ $ is such
a least $K^\bbbx_\varphi$, so that it is defined up to an additive
constant.
\end{enumerate}
Let $A\subseteq\bbbn$. The above construction relativizes to oracle
$A$ : replace $\PR[{\{0,1\}}^{<\omega}\to\bbbx]$ by $\PR[A,{\{0,1\}}^{<\omega}\to\bbbx]$
to get the oracular Kolmogorov complexity $K_\bbbx^A$.
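For a computable toy ``program semantics'' $\varphi$, the map $K^\bbbx_\varphi$ of item 1 can be computed by exhaustive search over programs in length order (an illustrative Python sketch; the chosen $\varphi$, which reads a word $\ttp$ as the binary numeral $1\ttp$, is ours):

```python
from itertools import product

def phi(p):
    """Toy semantics: the word p in {0,1}* outputs the integer
    whose binary representation is 1p (so phi is onto the positive integers)."""
    return int('1' + p, 2)

def K_phi(x, max_len=20):
    """K_phi(x) = min{ |p| : phi(p) = x }, by search in length order."""
    for length in range(max_len + 1):
        for bits in product('01', repeat=length):
            if phi(''.join(bits)) == x:
                return length
    return None   # x not reached within the length bound

print(K_phi(13))   # 13 = 0b1101, so the shortest program is '101', of length 3
```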
\subsection{Self-enumerated representation systems}
\label{ss:self}
We introduce an abstract setting for the definition of Kolmogorov
complexity: {\em self-enumerated representation systems}.
As a variety of Kolmogorov complexities is considered, this
allows us to unify the multiple variations of the invariance
theorem, the proofs of which repeat, mutatis mutandis, the same
classical proof due to Kolmogorov
(cf. Li \& Vitanyi's textbook \cite{livitanyi} p.97).
\\
This abstract setting also leads to a study of operations on
self-enumerated systems, some of which are presented in
\S\ref{s:Delta},\ref{s:RE} and some more are developed in the
continuation of this paper.
\\
Some intuition for the next definition is given in Note
\ref{note:intuition} and Rk.\ref{rk:self}.
\begin{definition}[Self-enumerated representation systems]
\label{def:self}$\\ $
{\bf 1.} A self-enumerated representation system
(in short ``self-enumerated system") is a pair $(D,{\cal F})$
where $D$ is a set --- the domain of the system ---
and ${\cal F}$ is a family of partial functions ${\{0,1\}}^{<\omega}\to D$
satisfying the following conditions:
\begin{enumerate}
\item[i.]
${\displaystyle D=\bigcup_{F\in{\cal F}} Range(F)}$,
i.e. every element of $D$ appears in the range of
some function $F\in{\cal F}$.
\item[ii.]
If $\varphi:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ is a recursive total function
and $F\in{\cal F}$ then $F\circ \varphi\in{\cal F}$.
\item[iii.]
There exists $U\in{\cal F}$
(called a universal function for ${\cal F}$) and a total
recursive function $comp_U:{\{0,1\}}^{<\omega}\times{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that
$$\forall F\in{\cal F}\ \ \exists {\tt e}\in{\{0,1\}}^{<\omega}\ \
\forall \ttp\in{\{0,1\}}^{<\omega}\ \ F(\ttp)=U(comp_U({\tt e},\ttp))$$
In other words, letting
$U_{\tt e}(\ttp)=U(comp_U({\tt e},\ttp))$,
the sequence of functions $(U_{\tt e})_{{\tt e}\in\bbbn}$
is an enumeration of ${\cal F}$.
\end{enumerate}
{\bf 2.} {\bf (Full systems)}
In case condition ii holds for all {\em partial
recursive functions} $\varphi$, the system $(D,{\cal F})$
is called a self-enumerated representation {\em full system}.
\medskip\\
{\bf 3.}
{\bf (Good universal functions)} A universal function
$U$ for ${\cal F}$ is good if
its associated $comp$ function satisfies the condition
$$\forall{\tt e}\ \exists c_{\tt e}\ \forall\ttp\
|comp_U({\tt e},\ttp)| \leq |\ttp| +c_{\tt e}$$
i.e. for all ${\tt e}$, we have
$(\ttp\mapsto |comp_U({\tt e},\ttp)|) \leq_{\rm ct} |\ttp|$
(cf. Notation \ref{not:leqct}).
\end{definition}
\begin{note}[Intuition]\label{note:intuition}$\\ $
{\em 1.}
The set ${\{0,1\}}^{<\omega}$ is seen as a family of programs
to get elements of $D$.
The choice of binary programs is a fairness condition in view
of the definition of Kolmogorov complexity (cf. Def.\ref{def:Kself})
based on the length of programs:
larger the alphabet, shorter the programs.
\medskip\\
{\em 2.}
Each $F\in{\cal F}$ is seen as a programming language with
programs in ${\{0,1\}}^{<\omega}$.
Special restrictions: no input, outputs are elements of $D$.
\medskip\\
{\em 3.}
Denomination $comp$ stands for ``compiler" since it maps
a program $\ttp$ from ``language" $F$ (with code ${\tt e}$) to its
$U$-compiled form $comp_U({\tt e},\ttp)$ in the ``language" $U$.
\medskip\\
{\em 4.}
``Compilation" with a good universal function does not
increase the length of programs except for some additive constant
which depends only on the language, namely on the sole code ${\tt e}$.
\end{note}
\begin{example}\label{ex:self}
If $\bbbx$ is a basic set then $(\bbbx,\PR[{\{0,1\}}^{<\omega}\to\bbbx])$ is
obviously a self-enumerated representation system.
\end{example}
\begin{remark}\label{rk:self}
In view of the enumerability condition {\em iii} and
since there is no recursive enumeration of total recursive
functions, one would a priori rather require condition {\em ii}
to be true for all partial recursive functions
$\varphi:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$, i.e. consider the sole full systems.
\\
However, there are interesting self-enumerated representation
systems which are not full systems.
The simplest one is $\maxr[]$, cf. Prop.\ref{p:maxprmaxr}.
Other examples we shall deal with involve higher order domains
consisting of infinite objects, for instance the domain
$RE(\bbbn)$ of all recursively enumerable subsets of $\bbbn$,
cf. \S\ref{ss:RE}.
{\em The partial character of computability is already inherent to
the objects in the domain or to the particular notion of
computability and an enumeration theorem does hold for a family
${\cal F}$ of total functions.}
\end{remark}
From conditions i and iii of Def.\ref{def:self},
we immediately see that
\begin{proposition} \label{p:onto}
Let $(D,{\cal F})$ be a self-enumerated system.
Then $D$ and ${\cal F}$ are countable and any universal function
for ${\cal F}$ is surjective.
\end{proposition}
Another consequence of condition iii of Def.\ref{def:self}
is as follows.
\begin{proposition} \label{p:univ}
Let $(\bbbn,{\cal F})$ be a self-enumerated system.
Then all universal functions for ${\cal F}$ are many-one equivalent.
\end{proposition}
\subsection{Good universal functions always exist} \label{ss:good}
Let's recall a classical way to code pairs of words.
\begin{definition}[Coding pairs of words]\label{def:pair}$\\ $
Let $\mu:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be the morphism
(relative to the monoid structure of concatenation product on words)
such that $\mu(0)=00$ and $\mu(1)=01$.
\\
The function $c:{\{0,1\}}^{<\omega}\times{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such that
$c({\tt e},\ttp)=\mu({\tt e})1\ttp$ is a recursive injection
which satisfies equation
\begin{equation}\label{eq:c}
|c({\tt e},\ttp)|=|\ttp|+2|{\tt e}|+1
\end{equation}
Denoting $\lambda$ the empty word, we define
$\pi_1, \pi_2:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ as follows:
\medskip\\\medskip\centerline{$\pi_1(c({\tt e},\ttp))={\tt e}\ ,\
\pi_2(c({\tt e},\ttp))=\ttp\ ,\
\pi_1(w)=\pi_2(w)=\lambda \mbox{ if } w\notin Range(c)$}
\end{definition}
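The coding $c$ and the projections $\pi_1,\pi_2$ are easily programmed (a Python sketch of the definition above, checking the length identity $|c({\tt e},\ttp)|=|\ttp|+2|{\tt e}|+1$ on an example):

```python
def mu(e):
    """The morphism with mu('0') = '00' and mu('1') = '01'."""
    return ''.join('0' + b for b in e)

def c(e, p):
    return mu(e) + '1' + p

def projections(w):
    """Recover (e, p) from c(e, p); return two empty words off Range(c)."""
    e, i = [], 0
    while i + 1 < len(w) and w[i] == '0':
        e.append(w[i + 1])
        i += 2
    if i < len(w) and w[i] == '1':
        return ''.join(e), w[i + 1:]
    return '', ''

e, p = '110', '0101'
w = c(e, p)
print(projections(w), len(w) == len(p) + 2 * len(e) + 1)   # ('110', '0101') True
```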
\begin{remark}
If we redefine $c$ as $c({\tt e},\ttp)=\mu(Bin(|{\tt e}|))1{\tt e}\ttp$
where $Bin(k)$ is the binary representation of the integer
$k\in\bbbn$ then equation (\ref{eq:c}) can be sharpened to
$$|c({\tt e},\ttp)|=|\ttp|+|{\tt e}|+2\lfloor\log(|{\tt e}|)\rfloor+3$$
For an optimal sharpening with a coding of pairs involving the
function
$$\log(x)+\log\log(x)+\log\log\log(x)+...$$
see Li \& Vitanyi's book \cite{livitanyi}, Example 1.11.13, p.79.
\end{remark}
\begin{proposition}[Existence of good universal functions]
\label{p:good}$\\ $
Every self-enumerated system contains a good universal function
with $c$ as associated $comp$ function.
\end{proposition}
\begin{proof}
The usual proof works.
Let $U$ and $comp_U$ be as in Def.\ref{def:self} and set
\medskip\\\medskip\centerline{$U_{opt}
=U\circ comp_U\circ (\pi_1,\pi_2)$}
Then $comp_U\circ (\pi_1,\pi_2):{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
is total recursive and condition ii of Def.\ref{def:self} insures
that $U_{opt}\in{\cal F}$. Now, we have
\medskip\\\medskip\centerline{$
U_{opt}(c({\tt e},\ttp))
= U(comp_U((\pi_1,\pi_2)(c({\tt e},\ttp))))
= U(comp_U({\tt e},\ttp))$}
so that $U_{opt}$ is universal with $c$ as associated
$comp$ function.
\end{proof}
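On a toy enumerated family, the construction in the proof reads as follows (a self-contained Python sketch; the three ``languages'' $F_e$ are invented for illustration). Note that the program-length overhead $|c({\tt e},\ttp)|-|\ttp|=2|{\tt e}|+1$ depends on ${\tt e}$ only, which is exactly the goodness condition:

```python
def mu(e):
    return ''.join('0' + b for b in e)

def c(e, p):
    # coding of pairs from the definition above: |c(e,p)| = |p| + 2|e| + 1
    return mu(e) + '1' + p

def split(w):
    e, i = [], 0
    while i + 1 < len(w) and w[i] == '0':
        e.append(w[i + 1])
        i += 2
    return (''.join(e), w[i + 1:]) if i < len(w) and w[i] == '1' else ('', '')

# A toy enumerated family (F_e): each "language" interprets p in its own way.
F = {
    '0':  lambda p: len(p),                  # F_0 outputs |p|
    '1':  lambda p: p.count('1'),            # F_1 outputs the number of 1s in p
    '10': lambda p: int(p, 2) if p else 0,   # F_10 outputs the binary value of p
}

def U_opt(w):
    """U_opt(c(e, p)) = F_e(p): a universal function with comp = c."""
    e, p = split(w)
    return F[e](p) if e in F else None

print(U_opt(c('10', '1101')))   # 13
```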
\subsection{Relativization of self-enumerated representation
systems} \label{ss:relativization}
Def.\ref{def:self} can be obviously relativized to any oracle $A$.
However, contrary to what can be a priori expected, this is no
generalization but particularization.
The main reason is Prop.\ref{p:good}: there always exists a
universal function with $c$ as associated $comp$ function.
\begin{definition} \label{def:selfA}
Let $A\subseteq\bbbn$.
A self-enumerated representation $A$-system is a pair $(D,{\cal F})$
where ${\cal F}$ is a family of partial functions ${\{0,1\}}^{<\omega}\to D$
satisfying condition i of Def.\ref{def:self} and the following
variants of conditions ii and iii :
\begin{enumerate}
\item[$ii^A$.]
If $\varphi:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ is an $A$-recursive total function
and $F\in{\cal F}$ then $F\circ \varphi\in{\cal F}$.
\item[$iii^A$.]
There exists $U\in{\cal F}$ and a total
$A$-recursive function $comp_U:{\{0,1\}}^{<\omega}\times{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that
$$\forall F\in{\cal F}\ \exists {\tt e}\in{\{0,1\}}^{<\omega}\
\forall \ttp\in{\{0,1\}}^{<\omega}\
F(\ttp)=U(comp_U({\tt e},\ttp))$$
\end{enumerate}
\end{definition}
\begin{example}\label{ex:selfA}
If $\bbbx$ is a basic set then $(\bbbx,\PR[A,{\{0,1\}}^{<\omega}\to\bbbx])$ is
obviously a self-enumerated representation $A$-system.
\end{example}
\begin{proposition} \label{p:selfA}
Every self-enumerated representation $A$-system contains a universal
function with $c$ as associated $comp$ function.
\\
In particular, every such system is also a self-enumerated
representation system.
Thus, $(\bbbx,\PR[A,{\{0,1\}}^{<\omega}\to\bbbx])$ is a self-enumerated
representation system.
\end{proposition}
\begin{proof}
We repeat the same easy argument used for Prop.\ref{p:good}.
Let $U$ and $comp_U$ be as in condition $iii^A$ of
Def.\ref{def:selfA} and set
$U_{opt}=U\circ comp_U\circ (\pi_1,\pi_2)$.
Then $comp_U\circ (\pi_1,\pi_2):{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ is total
$A$-recursive and condition $ii^A$ insures that
$U_{opt}\in{\cal F}$ and we have
\medskip\\\medskip\centerline{$
U_{opt}(c({\tt e},\ttp))
= U(comp_U((\pi_1,\pi_2)(c({\tt e},\ttp))))
= U(comp_U({\tt e},\ttp))$}
so that $U_{opt}$ is universal with $c$ as associated
$comp$ function.
\end{proof}
\subsection{The Invariance Theorem} \label{ss:invariance}
\begin{definition}\label{def:Kphi}
Let $F:{\{0,1\}}^{<\omega}\to D$ be any partial function.
The Kolmogorov complexity
$K_F^D:D\to\bbbn\cup\{+\infty\}$
associated to $F$ is the function defined as follows:
$$K_F^D(x) = \min\{|\ttp|:F(\ttp)=x\}$$
(Convention: $\min\,\emptyset=+\infty$)
\end{definition}
\begin{remark}\label{rk:Kphi}$\\ $
{\bf 1.} $K_F^D(x)$ is finite if and only if $x\in Range(F)$.
Hence $K_F^D$ has values in $\bbbn$ (rather than
$\bbbn\cup\{+\infty\}$) if and only if $F$ is surjective.
\medskip\\
{\bf 2.} If $F:{\{0,1\}}^{<\omega}\to D$ is a restriction of $G:{\{0,1\}}^{<\omega}\to D$
then $K^D_G\leq K^D_F$.
\end{remark}
\medskip
Thanks to Prop. \ref{p:good}, the usual Invariance Theorem
can be extended to any self-enumerated representation system,
which allows us to define Kolmogorov complexity for such a system.
\begin{theorem}[Invariance Theorem,
Kolmogorov, 1965 \cite{kolmo65}]\label{thm:invar}$\\ $
Let $(D,{\cal F})$ be a self-enumerated representation system.
\medskip\\
{\bf 1.}
When $F$ varies in the family ${\cal F}$,
there is a least $K_F^D$, up to an additive constant
(cf. Notation \ref{not:leqct}):
$$\exists F\in {\cal F}\ \ \forall G\in {\cal F}\
\ \ K_F^D \leq_{\rm ct} K_G^D$$
Such $F$'s are said to be optimal in ${\cal F}$.
\medskip\\
{\bf 2.}
Every good universal function for ${\cal F}$ is optimal.
\end{theorem}
\begin{proof}
It suffices to prove 2. The usual proof works.
Consider a good universal enumeration $U$ of ${\cal F}$.
Let $F\in {\cal F}$ and let ${\tt e}$ be such that
$$U(comp_U({\tt e},\ttp))=F(\ttp)\
\mbox{ for all }\ttp\in{\{0,1\}}^{<\omega}$$
First, since $U$ is surjective (Prop.\ref{p:onto}),
all values of $K^D_U$ are finite.
Thus, $K^D_U(x) < K^D_F(x)$ for $x\notin Range(F)$
(since then $K^D_F(x)=+\infty$).
\\ For every $x\in Range(F)$, let $\ttp_x$ be a smallest program
such that $F(\ttp_x)=x$, i.e. $K^D_F(x)=|\ttp_x|$.
Then,
\medskip\\\centerline{$x=F(\ttp_x)=U(comp_U({\tt e},\ttp_x))$}
and since $U$ is good,
\\\medskip\centerline{$K^D_U(x) \leq
|comp_U({\tt e},\ttp_x)|\leq|\ttp_x|+c_{\tt e}=K^D_F(x)+c_{\tt e}$}
and therefore $K^D_U\leq_{\rm ct} K^D_F$.
\end{proof}
As usual, Theorem \ref{thm:invar} allows for an intrinsic
definition of the Kolmogorov complexity associated to the
self-enumerated system $(D,{\cal F})$.
\begin{definition}[Kolmogorov complexity of a self-enumerated
representation system]\label{def:Kself}$\\ $
Let $(D,{\cal F})$ be a self-enumerated representation system.
\\
The Kolmogorov complexity $\ K^D_{\cal F}:D\to\bbbn\ $
is the function $K_U^D$ where $U$ is some fixed
{\em good universal enumeration} in ${\cal F}$.
\\ Up to an additive constant, this definition is independent
of the particular choice of $U$.
\end{definition}
The following straightforward result, based on Examples
\ref{ex:self} and \ref{ex:selfA}, insures that Def.\ref{def:Kself}
is compatible with the usual Kolmogorov complexity and its
relativizations.
\begin{proposition}
Let $A\subseteq\bbbn$ be an oracle and let $D=\bbbx$ be a basic set
(cf. Def.\ref{def:basic}).
The Kolmogorov complexities $K^\bbbx_{\PR[{\{0,1\}}^{<\omega}\to\bbbx]}$ and
$K^\bbbx_{\PR[A,{\{0,1\}}^{<\omega}\to\bbbx]}$ defined above are exactly
the usual Kolmogorov complexity
$K_\bbbx:\bbbx\to\bbbn$ and its relativization $K_\bbbx^A$
(cf. \S\ref{ss:K}).
\end{proposition}
\section{Some operations on self-enumerated systems}
\label{s:operations}
\subsection{The composition lemma}\label{ss:subst}
The following easy fact is a convenient tool to
effectivize representations (cf. \S\ref{ss:why}, \ref{ss:effRep}).
We shall also use it in \S\ref{s:Delta} to go from systems
with domain $\bbbn$ to ones with domain $\bbbz$.
\begin{lemma}[The composition lemma]\label{l:circ}$\\ $
Let $(D,{\cal F})$ be a self-enumerated representation system
and $\varphi:D\to E$ be a surjective partial function.
Set $\varphi\circ{\cal F}=\{\varphi\circ F:F\in{\cal F}\}$.
\medskip\\
{\bf 1.}
\ $(E,\varphi\circ{\cal F})$ is also a self-enumerated
representation system.
Moreover, if $U$ is universal or good universal
for ${\cal F}$ then so is $\varphi\circ U$ for
$\varphi\circ{\cal F}$.
\medskip\\
{\bf 2.}
For every $x\in E$,
$$K^E_{\varphi\circ {\cal F}}(x)
=_{\rm ct}\min\{K^D_{\cal F}(y):\varphi(y)=x\}$$
In particular,
$K^E_{\varphi\circ{\cal F}}\circ \varphi\
\leq_{\rm ct}\ K^D_{\cal F}$
and if $\varphi:D\to E$ is a total bijection from $D$ to $E$ then
$K^E_{\varphi\circ {\cal F}}\circ \varphi\ =_{\rm ct}\ K^D_{\cal F}$.
\end{lemma}
\begin{proof}
Point 1 is straightforward. As for point 2, let $U:{\{0,1\}}^{<\omega}\to D$ be
some universal function for ${\cal F}$ and observe that,
for $x\in E$,
\begin{eqnarray*}
K^E_{\varphi\circ {\cal F}}(x)
&=&\min\{|\ttp|:\ttp\mbox{ such that }\varphi(U(\ttp))=x\}\\
&=&\min\{\min\{|\ttp|:\ttp\mbox{ s.t. }U(\ttp)=y\}:
y\mbox{ s.t. }\varphi(y)=x\}\\
&=&\min\{K^D_{\cal F}(y):y\mbox{ s.t. }\varphi(y)=x\}
\end{eqnarray*}
In particular, taking $x=\varphi(z)$, we get
$K^E_{\varphi\circ {\cal F}}(\varphi(z))
\leq_{\rm ct}\ K^D_{\cal F}(z)$.
\\ Finally, observe that if $\varphi$ is bijective then
$z$ is the unique $y$ such that $\varphi(y)=x$,
so that the above $\min$ reduces to $K^D_{\cal F}(z)$.
\end{proof}
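Point 2 of the lemma can be checked by brute force on a miniature model. Everything below is invented toy data of ours (a "universal" function $U$ reading a program as a binary numeral, and the surjection $\varphi(y)=y\bmod 3$), not an object of the paper; it only illustrates that the complexity in the composed system is the least complexity over the $\varphi$-preimage.

```python
# Toy check of the composition lemma, point 2:
#   K_E(x) = min{ K_D(y) : phi(y) = x }
# with a finite program space of bitstrings of bounded length.
from itertools import product

def programs(max_len):
    """All bitstrings of length <= max_len, in length order."""
    for n in range(max_len + 1):
        for bits in product("01", repeat=n):
            yield "".join(bits)

def U(p):
    """Toy 'universal' map into D = N: read p as a binary numeral."""
    return int(p, 2) if p else None      # the empty program is undefined

def K_D(y, max_len=8):
    """Complexity of y in the toy system (N, {U})."""
    return min((len(p) for p in programs(max_len) if p and U(p) == y),
               default=None)

phi = lambda y: y % 3                    # a surjection D = N -> E = {0, 1, 2}

def K_E(x, max_len=8):
    """Complexity in the composed system (E, phi o {U})."""
    return min((len(p) for p in programs(max_len) if p and phi(U(p)) == x),
               default=None)

# Point 2 of the lemma, verified exhaustively on this finite model:
for x in range(3):
    assert K_E(x) == min(K_D(y) for y in range(2 ** 8) if phi(y) == x)
```

Since the model is finite, the constant `ct` slack of the lemma degenerates to an exact equality here.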
\subsection{Product of self-enumerated representation systems}
\label{ss:product}
We shall need a notion of product of self-enumerated representation
systems.
\begin{theorem}\label{thm:prod}
Let $(D_1,{\cal F}_1)$ and $(D_2,{\cal F}_2)$ be self-enumerated
representation systems.
\\ We identify a pair $(F_1,F_2)\in{\cal F}_1\times{\cal F}_2$
with the function ${\{0,1\}}^{<\omega}\to D_1\times D_2$ which maps $\ttp$ to
$(F_1(\ttp),F_2(\ttp))$.
\\ Then $(D_1\times D_2, {\cal F}_1\times{\cal F}_2)$ is also a
self-enumerated representation system.
\\ If $(D_1,{\cal F}_1)$ and $(D_2,{\cal F}_2)$ are full systems
then so is $(D_1\times D_2, {\cal F}_1\times{\cal F}_2)$.
\\ If $U_1,U_2$ are universal for ${\cal F}_1,{\cal F}_2$ then
$$U_{1,2}
=(U_1\circ\pi_1,U_2\circ\pi_2)$$
is universal for ${\cal F}_1\times{\cal F}_2$.
\end{theorem}
\begin{proof}
{\em Condition ii} in Def.\ref{def:self} is obvious.
\medskip\\
{\em Condition i.} Let $(d_1,d_2)\in D_1\times D_2$.
Applying condition i to $(D_1,{\cal F}_1)$ and to $(D_2,{\cal F}_2)$,
we get $F_1\in{\cal F}_1$, $F_2\in{\cal F}_2$ and
$\ttp_1,\ttp_2\in{\{0,1\}}^{<\omega}$ such that
$d_1=F_1(\ttp_1)$ and $d_2=F_2(\ttp_2)$.
Therefore
$(d_1,d_2)=(F_1\circ\pi_1,F_2\circ\pi_2)(c(\ttp_1,\ttp_2))$.
Observe finally that
$(F_1\circ\pi_1,F_2\circ\pi_2)\in{\cal F}_1\times {\cal F}_2$
(condition ii for $(D_1,{\cal F}_1) , (D_2,{\cal F}_2)$).
\medskip\\
{\em Condition iii.} Let $comp_1,comp_2:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be
the $comp$ functions associated to the universal functions
$U_1, U_2$ and set
\\\centerline{$comp_{1,2}({\tt e},\ttp)
=c(comp_1(\pi_1({\tt e}),\ttp),comp_2(\pi_2({\tt e}),\ttp))$}
For every $(F_1,F_2)\in{\cal F}_1\times {\cal F}_2$
there exist ${\tt a},{\tt b}\in{\{0,1\}}^{<\omega}$ such that
$F_1(\ttp)=U_1(comp_1({\tt a},\ttp))$ and
$F_2(\ttp)=U_2(comp_2({\tt b},\ttp))$. Therefore
\begin{eqnarray*}
(F_1,F_2)(\ttp)
&=&(U_1(comp_1({\tt a},\ttp)),U_2(comp_2({\tt b},\ttp)))\\
&=&(U_1\circ\pi_1,U_2\circ\pi_2)
(c(comp_1({\tt a},\ttp),comp_2({\tt b},\ttp)))\\
&=&U_{1,2}(comp_{1,2}(c({\tt a},{\tt b}),\ttp))
\end{eqnarray*}
which proves that $U_{1,2}$ is universal
for the product system ${\cal F}_1\times {\cal F}_2$.
\end{proof}
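The proof treats the pairing $c$ and its projections $\pi_1,\pi_2$ as black boxes. One concrete choice among many (a sketch of ours, not necessarily the paper's coding) doubles each bit of the first word and inserts the separator $01$; both projections are then total recursive and easy to compute:

```python
# A concrete pairing c : {0,1}* x {0,1}* -> {0,1}* with recursive
# projections: double the bits of u, append the separator "01", append v.
# Then |c(u, v)| = 2|u| + |v| + 2.

def c(u, v):
    return "".join(b + b for b in u) + "01" + v

def split(p):
    """Recover (u, v): scan doubled pairs until the '01' separator."""
    i = 0
    while p[i] == p[i + 1]:   # still inside the doubled region
        i += 2
    return p[0:i:2], p[i + 2:]

def pi1(p): return split(p)[0]
def pi2(p): return split(p)[1]

# Round-trip check on a few pairs, including empty words:
for u, v in [("", ""), ("0", "101"), ("1101", ""), ("10", "011")]:
    p = c(u, v)
    assert (pi1(p), pi2(p)) == (u, v)
    assert len(p) == 2 * len(u) + len(v) + 2
```

With such a $c$, the product universal function $U_{1,2}(\ttp)=(U_1(\pi_1(\ttp)),U_2(\pi_2(\ttp)))$ of the theorem is effectively computable from $U_1,U_2$.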
\begin{remark}
Observe that, even if $U_1,U_2$ are good,
the above universal function $U_{1,2}$ is not good since
\begin{eqnarray*}
|comp_{1,2}({\tt e},\ttp)| & = & 2|comp_1(\pi_1({\tt e}),\ttp)|
+|comp_2(\pi_2({\tt e}),\ttp)|+1
\end{eqnarray*}
which is $\geq 3|\ttp|$ in general.
\\ To get a good function $\widetilde{U_{1,2}}$, argue as in
the proof of Prop.\ref{p:good}:
\begin{eqnarray*}
\widetilde{U_{1,2}}(\ttp)
&=&U_{1,2}\circ comp_{1,2}\circ(\pi_1,\pi_2)(\ttp)\\
&=&U_{1,2}(comp_{1,2}(\pi_1(\ttp),\pi_2(\ttp)))\\
&=&U_{1,2}(c(comp_1(\pi_1\pi_1(\ttp),\pi_2(\ttp)),
comp_2(\pi_2\pi_1(\ttp),\pi_2(\ttp))))\\
&=&(U_1\circ\pi_1, U_2\circ\pi_2)
\\ &&(c(comp_1(\pi_1\pi_1(\ttp),\pi_2(\ttp)),
comp_2(\pi_2\pi_1(\ttp),\pi_2(\ttp))))\\
&=&(U_1(comp_1(\pi_1\pi_1(\ttp),\pi_2(\ttp))),
U_2(comp_2(\pi_2\pi_1(\ttp),\pi_2(\ttp))))
\end{eqnarray*}
\end{remark}
\section{From domain $\bbbn$ to domain $\bbbz$}
\label{s:Delta}
\subsection{The $\Delta$ operation}\label{ss:Delta}
Relative integers are classically introduced as equivalence classes
of pairs of natural integers, an integer being the difference of the
two components of any pair in its class.
This gives a simple way to go from a self-enumerated representation
system with domain $\bbbn$ to one with domain $\bbbz$.
\begin{definition}[The $\Delta$ operation]\label{def:Delta}$\\ $
Let $\mbox{\em diff}:\bbbn^2\to\bbbz$ be the function $(m,n)\mapsto m-n$.
\\
If $(\bbbn,{\cal F})$ is a self-enumerated representation system
with domain $\bbbn$, using notations from
Lemma \ref{l:circ} and Thm.\ref{thm:prod}, we let
$(\bbbz,\Delta{\cal F})$ be the system
$$(\bbbz, \mbox{\em diff}\circ ({\cal F}\times{\cal F}))$$
\end{definition}
As a direct corollary of Lemma \ref{l:circ} and
Thm.\ref{thm:prod}, we have
\begin{proposition}
If $(\bbbn,{\cal F})$ is a self-enumerated representation system
(resp. full system)
with domain $\bbbn$ then so is $(\bbbz,\Delta{\cal F})$.
\end{proposition}
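The $\Delta$ operation can be tried out on a miniature: take the toy complexity $K_\bbbn(k)=$ length of the shortest binary numeral for $k$ (our own stand-in, not the paper's $K$), and measure an integer by the cheapest pair of naturals whose difference it is. The pairing overhead, a constant in the real construction, is ignored here.

```python
# Miniature of the Delta operation: K_Z(z) is minimised over all pairs
# (m, n) of naturals with m - n = z, each measured by a toy numeral
# complexity K_N.  The additive pairing constant is deliberately dropped.

def K_N(k):
    """Toy complexity: length of the binary numeral of k (K_N(0) = 1)."""
    return len(bin(k)[2:])

def K_Z(z, bound=64):
    """Proxy complexity of the integer z via pairs of naturals."""
    return min(K_N(m) + K_N(n)
               for m in range(bound) for n in range(bound) if m - n == z)

assert K_Z(0) == 2            # cheapest pair: (0, 0)
assert K_Z(-1) == 2           # cheapest pair: (0, 1)
assert K_Z(5) == K_N(5) + 1   # cheapest pair: (5, 0)
```

Negative integers cost no more than their absolute values here, mirroring Prop.\ref{p:delta}, point 2.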
\subsection{$\bbbz$ systems and $\bbbn$ systems}\label{ss:Delta2}
The following propositions collect some easy facts about
self-enumerated systems with domain $\bbbz$ and their associated
Kolmogorov complexities.
\begin{proposition}\label{p:delta}
Let $(\bbbz,{\cal G})$ be a self-enumerated system.
\medskip\\
{\bf 1.}
Let ${\cal F}=\{G\!\upharpoonright \! G^{-1}(\bbbn):G\in{\cal G}\}$.
Then $(\bbbn,{\cal F})$ is also a self-enumerated system and
$K^\bbbn_{\cal F}=K^\bbbz_{\cal G}\!\upharpoonright \!\bbbn$.
\medskip\\
{\bf 2.}
Denote $opp:\bbbz\to\bbbz$ the function $n\mapsto-n$.
If ${\cal G}\circ opp={\cal G}$ then
$K^\bbbz_{\cal G}=_{\rm ct} K^\bbbz_{\cal G}\circ opp$.
\end{proposition}
\begin{proof}
1. Conditions i-ii of Def.\ref{def:self} are obvious.
As for iii, observe that if $U\in{\cal G}$ is universal for
${\cal G}$ then $U\!\upharpoonright \! U^{-1}(\bbbn)$ is in ${\cal F}$ and is
universal for ${\cal F}$ with the same associated $comp$ function.
Now,
$K_{U\!\upharpoonright \! U^{-1}(\bbbn)}=K_U\!\upharpoonright \!\bbbn$. Whence
$K^\bbbn_{\cal F}=K^\bbbz_{\cal G}\!\upharpoonright \!\bbbn$.
\medskip\\
2. Observe that if $\varphi,F\in{\cal G}$ and
$K_\varphi\leq_{\rm ct} K_F$ then
$K_{\varphi\circ opp}\leq_{\rm ct} K_{F\circ opp}$.
Since ${\cal G}\circ opp={\cal G}$, we see that if
$\varphi$ is optimal then so is $\varphi\circ opp$.
Whence $K_\varphi=_{\rm ct} K_{\varphi\circ opp}$, and therefore
$K^\bbbz_{\cal G}=_{\rm ct} K^\bbbz_{\cal G}\circ opp$.
\end{proof}
\begin{proposition}\label{p:deltaPR}
Let $A\subseteq\bbbn$.
\medskip\\
{\bf 1.}\ \
$\PR[A,{\{0,1\}}^{<\omega}\to\bbbn]=\PR[A,{\{0,1\}}^{<\omega}\to\bbbz]\ \cap\ ({\{0,1\}}^{<\omega}\to\bbbn)
=\{G\!\upharpoonright \! G^{-1}(\bbbn):G\in\PR[A,{\{0,1\}}^{<\omega}\to\bbbz]\}$.
\\
In particular, $K^{A,\bbbz}\!\upharpoonright \!\bbbn=_{\rm ct} K^{A,\bbbn}$.
\medskip\\
{\bf 2.}\ \
$\PR[A,{\{0,1\}}^{<\omega}\to\bbbz]
=\PR[A,{\{0,1\}}^{<\omega}\to\bbbz]\circ opp=\Delta \PR[A,{\{0,1\}}^{<\omega}\to\bbbn]$.
\\
In particular, $K^{A,\bbbz}=_{\rm ct} K^{A,\bbbz}\circ opp$.
\end{proposition}
\section{Self-enumerated representation systems for r.e. sets}
\label{s:RE}
We now come to examples of self-enumerated systems of a somewhat
different kind, which will be used in the effectivization of
set theoretical representations of integers.
\subsection{Acceptable enumerations}\label{ss:acceptable}
Let's recall the notion of acceptable enumeration of partial
recursive functions (cf. Rogers \cite{rogers} Ex. 2.10 p.41,
or Odifreddi \cite{odifreddi}, p.215).
\begin{definition}\label{def:acceptable}
Let $\bbbx,\mathbb{Y}$ be some basic sets and $A\subseteq\bbbn$.
\medskip\\
{\bf 1.}
An enumeration $(\phi^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ of partial
$A$-recursive functions $\bbbx\to\mathbb{Y}$ is {\em acceptable} if
\begin{enumerate}
\item[\bf i.]
it is partial $A$-recursive as a function
${\{0,1\}}^{<\omega}\times\bbbx\to\mathbb{Y}$
\item[\bf ii.]
and it satisfies the parametrization (also called s-m-n)
property: for every basic set $\bbbz$, there exists a total
$A$-recursive function $s^\bbbz_\bbbx:{\{0,1\}}^{<\omega}\times\bbbz\to{\{0,1\}}^{<\omega}$
such that, for all ${\tt e}\in{\{0,1\}}^{<\omega}$, $\ttz\in\bbbz$, $\ttx\in\bbbx$,
$$\phi^A_{\tt e}(\couple\ttz\ttx)
=\phi^A_{s^\bbbz_\bbbx({\tt e},\ttz)}(\ttx)$$
where $\couple\ttz\ttx$ is the image of the pair $(\ttz,\ttx)$
by some fixed total recursive bijection $\bbbz\times\bbbx\to\bbbx$.
\end{enumerate}
{\bf 2.}
An enumeration $(W^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$
of $A$-recursively enumerable subsets of $\bbbx$
is {\em acceptable} if, for all ${\tt e}\in{\{0,1\}}^{<\omega}$,
$W^A_{\tt e}=domain(\phi^A_{\tt e})$ for some acceptable enumeration
$(\phi^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ of partial $A$-recursive functions.
\end{definition}
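The parametrization (s-m-n) property of the definition has a folklore rendering in any language with program-as-data: "programs" below are Python expressions in the free variables `z` and `x`, and the names `phi2`, `phi1`, `s` are ours, chosen to echo the definition; this is only an illustration of the property, not the paper's formalism.

```python
# Toy s-m-n: phi1(s(e, z), x) = phi2(e, z, x), with s total computable.
# Programs are arithmetic expressions over the variables z and x.

def phi2(e, z, x):
    """Run program e on the pair (z, x)."""
    return eval(e, {"z": z, "x": x})

def phi1(e, x):
    """Run program e on the single argument x."""
    return eval(e, {"x": x})

def s(e, z):
    """Parametrization: hard-code the first argument into the program."""
    return e.replace("z", repr(z))

e = "z * x + z + 1"
for z in range(4):
    for x in range(4):
        assert phi1(s(e, z), x) == phi2(e, z, x)
```

Note that `s` manipulates the program text only; it never runs `e`, which is exactly what makes the real $s^\bbbz_\bbbx$ total even on programs that diverge.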
We shall need Rogers' theorem
(cf. Odifreddi \cite{odifreddi} p.219).
\begin{theorem}[Rogers' theorem]\label{thm:rogers}
If $(\phi^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ and
$(\psi^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ are
two acceptable enumerations of partial $A$-recursive functions
$\bbbx\to\mathbb{Y}$, then there exists some
$A$-recursive bijection $\theta:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such that
$\psi^A_{\tt e}=\phi^A_{\theta({\tt e})}$ for all ${\tt e}\in{\{0,1\}}^{<\omega}$.
\end{theorem}
\begin{corollary}\label{cor:rogers}
Let $(W'^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ and $(W''^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$
be two acceptable enumerations of $A$-r.e. subsets of $\bbbx$.
Then there exists an $A$-recursive bijection
$\theta:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such that
$W''^A_{\tt e}=W'^A_{\theta({\tt e})}$ for all ${\tt e}\in{\{0,1\}}^{<\omega}$.
\end{corollary}
\begin{proof}
Apply Rogers' theorem to acceptable enumerations
$(\phi^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}},(\psi^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$
of partial $A$-recursive functions such that
$W'^A_{\tt e}=domain(\phi^A_{\tt e})$ and $W''^A_{\tt e}=domain(\psi^A_{\tt e})$.
\end{proof}
\subsection{Self-enumerated representation systems for r.e. sets}
\label{ss:RE}
Cor.\ref{cor:rogers} allows us to define a natural intrinsic notion of
``partial $A$-computable" map ${\{0,1\}}^{<\omega}\to RE^A(\bbbx)$.
\begin{proposition}\label{p:FRE}
Let $RE^A(\bbbx)$ be the family of $A$-recursively enumerable
subsets of $\bbbx$ and let
$(W'^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ and
$(W''^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ be
two acceptable enumerations of $A$-r.e. subsets of $\bbbx$.
Let $G:{\{0,1\}}^{<\omega}\to RE^A(\bbbx)$.
\medskip\\
{\bf 1.}
The following conditions are equivalent:
\begin{enumerate}
\item[i.]
There exists a total $A$-recursive function $f:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that $G(\ttp)=W'^A_{f(\ttp)}$ for all $\ttp\in{\{0,1\}}^{<\omega}$
\item[ii.]
There exists a total $A$-recursive function $f:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that $G(\ttp)=W''^A_{f(\ttp)}$ for all $\ttp\in{\{0,1\}}^{<\omega}$
\end{enumerate}
{\bf 2.}
The following conditions are equivalent:
\begin{enumerate}
\item[i.]
There exists a partial $A$-recursive function $f:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that, for all $\ttp\in{\{0,1\}}^{<\omega}$,
$G(\ttp)=
\left\{\begin{array}{ll}
W'^A_{f(\ttp)}&\mbox{if $f(\ttp)$ is defined}\\
\mbox{undefined}&\mbox{otherwise}
\end{array}\right.$
\item[ii.]
There exists a partial $A$-recursive function $f:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that, for all $\ttp\in{\{0,1\}}^{<\omega}$,
$G(\ttp)=
\left\{\begin{array}{ll}
W''^A_{f(\ttp)}&\mbox{if $f(\ttp)$ is defined}\\
\mbox{undefined}&\mbox{otherwise}
\end{array}\right.$
\end{enumerate}
\end{proposition}
\begin{proof}
Applying Cor.\ref{cor:rogers}, we get
$W''^A_{f(\ttp)}=W'^A_{\theta(f(\ttp))}$
and $W'^A_{f(\ttp)}=W'^A_{\theta^{-1}(f(\ttp))}$.
To conclude, observe that $\theta\circ f$ and $\theta^{-1}\circ f$
are both total (point 1) or partial (point 2) $A$-recursive as
is $f$.
\end{proof}
We can now come to the notion of self-enumerated systems for
r.e. sets.
\begin{definition}[Self-enumerated systems for r.e. sets]
\label{def:RE}$\\ $
Let $RE^A(\bbbx)$ be the class of $A$-r.e. subsets of the basic
set $\bbbx$.
\\
Let $(W^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ be some fixed acceptable
enumeration of $A$-r.e. subsets of $\bbbx$.
Cor.\ref{cor:rogers} insures that the families defined hereafter
do not depend on the chosen acceptable enumeration.
\medskip\\
{\bf 1.}
We let ${\cal F}^{RE^A(\bbbx)}$ be the family of all {\em total}
functions ${\{0,1\}}^{<\omega}\to RE^A(\bbbx)$ of the form
$\ttp\mapsto W^A_{f(\ttp)}$ where $f:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ varies over
{\em total} $A$-recursive functions.
\medskip\\
{\bf 2.}
We let ${\cal PF}^{RE^A(\bbbx)}$ be the family of all {\em partial}
functions ${\{0,1\}}^{<\omega}\to RE^A(\bbbx)$ of the form
$$\ttp\mapsto \left\{\begin{array}{ll}
W^A_{f(\ttp)}&\mbox{if $f(\ttp)$ is defined}\\
\mbox{undefined}&\mbox{otherwise}
\end{array}\right.$$
where $f:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ varies over {\em partial} $A$-recursive
functions.
\end{definition}
The following proposition shows that, in the definition of
${\cal F}^{RE^A(\bbbx)}$,
one can either relax the total ``$A$-recursive"
condition on $f$ to ``partial $A$-recursive" with a special
convention (different from that considered in the definition of
${\cal PF}^{RE^A(\bbbx)}$) or restrict it to some particular
$A$-recursive sequence of total functions.
\begin{proposition}\label{p:RE}
For any acceptable enumeration $(W^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$
of $A$-r.e. subsets of $\bbbx$ there exists
a total $A$-recursive function
$\sigma:{\{0,1\}}^{<\omega}\times{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such that,
for any total function $G:{\{0,1\}}^{<\omega}\to RE^A(\bbbx)$,
the following conditions are equivalent:
\begin{enumerate}
\item[a.]
$G$ is of the form $\ \ttp\mapsto W^A_{\sigma({\tt e},\ttp)}$\
for some ${\tt e}\in{\{0,1\}}^{<\omega}$
\item[b.]
$G\in{\cal F}^{RE^A(\bbbx)}$
\item[c.]
There exists a partial $A$-recursive function
$g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such that, for all $\ttp$,
$G(\ttp)=\left\{\begin{array}{ll}
W^A_{g(\ttp)}&\mbox{if $g(\ttp)$ is defined}\\
\emptyset&\mbox{otherwise}
\end{array}\right.$.
\end{enumerate}
\end{proposition}
\begin{proof}
Since $a\Rightarrow b\Rightarrow c$ is trivial whatever the
total $A$-recursive function $\sigma$ may be,
it remains to define $\sigma$ such that $c\Rightarrow a$ holds.
\\
Let $(\phi^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ be an acceptable enumeration
of partial $A$-recursive functions $\bbbx\to\bbbn$
such that
$W^A_{\tt e}=domain(\phi^A_{\tt e})$.
\medskip\\
Let $(\psi^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ be an acceptable enumeration
of partial $A$-recursive functions ${\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
and let ${\tt a}$ be such that
$\phi^A_{\psi^A_{\tt e}(\ttp)}(\ttx)=
\phi^A_{\tt a}(\couple {({\tt e},\ttp)} \ttx)$
for all ${\tt e},\ttp\in{\{0,1\}}^{<\omega}$, $\ttx\in\bbbx$.
The parameter theorem insures that there exists a total
$A$-recursive function
$s:{\{0,1\}}^{<\omega}\times({\{0,1\}}^{<\omega}\times{\{0,1\}}^{<\omega})\to{\{0,1\}}^{<\omega}$
such that
$$\phi^A_{\psi^A_{\tt e}(\ttp)}(\ttx)
=\phi^A_{\tt a}(\couple {({\tt e},\ttp)} \ttx)
=\phi^A_{s({\tt a},{\tt e},\ttp)}(\ttx)
=\phi^A_{\sigma({\tt e},\ttp)}(\ttx)$$
where $\sigma({\tt e},\ttp)=s({\tt a},{\tt e},\ttp)$.
Whence the equality
$$W^A_{\psi^A_{\tt e}(\ttp)}=W^A_{\sigma({\tt e},\ttp)}$$
which is also valid when $\psi^A_{\tt e}(\ttp)$ is undefined,
in the sense that both sets are empty.
\medskip\\
Let $G,g$ be as in c. Since $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ is partial
$A$-recursive, there exists ${\tt e}$ such that
$g(\ttp)=\psi^A_{\tt e}(\ttp)$ for any $\ttp\in{\{0,1\}}^{<\omega}$.
Thus,
$$W^A_{g(\ttp)}=W^A_{\psi^A_{\tt e}(\ttp)}=W^A_{\sigma({\tt e},\ttp)}$$
an equality which is also valid if $g(\ttp)$ is undefined,
in the sense that all three sets are empty.
\\
This proves $c\Rightarrow a$.
\end{proof}
\begin{theorem}\label{thm:RE}
$(RE^A(\bbbx),{\cal F}^{RE^A(\bbbx)})$
and $(RE^A(\bbbx),{\cal PF}^{RE^A(\bbbx)})$
are self-enumerated representation systems.
\end{theorem}
\begin{proof}
Conditions $i, ii^A$ of Def.\ref{def:self}, \ref{def:selfA} are
obvious for both systems.
\\
If $U$ satisfies $iii^A$ for $\PR[A,{\{0,1\}}^{<\omega}\to\bbbx]$ then
$$\ttp\mapsto\left\{
\begin{array}{ll}
W^A_{U(\ttp)}&\mbox{if $U(\ttp)$ is defined}\\
\mbox{undefined}&\mbox{otherwise}
\end{array}\right.$$
satisfies $iii^A$ for ${\cal PF}^{RE^A(\bbbx)}$
with the same associated $comp$ function.
\\
Prop.\ref{p:RE} proves that the function $\ttp\mapsto W^A_\ttp$
satisfies condition $iii^A$ with $\sigma$ as $comp$ function.
Thus, $(RE^A(\bbbx),{\cal F}^{RE^A(\bbbx)})$
and $(RE^A(\bbbx),{\cal PF}^{RE^A(\bbbx)})$ are self-enumerated
$A$-systems.
We conclude using Prop.\ref{p:selfA}.
\end{proof}
\begin{remark}\label{rk:RE}
It is possible to improve Prop.\ref{p:RE} so as to get a $\sigma$
in condition $a$ which is total recursive (rather than $A$-recursive).
This holds for particular acceptable enumerations of $A$-r.e.
sets, with the same total recursive $\sigma$ for every $A$.
We sketch how this can be obtained (for more details about this type
of argument, cf. our paper \cite{ferbusgrigoOrder} \S2.3, 2.4.).
\\
Using partial computable functionals $\bbbx\times P(\bbbn)\to\bbbn$,
we can view partial $A$-recursive functions as functions obtained by
freezing the second order argument in such functionals.
We can also consider $A$-r.e. subsets of $\bbbx$
as obtained from domains of such functionals by freezing the
second order argument.
\\
When freezing the second order argument to $A\subseteq\bbbn$,
acceptable enumerations of partial computable functionals
give acceptable enumerations of partial $A$-recursive functions.
\\
In this way, consider an acceptable enumeration
$(\Phi_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ of partial computable functionals
$\bbbx\times P(\bbbn)\to\bbbn$
and let ${\cal W}^A_{\tt e}=\{\ttx:(\ttx,A)\in domain(\Phi_{\tt e})\}$.
Arguing as in the proof of Prop.\ref{p:RE}
(with an acceptable enumeration $(\Psi_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$
of partial computable functionals ${\{0,1\}}^{<\omega}\times P(\bbbn)\to{\{0,1\}}^{<\omega}$)
we get
$$\Phi_{\Psi_{\tt e}(\ttp,A)}(\ttx,A)
=\Phi_{\tt a}(\couple {({\tt e},\ttp)} \ttx,A)
=\Phi_{s({\tt a},{\tt e},\ttp)}(\ttx,A)
=\Phi_{\sigma({\tt e},\ttp)}(\ttx,A)$$
where $s$ is the total recursive function involved in the parameter
property for the acceptable enumeration
$(\Phi_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ and
$\sigma({\tt e},\ttp)=s({\tt a},{\tt e},\ttp)$.
\\
Now, let $G\in{\cal F}^{RE^A(\bbbx)}$ and let $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be
total $A$-recursive such that $G(\ttp)={\cal W}^A_{g(\ttp)}$.
Let ${\tt e}\in{\{0,1\}}^{<\omega}$ be such that $g(\ttp)=\Psi_{\tt e}(\ttp,A)$
for all $\ttp$.
Then
$$\Phi_{g(\ttp)}(\ttx,A)=\Phi_{\Psi_{\tt e}(\ttp,A)}(\ttx,A)
=\Phi_{\sigma({\tt e},\ttp)}(\ttx,A)\ \ \mbox{ and }\ \
G(\ttp)={\cal W}^A_{g(\ttp)}={\cal W}^A_{\sigma({\tt e},\ttp)}$$
\end{remark}
\section{Infinite computations}
\label{s:InfiniteComp}
Chaitin, 1976 \cite{chaitin76}, and Solovay, 1977
\cite{solovay77}, considered infinite computations producing
infinite objects (namely recursively enumerable sets) so as to
define Kolmogorov complexity of such infinite objects.
\\
Following the idea of possibly infinite computations leading to
finite output (i.e. removing the sole halting condition),
Becher \& Chaitin \& Daicz, 2001 \cite{becherchaitindaicz}
introduced a variant $K^\infty$ of Kolmogorov complexity.
\\
In our paper \cite{ferbusgrigoKmaxKmin}, 2004, we introduced two
variants $\kmax[],\kmin[]$ of Kolmogorov complexity and proved
that $K^\infty=\kmax[]$.
These variants are based on two self-enumerated representation
systems, namely the classes of $\max$'s and $\min$'s of partial
recursive functions ${\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$.
\subsection{Self-enumerated systems of $\max$ of partial recursive
functions}
\label{ss:maxpr}
\begin{notation}\label{not:maxpr}
Let $A\subseteq\bbbn$.
\\
{\bf 1.}
Let $\bbbx$ be a basic set.
Extending Notation \ref{not:PR}, we denote $Rec^{A,{\{0,1\}}^{<\omega}\to\bbbx}$
the family of total functions ${\{0,1\}}^{<\omega}\to\bbbx$ which are recursive
in $A$.
\medskip\\
{\bf 2.}
Let $\bbbx$ be $\bbbn$ or $\bbbz$.
If $f:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbx$, we denote by $\max f$
the function $(\max f)(\ttp)=\max\{f(\ttp,t):t\in\bbbn\}$
(with the convention that $\max X$ is undefined if $X$ is empty
or infinite).
\\
We define the families of functions
\begin{eqnarray*}
\maxprA[{\{0,1\}}^{<\omega}\to\bbbx]
&=&\{\max f:f\in \PR[A,{\{0,1\}}^{<\omega}\times\bbbn\to\bbbx]\}
\\
\maxrA[{\{0,1\}}^{<\omega}\to\bbbx]&=&\{\max f:f\in Rec^{A,{\{0,1\}}^{<\omega}\times\bbbn\to\bbbx}\}
\end{eqnarray*}
In case $A$ is $\ \emptyset\ $, we simply write
$\maxpr[{\{0,1\}}^{<\omega}\to\bbbx]$ and $\maxr[{\{0,1\}}^{<\omega}\to\bbbx]$.
\end{notation}
\begin{proposition}\label{p:maxprmaxr}
Let $A\subseteq\bbbn$. Then
$$(\bbbn,\maxprA[{\{0,1\}}^{<\omega}\to\bbbn])\ \ ,\ \
(\bbbz,\maxprA[{\{0,1\}}^{<\omega}\to\bbbz])\ \ ,\ \
(\bbbn,\maxrA[{\{0,1\}}^{<\omega}\to\bbbn])$$
are self-enumerated representation systems.
\end{proposition}
\begin{proof}
First consider the no oracle case (i.e. $A=\emptyset$).
Conditions i-ii in Def.\ref{def:self} are trivial.
The classical enumeration theorem easily extends
to $\maxpr[{\{0,1\}}^{<\omega}\to\bbbx]$
(cf. \cite{ferbusgrigoKmaxKmin}, Thm.4.1),
proving condition iii for $(\bbbx,\maxpr[{\{0,1\}}^{<\omega}\to\bbbx])$
where $\bbbx$ is $\bbbn$ or $\bbbz$.
\\
It remains to show condition iii for $\maxr[{\{0,1\}}^{<\omega}\to\bbbn]$.
We use the following straightforward fact
(cf. \cite{ferbusgrigoKmaxKmin}, Thm.3.6):
\begin{fact}\label{fact:rec}
{\em If $f\in\PR[{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn]$ and
$$g(\ttp,t)=\max(\{0\}\cup\{f(\ttp,i):i\leq t\ \wedge\
f(\ttp,i) \mbox{ converges in at most $t$ steps}\})$$
then
$g\in Rec^{{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn}$ and $\max g$ is an
extension of $\max f$ with value $0$ on
$domain(\max g)\setminus domain(\max f)$ (which is the set of $\ttp$'s
such that $f(\ttp,t)$ is defined for no $t$).}
\end{fact}
Let $U\in\maxpr[{\{0,1\}}^{<\omega}\to\bbbn]$ be good universal for
$\maxpr[{\{0,1\}}^{<\omega}\to\bbbn]$
and let $V$ be an extension of $U$ in $\maxr[{\{0,1\}}^{<\omega}\to\bbbn]$
given by the above fact.
If $F\in Rec^{{\{0,1\}}^{<\omega}\to\bbbn}$ then it is in $\PR[{\{0,1\}}^{<\omega}\to\bbbn]$
and there exists ${\tt e}$ such that $F(\ttp)=U(comp_U({\tt e},\ttp))$
for all $\ttp\in{\{0,1\}}^{<\omega}$.
Since $V$ extends $U$ and $F$ is total, we also have
$F(\ttp)=V(comp_U({\tt e},\ttp))$.
Thus, $V$ is good universal for $\maxr[{\{0,1\}}^{<\omega}\to\bbbn]$ with the
same associated $comp$ function.
\medskip\\
Relativization to oracle $A$ proves conditions $ii^A,iii^A$
(cf. Def.\ref{def:selfA})
for $(\bbbx,\maxprA[{\{0,1\}}^{<\omega}\to\bbbx])$
and $(\bbbn,\maxrA[{\{0,1\}}^{<\omega}\to\bbbn])$.
We conclude using Prop.\ref{p:selfA}.
\end{proof}
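Fact \ref{fact:rec} above can be played with concretely. In the sketch below (toy data of ours), the partial function $f$ is simulated by a finite table giving, for each $(\ttp,i)$, a value and the number of steps after which it converges; absent entries diverge. The function $g$ of the Fact clamps the search at time $t$ and is therefore total, and $\max g$ extends $\max f$ with value $0$ where $f(\ttp,\cdot)$ is nowhere defined.

```python
# Miniature of Fact (rec): build the total g from a simulated partial f
# and check that max g extends max f with default value 0.

TABLE = {("0", 0): (3, 2), ("0", 1): (7, 5)}   # (p, i) -> (value, steps);
                                               # f("1", .) diverges everywhere

def g(p, t):
    """Total: max of 0 and all f(p, i), i <= t, converging within t steps."""
    seen = [v for (q, i), (v, s) in TABLE.items()
            if q == p and i <= t and s <= t]
    return max([0] + seen)

def max_g(p, horizon=50):
    """Stand-in for the sup over all t (valid: g(p, .) is eventually constant
    on this finite table)."""
    return max(g(p, t) for t in range(horizon))

assert max_g("0") == 7   # agrees with (max f)("0") = max{3, 7}
assert max_g("1") == 0   # the extension's default value where max f is undefined
```

The `horizon` cutoff is an artefact of the simulation; in the Fact itself $\max g$ ranges over all of $\bbbn$.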
\begin{remark}\label{rk:minPR}$\\ $
{\bf 1.}
Fact \ref{fact:rec} implies that
$\maxpr[{\{0,1\}}^{<\omega}\to\bbbn]$ and $\maxr[{\{0,1\}}^{<\omega}\to\bbbn]$ contain the
same total functions. However, considering partial functions,
the inclusion $\maxr[{\{0,1\}}^{<\omega}\to\bbbn]\subset\maxpr[{\{0,1\}}^{<\omega}\to\bbbn]$
is strict (cf. \cite{ferbusgrigoKmaxKmin} Thm.3.6, point 1).
\medskip\\
{\bf 2.}
Let $\bbbx$ be $\bbbn$ or $\bbbz$ and let
$\minprA[{\{0,1\}}^{<\omega}\to\bbbx],\minrA[{\{0,1\}}^{<\omega}\to\bbbx]$ be defined
with $\min$ instead of $\max$ as in Point 2 of the above definition
(with the same convention that $\min\emptyset$ is undefined).
Then $(\bbbx,\minprA[{\{0,1\}}^{<\omega}\to\bbbx])$ is also a
self-enumerated representation system.
\\
We shall not use any $\min$ based system in this paper
because they have no simple set theoretical counterparts.
\medskip\\
{\bf 3.}
None of the systems $(\bbbz,\maxrA[{\{0,1\}}^{<\omega}\to\bbbz])$,
$(\bbbn,\minr[{\{0,1\}}^{<\omega}\to\bbbn])$ and
$(\bbbz,\minr[{\{0,1\}}^{<\omega}\to\bbbz])$ is self-enumerated
(cf. \cite{ferbusgrigoKmaxKmin}, Thm.4.3).
\end{remark}
\subsection{Kolmogorov complexities $\kmax[], \kmax[\emptyset'],...$}
\label{ss:Kmax}
We apply Def.\ref{def:Kself} to the self-enumerated representation
systems considered in \S\ref{ss:maxpr}.
\begin{definition}[Kolmogorov complexities]
\label{def:Kmaxpr}
Let $\bbbx$ be $\bbbn$ or $\bbbz$. We denote
$\ \kmax[A,\bbbx]:\bbbx\to\bbbn\ $ the Kolmogorov
complexity of the self-enumerated representation system
$(\bbbx,\maxprA[{\{0,1\}}^{<\omega}\to\bbbx])$.
\medskip\\
In case $\bbbx=\bbbn$, we omit the superscript $\bbbn$.
\medskip\\
In case $\bbbx=\bbbn$ and $A$ is $\ \emptyset\ $
we simply write $\ \kmax[]$.
\end{definition}
Using Remark \ref{rk:Kphi}, point 2, and Fact \ref{fact:rec},
it is not hard to prove the following result
(cf. \cite{ferbusgrigoKmaxKmin}, Prop.6.3).
\begin{proposition}\label{p:Kinfini}
Let $A\subseteq\bbbn$.
Then $\kmax[A]$ is also the Kolmogorov complexity of the
self-enumerated system $(\bbbn,\maxrA[{\{0,1\}}^{<\omega}\to\bbbn])$.
I.e.
$$\ K^\bbbn_{\maxrA[{\{0,1\}}^{<\omega}\to\bbbn]}
=K^\bbbn_{\maxprA[{\{0,1\}}^{<\omega}\to\bbbn]}$$
\end{proposition}
\begin{remark}
The above proposition has no analog with $\bbbz$ since
$\maxrA[{\{0,1\}}^{<\omega}\to\bbbz]$ is not self-enumerated
(cf. Remark \ref{rk:minPR}, point 3).
\end{remark}
\subsection{$\maxr[{\{0,1\}}^{<\omega}\to\bbbn]$ and $\maxpr[{\{0,1\}}^{<\omega}\to\bbbn]$
and infinite computations}
\label{ss:InfiniteComp}
The following simple result gives a machine characterization
of functions in $\maxrA[{\{0,1\}}^{<\omega}\to\bbbn]$
(resp. $\maxprA[{\{0,1\}}^{<\omega}\to\bbbn]$) which will be used in the proof
of Thm.\ref{thm:index}.
\begin{definition}\label{def:Turing}
Let ${\cal M}$ be an oracle Turing machine such that
\begin{enumerate}
\item
the alphabet of the input tape is $\{0,1\}$,
plus an end-marker to delimit the input,
\item
the output tape is write-only and has unary alphabet $\{1\}$,
\item
there is no halting state
(resp. but there are some distinguished states).
\end{enumerate}
The partial function $F^A:{\{0,1\}}^{<\omega}\to\bbbn$ computed by ${\cal M}$
with oracle $A$ through infinite computation
(resp. with distinguished states) is defined as follows:
$F^A(\ttp)$ is defined with value $n$ if and only if the
infinite computation (i.e. which lasts forever) of ${\cal M}$
on input $\ttp$ outputs exactly $n$ letters $1$
(resp. and at some step the current state is a distinguished one).
\end{definition}
\begin{proposition}\label{p:Turing}
Let $A\subseteq\bbbn$ be an oracle.
A function $F:{\{0,1\}}^{<\omega}\to\bbbn$ is in $\maxrA[{\{0,1\}}^{<\omega}\to\bbbn]$
(resp. $\maxprA[{\{0,1\}}^{<\omega}\to\bbbn]$)
if and only if there exists an oracle Turing machine ${\cal M}$
which, with oracle $A$, computes $F$ through infinite computation
(resp. with distinguished states)
in the sense of Def.\ref{def:Turing}.
\end{proposition}
\begin{proof}
$\Leftarrow$. The function associated to an oracle Turing machine
through infinite computation (resp. with distinguished states)
is clearly $\max f$ where $f(\ttp,t)$ is the current output at step
$t$ (resp. and is undefined while the machine has not been in some
distinguished state).
\medskip\\
$\Rightarrow$. Suppose $f:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$ is total
(resp. partial) $A$-recursive and set
$$X(\ttp,t)=\{f(\ttp,t'):t'<t\ \wedge\ f(\ttp,t')
\mbox{ converges in $\leq t$ steps}\}$$
Observe that $X(\ttp,0)=\emptyset$, so that the following is indeed
an $A$-recursive definition:
\begin{eqnarray*}
\widetilde{f}(\ttp,t)&=&\left
\{\begin{array}{ll}
0\mbox{ (resp. undefined)}&\mbox{if }X(\ttp,t)=\emptyset
\\
\widetilde{f}(\ttp,t-1)+1&
\mbox{if $X(\ttp,t)\neq\emptyset\ \wedge\
\widetilde{f}(\ttp,t-1)<\max X(\ttp,t)$}
\\
\widetilde{f}(\ttp,t-1)&\mbox{otherwise}
\end{array}\right.
\end{eqnarray*}
Then $\max\widetilde{f}=\max f$. Also, the unary representation of
$\widetilde{f}(\ttp,t)$ can be simply interpreted as the current
output at step $t$ of the infinite computation
(resp. with distinguished states)
of an oracle Turing machine with input $\ttp$, so that
$\max\widetilde{f}$ is the function associated to that machine.
\end{proof}
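The $\widetilde{f}$ construction of the proof is easy to animate: the unary output counter grows by at most one letter per step while it is below the best value seen so far, so the written output at step $t$ is exactly $\widetilde{f}(\ttp,t)$. The convergence table below is invented toy data (for a single fixed $\ttp$).

```python
# Miniature of the tilde-f construction: a write-only unary output that
# climbs one letter at a time towards max X(p, t).

F = {0: (2, 1), 1: (5, 3)}     # i -> (f(p, i), steps-to-converge); others diverge

def X(t):
    """Values of f(p, i) with i < t converging within t steps."""
    return {v for i, (v, s) in F.items() if i < t and s <= t}

def tilde_f(t):
    """Current length of the unary output at step t."""
    if not X(t):
        return 0
    prev = tilde_f(t - 1)
    return prev + 1 if prev < max(X(t)) else prev

# The output never shrinks, grows by at most 1 per step, and its
# supremum equals max f = max{2, 5} = 5:
assert all(tilde_f(t) - tilde_f(t - 1) in (0, 1) for t in range(1, 12))
assert max(tilde_f(t) for t in range(12)) == 5
```

The one-letter-per-step discipline is what makes the output realizable on a write-only unary tape.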
\subsection{$\maxpr[{\{0,1\}}^{<\omega}\to\bbbn]$ and the jump}
\label{ss:Kmaxversusjump}
The following proposition is easy.
\begin{proposition}\label{p:maxprANDjump}
Let $A\subseteq\bbbn$ and let $\bbbx$ be $\bbbn$ or $\bbbz$.
Then $$\maxprA[{\{0,1\}}^{<\omega}\to\bbbx]\subset\PR[A',{\{0,1\}}^{<\omega}\to\bbbx]$$
\end{proposition}
\begin{proof}
1. Let $f:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbx$ be partial $A$-recursive. A partial
$A'$-recursive definition of $(\max f)(\ttp)$ is as follows:
\begin{enumerate}
\item[i.]
First, check whether there exists $t$ such that $f(\ttp,t)$ is
defined.
\\If the check is negative then $(\max f)(\ttp)$ is undefined.
\item[ii.]
If check i is positive then start successive steps of the
following process.
\\- At step $t$, check whether $f(\ttp,t)$ is defined,
\\- if defined, compute its value,
\\- and check whether there exists $u>t$ such that $f(\ttp,u)$
is greater than the maximum value computed up to that step.
\item[iii.]
If at some step the last check in ii is negative then halt and
output the maximum value computed up to now.
\end{enumerate}
Clearly, oracle $A'$ allows for the checks in i and ii.
Also, the above process halts if and only if $f(\ttp,t)$ is
defined for some $t$ and $\{f(\ttp,t):t\in\bbbn\}$ is bounded,
i.e. if and only if $(\max f)(\ttp)$ is defined.
In that case it outputs exactly $(\max f)(\ttp)$.
\medskip\\
2. To see that the inclusion is strict, observe that the graph
of any function in $\maxprA[{\{0,1\}}^{<\omega}\to\bbbx]$ is
$\Sigma^{0,A}_1\wedge\Pi^{0,A}_1$ since
$$y=(\max f)(\ttp)\ \Leftrightarrow\
((\exists t\ f(\ttp,t)=y)\ \wedge\
\neg(\exists u\ \exists z>y\ f(\ttp,u)=z))$$
In contrast, the graph of a function in $\PR[A',{\{0,1\}}^{<\omega}\to\bbbx]$ can be
$\Sigma^{0,A'}_1$ and not $\Delta^{0,A'}_1$, i.e.
$\Sigma^{0,A}_2$ and not $\Delta^{0,A}_2$.
\end{proof}
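As an illustration, the procedure i--iii above can be sketched in Python, with the two $A'$-oracle checks supplied by the caller as predicates. The names \texttt{defined\_somewhere} and \texttt{bounded\_above} are ours, not part of the formal development; they stand in for the jump oracle, and the sketch loops forever exactly when $(\max f)(\ttp)$ is undefined by unboundedness.

```python
def max_with_jump(f_p, defined_somewhere, bounded_above):
    """Sketch of the A'-recursive procedure computing (max f)(p).

    f_p(t) returns f(p, t), or None when f(p, t) is undefined.
    The two predicates stand in for the jump oracle A':
      defined_somewhere()  <=> there exists t with f(p, t) defined,
      bounded_above(t, m)  <=> no u > t has f(p, u) > m.
    Returns (max f)(p); returns None (check i negative) or loops
    forever (unbounded values) when (max f)(p) is undefined."""
    if not defined_somewhere():          # check i
        return None
    best = None
    t = 0
    while True:                          # successive steps of ii
        v = f_p(t)
        if v is not None:
            best = v if best is None else max(best, v)
        if best is not None and bounded_above(t, best):
            return best                  # check iii: halt and output
        t += 1
```

On a finite trace of values the predicates can be implemented by exhaustive search over a horizon, which is how the sketch is exercised below.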
In the vein of Prop.\ref{p:maxprANDjump}, let's mention the
following result, cf. \cite{becherchaitindaicz}
(where the proof is for $K^{\infty}$,
cf. start of \S\ref{s:InfiniteComp} above)
and \cite{ferbusgrigoKmaxKmin} Prop.7.2-3 \& Cor.7.7.
\begin{proposition}\label{p:degrees}
Let $A\subseteq\bbbn$.
\medskip\\
{\bf 1.}
$K^A$ and $\kmax[A]$ are recursive in $A'$.
\medskip\\
{\bf 2.}
$K^A >_{\rm ct} \kmax[A] >_{\rm ct} K^{A'}$.
\end{proposition}
\subsection{The $\Delta$ operation on $\maxpr[{\{0,1\}}^{<\omega}\to\bbbn]$
and the jump}
\label{ss:DeltaMax}
The following variant of Prop.\ref{p:maxprANDjump} gives a normal form
for partial $A'$-recursive $\bbbz$-valued functions.
We shall use it in \S\ref{s:card}-\ref{s:index}.
\begin{theorem}\label{thm:Deltamax}
Let $A\subseteq\bbbn$. Then
$$\PR[A',{\{0,1\}}^{<\omega}\to\bbbz]=\Delta(\maxprA[{\{0,1\}}^{<\omega}\to\bbbn])
=\Delta(\maxrA[{\{0,1\}}^{<\omega}\to\bbbn])$$
Thus, every partial $A'$-recursive function is the difference of
two functions in $\maxrA[]$ (cf. Notation \ref{not:maxpr}).
\end{theorem}
Before turning to the proof of Thm.\ref{thm:Deltamax}, let's recall
two well-known facts about oracle computation and approximation
of the jump.
\begin{lemma}\label{l:oracleCV}
Let $(B_t)_{t\in\bbbn}$ be a sequence of subsets of $\bbbn$ which
converges pointwise to $B\subseteq\bbbn$, i.e.
$$\forall n\ \ \exists t_n\ \ \forall t\geq t_n\ \ \
B_t\cap\{0,1,...,n\}=B\cap\{0,1,...,n\}$$
Let $\bbbx,\mathbb{Y}$ be basic sets and let $\psi:\bbbx\to\mathbb{Y}$ be a
partial $B$-recursive function computed by some oracle Turing
machine ${\cal M}$ with oracle $B$. Let $\ttx\in\bbbx$.
\\
Then, $\psi(\ttx)$ is defined if and only if there exists $t_\ttx$
such that
\begin{enumerate}
\item[i.]
the computation of ${\cal M}$ on input $\ttx$ with oracle
$B_{t_\ttx}$ halts in at most $t_\ttx$ steps,
\item[ii.]
for all $t\geq t_\ttx$ the computation of ${\cal M}$ on input $\ttx$
with oracle $B_t$ is step by step exactly the same as that with
oracle $B_{t_\ttx}$ (in particular, it asks the same questions to
the oracle, gets the same answers and halts at the same computation
step $\leq t_\ttx$).
\end{enumerate}
\end{lemma}
\begin{lemma}\label{l:approx}
Let $A\subseteq\bbbn$ and let $A'\subseteq\bbbn$ be the jump of $A$.
There exists a total $A$-recursive sequence
$(Approx(A',t))_{t\in\bbbn}$
of subsets of $\bbbn$ which is monotone increasing with respect to
set inclusion and which has union $A'$.
In particular, this sequence converges pointwise to $A'$.
\end{lemma}
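A standard way to realize Lemma \ref{l:approx} is the stage-$t$ approximation that puts $e$ into the jump once the $e$-th machine is seen to halt within $t$ steps. The Python sketch below illustrates this on a hypothetical \texttt{halting\_time} table standing in for a real enumeration of oracle machines (an assumption of the toy model, not data from the paper).

```python
# Toy model: machine e "halts on input e" after halting_time[e]
# steps, or never when the entry is None.  (Hypothetical data
# standing in for a real enumeration of machines.)
halting_time = {0: 3, 1: None, 2: 7, 3: 1, 4: None}

def approx_jump(t):
    """Stage-t approximation of the (toy) jump: e is put in once
    its halting is witnessed within t steps.  The sequence is
    monotone increasing in t and its union is the jump itself,
    as required by Lemma l:approx."""
    return {e for e, h in halting_time.items() if h is not None and h <= t}

# The (toy) jump: the set of e whose machine eventually halts.
jump = {e for e, h in halting_time.items() if h is not None}
```

Monotonicity and pointwise convergence are then immediate to check.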
We can now prove Thm.\ref{thm:Deltamax}.
\medskip\\
{\em Proof of Thm.\ref{thm:Deltamax}.}\\
Using Prop.\ref{p:maxprANDjump} and Prop.\ref{p:deltaPR}, we get
$$\Delta(\maxrA[{\{0,1\}}^{<\omega}\to\bbbn])
\subseteq\Delta(\maxprA[{\{0,1\}}^{<\omega}\to\bbbn])
\subseteq\Delta(\PR[A',{\{0,1\}}^{<\omega}\to\bbbn])=\PR[A',{\{0,1\}}^{<\omega}\to\bbbz]$$
Since $\maxrA[{\{0,1\}}^{<\omega}\to\bbbn]$ is closed under sums, we have
$\Delta(\Delta(\maxrA[{\{0,1\}}^{<\omega}\to\bbbn]))
=\Delta(\maxrA[{\{0,1\}}^{<\omega}\to\bbbn])$.
Thus, to get the desired equality, it suffices to prove the inclusion
$$\PR[A',{\{0,1\}}^{<\omega}\to\bbbn]
\subseteq\Delta(\maxrA[{\{0,1\}}^{<\omega}\to\bbbn])$$
Let ${\cal M}$ be an oracle Turing machine with inputs in ${\{0,1\}}^{<\omega}$,
which, with oracle $A'$, computes the partial $A'$-recursive function
$\varphi^{A'}:{\{0,1\}}^{<\omega}\to\bbbn$.
\\
To prove that $\varphi^{A'}$ is in $\Delta(\maxrA[{\{0,1\}}^{<\omega}\to\bbbn])$,
we define total $A$-recursive functions
$f,g:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$ which are (non strictly)
monotone increasing and such that $\varphi^{A'}=\max f-\max g$.
\medskip\\
The idea to get $f,g$ is as follows.
We consider $A$-recursive approximations of oracle $A'$ (as given
by Lemma \ref{l:approx}) and use them as fake oracles.
Function $f$ is obtained by letting ${\cal M}$ run with the fake
oracles, restarting its computation each time some better
approximation of $A'$ shows that the previous fake oracle has given
an incorrect answer.
Function $g$ collects all the outputs of the computations which
have been recognized as incorrect in the computing process for $f$.
\medskip\\
We now formally define $f,g$.
\\
First, since we do not care about computation time and space,
we can suppose, without loss of generality, that at any step $t$,
${\cal M}$ asks the oracle about the integer $t$ and writes down
the oracle answer on the $t$-th cell of some dedicated tape.
\\
Consider $t+1$ steps of the computation of ${\cal M}$ on input
$\ttp$ with oracle $Approx(A',t)$ (cf. Lemma \ref{l:approx}).
We denote by ${\cal C}_{\ttp,t+1}$ this limited computation.
We say that ${\cal C}_{\ttp,t+1}$ halts if ${\cal M}$ (with that
fake oracle) halts in at most $t+1$ steps.
\\
We denote by $output({\cal C}_{\ttp,t})$ the current value (which is
in $\bbbz$) of the output tape after step $t$.
The $A$-recursive definition of $f,g$ is as follows.
\begin{enumerate}
\item[i.]
$f(\ttp,0)=g(\ttp,0)=0$
\item[ii.]
Suppose
$Approx(A',t+1)\cap\{0,...,t\}= Approx(A',t)\cap\{0,...,t\}$.
Then, up to the halting step of ${\cal C}_{\ttp,t}$ or up to
step $t$ in case ${\cal C}_{\ttp,t}$ does not halt,
both computations ${\cal C}_{\ttp,t},{\cal C}_{\ttp,t+1}$ are
stepwise identical.
\begin{enumerate}
\item
If ${\cal C}_{\ttp,t}$ halts then so does ${\cal C}_{\ttp,t+1}$
at the same step. And both computations have the same output.
\\In that case, we set
$f(\ttp,t+1)=f(\ttp,t)\ ,\ g(\ttp,t+1)=g(\ttp,t)$.
\item
If ${\cal C}_{\ttp,t}$ does not halt then let
$\delta_{t+1}=output({\cal C}_{\ttp,t+1})-output({\cal C}_{\ttp,t})$,
and set
\medskip\\
$\begin{array}{rcl}
f(\ttp,t+1)&=&f(\ttp,t)+1+\max(0,\delta_{t+1})\\
g(\ttp,t+1)&=&g(\ttp,t)+1+\max(0,-\delta_{t+1})
\end{array}$
\medskip\\
i.e. we add $1$ to both $f$ and $g$, and additionally $|\delta_{t+1}|$
to $f$ or to $g$ according to the sign of $\delta_{t+1}$.
\end{enumerate}
\item[iii.]
Suppose
$Approx(A',t+1)\cap\{0,...,t\}\neq Approx(A',t)\cap\{0,...,t\}$.
Since these approximations are monotone increasing, we necessarily
have $Approx(A',t)\cap\{0,...,t\}\neq A'\cap\{0,...,t\}$.
\\
Thus, the fake oracle in ${\cal C}_{\ttp,t}$ has given answers
which are not compatible with $A'$. In that case, we set
\medskip\\
$\begin{array}{rcl}
f(\ttp,t+1)&=&
f(\ttp,t)+g(\ttp,t)+1+\max(0,output({\cal C}_{\ttp,t+1}))
\\
g(\ttp,t+1)&=&
f(\ttp,t)+g(\ttp,t)+1+\max(0,-output({\cal C}_{\ttp,t+1}))
\end{array}$
\medskip\\
i.e. we raise $f,g$ to a common value
(namely $f(\ttp,t)+g(\ttp,t)+1$) and then add
$|output({\cal C}_{\ttp,t+1})|$ to $f$ or to $g$ according to the sign
of $output({\cal C}_{\ttp,t+1})$.
\end{enumerate}
From the above inductive definition, we see that,
for each $t>0$,
$$f(\ttp,t)-g(\ttp,t)=output({\cal C}_{\ttp,t})$$
{\em Suppose $\varphi^{A'}(\ttp)$ is defined.}\\
Applying Lemmas \ref{l:oracleCV}, \ref{l:approx}, we see that
there exist $s_\ttp\leq t_\ttp$ such that
\\- ${\cal M}$, on input $\ttp$, with oracle $A'$, halts in
$s_\ttp$ steps,
\\-
$Approx(A',t_\ttp)\cap\{0,...,t_\ttp\}=A'\cap\{0,...,t_\ttp\}$.
\\
Thus, for all $t\geq t_\ttp$, $f(\ttp,t)=f(\ttp,t_\ttp)$
and $g(\ttp,t)=g(\ttp,t_\ttp)$, so that
$(\max f)(\ttp)-(\max g)(\ttp)
=f(\ttp,t_\ttp)-g(\ttp,t_\ttp)=\varphi^{A'}(\ttp)$.
\medskip\\
{\em Suppose $\varphi^{A'}(\ttp)$ is not defined.}\\
Observe that, each time the ``fake'' computation ${\cal C}_{\ttp,t}$
with oracle $Approx(A',t)$ does not halt or appears not to be the
``right'' one with oracle $A'$
(because $Approx(A',t+1)\cap\{0,...,t\}$ differs from
$Approx(A',t)\cap\{0,...,t\}$),
we strictly increase both $f,g$
(this is why we put $+1$ in the equations of iib and iii).
\\
Applying Lemmas \ref{l:oracleCV}, \ref{l:approx}, we see that,
if $\varphi^{A'}(\ttp)$ is not defined then ${\cal C}_{\ttp,t}$
does not halt for infinitely many $t$'s, so that
$f(\ttp,t)$ and $g(\ttp,t)$ increase infinitely often.
Therefore, $(\max f)(\ttp)$ and $(\max g)(\ttp)$ are both undefined,
and so is their difference.
\medskip\\
This proves that $\varphi^{A'}=\max f-\max g$.
Since the sequence $(Approx(A',t))_{t\in\bbbn}$ is $A$-recursive,
so are $f,g$. Thus, $\max f,\max g$ are in $\maxrA[{\{0,1\}}^{<\omega}\to\bbbn]$
and their difference $\varphi^{A'}$ is in
$\Delta(\maxrA[{\{0,1\}}^{<\omega}\to\bbbn])$.
\hfill{$\Box$}
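The inductive clauses i--iii can be exercised on a toy run. In the sketch below (illustrative names, not the paper's formalism), each step supplies the current output of the fake computation, whether it has halted, and whether the oracle approximation changed; one then checks the invariant $f(\ttp,t)-g(\ttp,t)=output({\cal C}_{\ttp,t})$ together with monotonicity of $f,g$.

```python
def build_f_g(steps):
    """steps[t] = (output, halted, oracle_changed) describes the fake
    computation C_{p,t+1}; outputs may be negative (Z-valued).
    Returns the traces of f and g following clauses i-iii of the proof."""
    f, g, out_prev = [0], [0], 0         # clause i: f(p,0) = g(p,0) = 0
    halted = False
    for out, halts, changed in steps:
        if changed:                      # clause iii: restart on a better oracle
            base = f[-1] + g[-1] + 1
            f.append(base + max(0, out))
            g.append(base + max(0, -out))
            halted = False               # the restarted computation runs afresh
        elif halted:                     # clause ii.a: computation already halted
            f.append(f[-1])
            g.append(g[-1])
        else:                            # clause ii.b: split delta by its sign
            delta = out - out_prev
            f.append(f[-1] + 1 + max(0, delta))
            g.append(g[-1] + 1 + max(0, -delta))
        out_prev = out
        halted = halted or halts
    return f, g
```

Both $f$ and $g$ strictly increase at every non-halted step, which is exactly the role of the $+1$'s in clauses ii.b and iii.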
\section{Abstract representations and effectivizations}
\label{s:abstract}
\subsection{Some arithmetical representations of $\bbbn$}
\label{ss:arithm}
As pointed out in \S\ref{ss:mainResults}, abstract entities such as
numbers can be represented in many different ways.
In fact, each representation illuminates some particular role
and/or property, i.e. some possible semantics chosen in order to
efficiently access special operations or stress special properties
of integers.
\medskip\\
Usual arithmetical representations of $\bbbn$ using words over a
digit alphabet can be viewed as a (total) surjective
(not necessarily injective) function $R:C\rightarrow\bbbn$ where
$C$ is some simple free algebra or a quotient of some free algebra.
\\
Such representations are the ``degree zero'' of abstraction
for representations and, as expected, their associated Kolmogorov
complexities all coincide (cf. Thm.\ref{thm:recrep} below).
\begin{example}[Base $k$ representations]\label{ex:base}$\\ $
{\bf 1.}
Integers in unary representation correspond to elements of the
free algebra built up from one generator and one unary function,
namely $0$ and the successor function $x\mapsto x+1$.
The associated function $R:\{1\}^*\to\bbbn$ is simply the length
function.
\medskip\\
{\bf 2.}
The various base $k$ (with $k\geq 2$) representations of
integers also involve term algebras, not necessarily free.
They differ by the set $A\subset\bbbn$ of digits they use but all
are based on the usual interpretation $R:A^*\to\bbbn$ such that
$R(a_n \ldots a_1 a_0)=\sum_{i=0,\ldots,n} a_i k^i$,
which, written \`a la Horner,
$$k(k(\ldots k (k a_n + a_{n-1}) + a_{n-2})\ldots) + a_1) + a_0$$
is a composition of applications
$S_{a_0} \circ S_{a_1} \circ \ldots \circ S_{a_n}(0)$
where $S_a : x \mapsto kx+a$.
If a representation uses digits $a\in A$ then it corresponds to
the algebra generated by $0$ and the $S_a$'s where $a\in A$.
\begin{enumerate}
\item[i.] The $k$-adic representation uses digits $1,2,\ldots,k$
and corresponds to a free algebra built up from one
generator and $k$ unary functions.
\item[ii.] The usual $k$-ary representation uses digits
$0,1,\ldots,k-1$ and corresponds to the quotient
of a free algebra built up from one generator and
$k$ unary functions,
namely $0$ and the $S_a$'s where $a=0,1,\ldots, k-1$,
by the relation $S_0(0)=0$.
\item[iii.] Avizienis base $k$ representation uses digits
$-k+1,\ldots,-1,0,1,\ldots,k-1$
(it is a highly redundant representation used in computers
to perform additions without carry propagation) and
corresponds to the quotient of the free algebra built up from
one generator and $2k-1$ unary functions,
namely $0$ and the $S_a$'s where
$a=-k+1,\ldots,-1,0,1,\ldots, k-1$,
by the relations
$\ \forall x\ (S_{-k+i}\circ S_{j+1}(x)=S_i\circ S_j(x))\ $
where $-k<j<k-1$ and $0<i<k$.
\end{enumerate}
\end{example}
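The Horner reading of $R$ and the bijectivity of the $k$-adic representation are easy to check mechanically. The Python sketch below (an illustration only; digit lists are written most significant first, as in the example above) evaluates $R$ and inverts the $k$-adic representation.

```python
def R(digits, k):
    """Horner evaluation: R(a_n ... a_1 a_0) = sum_i a_i k^i,
    digits given most significant first."""
    value = 0
    for a in digits:
        value = k * value + a          # value -> S_a(value) = k*value + a
    return value

def k_adic(n, k):
    """k-adic representation: digits in {1,...,k}.  Unlike the usual
    k-ary system it is a bijection between digit strings and N,
    the empty string representing 0."""
    digits = []
    while n > 0:
        d = n % k
        if d == 0:                     # residue 0 is written as digit k
            d = k
        digits.append(d)
        n = (n - d) // k
    return digits[::-1]
```

For instance $4$ in $2$-adic notation is the string $12$, since $1\cdot 2 + 2 = 4$.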
\noindent
Somewhat exotic representations of integers can also be associated
to deep results in number theory.
\begin{example}\label{ex:exotic}$\\ $
{\bf 1.}
$R: \bbbn^4\to \bbbn$ such that $R(x,y,z,t)=x^2+y^2+z^2+t^2$
is a representation based on Lagrange's four squares
theorem.
\medskip\\
{\bf 2.}
$R:(Prime\cup\{0\})^7\to \bbbn$ such that
$R(x_1,\ldots ,x_7)=x_1+\ldots+x_7$
is a representation based on Schnirelman's theorem (1931) in its
last improved version obtained by Ramar\'e, 1995 \cite{ramare},
which ensures that every even number is the sum of at most 6 prime
numbers (hence every number is the sum of at most 7 primes).
\end{example}
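Surjectivity of the four-squares representation is easy to test by brute force for small inputs; the naive Python search below is only meant as an illustration of the representation, not as a sensible algorithm.

```python
from itertools import product

def lagrange_rep(n):
    """Finds (x, y, z, t) with x^2 + y^2 + z^2 + t^2 = n by brute
    force.  Lagrange's four squares theorem guarantees a solution
    always exists, i.e. R(x, y, z, t) = x^2+y^2+z^2+t^2 is a
    surjection N^4 -> N."""
    bound = int(n ** 0.5) + 1          # each coordinate is at most sqrt(n)
    for xyzt in product(range(bound), repeat=4):
        if sum(c * c for c in xyzt) == n:
            return xyzt
    return None                        # unreachable, by Lagrange's theorem
```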
\noindent
Such representations appear in the study of the expressive
power of some weak arithmetics.
For instance, the representation as sums of 7 primes allows for
a very simple proof of the definability of multiplication from
addition and the divisibility predicate
(a result in fact already valid with successor and divisibility
alone; Julia Robinson, 1948 \cite{robinson}).
\subsection{Abstract representations} \label{ss:abstract}
Foundational questions, going back to Russell, \cite{russell08} 1908,
and Church, \cite{church33} 1933, led to
quite different representations of $\bbbn$: set theoretical
representations involving abstract sets and functionals much more
complex than the integers they represent.
\medskip\\
We shall consider the following simple and general notion.
\begin{definition}[Abstract representations]\label{def:rep}$\\ $
A representation of an infinite set $E$ is a pair $(C,R)$
where $C$ is some (necessarily infinite) set and
$R : C \rightarrow E$ is a {\em surjective partial} function.
\end{definition}
\begin{remark}$\\ $
{\bf 1.}
Though $R$ really operates on the sole subset $domain(R)$,
the underlying set $C$ is quite significant in the
effectivization process which is necessary to get a self-enumerated
system and then an associated Kolmogorov complexity.
\medskip\\
{\bf 2.}
We shall consider representations with arbitrarily complex
domains in the Post hierarchy
(cf. Prop.\ref{p:complexEffCard}, \ref{p:complexEffIndex},
\ref{p:complexEffChurch}, and coming papers).
In fact, the only cases in this paper where $R$
is a total function are the usual recursive representations.
\medskip\\
{\bf 3.}
Representations can also involve a proper class $C$
(cf. Rk. \ref{rk:cardRep}).
However, we shall stick to the case where $C$ is a set.
\end{remark}
\subsection{Effectivizing representations: why?} \label{ss:why}
Turning to a computer science (or recursion theoretic) point of
view, there are some objections to considering abstract
sets, functions and functionals as we did in \S\ref{ss:mainResults}
and \ref{ss:abstract}:
\begin{itemize}
\item We cannot apprehend abstract sets, functions and
functionals themselves, but only programs to compute them
(if they are computable in some sense).
\item Moreover, programs dealing with sets, functions and
functionals have to go through some intensional
representation of these objects in order to be able to
compute with such objects.
\end{itemize}
To get effectiveness, we turn from set theory to computability
theory. We shall do that in a somewhat abstract way using
self-enumerated representation systems (cf. Def.\ref{def:self}).
\\
We shall consider higher order representations and shall
``effectivize" abstract sets, functions and functionals via
recursively enumerable sets, partial recursive functions or
$\max$ of total or partial recursive functions,
and partial computable functionals.
\subsection{Effectivizations of representations and associated
Kolmogorov complexities} \label{ss:effRep}
A formal representation of an integer $n$ is a finite object
(in general a word) which describes some characteristic property
of $n$ or of some abstract object which characterizes $n$.
To effectivize a representation $\ R:C\to E\ $, we shall proceed
as follows:
\begin{enumerate}
\item
Restrict the set $C$ to a subfamily $D$ of elements which,
in some sense, are computable or partial computable.
Of course, we want the restriction of $R$ to $D$ to be still
surjective.
\item
Consider a self-enumerated representation system for $D$.
\end{enumerate}
This leads to the following definition.
\begin{definition}\label{def:effRep}$\\ $
{\bf 1.}
A set $D$ is adapted to the representation
$\ R:C\to E\ $ if $D\subseteq C$ and the partial function
$\ R\!\upharpoonright \! D:D\to E\ $ is still surjective.
\medskip\\
{\bf 2.}
{\bf [Effectivization]}
An effectivization of the representation $\ R:C\to E\ $ of the set
$E$ is any self-enumerated representation system $(D,{\cal F})$ for
a domain $D$ adapted to the representation $\ R:C\to E\ $.
\end{definition}
Using the Composition Lemma \ref{l:circ}, we immediately get
\begin{proposition}
Let $\ R:C\to E$ be a representation of $E$ and $(D,{\cal F})$ be
some effectivization of $R$.
Then $(E,(R\!\upharpoonright \! D)\circ{\cal F})$ is a self-enumerated
representation system and the associated Kolmogorov complexity
$K^E_{(R\!\upharpoonright \! D)\circ{\cal F}}$ (cf. Def.\ref{def:Kself})
satisfies
$$K^E_{(R\!\upharpoonright \! D)\circ{\cal F}}(x)
=\min\{K^D_{\cal F}(y):R(y)=x\}\ \mbox{ for all }x\in E$$
\end{proposition}
\begin{remark}
Whereas abstract representations are quite natural and
conceptually simple, the functions $\ (R\!\upharpoonright \! D)\circ F\ $,
for $F\in{\cal F}$, in the self-enumerated representation families
of their effectivized versions may be quite complex.
In the examples we shall consider, their domains involve levels
$2$ or $3$ of the arithmetical hierarchy.
In particular, such representations are not Turing
reducible to one another.
\end{remark}
\subsection{Partial recursive representations}\label{ss:recrep}
We already mentioned in \S\ref{ss:arithm} that all usual arithmetic
representations lead to the same Kolmogorov complexity (up to an
additive constant).
The following result extends this assertion to all partial recursive
representations.
\begin{theorem}\label{thm:recrep}
We keep the notations of Notations \ref{not:PR} and
Def.\ref{def:Kself}.
\\
Let $A\subseteq\bbbn$ be an oracle.
If $C,E$ are basic sets and $R:C\to E$ is partial recursive
(resp. partial $A$-recursive) then
\medskip\\
$\begin{array}{rcllrcll}
R\circ \PR[{\{0,1\}}^{<\omega}\to C]&=&\PR[{\{0,1\}}^{<\omega}\to E]
&\mbox{(resp. }
&R\circ \PR[A,{\{0,1\}}^{<\omega}\to C]&=&\PR[A,{\{0,1\}}^{<\omega}\to E]
&\mbox{)}
\medskip\\
K^E_{R\circ \PR[{\{0,1\}}^{<\omega}\to C]}&=&K_E
&\mbox{(resp. }
&K^E_{R\circ \PR[A,{\{0,1\}}^{<\omega}\to C]}&=&K^A_E
&\mbox{)}
\end{array}$
\medskip\\
Thus, all Kolmogorov complexities associated to partial recursive
(resp. partial $A$-recursive) representations of $E$ coincide with
the usual (resp. $A$-oracular) Kolmogorov complexity on $E$.
\end{theorem}
\begin{proof}
It suffices to prove that
$$R\circ \PR[A,{\{0,1\}}^{<\omega}\to C] = \PR[A,{\{0,1\}}^{<\omega}\to E]$$
Inclusion
$R\circ \PR[A,{\{0,1\}}^{<\omega}\to C] \subseteq \PR[A,{\{0,1\}}^{<\omega}\to E]$
is trivial.
For the other inclusion, we use the fact that $R:C\to E$ is
surjective partial $A$-recursive.
\\
First, define a partial $A$-recursive $S:E\to C$ such that,
for $\ttx\in E$, $S(\ttx)$ is the element ${\tt y}\in C$ satisfying
$R({\tt y})=\ttx$ which appears first in an $A$-recursive enumeration
of the graph of $R$.
Clearly, $S$ is a right inverse of $R$,
i.e. $R\circ S=Id_E$ where $Id_E$ is the identity on $E$.
\\
Using the trivial inclusion
$S\circ \PR[A,{\{0,1\}}^{<\omega}\to E]\subseteq \PR[A,{\{0,1\}}^{<\omega}\to C]$
we get
$$\PR[A,{\{0,1\}}^{<\omega}\to E]
=R\circ S\circ \PR[A,{\{0,1\}}^{<\omega}\to E]
\subseteq R\circ \PR[A,{\{0,1\}}^{<\omega}\to C]$$
\end{proof}
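The right inverse $S$ of the proof can be simulated on a toy example: $S(\ttx)$ is the first preimage appearing in a fixed enumeration of the graph of $R$. In the Python sketch below the enumeration is a generator and a step budget replaces unbounded search; the budget is an artifact of the finite simulation (in the construction $S$ is partial, with no budget), and the sample $R$ is hypothetical.

```python
def right_inverse(graph_enum, x, budget=10_000):
    """S(x) = the first y with R(y) = x in a fixed enumeration of
    the graph of R.  graph_enum() yields pairs (y, R(y)); the
    budget caps the search in this finite simulation."""
    for _, (y, rx) in zip(range(budget), graph_enum()):
        if rx == x:
            return y
    return None

# Toy surjection R : N -> N, R(y) = y // 2, graph enumerated in
# the natural order of y.
def graph_of_R():
    y = 0
    while True:
        yield y, y // 2
        y += 1
```

Here $S(x)=2x$, and $R\circ S$ is indeed the identity, as the proof requires.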
\section{Cardinal representations of $\bbbn$}
\label{s:card}
\subsection{Basic cardinal representation and its effectivizations}
\label{ss:effCard}
Among the conceptual representations of integers, the most basic
one goes back to Russell,
\cite{russell08} 1908 (cf. \cite{heijenoort} p.178), and considers
nonnegative integers as equivalence classes of sets relative
to cardinal comparison.
\begin{definition}[Cardinal representation of $\bbbn$]
\label{def:card}
Let $\mathit{card}(Y)$ denote the cardinality of $Y$, i.e. the number
of its elements.
\\
The cardinal representation of $\bbbn$ relative to an infinite
set $X$ is the partial function $$\mathit{card}_X:P(X)\to\bbbn$$
with domain the set $P_{<\omega}(X)$ of finite subsets of $X$, such that
$$\mathit{card}_X(Y)=\left\{
\begin{array}{ll}
\mathit{card}(Y)&\mbox{if $Y$ is finite}\\
\mbox{undefined}&\mbox{otherwise}
\end{array}\right.$$
\end{definition}
\begin{definition}[Effectivizations of the cardinal representation
of $\bbbn$]\label{def:effCard}
We effectivize the cardinal representation by replacing
$P(X)$ by $RE(\bbbx)$ or $RE^A(\bbbx)$ where $\bbbx$ is some
basic set and $A\subseteq\bbbn$ is some oracle.
\\
Two kinds of self-enumerated representation systems can be
naturally associated to these domains
(cf. \S\ref{ss:RE} and the Composition Lemma \ref{l:circ}):%
\begin{eqnarray*}
(RE(\bbbx),\mathit{card}\circ{\cal F}^{RE(\bbbx)})&\mbox{or}&
(RE^A(\bbbx),\mathit{card}\circ{\cal F}^{RE^A(\bbbx)})
\\
(RE(\bbbx),\mathit{card}\circ{\cal PF}^{RE(\bbbx)})&\mbox{or}&
(RE^A(\bbbx),\mathit{card}\circ{\cal PF}^{RE^A(\bbbx)})
\end{eqnarray*}
\end{definition}
\begin{remark}\label{rk:cardRep}$\\ $
{\bf 1.}
Historically, the cardinal representation of $\bbbn$ considered
the whole class of sets rather than some $P(X)$.
However, the above effectivization makes such an extension
insignificant for our study.
\medskip\\
{\bf 2.}
One can also consider the total representation obtained by
restriction to the set $P_{<\omega}(X)$ of all finite subsets
of $X$. But this amounts to a partial recursive representation
and is relevant to \S\ref{ss:recrep}.
\end{remark}
\subsection{Syntactical complexity of cardinal representations}
\label{ss:syntaxcard}
The following proposition gives the syntactical complexity of
the above effectivizations of the cardinal representations.
\begin{proposition}[Syntactical complexity]
\label{p:complexEffCard}
The family
$$\{domain(\varphi):\varphi\in \mathit{card}\circ{\cal F}^{RE^A(\bbbx)}\}$$
is exactly the family of $\Sigma^{0,A}_2$ subsets of ${\{0,1\}}^{<\omega}$.
Idem with $\mathit{card}\circ{\cal PF}^{RE^A(\bbbx)}$.
\medskip\\
In particular, any universal function for
$\mathit{card}\circ{\cal F}^{RE^A(\bbbx)}$ or for
$\mathit{card}\circ{\cal PF}^{RE^A(\bbbx)}$
is $\Sigma^{0,A}_2$-complete.
\end{proposition}
\begin{proof}
Let $(W^A_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ be an acceptable enumeration of
$RE^A(\bbbx)$.
\\
1. If $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ is partial $A$-recursive then
$$domain(\ttp\mapsto\mathit{card}(W^A_{g(\ttp)}))
=\{\ttp:W^A_{g(\ttp)}\mbox{ is finite}\}$$
is clearly $\Sigma^{0,A}_2$.
\medskip\\
2. Let $X\subseteq{\{0,1\}}^{<\omega}$ be a $\Sigma^{0,A}_2$ set of the form
$X=\{\ttp:\exists u\ \forall v\ R(\ttp,u,v)\}$ where
$R\subseteq{\{0,1\}}^{<\omega}\times\bbbn^2$ is $A$-recursive.
Set
\begin{eqnarray*}
\sigma_\ttp&=&\left\{\begin{array}{ll}
\{u':u'<u\}&\mbox{if $u$ is least such that }\forall v\ R(\ttp,u,v)
\\
\bbbn&\mbox{if there is no $u$ such that $\forall v\ R(\ttp,u,v)$}
\end{array}\right.
\end{eqnarray*}
It is easy to check that $\sigma_\ttp\subseteq\bbbn$ is an $A$-r.e. set
which can be defined by the following enumeration process described in
Pascal-like instructions:
\medskip\\\centerline{\tt\begin{tabular}{lll}
\{Initialization\}&$u:=0$; $v:=0$;\\
\{Loop\}&DO FOREVER&BEGIN\\
&&WHILE $R(\ttp,u,v)$ DO $v:=v+1$;\\
&&output $u$ in $\sigma_\ttp$;\\
&&$u:=u+1$; $v:=0$;\\
&&END;
\end{tabular}}
\\
Clearly, $\mathit{card}(\sigma_\ttp)$ is finite if and only if $\ttp\in X$.
\medskip\\
Now, the set $\{(\ttp,n):n\in\sigma_\ttp\}$ is also $A$-r.e.,
hence of the form $W_{\tt a}^{{\{0,1\}}^{<\omega}\times\bbbn}$ for some ${\tt a}$.
The parameter property yields a total $A$-recursive function
$g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such that $\sigma_\ttp=W_{g({\tt a},\ttp)}$.
Finally, the function $\ttp\mapsto\mathit{card}(W_{g({\tt a},\ttp)})$ is in
$\mathit{card}\circ{\cal F}^{RE^A(\bbbx)}$ and has domain $X$.
\end{proof}
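The Pascal-like loop above ports directly to Python once divergence is replaced by an explicit step budget: when $\forall v\ R(\ttp,u,v)$ holds, the inner loop never exits, $u$ is never output, and $\sigma_\ttp$ stays finite. The predicate \texttt{toy\_R} below is a hypothetical example chosen so that $\sigma_\ttp=\{0,\ldots,\ttp-1\}$; both it and the budget are artifacts of the simulation.

```python
def sigma(R_pred, p, budget):
    """Bounded simulation of the enumeration of sigma_p.
    R_pred(p, u, v) is the A-recursive predicate; `budget` caps the
    total number of loop iterations, standing in for 'run forever'."""
    out, u, v, steps = set(), 0, 0, 0
    while steps < budget:
        if R_pred(p, u, v):        # WHILE R(p,u,v) DO v := v+1
            v += 1
        else:
            out.add(u)             # output u in sigma_p
            u, v = u + 1, 0
        steps += 1
    return out

# Toy predicate: (forall v) R(p,u,v) holds iff u == p, so the least
# such u is p itself and sigma_p = {0, ..., p-1} is finite.
toy_R = lambda p, u, v: u == p
```

With a predicate that never holds for all $v$, every $u$ is eventually output and $\sigma_\ttp$ grows with the budget, matching the infinite case of the proof.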
\subsection{Characterization of the $card$ self-enumerated systems}
\label{ss:characterizeCard}
\begin{theorem}\label{thm:card}
For any basic set $\bbbx$ and any oracle $A\subseteq\bbbn$,
\medskip\\
$\begin{array}{lrcl}
\mbox{\bf 1i.}&
\mathit{card}\circ{\cal F}^{RE^A(\bbbx)}&=&\maxrA[{\{0,1\}}^{<\omega}\to\bbbn]
\medskip\\
{\bf \ ii.}&\mathit{card}\circ{\cal PF}^{RE^A(\bbbx)}
&=&\maxprA[{\{0,1\}}^{<\omega}\to\bbbn]
\end{array}$
\medskip\\
$\begin{array}{lrclcl}
\mbox{\bf 2.}&
K^\bbbn_{\mathit{card}\circ{\cal F}^{RE^A(\bbbx)}}&=_{\rm ct}&
K^\bbbn_{\mathit{card}\circ{\cal PF}^{RE^A(\bbbx)}}&=_{\rm ct}&\kmax[A]
\end{array}$
\medskip\\
We shall simply write $K^{\bbbn,A}_{\mathit{card}}$
in place of $K^\bbbn_{\mathit{card}\circ{\cal F}^{RE^A(\bbbn)}}$.
\\
When $A=\emptyset$ we simply write $K^{\bbbn}_{\mathit{card}}$.
\end{theorem}
\begin{proof}
Point 2 is a direct corollary of Point 1 and Prop.\ref{p:Kinfini}.
Let's prove point 1.
\medskip\\
{\bf 1i. }{\em Inclusion $\subseteq$.}\\
Let $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be total $A$-recursive.
We define a total $A$-recursive function
$u:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$ such that
$$(*)\ \ \ \{u(\ttp,t):t\in\bbbn\}=
\left\{\begin{array}{ll}
\{0,...,n\}&\mbox{if $W^A_{g(\ttp)}$ contains exactly $n$ points}
\\
\bbbn&\mbox{if $W^A_{g(\ttp)}$ is infinite}
\end{array}\right.$$
\noindent
The definition is as follows.
First, set $u(\ttp,0)=0$ for all $\ttp$.
Consider an $A$-recursive enumeration of $W^A_{g(\ttp)}$.
If at step $t$, some new point is enumerated then set
$u(\ttp,t+1)=u(\ttp,t)+1$, else set $u(\ttp,t+1)=u(\ttp,t)$.
\medskip\\
From $(*)$ we get $\mathit{card}(W^A_{g(\ttp)})=(\max u)(\ttp)$, so that
$\ttp\mapsto\mathit{card}(W^A_{g(\ttp)})$ is in
$\maxrA[{\{0,1\}}^{<\omega}\to\bbbn]$.
\medskip\\
{\bf 1ii. }{\em Inclusion $\subseteq$.}\\
Now $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ is partial $A$-recursive and we define
$u:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$ as a partial $A$-recursive function
such that
$$\{u(\ttp,t):t\in\bbbn\}=
\left\{\begin{array}{ll}
\emptyset&\mbox{if $g(\ttp)$ is undefined}
\\
\{0,...,n\}&\mbox{if $W^A_{g(\ttp)}$ contains exactly $n$ points}
\\
\bbbn&\mbox{if $W^A_{g(\ttp)}$ is infinite}
\end{array}\right.$$
\noindent
The definition of $u$ is as above except that, for any $t$,
we require that $u(\ttp,t)$ is defined if and only if $g(\ttp)$ is.
\medskip\\
{\bf 1i. }{\em Inclusion $\supseteq$.}\\
Any function in $\maxrA[{\{0,1\}}^{<\omega}\to\bbbn]$ is of the form
$\max f:{\{0,1\}}^{<\omega}\to\bbbn$ where $f:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$ is
total $A$-recursive.
\\The idea to prove that $\max f$ is in
$\mathit{card}\circ{\cal F}^{RE^A(\bbbx)}$ is quite simple.
For every $\ttp$, we define an $A$-r.e. subset of $\bbbx$ which
collects some new elements each time $f(\ttp,t)$ gets greater than
$\max\{f(\ttp,t'):t'<t\}$.
\\
Formally, let $\psi:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$ be the partial
$A$-recursive function such that
$$\psi(\ttp,t)=
\left\{\begin{array}{ll}
0&\mbox{if $\exists u\ f(\ttp,u)>t$}\\
\mbox{undefined}&\mbox{otherwise}
\end{array}\right.$$
Clearly,
$$domain(\psi_\ttp)
=\left\{\begin{array}{ll}
\{t:0\leq t<(\max f)(\ttp)\}&\mbox{if $(\max f)(\ttp)$ is defined}\\
\bbbn&\mbox{otherwise}
\end{array}\right.$$
We define $\varphi:{\{0,1\}}^{<\omega}\times\bbbx\to\bbbn$ such that
$\varphi(\ttp,\ttx)=\psi(\ttp,\theta(\ttx))$ where
$\theta:\bbbx\to\bbbn$ is some fixed total recursive bijection.
Let us denote by $\psi_\ttp$ and $\varphi_\ttp$ the functions
$t\mapsto\psi(\ttp,t)$ and $\ttx\mapsto\varphi(\ttp,\ttx)$.
Let ${\tt e}$ be such that
$W^A_{\tt e}=\{\couple\ttp\ttx:(\ttp,\ttx)\in domain(\varphi)\}$
(where $\couple\,\,$ is a bijection ${\{0,1\}}^{<\omega}\times\bbbx\to\bbbx$).
The parameter property yields an $A$-recursive function
$s:{\{0,1\}}^{<\omega}\times{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such that
$W^A_{s({\tt e},\ttp)}=domain(\varphi_\ttp)$ for all $\ttp$.
Thus, letting $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be the $A$-recursive function
such that $g(\ttp)=s({\tt e},\ttp)$, we have
$$\mathit{card}(W^A_{g(\ttp)})
=\mathit{card}(domain(\varphi_\ttp))
=\mathit{card}(domain(\psi_\ttp))=(\max f)(\ttp)$$
This proves that $\max f$ is in
$\mathit{card}\circ{\cal F}^{RE^A(\bbbx)}$.
\medskip\\
{\bf 1ii. }{\em Inclusion $\supseteq$.}\\
We argue as in the above proof of {\bf i. $\supseteq$.}
However, $f:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$ is now partial $A$-recursive
and there are two reasons for which $(\max f)(\ttp)$ may be
undefined: first, if $t\mapsto f(\ttp,t)$ is unbounded, second if it
has empty domain.
Keeping $\psi$ and $\varphi$ as defined above, we now have,
$$domain(\psi_\ttp)
=\left\{\begin{array}{ll}
\{v:0\leq v<(\max f)(\ttp)\}&\mbox{if $(\max f)(\ttp)$ is defined}\\
\bbbn&\mbox{if $range(t\mapsto f(\ttp,t))$ is infinite}\\
\emptyset&\mbox{if $f(\ttp,t)$ is defined for no $t$}
\end{array}\right.$$
We let ${\tt e},s,g$ be as above and define $h:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that
$$h(\ttp)
=\left\{\begin{array}{ll}
g(\ttp)&\mbox{if $f(\ttp,t)$ is defined for some $t$}\\
\mbox{undefined}&\mbox{otherwise}
\end{array}\right.$$
Observe that
\\\indent- if $t\mapsto f(\ttp,t)$ has empty domain then $h(\ttp)$
is undefined,
\\\indent- if $t\mapsto f(\ttp,t)$ is unbounded then
$card(W^A_{h(\ttp)})=card(W^A_{g(\ttp)})$ is infinite,
\\\indent- otherwise
$card(W^A_{h(\ttp)})=card(W^A_{g(\ttp)})=(\max f)(\ttp)$.
\\
This proves that $\max f$ is in $\mathit{card}\circ{\cal PF}^{RE^A(\bbbx)}$.
\end{proof}
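The identity behind the inclusions $\supseteq$ above, namely that $\{t:\exists u\ f(\ttp,u)>t\}=\{0,\ldots,(\max f)(\ttp)-1\}$ has exactly $(\max f)(\ttp)$ elements, can be checked on finite data. A minimal Python sketch (an illustration, not the formal construction):

```python
def count_below_max(values):
    """card({t : some value exceeds t}) for a finite nonempty list
    of values f(p, u).  This set is {0, ..., max-1}, of cardinality
    max(values), which is how the proof realizes (max f)(p) as the
    cardinality of an r.e. set."""
    m = max(values)
    witnesses = {t for t in range(m + 1) for v in values if v > t}
    return len(witnesses)
```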
\subsection{Characterization of the $\Delta\mathit{card}$
representation system} \label{ss:Deltacard}
We now look at the self-enumerated system with domain $\bbbz$
obtained from $\mathit{card}\circ{\cal F}^{RE^A(\bbbx)}$ by the operation
$\Delta$ introduced in \S\ref{ss:Delta}.
\begin{theorem}\label{thm:DeltaCard}
Let $A\subseteq\bbbn$ and let $A'$ be the jump of $A$.
Let $\bbbx$ be a basic set. Then
$$\Delta(card\circ{\cal F}^{RE^A(\bbbx)})
=\Delta(card\circ{\cal PF}^{RE^A(\bbbx)})=\PR[A',{\{0,1\}}^{<\omega}\to\bbbz]$$
Hence $K^\bbbz_{\Delta(card\circ{\cal F}^{RE^A(\bbbx)})}
=_{\rm ct} K^{A',\bbbz}$.
\medskip\\
We shall simply write $K^{\bbbn,A}_{\Delta card}$
in place of
$K^\bbbz_{\Delta(card\circ{\cal F}^{RE^A(\bbbn)})}\!\upharpoonright \!\bbbn$.
\\
When $A=\emptyset$ we simply write $K^{\bbbn}_{\Delta card}$.
\end{theorem}
\begin{proof}
The equalities about the self-enumerated systems are a direct
corollary of Thm.\ref{thm:card} and Thm.\ref{thm:Deltamax}.
The equality about Kolmogorov complexities is a trivial corollary
of the one about self-enumerated systems.
\end{proof}
\section{Index representations of $\bbbn$}
\label{s:index}
\subsection{Basic index representation and its effectivizations}
\label{ss:index}
A variant of the cardinal representation considers indices of
equivalence relations. More precisely, it views an integer as
an equivalence class of equivalence relations relative to index
comparison.
\begin{definition}[Index representation]\label{def:index}$\\ $
The index representation of $\bbbn$ relative to an infinite set
$X$ is the partial function
$$index^{\bbbn}_{P(X^2)}:P(X^2)\to\bbbn$$
with domain the family of equivalence relations on subsets of $X$
which have finite index, such that
\begin{eqnarray*}
index^{\bbbn}_{P(X^2)}(R)&=&
\left\{
\begin{array}{ll}
index(R)&\mbox{if $R$ is an equivalence relation}
\\ &\mbox{with finite index}
\\
\mbox{undefined}&\mbox{otherwise}
\end{array}\right.
\end{eqnarray*}
(where $index(R)$ denotes the number of equivalence classes of $R$).
\end{definition}
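For finite relations, the index and the partiality of the representation can be computed directly: check that the set of pairs is an equivalence relation on its field, then count the classes with a union-find. A Python sketch (illustrative only; the effectivized representation of course acts on r.e. relations, not finite ones):

```python
def is_equivalence(pairs):
    """Checks reflexivity on the field, symmetry and transitivity."""
    field = {x for p in pairs for x in p}
    refl = all((x, x) in pairs for x in field)
    sym = all((y, x) in pairs for (x, y) in pairs)
    trans = all((x, z) in pairs
                for (x, y) in pairs for (y2, z) in pairs if y2 == y)
    return refl and sym and trans

def index_of(pairs):
    """Index (number of equivalence classes) of a finite relation
    given as a set of pairs, or None when it fails to be an
    equivalence relation on its field -- mirroring the partiality
    of the index representation."""
    if not is_equivalence(pairs):
        return None
    field = {x for p in pairs for x in p}
    parent = {x: x for x in field}
    def find(x):                       # naive union-find root lookup
        while parent[x] != x:
            x = parent[x]
        return x
    for x, y in pairs:
        parent[find(x)] = find(y)
    return len({find(x) for x in field})
```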
\subsection{Syntactical complexity of index representations}
\label{ss:syntaxindex}
\begin{definition}[Effectivization of the index representation
of $\bbbn$]\label{def:effIndex}
We effectivize the index representation by replacing
$P(X^2)$ by $RE(\bbbx^2)$ or $RE^A(\bbbx^2)$ where $\bbbx$ is
some basic set and $A\subseteq\bbbn$ is some oracle.
\\
Two kinds of self-enumerated representation systems can be
naturally associated
(cf. \S\ref{ss:RE} and the Composition Lemma \ref{l:circ}):%
\begin{eqnarray*}
(RE(\bbbx^2),index\circ{\cal F}^{RE(\bbbx^2)})&\mbox{or}&
(RE^A(\bbbx^2),index\circ{\cal F}^{RE^A(\bbbx^2)})
\\
(RE(\bbbx^2),index\circ{\cal PF}^{RE(\bbbx^2)})&\mbox{or}&
(RE^A(\bbbx^2),index\circ{\cal PF}^{RE^A(\bbbx^2)})
\end{eqnarray*}
\end{definition}
The following proposition gives the syntactical complexity of
the above effectivizations of the index representations.
\begin{proposition}[Syntactical complexity]
\label{p:complexEffIndex}
The family
$$\{domain(\varphi):\varphi\in \mathit{index}\circ{\cal F}^{RE^A(\bbbx)}\}$$
is exactly the family of $\Sigma^{0,A}_3$ subsets of ${\{0,1\}}^{<\omega}$.
\\
Idem with $\mathit{index}\circ{\cal PF}^{RE^A(\bbbx)}$.
\medskip\\
In particular, any universal function for
$\mathit{index}\circ{\cal F}^{RE^A(\bbbx)}$ or for
$\mathit{index}\circ{\cal PF}^{RE^A(\bbbx)}$
is $\Sigma^{0,A}_3$-complete.
\end{proposition}
\begin{proof}
We trivially reduce to the case $\bbbx=\bbbn$ and only consider
the case $A=\emptyset$, relativization being straightforward.
\medskip\\
1. Let $(W^{\bbbn^2}_{\tt e}}\newcommand{\ttf}{{\tt f})_{{\tt e}}\newcommand{\ttf}{{\tt f}\in{\{0,1\}}^{<\omega}}$ be an acceptable
enumeration of $RE(\bbbn^2)$ and $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be a partial
recursive function and $\psi:{\{0,1\}}^{<\omega}\to\bbbn$ be such that
$\psi(\ttp)=\mathit{index}(W^{\bbbn^2}_{g(\ttp)})$.
\\
To see that $domain(\psi)$ is $\Sigma^0_3$,
observe that $\ttp\in domain(\psi)$ if and only if
\begin{enumerate}
\item[i.]
$g(\ttp)$ is defined. Which is a $\Sigma^0_1$ condition.
\item[ii.]
$W^{\bbbn^2}_{g(\ttp)}$ is an equivalence relation on its domain,
i.e.
\medskip\\
$\forall \ttx\ \forall {\tt y}}\newcommand{\ttz}{{\tt z}\
((\ttx,{\tt y}}\newcommand{\ttz}{{\tt z})\in W^{\bbbn^2}_{g(\ttp)}\
\Rightarrow\ ((\ttx,\ttx)\in W^{\bbbn^2}_{g(\ttp)}\ \wedge\
({\tt y}}\newcommand{\ttz}{{\tt z},\ttx)\in W^{\bbbn^2}_{g(\ttp)}))$
\hfill{$\wedge\ \forall \ttx\ \forall {\tt y}}\newcommand{\ttz}{{\tt z}\ \forall \ttz\
(((\ttx,{\tt y}}\newcommand{\ttz}{{\tt z})\in W^{\bbbn^2}_{g(\ttp)}\ \wedge\
({\tt y}}\newcommand{\ttz}{{\tt z},\ttz)\in W^{\bbbn^2}_{g(\ttp)})\
\Rightarrow\ (\ttx,\ttz)\in W^{\bbbn^2}_{g(\ttp)})$}
\\
Which is a $\Pi^0_2$ formula
(since $({\tt u}}\newcommand{\ttv}{{\tt v},\ttv)\in W^{\bbbn^2}_{g(\ttp)}$ is $\Sigma^0_1$).
\item[iii.]
$W^{\bbbn^2}_{g(\ttp)}$ has finitely many classes, i.e.
$\exists n\ \forall k\ \exists m\leq n\
(k,m)\in W^{\bbbn^2}_{g(\ttp)}$.
Which is a $\Sigma^0_3$ formula.
\end{enumerate}
2. Let $X\subseteq{\{0,1\}}^{<\omega}$ be $\Sigma^0_3$.
We construct a total recursive function $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such
that $X=\{\ttp: \mathit{index}(W^{\bbbn^2}_{g(\ttp)})\mbox{ is finite}\}$.
\medskip\\
A. Suppose
$X=\{\ttp:\exists u\ \forall v\ \exists w\ R(\ttp,u,v,w)\}$
where $R\subseteq{\{0,1\}}^{<\omega}\times\bbbn^3$ is recursive.
Let $\theta:{\{0,1\}}^{<\omega}\times\bbbn^2\to\bbbn$ be the total recursive
function such that
\begin{eqnarray*}
\theta(\ttp,u,t)&=&\mbox{largest $v\leq t$
such that $\forall v'\leq v\ \exists w\leq t\ R(\ttp,u,v',w)$}
\end{eqnarray*}
Observe that $\theta$ is monotone increasing with respect to $t$.
Also,
\begin{itemize}
\item[$(*)$]
if $\ttp\notin X$ then, for all $u$,
$\max_{t\in\bbbn}\theta(\ttp,u,t)$ is finite,
\item[$(**)$]
if $\ttp\in X$ and $u$ is least such that
$\forall v\ \exists w\ R(\ttp,u,v,w)$ then
$$\left\{\begin{array}{rcll}
\max_{t\in\bbbn}\theta(\ttp,u,t)&=&+\infty&\\
\max_{t\in\bbbn}\theta(\ttp,u',t)&&\mbox{is finite}&\mbox{for all }u'<u
\end{array}\right.$$
\end{itemize}
Following this observation, given $\ttp\in{\{0,1\}}^{<\omega}$, we define
a monotone increasing sequence of equivalence relations
$\rho^t_\ttp$ on finite initial intervals of $\bbbn$ such that
$\rho^t_\ttp$ has $t+1$ equivalence classes
$$I^t_{\ttp,0}\ ,\ I^t_{\ttp,1}\ ,\ ...\ ,\ I^t_{\ttp,t}$$
which are successive finite intervals
$$[0,n^t_{\ttp,0}]\ ,\ [n^t_{\ttp,0}+1,n^t_{\ttp,1}]\ ,\
[n^t_{\ttp,1}+1,n^t_{\ttp,2}]\ ,\ \ldots\ ,\
[n^t_{\ttp,t-1}+1,n^t_{\ttp,t}]$$
where
$n^t_{\ttp,0}<n^t_{\ttp,1}<n^t_{\ttp,2}<\ldots<n^t_{\ttp,t-1}<n^t_{\ttp,t}$.
\\
The intuition is as follows:
\begin{enumerate}
\item[i.]
the class $I^t_{\ttp,u}$ is related to $\theta(\ttp,u,t)$,
i.e. to the best we can say at step $t$ about the truth value of
$\forall v\ \exists w\ R(\ttp,u,v,w)$.
\item[ii.]
if and when $\theta(\ttp,u,t)$ increases,
i.e. $\theta(\ttp,u,t+1)>\theta(\ttp,u,t)$ for some $u$,
then we increase the class $I^t_{\ttp,u}$ for the least such $u$.
\end{enumerate}
Of course, an equivalence class which grows and remains
an interval either is the rightmost one
or has to aggregate some of its neighbor class(es).
Whence the following inductive definition of the $\rho^t_\ttp$'s
and $n^t_{\ttp,u}$'s, $u\leq t$:
\begin{enumerate}
\item[i.]{\em (Base case).}
$\rho^0_\ttp$ is the equivalence relation with one class $\{0\}$,
i.e. $n^0_{\ttp,0}=0$.
\item[ii.]{\em (Inductive case. Subcase 1).}
Suppose $\theta(\ttp,u,t+1)=\theta(\ttp,u,t)$ for all $u\leq t$.
Then $\rho^{t+1}_\ttp$ is obtained from $\rho^t_\ttp$ by adding
a new singleton class on the right:
\begin{enumerate}
\item
For all $u\leq t$ we let $n^{t+1}_{\ttp,u}=n^t_{\ttp,u}$,
hence $I^{t+1}_{\ttp,u}=I^t_{\ttp,u}$.
\item
$n^{t+1}_{\ttp,t+1}=n^t_{\ttp,t}+1$, hence
$I^{t+1}_{\ttp,t+1}=\{n^t_{\ttp,t}+1\}$.
\end{enumerate}
\item[iii.]{\em (Inductive case. Subcase 2).}
Suppose $\theta(\ttp,u,t+1)>\theta(\ttp,u,t)$ for some $u\leq t$.
Let $u$ be least such. Then,
\begin{enumerate}
\item
for $u'<u$, classes $I^t_{\ttp,u'}$ are left unchanged:
$n^{t+1}_{\ttp,u'}=n^t_{\ttp,u'}$ and
$I^{t+1}_{\ttp,u'}=I^t_{\ttp,u'}$\ ,
\item
class $I^{t+1}_{\ttp,u}$ aggregates all classes $I^t_{\ttp,u''}$
for $u\leq u''\leq t$,
\item
$t+1-u$ singleton classes are added:
$I^{t+1}_{\ttp,u+i}=\{n^t_{\ttp,t}+i\}$ where $i=1,...,t+1-u$.
I.e.
\medskip\\\centerline{$\begin{array}{rcll}
n^{t+1}_{\ttp,u'}&=&n^t_{\ttp,u'}&\mbox{for all }u'<u\\
n^{t+1}_{\ttp,u}&=&n^t_{\ttp,t}&\\
n^{t+1}_{\ttp,u+i}&=&n^t_{\ttp,t}+i
&\mbox{for all $i\in\{1,...,t+1-u\}$}
\end{array}$}
\end{enumerate}
\end{enumerate}
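As an informal illustration of this construction (not part of the proof), the following Python sketch computes $\theta$ and the breakpoints $n^t_{\ttp,u}$ of the relations $\rho^t_\ttp$ for a toy recursive predicate $R$, then counts, on a fixed initial segment, the classes of $\rho^T_\ttp$ for a large horizon $T$. The value $-1$ stands for ``no suitable $v$'', and all names are ours:

```python
def theta(R, p, u, t):
    # Largest v <= t such that, for all v' <= v, there is w <= t with
    # R(p, u, v', w); -1 if this already fails at v' = 0.
    best = -1
    for v in range(t + 1):
        if any(R(p, u, v, w) for w in range(t + 1)):
            best = v
        else:
            break
    return best

def breakpoints(R, p, T):
    # Right endpoints n_0 < ... < n_t of the classes of rho^t_p,
    # following the two inductive subcases.
    n = [0]                                   # rho^0_p: single class {0}
    for t in range(T):
        inc = [u for u in range(t + 1)
               if theta(R, p, u, t + 1) > theta(R, p, u, t)]
        if not inc:                           # subcase 1: new singleton class
            n.append(n[-1] + 1)
        else:                                 # subcase 2: aggregate from least u
            u, top = inc[0], n[-1]
            n = n[:u] + [top] + [top + i for i in range(1, t + 2 - u)]
    return n

def classes_meeting(n, m):
    # Number of classes of the relation with breakpoints n meeting [0, m].
    return sum(1 for u in range(len(n))
               if (0 if u == 0 else n[u - 1] + 1) <= m)

# X = {p : exists u, forall v, exists w, R(p,u,v,w)} with:
R = lambda p, u, v, w: p == 0 and u == 0 and w >= v   # 0 in X, 1 not in X

assert classes_meeting(breakpoints(R, 0, 20), 5) == 1  # segment absorbed: finite index
assert classes_meeting(breakpoints(R, 1, 20), 5) == 6  # singletons pile up forever
```

For $\ttp=0$ the class of $0$ eventually swallows any fixed segment, reflecting finite index; for $\ttp=1$ the singleton classes are never aggregated.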
B. Let $\rho_\ttp=\bigcup_{t\in\bbbn}\rho^t_\ttp$.
\medskip\\
\fbox{Case $\ttp\in X$}. Let $u$ be least such that
$\forall v\ \exists w\ R(\ttp,u,v,w)$. For $u'<u$, let
\begin{eqnarray*}
V_{u'}&=&\max\{v:\forall v'\leq v\ \exists w\ R(\ttp,u',v',w)\}
\\
t&=&\min\{t':\forall u'<u\ (V_{u'}\leq t'\ \wedge\
\forall v'\leq V_{u'}\ \exists w\leq t'\ R(\ttp,u',v',w))\}
\end{eqnarray*}
Then
\begin{itemize}
\item
$\forall u'<u\ \forall v\
(\forall v'\leq v\ \exists w\ R(\ttp,u',v',w)\ \Rightarrow$
\hfill{$(v\leq t\ \wedge\
\forall v'\leq v\ \exists w'\leq t\ R(\ttp,u',v',w')))$}
\item
$n^{t'}_{\ttp,u'}=n^t_{\ttp,u'}$
and $I^{t'}_{\ttp,u'}=I^t_{\ttp,u'}$ for all $u'<u$ and $t'\geq t$.
\item
$n^{t'}_{\ttp,u}$ tends to $+\infty$ with $t'$ and
$I^{t'}_{\ttp,u}=[n^{t'}_{\ttp,u-1}+1,n^{t'}_{\ttp,u}]$ tends to the
cofinite interval $[n^t_{\ttp,u-1}+1,+\infty[$.
\item
for $u''>u$, classes $I^{t'}_{\ttp,u''}$ are intervals the left
endpoints of which tend to $+\infty$ with $t'$, hence they vanish
at infinity.
\end{itemize}
Thus, $\rho_\ttp$, which is the limit of the $\rho^t_\ttp$'s,
has $u+1$ classes, hence has finite index.
\medskip\\
\fbox{Case $\ttp\notin X$}. For every $u\in\bbbn$,
the class $I^t_{\ttp,u}$ stabilizes as $t$ tends to $+\infty$.
Thus, $\rho_\ttp$ has infinite index.
\medskip\\
C. Clearly, the sequence $(\rho^t_\ttp)_{\ttp\in{\{0,1\}}^{<\omega},t\in\bbbn}$
is recursive.
Thus,
$$\rho=\{(\ttp,m,n):\exists t\ (m,n)\in\rho^t_\ttp\}$$
is r.e.
Let ${\tt a}}\newcommand{\ttb}{{\tt b}\in{\{0,1\}}^{<\omega}$ be such that
$\rho=W^{{\{0,1\}}^{<\omega}\times\bbbn^2}_{\tt a}}\newcommand{\ttb}{{\tt b}$.
Applying the parametrization property, let
$s:{\{0,1\}}^{<\omega}\times{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be a total recursive function
such that
$$\rho_\ttp
=\{(m,n)\in\bbbn^2:(\ttp,m,n)\in W^{{\{0,1\}}^{<\omega}\times\bbbn^2}_{\tt a}}\newcommand{\ttb}{{\tt b}\}
=W^{\bbbn^2}_{s({\tt a}}\newcommand{\ttb}{{\tt b},\ttp)}$$
Let $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be total recursive such that
$g(\ttp)=s({\tt a}}\newcommand{\ttb}{{\tt b},\ttp)$.
Using point B, we see that $\ttp\in X$ if and only if
$\mathit{index}(W^{\bbbn^2}_{g(\ttp)})$ is finite.
\end{proof}
\subsection{Characterization of the $\mathit{index}$ self-enumerated
systems} \label{ss:characterizeIndex}
We now come to the characterization of the index self-enumerated
families.
It turns out that these families are almost equal to
$\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]$, ``almost'' meaning here ``up to $1$''.
\begin{notation}
If ${\cal G}$ is a family of functions
${\{0,1\}}^{<\omega}\to\bbbn$, we let
$${\cal G}+1=\{f+1:f\in{\cal G}\}$$
\end{notation}
\begin{theorem}\label{thm:index}$\\ $
{\bf 1.}
For any basic set $\bbbx$ and any oracle $A\subseteq\bbbn$,
the following {\em strict} inclusions hold:
$$\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]+1
\subset
index\circ{\cal F}^{RE^A(\bbbx^2)}
\subset
index\circ{\cal PF}^{RE^A(\bbbx^2)}
\subset
\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]$$
{\bf 2.}\ \ $K^\bbbn_{index\circ{\cal F}^{RE^A(\bbbx^2)}}=_{\rm ct}
K^\bbbn_{index\circ{\cal PF}^{RE^A(\bbbx^2)}}=_{\rm ct}\kmax[A']$.
\medskip\\
We shall simply write $K^{\bbbn,A}_{index}$
in place of $K^\bbbn_{index\circ{\cal F}^{RE^A(\bbbn)}}$.
\\
When $A=\emptyset$ we simply write $K^{\bbbn}_{index}$.
\end{theorem}
\begin{proof}
Observe that if ${\cal F}$ is a self-enumerated system with domain
$D$ and with $U$ as a good universal function, then ${\cal F}+1$
is also a self-enumerated system with $U+1$ as a good universal
function.
In particular $K^D_{\cal F}=K^D_{{\cal F}+1}$.
\\
Point 2 is a direct corollary of Point 1 and Prop.\ref{p:Kinfini}
and the previous observation.
\medskip\\
Let's prove point 1.
\\The central inclusion
$index\circ{\cal F}^{RE^A(\bbbx^2)}
\subset index\circ{\cal PF}^{RE^A(\bbbx^2)}$
is trivial.
\medskip\\
A. {\em Non strict inclusion\ \
$index\circ{\cal PF}^{RE^A(\bbbx^2)}
\subseteq\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]$.}
\\
Let $G\in index\circ{\cal PF}^{RE^A(\bbbx^2)}$ and let
$g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be partial $A$-recursive such that
$$G(\ttp)=\left\{\begin{array}{ll}
index(W^{A,\bbbx^2}_{g(\ttp)})&
\mbox{if $g(\ttp)$ is defined and $W^{A,\bbbx^2}_{g(\ttp)}$ is an}
\\&\mbox{equivalence relation with finite index}
\\\mbox{undefined}&\mbox{otherwise}\end{array}\right.$$
We define a total $A'$-recursive function
$u:{\{0,1\}}^{<\omega}\times\bbbn\to\bbbn$ such that
$$(*)\ \ \ \{u(\ttp,t):t\in\bbbn\}=
\left\{\begin{array}{ll}
\{0,...,n\}&\mbox{if $G(\ttp)$ is defined and $G(\ttp)=n$}\\
\bbbn&\mbox{if $G(\ttp)$ is undefined}
\end{array}\right.$$
\noindent
The definition is as follows.
Since $g$ is partial $A$-recursive and we look for an $A'$-recursive
definition of $u(\ttp,t)$, we can use oracle $A'$ to check
if $g(\ttp)$ is defined.
\\
If $g(\ttp)$ is undefined then we let $u(\ttp,t)=t$ for all $t$.
Which ensures $(*)$.
\\
Suppose now that $g(\ttp)$ is defined.
First, set $u(\ttp,0)=0$.
\\
Consider an $A$-recursive enumeration of $W^{A,\bbbx^2}_{g(\ttp)}$.
Let $R_t$ be the set of pairs enumerated at steps $<t$
and $D_t$ be the set of $\ttx\in\bbbx$ which appear in pairs
in $R_t$ (so that $R_0$ and $D_0$ are empty).
Since at most one new pair is enumerated at each step, the set
$R_t$ contains at most $t$ pairs and $D_t$ contains at most $2\,t$
points.
\\
At step $t+1$, use oracle $A'$ to check the following properties:
\begin{enumerate}
\item[$\alpha_t$.]
For every $\ttx\in D_{t+1}$ the pair $(\ttx,\ttx)$ is in
$W^{A,\bbbx^2}_{g(\ttp)}$.
\item[$\beta_t$.]
For every pair $(\ttx,{\tt y}}\newcommand{\ttz}{{\tt z})\in R_{t+1}$ the pair $({\tt y}}\newcommand{\ttz}{{\tt z},\ttx)$
is in $W^{A,\bbbx^2}_{g(\ttp)}$.
\item[$\gamma_t$.]
For every pairs $(\ttx,{\tt y}}\newcommand{\ttz}{{\tt z}),({\tt y}}\newcommand{\ttz}{{\tt z},\ttz)\in R_{t+1}$ the pair
$(\ttx,\ttz)$ is in $W^{A,\bbbx^2}_{g(\ttp)}$.
\item[$\delta_t$.]
For every $\ttx\in D_{t+1}$ there exists ${\tt y}}\newcommand{\ttz}{{\tt z}\in D_t$ such that
the pair $(\ttx,{\tt y}}\newcommand{\ttz}{{\tt z})$ is in $W^{A,\bbbx^2}_{g(\ttp)}$.
\end{enumerate}
Since $R_{t+1},D_{t+1}$ are finite, all these properties
$\alpha_t\mbox{-}\delta_t$ are {\em finite} boolean combinations of
$\Sigma^{0,A}_1$ statements.
Hence oracle $A'$ can decide them all.
\medskip\\
Observe that if $W^{A,\bbbx^2}_{g(\ttp)}$ is an equivalence relation
then answers to $\alpha_t\mbox{-}\gamma_t$ are positive for all $t$.
And if $W^{A,\bbbx^2}_{g(\ttp)}$ is not an equivalence relation
then, for some $\pi\in\{\alpha,\beta,\gamma\}$, answers to $\pi_t$
are negative for all $t$ large enough.
\\
Also, if $W^{A,\bbbx^2}_{g(\ttp)}$ is an equivalence relation then
a new equivalence class is revealed each time $\delta_t$ is false.
And every equivalence class is so revealed.
\medskip\\
Thus, in case $g(\ttp)$ is defined, we ensure $(*)$ by letting
$$u(\ttp,t+1)=\left\{
\begin{array}{ll}
u(\ttp,t)
&\mbox{if all answers to $\alpha_t\mbox{-}\delta_t$ are positive}
\\
u(\ttp,t)+1&\mbox{otherwise}
\end{array}\right.$$
From $(*)$, we get $G=\max u$. Since $u$ is total $A'$-recursive,
this proves that $G$ is in $\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]$.
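As an informal illustration, the counting performed by $u(\ttp,\cdot)$ can be simulated on finite data: a new class is revealed exactly when the $\delta_t$ test fails, i.e. when a freshly enumerated point is related to no previously seen point. In the Python sketch below (names are ours), a lookup in the full finite relation plays the role of oracle $A'$:

```python
def reveal_counts(enumeration, relation):
    """Scan an enumeration (a list of pairs) of a finite equivalence
    relation and return the successive values u(p, t).  The delta_t
    test "is z related to some earlier point?" is decided by a lookup
    in the full relation, standing in for the oracle A'."""
    seen, count, values = set(), 0, [0]
    for x, y in enumeration:
        for z in (x, y):
            if z not in seen:
                if not any((z, v) in relation for v in seen):
                    count += 1      # delta_t fails: a new class is revealed
                seen.add(z)
        values.append(count)
    return values

# Equivalence relation with classes {0,2} and {1}:
rel = {(0, 0), (1, 1), (2, 2), (0, 2), (2, 0)}
vals = reveal_counts([(0, 0), (1, 1), (0, 2)], rel)
assert set(vals) == {0, 1, 2}   # {u(p,t) : t} = {0, ..., index}
assert max(vals) == 2           # the index is recovered as the max
```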
\medskip\\
B. {\em Non strict inclusion\ \
$\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]+1
\subseteq index\circ{\cal F}^{RE^A(\bbbx^2)}$.}\\
We reduce to the case $\bbbx=\bbbn$.
\\Let $F\in\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]$. Using Prop.\ref{p:Turing},
let ${\cal M}$ be an oracle Turing machine which on input $\ttp$
and oracle $A'$ computes $F(\ttp)$ through an infinite computation.
\\
The idea to prove that $F$ is in
$index\circ{\cal F}^{RE^A(\bbbn^2)}$ is as follows.
We consider $A$-recursive approximations of oracle $A'$ and use
them as fake oracles.
For each $\ttp$ we build an $A$-r.e. equivalence relation
$\rho_\ttp\subseteq\bbbn^2$ with domain $\bbbn$ which consists of
one big class containing $0$ and some singleton classes.
Each time the computation with the fake oracle outputs a new digit
$1$, we put some new singleton class in $\rho_\ttp$.
When, with a better approximation of $A'$, we see that the fake
oracle has given an incorrect answer, all singleton classes
which were put in $\rho_\ttp$ because of the oracle's incorrect answer
are annihilated: they are aggregated to the class of $0$.
Since we are going to consider $index(\rho_\ttp)$, this process
will lead to the correct value $F(\ttp)+1$.
\medskip\\
Formally, we consider an $A$-recursive monotone increasing sequence
$(Approx(A',t))_{t\in\bbbn}$ such that
$A'=\bigcup_{t\in\bbbn}Approx(A',t)$ (cf. Lemma \ref{l:approx}).
Though all oracles $Approx(A',t)$ are false approximations of oracle
$A'$, they are nevertheless ``less and less false" as $t$ increases.
\medskip\\
Without loss of generality, we can suppose that at each computation
step of ${\cal M}$ there is a question to the oracle
(possibly the same one many times).
\medskip\\
Let ${\cal C}_{\ttp,t}$ be the computation of ${\cal M}$ on input
$\ttp$ with oracle $Approx(A',t)$, reduced to the sole $t$ first
steps.
\\
Increasing parts of oracle $Approx(A',t)$ are questioned during
${\cal C}_{\ttp,t}$.
Let $\Omega_{\ttp,t}:\{1,...,t\}\to P_{fin}(\bbbn)$
(where $P_{fin}(\bbbn)$ is the set of finite subsets of $\bbbn$)
be such that $\Omega_{\ttp,t}(t')$ is the set of $k$ such that the
oracle has been questioned about $k$ during the $t'$ first steps,
$1\leq t'\leq t$.
Clearly, $\Omega_{\ttp,t}$ is (non strictly) monotone increasing
with respect to set inclusion.
\\
Let $1^{n_{\ttp,t}}$ be the output of ${\cal C}_{\ttp,t}$
(recall that ${\cal M}$ outputs a finite or infinite sequence of
digits $1$'s).
\\
The successive digits of this output are written down at increasing
times (all $\leq t$).
Let $OT_{\ttp,t}:\{0,...,n_{\ttp,t}\}\to\{0,...,t\}$ be such that
$OT_{\ttp,t}(n)$ is the least step at which the current output is
$1^n$ ($OT$ stands for output time). Clearly, $OT_{\ttp,t}(0)=0$.
\medskip\\
We construct $A$-recursive sequences
$(\rho_{\ttp,t})_{\ttp\in{\{0,1\}}^{<\omega}, t\in\bbbn}$ and
$(w_{\ttp,t})_{\ttp\in{\{0,1\}}^{<\omega}, t\in\bbbn}$
(where $w$ stands for witness) such that
\begin{enumerate}
\item[$i_t$.]
$\rho_{\ttp,t}$ is an equivalence relation on $\{0,...,2^t-1\}$
with index equal to $1+n_{\ttp,t}$
(there is nothing essential about $2^t$: it is merely a large enough
bound convenient for the construction),
\item[$ii_t$.]
all equivalence classes of $\rho_{\ttp,t}$ are singleton sets
except possibly the equivalence class of $0$.
\item[$iii_t$.]
if $t>0$ then $\rho_{\ttp,t}$ contains $\rho_{\ttp,t-1}$.
\item[$iv_t$.]
$w_{\ttp,t}$ is a bijection between $\{1,...,n_{\ttp,t}\}$
and the set of points $s\in\{1,...,2^t-1\}$ such that $\{s\}$ is a
singleton class of $\rho_{\ttp,t}$
(in case $n_{\ttp,t}=0$ then $w_{\ttp,t}$ is the empty map).
\end{enumerate}
First, $w_{\ttp,0}$ is the empty map and $\rho_{\ttp,0}=\{(0,0)\}$,
i.e. the trivial equivalence relation on $\{0\}$.
\medskip\\
The inductive construction of the $\rho_{\ttp,t}$'s uses the
above conditions $i_t\mbox{-} iv_t$ as an induction hypothesis.
\medskip\\
{\em Case $Approx(A',t+1)\cap\Omega_{\ttp,t}(t)
=Approx(A',t)\cap\Omega_{\ttp,t}(t)$.}
\\
Then the computation ${\cal C}_{\ttp,t}$ is totally compatible with
${\cal C}_{\ttp,t+1}$.
Now, that last computation may possibly output one more digit $1$,
i.e. $n_{\ttp,t+1}=n_{\ttp,t}$ or $n_{\ttp,t+1}=n_{\ttp,t}+1$.
Hence the two following subcases.
\medskip\\
{\em Subcase $n_{\ttp,t+1}=n_{\ttp,t}$.}
Then $\rho_{\ttp,t+1}$ is obtained from $\rho_{\ttp,t}$ by putting
$2^t,2^t+1,...,2^{t+1}-1$ as new points in the class of $0$.
In particular,
$\rho_{\ttp,t+1}$ and $\rho_{\ttp,t}$ have the same index.
We also set $w_{\ttp,t+1}=w_{\ttp,t}$.
\medskip\\
{\em Subcase $n_{\ttp,t+1}=n_{\ttp,t}+1$.}
Then $\rho_{\ttp,t+1}$ is obtained from $\rho_{\ttp,t}$ as follows:
\begin{itemize}
\item
Add a new singleton class $\{2^t\}$.
\item
Put $2^t+1,...,2^{t+1}-1$ as new points in the class of $0$.
\end{itemize}
We also set $w_{\ttp,t+1}=w_{\ttp,t}\cup\{(n_{\ttp,t+1},2^t)\}$.
\medskip\\
In both subcases, conditions $i_{t+1}\mbox{-} iv_{t+1}$ are
clearly satisfied.
\medskip\\
{\em Case $Approx(A',t+1)\cap\Omega_{\ttp,t}(t)
\neq Approx(A',t)\cap\Omega_{\ttp,t}(t)$.}
\\
Let $\tau\leq t$ be least such that
$Approx(A',t+1)\cap\Omega_{\ttp,t}(\tau)
\neq Approx(A',t)\cap\Omega_{\ttp,t}(\tau)$.
Though the computation ${\cal C}_{\ttp,t}$ is not entirely
compatible with ${\cal C}_{\ttp,t+1}$, it is compatible up to
step $\tau-1$.
\\
Let $n\leq n_{\ttp,t}$ be greatest such that $OT_{\ttp,t}(n)<\tau$.
Then the $n$ first digits output by ${\cal C}_{\ttp,t}$ are also
output by ${\cal C}_{\ttp,t+1}$ at the same computation steps.
In particular, $n_{\ttp,t+1}\geq n$.
\\
Then $\rho_{\ttp,t+1},w_{\ttp,t+1}$ are obtained from
$\rho_{\ttp,t},w_{\ttp,t}$ as follows:
\begin{itemize}
\item
Put all $w_{\ttp,t}(m)$, where $n<m\leq n_{\ttp,t}$, as new points
in the class of $0$. This annihilates the singleton classes of
$\rho_{\ttp,t}$ corresponding (via $w_{\ttp,t}(m)$) to the part of
the output which was created by answers of oracle $Approx(A',t)$
which are known to be false at step $t+1$.
\item
Add a new singleton class $\{2^t-1+i\}$ for each $i>0$ such that
$n+i\leq n_{\ttp,t+1}$.
Together with the singleton classes of $\rho_{\ttp,t}$ which have
not been aggregated by the above point, this yields exactly
$n_{\ttp,t+1}$ singleton classes in $\rho_{\ttp,t+1}$.
\\
Accordingly, set
$$w_{\ttp,t+1}=(w_{\ttp,t}\!\upharpoonright \! \{1,...,n\})\
\cup\ \{(n+i,2^t-1+i):0<i\leq n_{\ttp,t+1}-n\}$$
\item
Put the $2^t-1+j$'s, where $n_{\ttp,t+1}-n<j\leq 2^t$,
as new points in the class of $0$.
\end{itemize}
Again, conditions $i_{t+1}\mbox{-} iv_{t+1}$ are clearly satisfied.
\medskip\\
Let $\rho_\ttp=\bigcup_{t\in\bbbn}\rho_{\ttp,t}$.
Condition $iii_t$ ensures that $\rho_\ttp$ is also an equivalence
relation.
Condition $ii_t$ goes through the limit when $t\to+\infty$,
so that all classes of $\rho_\ttp$ are singleton sets except
the class of $0$.
\medskip\\
The computation we are really interested in is the one which gives
$F(\ttp)$, i.e. the infinite computation of ${\cal M}$
on input $\ttp$ with oracle $A'$.
Let's denote it ${\cal C}_\ttp$.
When $t$ increases, the common part of ${\cal C}_\ttp$ with
computation ${\cal C}_{\ttp,t}$ gets larger and larger
(though not monotonically).
\medskip\\
We now prove the equality
$$(\dagger)\ \ \ index(\rho_\ttp)=\left\{\begin{array}{ll}
1+F(\ttp)&\mbox{if $F(\ttp)$ is defined}\\
+\infty&\mbox{otherwise}
\end{array}\right.$$
\medskip\\
{\em Case $F(\ttp)$ is defined and $F(\ttp)=z$.}
\\Let $\tau$ be the computation time at which ${\cal C}_\ttp$ has
output $1^z$. Let $\Omega_\ttp$ be the set of $k$ such that oracle
$A'$ has been questioned about $k$ during the first $\tau$ steps of
${\cal C}_\ttp$.
For $t$ large enough, say $t\geq t_z$, we have
$Approx(A',t)\cap\Omega_\ttp=A'\cap\Omega_\ttp$.
In particular, the $\tau$ first steps of ${\cal C}_{\ttp,t}$
and ${\cal C}_\ttp$ will be exactly the same and both computations
output $1^z$.
The same with the $\tau$ first steps of ${\cal C}_{\ttp,t}$
and ${\cal C}_{\ttp,t+1}$.
\\Thus,
$w_{\ttp,t+1}\!\upharpoonright \!\{1,...,z\}=w_{\ttp,t}\!\upharpoonright \!\{1,...,z\}$.
\\
Let $w_\ttp=w_{\ttp,t+1}\!\upharpoonright \!\{1,...,z\}$.
Then all singleton sets $\{w_\ttp(i)\}$, where $1\leq i\leq z$,
are equivalence classes for the $\rho_{\ttp,t}$'s, hence for
$\rho_\ttp$.
\medskip\\
Now, if $n_{\ttp,t}>z$ then oracle $Approx(A',t)$ has been
questioned on $\Omega_{\ttp,t}(n_{\ttp,t})$ and differs from $A'$
on that set.
Let $u>t$ be first such that $Approx(A',u)$ agrees with $A'$
on $\Omega_{\ttp,t}(z+1)$.
Then the singleton class $\{w_{\ttp,t}(z+1)\}$ of
$\rho_{\ttp,t}$ is aggregated at step $u$ to the class of $0$
in $\rho_{\ttp,u}$, hence also in $\rho_\ttp$.
\medskip\\
Thus, the $\{w_\ttp(i)\}$'s, where $1\leq i\leq z$,
are the sole singleton equivalence classes of $\rho_\ttp$.
And the class of $0$ contains all other points in $\bbbn$.
\\
In particular, $index(\rho_\ttp)=1+F(\ttp)$.
\medskip\\
{\em Case $F(\ttp)$ is undefined because the output of ${\cal M}$
on input $\ttp$ with oracle $A'$ is infinite.}
\\
As in the above case, we see that there are more and more singleton
classes of $\rho_{\ttp,t}$ which are never annihilated.
Thus, the index of $\rho_\ttp$ is infinite.
\medskip\\
This proves $(\dagger)$.
\medskip\\
Observing that the whole construction of the $\rho_{\ttp,t}$'s is
$A$-recursive, we see that
$$\rho=\bigcup_{\ttp\in{\{0,1\}}^{<\omega}}\rho_\ttp$$
is $A$-r.e.
Thus, $\rho=W^{A,{\{0,1\}}^{<\omega}\times\bbbn^2}_{\tt a}}\newcommand{\ttb}{{\tt b}$ for some ${\tt a}}\newcommand{\ttb}{{\tt b}$.
The parameter property gives a total $A$-recursive function
$s:{\{0,1\}}^{<\omega}\times{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such that
$$\rho_\ttp=W^{A,\bbbn^2}_{s({\tt a}}\newcommand{\ttb}{{\tt b},\ttp)}$$
Thus, $\ttp\mapsto index(\rho_\ttp)$ is indeed in
$index\circ{\cal F}^{RE^A(\bbbx^2)}$.
Thanks to $(\dagger)$, the same is true of $1+F$.
\medskip\\
C. {\em Inclusion\ \ $\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]+1
\subseteq index\circ{\cal F}^{RE^A(\bbbx^2)}$ is strict.}\\
The constant $0$ function is an obvious counterexample
to equality.
\medskip\\
D. {\em Inclusion\ \ $index\circ{\cal PF}^{RE^A(\bbbx^2)}
\subseteq\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]$ is strict.}\\
We exhibit a function $\kappa_X$ in
$\PR[A']\setminus index\circ{\cal PF}^{RE^A(\bbbx^2)}$.
\\
Let $X\subset{\{0,1\}}^{<\omega}$ be $A'$-recursive, i.e. $\Delta^{0,A}_2$,
but not a boolean combination of $\Sigma^{0,A}_1$ sets.
Let $\kappa_X:{\{0,1\}}^{<\omega}\to\bbbn$ be the $\{0,1\}$-valued
characteristic function of $X$.
Then $\kappa_X$ is $A'$-recursive
(hence in $\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]$)
and $\kappa_X^{-1}(0)=X$ is a $\Delta^{0,A}_2$ set which is not
a boolean combination of $\Sigma^{0,A}_1$ sets.
\medskip\\
Now, suppose $G$ is in $index\circ{\cal PF}^{RE^A(\bbbx^2)}$ and
$G(\ttp)=index(W^{A,\bbbx^2}_{g(\ttp)})$ where $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ is
in $\PR[A]$. Then
\begin{eqnarray*}
G(\ttp)=0&\Leftrightarrow&(\mbox{$g(\ttp)$ is defined}\
\wedge\ W^{A,\bbbx^2}_{g(\ttp)}=\emptyset)
\\
&\Leftrightarrow&(\mbox{$g(\ttp)$ is defined}\\
&&\wedge\ \forall t\ \forall {\tt e}}\newcommand{\ttf}{{\tt f}\
(g(\ttp) \mbox{ converges to ${\tt e}}\newcommand{\ttf}{{\tt f}$ in $t$ steps }
\Rightarrow\ W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}=\emptyset))
\end{eqnarray*}
so that $G^{-1}(0)$ is $\Sigma^{0,A}_1\wedge\Pi^{0,A}_1$.
\medskip\\
This shows that no $G\in index\circ{\cal PF}^{RE^A(\bbbx^2)}$ can be
equal to the above $\kappa_X$.
Therefore, the considered inclusion cannot be an equality.
\end{proof}
Let's finally observe a simple fact contrasting inclusions in
Thm.\ref{thm:index}.
\begin{proposition}\label{p:indexnoninclusion}
$1+\PR[A',{\{0,1\}}^{<\omega}\to\bbbn]$ (a fortiori $1+\maxprAj[{\{0,1\}}^{<\omega}\to\bbbn]$)
is not included in $index\circ{\cal PF}^{RE^A(\bbbx^2)}$.
\end{proposition}
\begin{proof}
The proof is analogous to that of point D in the proof of
Thm.\ref{thm:index}.
\medskip\\
1. We show that $G^{-1}(1)$ is $\Pi^{0,A}_2$ for every
$G\in index\circ{\cal PF}^{RE^A(\bbbx^2)}$.
\\
Suppose $G(\ttp)=index(W^{A,\bbbx^2}_{g(\ttp)})$ where $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
is partial $A$-recursive.
\\
Let's denote by $W^{A,\bbbx^2}_{{\tt e}}\newcommand{\ttf}{{\tt f},t}$ the finite part of
$W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}$ obtained after $t$ steps of its enumeration.
Let's also denote by $CV_g(\ttp,{\tt e}}\newcommand{\ttf}{{\tt f},t)$ the $A$-recursive relation
stating that $g(\ttp)$ converges to ${\tt e}}\newcommand{\ttf}{{\tt f}$ in $\leq t$ steps.
Then
\begin{eqnarray*}
G(\ttp)=1&\Leftrightarrow&(\mbox{$g(\ttp)$ is defined}\
\wedge\ W^{A,\bbbx^2}_{g(\ttp)}\neq\emptyset\
\\&&\wedge\ W^{A,\bbbx^2}_{g(\ttp)}
\mbox{ is an equivalence relation with index $1$})
\\
&\Leftrightarrow&(\mbox{$g(\ttp)$ is defined}\
\wedge\ W^{A,\bbbx^2}_{g(\ttp)}\neq\emptyset\
\\
&&\wedge\ \forall t\ \forall {\tt e}}\newcommand{\ttf}{{\tt f}\
(CV_g(\ttp,{\tt e}}\newcommand{\ttf}{{\tt f},t)\ \Rightarrow
\\&&\hspace{2cm}W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}
\mbox{ is an equivalence relation with index $1$})
\end{eqnarray*}
The first two conjuncts are clearly $\Sigma^{0,A}_1$.
As for the last one, observe that $W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}$
is an equivalence relation if and only if
\begin{eqnarray*}
&\forall \ttx,{\tt y}}\newcommand{\ttz}{{\tt z}\in\bbbx\
((\ttx,{\tt y}}\newcommand{\ttz}{{\tt z})\in W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}\ \Rightarrow\
(\ttx,\ttx)\in W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}\ \wedge\
({\tt y}}\newcommand{\ttz}{{\tt z},\ttx)\in W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f})&
\\
&\wedge\ \forall \ttx,{\tt y}}\newcommand{\ttz}{{\tt z},\ttz\in\bbbx\
((\ttx,{\tt y}}\newcommand{\ttz}{{\tt z})\in W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}\ \wedge\
({\tt y}}\newcommand{\ttz}{{\tt z},\ttz)\in W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f})\ \Rightarrow\
(\ttx,\ttz)\in W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f})&
\end{eqnarray*}
Which is $\Pi^{0,A}_2$ since $W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}$ is
$\Sigma^{0,A}_1$.
\\
Also, if $W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}$ is a non empty equivalence relation
then it has index $1$ if and only if
\begin{eqnarray*}
&\forall \ttx,{\tt y}}\newcommand{\ttz}{{\tt z},\ttx',{\tt y}}\newcommand{\ttz}{{\tt z}'\in\bbbx\
((\ttx,\ttx')\in W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f}\ \wedge\
({\tt y}}\newcommand{\ttz}{{\tt z},{\tt y}}\newcommand{\ttz}{{\tt z}')\in W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f})\ \Rightarrow\
(\ttx,{\tt y}}\newcommand{\ttz}{{\tt z})\in W^{A,\bbbx^2}_{\tt e}}\newcommand{\ttf}{{\tt f})&
\end{eqnarray*}
Which is again $\Pi^{0,A}_2$.
\medskip\\
This proves that $G^{-1}(1)$ is indeed $\Pi^{0,A}_2$.
\medskip\\
2. Now, let $X\subset\bbbx$ be $\Sigma^{0,A'}_1$ and not
$A'$-recursive.
Thus, $X$ is $\Sigma^{0,A}_2$ and not $\Pi^{0,A}_2$.
Let $\pi_X:{\{0,1\}}^{<\omega}\to\bbbn$ be such that
$$\pi_X(\ttp)=\left\{\begin{array}{ll}
1&\mbox{if $\ttp\in X$}\\
\mbox{undefined}&\mbox{otherwise}\end{array}\right.$$
Then $\pi_X\in1+\PR[A',{\{0,1\}}^{<\omega}\to\bbbn]$.
\medskip\\
Since $\pi_X^{-1}(1)=X$ is not $\Pi^{0,A}_2$, $\pi_X$ cannot be in
$index\circ{\cal PF}^{RE^A(\bbbx^2)}$.
\end{proof}
\subsection{Characterization of the $\Delta\mathit{index}$ self-enumerated
systems} \label{ss:DeltaIndex}
\begin{theorem}\label{thm:DeltaIndex}$\\ $
Let $A\subseteq\bbbn$ and let $A''$ be the second jump of $A$.
Let $\bbbx$ be a basic set.
\medskip\\
{\bf 1.}\ \
$\Delta(index\circ{\cal F}^{RE^A(\bbbx)})
=\Delta(index\circ{\cal PF}^{RE^A(\bbbx)})
=\PR[A'',{\{0,1\}}^{<\omega}\to\bbbz]$
\medskip\\
{\bf 2.}
$K^\bbbz_{\Delta(index\circ{\cal F}^{RE^A(\bbbx)})}
=_{\rm ct} K^{A'',\bbbz}$.
\medskip\\
We shall simply write $K^{\bbbn,A}_{\Delta index}$
in place of
$K^\bbbz_{\Delta(index\circ{\cal F}^{RE^A(\bbbn)})}\!\upharpoonright \!\bbbn$.
\\
When $A=\emptyset$ we simply write $K^{\bbbz}_{\Delta index}$.
\end{theorem}
\begin{proof}
Point 2 is a direct corollary of Point 1.
Let's prove point 1.
Using Thm.\ref{thm:index}, and applying the $\Delta$ operator,
we get
\medskip\\
$\Delta(\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn]+1)
\subseteq
\Delta(index\circ{\cal F}^{RE^A(\bbbx^2)})$
\hfill{$\subseteq
\Delta(index\circ{\cal PF}^{RE^A(\bbbx^2)})
\subseteq
\Delta(\maxrAj[{\{0,1\}}^{<\omega}\to\bbbn])$}
\medskip\\
But, for any family ${\cal G}$ of functions ${\{0,1\}}^{<\omega}\to\bbbn$,
we trivially have $\Delta({\cal G}+1)=\Delta({\cal G})$.
This proves that the above inclusions are, in fact, equalities.
We conclude with Thm.\ref{thm:Deltamax}.
\end{proof}
\section{Functional representations of $\bbbn$} \label{s:church}
\begin{notation}[Functions sets]\label{not:general2}
We denote
\\\indent -\ \ $Y^X$ the set of total functions from $X$ into $Y$.
\\\indent -\ \ $X\to Y$ the set of partial functions from $X$
into $Y$.
\\\indent -\ \ $X\stackrel{1-1}{\to}X$ the set of injective
partial functions from $X$ into $X$.
\\\indent -\ \ $Id_X$ the identity function over $X$.
\end{notation}
\subsection{Basic Church representation of $\bbbn$}
\label{ss:church}
First, let's introduce some simple notations related to function
iteration.
\begin{definition}[Iteration]\label{def:it}$\\ $
1) If $f:X\to X$ is a partial function, we inductively define
for $n\in\bbbn$ the $n$-th iterate $f^{(n)}:X\to X$ of $f$
as the partial function such that:
$$f^{(0)}=Id_X \ ,\ f^{(n+1)}=f^{(n)}\circ f$$
\medskip
2) $It^{(n)}_X:(X\to X)\to(X\to X)$
is the total functional $f\mapsto f^{(n)}$.
\\
$It_X^\bbbn:\bbbn \to (X\to X)^{(X\to X)}$
is the total functional $n\mapsto It_X^n$.
\end{definition}
The following Proposition is easy.
\begin{proposition}\label{p:injectiveIt}
The total functional $It^\bbbn_X:\bbbn \to (X\to X)^{(X\to X)}$
is injective (hence admits a left inverse) if and only if $X$ is
an infinite set.
\end{proposition}
We can now come to the functional representation of integers
introduced by Church, 1933 \cite{church33}.
\begin{definition}[Church representation of $\bbbn$]
\label{def:Church}$\\ $
If $X$ is an infinite set, the Church representation of $\bbbn$
relative to $X$ is the function
$$\mathit{Church}^{\bbbn}_X:(X\to X)^{(X\to X)}\to\bbbn$$
which is the unique left inverse of $It_X^\bbbn$ with
domain $Range(It_X^\bbbn)=\{It_X^n:n\in\bbbn\}$, i.e.
\begin{eqnarray*}
\mathit{Church}^{\bbbn}_X\circ It_X^\bbbn&=&Id_\bbbn\\
\mathit{Church}^{\bbbn}_X(F)&=&\left\{
\begin{array}{ll}
n&\mbox{if $F=It_X^n$}\\
\mbox{undefined}&\mbox{if $\forall n\in\bbbn\ F\neq It_X^n$}
\end{array}\right.
\end{eqnarray*}
\end{definition}
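As an informal illustration of the left-inverse property (not the set-theoretic functional itself): an integer $n$ is represented by the functional $f\mapsto f^{(n)}$, and it is recovered by applying the functional to the successor map on $\bbbn$ and evaluating at $0$. A Python sketch, with names of our own choosing:

```python
def It(n):
    """The functional It^(n): f |-> f^(n) (n-th iterate), for total f."""
    def functional(f):
        def fn(x):
            for _ in range(n):
                x = f(x)
            return x
        return fn
    return functional

def church_decode(F):
    """Left inverse on functionals of the form It(n): evaluating on
    the successor map at 0 recovers n."""
    return F(lambda x: x + 1)(0)

assert church_decode(It(0)) == 0
assert church_decode(It(7)) == 7
assert It(3)(lambda x: 2 * x)(1) == 8   # doubling iterated 3 times on 1
```

The round trip `church_decode(It(n)) == n` is exactly the identity $\mathit{Church}^{\bbbn}_X\circ It_X^\bbbn=Id_\bbbn$, specialized to $X=\bbbn$.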
For future use in Def.\ref{def:effChurch}, let's introduce the following
variant of $\mathit{Church}^{\bbbn}_X$.
\begin{definition}\label{def:church}
We denote
$church^{\bbbn,A}_X:(\PR[A,\bbbx\to\bbbx])^{\PR[A,\bbbx\to\bbbx]}\to\bbbn$
the functional which is the unique left inverse of the restriction
of $It^\bbbn_X$ to $(\PR[A,\bbbx\to\bbbx])^{\PR[A,\bbbx\to\bbbx]}$,
i.e.
\begin{eqnarray*}
\mathit{church}^{\bbbn,A}_X(F)&=&\left\{
\begin{array}{ll}
n&\mbox{if }
F=It_X^{(n)}\!\upharpoonright \!\PR[A,\bbbx\to\bbbx]\\
\mbox{undefined}&\mbox{if }
\forall n\in\bbbn\ F\neq
It_X^{(n)}\!\upharpoonright \!\PR[A,\bbbx\to\bbbx]
\end{array}\right.
\end{eqnarray*}
\end{definition}
\subsection{Computable and effectively continuous functionals}
\label{ss:computfunctionals}
We recall the two classical notions of partial computability
for functionals,
cf. Odifreddi's book \cite{odifreddi} p.178, 188, 197.
\begin{definition}[Kleene partial computable functionals]
\label{def:Kleene}$\\ $
{\bf 1.}
Let $\bbbx,\mathbb{Y},\mathbb{S},\bbbt$ be basic spaces and fix some
suitable representations of their elements by words.
An $(\bbbx\to\mathbb{Y})$-oracle Turing machine with inputs and outputs
respectively in $\mathbb{S},\bbbt$ is a Turing machine
${\cal M}$ which has a special oracle tape and is allowed at
certain states to ask an oracle $f\in(\bbbx\to\mathbb{Y})$ what are
the successive digits of the value of $f({\tt q})$ where ${\tt q}$
is the element of $\bbbx$ currently written on the oracle tape.
\\
The functional $\Phi_{\cal M}:((\bbbx\to\mathbb{Y})\times\mathbb{S})\to\bbbt$
associated to ${\cal M}$ maps the pair $(f,{\tt s})$ to the output
(when defined) computed by ${\cal M}$ when $f$ is given as the partial
function oracle and ${\tt s}$ as the input.
\\
If on input $\ttx$ and oracle $f$ the computation asks the oracle
its value on an element on which $f$ is undefined, then ${\cal M}$
gets stuck, so that $\Phi_{\cal M}(f,\ttx)$ is undefined.
\medskip\\
{\bf 2.}
A functional $\Phi:((\bbbx\to\mathbb{Y})\times\mathbb{S})\to\bbbt$ is partial
computable (also called partial recursive) if $\Phi=\Phi_{\cal M}$
for some ${\cal M}$.
\\
A functional obtained via currying from such a functional is
also called partial computable.
\medskip\\
We denote $\mathit{PC}^\tau$ the family of partial computable
functionals with type $\tau$.
\\
If $A\subseteq\bbbn$, we denote $A\mbox{-}\mathit{PC}^\tau$ the analog
family with the extra oracle $A$.
\end{definition}
\begin{definition}[Uspenskii (effectively) continuous functionals]
\label{def:Uspenskii}
Denote $Fin(\bbbx\to\mathbb{Y})$ the class of partial functions
$\bbbx\to\mathbb{Y}$ with finite domains.
Observe that $\alpha,\beta\in Fin(\bbbx\to\mathbb{Y})$ are
compatible if and only if $\alpha\cup\beta\in Fin(\bbbx\to\mathbb{Y})$.
\medskip\\
{\bf 1.}
Let's say that the relation
$R\subseteq Fin(\bbbx\to\mathbb{Y})\times\mathbb{S}\times\bbbt$
is functional if
$$\alpha\cup\beta\in Fin(\bbbx\to\mathbb{Y})\ \wedge\
(\alpha,{\tt s},\ttt)\in R\ \wedge\ (\beta,{\tt s},\ttt')\in R
\ \Rightarrow\ \ttt=\ttt'$$
To such a functional relation $R$ can be associated a functional
$$\Phi_R:((\bbbx\to\mathbb{Y})\times\mathbb{S})\to\bbbt$$
such that, for every $f,{\tt s},\ttt$,
\medskip\\\medskip\centerline{
$\begin{array}{crcl}
(\dagger)\indent\indent&\Phi_R(f,{\tt s})=\ttt&\Leftrightarrow&
\exists u\subseteq f\ R(u,{\tt s},\ttt)
\end{array}$}
{\bf 2.} {\bf (Uspenskii \cite{uspenskii55}, Nerode \cite{nerode57})}
A functional $\Phi:((\bbbx\to\mathbb{Y})\times\mathbb{S})\to\bbbt$ is
{\bf continuous} if it is of the form $\Phi_R$ for some functional
relation $R$.
\medskip\\
$\Phi$ is {\bf effectively continuous}
(resp. {\bf $A$-effectively continuous})
if $R$ can be chosen r.e. (resp. $A$-r.e.).
Effectively continuous functionals are also called recursive
operators (cf. Rogers \cite{rogers}, Odifreddi \cite{odifreddi}).
\\
A functional obtained via currying from such a functional is
also called effectively continuous.
\\
We denote $\mathit{EffCont}^\tau$ the family of effectively continuous
functionals with type $\tau$.
\\
If $A\subseteq\bbbn$, we denote $A\mbox{-}\mathit{EffCont}^\tau$ the analog
family with the extra oracle $A$.
\end{definition}
Effective continuity is more general than partial computability
(cf. \cite{odifreddi} p.188).
\begin{theorem}\label{thm:KleeneUspenskii}
Let $A\subseteq\bbbn$.\\
{\bf 1.} {\bf (Uspenskii \cite{uspenskii55}, Nerode \cite{nerode57})}
Partial $A$-computable functionals are $A$-effectively continuous.
\medskip\\
{\bf 2.} {\bf (Sasso \cite{sasso71,sasso75})}
There are $A$-effectively continuous functionals which are not
partial $A$-computable.
\end{theorem}
However, restricted to total functions, both notions coincide.
\begin{proposition}
A functional $\Phi:(\mathbb{Y}^{\bbbx})\times\mathbb{S}\to\bbbt$
is the restriction of a partial $A$-computable functional
$((\bbbx\to\mathbb{Y})\times\mathbb{S})\to\bbbt$ if and only if it is
the restriction of an $A$-effectively continuous functional.
\end{proposition}
\subsection{Effectiveness of the {\em Apply} functional}
\label{ss:apply}
The following result will be used in
\S\ref{ss:syntaxChurch}-\ref{ss:effectiveChurch}.
\begin{proposition}\label{p:apply}
Let $\phi:{\{0,1\}}^{<\omega}\to \PR[A,\bbbx\to\bbbx]$ be partial $A$-recursive
(as a function ${\{0,1\}}^{<\omega}\times\bbbx\to\bbbx$)
and let
$\Phi\in
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}$.
There exists a partial $A$-recursive function
$g:{\{0,1\}}^{<\omega}\times{\{0,1\}}^{<\omega}\times\bbbx\to\bbbx$ such that,
for all ${\tt e},\ttp\in{\{0,1\}}^{<\omega}$ and $\ttx\in\bbbx$,
$$(*)\indent\indent\indent
g(\ttp,{\tt e},\ttx)=(\Phi({\tt e})(\phi(\ttp)))(\ttx)$$
\end{proposition}
\begin{proof}
Let $R\subseteq
{\{0,1\}}^{<\omega}\times Fin(\bbbx\to\bbbx)\times\bbbx\times\bbbx$
be an $A$-r.e. set such that, for all ${\tt e}$,
$R^{({\tt e})}=\{(\alpha,\ttx,{\tt y}):({\tt e},\alpha,\ttx,{\tt y})\in R\}$
is functional and $\Phi({\tt e})=\Phi_{R^{({\tt e})}}$.
We define $g(\ttp,{\tt e},\ttx)$ as follows:
\begin{enumerate}
\item[i.]
$A$-effectively enumerate $R^{({\tt e})}$ and the graph of
$\phi(\ttp)$ up to the moment we get some
$(\alpha,\ttx,{\tt y})\in R^{({\tt e})}$ and a finite part
$\gamma$ of $\phi(\ttp)$ such that $\alpha\subseteq\gamma$.
\item[ii.]
If and when i halts, output ${\tt y}$.
\end{enumerate}
It is clear that $g$ is partial $A$-recursive and satisfies $(*)$.
\end{proof}
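The dovetailed search of step i can be sketched as follows (a Python toy in which finite lists stand in for the $A$-r.e. enumerations; with genuine enumerations the search may of course diverge, and all names here are ours):

```python
from itertools import count

def search(enum_R, enum_graph, x):
    """Dovetail two enumerations: triples (alpha, x0, y) of a functional
    relation, and pairs (u, v) of the graph of phi(p). Output y as soon
    as some alpha is contained in the enumerated part of the graph."""
    seen_triples, seen_graph = [], {}
    for t in count():
        if t < len(enum_R):
            seen_triples.append(enum_R[t])
        if t < len(enum_graph):
            u, v = enum_graph[t]
            seen_graph[u] = v
        for alpha, x0, y in seen_triples:
            if x0 == x and all(seen_graph.get(u) == v
                               for u, v in alpha.items()):
                return y
        if t >= len(enum_R) and t >= len(enum_graph):
            return None  # finite stand-ins exhausted: "undefined"

# the triple below says: if f extends {0:1, 1:2} then output 2 at input 0
search([({0: 1, 1: 2}, 0, 2)], [(0, 1), (1, 2)], 0)  # 2
```

Functionality of $R^{({\tt e})}$ is what guarantees that the first match found is the right value, whatever the interleaving of the two enumerations.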
\subsection{Functionals over $\PR[\bbbx\to\mathbb{Y}}\newcommand{\bbbz}{\mathbb{Z}]$
and computability}
\label{ss:computfunctionalsPR}
Using indexes, one can also consider computability for functionals
operating on the sole partial recursive or $A$-recursive functions.
\begin{definition}
Let $A\subseteq\bbbn$ and let
$(\varphi^{\bbbx\to\mathbb{Y},A}_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$ denote
some acceptable enumeration of $\PR[A,\bbbx\to\mathbb{Y}]$
(cf. Def.\ref{def:acceptable}).
\medskip\\
{\bf 1.}
A functional $\Phi:\PR[A,\bbbx\to\mathbb{Y}]\times\mathbb{S}\to\bbbt$
is an $A$-effective functional on partial $A$-recursive functions if
there exists some partial $A$-recursive function
$f:{\{0,1\}}^{<\omega}\times\mathbb{S}\to\bbbt$ such that, for all ${\tt s}\in\mathbb{S},{\tt e}\in{\{0,1\}}^{<\omega}$,
$$\Phi(\varphi^{\bbbx\to\mathbb{Y},A}_{\tt e},{\tt s})=f({\tt e},{\tt s})$$
We denote
$A\mbox{-}\mathit{Eff}^{\PR[A,\bbbx\to\mathbb{Y}]\times\mathbb{S}\to\bbbt}$
the family of such functionals.
\medskip\\
{\bf 2.}
We denote
$A\mbox{-}\mathit{Eff}^{\PR[A,\bbbx\to\mathbb{Y}]\times\mathbb{S}_1
\to \PR[A,\mathbb{S}_2\to\bbbt]}$
the family of functionals obtained by currying from the above
class with $\mathbb{S}=\mathbb{S}_1\times\mathbb{S}_2$.
\\
An easy application of the parameter property shows that these
functionals are exactly those for which there exists some partial
$A$-recursive function $g:{\{0,1\}}^{<\omega}\times\mathbb{S}_1\to{\{0,1\}}^{<\omega}$ such that,
for all ${\tt s}_1\in\mathbb{S}_1,{\tt e}\in{\{0,1\}}^{<\omega}$,
$$\Phi(\varphi^{\bbbx\to\mathbb{Y},A}_{\tt e},{\tt s}_1)
=\varphi^{\mathbb{S}_2\to\bbbt,A}_{g({\tt e},{\tt s}_1)}$$
\end{definition}
\begin{note}$\\ $
{\bf 1.}
Thanks to Rogers' theorem (cf. Thm.\ref{thm:rogers}), the above
definition does not depend on the chosen acceptable enumerations.
\medskip\\
{\bf 2.}
The above functions $f,g$ should have the following properties:
\begin{eqnarray*}
\varphi^{\bbbx\to\mathbb{Y},A}_{\tt e}=\varphi^{\bbbx\to\mathbb{Y},A}_{{\tt e}'}
&\Rightarrow&f({\tt e},{\tt s})=f({\tt e}',{\tt s})
\\
\varphi^{\bbbx\to\mathbb{Y},A}_{\tt e}=\varphi^{\bbbx\to\mathbb{Y},A}_{{\tt e}'}
&\Rightarrow&
\varphi^{\mathbb{S}_2\to\bbbt,A}_{g({\tt e},{\tt s}_1)}
=\varphi^{\mathbb{S}_2\to\bbbt,A}_{g({\tt e}',{\tt s}_1)}
\end{eqnarray*}
\end{note}
As shown by the following remarkable result, such functionals
essentially reduce to those of Def.\ref{def:Uspenskii}
(cf. Odifreddi's book \cite{odifreddi} p.206--208).
\begin{theorem}[Uspenskii \cite{uspenskii55}, Myhill \& Shepherdson
\cite{myhillshepherdson}]\label{thm:uspenskii}$\\ $
Let $A\subseteq\bbbn$.
The $A$-effective functionals
$\PR[A,\bbbx\to\mathbb{Y}]\to \PR[A,\mathbb{S}\to\bbbt]$
are exactly the restrictions to $\PR[A,\bbbx\to\mathbb{Y}]$ of
$A$-effectively continuous functionals
$(\bbbx\to\mathbb{Y})\to(\mathbb{S}\to\bbbt)$.
\end{theorem}
\subsection{Effectivizations of Church representation of $\bbbn$}
\label{ss:effectiveChurch}
Observe the following trivial fact (which uses notations from
Def.\ref{def:Kleene},\ref{def:Uspenskii}).
\begin{proposition}\label{prop:chuchsystems}
Let $A\subseteq\bbbn$ and $\tau$ be any second order type.
\\
Functionals in $A\mbox{-}\mathit{PC}^{{\{0,1\}}^{<\omega}\to\tau}$
(resp. $A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to\tau}$)
are total maps ${\{0,1\}}^{<\omega}\to A\mbox{-}\mathit{PC}^{\tau}$
(resp. ${\{0,1\}}^{<\omega}\to A\mbox{-}\mathit{EffCont}^{\tau}$).
\end{proposition}
\begin{theorem}\label{thm:chuchsystems}
Let $\tau$ be any second order type.
The systems
$$(A\mbox{-}\mathit{PC}^\tau,A\mbox{-}\mathit{PC}^{{\{0,1\}}^{<\omega}\to\tau})
\ \ \ ,
\ \ \ (A\mbox{-}\mathit{EffCont}^\tau,A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to\tau})$$
are self-enumerated representation $A$-systems.
\end{theorem}
\begin{proof}
Points i-ii of Def.\ref{def:self} are trivial.
As for point iii, we use the classical enumeration theorem for
partial computable (resp. effectively continuous) functionals:
consider a function
$V\in A\mbox{-}\mathit{PC}^{{\{0,1\}}^{<\omega}\to({\{0,1\}}^{<\omega}\to\tau)}$ which
enumerates $A\mbox{-}\mathit{PC}^{{\{0,1\}}^{<\omega}\to\tau}$ and set
$U(c({\tt e},\ttp))=V({\tt e})(\ttp)$.
Idem with $A\mbox{-}\mathit{EffCont}$.
\end{proof}
As an easy corollary of Thms.\ref{thm:chuchsystems} and
\ref{thm:uspenskii}, we get the following result.
\begin{theorem}\label{thm:chuchsystems2}
Let $A\subseteq\bbbn$. Let
$A\mbox{-}
\mathit{Eff}^{{\{0,1\}}^{<\omega}\to (\PR[A,\bbbx\to\mathbb{Y}]\times\mathbb{S}\to\bbbt)}$
be obtained by currying from
$A\mbox{-} \mathit{Eff}^{(\PR[A,\bbbx\to\mathbb{Y}]\times\mathbb{S}\times{\{0,1\}}^{<\omega})
\to\bbbt}$.
The systems
\begin{eqnarray*}
(A\mbox{-} \mathit{Eff}^{\PR[A,\bbbx\to\mathbb{Y}]\times\mathbb{S}\to\bbbt}&,&
A\mbox{-}
\mathit{Eff}^{{\{0,1\}}^{<\omega}\to (\PR[A,\bbbx\to\mathbb{Y}]\times\mathbb{S}\to\bbbt)})
\\
(A\mbox{-} \mathit{Eff}^{\PR[A,\bbbx\to\mathbb{Y}]\to \PR[A,\mathbb{S}\to\bbbt]}&,&
A\mbox{-}
\mathit{Eff}^{{\{0,1\}}^{<\omega}\to (\PR[A,\bbbx\to\mathbb{Y}]\to \PR[A,\mathbb{S}\to\bbbt])})
\end{eqnarray*}
are self-enumerated representation $A$-systems.
\end{theorem}
\begin{definition}
[Effectivizations of Church representation of $\bbbn$]
\label{def:effChurch}
We effectivize the Church representation by replacing
$(X\to X)\to(X\to X)$ by one of the following
classes:
$$A\mbox{-}\mathit{PC}^{(\bbbx\to\bbbx)\to(\bbbx\to\bbbx)}
\ ,\
A\mbox{-}\mathit{EffCont}^{(\bbbx\to\bbbx)\to(\bbbx\to\bbbx)}
\ ,\
A\mbox{-}\mathit{Eff}^{\PR[A,\bbbx\to\bbbx]\to \PR[A,\bbbx\to\bbbx]}$$
where $\bbbx$ is some basic set
and $A\subseteq\bbbn$ is some oracle.
Using Def.\ref{def:church}, this leads to three self-enumerated
systems with domain $\bbbn$ :
\begin{eqnarray*}
{\cal F}_1&=&(\bbbn\ ,\
\mathit{Church}^\bbbn_\bbbx\circ
A\mbox{-}\mathit{PC}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))})
\\
{\cal F}_2&=&(\bbbn\ ,\
\mathit{Church}^\bbbn_\bbbx\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))})
\\
{\cal F}_3&=&(\bbbn\ ,\
\mathit{church}^{\bbbn,A}_\bbbx\circ
A\mbox{-}\mathit{Eff}^{{\{0,1\}}^{<\omega}\to(\PR[A,\bbbx\to\bbbx]
\to \PR[A,\bbbx\to\bbbx])})
\end{eqnarray*}
\end{definition}
The following result greatly simplifies
the landscape.
\begin{theorem}\label{thm:allchurchequal}
The three systems ${\cal F}_1$, ${\cal F}_2$, ${\cal F}_3$ of
Def.\ref{def:effChurch} coincide.
\end{theorem}
Before proving the theorem (cf. the end of this subsection),
we state some convenient tools in the next three propositions,
the first of which will also be used in \S\ref{ss:syntaxChurch}.
\begin{proposition}\label{p:tool1}
Suppose $R\subset Fin(\bbbx\to\bbbx)\times\bbbx\times\bbbx$ is
functional (cf. Def.\ref{def:Uspenskii}).
The following conditions are equivalent
\begin{enumerate}
\item[i.]
$\Phi_R=It^{(n)}_\bbbx$
\item[ii.]
$\Phi_R\!\upharpoonright \! Fin(\bbbx\to\bbbx)
=It^{(n)}_\bbbx\!\upharpoonright \! Fin(\bbbx\to\bbbx)$
\item[iii.]
$\forall\alpha\in Fin(\bbbx\to\bbbx)\ \forall\ttx\
(\alpha^{(n)}(\ttx)\mbox{ is defined }\Rightarrow$
\hfill{$(\alpha\!\upharpoonright \!\{\alpha^{(i)}(\ttx):0\leq i<n\},
\ttx,\alpha^{(n)}(\ttx))\in R)$}
\\
and\\
$\forall\alpha\in Fin(\bbbx\to\bbbx)\ \forall\ttx\ \forall{\tt y}$
\hfill{$((\alpha,\ttx,{\tt y})\in R\ \Rightarrow\
(\alpha^{(n)}(\ttx)\mbox{ is defined }\wedge\
{\tt y}=\alpha^{(n)}(\ttx)))$}
\end{enumerate}
\end{proposition}
\begin{proof}
$iii\Rightarrow i$ and $i\Rightarrow ii$ are trivial.
\\
$ii\Rightarrow iii.$
Assume $ii$.
Suppose $(\alpha,\ttx,{\tt y})\in R$; then $\Phi_R(\alpha)(\ttx)={\tt y}$.
Since $\alpha\in Fin(\bbbx\to\bbbx)$, $ii$ insures that
$\alpha^{(n)}(\ttx)$ is defined and $\alpha^{(n)}(\ttx)={\tt y}$.
This proves the second part of $iii$.
\\
Suppose $\alpha^{(n)}(\ttx)$ is defined and let
$\alpha^{(n)}(\ttx)={\tt y}$. Then
\begin{eqnarray*}
\Phi_R(\alpha\!\upharpoonright \!\{\alpha^{(i)}(\ttx):0\leq i<n\})(\ttx)
&=&
It^{(n)}_\bbbx(\alpha\!\upharpoonright \!\{\alpha^{(i)}(\ttx):0\leq i<n\})(\ttx)
\\&=&It^{(n)}_\bbbx(\alpha)(\ttx)
\\&=&{\tt y}
\end{eqnarray*}
So that there exists a restriction $\beta$ of
$\alpha\!\upharpoonright \!\{\alpha^{(i)}(\ttx):0\leq i<n\}$ such that
$(\beta,\ttx,{\tt y})\in R$. Thus, $\Phi_R(\beta)(\ttx)={\tt y}$.
Applying $ii$, this yields that $\beta^{(n)}(\ttx)$ is defined and
$\beta^{(n)}(\ttx)={\tt y}$.
Since $\beta$ is a restriction of
$\alpha\!\upharpoonright \!\{\alpha^{(i)}(\ttx):0\leq i<n\}$ and
$\beta^{(n)}(\ttx)$ is defined, all the points
$\beta^{(i)}(\ttx)=\alpha^{(i)}(\ttx)$, $0\leq i<n$, lie in
$domain(\beta)$, which insures that
$\beta=\alpha\!\upharpoonright \!\{\alpha^{(i)}(\ttx):0\leq i<n\}$.
This proves the first part of $iii$.
\end{proof}
\begin{proposition}\label{p:tool2}
Let $n\in\bbbn$.
If $\Phi_R(f)$ is a restriction of $f^{(n)}$ for every
$f:\bbbx\to\bbbx$ then either $\Phi_R=It^{(n)}_\bbbx$ or $\Phi_R$
is not an iterator.
\end{proposition}
\begin{proof}
We reduce to the case $\bbbx=\bbbn$. Let $Succ:\bbbn\to\bbbn$ be
the successor function. Since $\Phi_R(Succ)$ is a restriction of
$Succ^{(n)}$, either $\Phi_R(Succ)(0)$ is undefined or
$\Phi_R(Succ)(0)=n$.
In both cases it is different from $Succ^{(p)}(0)$ for any
$p\neq n$. Which proves that $\Phi_R\neq It^{(p)}_\bbbn$ for
every $p\neq n$. Hence the proposition.
\end{proof}
\begin{proposition}\label{p:tool3}$\\ $
{\bf 1.}
Let $(W_{\tt e})_{{\tt e}\in{\{0,1\}}^{<\omega}}$
be an acceptable enumeration of r.e. subsets of
$Fin(\bbbx\to\bbbx)\times\bbbx\times\bbbx$.
There exists a total recursive function $\xi:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that, for all ${\tt e}$,
\begin{enumerate}
\item[a.]
$W_{\xi({\tt e})}\subseteq W_{\tt e}$ and $W_{\xi({\tt e})}$ is functional
(cf. Def.\ref{def:Uspenskii}, point 1),
\item[b.]
$W_{\xi({\tt e})}=W_{\tt e}$ whenever $W_{\tt e}$ is functional.
\end{enumerate}
In the following points, we write $R_{\tt e}$ for the functional
relation $W_{\xi({\tt e})}$.
\medskip\\
{\bf 2.}
There exists a partial recursive function $\lambda:{\{0,1\}}^{<\omega}\to\bbbn$
such that
if $R_{\tt e}$ is functional and $\Phi_{R_{\tt e}}$ is an iterator
then $\lambda({\tt e})$ is defined and
$\Phi_{R_{\tt e}}=It^{(\lambda({\tt e}))}_\bbbx$.
(However, $\lambda({\tt e})$ may be defined even if $R_{\tt e}$ is not
functional or $\Phi_{R_{\tt e}}$ is not an iterator.)
\medskip\\
{\bf 3.}
There exists a total recursive function $\theta:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that, for all ${\tt e}\in{\{0,1\}}^{<\omega}$,
\begin{enumerate}
\item[a.]
if $\Phi_{R_{\tt e}}$ is an iterator then the $(\bbbx\to\bbbx)$-oracle
Turing machine ${\cal M}_{\theta({\tt e})}$ with code $\theta({\tt e})$
(cf. Def.\ref{def:Kleene}) computes the functional $\Phi_{R_{\tt e}}$,
\item[b.]
if $\Phi_{R_{\tt e}}$ is not an iterator then neither is the functional
computed by the $(\bbbx\to\bbbx)$-oracle Turing machine
${\cal M}_{\theta({\tt e})}$ with code $\theta({\tt e})$.
\end{enumerate}
In other words,
$Church(\Phi_{R_{\tt e}})=Church(\Phi_{{\cal M}_{\theta({\tt e})}})$.
\medskip\\
{\bf 4.}
The above points relativize to any oracle $A\subseteq\bbbn$.
\end{proposition}
\begin{proof}
1. This is the classical fact underlying the enumeration theorem
for effectively continuous functionals.
To get $W_{\xi({\tt e})}$, enumerate $W_{\tt e}$ and retain a triple
if and only if, together with the already retained ones, it does not
contradict functionality
(cf. Odifreddi's book \cite{odifreddi} p.197).
\medskip\\
2. We reduce to the case $\bbbx=\bbbn$.
Let $\alpha_n:\bbbn\to\bbbn$ be such that
$$domain(\alpha_n)=\{0,...,n-1\}\ \ ,\ \ \alpha_n(i)=i+1\mbox{ for }
i=0,...,n-1$$
Suppose $R$ is functional and $\Phi_R=It^{(n)}_\bbbn$.
Prop.\ref{p:tool1} insures $(\alpha_n,0,n)\in R$.
\\
Also, for $m\neq n$, since $\alpha_m$ and $\alpha_n$ are
compatible and $R$ is functional, $R$ cannot contain
$(\alpha_m,0,m)$.
Thus, if $\Phi_R=It^{(n)}_\bbbn$ then $n$ is the unique integer
such that $R$ contains $(\alpha_n,0,n)$.
\medskip\\
This leads to the following definition of the wanted partial
recursive function $\lambda:{\{0,1\}}^{<\omega}\to\bbbn$ :
\medskip\\\indent- enumerate $R_{\tt e}$,
\\\indent- if and when some triple $(\alpha_n,0,n)$ appears,
halt and output $\lambda({\tt e})=n$.
\medskip\\
3. Given a code ${\tt e}$ of a functional relation $R_{\tt e}$, we let
$\theta$ be the total recursive function which gives a code for the
oracle Turing machine ${\cal M}$ which acts as follows:
\begin{enumerate}
\item[i.]
First, it computes $\lambda({\tt e})$.
\item[ii.]
If $\lambda({\tt e})$ is defined then, on input $\ttx$ and oracle $f$,
${\cal M}$ tries to compute $It^{(\lambda({\tt e}))}_\bbbx(f)(\ttx)$
in the obvious way: ask the oracle the values of $f^{(i)}(\ttx)$ for
$i\leq\lambda({\tt e})$.
\item[iii.]
Finally, in case i and ii halt, ${\cal M}$ enumerates $R_{\tt e}$ and
halts and accepts (with the output computed at phase ii)
if and only if
$(f\!\upharpoonright \!\{f^{(i)}(\ttx):i<\lambda({\tt e})\},
\ttx,f^{(\lambda({\tt e}))}(\ttx))$ appears in $R_{\tt e}$,
i.e. if and only if $f^{(\lambda({\tt e}))}(\ttx)=\Phi_{R_{\tt e}}(f)(\ttx)$.
\end{enumerate}
Clearly, the functional $\Phi_{\cal M}$ computed by ${\cal M}$
is such that $\Phi_{\cal M}(f)$ is equal to or is a restriction of
$It^{(\lambda({\tt e}))}_\bbbx(f)$.
\\
If $\Phi_{R_{\tt e}}$ is an iterator then point 2 insures that
$\Phi_{R_{\tt e}}=It^{(\lambda({\tt e}))}_\bbbx$ and Prop.\ref{p:tool1}
insures that phase iii is no problem, so that ${\cal M}$ computes
exactly $\Phi_{R_{\tt e}}$.
\\
Suppose $\Phi_{R_{\tt e}}$ is not an iterator.
\\
If $\lambda({\tt e})$ is undefined then ${\cal M}$ computes the constant
functional with value the nowhere defined function. Thus, ${\cal M}$
does not compute an iterator.
\\
If $\lambda({\tt e})$ is defined then, on input $\ttx$ and oracle $f$,
${\cal M}$ computes
$f^{(\lambda({\tt e}))}(\ttx)$ and halts and accepts if and only if
$f^{(\lambda({\tt e}))}(\ttx)=\Phi_{R_{\tt e}}(f)(\ttx)$.
Since $\Phi_{R_{\tt e}}$ is not an iterator, there exist $f$ and $\ttx$
such that $f^{(\lambda({\tt e}))}(\ttx)$ is defined and
$\Phi_{R_{\tt e}}(f)(\ttx)\neq f^{(\lambda({\tt e}))}(\ttx)$.
Hence $\Phi_{\cal M}(f)$ is a strict restriction of
$It^{(\lambda({\tt e}))}_\bbbx(f)$, so that
$\Phi_{\cal M}\neq It^{(\lambda({\tt e}))}_\bbbx$.
Finally, Prop.\ref{p:tool2} insures that $\Phi_{\cal M}$ cannot be
an iterator.
\end{proof}
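The extraction of $\lambda$ in point 2 amounts to scanning the enumeration for a triple of the special form $(\alpha_n,0,n)$. A toy Python version (our names; a finite list stands in for the enumeration of $R_{\tt e}$, with the marker functions $\alpha_n$ taken as $i\mapsto i+1$ on an initial segment):

```python
def alpha(n):
    """The marker function alpha_n: i -> i+1 on an initial segment."""
    return {i: i + 1 for i in range(n)}

def lam(enum_R):
    """Scan an enumeration of R_e; halt on the first triple of the
    special shape (alpha_n, 0, n) and output n."""
    for a, x, y in enum_R:
        if x == 0 and a == alpha(y):
            return y
    return None  # a genuine r.e. enumeration would keep searching

# among the triples of a relation coding It^{(3)} is (alpha_3, 0, 3)
lam([({0: 1}, 5, 6), (alpha(3), 0, 3)])  # 3
```

As the proposition notes, this search can succeed on junk input too; correctness of the recovered index is only claimed when $R_{\tt e}$ is functional and $\Phi_{R_{\tt e}}$ is an iterator.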
\noindent{\bf\em Proof of Theorem \ref{thm:allchurchequal}.}\\
1. Since $Fin(\bbbx\to\bbbx)\subseteq \PR[A,\bbbx\to\bbbx]$,
condition $ii$ of Prop.\ref{p:tool1} and Thm.\ref{thm:uspenskii}
prove equality ${\cal F}_2={\cal F}_3$.
\medskip\\
2. Inclusion ${\cal F}_1\subseteq{\cal F}_2$ is a corollary of
Thm.\ref{thm:KleeneUspenskii}, point 1.
Let's prove the converse inclusion.
Suppose $\Phi:({\{0,1\}}^{<\omega}\times(\bbbx\to\bbbx))\to(\bbbx\to\bbbx)$
is effectively continuous and let
$R\subseteq {\{0,1\}}^{<\omega}\times Fin(\bbbx\to\bbbx)\times\bbbx\times\bbbx$
be a functional r.e. set such that $\Phi=\Phi_R$.
Using the parameter property, let $h:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ be a total
recursive function such that $h({\tt e})$ is an r.e. code for
$R^{({\tt e})}=\{(\alpha,\ttx,{\tt y}):({\tt e},\alpha,\ttx,{\tt y})\in R\}$.
Prop.\ref{p:tool3}, point 3, gives a total recursive
$\theta:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such that
$Church(\Phi_{R^{({\tt e})}})=Church(\Phi_{{\cal M}_{\theta(h({\tt e}))}})$.
Thus, ${\tt e}\mapsto Church(\Phi_{R^{({\tt e})}})$ is partial computable
with a $(\bbbx\to\bbbx)$-oracle Turing machine having inputs in
${\{0,1\}}^{<\omega}\times\bbbx$.
\hfill{$\Box$}
\subsection{Some examples of effectively continuous functionals}
\label{ss:examples}
For future use in sections
\S\ref{ss:syntaxChurch}-\ref{ss:characterizeChurch}, let's get the
following examples of effectively continuous functionals.
\begin{proposition}\label{p:examples}
If $\varphi:{\{0,1\}}^{<\omega}\to\bbbn$ is partial $A$-recursive and
$S\subseteq{\{0,1\}}^{<\omega}$ is $\Pi^{0,A}_2$ then there exists an
$A$-effectively continuous functional
$$\Phi:{\{0,1\}}^{<\omega}\to(\bbbx\to\bbbx)^{\bbbx\to\bbbx}$$
such that, for all $\ttp$,
\medskip\\
$\begin{array}{crcl}
(*)\hspace{1cm}&\ttp\in S\cap domain(\varphi)&\Rightarrow&
\Phi(\ttp)=It^{(\varphi(\ttp))}_\bbbx
\\
(**)\hspace{1cm}&\ttp\notin S\cap domain(\varphi)&\Rightarrow&
\Phi(\ttp)\mbox{ is not an iterator}
\end{array}$
\end{proposition}
\begin{proof}
We consider the sole case $A=\emptyset$, relativization being
straightforward.\\
Let $S=\{\ttp:\forall u\ \exists v\ (\ttp,u,v)\in\sigma\}$ where
$\sigma$ is a recursive subset of ${\{0,1\}}^{<\omega}\times\bbbn\times\bbbn$.
We construct a total recursive function $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$ such
that, for all $\ttp$, $W_{g(\ttp)}$ is functional and
\begin{eqnarray*}
\ttp\in S\cap domain(\varphi)&\Rightarrow&
\Phi_{W_{g(\ttp)}}=It^{(\varphi(\ttp))}_\bbbx
\\
\ttp\notin S\cap domain(\varphi)&\Rightarrow&
\Phi_{W_{g(\ttp)}}\mbox{ is not an iterator}
\end{eqnarray*}
Let
\begin{eqnarray*}
{\cal S}_n&=&\{(\alpha,\ttx,{\tt y}):\alpha\in Fin(\bbbx\to\bbbx)
\ \wedge\ \alpha^{(n)}(\ttx)\mbox{ is defined}
\ \wedge\ {\tt y}=\alpha^{(n)}(\ttx)
\\
&&\hspace{5cm}\wedge\ domain(\alpha)=\{\alpha^{(i)}(\ttx):i<n\}\}
\end{eqnarray*}
Let $\gamma:\bbbn^2\to\bigcup_{n\in\bbbn}{\cal S}_n$ be a
total recursive function such that, for all $n$,
$u\mapsto\gamma(n,u)$ is a bijection $\bbbn\to{\cal S}_n$.
Set
$$\rho_\ttp=\{\gamma(\varphi(\ttp),u):
\varphi(\ttp)\mbox{ is defined}\ \wedge\
\exists v\ (\ttp,u,v)\in\sigma\}$$
Clearly, $\rho_\ttp$ is functional. Also, the construction of the
$\rho_\ttp$'s is effective and the parametrization property yields
a total recursive function $g:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$
such that $\rho_\ttp=W_{g(\ttp)}$.
\\
If $\varphi(\ttp)$ is not defined then $\rho_\ttp=\emptyset$ so that
$\Phi_{\rho_\ttp}$ is the constant functional which maps any function
to the nowhere defined function. In particular, $\Phi_{\rho_\ttp}$
is not an iterator.
\\
Suppose $\varphi(\ttp)$ is defined.
Condition $iii$ of Prop.\ref{p:tool1} and the definition of
$\rho_\ttp$ show that
\begin{eqnarray*}
\Phi_{\rho_\ttp}\mbox{ is an iterator}
&\Leftrightarrow& \Phi_{\rho_\ttp}=It^{(\varphi(\ttp))}_\bbbx
\\
&\Leftrightarrow& \rho_\ttp\supseteq
range(u\mapsto\gamma(\varphi(\ttp),u))
\\
&\Leftrightarrow& \forall u\ \exists v\ (\ttp,u,v)\in\sigma
\\
&\Leftrightarrow& \ttp\in S
\end{eqnarray*}
Since $\rho_\ttp=W_{g(\ttp)}$, the functional
$\Phi:\ttp\mapsto\Phi_{\rho_\ttp}$ is effectively continuous.
Clearly, it satisfies $(*)$ and $(**)$.
\end{proof}
\subsection{Syntactical complexity of Church representation}
\label{ss:syntaxChurch}
\begin{proposition}[Syntactical complexity]
\label{p:complexEffChurch}
The family
$$\{domain(\varphi):\varphi\in \mathit{Church}^\bbbn_X\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}\}$$
is exactly the family of $\Pi^{0,A}_2$ subsets of ${\{0,1\}}^{<\omega}$.
\medskip\\
Thus, any universal function for
$\mathit{Church}^\bbbn_X\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}$
has $\Pi^{0,A}_2$-complete domain.
\end{proposition}
\begin{proof}
To simplify notations, we only consider the case $A=\emptyset$,
relativization being straightforward.
\\
1. Prop.\ref{p:examples} insures that every $\Pi^0_2$ set is
the domain of $Church^\bbbn_\bbbx\circ\Phi$ for some effectively
continuous functional $\Phi$.
\medskip\\
2. Conversely, we prove that every function in
$\mathit{Church}^\bbbn_X\circ
\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}$
has $\Pi^0_2$ domain.
\\
Suppose $\Phi:({\{0,1\}}^{<\omega}\times(\bbbx\to\bbbx))\to(\bbbx\to\bbbx)$
is effectively continuous and let
$R\subseteq {\{0,1\}}^{<\omega}\times Fin(\bbbx\to\bbbx)\times\bbbx\times\bbbx$
be a functional r.e. set such that $\Phi=\Phi_R$.
For ${\tt e}\in{\{0,1\}}^{<\omega}$, let
$R^{\tt e}=\{(\alpha,\ttx,{\tt y}):({\tt e},\alpha,\ttx,{\tt y})\in R\}$.
Then
$$domain(\mathit{Church}^\bbbn_X\circ\Phi)=\{{\tt e}:\Phi_{R^{\tt e}}
\mbox{ is an iterator}\}$$
Now, an r.e. code for the functional relation $R^{\tt e}$ is
given by a total recursive function $h:{\{0,1\}}^{<\omega}\to{\{0,1\}}^{<\omega}$.
Applying Prop.\ref{p:tool2}, point 2, the partial recursive
function $\lambda\circ h$ is such that if $\Phi_{R^{\tt e}}$
is an iterator then $\Phi_{R^{\tt e}}=It^{(\lambda(h({\tt e})))}_\bbbx$.
\\
Thus, $\Phi_{R^{\tt e}}$ is an iterator if and only if
\begin{enumerate}
\item[a.]
$\lambda(h({\tt e}))$ is convergent,
\item[b.]
condition $iii$ of Prop.\ref{p:tool1} with $n=\lambda(h({\tt e}))$ holds.
\end{enumerate}
Condition a is $\Sigma^0_1$ and condition b is $\Pi^0_2$.
Thus, $domain(\mathit{Church}^\bbbn_X\circ\Phi)$ is $\Pi^0_2$.
\end{proof}
\subsection{Characterization of the $\mathit{Church}$ representation system}
\label{ss:characterizeChurch}
\begin{theorem}\label{thm:Church}
Let us denote by $\PR[A,{\{0,1\}}^{<\omega}\to\bbbn]\!\upharpoonright \!\Pi^{0,A}_2$ the
family of restrictions to $\Pi^{0,A}_2$ subsets of
partial $A$-recursive functions ${\{0,1\}}^{<\omega}\to\bbbn$.
\\ Let $\bbbx$ be some basic set and $A\subseteq\bbbn$ be some
oracle.
\medskip\\
{\bf 1.}\ \ \
$\mathit{Church}\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}
=\PR[A,{\{0,1\}}^{<\omega}\to\bbbn]\!\upharpoonright \!\Pi^{0,A}_2$
\medskip\\
{\bf 2.}\ \ \
$K^\bbbn_{\mathit{Church}\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}}
=_{\rm ct} K^A$
\medskip\\
We shall simply write $K^{\bbbn,A}_{\mathit{Church}}$, or
$K^{\bbbn}_{\mathit{Church}}$ when $A=\emptyset$.
\end{theorem}
\begin{proof}
1A. First, we prove that, for any $A$-effectively continuous
functional $\Phi:{\{0,1\}}^{<\omega}\to(\bbbx\to\bbbx)^{\bbbx\to\bbbx}$,
the function $\mathit{Church}\circ\Phi:{\{0,1\}}^{<\omega}\to\bbbn$ has a partial
$A$-recursive extension.
We reduce to the case $\bbbx=\bbbn$.
\\
Let $Succ:\bbbn\to\bbbn$ be the successor function.
Observe that, for all $n\in\bbbn$,
$$(It^{(n)}_\bbbn(Succ))(0)=n$$
Thus, if $\mathit{Church}(\Phi({\tt e}))$ is defined then
$\mathit{Church}(\Phi({\tt e}))=(\Phi({\tt e})(Succ))(0)$.
Applying Prop.\ref{p:apply}, we see that
${\tt e}\mapsto(\Phi({\tt e})(Succ))(0)$ is a partial $A$-recursive
extension of $\mathit{Church}\circ\Phi:{\{0,1\}}^{<\omega}\to\bbbn$.
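The identity $(It^{(n)}_\bbbn(Succ))(0)=n$ underlying this step can be checked with a small, self-contained Python sketch (purely illustrative, not part of the formal development; the names \texttt{iterate} and \texttt{church} are ours):

```python
def iterate(n):
    """The iterator functional It^(n) over N: maps f to its n-fold
    composition f^(n)."""
    def it(f):
        def f_n(x):
            for _ in range(n):
                x = f(x)
            return x
        return f_n
    return it

def church(phi):
    """Evaluate an iterator back to the integer it represents:
    apply it to the successor function Succ and evaluate at 0."""
    succ = lambda k: k + 1
    return phi(succ)(0)

# (It^(n)(Succ))(0) = n, so church is a left inverse of iterate.
print(church(iterate(7)))  # -> 7
```

Postcomposing \texttt{church} with any functional ${\tt e}\mapsto\Phi({\tt e})$ then realizes the evaluation used in the proof.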
\medskip\\
1B. Prop.\ref{p:complexEffChurch} ensures that
$\mathit{Church}\circ\Phi:{\{0,1\}}^{<\omega}\to\bbbn$ has $\Pi^{0,A}_2$ domain.
Together with point 1A, this ensures that
$\mathit{Church}\circ\Phi:{\{0,1\}}^{<\omega}\to\bbbn$ is the restriction of a partial
$A$-recursive function to a $\Pi^{0,A}_2$ set.
This proves the inclusion
$$\mathit{Church}\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}
\subseteq \PR[A,{\{0,1\}}^{<\omega}\to\bbbn]\!\upharpoonright \!\Pi^{0,A}_2$$
1C. The converse inclusion is Prop.\ref{p:examples}.
\medskip\\
2. Inclusion
$\PR[A,{\{0,1\}}^{<\omega}\to\bbbn]\subseteq\mathit{Church}\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}$
yields the inequality
$K^\bbbn_{\mathit{Church}\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}}
\leq_{\rm ct} K^A$.
\medskip\\
Consider a function $\phi\in \mathit{Church}\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}$.
Let $\widehat{\phi}$ be a partial $A$-recursive extension of $\phi$.
Then $K_\phi\geq K_{\widehat{\phi}}$.
This proves inequality
$K^\bbbn_{\mathit{Church}\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))}}
\geq_{\rm ct} K^A$.
\end{proof}
\subsection{Characterization of the $\Delta\mathit{Church}$ self-enumerated
systems} \label{ss:DeltaKChurch}
\begin{theorem}\label{thm:DeltaChurch}
Let $\bbbx$ be some basic set and $A\subseteq\bbbn$ be some oracle.
\medskip\\
{\bf 1.}\ \ \
$\Delta(\mathit{Church}\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))})
=\PR[A,{\{0,1\}}^{<\omega}\to\bbbz]\!\upharpoonright \!\Pi^{0,A}_2$
\medskip\\
{\bf 2.}\ \ \
$K^\bbbz_{\Delta(\mathit{Church}\circ
A\mbox{-}\mathit{EffCont}^{{\{0,1\}}^{<\omega}\to((\bbbx\to\bbbx)\to(\bbbx\to\bbbx))})}
=_{\rm ct} K^A_\bbbz$
\medskip\\
We shall simply write $K^{\bbbz,A}_{\Delta\mathit{Church}}$, or
$K^{\bbbz}_{\Delta\mathit{Church}}$ when $A=\emptyset$.
\end{theorem}
\begin{proof}
1. Observe that $\Delta(\PR[A,{\{0,1\}}^{<\omega}\to\bbbn]\!\upharpoonright \!\Pi^{0,A}_2)
=\PR[A,{\{0,1\}}^{<\omega}\to\bbbz]\!\upharpoonright \!\Pi^{0,A}_2$ and apply
Thm.\ref{thm:Church}.
\medskip\\
2. Argue as in point 2 of the proof of Thm.\ref{thm:Church}.
\end{proof}
\subsection{Functional representations of $\bbbz$}
\label{ss:churchZ}
Specific to the Church representation, there is another approach
to an extension to $\bbbz$: {\em positive and negative iterations}
of injective functions over some infinite set $X$.
Formally, letting $X\stackrel{1-1}{\to}X$ denote the family of injective
functions, consider the $\bbbz$-iterator functional
$$It^\bbbz_X:\bbbz
\to(X\stackrel{1-1}{\to}X)^{X\stackrel{1-1}{\to}X}$$
such that, for $n\in\bbbn$, $It^\bbbz_X(n)(f)=f^{(n)}$
and $It^\bbbz_X(-n)(f)=It^\bbbz_X(n)(f^{-1})$.
\\
Effectivization can be done as in \S\ref{ss:effectiveChurch}.
Thm.\ref{thm:allchurchequal}, Prop.\ref{p:complexEffChurch} and
Thm.\ref{thm:Church} carry over to the $\bbbz$ context.
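A minimal Python sketch of the $\bbbz$-iterator on injective functions may clarify the definition (illustrative only; the inverse is supplied explicitly, and the names are ours):

```python
def z_iterate(n, f, f_inv):
    """Z-iterator It^Z_X: for n >= 0 return f^(n); for n < 0 iterate the
    inverse instead, i.e. It^Z_X(-n)(f) = It^Z_X(n)(f^{-1})."""
    g = f if n >= 0 else f_inv
    def f_n(x):
        for _ in range(abs(n)):
            x = g(x)
        return x
    return f_n

shift = lambda x: x + 3      # an injective function on the integers
unshift = lambda x: x - 3    # its inverse

print(z_iterate(2, shift, unshift)(0))   # -> 6
print(z_iterate(-2, shift, unshift)(0))  # -> -6
```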
\section{Conclusion}\label{s:conclusion}
We have characterized Kolmogorov complexities associated to some
set theoretical representations of $\bbbn$ in terms of the
Kolmogorov complexities associated to oracular and/or infinite
computations (Thm.\ref{thm:A}).
As a corollary, we obtained a hierarchy result (Thm.\ref{thm:B}).
\medskip\\
These results can be improved in two directions.
\\
First, one can consider higher order (higher than type 2)
effectivizations of set theoretical representations of $\bbbn$.
This is the content of a forthcoming continuation of this paper.
\\
Second, using the results of our paper \cite{ferbusgrigoOrder},
the hierarchy result Thm.\ref{thm:B} can be improved with finer
orderings than $<_{\rm ct}$.
These orderings $\lless[{\cal C},{\cal D}]{\cal F}$ are
such that $f\lless[{\cal C},{\cal D}]{\cal F}g$ if and only if
\begin{enumerate}
\item
$f\leq_{\rm ct} g$
\item
For every infinite set $X\in{\cal C}$ and every total monotone
increasing function $\phi\in{\cal F}$ there exists an infinite set
$Y\in{\cal D}$ such that
\\\centerline{$Y\subseteq\{z\in X:f(z)<\phi(g(z))\}$}
\item
The above property is effective: relative to standard enumerations
of ${\cal C},{\cal D},{\cal F}$, a code for $Y$ can be recursively
computed from codes for $X$ and $\phi$.
\end{enumerate}
Thm.\ref{thm:B} can be restated in the following improved form.
\begin{theorem}\label{thm:Abis}
Denote $\minpr[]$ (resp. $\minprjump[A]$) the family of functions
$\bbbn\to\bbbn$ which are infima of partial recursive
(resp.\ partial $A$-recursive) sequences of functions $\bbbn\to\bbbn$
(cf. Rk.\ref{rk:minPR}).
Then
\medskip\\
$\log \ggreater[\Sigma^0_1,\Sigma^0_1]{\PR[]}
\begin{array}{c}
K_{\mathit{Church}}^\bbbn\\
=_{\rm ct}\\
K_{\mathit{Church}}^\bbbz\!\upharpoonright \!\bbbn\\
=_{\rm ct}\\
K_{\Delta \mathit{Church}}^\bbbz\!\upharpoonright \!\bbbn
\end{array}
\ggreater[\Sigma^0_1\cup\Pi^0_1,\Delta^0_2]{\minpr[]}
K_{\mathit{card}}^\bbbn
\ggreater[\Sigma^0_2,\Sigma^0_2]{\PR[\emptyset']}
K_{\Delta \mathit{card}}^\bbbz\!\upharpoonright \!\bbbn$
\hfill{$\ggreater[\Sigma^0_2\cup\Pi^0_2,\Delta^0_3]
{\minprjump[\emptyset']}
K_{\mathit{index}}^\bbbn
\ggreater[\Sigma^0_3,\Sigma^0_3]{\PR[\emptyset'']}
K_{\Delta \mathit{index}}^\bbbz\!\upharpoonright \!\bbbn$}
\end{theorem}
The aim of this work is to provide a natural geometric interpretation of the automorphism groups of holomorphic vector bundles over the Riemann sphere.
According to a foundational theorem of Grothendieck \cite{Groth57}, every bundle $E\to \field{C}\field{P}^{1}$ splits as a direct sum of line bundles\footnote{Grothendieck's result holds for holomorphic principal bundles whose structure group is an arbitrary reductive group.
For simplicity, we will only consider the case of $\mathrm{GL}(r,\field{C})$ (that is, the vector bundle case).}
\[
E\cong\bigoplus_{i=1}^{r} \mathcal{O}(m_{i})
\]
Consequently, $\Aut(E)$ corresponds to the group of invertible elements in $\End(E) := H^{0}\left(\field{C}\field{P}^{1},\ad(E)\right)$, where
\[
\ad(E) = E^{\vee}\otimes E \cong \ad\left(\bigoplus_{i=1}^{r} \mathcal{O}(m_{i})\right),
\]
and can be parametrized by matrix-valued polynomials of block-triangular form.
What motivates this work is the observation that, despite the previous trivial characterization of bundle automorphisms, such groups possess an interesting geometric structure, which manifests in the study of the moduli of vector bundles with parabolic structure \cite{MS80} and its K\"ahler geometry \cite{MenTak14}, namely, the orbit spaces of $n$-tuples of flags under $\Aut(E)$.
To construct a moduli space $\curly{N}$ of stable parabolic bundles on a Riemann surface $\Sigma$, it is required to fix the topology of $E$ (i.e., its degree and rank), together with an additional numerical data necessary to define the notion of parabolic stability -- a set of marked points on $\Sigma$, and a collection of parabolic weights satisfying certain inequalities. The peculiarities of $\field{C}\field{P}^{1}$ among all Riemann surfaces imply that a point in $\curly{N}$ may be thought of as a splitting type (a pair of trivializations of $E$, over each of the affine charts of $\field{C}\field{P}^{1}$),\footnote{The splitting coefficients of each underlying bundle are only \emph{holomorphic} invariants of $E$. The different admissible splitting types determine a stratification of $\curly{N}$.} together with an \emph{orbit} of collections of flags in $\field{C}^{r}$, under the action of the group of bundle automorphisms of $E$, satisfying a stability condition. Such a structure provides a relatively simple geometric model for $\curly{N}$, exclusive of $\field{C}\field{P}^{1}$ as a Riemann surface.
If $\textrm{rank}(E) = 2$, the technicalities arising from the geometry of group actions are relatively mild, as the groups $\Aut(E)$ are a semidirect product of abelian groups \cite{Men17, MenSpi17}. In general, the geometric properties of the action of the groups $\Aut(E)$ on $n$-tuples of flags are an additional subtlety that needs to be resolved, if any sensible geometric construction of such moduli spaces of stable parabolic bundles is expected in arbitrary rank.
At the same time, it is precisely
those group actions
that encode a rich and subtle geometric structure, which does not seem to have been described in the literature, and which could render an explicit picture of the wall-crossing phenomenon that manifests when variations of parabolic weights are allowed, and that general treatments, such as \cite{MS80,BH95}, present with a caveat precisely for genus 0. A first step to deal with the arbitrary rank case would be to find a space of generalized flags on which the action of $\Aut(E)$ is \emph{optimal}.
Following the principle of understanding the structure of a group in terms of its actions, we will construct a space, extracted from the splitting coefficients of the vector bundle $E$, on which the group of bundle automorphisms naturally acts, whose points consist of generalized flags on the fibers over a finite set in $\field{C}\field{P}^{1}$, and which is a torsor for $\Aut(E)$, up to a minimal residual subgroup.
Intuitively, the group action that we will consider can be thought of as a natural generalization of the standard action of the group $\mathrm{GL}(2,\field{C})$ on the configuration space of triples of points in $\field{C}\field{P}^{1}$ by means of M\"obius transformations, or certain affine actions of Nagata type \cite{Muk05}.
The work is organized as follows: Section \ref{sec:moduli} presents the general scheme under which the moduli spaces of stable parabolic bundles over $\field{C}\field{P}^{1}$ can be constructed by means of the simultaneous actions of groups of bundle automorphisms, which serves as motivation for the rest of the work. In section \ref{sec:pre} we set up conventions and prove a couple of preparatory results. Theorem \ref{theo:factorization} in section \ref{sec:bundles} provides a factorization of
$\Aut(E)$ in terms of special parabolic subgroups, which constitute our basic building blocks. In section \ref{sec:actions} we construct a space of generalized flags in the bundle $E$, and show in theorem \ref{theo:normalization} (our main result) that the induced action of the group $\Aut(E)$ is transitive and free up to a residual subgroup. The latter result implies that a canonical normalization of such flags can always be attained (corollary \ref{cor:normalization}). The normalization results are applied in section \ref{sec:gauge} to determine the existence of a couple of holomorphic gauges of logarithmic connections adapted to a given parabolic structure, which we have called the Bruhat and Riemann-Hilbert gauges, and which are described, respectively, in corollaries \ref{cor:Bruhat} and \ref{cor:Riemann-Hilbert}. Further details of such gauges are provided for the rank 2 case.
\section{Moduli space models for parabolic bundles on $\field{C}\field{P}^{1}$}\label{sec:moduli}
The existence and construction of a moduli space $\curly{N}^{s}_{\curly{W}}$ of stable parabolic bundles on a compact Riemann surface $\Sigma$ depends on the a priori choice of a set of \emph{parabolic weights}, i.e. a collection of real numbers
\[
\curly{W} =\left\{0\leq \alpha_{i1}\leq \dots\leq \alpha_{ir} < 1,\quad i=1,\dots,n, \; :\; \sum_{i=1}^{n}\sum_{j=1}^{r}\alpha_{ij}\in\field{Z}\right\},
\]
which is necessary to define the notion of stability of rank $r$ admissible quasi parabolic bundles of parabolic degree 0. The latter are holomorphic vector bundles $E\to \Sigma$ of rank $r$, together with a collection of descending flags over the fibers of a finite subset $S =\{z_{1},\dots,z_{n}\}\subset\Sigma$ whose multiplicity type is subordinate to the parabolic weights' multiplicities of $\curly{W}$, and such that
\[
\deg (E) + \sum_{i=1}^{n}\sum_{j=1}^{r}\alpha_{ij} = 0.
\]
A parabolic bundle with underlying vector bundle $E$ will be denoted by $E_{*}$. The reader is referred to \cite{MS80} for details.
The case $\Sigma = \field{C}\field{P}^{1}$ is rather special, as not only must $n \geq 3$, but also not every admissible set of weights for a fixed (necessarily negative) degree determines a nonempty moduli space. In \cite{Bis98,Bel01,Bis02}, I. Biswas and P. Belkale proved independently that, for every admissible degree, the set of weights admitting nonempty moduli spaces is an explicit convex polytope in the space of all admissible weights. Generically, the admissible weights are complete (i.e., do not have multiplicities), and hence the flags on each fiber can be parametrized by the complete flag manifold $\curly{F}_{r}$; moreover, every semistable parabolic bundle is stable, so that $\curly{N}^{s}_{\curly{W}} = \curly{N}^{ss}_{\curly{W}}$ is compact \cite{BH95}. Let us fix once and for all a choice of such $\curly{W}$. For notational simplicity, we will denote the moduli space simply by $\curly{N}$.
There exists a relatively simple description of the points in the moduli space $\curly{N}$, encoded entirely in the geometric properties of the action of the automorphism groups $\Aut\left(E\right)$. The Birkhoff-Grothendieck theorem \cite{Groth57} states that every holomorphic vector bundle $E\to \field{C}\field{P}^{1}$ is isomorphic to a direct sum of line bundles $\bigoplus_{i=1}^{r} \mathcal{O}(m_{i})$, that is, if we cover $\field{C}\field{P}^{1}$ with affine charts $\{\field{C}_{0},\field{C}_{1}\}$, $\field{C}_{0}\cap\field{C}_{1}=\field{C}^{*}$, $E$ can be prescribed by a single diagonal transition function
\[
g_{01}(z)=
\begin{pmatrix}
z^{m_{1}} & & 0 \\
& \ddots & \\
0 & & z^{m_{r}}
\end{pmatrix}
\]
which can be written more succinctly as $g_{01}(z)=z^{N}$ if we let
\[
N = \begin{pmatrix}
m_{1} & & 0\\
& \ddots &\\
0 & & m_{r}
\end{pmatrix}
\]
and moreover, any two such isomorphisms differ by postcomposition by an automorphism of the bundle splitting. To simplify notations, we will denote such bundle splitting types simply as $E_{N}$. The parabolic stability condition on any given $E_{*}$ implies that
\[
H^{0}\left(\field{C}\field{P}^{1}, E \right) = 0,
\]
hence $m_{1},\dots, m_{r} < 0$, yielding a finite number of potential splitting matrices $N$ for each degree $ -nr < \deg(E) \leq -r$.
For each $i=1,\dots, n$, let $\curly{F}\left(E_{i}\right)$ be the manifold of complete descending flags on the fiber $E_{i}$ over $z_{i}$ in $E$. Every isomorphism $E \cong E_{N}$ induces isomorphisms $\curly{F}\left(E_{i}\right) \cong \curly{F}_{r}$.
Hence, a parabolic structure on $E$ can be modeled as a point in the product of complete flag manifolds
\[
\curly{F}^{1}_{r}\times\dots\times\curly{F}^{n}_{r},
\]
and two $n$-tuples of flags define equivalent parabolic structures if and only if they differ by the left action of an element in $\Aut\left(E_{N}\right)$. Consequently, a point in $\curly{N}$ is \emph{determined uniquely} by a pair $\left(E_{N},[\mathbf{F}]\right)$, where
$[\mathbf{F}]$ is an {orbit} of $n$-tuples $(F_{1},\dots,F_{n})$ of complete descending flags under the left action of the group $\Aut\left(E_{N}\right)$, satisfying a stability condition dictated by $\curly{W}$.
The Harder-Narasimhan filtration of the underlying vector bundle $E$, for each isomorphism class $\{E_{*}\}\in\curly{N}$, determines a stratification of $\curly{N}$,
\[
\curly{N} = \bigsqcup_{N} \curly{N}_{N},
\]
corresponding to the different underlying holomorphic bundles over $\field{C}\field{P}^{1}$ admitting stable parabolic structures for $\curly{W}$, characterized by a splitting type $N$.
Over each connected component of a Harder-Narasimhan stratum $\curly{N}_{N}$, we can fix simultaneous isomorphisms $E\cong E_{N}$, in such a way that the isomorphism classes of stable parabolic structures correspond to a space of $\Aut\left(E_{N}\right)$-orbits on $\curly{F}^{1}_{r}\times\dots\times\curly{F}^{n}_{r}$.
If we denote by $\curly{O}_{N}$ the corresponding space of $\Aut\left(E_{N}\right)$-orbits of $n$-tuples of flags in $\curly{F}^{1}_{r}\times\dots\times\curly{F}^{n}_{r}$ that are stable when regarded as parabolic structures in some $E_{N}$, we have that
\[
\curly{N} \cong \bigsqcup_{N} \curly{O}_{N}\bigg/\sim,
\]
where the relation $\sim$ is expressible in terms of the intersection of orbit closures in $\curly{F}^{1}_{r}\times\dots\times\curly{F}^{n}_{r}$. Moreover, if variations of $\curly{W}$ in the weight polytope are considered, the induced transformations of the moduli space that occur when a semistability wall is hit can be understood in terms of degenerations of the components of $\curly{F}^{1}_{r}\times\dots\times\curly{F}^{n}_{r}$ into more general flag manifolds (cf. \cite{BH95}). The proof of such results, and ``wall-crossing behavior'', depends on a careful and intricate analysis of the geometric characterization of stability, and will be discussed elsewhere.
\section{Parabolic groups and Bruhat decompositions for $\mathrm{GL}(r,\field{C})$}\label{sec:pre}
We will recall a few standard facts for convenience of the exposition. Further details of the general theory may be found in \cite{Bor69,FH91}. For any $r\geq 2$, let us consider an arbitrary partition $r=i_{1}+\dots + i_{s}$, which we will denote by $\lambda$. Such a partition determines a parabolic subgroup $\mathrm{P}_{\lambda}\subset \mathrm{GL}(r,\field{C})$, consisting of block-lower triangular matrices for which the $jk$-th block (for $j\geq k$) is of size $i_{j}\times i_{k}$. Any parabolic subgroup of $\mathrm{GL}(r,\field{C})$ is conjugate to one of these.
A parabolic subgroup $\mathrm{P}_{\lambda}$ admits a semidirect product decomposition $\mathrm{P}_{\lambda} = \mathrm{N}_{\lambda} \rtimes \mathrm{D}_{\lambda}$,
where $\mathrm{N}_{\lambda}$, the unipotent radical of $\mathrm{P}_{\lambda}$, consists of block-lower triangular matrices which are block-unipotent, that is, whose $j$-th diagonal block is equal to $\mathrm{Id}_{i_{j}\times i_{j}}$, and $\mathrm{D}_{\lambda}$, the Levi complement of $\mathrm{N}_{\lambda}$ in $\mathrm{P}_{\lambda}$, consists of the invertible block-diagonal matrices. Let us denote by $\lambda_{r}$ the partition
\[
r = \underbrace{1 + \dots + 1}_{\text{$r$ times}}.
\]
Then $\mathrm{P}_{\lambda_{r}}=\mathrm{B}(r)$ is the Borel group of invertible lower triangular matrices, while $\mathrm{N}_{\lambda_{r}}=\mathrm{N}(r)$ is its subgroup of all unipotent matrices.
The structure of the Lie algebra $\mathfrak{n}_{\lambda}=\textrm{Lie}(\mathrm{N}_{\lambda})$ of strictly block-lower triangular matrices is determined by its lower central series. Concretely, denoting by $\mathfrak{n}_{\lambda,jk}\subset \mathfrak{n}_{\lambda}$ the abelian subalgebra of matrices whose nonzero elements lie in the $jk$-th block, one has that $\left[\mathfrak{n}_{\lambda,jk},\mathfrak{n}_{\lambda,k'l}\right]=\delta_{kk'}\mathfrak{n}_{\lambda,jl}$. In particular,
\[
\mathfrak{n}_{\lambda, jk}=\left[\dots\left[\mathfrak{n}_{\lambda, j j-1},\mathfrak{n}_{\lambda, j-1 j-2}\right],\dots, \mathfrak{n}_{\lambda, k+1 k}\right].
\]
Let $\mathrm{W}(r)$ denote the Weyl group of $\mathrm{GL}(r,\field{C})$, and let $\mathrm{W}_{\lambda}$ be the subgroup given as the Weyl group of $\mathrm{D}_{\lambda}$. Then, $\mathrm{W}(r)$ is isomorphic to the symmetric group $\mathrm{S}_{r}$, while $\mathrm{W}_{\lambda}$ is isomorphic to $\mathrm{S}_{i_{1}}\times\dots\times\mathrm{S}_{i_{s}}$ and both can be explicitly realized as subgroups of $\mathrm{GL}(r,\field{C})$ by means of permutation matrices.
Let us denote by $\mathrm{N}_{\lambda,[\Pi]}$ the subgroup $\Ad\left(\Pi\right)\left( \mathrm{P}_{\lambda}\right) \cap \mathrm{N}(r) \subset \mathrm{N}(r)$, for any $[\Pi]\in \mathrm{W}(r)/ \mathrm{W}_{\lambda}$, which is clearly independent of the choice of representative $\Pi$ in $[\Pi]$.
\begin{lemma}\label{lemma:semidirect}
For every class $[\Pi]\in \mathrm{W}(r)/\mathrm{W}_{\lambda}$, there is a factorization
\[
\mathrm{N}(r) = \mathrm{N}_{\lambda,[\Pi]}^{c} \cdot \mathrm{N}_{\lambda,[\Pi]}
\]
for a subgroup $\mathrm{N}_{\lambda,[\Pi]}^{c}\subset \mathrm{N}(r)$, in such a way that $\mathrm{N}_{\lambda,[\Pi]}^{c}$ and $\mathrm{N}_{\lambda,[\Pi]}$ intersect trivially:
\[
\mathrm{N}_{\lambda, [\Pi]}^{c} \cap \mathrm{N}_{\lambda,[\Pi]} = \{\textrm{Id}_{r\times r}\}.
\]
\end{lemma}
\begin{proof}
Let $i_{0}=0$. The partition $\lambda$ determines a collection of $s$ subsets $\curly{S}_{1},\dots,\curly{S}_{s}\subset\{1,\dots,r\}$, whose elements are the consecutive integers in a given partition interval: $\curly{S}_{j}=\{i_{0}+\dots+i_{j-1}+1,\dots,i_{0}+\dots+i_{j}\}$.
Recall that if $\Pi$ is the permutation matrix corresponding to a permutation $\sigma\in \mathrm{S}_{r}$, that is, $(\Pi)_{jk}=\delta_{\sigma(j)k}$, then the adjoint action of $\Pi$ on a matrix $A=(a_{jk})$ is given as $\Ad(\Pi)(A)_{jk}= a_{\sigma(j)\sigma(k)}$. Then, by definition, the subgroup $\mathrm{N}_{\lambda,[\Pi]}=\Ad\left(\Pi\right)\left(\mathrm{P}_{\lambda}\right) \cap \mathrm{N}(r)$ consists of the unipotent matrices with a zero $jk$-entry, for $ j > k $, whenever $\sigma^{-1}(j)<\sigma^{-1}(k)$, unless $\{\sigma^{-1}(j), \sigma^{-1}(k)\} \subset \curly{S}_{l}$ for some $1 \leq l \leq s$. Define the subset $\mathrm{N}_{\lambda,[\Pi]}^{c}\subset \mathrm{N}(r)$ to consist of those unipotent matrices with a zero $jk$-entry (for $j>k$) whenever $\sigma^{-1}(j)>\sigma^{-1}(k)$, or $\sigma^{-1}(j)<\sigma^{-1}(k)$ if $\{\sigma^{-1}(j), \sigma^{-1}(k)\} \subset \curly{S}_{l}$ for some $1 \leq l \leq s$. Then, in particular, $\mathrm{N}_{\lambda,[\Pi]}^{c} \cap \mathrm{N}_{\lambda,[\Pi]} = \{\textrm{Id}_{r\times r}\}$. It is straightforward to verify that the defining equations of $\mathrm{N}_{\lambda,[\Pi]}^{c}$ are preserved under multiplication, i.e., that $\mathrm{N}_{\lambda,[\Pi]}^{c}$ is a subgroup of $\mathrm{N}(r)$. Moreover, consider any $C \in \mathrm{N}(r)$. That there is a unique solution to the equation $C =AB$, with $A \in \mathrm{N}_{\lambda,[\Pi]}^{c}$, $B \in \mathrm{N}_{\lambda,[\Pi]}$, can be seen inductively. For any $1\leq j \leq r-1$, either $\sigma^{-1}(j) < \sigma^{-1}(j+1)$ or $\sigma^{-1}(j) > \sigma^{-1} (j+1)$ giving a unique solution to $c_{j+1 j} = a_{j+1 j} + b_{j+1 j}$. Now, assuming we know all $a_{l_{1}l_{2}}$ and $b_{l_{1}l_{2}}$ with $l_{1}-l_{2} < j-k$, the same idea determines $a_{jk}$ and $b_{jk}$ from the equation
\[
c_{jk} = a_{jk} + b_{jk} +\sum_{l = k+1}^{j-1} a_{jl}b_{lk}.
\]
\end{proof}
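The inductive argument in the proof is effective. The following Python sketch (a hypothetical illustration under our own conventions: \texttt{sigma\_inv} encodes $\sigma^{-1}$ and \texttt{blocks} the partition intervals $\curly{S}_{l}$, both 0-based) fills in the entries of $A$ and $B$ by induction on $j-k$:

```python
def block_of(p, blocks):
    """Index of the partition interval S_l containing p."""
    for l, S in enumerate(blocks):
        if p in S:
            return l
    raise ValueError(p)

def factor_unipotent(C, sigma_inv, blocks):
    """Factor a unipotent lower-triangular matrix C as A @ B, where A is
    supported exactly where sigma^{-1}(j) < sigma^{-1}(k) with the two
    indices in different blocks (A in N^c_{lambda,[Pi]}), and B on the
    complementary positions (B in N_{lambda,[Pi]})."""
    r = len(C)
    A = [[1.0 if i == j else 0.0 for j in range(r)] for i in range(r)]
    B = [[1.0 if i == j else 0.0 for j in range(r)] for i in range(r)]
    for d in range(1, r):          # induct on the distance to the diagonal
        for j in range(d, r):
            k = j - d
            # c_jk = a_jk + b_jk + sum_{k<l<j} a_jl * b_lk, and exactly
            # one of a_jk, b_jk is allowed to be nonzero
            v = C[j][k] - sum(A[j][l] * B[l][k] for l in range(k + 1, j))
            in_Nc = (sigma_inv[j] < sigma_inv[k] and
                     block_of(sigma_inv[j], blocks)
                     != block_of(sigma_inv[k], blocks))
            if in_Nc:
                A[j][k] = v
            else:
                B[j][k] = v
    return A, B
```

For the identity permutation one recovers $A=\mathrm{Id}$, $B=C$, while for the order-reversing permutation and the complete partition one gets $A=C$, $B=\mathrm{Id}$.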
\begin{definition}
For any partition $\lambda$, and any class $[\Pi]\in \mathrm{W}(r)/\mathrm{W}_{\lambda}$, we will let $\mathrm{N}_{\lambda,[\Pi]}^{c}$ be the subgroup of $\mathrm{N}(r)$ defined in lemma \ref{lemma:semidirect}.
\end{definition}
\begin{remark}\label{remark:complement}
Incidentally, the group $\mathrm{N}_{\lambda,[\Pi]}^{c}$ is also of the form $\mathrm{N}_{\lambda_{0},[\Pi_{c}]}$ for some $[\Pi_{c}]\in \mathrm{W}(r)/\mathrm{W}_{\lambda}$, where $\lambda_{0} = i_{s} + \dots + i_{1}$. Consider the involution on $\mathrm{S}_{r}$, defined by mapping a given permutation $\sigma^{-1}$ to the complementary permutation
\[
\sigma^{-1}_{c}(k)=r-\sigma^{-1}(k)+1.
\]
Then, $\sigma_{c}$ is the unique permutation in $\mathrm{S}_{r}$ satisfying that $\sigma^{-1}_{c}(j) < \sigma^{-1}_{c}(k)$ if and only if $\sigma^{-1}(j) > \sigma^{-1}(k)$, and $\{\sigma^{-1}_{c}(j),\sigma_{c}^{-1}(k)\}\in \curly{S}^{c}_{s - l +1}$ if and only if $\{\sigma^{-1}(j),\sigma^{-1}(k)\}\in \curly{S}_{l}$ for some $1 \leq l \leq s$, and every $1 \leq j< k \leq r$. If a representative $\Pi$ is chosen in $[\Pi]$, and $\sigma$ is its corresponding permutation, then, a representative $\Pi_{c}$ for $[\Pi_{c}]$ can be constructed in terms of $\sigma_{c}$.
\end{remark}
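In coordinates, the involution of the remark is immediate to compute; a short sketch (hypothetical names, 1-based values in one-line notation) makes the order-reversing property explicit:

```python
def complementary(sigma_inv):
    """The complementary permutation: sigma_c^{-1}(k) = r - sigma^{-1}(k) + 1,
    which reverses every order relation between the values of sigma^{-1}."""
    r = len(sigma_inv)
    return [r - v + 1 for v in sigma_inv]

s = [2, 3, 1]                                 # sigma^{-1} in one-line notation
print(complementary(s))                       # -> [2, 1, 3]
print(complementary(complementary(s)) == s)   # the map is an involution -> True
```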
\begin{remark}
In general, $\mathrm{N}_{\lambda,[\Pi]}$ need not be normal in $\mathrm{N}(r)$, and following remark \ref{remark:complement}, the same is true for its complement $\mathrm{N}_{\lambda,[\Pi]}^{c}$. As an example where neither subgroup is normal, consider $r=4$, the partition $4=2+2$, and the permutation $\sigma=(123)$.
\end{remark}
\begin{lemma}\label{lemma:factorization}
Every matrix $g\in \mathrm{GL}(r,\field{C})$ can be factored in the form
\[
g = L\Pi P
\]
with $P\in \mathrm{P}_{\lambda}$, $\Pi$ representing a fixed equivalence class $[\Pi]\in \mathrm{W}(r)/ \mathrm{W}_{\lambda}$, and $L\in \mathrm{N}(r)$. Moreover, for such $\Pi$, there is a unique factorization with $L$ belonging to $\mathrm{N}_{\lambda,[\Pi]}^{c}$.
\end{lemma}
\begin{proof}
Existence of factorizations $L\Pi P$ follow from the standard Bruhat decomposition
\[
\mathrm{GL}(r,\field{C})=\bigsqcup_{\Pi\in \mathrm{W}(r)} \mathrm{N}(r)\cdot \Pi \cdot \mathrm{B}(r).
\]
If $g=L'\Pi' P'$, then $P P'^{-1}=\Pi^{-1}(L^{-1}L')\Pi'$, and it follows that all nonzero entries of $\Pi^{-1}\Pi'$ are nonzero entries of $P P'^{-1}\in \mathrm{P}_{\lambda}$. Hence $\Pi^{-1}\Pi' \in \mathrm{W}_{\lambda}$.
To conclude, observe that if $g = L' \Pi P'$, then $L^{-1}L' =\Pi P\left(P'\right)^{-1} \Pi^{-1}$. Hence $L^{-1}L' \in \mathrm{N}_{\lambda,[\Pi]}$. In particular, since from lemma \ref{lemma:semidirect} every choice of $L$ factors as $L_{1} L_{2}$, with $L_{1}\in \mathrm{N}_{\lambda,[\Pi]}^{c}$ and $L_{2} \in \mathrm{N}_{\lambda,[\Pi]}$, it readily follows that $L_{1}$ is independent of the choice of $L$. Hence, there is a unique representative of $L$ lying in $\mathrm{N}_{\lambda,[\Pi]}^{c}$.
\end{proof}
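The existence part of the lemma is constructive. The following Python sketch (an illustration under our own conventions, for the complete partition; the factor $L$ produced lies in $\mathrm{N}(r)$ but is \emph{not} normalized to $\mathrm{N}_{\lambda,[\Pi]}^{c}$) computes a factorization $g = L\,\Pi\,P$ by clearing trailing row entries with rows above them:

```python
def bruhat_LPiP(g, eps=1e-12):
    """Write an invertible matrix g as L @ M, where L is unipotent lower
    triangular and M = Pi @ P with P lower triangular; the permutation is
    read off from the trailing nonzero column of each row of M."""
    r = len(g)
    M = [row[:] for row in g]
    L = [[1.0 if i == j else 0.0 for j in range(r)] for i in range(r)]
    trail = lambda row: max(k for k in range(r) if abs(row[k]) > eps)
    changed = True
    while changed:
        changed = False
        for j in range(r):
            for jp in range(j + 1, r):
                if trail(M[jp]) == trail(M[j]):
                    c = trail(M[j])
                    m = M[jp][c] / M[j][c]
                    for k in range(r):      # row operation: M <- E M
                        M[jp][k] -= m * M[j][k]
                    for a in range(r):      # accumulate L <- L E^{-1}
                        L[a][j] += m * L[a][jp]
                    changed = True
    sigma = [trail(M[j]) for j in range(r)]  # row j of M is row sigma(j) of P
    return L, sigma, M
```

Here row $j$ of $M$ equals row $\sigma(j)$ of the triangular factor $P$, matching the convention $(\Pi)_{jk}=\delta_{\sigma(j)k}$ used above.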
\begin{remark}\label{rem:Bruhat}
As a consequence of lemmas \ref{lemma:semidirect} and \ref{lemma:factorization}, we conclude the generalized Bruhat decomposition for $\mathrm{GL}(r,\field{C})$
\[
\mathrm{GL}(r,\field{C})=\bigsqcup_{[\Pi]\in \mathrm{W}(r)/\mathrm{W}_{\lambda}} \mathrm{N}_{\lambda,[\Pi]}^{c}\cdot \Pi \cdot\mathrm{P}_{\lambda}.
\]
Consequently, the generalized flag manifold $\curly{F}_{\lambda} \cong \mathrm{GL}(r,\field{C})/\mathrm{P}_{\lambda}$ admits a stratification
\[
\curly{F}_{\lambda} = \displaystyle\bigsqcup_{[\Pi]\in \mathrm{W}(r)/\mathrm{W}_{\lambda}} \curly{F}_{\lambda,[\Pi]}
\]
The strata $\curly{F}_{\lambda,[\Pi]} \cong \mathrm{N}_{\lambda,[\Pi]}^{c} \cdot \Pi \cdot \mathrm{P}_{\lambda}/\mathrm{P}_{\lambda}$ are the so-called \emph{Bruhat cells}. The largest (open) cell, of dimension $\left(r^{2}-(i_{1}^{2}+\dots+i_{s}^{2})\right)/2$, is given by the class of the permutation
\[
\Pi_{0} = (r,1)(r-1,2)\cdots(\lfloor (r +1)/2\rfloor +1, \lfloor r/2\rfloor).
\]
In turn, it is easy to see that the group $\mathrm{N}_{\lambda,[\Pi_{0}]}^{c}$ actually coincides with $\mathrm{N}_{\lambda_{0}}$, the unipotent radical of the parabolic subgroup associated to the partition
\[
\lambda_{0} = i_{s}+\dots + i_{1}.
\]
\end{remark}
\begin{remark}\label{rem:flag-action}
Let $\{\mathbf{e}_{1},\dots,\mathbf{e}_{r}\}$ be the canonical basis in $\field{C}^{r}$. Recall that the group $\mathrm{P}_{\lambda}$ is the stabilizer, under the standard $\mathrm{GL}(r,\field{C})$-action, of the standard generalized decreasing flag $F_{\lambda}=\{\field{C}^{r}=V_{1}\supset V_{2} \supset \dots\supset V_{s}\supset\{0\}\}$, where for $j \geq 2$,
\[
V_{j}=\mathrm{Span}\{\mathbf{e}_{i_{1}+\dots+i_{j-1}+1},\dots,\mathbf{e}_{r}\}.
\]
Any given element in $\curly{F}_{\lambda}$ is obtained as $g \cdot F_{\lambda}$, by means of the left $\mathrm{GL}(r,\field{C})$-action on $F_{\lambda}$, defined as
\[
g \cdot V_{j} = g(V_{j}), \qquad g\in\mathrm{GL}(r,\field{C}),
\]
providing the correspondence with the homogeneous space model of $\curly{F}_{\lambda}$. In particular, $F_{\lambda}$ corresponds to the class of the identity $[\mathrm{Id}]$, equal to its own Bruhat stratum (a 0-cell). Therefore, from lemma \ref{lemma:factorization}, we can detect the stratum $g \cdot F_{\lambda}$ belongs to in terms of the Bruhat decomposition of $g$.
\end{remark}
If we now fix a class $[\Pi]\in \mathrm{W}(r)/\mathrm{W}_{\lambda}$ and let the $[\Pi]$-stratum of $\mathrm{GL}(r,\field{C})$ act on $F_{\lambda}$, we conclude the following corollary.
\begin{corollary}\label{cor:action}
Each Bruhat cell $\curly{F}_{\lambda,[\Pi]}\subset \curly{F}_{\lambda}$ is a torsor for the group $\mathrm{N}_{\lambda,[\Pi]}^{c}$.
\end{corollary}
\begin{definition}\label{def:Bruhat-coord}
The \emph{Bruhat coordinates} of the Bruhat cell $\curly{F}_{\lambda,[\Pi]}\subset \curly{F}_{\lambda}$, for any given $[\Pi]\in \mathrm{W}(r)/\mathrm{W}_{\lambda}$, are the holomorphic coordinates given by the off-diagonal nonzero entries in the group $\mathrm{N}_{\lambda,[\Pi]}^{c}$. By a slight abuse of notation, these will be denoted by $\left(L, [\Pi]\right)$, or simply by $L$, where $L\in \mathrm{N}_{\lambda,[\Pi]}^{c}$.
\end{definition}
We will denote the complete flag manifold $\curly{F}_{\lambda_{r}}$ by $\curly{F}_{r}$, and similarly for its Bruhat cells $\curly{F}_{r,\Pi}$, $\Pi\in \mathrm{W}(r)$.
\section{Birkhoff-Grothendieck splittings over the Riemann sphere}\label{sec:bundles}
Let us assume, without any loss of generality, an ordering of coefficients $m_{1}\leq\dots\leq m_{r}$, determined by $s$ different integers $n_{1} < \dots < n_{s}$ with multiplicities $i_{1},\dots,i_{s}$ subject to $i_{1}+ \dots + i_{s} = r$, determining a partition $\lambda_{N}$. To simplify notations, we will denote
the parabolic group $\mathrm{P}_{\lambda_{N}}$ by $\mathrm{P}_{N}$, and so on. An automorphism of $E_{N}$ is then equivalent to a pair of holomorphic maps $\{g_{i}:\field{C}_{i}\to \mathrm{GL}(r,\field{C})\}_{i=0,1}$ satisfying
\[
g_{0}=z^{N}g_{1}z^{-N}\qquad \text{on}\;\; \field{C}^{*}.
\]
It readily follows that an automorphism of $E_{N}$ is fully determined by a matrix-valued polynomial
\[
g(z)=\sum_{l=0}^{n_{s}-n_{1}}P_{l}z^{l},
\]
with $P_{0}\in \mathrm{P}_{N}$, and for $l>0$, $P_{l}\in\mathfrak{n}_{N}$ has zero $jk$-blocks whenever $n_{j}-n_{k} < l$.
\begin{remark}
In the special case when $E_{N}$ is a twist of the trivial bundle, that is, when $r=i_{1}$ is the trivial partition, the previous structures degenerate, and it follows that $\Aut(E_{N}) = \mathrm{GL}(r,\field{C})$. Since the latter is a rather singular and special case, we will exclude it from our considerations. We will assume henceforth that the associated bundle $\ad(E_{N})$ \emph{is always nontrivial}. It then follows that for every $i = 1,\dots, n$,
\[
\Aut\left(E_{N}\right)|_{z_{i}} \cong \mathrm{P}_{N},
\]
and moreover, there is a unique subgroup $\mathrm{D}_{N}\subset \Aut\left(E_{N}\right)$ such that
\[
\mathrm{D}_{N} \subset \Aut\left(E_{N}\right)|_{z}
\qquad \forall \; z\in\field{C}\field{P}^{1}.
\]
\end{remark}
Let us consider the numbers $d_{l} = n_{l+1}-n_{l}$, for $1 \leq l \leq s-1$. Then, for any $s \geq j > k \geq 1$, the dimension of the subspace $\Aut\left( E_{N}\right)_{jk}\subset \Aut\left( E_{N}\right)$, corresponding to the $jk$-th block in the group $\Aut\left( E_{N}\right)$, may be expressed as
\begin{eqnarray*}
\dim \Aut\left( E_{N}\right)_{jk} & = & i_{j}i_{k}\left(n_{j} - n_{k} +1\right)\\
& = & i_{j}i_{k}\left(d_{k}+d_{k+1}+\dots+d_{j-1}+1\right).
\end{eqnarray*}
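As a minimal illustration of this dimension count, consider the example data $r=3$, $s=3$, with $n_{1}=0$, $n_{2}=1$, $n_{3}=3$ and multiplicities $i_{1}=i_{2}=i_{3}=1$, so that $d_{1}=1$ and $d_{2}=2$ (these values are chosen here purely for concreteness):

```latex
\begin{eqnarray*}
\dim \Aut\left(E_{N}\right)_{21} & = & i_{2}i_{1}\left(n_{2}-n_{1}+1\right) \;=\; 2 \;=\; d_{1}+1,\\
\dim \Aut\left(E_{N}\right)_{31} & = & i_{3}i_{1}\left(n_{3}-n_{1}+1\right) \;=\; 4 \;=\; d_{1}+d_{2}+1,\\
\dim \Aut\left(E_{N}\right)_{32} & = & i_{3}i_{2}\left(n_{3}-n_{2}+1\right) \;=\; 3 \;=\; d_{2}+1.
\end{eqnarray*}
```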
We will require the introduction of a collection of subgroups of the group $\mathrm{N}_{\lambda}$ of a special kind.
Let us consider, for every $1\leq l \leq s-1$, the $s$-cycle $\sigma_{l}\in\mathrm{S}_{s}$
given as
\[
\sigma_{l}(k) = \left\{
\begin{array}{lr}
k + l & \text{if}\quad k \leq s - l,\\\\
k - s + l & \text{if}\quad k > s - l.
\end{array}
\right.
\]
Each $\sigma_{l}$ acts on the partition $\lambda$, inducing a new partition
\[
\lambda_{l}=i_{\sigma_{l}(1)} + \dots + i_{\sigma_{l}(s)}.
\]
We can also induce a special $r$-cycle $\tau_{l}\in\mathrm{S}_{r}$, defined as
\[
\tau_{l}(k) =
\left\{
\begin{array}{lr}
k + i_{1} + \dots + i_{l} & \text{if}\quad k \leq i_{l + 1} + \dots + i_{s},\\\\
k - \left(i_{l + 1} + \dots + i_{s}\right) & \text{if}\quad k > i_{l + 1} + \dots + i_{s}.
\end{array}
\right.
\]
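To fix ideas, here is how $\tau_{l}$ acts in a small case (the data below is illustrative only): let $s=2$, $i_{1}=1$, $i_{2}=2$, so $r=3$, and take $l=1$. Then $\tau_{1}(k)=k+1$ for $k\leq i_{2}=2$ and $\tau_{1}(3)=3-2=1$, that is, $\tau_{1}$ is the 3-cycle $(1\,2\,3)$; under the convention $\Pi_{l}\,e_{k}=e_{\tau_{l}(k)}$ for the permutation matrix, this gives

```latex
\[
\tau_{1} = (1\;2\;3), \qquad
\Pi_{1} = \begin{pmatrix}
0 & 0 & 1\\
1 & 0 & 0\\
0 & 1 & 0
\end{pmatrix},
\qquad
\lambda_{1} = i_{\sigma_{1}(1)} + i_{\sigma_{1}(2)} = i_{2} + i_{1} = 2 + 1.
\]
```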
Denote by $\Pi_{l}$ the corresponding permutation matrix of $\tau_{l}$, and let us consider the associated groups $\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$ (recall lemma \ref{lemma:semidirect} and remark \ref{remark:complement}).
The next lemma reveals the structure of these groups.
\begin{lemma}\label{lemma:abelian}
The groups $\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$, $1 \leq l \leq s-1$, are the abelian subgroups of $\mathrm{N}_{\lambda}\subset \mathrm{N}(r)$ determined by a single lower-left block of size $(i_{l+1} + \dots + i_{s})\times(i_{1} + \dots + i_{l})$.
\end{lemma}
\begin{proof}
By definition, the group $\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$ has a zero $jk$-th entry ($j>k$) if $\tau^{-1}_{l}(j)>\tau^{-1}_{l}(k)$, or if $\tau^{-1}_{l}(j) < \tau^{-1}_{l}(k)$ when $\{\tau^{-1}_{l}(j), \tau^{-1}_{l}(k)\}$ belong to the same partition interval in $\lambda_{l}$. It is straightforward to see from the definition of $\tau_{l}$ (a shift by $i_{1} + \dots + i_{l}$ modulo $r$) that the second possibility never occurs, since $\Ad\left(\Pi_{l}\right)\left(\mathrm{D}_{\lambda_{l}}\right) = \mathrm{D}_{\lambda}$. Assume that a pair $j > k$ moreover satisfies that either $i_{1} < j \leq i_{1}+ \dots + i_{l}$ and $k \leq i_{1} + \dots + i_{l-1}$, or $i_{1}+ \dots + i_{l} < k \leq i_{1} + \dots + i_{s-1}$ and $i_{1} + \dots + i_{l} < j$. The permutation
$\tau_{l}$ has been defined in such a way that then $\tau^{-1}_{l}(j) > \tau^{-1}_{l}(k)$ in both cases, and all such terms vanish. Moreover, assume that $k \leq i_{1}+ \dots + i_{l} < j$. Then $\tau^{-1}_{l}(j) < \tau^{-1}_{l}(k)$, and there is no vanishing constraint for any of these entries. Therefore, $\mathrm{N}_{\lambda_{l}, [\Pi_{l}]}^{c}$ is determined precisely by the single lower-left block whose entries' indices satisfy $k \leq i_{1}+ \dots + i_{l} < j$.
Now, we can work with the block structure of $\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$, with respect to the partition $\lambda$. Restated this way, the block structure is determined by decreeing the $jk$-th block to be equal to 0 if either $j \leq l$, or $l < k$ (figure \ref{subgroups}).
Therefore, for a given $1 \leq l \leq s-1$, let us assume that $j > k$ satisfy $j > l \geq k$. If $n$ and $n'$ are arbitrary elements in $\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$, then the $jk$-th block of $nn'$ satisfies
\[
(n n')_{jk} = n_{jk} + n'_{jk} = (n'n)_{jk},
\]
that is, $\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$ is abelian. However, the subgroups $\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$ do not pairwise commute. In general,
\begin{equation}\label{eq:commutator}
\left[\mathrm{N}_{\lambda_{l_{1}},\left[\Pi_{l_{1}}\right]}^{c}, \mathrm{N}_{\lambda_{l_{2}},\left[\Pi_{l_{2}}\right]}^{c}\right] = \mathrm{N}_{\lambda_{l_{1}},\left[\Pi_{l_{1}}\right]}^{c}\cap \mathrm{N}_{\lambda_{l_{2}},\left[\Pi_{l_{2}}\right]}^{c}\qquad \text{if}\quad l_{1}\neq l_{2}.
\end{equation}
\end{proof}
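The abelian property and the commutator relation \eqref{eq:commutator} can be checked by hand in the smallest case $s=3$, $i_{1}=i_{2}=i_{3}=1$ (so $r=3$); writing $E_{jk}$ for the elementary matrices, this worked example is included here for illustration only. Elements of $\mathrm{N}_{\lambda_{1},[\Pi_{1}]}^{c}$ and $\mathrm{N}_{\lambda_{2},[\Pi_{2}]}^{c}$ take the form $I+aE_{21}+bE_{31}$ and $I+cE_{31}+dE_{32}$ respectively, and a direct computation gives

```latex
\[
\left(I+aE_{21}+bE_{31}\right)\left(I+cE_{31}+dE_{32}\right)
\left(I+aE_{21}+bE_{31}\right)^{-1}\left(I+cE_{31}+dE_{32}\right)^{-1}
= I - ad\,E_{31},
\]
```

so the group commutator lies in the intersection, determined by the single $E_{31}$-entry, in agreement with \eqref{eq:commutator}.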
The structure of the groups of block-unipotent matrices $\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$ is sketched in figure \ref{subgroups} for the values $l=1,2,\dots,s-1$.
\begin{figure}[!htb]
\[
\scriptstyle
\begin{pmatrix}
I_{1} & & & & \\
* & I_{2} & & 0 & \\
* & 0 & \ddots & & \\
\vdots & \vdots & & I_{s-1} & \\
* & 0 & \hdots & 0 & I_{s}
\end{pmatrix},
\begin{pmatrix}
I_{1} & & & & \\
0 & I_{2} & & 0 &\\
* & * & \ddots & & \\
\vdots & \vdots & & I_{s-1} & \\
* & * & \hdots & 0 & I_{s}
\end{pmatrix},
\dots,
\begin{pmatrix}
I_{1} & & & & \\
0 & I_{2} & & 0 & \\
0 & 0 & \ddots & & \\
\vdots & \vdots & & I_{s-1} & \\
* & * & \hdots & * & I_{s}
\end{pmatrix}
\]
\caption{Block structure of the groups $\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$, where $I_{j}=\textrm{Id}_{i_{j}\times i_{j}}$.}\label{subgroups}
\end{figure}
Recall from remark \ref{rem:Bruhat} that $\mathrm{N}_{\lambda} = \mathrm{N}_{\lambda_{0},[\Pi_{0}]}^{c}$. The next result is the heart of our structural characterization of the group $\Aut \left(E_{N}\right)$.
\begin{theorem}[Geometric factorization of bundle automorphisms]\label{theo:factorization}
If $s>2$, the group of bundle automorphisms $\Aut(E_{N})$ can be expressed as a semidirect product
\begin{equation}\label{eq:semidirect}
\Aut(E_{N}) \cong \left( \left( \mathrm{G}_{1} \cdot \mathrm{G}_{2}\cdot \dots \cdot \mathrm{G}_{s-1}\right) \rtimes \mathrm{N}_{N}\right) \rtimes \mathrm{D}_{N},
\end{equation}
where for each $1\leq l \leq s-1$, the group $\mathrm{G}_{l}$ is isomorphic to the direct product
\begin{equation}\label{eq:semidirect2}
\underbrace{\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}\times\dots\times \mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}}_{\text{$d_{l}$ times}},
\end{equation}
and corresponds to restrictions to the fibers over $d_{l}$ different points in $\field{C}\field{P}^{1}$.
In the special case $s=2$, there is an isomorphism
\begin{equation}\label{eq:semidirect s=2}
\Aut(E_{N}) \cong \left( \underbrace{ \mathrm{N}_{N} \times \dots \times \mathrm{N}_{N}}_{\text{$d_{1} + 1$ times}}\right) \rtimes \mathrm{D}_{N}.
\end{equation}
\end{theorem}
\begin{proof}
Assume that $s > 2$, and consider each of the blocks $\Aut\left(E_{N}\right)_{jk}$. Those for which $j=k$ constitute the $\mathrm{D}_{N}$-factor in $\Aut\left(E_{N}\right)$. When $j > k$, the blocks have the structure of a vector space of matrix-valued polynomials. Let us consider an arbitrary vector space decomposition
\[
\Aut\left(E_{N}\right)_{jk}=V^{0}_{jk}\oplus\left(\bigoplus_{l=k}^{j-1} V^{l}_{jk}\right)
\]
where the spaces $V^{0}_{jk}$ are blocks with 1-dimensional entries, constrained to form a group isomorphic to $\mathrm{N}_{N}$ (such choices obviously exist), and where for every $l>0$, each block entry of $V^{l}_{jk}$ is a subspace of dimension $d_{l}$.
For each $0 \leq l \leq s-1$, consider the subspace $\mathrm{G}_{l}$ of $\Aut\left(E_{N}\right)$ consisting of the block-unipotent elements whose $jk$-th block belongs to the subspace $V^{l}_{jk}$,
or is trivial when $l\geq j$ or $l <k$. Then, by construction, each subspace $V^{l}_{jk}$ belongs to a unique $\mathrm{G}_{l}$. In particular
\[
\mathrm{G}_{0} \cong \mathrm{N}_{\lambda_{0},[\Pi_{0}]}^{c} = \mathrm{N}_{N}.
\]
By definition, each space $\mathrm{G}_{l}$, $l>0$, has the structure of a vector space of dimension $d_{l}\cdot \dim \mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}$, and is in fact isomorphic to the direct product of abelian Lie groups
\[
\underbrace{\mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}\times\dots\times \mathrm{N}_{\lambda_{l},[\Pi_{l}]}^{c}}_{\text{$d_{l}$ times}}
\]
following from lemma \ref{lemma:abelian}. Thus, this vector space structure is equivalent to an abelian group structure, in such a way that the previous isomorphism is also an isomorphism of groups.
It is straightforward to verify that the product $\mathrm{G}_{1}\cdot\dots\cdot \mathrm{G}_{s-1}$ is normal in $\Aut\left(E_{N}\right)$. Therefore, the factorization \eqref{eq:semidirect} of the group $\Aut\left(E_{N}\right)$ readily follows.
When $s = 2$ (for example, when $E_{N}$ has rank 2),
for the only special permutation matrix $[\Pi_{0}]=[\Pi_{1}]$, we have that $\mathrm{N}_{\lambda_{1},[\Pi_{1}]}^{c} = \mathrm{N}_{N}$. Hence, $\mathrm{G}_{1}\cdot \mathrm{G}_{0}$ is abelian, and the factorization \eqref{eq:semidirect s=2} follows as a consequence of
\eqref{eq:semidirect}.
\end{proof}
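For a concrete instance of the case $s=2$ (the splitting data below is illustrative), take $r=2$, $i_{1}=i_{2}=1$, $n_{1}=0$, $n_{2}=2$, so $d_{1}=2$. An automorphism of $E_{N}$ is then a polynomial matrix

```latex
\[
g(z) = \begin{pmatrix}
a & 0\\
p(z) & b
\end{pmatrix},
\qquad a,b\in\field{C}^{*},\quad \deg p \leq 2,
\]
```

and evaluating $p$ at $d_{1}+1=3$ distinct points of $\field{C}\field{P}^{1}$ (via Lagrange interpolation) identifies the unipotent part with $\mathrm{N}_{N}\times\mathrm{N}_{N}\times\mathrm{N}_{N}$, in agreement with \eqref{eq:semidirect s=2}.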
\begin{remark}\label{rem:N_0}
The action of $\Aut\left(E_{N}\right)$ on each $\curly{F}\left(E_{i}\right)\cong \curly{F}_{r}$ actually preserves certain unions of cells in the Bruhat stratification of $\curly{F}_{r}$
\[
\curly{F}_{r} = \bigsqcup_{\Pi\in \mathrm{W}(r)} \curly{F}_{\Pi}
\]
inducing subsequent refinements of the Harder-Narasimhan stratification of $\curly{N}$. These refinements are determined by the Bruhat stratifications of the components of $\curly{F}^{n}_{r}$ on every stratum $\curly{N}_{N}$ for which the underlying splitting $E_{N}$ is not a twist of the trivial bundle (see \ref{subsec:Bruhat}). This finer stratification contains an open and dense set $\curly{N}_{0}\subset \curly{N}$, consisting of evenly-split bundles (remark \ref{rem:evenly-split}) for which the component representatives $F_{i}$ of the stable parabolic structure $\left([\mathbf{F}],\curly{W}\right)$ belong to the $\Aut\left(E_{N}\right)$-orbit of the largest Bruhat cell in $\curly{F}_{r}$.
\end{remark}
\section{Actions of $\mathrm{Aut}\left(E_{N}\right)$}\label{sec:actions}
The next step in the study of the group $\Aut\left(E_{N}\right)$ is the construction of a space where its action is optimal, in the sense that it is transitive, and free up to the action of the residual Levi complement $\mathrm{D}_{N}$.
A motivating example, albeit parallel to the present discussion, is the case of the group $\mathrm{GL}(2,\field{C})$ with its standard left action on the configuration space of triples of points $w_{i}\in\field{C}\field{P}^{1}$ by M\"obius transformations. It is standard that such an action is transitive, and that every triple is stabilized by $Z(\mathrm{GL}(2,\field{C}))$. Now, the group $\mathrm{GL}(2,\field{C})$ can be thought of as the group of automorphisms of a trivial rank 2 vector bundle on $\field{C}\field{P}^{1}$, while each point $w_{i}$ can be thought of as a flag in $\field{C}^{2}$. Thus, if flags are considered on the fibers over an arbitrary triple of points $z_{1},z_{2},z_{3}$ in the base, the previous result may be interpreted as a statement on the nature of the action of $\Aut \left(E\right)$ on a space of pairwise-different triples of flags $(w_{1},w_{2},w_{3})$ on the fibers of $E$ over $z_{1},z_{2},z_{3}\in\field{C}\field{P}^{1}$.
\begin{lemma}\label{lemma:action-D}
For any $1 \leq l \leq s-1$, the group $\mathrm{D}_{N}\subset \Aut\left(E_{N}\right)$ acts on the stratum $\curly{F}_{\lambda_{l},[\Pi_{l}]}\subset \curly{F}_{\lambda_{l}}$ by means of its adjoint action in $\mathrm{N}_{\lambda_{l},\left[\Pi_{l}\right]}^{c}\subset \mathrm{N}_{N}$. The special flags $F_{\lambda_{l},\left[\Pi_{l}\right]} := \left[\Pi_{l} \right] \in \curly{F}_{\lambda_{l},[\Pi_{l}]}$ (remark \ref{rem:flag-action}) are stabilized by $\mathrm{D}_{N}$.
\end{lemma}
\begin{proof}
It is straightforward to verify that the subgroup $\mathrm{N}_{\lambda_{l},\left[\Pi_{l}\right]}^{c}$ is invariant under the adjoint action of $\mathrm{D}_{N}$ in $\mathrm{N}_{N}$ (recall that $\mathrm{N}_{\lambda_{l},\left[\Pi_{l}\right]}^{c}\subset \mathrm{N}_{N}$). In the homogeneous space model, a point in $\curly{F}_{\lambda_{l},[\Pi_{l}]}$ corresponds to an orbit $[L \Pi_{l} P]$, for $L\in \mathrm{N}_{\lambda_{l},\left[\Pi_{l}\right]}^{c}$ fixed, and $P\in\mathrm{P}_{\lambda_{l}}$ arbitrary. Moreover, by definition, we have that
\[
\Ad\left(\Pi_{l}\right)\left(\mathrm{D}_{\lambda_{l}}\right)=\mathrm{D}_{\lambda}.
\]
Therefore, an element $D\in \mathrm{D}_{N} = \mathrm{D}_{\lambda}$ acts on such an orbit as $[L \Pi_{l}P] \mapsto [\Ad(D)(L)\Pi_{l}P]$. It readily follows that $F_{\lambda_{l},\left[\Pi_{l}\right]}$ is stabilized by $\mathrm{D}_{\lambda}$, since its Bruhat coordinates are $L = \mathrm{Id}$.
\end{proof}
\begin{definition}\label{def:flag space}
Given a Birkhoff-Grothendieck splitting $E_{N}\to \field{C}\field{P}^{1}$, together with $n_{s} - n_{1} + 1 = d_{1} + \dots + d_{s-1} + 1$ arbitrary points $z_{0}=0$,\footnote{The normalization $z_{0}=0$ is not strictly necessary, but implementing it accounts for a simpler proof of theorem \ref{theo:normalization}.} $z_{1},\dots, z_{d_{1} + \dots + d_{s-1}}$ in $\field{C}\field{P}^{1}$, let
\[
\curly{C}_{N}= \curly{F}_{\lambda_{0},[\Pi_{0}]}^{0}\times\left(\prod_{i=1}^{d_{1}}\curly{F}_{\lambda_{1},[\Pi_{1}]}^{i}\right)\times\dots\times\left(\prod_{i=1}^{d_{s-1}}\curly{F}_{\lambda_{s-1},[\Pi_{s-1}]}^{d_{1}+\dots+d_{s-2}+i}\right),
\]
where each $\curly{F}^{i}_{\lambda_{l},[\Pi_{l}]}$ is the corresponding stratum of the flag manifold $\curly{F}_{\lambda_{l}}^{i}$ associated to the fiber $E_{i}$ over the point $z_{i}$ in $E_{N}$.
\end{definition}
By definition, and implementing the fundamental factorization \eqref{eq:semidirect} from theorem \ref{theo:factorization} for a special collection of localized automorphisms, we may obtain an induced natural left action of the group $\Aut\left(E_{N}\right)$ on $\curly{C}_{N}$. From corollary \ref{cor:action}, we can conclude our main result regarding the structure of the group $\Aut(E_{N})$.
\begin{theorem}\label{theo:normalization}
The induced left action of $\left(\mathrm{G}_{1}\cdot\dots\cdot \mathrm{G}_{s-1}\right)\rtimes \mathrm{N}_{N} \subset\Aut\left(E_{N}\right)$ on $\curly{C}_{N}$ is free and transitive (in particular, the action is proper).
\end{theorem}
\begin{proof}
Let $d_{0}=0$. Recall from theorem \ref{theo:factorization} that the subgroups $\mathrm{G}_{0} \cong \mathrm{N}_{\lambda_{0},\left[\Pi_{0}\right]}^{c} = \mathrm{N}_{N}$ and $\mathrm{G}_{l}$, $l > 0$, are constructed after a choice of subspaces $V^{l}_{jk}$ is made, where $j>k$ by hypothesis (recall that the blocks with $j=k$ determine the subgroup $\mathrm{D}_{\lambda}$). Choose each $V^{0}_{jk}$ to consist of constant entries. In turn, for the remaining block components, let $V_{jk}^{k}\oplus\dots\oplus V_{jk}^{j-1}$ be spanned by the $jk$-th blocks determined by a collection of Lagrange polynomials of degree $d_{k} + \dots + d_{j-1}$ for the points $z_{d_{0} + \dots + d_{k-1}+1},\dots, z_{d_{0} + \dots + d_{j-1}}$, and with a simple zero at $z_{0}=0$,
and for every $k \leq l \leq j-1$, let $V^{l}_{jk}$ be the subspace spanned by the subcollection of Lagrange polynomials for the points $z_{d_{0} + \dots + d_{l-1}+1},\dots, z_{d_{0} + \dots + d_{l}}$ (assumed to vanish at $z_{0}=0$). If one of these points is $\infty$, the corresponding trivialization of $E_{N}$ must be considered.
The constraint for $\mathrm{G}_{0}$ is obviously satisfied, and each $\mathrm{G}_{l}$, for $l=1,\dots,s-1$, acts freely and transitively on the factor
\[
\curly{C}_{l}=\prod_{j=1}^{d_{l}}\curly{F}_{\lambda_{l},[\Pi_{l}]}^{d_{0} + \dots + d_{l-1} +j},
\]
and similarly, the subgroup $\mathrm{G}_{0} = \mathrm{N}_{N}$ acts freely and transitively over the stratum $\curly{C}_{0} = \curly{F}_{\lambda_{0},\left[\Pi_{0}\right]}^{0}$.
In fact, for any $l >0$, the action of $\mathrm{G}_{l}$ on the remaining factors $\{\curly{C}_{l'}\}_{l'\neq l}$, including the stratum $\curly{C}_{0}$, is trivial. The latter case is easy to see, since by construction $\mathrm{G}_{l}|_{z_{0}} = \{\mathrm{Id}_{r \times r}\}$. For the remaining cases, we first observe that for all $i = d_{0} + \dots + d_{l' - 1} +1,\dots, d_{0} + \dots + d_{l'}$, in a similar way as before, we have that
\[
\mathrm{G}_{l} \cap \mathrm{G}_{l'}|_{z_{i}} = \{\mathrm{Id}_{r \times r}\},
\]
and the claim is true for the subgroup $\mathrm{G}_{l} \cap \mathrm{G}_{l'} \subset \mathrm{G}_{l}$. Now, due to its abelian nature, we can decompose the group $\mathrm{G}_{l}$ as a direct product
\[
\mathrm{G}_{l} = \left(\mathrm{G}_{l} \cap \mathrm{G}_{l'}\right) \times \mathrm{H}_{l l'},
\]
where $\mathrm{H}_{l l'}$ has a nonzero $jk$-th block if $l < j \leq l'$ and $1 \leq k \leq l$ in the case $l < l'$, or if $ j > l$ and $ l' < k \leq l$ in the case $l > l'$. In any case, the group
\[
\Pi_{l'} \mathrm{H}_{l l'} \Pi_{l'}^{-1}
\]
is a subgroup of $\mathrm{N}_{\lambda_{l'}}\subset \mathrm{P}_{\lambda_{l'}}$, which can be readily seen from the fact that $\tau_{l'}$ is the $r$-cycle defined by shifting forward (modulo $r$) by $i_{1} + \dots + i_{l'}$. Since $\mathrm{H}_{l l'}$ and $\mathrm{G}_{l'}$ commute as a consequence of \eqref{eq:commutator}, it follows that the action of $\mathrm{H}_{l l'}$ on $\curly{C}_{l'}$ is also trivial, since every component of an element in $\curly{C}_{l'}$ is of the form $L \Pi_{l'} \cdot F_{\lambda_{l'}}$, with $L\in \mathrm{G}_{l'}$, and by definition, the special flag $F_{\lambda_{l'}}$ is stabilized by $\mathrm{P}_{\lambda_{l'}}$. Consequently, the subgroup $\mathrm{G}_{1}\cdot\dots\cdot \mathrm{G}_{s-1}$ acts freely and transitively on $\curly{C}_{1}\times\dots\times \curly{C}_{s-1}$.
Clearly, the stabilizer in $\left(\mathrm{G}_{1}\cdot\dots\cdot \mathrm{G}_{s-1}\right) \rtimes \mathrm{N}_{N}$ of any point in $\curly{C}_{N}$ is the identity. Now, although the group $\mathrm{N}_{N} = \mathrm{N}_{\lambda_{0},\left[\Pi_{0}\right]}^{c}$ does not act trivially on any factor $\curly{C}_{l}$, $l=0,\dots, s-1$, the theorem follows, since given any point in $\curly{C}_{N}$, we can first map its component in $\curly{C}_{0}$ to any given flag in the stratum over $z_{0}$. Then $\mathrm{G}_{1}\cdot\dots\cdot \mathrm{G}_{s-1}$ stabilizes this flag, and also acts freely and transitively on the complement $\curly{C}_{N}\setminus \curly{C}_{0}$. Consequently, the components in $\curly{C}_{N}\setminus \curly{C}_{0}$ can also be normalized in any desired way. Hence, any two points in $\curly{C}_{N}$ can be connected by a unique element in $\left(\mathrm{G}_{1}\cdot\dots\cdot \mathrm{G}_{s-1}\right) \rtimes \mathrm{N}_{N}$.
This completes the proof.
\end{proof}
Let us consider the special flag representatives $F_{\lambda_{l},\left[\Pi_{l}\right]} = [\Pi_{l}]$ in the Bruhat cells $\curly{F}_{\lambda_{l},\left[\Pi_{l}\right]}$ (see remark \ref{rem:flag-action}). As a consequence of lemma \ref{lemma:action-D} and theorem \ref{theo:factorization}, we conclude the following corollary.
\begin{corollary}[Normalization of flags]\label{cor:normalization}
Under the action of $\Aut\left(E_{N}\right)$, every element in $\curly{C}_{N}$ can be normalized to the special element
\[
\left(F_{\lambda_{0},[\Pi_{0}]}, \underbrace{F_{\lambda_{1},[\Pi_{1}]},\dots,F_{\lambda_{1},[\Pi_{1}]}}_{\text{$d_{1}$ times}},\dots, \underbrace{F_{\lambda_{s-1},[\Pi_{s-1}]},\dots,F_{\lambda_{s-1},[\Pi_{s-1}]}}_{\text{$d_{s-1}$ times}}\right),
\]
which is stabilized by the subgroup $\mathrm{D}_{N}$.
\end{corollary}
\begin{remark}\label{rem:evenly-split}
The following case is of special importance when holomorphic families of parabolic bundles on $\field{C}\field{P}^{1}$ are considered, due to its genericity. A vector bundle $E_{N}\to\field{C}\field{P}^{1}$ is said to be \emph{evenly-split} if its splitting coefficients satisfy $|m_{j}-m_{k}|\leq 1$ $\forall\, j,k$. In that case, $r=i_{1} + i_{2}$, and $n_{2}-n_{1} = d_{1} = 1$. The evenly-split condition is an open condition on any algebraic family of vector bundles over $\field{C}\field{P}^{1}$, and in particular, for the families of underlying bundles on any moduli space of stable parabolic bundles $\curly{N}$. It readily follows that
\[
\Aut\left(E_{N}\right) \cong \left(\mathrm{N}_{N}\times \mathrm{N}_{N}\right) \rtimes \mathrm{D}_{N}.
\]
Upon consideration of the generalized flag manifolds $\curly{F}_{N}^{0}$ and $\curly{F}_{N}^{\infty}$, whose points parametrize, respectively, generalized flags in the fibers over the points $z_{0}=0$ and $z_{1}=\infty$ in $\field{C}\field{P}^{1}$, our conclusion is that $\Aut\left(E_{N}\right)$ acts transitively on the product of the large Bruhat cells $\curly{F}^{0}_{N,[\Pi_{0}]}\times \curly{F}^{\infty}_{N,[\Pi_{0}]}$. Consequently, every element can be normalized to
\[
\left(F_{N,[\Pi_{0}]},F_{N,[\Pi_{0}]}\right),
\]
which has the subgroup $\mathrm{D}_{N}$ as its stabilizer.
\end{remark}
\section{Holomorphic gauge-fixing of logarithmic connections}\label{sec:gauge}
The contents of this section rely on the infinitesimal description of the moduli space $\curly{N}$. For convenience, we will assume without loss of generality that $z_{n-1} = 0$ and $z_{n} = \infty$. It follows from the infinitesimal deformation theory and Serre duality that, given any isomorphism class of stable parabolic bundles $\{E_{*}\}\in \curly{N}$, together with the isomorphism $\{E_{*}\} \cong \left(E_{N},[\mathbf{F}]\right)$, its holomorphic cotangent space can be described, in terms of any representative $\mathbf{F}\in[\mathbf{F}]$, as the space
\[
T^{*}_{\left(E_{N},[\mathbf{F}]\right)}\curly{N}\cong H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}\right)^{\vee}\otimes K_{\field{C}\field{P}^{1}}\right)
\]
where $\Par\ad\left(E_{N},\mathbf{F}\right)$ is the holomorphic vector bundle over $\field{C}\field{P}^{1}$, of degree
\[
\deg \left(\Par\ad\left(E_{N},\mathbf{F}\right)\right) = -n\dim \curly{F}_{r},
\]
associated to the locally-free sheaf of endomorphisms of $E_{N}$ that preserve the quasi parabolic structure determined by $\mathbf{F}$ \cite{MS80,BH95}. Therefore, the possible values that any local section can attain at the points $z_{1},\dots,z_{n}$ form a solvable Lie algebra isomorphic to $\mathfrak{b}(r)$. Moreover, there is a bundle map
\[
\Par\ad\left(E_{N},\mathbf{F}\right)\to \ad\left(E_{N}\right)
\]
which is an isomorphism away from the fibers over the points $z_{1},\dots,z_{n}$. It follows from the parabolic stability of the pair $\left(E_{N},\mathbf{F}\right)$ that
\[
H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}\right)\right) \cong \field{C},
\]
with global sections corresponding to multiples of the identity in $\mathfrak{gl}(r,\field{C})$. In particular, the Riemann-Roch theorem for vector bundles then implies that
\[
\dim\curly{N} = n\dim \curly{F}_{r} -(r^2-1).
\]
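As a quick check of this dimension formula (in a case we add here for illustration), take $r=2$ with complete flags, so that $\dim\curly{F}_{2}=1$:

```latex
\[
\dim\curly{N} = n\dim \curly{F}_{2} - (2^{2}-1) = n - 3,
\]
```

recovering the familiar count for rank-2 stable parabolic bundles over the $n$-punctured sphere.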
\begin{remark}
For the sake of clarity of the constructions to come, we will provide a more detailed account of the bundle $\Par\ad\left(E_{N},\mathbf{F}\right)$. Recall from definition \ref{def:Bruhat-coord} that, for any $\Pi\in \mathrm{W}(r)$ and its corresponding Bruhat cell $\curly{F}_{r,\Pi} \subset \curly{F}_{r}$, its Bruhat coordinates, corresponding to the nonzero off-diagonal entries of $L \in \mathrm{N}_{r,\Pi}^{c}$, are a consequence of the uniqueness of the standard Bruhat decomposition $g = L\Pi P$ of any $g\in\mathrm{GL}(r,\field{C})$, thought of as an ordered frame giving rise to a decreasing flag
\[
F = \{V_{1} = \field{C}^{r}\supset V_{2}\supset\dots\supset V_{r}\supset \{0\}\},
\]
and where $P\in \mathrm{B}(r)$ represents the arbitrariness of the parametrization. It readily follows that the Lie algebra of endomorphisms of $\field{C}^{r}$ preserving $F$ consists of elements of the form
\[
\Ad\left(L\Pi\right)\left(c\right),\qquad c\in\mathfrak{b}(r).
\]
Now, for every $i = 1,\dots,n$, let $\left(L_{i},\Pi_{i}\right)$ be the Bruhat coordinates of the complete descending flag in the $i$th component of $\mathbf{F}$.
The vector bundle of parabolic endomorphisms $\Par\ad\left(E_{N},\mathbf{F}\right)$ can be explicitly constructed as a ``twist" of the endomorphism bundle $\ad\left(E_{N}\right)$, if one refines the transition functions of the latter at a collection of non-intersecting punctured disks $\curly{U}_{i}$ centered at each point $z_{1},\dots,z_{n}$. Letting $\curly{U}_{0} = \field{C}\field{P}^{1}\setminus S$, and thanks to the algebra structure of $\mathfrak{gl}(r,\field{C})$, the transition functions defining $\Par\ad\left(E_{N},\mathbf{F}\right)$ can be represented in matrix form as $g_{0i}(z) = \Ad\left(L_{i}\Pi_{i}\right)\left(f_{0i}(z)\right)$, where for $i = 1, \dots, n-1$,
\[
\left(f_{0i}(z)\right)_{kl} = \left\{
\begin{array}{cl}
1 & \text{if $k \geq l$}\\\\
(z-z_{i})^{-1} & \text{if $k < l$},
\end{array}
\right.
\]
while
\[
\left(f_{0n}(z)\right)_{kl} = \left\{
\begin{array}{cl}
z^{m_{k} - m_{l}} & \text{if $k \geq l$}\\\\
z^{m_{k} - m_{l} - 1} & \text{if $k < l$}.
\end{array}
\right.
\]
Finally, if any other representative $\mathbf{F}'$ is chosen in the $\Aut\left(E_{N}\right)$-orbit $[\mathbf{F}]$ of $n$-tuples of flags, there is an induced bundle isomorphism between $\Par\ad\left(E_{N},\mathbf{F}\right)$ and the corresponding bundle of parabolic endomorphisms $\Par\ad\left(E_{N},\mathbf{F}'\right)$, and similarly for its spaces $H^{k}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}'\right)\right)$, $k=0,1$, given explicitly in terms of the element $g\in\Aut\left(E_{N}\right)$ such that $\mathbf{F}' = g\cdot \mathbf{F}$.
\end{remark}
\begin{lemma}\label{lemma:matrix-differentials}
The elements of $H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}\right)^{\vee}\otimes K_{\field{C}\field{P}^{1}}\right)$ correspond to the $\mathfrak{gl}(r,\field{C})$-valued meromorphic differentials $\Phi (z)$ on $\field{C}\field{P}^{1}$, that are holomorphic over $\field{C}\field{P}^{1}\setminus S$, and such that, moreover,
\begin{itemize}
\item[(i)]
$\Phi(z)$ has simple poles at $z_{1},\dots, z_{n-1}\in\field{C}$,
\item[(ii)] $z^{-N}\Phi(z)z^{N}$ has a simple pole at $z_{n} = \infty$,
\item[(iii)] its residues at each point $z_{1},\dots, z_{n}\in S$ have the form $\Ad\left(L_{i}\Pi_{i}\right)(c_{i})$, where $(L_{i},\Pi_{i})$ are the Bruhat coordinates for $F_{i}\in \curly{F}_{r,\Pi_{i}}$, and where $c_{i}\in\mathfrak{n}(r)$.
\end{itemize}
\end{lemma}
\begin{proof}
This follows from the construction of the bundle $\Par\ad\left(E_{N},\mathbf{F}\right)$ and its dual, described in terms of affine trivializations and transition functions: if $g_{\alpha\beta}$ is a set of transition functions for a bundle $E$, then its dual $E^{\vee}$ is determined by the transition functions $g^{\vee}_{\alpha\beta} = {}^{t}g_{\alpha\beta}^{-1}$. The additional twists over each of the points $z_{1},\dots,z_{n}$ determine the polar behavior of the local sections of $\Par\ad\left(E_{N},\mathbf{F}\right)$ over $S$, and its relation to the quasi parabolic structure $\mathbf{F}$.
\end{proof}
\begin{definition}
A \emph{logarithmic connection adapted to a parabolic structure} $\left(\mathbf{F},\curly{W}\right)$ in the bundle splitting $E_{N}$ is a singular connection, holomorphic over $E_{N}|_{\field{C}\field{P}^{1}\setminus S}$, and prescribed in the affine trivialization over $\field{C}\subset\field{C}\field{P}^{1}$ by a meromorphic $\mathfrak{gl}(r,\field{C})$-valued 1-form
\begin{equation}\label{eq:germ-0}
A(z) = \left(\sum_{i=1}^{n-1}\frac{A_{i}}{z-z_{i}} + f_{0}(z)\right) dz
\end{equation}
with $f_{0}(z)$ holomorphic, and such that, in the local coordinate at $\infty$,
\begin{equation}\label{eq:germ-infty}
z^{-N}A(z)z^{N} + N dz/z
\end{equation}
has a simple pole, whose residue we will denote by $A_{n}$, and for each $i=1,\dots, n$, $A_{i}$ is semisimple with eigenvalues $\{\alpha_{ij}\}_{j=1}^{r}$, and with corresponding ordered eigenlines spanning the $n$-tuple of complete descending flags $\mathbf{F} = \left(F_{1},\dots, F_{n}\right)$.
\end{definition}
In other words, there exists a factorization
\[
A_{i} = B_{i}W_{i}B_{i}^{-1},\qquad W_{i} = \begin{pmatrix}
\alpha_{i1} & & 0\\
& \ddots & \\
0 & &\alpha_{ir}
\end{pmatrix}
\]
in such a way that, for each $i=1, \dots, n$, $B_{i}$ determines a projective ordered frame giving rise to $F_{i}$. It is not immediate that any given triple $\left(E_{N},\mathbf{F},\curly{W}\right)$ admits adapted logarithmic connections. However, in the special case when such a triple determines a stable parabolic bundle, the existence of a logarithmic connection with irreducible unitary monodromy is guaranteed by the Mehta-Seshadri theorem \cite{MS80}.
\begin{corollary}\label{cor:affine}
For any triple $\left(E_{N},\mathbf{F},\curly{W}\right)$, if nonempty, its space $\curly{A}_{\mathbf{F}}$ of adapted logarithmic connections is an affine space for
\[
H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}\right)^{\vee}\otimes K_{\field{C}\field{P}^{1}}\right).
\]
\end{corollary}
\begin{proof}
This is so since there exists a unique factorization $A_{i} = B_{i}W_{i}B_{i}^{-1}$ for which the Bruhat decomposition $B_{i} = L_{i}\Pi_{i} P_{i}$ is such that $P_{i}\in \mathrm{N}(r)$. There is a bijective correspondence between such $P_{i}$ and $c_{i}\in \mathfrak{n}(r)$ such that the following relation holds
\[
P_{i}W_{i}P_{i}^{-1} = W_{i} + c_{i}.
\]
It is then a straightforward consequence of lemma \ref{lemma:matrix-differentials} that the difference of any two logarithmic connections adapted to the same parabolic structure is precisely an element in $H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}\right)^{\vee}\otimes K_{\field{C}\field{P}^{1}}\right)$.
\end{proof}
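The correspondence between $P_{i}$ and $c_{i}$ invoked in the proof can be made explicit in the $2\times 2$ case (a hypothetical check, assuming the eigenvalues are distinct): with $P=I+pE_{21}\in\mathrm{N}(2)$ and $W=\mathrm{diag}(\alpha_{1},\alpha_{2})$,

```latex
\[
PWP^{-1} = W + p\left(\alpha_{1}-\alpha_{2}\right)E_{21},
\]
```

so $c = p\left(\alpha_{1}-\alpha_{2}\right)E_{21}\in\mathfrak{n}(2)$, and the correspondence $p\mapsto c$ is bijective precisely when $\alpha_{1}\neq\alpha_{2}$.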
In particular, it follows that for every $\left(E_{N},[\mathbf{F}]\right)\in \curly{N}$, there is a bundle $\curly{A}_{[\mathbf{F}]}$ of affine spaces over the $\Aut\left(E_{N}\right)$-orbit $[\mathbf{F}]$, whose fiber over an orbit representative $\mathbf{F}\in[\mathbf{F}]$ corresponds to the space of logarithmic connections $\curly{A}_{\mathbf{F}}$, and which is an affine bundle for the vector bundle over $[\mathbf{F}]$ with fibers $H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}\right)^{\vee}\otimes K_{\field{C}\field{P}^{1}}\right)$. The key point of our construction is the existence of an equivariant action of the group $\Aut\left(E_{N}\right)$ on the bundle $\curly{A}_{[\mathbf{F}]} \to [\mathbf{F}]$, which is given for every $g\in\Aut\left(E_{N}\right)$ as
\[
A \mapsto g\cdot A = g A g^{-1} -dg g^{-1}\in\curly{A}_{\mathbf{F}'},\qquad A\in\curly{A}_{\mathbf{F}}, \quad \mathbf{F'} = g\cdot \mathbf{F}.
\]
In a similar manner there is an equivariant action in the vector bundle over $[\mathbf{F}]$, mapping any $\Phi \in H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}\right)^{\vee}\otimes K_{\field{C}\field{P}^{1}}\right)$ to
\[
g\cdot \Phi = g\Phi g^{-1} \in H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F'}\right)^{\vee}\otimes K_{\field{C}\field{P}^{1}}\right)
\]
inducing a vector space structure in its space of orbits, which we will denote by $\left[H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}\right)^{\vee}\otimes K_{\field{C}\field{P}^{1}}\right)\right]$. The latter serves as a model for the holomorphic cotangent space $T^{*}_{\left(E_{N},[\mathbf{F}]\right)}\curly{N}$.
Consequently, we can speak of an affine space $\left[\curly{A}_{\mathbf{F}}\right]$, whose elements are $\Aut\left(E_{N}\right)$-orbits in $\curly{A}_{[\mathbf{F}]}$, and whose underlying vector space is the orbit space $\left[H^{0}\left(\field{C}\field{P}^{1},\Par\ad\left(E_{N},\mathbf{F}\right)^{\vee}\otimes K_{\field{C}\field{P}^{1}}\right)\right]$. Therefore, there is also an induced affine bundle on $\curly{N}$ for the holomorphic cotangent bundle $T^{*}\curly{N}$, which we will denote simply by $\curly{A}$.
In conclusion, the presence of a Lie group of automorphisms in every bundle splitting $E_{N}$ implies that, for every point $\left(E_{N},[\mathbf{F}]\right)\in\curly{N}$, the logarithmic connections adapted to a pair $\left([\mathbf{F}],\curly{W}\right)$ make sense as an orbit under an action of the group $\Aut\left(E_{N}\right)$. In practice, it would be important to single out orbit representatives whose features are ``as best as possible'' in a suitable sense. With such motivation, we formulate the following definition.
\begin{definition}\label{def:gauge}
A \emph{holomorphic gauge} for an $\Aut\left(E_{N}\right)$-orbit $\left[A_{\mathbf{F}}\right] \in \left[\curly{A}_{\mathbf{F}}\right]$ of logarithmic connections on $E_{N}$ adapted to a pair $\left([\mathbf{F}],\curly{W}\right)$ is a choice of representatives
\[
\mathbf{F}\in[\mathbf{F}], \qquad A_{\mathbf{F}}\in\curly{A}_{\mathbf{F}}.
\]
\end{definition}
\begin{remark}\label{rem:centralizer}
The objective at present is to provide a canonical normalization of a logarithmic connection adapted to a parabolic structure on $E_{N}$, in terms of the normalization of the given quasi parabolic structure, which follows from corollary \ref{cor:normalization}. We emphasize that the determination of all the possible splitting types supporting stable parabolic structures is a delicate question at the heart of the general scheme that we propose to construct the moduli spaces $\curly{N}$. Therefore, we make the \emph{a priori} assumption that the pair $\left(E_{N},\mathbf{F}\right)$, not necessarily corresponding to a stable parabolic bundle, is such that
\[
d_{1} + \dots + d_{s-1} + 1 \leq n
\]
In such case, it readily follows that $\dim \left(\left(\mathrm{G}_{1}\cdot\dots\cdot \mathrm{G}_{s-1}\right)\rtimes \mathrm{N}_{N}\right) \leq n\dim \curly{F}_{r}$.
The action of $\Aut\left(E_{N}\right)$ on any affine bundle $\curly{A}_{[\mathbf{F}]}$ is never effective; any logarithmic connection adapted to an arbitrary pair $\left(E_{N},\mathbf{F}\right)$ is stabilized by $Z\left(\Aut\left(E_{N}\right)\right)$, which is isomorphic to $\field{C}^{*}$ since $B(r)\hookrightarrow \Aut\left(E_{N}\right)$ always, and corresponds to nonzero multiples of the identity.
For every $n$-tuple of flags $\mathbf{F}$, there is an associated subgroup $\mathrm{Stab}(\mathbf{F})$, and moreover, if a logarithmic connection $A_{\mathbf{F}}$ adapted to $\left(\mathbf{F},\curly{W}\right)$ is chosen in $\curly{A}_{\mathbf{F}}$, there is also an induced subgroup $\mathrm{Stab}\left(A_{\mathbf{F}}\right)$, in such a way that
\[
Z\left(\Aut\left(E_{N}\right)\right) \subset \mathrm{Stab}\left(A_{\mathbf{F}}\right)\subset \mathrm{Stab}(\mathbf{F}).
\]
Several important possibilities may occur.
The most relevant case corresponds to the equality $ \mathrm{Stab}(\mathbf{F}) = \mathrm{Stab}\left(A_{\mathbf{F}}\right) = Z\left(\Aut\left(E_{N}\right)\right)$. In such case, we conclude the following key observation: \emph{given any $\Aut\left(E_{N}\right)$-orbit $\left[A_{\mathbf{F}}\right]\in \left[\curly{A}_{\mathbf{F}}\right]$, there is a unique representative $A_{\mathbf{F}}\in\curly{A}_{\mathbf{F}}$, once an orbit representative $\mathbf{F}\in[\mathbf{F}]$ is chosen.}
\end{remark}
The next and final goal of this work is to study the existence of different natural holomorphic gauges of logarithmic connections for the affine spaces of orbits $\left[\curly{A}_{\mathbf{F}}\right]$.
\subsection{The Bruhat gauge} \label{subsec:Bruhat}
Let us now consider the fibration
\[
\mathrm{pr}_{N} : \curly{F}_{r} \to \curly{F}_{N}
\]
It follows from lemma \ref{lemma:factorization} and corollary \ref{cor:action} that $\mathrm{pr}_{N}$ is stratification-preserving, i.e.,
the image of every Bruhat cell $\curly{F}_{r,\Pi}\subset \curly{F}_{r}$ lies inside the Bruhat cell $\curly{F}_{N,[\Pi]}\subset\curly{F}_{N}$, where $[\Pi]\in W(r)/ W_{N}$, and, in fact, $\mathrm{pr}^{-1}_{N}\left(\curly{F}_{N,[\Pi]}\right)$ equals the union of all cells in $\curly{F}_{r}$ corresponding to elements in the class $[\Pi]$,
\[
\mathrm{pr}^{-1}_{N}\left(\curly{F}_{N,[\Pi]}\right) = \bigsqcup_{\Pi\in[\Pi]} \curly{F}_{r,\Pi}
\]
The action of $\Aut\left(E_{N}\right)$ on the different Bruhat strata of $\curly{F}^{1}_{r}\times\dots\times\curly{F}^{n}_{r}$ can be described in terms of $\mathrm{pr}_{N}$.
Since for every $z_{i}$, $\Aut\left(E_{N}\right)|_{z_{i}}\cap \mathrm{W}(r) = \mathrm{W}_{N}$, the action of $\Aut\left(E_{N}\right)$ preserves any stratum in $\curly{F}^{i}_{N}$, and moreover, even though its action on $\curly{F}^{i}_{r}$ does not preserve individual strata in general, it does preserve the collections of strata $\mathrm{pr}^{-1}_{N}\left(\curly{F}^{i}_{N,[\Pi]}\right)$.
\begin{definition}
Given any pair $\left([\mathbf{F}],\curly{W}\right)$, assume there exist $d_{1} + \dots + d_{s-1} + 1$ points $z_{i_{0}}, z_{i_{1}},\dots,z_{i_{d_{1} + \dots + d_{s-1}}}$ in $S$ whose corresponding flag components $F_{i_{0}}, F_{i_{1}},\dots, F_{i_{d_{1} + \dots + d_{s-1}}}$ for any representative $\mathbf{F}\in [\mathbf{F}]$ satisfy
\[
\textrm{pr}_{N}\left(F_{i_{0}}\right) \in \curly{F}^{i_{0}}_{\lambda_{0},[\Pi_{0}]},
\]
and
\[
\textrm{pr}_{N}\left(F_{d_{0} + \dots + d_{l-1} +j}\right) \in \curly{F}^{d_{0} + \dots + d_{l-1} +j}_{\lambda_{l},\left[\Pi_{l}\right]},\qquad 1\leq l \leq s - 1,\quad 1\leq j \leq d_{l}.
\]
A \emph{Bruhat gauge} for an orbit $\left[A_{\mathbf{F}}\right] \in \left[\curly{A}_{\mathbf{F}}\right]$ is a choice of representative in $\curly{A}_{\mathbf{F}}$, corresponding to any orbit representative $\mathbf{F}$ such that
\begin{itemize}
\item[(i)] $\textrm{pr}_{N}\left(F_{i_{0}}\right) = F_{\lambda_{0},[\Pi_{0}]}$,
\item[(ii)] $\textrm{pr}_{N}\left(F_{d_{0} + \dots + d_{l-1} +j}\right) = F_{\lambda_{l},[\Pi_{l}]},\qquad 1\leq l \leq s - 1,\quad 1\leq j \leq d_{l}$.
\end{itemize}
A Bruhat gauge is called \emph{strong} if the action of the Levi complement quotient $D_{N}/Z\left(\Aut\left(E_{N}\right)\right)$ on any representative $\mathbf{F}$ and its corresponding logarithmic connection $A_{\mathbf{F}}$ as above is effective (lemma \ref{lemma:action-D}).
\end{definition}
\begin{remark}
For a fixed admissible degree and rank, consider its associated evenly-split bundle, i.e., a splitting of the form $E_{N} = \mathcal{O}(m)^{r-p}\oplus \mathcal{O}(m + 1)^{p}$, where $\deg\left(E_{N}\right) = mr +p$, $0 \leq p <r$. For any given moduli space $\curly{N}$, let us recall the Zariski open set $\curly{N}_{0}$ considered in remark \ref{rem:N_0}. If $\curly{P}^{s}$ denotes the Zariski open set in
\[
\Aut\left(E_{N}\right)\cdot \left(\curly{F}^{1}_{r,\Pi_{0}}\times\dots\times \curly{F}^{n}_{r,\Pi_{0}}\right)\subset \curly{F}^{1}_{r}\times\dots\times \curly{F}^{n}_{r}
\]
whose points define stable parabolic structures with respect to the evenly-split bundle splitting $E_{N}$, then it follows that, by definition,
\[
\curly{N}_{0} \cong P\left(\Aut\left(E_{N}\right)\right)\setminus \curly{P}^{s}
\]
Recall the automorphism group isomorphism from remark \ref{rem:evenly-split}, and consider the subgroup $\mathrm{N}_{N}\times\mathrm{N}_{N}\subset \Aut\left(E_{N}\right)$. It readily follows that its action on $\curly{P}^{s}$ has trivial stabilizer. The Bruhat coordinates $L_{i} \in \mathrm{N}_{N}$ of the projections of the flag components $\mathrm{pr}_{N}(F_{i})$, $i = 1,\dots, n$, have the explicit form
\[
L_{i} = \begin{pmatrix}
\mathrm{Id}_{(r-p) \times (r-p)} & 0\\
M_{i} & \mathrm{Id}_{p \times p}
\end{pmatrix}
\]
Hence, it is possible to normalize uniquely a point $\mathbf{F} \in \curly{P}^{s}$, under the action of $\mathrm{N}_{N}\times\mathrm{N}_{N}$, in such a way that the Bruhat coordinates of the projections of the flag components $\mathrm{pr}_{N}(F_{n-1})$ and $\mathrm{pr}_{N}(F_{n})$, over $z_{n-1} =0, z_{n} = \infty$ take the form
\[
L_{n-1} = L_{n} = \mathrm{Id}_{r \times r}
\]
leaving only the residual action of the subgroup $\mathrm{D}_{N}\subset\Aut\left(E_{N}\right)$.
An element in $\mathrm{D}_{N}$ is given by a block-diagonal matrix
\[
D =\begin{pmatrix}
D_{1} & 0\\
0 & D_{2}
\end{pmatrix}
\]
and its action on $\mathrm{pr}_{N}\left(F_{i}\right)$ is determined as
\[
\Ad(D) \left(L_{i}\right) = \begin{pmatrix}
\mathrm{Id} & 0\\
D_{2}M_{i}D_{1}^{-1} & \mathrm{Id}
\end{pmatrix}.
\]
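As a purely numerical illustration (not part of the argument), the displayed conjugation identity can be checked directly: conjugating the unipotent Bruhat coordinate $L_{i}$ by a block-diagonal $D$ replaces the lower-left block $M_{i}$ by $D_{2}M_{i}D_{1}^{-1}$. The rank $r = 5$ and splitting parameter $p = 2$ below are arbitrary choices made for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
r, p = 5, 2  # hypothetical rank and splitting parameter, so r - p = 3

# Bruhat coordinate L_i: unipotent lower block-triangular matrix
M = rng.standard_normal((p, r - p))
L = np.block([[np.eye(r - p), np.zeros((r - p, p))],
              [M, np.eye(p)]])

# block-diagonal element D of D_N (shifted by 3*Id to ensure invertibility)
D1 = rng.standard_normal((r - p, r - p)) + 3 * np.eye(r - p)
D2 = rng.standard_normal((p, p)) + 3 * np.eye(p)
D = np.block([[D1, np.zeros((r - p, p))],
              [np.zeros((p, r - p)), D2]])

# Ad(D)(L) = D L D^{-1} is again unipotent, with lower-left block D2 M D1^{-1}
AdDL = D @ L @ np.linalg.inv(D)
expected = np.block([[np.eye(r - p), np.zeros((r - p, p))],
                     [D2 @ M @ np.linalg.inv(D1), np.eye(p)]])
assert np.allclose(AdDL, expected)
```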
Let us now consider the remaining Bruhat coordinates $L_{1},\dots, L_{n-2}$. If $n \geq 4$, there exists a Zariski open subset $\curly{P}'^{s}$ of $\curly{P}^{s}$, consisting of $n$-tuples for which at least two of the blocks $M_{1},\dots, M_{n-2}$ have maximal rank (if $r = 2$, only one such block is needed). It readily follows that $\curly{P}'^{s}$ is precisely the subset of $\curly{P}^{s}$ of points $\mathbf{F}$ for which
\[
Z\left(\Aut\left(E_{N}\right)\right) = \mathrm{Stab}(\mathbf{F})
\]
The corresponding quotient
\[
\curly{N}'_{0} \cong P\left(\Aut\left(E_{N}\right)\right)\setminus \curly{P}'^{s}
\]
is a Zariski open set of $\curly{N}$. Hence, we have concluded the following result.
\end{remark}
\begin{corollary}\label{cor:Bruhat}
Every logarithmic connection on a stable parabolic bundle $\left(E_{N}, [\mathbf{F}]\right)$ in the Zariski open set $\curly{N}_{0}\subset \curly{N}$ (remark \ref{rem:N_0})
admits a Bruhat gauge. If $n \geq 4$, strong Bruhat gauges exist in the Zariski open set $\curly{N}'_{0}\subset \curly{N}$.
\end{corollary}
\subsection{The Riemann-Hilbert gauge}
For the sake of completeness of the exposition, we also discuss a reinterpretation of a classical result of Plemelj (cf. \cite{GS99}), regarding the solvability of the Riemann-Hilbert problem (also known as Hilbert's 21st problem) in the context of holomorphic gauge-fixing of logarithmic connections, for the special case of semisimple residues adapted to a parabolic structure. No claim of originality is made here.
Plemelj's original attempt at solving the Riemann-Hilbert problem (published in 1908 and summarized in the book \cite{Ple64}), i.e., the construction of a Fuchsian system over the Riemann sphere with prescribed monodromy representation, relied on the implicit assumption that at least one of the residue matrices was semisimple, and was consequently unsuited for the general case, which includes Bolibrukh's counterexamples \cite{Bo90}. This is more easily seen by adopting R\"ohrl's modern approach \cite{Rohrl57}, where the Riemann-Hilbert problem is solved by constructing a suitable holomorphic vector bundle equipped with a logarithmic connection, and then showing that the bundle map to the corresponding trivial bundle carries the connection into a Fuchsian system with the same monodromy.
Since all logarithmic connections adapted to a given parabolic structure on a bundle $E_{N}$ have semisimple residues and fixed eigenvalues, it follows that the Riemann-Hilbert problem is always solvable in such case. We claim that such solution admits the interpretation of a choice of holomorphic gauge for the logarithmic connection.
Let us consider a local expression \eqref{eq:germ-0} over $\field{C}$ for a given logarithmic connection $A$ adapted to $\left(\mathbf{F},\curly{W}\right)$. For $|z| > R$, with $R > 0$ sufficiently large, $A(z)$ can be expressed in the form
\begin{equation}\label{eq:expansion-infty}
\zeta^{-N}\left(\frac{A_{n} + N}{\zeta} + f_{1}(\zeta) \right)\zeta^{N} d\zeta
\end{equation}
where $\zeta = 1/z$ and $f_{1}(\zeta)$ is holomorphic.
\begin{lemma}\label{lemma:normalization-infty}
There exists an element $g\in \Aut\left(E_{N}\right)$ such that $g\cdot A$ expands near $z_{n} = \infty$ as
\[
\left(\frac{W_{n} + N'}{\zeta} + f'_{1}(\zeta)\right) d\zeta,
\]
where $N' = \Ad\left(\Pi^{-1}_{n}\right)(N)$. Moreover, such normalization for $A$ is essentially unique.
\end{lemma}
\begin{proof} Consider a ``multivalued'' local germ $Y(\zeta)$, satisfying the equation $dY + A_{\infty}(\zeta) Y = 0$ (where $A_{\infty}(\zeta)$ is the local matrix-valued meromorphic form \eqref{eq:expansion-infty}), of the form
\[
Y(\zeta) = \zeta^{-N}\Psi(\zeta)\zeta^{-W_{n}}
\]
and $\Psi(\zeta) = \sum_{k=0}^{\infty} C(k)\zeta^{k}$ is a holomorphic $\mathrm{GL}(r,\field{C})$-valued germ near $z_{n} = \infty$. In particular, $C(0)\in\mathrm{GL}(r,\field{C})$ satisfies $C(0)W_{n}C(0)^{-1} = A_{n}$. Consider the unique ``inverse'' Bruhat decomposition for $C(0)$,
\[
C(0) = P_{n}\Pi_{n} L_{n},
\]
with
\[
P_{n}\in \mathrm{P}_{N},\qquad L_{n}\in \mathrm{N}^{c}_{N,\left[\Pi_{n}^{-1}\right]},
\]
whose existence and uniqueness, for a given representative of the class $[\Pi_{n}]$ in $\mathrm{W}_{N}\setminus \mathrm{W}(r)/\mathrm{W}_{N}$, follow from an argument similar to lemmas \ref{lemma:semidirect} and \ref{lemma:factorization}.
The group $\Aut\left(E_{N}\right)$ acts on $Y$ by restriction to a neighborhood of $z_{n} = \infty$, via left multiplication, preserving the local form of such $Y$. The statement of the lemma is equivalent to finding an automorphism $g = \{g_{0}(z),g_{1}(\zeta)\}$ such that $g_{1} Y$ can be furthermore expressed in the form
\begin{equation}\label{eq:solution-normalization}
\left(\Pi_{n} +\sum_{k=1}^{\infty}\widetilde{C}(k)\zeta^{k}\right) \zeta^{-(N' + W_{n})},
\end{equation}
where $N' = \Ad\left(\Pi^{-1}_{n}\right)(N)$. Such $g$ can be constructed inductively, in terms of the holomorphic $\zeta$-germ $g_{1}$, as a product of monomials of fixed degree; the only algebraic requirement to be satisfied is that the elements of every $jk$-block of
\[
\Psi'(\zeta) = g_{1}(\zeta)\Psi(\zeta),
\]
for $j > k$, have a zero of order at least $n_{j}-n_{k}$, to ensure that the product $\zeta^{-N} \Psi' \zeta^{N'}$ is a holomorphic germ in $\zeta$. Observe that the remaining residues of $g\cdot A$ take the following form, in terms of $g$ and the residues $\{A_{i}\}$ of $A$,
\[
\left(g\cdot A\right)_{i} = \Ad\left(g(z_{i})\right)\left(A_{i}\right),\qquad i=1,\dots,n-1.
\]
Finally, the uniqueness of such normalization follows from the uniqueness of fundamental solutions of the local system determined by $A$, normalized at $z_{n} = \infty$ as \eqref{eq:solution-normalization}.
\end{proof}
Thus, the local form \eqref{eq:germ-0} of $g\cdot A$ over $\field{C}$ determines a matrix-valued meromorphic differential, with simple poles over $S$, and residues adapted to $\left(g\cdot\mathbf{F},\curly{W}\right)$. Since the action of $\Aut\left(E_{N}\right)$ on logarithmic connections clearly does not alter the corresponding monodromy representation, we conclude the following result.
\begin{corollary}\label{cor:Riemann-Hilbert}
The representative $g\cdot A$ described in lemma \ref{lemma:normalization-infty} determines a Fuchsian system on $\field{C}\field{P}^{1}$, with the same monodromy representation of $A$. Thus, the Riemann-Hilbert gauge is a representative $A_{\mathbf{F}}\in \curly{A}_{\mathbf{F}}$ solving the Riemann-Hilbert problem for such monodromy representation.
\end{corollary}
\subsection{Specialization to rank 2} Let us consider the case when $r = 2$, so there is essentially one partition $2 = 1 + 1$. Then, we have that $\curly{F}_{2} \cong \field{C}\field{P}^{1}$, $\mathrm{W}(2) = \field{Z}_{2} = \langle \Pi_{0}\rangle$, and the Bruhat stratification of $\curly{F}_{2} = \curly{F}_{2,\Pi_{0}}\sqcup \curly{F}_{2,\Pi^{2}_{0}}$ corresponds to the standard cell decomposition
\[
\field{C}\field{P}^{1} = \field{C}\sqcup\{\infty\}
\]
It readily follows that, for every $i = 1,\dots,n$, $\infty\in\curly{F}^{i}_{2}$ is always a fixed point of $\Aut\left(E_{N}\right)$.
We will now give an explicit description of the Bruhat gauge in the generic case when
any orbit representative possesses $d_{1} + \dots + d_{s-1} + 1$ components such that $F_{i_{j}} \in \curly{F}^{i_{j}}_{2,\Pi_{0}}\cong \field{C}$ (the essential difference between the $n$-tuples in $\curly{F}^{1}_{2,\Pi_{0}}\times\dots\times\curly{F}^{n}_{2,\Pi_{0}}$ and those in its complementary locus in $\curly{F}^{1}_{2}\times\dots\times\curly{F}^{n}_{2}$ is that the latter possess components corresponding to $\infty$, that are fixed under the $\Aut\left(E_{N}\right)$-action). Therefore, the corresponding residue matrices of $A_{\mathbf{F}}$ take the form $A_{i} = B_{i}W_{i} B_{i}^{-1}$, where
\[
B_{i} = \begin{pmatrix}
1 & 0 \\
b_{i} & 1
\end{pmatrix}
\begin{pmatrix}
0 & 1 \\
1 & 0
\end{pmatrix}
\begin{pmatrix}
1 & 0 \\
c_{i} & 1
\end{pmatrix}
\]
Therefore,
\[
A_{i} = \begin{pmatrix}
\alpha_{i2} & 0 \\
\beta_{i}b_{i} & \alpha_{i1}
\end{pmatrix}
+ \beta_{i}c_{i}\begin{pmatrix}
- b_{i} & 1 \\
-b^{2}_{i} & b_{i}
\end{pmatrix}
\]
where $\beta_{i} = \alpha_{i2} - \alpha_{i1}$. Thus, a Bruhat gauge in $\curly{N}_{0}$ corresponds to the normalization $b_{i_{0}} = b_{i_{1}} = \dots = b_{i_{d_{1} + \dots + d_{s-1}}} =0$. Moreover, a strong Bruhat gauge would exist whenever there is at least one $b_{i} \neq 0$, for $i \neq i_{0},\dots, i_{d_{1} + \dots + d_{s-1}}$. Hence, an explicit strong Bruhat gauge could be given in such case by normalizing the parameter $b_{i}$ to $b_{i} = 1$. The action of the subgroup $\mathrm{G}_{1}\cdot\dots\cdot \mathrm{G}_{s-1}\rtimes \mathrm{N}_{N}\subset\Aut\left(E_{N}\right)$ on the residues $\left\{A_{i_{j}}\right\}$ leaves the parameters $c_{i_{0}},\dots,c_{i_{d_{1} + \dots + d_{s-1}}}$ unchanged.
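As a symbolic sanity check (not part of the argument), one can verify two convention-independent facts about a residue of the form $A_{i} = B_{i}W_{i}B_{i}^{-1}$: its spectrum is $\{\alpha_{i1},\alpha_{i2}\}$ for all values of $b_{i}, c_{i}$, and at $c_{i} = 0$ it reduces to the lower-triangular first summand displayed above. The eigenvalue ordering $W_{i} = \operatorname{diag}(\alpha_{i1},\alpha_{i2})$ is an assumption of this sketch; sign conventions for $c_{i}$ may differ from the text.

```python
import sympy as sp

a1, a2, b, c = sp.symbols('alpha1 alpha2 b c')
beta = a2 - a1

# B_i as the product of the three matrices displayed above
B = (sp.Matrix([[1, 0], [b, 1]])
     * sp.Matrix([[0, 1], [1, 0]])
     * sp.Matrix([[1, 0], [c, 1]]))
W = sp.diag(a1, a2)  # assumed ordering of the eigenvalue pair
A = sp.simplify(B * W * B.inv())

# conjugation preserves trace and determinant, so the spectrum of the
# residue is {alpha_{i1}, alpha_{i2}} for all values of b_i, c_i
assert sp.simplify(A.trace() - (a1 + a2)) == 0
assert sp.simplify(A.det() - a1 * a2) == 0

# at c = 0 the residue is lower triangular, matching the first summand
assert sp.simplify(A.subs(c, 0) - sp.Matrix([[a2, 0], [beta * b, a1]])) == sp.zeros(2, 2)
```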
On the other hand, a Riemann-Hilbert gauge corresponds to a choice of logarithmic connection representative $A_{\mathbf{F}}\in \curly{A}_{\mathbf{F}}$, whose residues $A_{1},\dots, A_{n-1}$ over $z_{1},\dots,z_{n-1}\in \field{C}\subset\field{C}\field{P}^{1}$ satisfy
\[
\sum_{i = 1}^{n-1} A_{i} = - \left(W_{n} + N'\right).
\]
There is another important relation between the Riemann-Hilbert gauge for rank 2 and odd degree, and the classical theory of analytic ODEs, for any choice of Schubert-Bruhat stratum in $\curly{F}_{2}^{1}\times\dots\times\curly{F}_{2}^{n}$. For instance, in the large stratum $\curly{F}^{1}_{2,\Pi_{0}}\times\dots\times\curly{F}^{n}_{2,\Pi_{0}}$, giving rise to $\curly{N}_{0}$, the equation
\[
\sum_{i=1}^{n}\tr\left(A_{i}\right) + \deg\left(E_{N}\right) = 0
\]
implies that, if the Riemann-Hilbert gauge is fixed, the complex parameters $c_{1},\dots, c_{n-1}$ will satisfy a system of 3 independent linear equations. It is important to remark on the similarity between such parameters and the accessory parameters of the classical Fuchsian uniformization theory. Over $\curly{N}_{0}$, such equations take the form
\[
\sum_{i=1}^{n-1}\beta_{i}c_{i} = 0,\qquad\qquad 2\sum_{i=1}^{n-1}\beta_{i}b_{i}c_{i} = 1 + \sum_{i=1}^{n}\beta_{i},
\]
\[
\sum_{i=1}^{n-1}\beta_{i}b^{2}_{i}c_{i} = \sum_{i=1}^{n-1}\beta_{i}b_{i}.
\]
\begin{remark}\label{remark:even-degree}
It should be remarked that similar formulae hold for even degree, evenly-split bundles (which are twists of the trivial bundle), where there is a direct correspondence between logarithmic connections adapted to a parabolic structure and the corresponding Fuchsian systems, although
\[
\Aut\left(E_{N}\right)\cong \textrm{GL}(2,\field{C}).
\]
In such case, the Bruhat and Riemann-Hilbert gauges degenerate into a common gauge. The group $P\left(\Aut\left(E_{N}\right)\right)$ acts freely on $\curly{F}^{1}_{2}\times\dots\times\curly{F}^{n}_{2}$, and the flags over 3 different base points, say $z_{n-2}, z_{n-1}, z_{n}$, can be normalized, respectively, to the values $0,1,\infty\in\field{C}\field{P}^{1}\cong \curly{F}_{2}$ \cite{HH16}.
\end{remark}
\noindent \textbf{Acknowledgments.}
I would like to kindly thank CIMAT (Mexico) for its generosity during the development of the present work.
\bibliographystyle{amsalpha}
1,477,468,751,020 | arxiv | \section{Introduction} \label{intro.st}
The nonparametric estimation, based on the observation of $n$
i.i.d. copies $X_{1}$,~\dots, $X_{n}$, of the distribution of a
continuous random variable under a monotonicity constraint, has
received a great deal of attention in
the past decades, see \cite{BW06} for a review. The most studied
constraint is the monotonicity of the density function. It is
well-known that the nonparametric maximum likelihood estimator of a
decreasing density function
over $[0,\infty)$ is the Grenander estimator defined as the left-continuous
slope of the least concave majorant of the empirical distribution
function of $X_{1}$,~\dots, $X_{n}$. This estimator can be easily
implemented using the PAVA (pool adjacent violators algorithm) or a
similar device, see \cite{BBBB}.
Another well studied constraint is the monotonicity of the first
derivative of the density, such that the density function is assumed to
be convex (or concave) over a given interval. It was shown by
\cite{GJW01} that both the least squares estimator and the
nonparametric maximum likelihood estimator under the convexity
constraint exist and are unique. However, although a precise
characterisation of these estimators is given in that paper, their
practical implementation is a non-trivial issue: it requires
sophisticated iterative algorithms that use a mixture
representation, such as the {\it support reduction algorithm} described
in \cite{GJW08}. The nonparametric maximum likelihood of a log-concave
density function (i.e., a density function $f$ such that $\log(f)$
is a concave function) was introduced in \cite{R06} and algorithmic
aspects were treated in \cite{R07} and in \cite{DHR07}, where an
algorithm similar to the support reduction algorithm is defined.
Recently, the problem of estimating a discrete probability mass
function under a monotonicity constraint has attracted attention:
\cite{JaW09} considered the nonparametric
estimation of a monotone distribution and \cite{BJR11} considered the
case of a log-concave distribution.
In this paper, we consider the
nonparametric estimation of a discrete distribution on $\mathbb{N}$ under the convexity
constraint. This problem has not yet been considered in the
literature, although it has several applications, such as the estimation
of species abundance distribution in ecology. In this field,
the terms ``nonparametric methods'' often refer to finite mixtures of parametric
distributions where only the mixing distribution is inferred in a
nonparametric way, see e.g. (\cite{BoK06}, \cite{BDK05-StatMethAppli},
\cite{ChS04}).
We study the least squares estimator of a discrete
distribution on $\mathbb{N}$ under the constraint of convexity.
First, we prove that this estimator exists and is unique, and that it
always outperforms the classical empirical estimator in terms of the
$\ell_{2}$-distance. Then, we consider computational issues.
Similar to the continuous case, we prove that
a representation of convex discrete
distributions can be given in terms of a --~possibly infinite~--
mixture of triangular
functions on ${\mathbb N}$, and, based on this characterization, we derive an algorithm that
provides the least squares estimate, although both the number of components
in the mixture and the support of the estimator are unknown. This algorithm is an adaptation to our problem of the support reduction algorithm in \cite{GJW08}. Finally, we
assess the performance of the least squares estimator under the convexity
constraint through a simulation study.
The paper is organized as follows. Theoretical properties of the constrained least squares estimator are given in Section~\ref{sectionLSE}. Section \ref{TriBasisAlgo.st} is devoted to computational issues. A simulation study is reported in Section~\ref{simul.st}, and the proofs are postponed to Section~\ref{Proofs.st}.
\paragraph{Notation.}
Let us define some notation that will be used throughout the paper.
\begin{itemize}
\item ${\cal K}$ is the set of convex functions $f$ on ${\mathbb N}$
such that $\lim_{i\to\infty}f(i)=0$. We recall that a discrete
function $f:{\mathbb N}\to\mathbb{R}$ is convex if and only if it satisfies
\begin{equation}\label{def: convex(taux)}
f(i)-f(j)\geq (i-j)\big(f(j+1)-f(j)\big)
\end{equation}
for all $i$ and $j$ in ${\mathbb N}$, or equivalently, if and only
if
\begin{equation}\label{def: convex(slope)}
f(i)-f(i-1)\leq f(i+1)-f(i)
\end{equation}
for all $i\geq 1$. In particular, any $f\in{\cal K}$ has to be non-negative,
non-increasing and strictly decreasing on its support.
\item ${\cal C}$ is the set of all convex probability mass functions
on ${\mathbb N}$, i.e., the set of functions $f \in {\cal K}$
satisfying $\sum_{i\geq 0}f(i)=1$.
\end{itemize}
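For illustration, the slope characterization \eqref{def: convex(slope)} is straightforward to test in code: a sequence is convex exactly when its consecutive increments are non-decreasing. The following sketch uses hypothetical integer-valued sequences so that the comparisons are exact.

```python
def is_convex(f):
    """Check the slope characterization: the increments
    f(i+1) - f(i) must be non-decreasing in i."""
    slopes = [f[i + 1] - f[i] for i in range(len(f) - 1)]
    return all(s1 <= s2 for s1, s2 in zip(slopes, slopes[1:]))

# a convex, non-increasing sequence (padded with zeros, as for f in K)
assert is_convex([4, 3, 2, 1, 0, 0])
# slopes -3, 2, -1, ... violate monotonicity, so this one is not convex
assert not is_convex([4, 1, 3, 2, 0, 0])
```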
\section{The constrained LSE of a convex discrete distribution} \label{sectionLSE}
\subsection{The main result}\label{mainresult.st}
Suppose that we observe $n$ i.i.d. random variables $X_1,\dots,X_n$
that take values in $\mathbb{N}$, and that the common probability mass
function $p_0$ of these variables is convex on $\mathbb{N}$ with an unknown
support. Based on these observations, we aim to build an estimator of
$p_0$ that satisfies the convexity constraint.
For this task, define the empirical estimator $\tilde p_n$ of $p_0$ by
$$\tilde p_n(j)=\frac{1}{n} \sum_{i=1}^n I_{(X_i=j)}$$
for all $j\in\mathbb{N}$, and consider the criterion function
$$Q_n(f)=\frac{1}{2}\sum_{i\geq 0}f^2(i)-\sum_{i\geq 0}f(i)\tilde p_n(i)$$
for all functions $f:\mathbb{N}\to\mathbb{R}$. The empirical estimator $\tilde p_n$
may be non-convex so in order to build a convex estimator, we minimize
the criterion function $Q_n$ over the set ${\cal C}$. The minimizer
(which exists according to Theorem \ref{theo: LSE} below) is called
the constrained least squares estimator (LSE) of $p_0$ because it also
minimizes the least squares criterion
$$\frac{1}{2}\sum_{i\geq 0}\big(f(i)-\tilde p_n(i)\big)^2
=Q_n(f)+\frac{1}{2}\sum_{i\geq 0}\tilde p_n^2(i).$$
It is clear that in the case where $\tilde p_n$
is convex, the constrained LSE coincides with $\tilde p_n$. On the
other hand, in the case where $\tilde p_n$ is non-convex, the
constrained LSE outperforms the empirical estimator $\tilde p_n$, as
detailed in Section \ref{sec: empir/constaint}.
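The definitions above can be made concrete with a small numerical sketch (the sample below is an arbitrary choice): one computes $\tilde p_n$ by counting, and evaluates $Q_n$; note that over all square-summable $f$ the criterion is minimized at $f = \tilde p_n$ itself, which is why the minimization is restricted to ${\cal C}$.

```python
from collections import Counter

def empirical_pmf(sample, support_max):
    """Empirical estimator \\tilde p_n on {0, ..., support_max}."""
    n = len(sample)
    counts = Counter(sample)
    return [counts[j] / n for j in range(support_max + 1)]

def Q_n(f, p_tilde):
    """Criterion Q_n(f) = (1/2) sum_i f(i)^2 - sum_i f(i) p_tilde(i)."""
    return 0.5 * sum(v * v for v in f) - sum(a * b for a, b in zip(f, p_tilde))

sample = [0, 0, 1, 0, 2, 1, 0, 3]
p_tilde = empirical_pmf(sample, 5)
# Q_n(f) + (1/2) sum p_tilde^2 is half the squared l2 distance to p_tilde,
# so the unconstrained minimizer of Q_n is p_tilde itself
assert Q_n(p_tilde, p_tilde) <= Q_n([1/6] * 6, p_tilde)
```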
The existence and uniqueness of the constrained LSE of $p_{0}$ over ${\cal C}$ is shown in the
following theorem. It is proved that $\widehat{p}_{n}$ is the
minimizer of $Q_{n}$ over the set ${\mathcal K}$, and has a finite
support. We will denote by $\widehat{s}_{n}$, respectively $\widetilde{s}_{n}$,
the maximum of the support of $\widehat{p}_{n}$, respectively
$\widetilde{p}_{n}$.
\begin{theo}\label{theo: LSE}
There exists a unique $\hat p_n\in{\cal C}$ such that
$$ Q_n(\hat p_n)=\inf_{p\in{\cal C}}Q_n(p) = \inf_{p\in{\cal
K}}Q_n(p).$$
Moreover, the support of $\hat p_n$ is finite, and $\widehat{s}_{n}
\geq \widetilde{s}_{n}$.
\end{theo}
\subsection{Comparison between constrained and unconstrained estimators}\label{sec: empir/constaint}
In Theorem~\ref{theo:
empir/constraint}, we show the benefits of using the constrained LSE
rather than the (unconstrained) empirical estimator $\tilde
p_n$, in
terms of the $\ell_{2}$-loss. Specifically, the constrained LSE is closer to the
unknown underlying distribution $p_0$ than is the unconstrained
estimator $\tilde p_n$. Moreover, we prove that this happens with a
strictly positive probability (indeed, with probability at least 1/2) whenever $p_0$ is not strictly convex on its support.
\begin{theo}\label{theo: empir/constraint}
Let $p_{0}$, $\widetilde{p}_{n}$ and $\widehat{p}_{n}$ be defined as
in Section~\ref{mainresult.st}. We have the following results:
\begin{equation}
\sum_{j\geq 0}\big(p_{0}(j)-\hat p_n(j)\big)^2\leq \sum_{j\geq
0}\big(p_{0}(j)-\tilde p_n(j)\big)^2, \label{eq.theo2}
\end{equation}
with a strict inequality if $\tilde p_n$ is non-convex.
Assume that there exist $i,j\in\mathbb{N}$ such that $j\geq i+2$, $p_0(i)>0$, and $p_0$ is linear over $\{i,\dots,j\}$. Then,
\begin{equation}\label{eq: tilde p non-convex}
\liminf_{n\to\infty}\P\Big(\tilde p_n\text{ is non-convex}\Big)\geq 1/2,
\end{equation}
and
\begin{equation}\label{liminf.eq}\liminf_{n\to\infty}\P\left(\sum_{j\geq 0}\big(p_0(j)-\hat p_n(j)\big)^2< \sum_{j\geq 0}\big(p_0(j)-\tilde p_n(j)\big)^2\right)\geq 1/2.
\end{equation}
\end{theo}
\paragraph{Remark:} As we shall see in the proof of
Theorem~\ref{theo: empir/constraint}, Equation~(\ref{eq.theo2}) also
holds with $p_{0}$ replaced by any $q \in {\mathcal K}$ that belongs to $l_2$, i.e., that
satisfies $\sum_{j} q^{2}(j) < \infty$.
Now, we consider the estimation of some characteristics of the
distribution $p_{0}$, namely the expectation, the centered moments and
the probability at 0. As estimators for these characteristics, we
naturally consider similar characteristics of the constrained and the
unconstrained estimators. Theorem~\ref{moments.th} states that the
distributions
$\widetilde{p}_{n}$ and $\widehat{p}_{n}$ have the same expectation, but
the centered moments of the distribution $\widetilde{p}_{n}$ are lower than those of the
distribution $\widehat{p}_{n}$. In particular, the variance of the distribution of $\widehat{p}_{n}$ is greater than the variance
of $\widetilde{p}_{n}$. Moreover, the constrained estimator $\hat
p_{n}(0)$ is greater than or equal to the unconstrained estimator
$\tilde p_{n}(0)$. The performance of $\hat p_{n}$ is compared with
that of $\tilde p_{n}$ through simulation studies in Section \ref{simul.st}.
\begin{theo}\label{moments.th}
Let $\widetilde{p}_{n}$ and $\widehat{p}_{n}$ be defined as
in Section~\ref{mainresult.st}. We have for all $u \geq 1$, and
$0 \leq a \leq \widehat{s}_{n}$
\begin{equation}
\sum_{i=1}^{\widetilde{s}_{n}} |i-a|^{u} \widetilde{p}_{n}(i) \leq
\sum_{i=1}^{\widehat{s}_{n}} |i-a|^{u} \widehat{p}_{n}(i).
\label{moments.ineq}
\end{equation}
Moreover, $\sum_{i=1}^{\widetilde{s}_{n}} i
\widetilde{p}_{n}(i)=\sum_{i=1}^{\widehat{s}_{n}} i
\widehat{p}_{n}(i)$ and $\hat p_{n}(0)\geq \tilde p_{n}(0)$.
\end{theo}
It can be shown that similar results hold for constrained estimators of
a convex density function, where $\tilde p_{n}$ is replaced by an
unconstrained estimator of the density function and $\hat p_{n}$ is
replaced by the corresponding constrained estimator.
By contrast, in the case of a discrete log-concave distribution, it is shown in \cite{BJR11} (see their Equations~(3.5) and~(3.6)) that the
moments of the constrained
maximum likelihood estimator distribution are smaller than those of the
empirical distribution. These authors refer to similar results for the maximum
likelihood estimator of a continuous log-concave density.
\section{Implementing the constrained LSE}\label{TriBasisAlgo.st}
\subsection{More on convex discrete functions}
The aim of this section is to prove that any $f\in{\cal K}$ is a
combination of the triangular functions $T_j$ defined below, and that
the combination is unique. This compares with Propositions 2.1 and 2.2
in \cite{BW06}, which deal with the case of convex (and more
generally, $k$-monotone) density functions on $(0,\infty)$. For every
integer $j\geq 1$, we define the $j$-th triangular function $T_j$ on
$\mathbb{N}$ by
\[T_j(i)=\begin{cases}
\displaystyle\frac{2(j-i)}{j(j+1)}\text{ for all }i\in\{0,\dots,j-1\}\\
0 \text{ for all integers } i\geq j.
\end{cases}\]
It should be noticed that $T_j$ is normalized in such a way that it is a probability mass function, i.e., $T_j(i)\geq 0$ for all $i$ and
$$\sum_{i\geq 0}T_j(i)=1.$$
Moreover, $T_j$ is monotone non-increasing and convex on
$\mathbb{N}$. Hereafter, we denote by ${\cal M}$ the convex cone of
non-negative measures on $\mathbb{N}\backslash\{0\}$. We denote by $\pi_{j}$,
for $j\in \mathbb{N}\backslash\{0\}$, the components of $\pi \in {\cal M}$.
\begin{theo}\label{theo: mixture k}
Let $f:\mathbb{N}\to [0,\infty)$ such that $\lim_{i\to\infty}f(i)=0.$
\begin{enumerate}
\item We have $f\in{\cal K}$ if and only if there exists $\pi\in{\cal M}$ such that
\begin{equation}\label{eq: mixture k}
f(i)=\sum_{j\geq i+1} \pi_jT_j(i)\text{ for all }i\geq 0.
\end{equation}
\item Assume $f\in{\cal K}$. Then, $\pi$ in (\ref{eq: mixture k}) is uniquely defined by
\begin{equation}\label{eq: pi/p}
\pi_j=\frac{j(j+1)}{2}\big( f(j+1)+f(j-1)-2f(j)\big)\text{ for all }j\geq 1.
\end{equation}
\item Assume $f\in{\cal K}$. Then, $\pi$ is a probability measure over $\mathbb{N}\backslash\{0\}$ if and only if $f$ is a probability mass function.
\end{enumerate}
\end{theo}
Let us note that according to \eqref{eq: pi/p}, $\pi$ puts mass
at point $j$ if, and only if, $f$ changes slope at point
$j$. Moreover, denoting by $s$ the maximum of the support of $f$ in
the case where this support is not empty, we see that the greatest
point where $f$ changes slope is $s+1$, since the left-hand slope
of $f$ at this point, $f(s+1)-f(s)$, is strictly negative whereas the right-hand
slope, $f(s+2)-f(s+1)$, is zero. Therefore, in the case where the support of $f$ is
not empty, the greatest point where $\pi$ puts mass is
$s+1$. Obviously, in case $f(j)=0$ for all $j\geq 0$, we also have
$\pi_j=0$ for all $j\geq 1$.
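The mixture representation above is easy to check numerically: the weights $\pi_j$ are the scaled second differences of $f$, they put mass exactly where $f$ changes slope, and plugging them back into the mixture recovers $f$. A minimal Python sketch, where the convex pmf $f = \frac12 T_1 + \frac12 T_3$ is a toy example of our choosing:

```python
def T(j, i):
    # triangular pmf: T_j(i) = 2(j-i)/(j(j+1)) on {0,...,j-1}, 0 elsewhere
    return 2.0 * (j - i) / (j * (j + 1)) if 0 <= i < j else 0.0

# f = 0.5*T_1 + 0.5*T_3 is a convex, non-increasing pmf (our toy example)
f = lambda i: 0.5 * T(1, i) + 0.5 * T(3, i)

# Equation (eq: pi/p): pi_j = j(j+1)/2 * (f(j+1) + f(j-1) - 2 f(j))
pi = {j: j * (j + 1) / 2.0 * (f(j + 1) + f(j - 1) - 2.0 * f(j)) for j in range(1, 6)}

# pi puts mass exactly where f changes slope, and it is a probability measure
assert abs(pi[1] - 0.5) < 1e-12 and abs(pi[3] - 0.5) < 1e-12
assert abs(sum(pi.values()) - 1.0) < 1e-12

# Equation (eq: mixture k): the mixture representation recovers f
for i in range(6):
    assert abs(sum(pi[j] * T(j, i) for j in pi if j >= i + 1) - f(i)) < 1e-12
```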
\subsection{Algorithm}
Define the criterion function
$$\Psi_n(\pi)=\frac{1}{2}\sum_{i\geq 0}\left(\sum_{j\geq i+1}\pi_jT_j(i)\right)^2-\sum_{i\geq 0}\tilde p_n(i)\sum_{j\geq i+1}\pi_jT_j(i)$$
for all $\pi\in{\cal M}$. The reason why we define such a criterion
function is that
$\Psi_n(\pi)=Q_n(p)$
for all $p\in{\cal K}$ and $\pi\in{\cal M}$ satisfying (\ref{eq:
mixture k}) with $f$ replaced by $p$.
The constrained LSE of $p_0$ is the unique minimizer of
$Q_n(p)$ over $p\in{\cal K}$. It follows from Theorem \ref{theo:
mixture k} that there exists a unique $\hat \pi_n\in{\cal M}$ that
minimizes $\Psi_n(\pi)$ over $\pi\in{\cal M}$, and $\hat p_n$ and
$\hat \pi_n$ are linked by the relation
\begin{equation}\label{eq: ppihat}
\hat p_n(i)=\sum_{j\geq i+1} \hat \pi_{nj}T_j(i)\text{ for all }i\geq 0.
\end{equation}
Therefore, computing the constrained LSE $\hat p_n$ of $p_{0}$ amounts
to computing the measure $\hat\pi_n$ that minimizes $\Psi_n(\pi)$ over
$\pi\in{\cal M}$. Moreover, we know from Theorems~\ref{theo: LSE} and~\ref{theo:
mixture k} that $\hat \pi_n$ is a probability measure and that its
support is finite with the greatest point
equal to
$\widehat{s}_{n}+1$.
For all $L \geq 1$, let ${\mathcal M}^{L}$ be the set of measures $\pi
\in {\mathcal M}$ such that the support of $\pi$ is a subset of $\{1,
\ldots, L\}$. It can easily be shown that for any $L\geq 1$, the minimizer of $\Psi_{n}(\pi)$ over $\pi \in {\mathcal M}^{L}$ exists and is unique. We denote this minimizer by $\widehat{\pi}^{L}$, and for any $L \geq \widetilde{s}_{n}+1$,
we calculate $\widehat{\pi}^{L}$ using the support
reduction algorithm proposed in~\cite{GJW08}.
Let us introduce some notation. Let $\nu, \mu$ be two measures in ${\mathcal M}$. The derivative of
$\Psi_{n}$ at $\mu$ in the direction $\nu$ is defined as
follows:
$$\left[D_{\nu}(\Psi_{n})\right](\mu) = \lim_{\varepsilon \downarrow 0}
\frac{1}{\varepsilon} \left(\Psi_{n}(\mu + \varepsilon\nu) -
\Psi_{n}(\mu)\right),$$
for all $\mu$ and $\nu$ such that $\Psi_{n}(\mu)$ and
$\Psi_{n}(\nu)$ are finite.
It can be written as
\begin{equation}
\left[D_{\nu}(\Psi_{n})\right](\mu) = \sum_{j\geq 1} \nu_{j} \left[d_{j}(\Psi_{n})\right](\mu) \label{eqD}
\end{equation}
where
\begin{eqnarray*}
\left[d_{j}(\Psi_{n})\right](\mu) & = & \lim_{\varepsilon \downarrow 0}
\frac{1}{\varepsilon} \left(\Psi_{n}(\mu + \varepsilon \delta_{j}) -
\Psi_{n}(\mu)\right) \\
& = &\sum_{l=0}^{j-1} T_{j}(l)
\left(\sum_{j'\geq l+1} \mu_{j'}T_{j'}(l) - \widetilde{p}_{n}(l)\right).
\end{eqnarray*}
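The closed form of $[d_j(\Psi_n)](\mu)$ above can be checked against the difference quotient that defines it. In the Python sketch below, the measure $\mu$ and the empirical pmf $\tilde p_n$ are arbitrary toy choices of ours:

```python
def T(j, i):
    # triangular pmf: T_j(i) = 2(j-i)/(j(j+1)) on {0,...,j-1}, 0 elsewhere
    return 2.0 * (j - i) / (j * (j + 1)) if 0 <= i < j else 0.0

p_tilde = [0.4, 0.4, 0.2]             # a toy empirical pmf (support {0,1,2})
L = 6                                  # truncation level for the check

def Psi(mu):
    """Psi_n(mu) = (1/2) sum_i (sum_j mu_j T_j(i))^2 - sum_i p~(i) sum_j mu_j T_j(i)."""
    total = 0.0
    for i in range(L):                 # every T_j with j <= L vanishes for i >= L
        s = sum(mu.get(j, 0.0) * T(j, i) for j in range(i + 1, L + 1))
        pt = p_tilde[i] if i < len(p_tilde) else 0.0
        total += 0.5 * s * s - pt * s
    return total

def d(j, mu):
    """Closed form of [d_j(Psi_n)](mu) from the display above."""
    out = 0.0
    for l in range(j):
        s = sum(mu.get(jp, 0.0) * T(jp, l) for jp in range(l + 1, L + 1))
        pt = p_tilde[l] if l < len(p_tilde) else 0.0
        out += T(j, l) * (s - pt)
    return out

mu = {2: 0.3, 4: 0.6}                  # an arbitrary measure in M^L
eps = 1e-6
for j in range(1, L + 1):
    mu_eps = dict(mu)
    mu_eps[j] = mu_eps.get(j, 0.0) + eps
    numeric = (Psi(mu_eps) - Psi(mu)) / eps
    assert abs(numeric - d(j, mu)) < 1e-4
```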
\paragraph{The algorithm for calculating $\widehat{\pi}^{L}$ for a
fixed $L$ is described as follows:}
\begin{enumerate}
\item Initialisation
~\\
Let $S=\{L\}$ and choose the
measure $\pi^{L}$, such that
\begin{eqnarray*}
\pi^{L}_{j} & =& 0 \mbox{ for } 1\leq j\leq L-1 \\
\pi^{L}_{L} & = & \arg\min_{\pi\in\mathbb{R}}\sum_{i=0}^{L-1} \left(
\widetilde{p}_{n}(i) - \pi T_{L}(i) \right)^{2}.
\end{eqnarray*}
\item Optimisation over ${\mathcal M}^{L}$
~
\begin{description}
\item[Step 1:] For $1 \leq j \leq L$ calculate the quantities
$\left[d_{j}(\Psi_{n})\right](\pi^{L})$. If all are non-negative, then
set $\widehat{\pi}^{L} = \pi^{L}$, and the optimisation over
${\mathcal M}^{L}$ is complete. If not, choose $j$ such that
$\left[d_{j}(\Psi_{n})\right](\pi^{L})
<0$, and set $S' = S + \{j\}$. For example, one can take $j$ as the
minimizer of $\left[d_{j}(\Psi_{n})\right](\pi^{L})$. Go to step 2.
\item[Step 2:] Let $\pi^{\star}_{S'}$ be the minimizer of
$\Psi_{n}(\pi)$ over all measures $\pi$ such that ${\rm
Supp}(\pi) \subset S'$. Two cases must be considered:
\begin{enumerate}
\item
If for all $l \in S'$, $\pi^{\star}_{S', l} \geq 0$, then set
$\pi^{L} = \pi^{\star}_{S'}$, $S=S'$ and return to Step 1.
\item If not, let $l$ be defined as follows:
\begin{equation*}
l = {\rm arg}\min_{j'}
\left\{
\varepsilon_{j'} = \frac{\pi^{L}_{j'}}
{\pi^{L}_{j'}-\pi^{\star}_{S',j'}} \mbox{ for } j' \mbox{ such that }
\pi^{\star}_{S',j'} < \pi^{L}_{j'}
\right\}.
\end{equation*}
Set $S'=S + \{j\} - \{l\}$ and return to Step 2.
\end{enumerate}
\end{description}
\end{enumerate}
\begin{theo}\label{theo: Algo}
The estimator $\widehat{\pi}^{L}$ given by the algorithm
described above minimizes $\Psi_{n}(\pi)$ over $\pi \in {\cal M}^{L}$.
\end{theo}
The following theorem then allows us to determine a
suitable $L$.
\begin{theo}\label{theo:AlgoF}
Let $L \geq \widetilde{s}_{n} + 1$. If $\widehat{\pi}^{L}$ is a
probability measure, then $\widehat{\pi}^{L} = \widehat{\pi}_{n}$.
\end{theo}
One possibility is to carry out the optimisation over ${\mathcal
M}^{L}$ for increasing values of $L$ until the
condition $\sum_{j\geq 1}
\widehat{\pi}^{L}_{j} = 1$ is satisfied. As the support of
$\widehat{\pi}_{n}$ is finite, the condition will be fulfilled in a
finite number of steps.
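As an illustration only (this is not the support reduction algorithm of \cite{GJW08}), the minimizer $\widehat{\pi}^{L}$ can also be approximated by cyclic coordinate descent on the strictly convex quadratic $\Psi_n$ over ${\mathcal M}^{L}$: each update is $\pi_j \leftarrow \max(0,\, \pi_j - [d_j(\Psi_n)](\pi)/H_{jj})$ with $H_{jj} = \sum_i T_j^2(i)$. For $L$ large enough, the computed weights sum to one, in line with the theorem above. A Python sketch on a toy empirical pmf of ours:

```python
def T(j, i):
    # triangular pmf: T_j(i) = 2(j-i)/(j(j+1)) on {0,...,j-1}, 0 elsewhere
    return 2.0 * (j - i) / (j * (j + 1)) if 0 <= i < j else 0.0

p_tilde = [0.5, 0.3, 0.2]      # toy empirical pmf (support {0,1,2}, not convex at i=1)
L = 6                          # comfortably larger than s~_n + 1 = 3

def d(j, pi):
    # coordinate derivative [d_j(Psi_n)](pi), closed form from the text
    out = 0.0
    for l in range(j):
        fit = sum(pi[jp - 1] * T(jp, l) for jp in range(l + 1, L + 1))
        pt = p_tilde[l] if l < len(p_tilde) else 0.0
        out += T(j, l) * (fit - pt)
    return out

Hdiag = [sum(T(j, i) ** 2 for i in range(j)) for j in range(1, L + 1)]
pi = [0.0] * L                 # pi[j-1] holds the mass that pi puts at j
for sweep in range(20000):     # cyclic coordinate descent, clipped at zero
    for j in range(1, L + 1):
        pi[j - 1] = max(0.0, pi[j - 1] - d(j, pi) / Hdiag[j - 1])

# KKT conditions: d_j >= 0 for every j, with d_j = 0 on the support of pi
assert all(d(j, pi) >= -1e-6 for j in range(1, L + 1))
assert all(abs(pi[j - 1] * d(j, pi)) < 1e-6 for j in range(1, L + 1))
# the optimal weights form a probability measure, so the fitted p-hat is a pmf
p_hat = [sum(pi[j - 1] * T(j, i) for j in range(i + 1, L + 1)) for i in range(L)]
assert abs(sum(pi) - 1.0) < 1e-4 and abs(sum(p_hat) - 1.0) < 1e-4
```

Coordinate descent is slower than support reduction but requires no active-set bookkeeping, which makes it convenient for a quick cross-check of the estimator.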
\section{Simulation study}
\label{simul.st}
\subsection{Simulation design}
We designed a simulation study to assess the quality of the
constrained estimator $\widehat{p}_n$ and to compare it with the
unconstrained estimator $\widetilde{p}_n$.
We considered two shapes for the distribution $p_0$: the geometric
distribution ${\cal G}(\gamma)$ ($\gamma = .9, .5, .1$), the support of which is
infinite, and the pure triangular distribution $T_j$ ($j = 20, 5,
2$). For each distribution, we considered three sample sizes: $n = 10,
100$ and $1000$.
We also considered the Poisson distribution with mean
$\lambda$, which is convex as long as $\lambda$ is smaller than
$\lambda^* = 2 - \sqrt{2} \simeq .59$. We considered $\lambda = .59$,
$.8$ and $1$. For each simulation configuration, $1000$ random
samples were generated. The
simulations were carried out with {\tt R (www.r-project.org)}, using
functions available at the following web-site \verb+http://w3.jouy.inra.fr/unites/miaj/public/perso/SylvieHuet_en.html+.
\subsection{Global fit}
We first compared the fit of the estimated distributions $\widehat{p}_n$
and $\widetilde{p}_n$ to the entire distribution $p_0$. To this aim, for
each simulated sample, we computed the $\ell_2$-loss for $\widehat{p}_n$
$$
\ell_2(\widehat{p}_n, p_0) = \sum_i [ \widehat{p}_n(i) - p_0(i) ]^2,
$$
and likewise for $\widetilde{p}_n$. The expected $\ell_2$-loss
is estimated by the mean calculated on the basis of $1000$ simulations
and the results are displayed in Figure \ref{Fig:Loss}.
As expected from Theorem \ref{theo: empir/constraint}, the constrained
estimator $\widehat{p}_n$ outperforms the empirical estimator in all
configurations in terms of $\ell_2$-loss. The difference is larger in
the triangular case because of the existence of a region where $p_0$
is linear. The empirical estimator $\widetilde{p}_n$ gets better and
closer to $\widehat{p}_n$ as the true distribution $p_0$ becomes more
convex, i.e., for $\gamma=.9$ or $j=2$.
These results are theoretically grounded by Theorem \ref{theo:
empir/constraint} for the $\ell_2$-loss, but we also considered the
Kolmogorov loss:
$$
K(\widehat{p}_n, p_0) = \sup_i |\widehat{P}_n(i) - P_0(i)|,
$$
where $P_0$ is the true cumulative distribution function (cdf) and
$\widehat{P}_n$ is the constrained cdf. The Kolmogorov loss of the
empirical cdf $\widetilde{P}_n$ was calculated in the same way. As
shown on Figure \ref{Fig:Loss} (bottom), the behavior of the
Kolmogorov loss is similar to that of the $\ell_2$-loss. The same behavior
was observed for the Hellinger loss:
$$
\frac12 \sum_i \left(\sqrt{\widehat{p}_n(i)} - \sqrt{p_0(i)}\right)^2
$$
and the total variation loss:
$$
\frac1{2} \sum_i |\widehat{p}_n(i) - p_0(i)|.
$$
(results not shown). We thus observed that the constrained
estimator $\widehat{p}_n$ outperforms the empirical estimator for all
considered losses.
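For reference, the four losses used in this comparison can be computed as follows (a short Python sketch; the two pmfs are arbitrary examples of ours):

```python
import math

def l2_loss(p, q):
    """l2 loss: sum_i (p(i) - q(i))^2."""
    return sum((a - b) ** 2 for a, b in zip(p, q))

def kolmogorov_loss(p, q):
    """Kolmogorov loss: sup_i |P(i) - Q(i)| between the two cdfs."""
    cp = cq = worst = 0.0
    for a, b in zip(p, q):
        cp += a
        cq += b
        worst = max(worst, abs(cp - cq))
    return worst

def hellinger_loss(p, q):
    """Squared Hellinger loss: (1/2) sum_i (sqrt(p(i)) - sqrt(q(i)))^2."""
    return 0.5 * sum((math.sqrt(a) - math.sqrt(b)) ** 2 for a, b in zip(p, q))

def tv_loss(p, q):
    """Total variation loss: (1/2) sum_i |p(i) - q(i)|."""
    return 0.5 * sum(abs(a - b) for a, b in zip(p, q))

p = [0.5, 0.3, 0.2, 0.0]
q = [0.4, 0.4, 0.1, 0.1]
assert abs(l2_loss(p, q) - 0.04) < 1e-12
assert abs(kolmogorov_loss(p, q) - 0.1) < 1e-12
assert abs(tv_loss(p, q) - 0.2) < 1e-12
```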
\begin{figure}
\includegraphics[angle=90,height=6cm, width=12cm]{figs1/L2loss} \\
\includegraphics[angle=90,height=6cm, width=12cm]{figs1/Kloss}
\caption{{\bf $\ell_2$ and Kolmogorov losses.} Empirical risk as a function of the
sample size $n$. Top: $\ell_2(\cdot, p_0)$, bottom: $K(\cdot,
p_0)$. Black: $\widetilde{p}_n$, \textcolor{red}{red}:
$\widehat{p}_n$. Solid ({\bf --}): $\gamma=.1$ or $j = 20$, dashed
(-\,-): $\gamma=.5$ or $j = 5$, dotted ($\cdots$): $\gamma=.9$ or
$j =2$. \label{Fig:Loss} }
\end{figure}
\subsection{Some characteristics of interest}
In this section, we consider the estimation of some characteristics of
the distribution, namely the variance, the entropy and the probability
at $0$. For each of these characteristics, denoted $\theta(p)$, we
measured the performance in terms of relative standard error:
$$
{\sqrt{\mathbb{E}\left(\theta(\widehat{p}_n) -
\theta(p_0)\right)^2}} \left/ {\theta(p_0)}\right..
$$
The expectation was estimated by the mean over $1000$ simulations.
As shown in Section \ref{sectionLSE}, the means of the empirical and
constrained distributions are equal, whereas the variance of the
constrained distribution is larger than the variance of the empirical one. Denoting by
$\mu_k$ the central moment of order $k$ of $p_{0}$, the mean and variance of the
empirical variance are respectively
$$
\frac{n-1}n \mu_2
\qquad
\text{and}
\qquad
\frac{n-1}{n^3} \left((n-1)\mu_4 - (n-3) \mu_2^2\right).
$$
Figure \ref{Fig:Var} shows that the relative standard error of the
constrained estimator is smaller than that of the empirical
one. Hence, the constrained variance turns out to be more accurate.
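The displayed mean and variance of the empirical variance can be verified exactly for small $n$ by enumerating all samples. Below is a Python check for $n=2$, where the empirical variance of a sample $(x_1, x_2)$ reduces to $(x_1-x_2)^2/4$; the three-point distribution is an arbitrary choice of ours:

```python
from itertools import product

support = [0, 1, 3]
probs = [0.5, 0.3, 0.2]                # an arbitrary three-point distribution
m = sum(x * w for x, w in zip(support, probs))
mu2 = sum((x - m) ** 2 * w for x, w in zip(support, probs))  # central moments
mu4 = sum((x - m) ** 4 * w for x, w in zip(support, probs))

n = 2
mean_v = var_v = 0.0
for (x1, w1), (x2, w2) in product(zip(support, probs), repeat=n):
    v = ((x1 - x2) ** 2) / 4.0         # empirical variance of a sample of size 2
    mean_v += w1 * w2 * v
    var_v += w1 * w2 * v * v
var_v -= mean_v ** 2                   # Var(V) = E(V^2) - (E V)^2

# the displayed formulas: E(V) = (n-1)/n mu2 and
# Var(V) = (n-1)/n^3 ((n-1) mu4 - (n-3) mu2^2)
assert abs(mean_v - (n - 1) / n * mu2) < 1e-12
assert abs(var_v - (n - 1) / n ** 3 * ((n - 1) * mu4 - (n - 3) * mu2 ** 2)) < 1e-12
```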
\begin{figure}
\includegraphics[angle=90,height=6cm, width=12cm]{figs1/VarMSEr}
\caption{{\bf Variance.} Relative standard error of the variance as
a function of the sample size $n$. Same legend as Figure
\ref{Fig:Loss}. \label{Fig:Var}}
\end{figure}
We also investigated the estimation of the entropy
$$
H(p) = -\sum_{i \geq 0}p(i)\log p(i),
$$
which is often used in ecology as a diversity
index. As shown in Figure \ref{Fig:Entropy},
$H(\widehat p_n)$ is a better estimate of the true entropy than
$H(\widetilde p_n)$, in most situations; the difference between
the two estimators vanishes when the true distribution becomes more
convex.
The worst performance of $H(\widehat p_n)$ is
obtained when the true distribution is $T_2$. Note that this
distribution is a special case since more than half of the
estimation errors consist in adding a component $T_j$ ($j > 2$) in
the mixture \eqref{eq: mixture k}, which results in an increase of
the entropy.
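The plug-in entropy estimates above use the convention $0 \log 0 = 0$; a minimal Python helper, checked on the triangular pmf $T_2 = (2/3, 1/3)$:

```python
import math

def entropy(p):
    """Shannon entropy H(p) = -sum_i p(i) log p(i), with the convention 0*log(0) = 0."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0.0)

# entropy of T_2: H = log(3) - (2/3) log(2)
h = entropy([2.0 / 3.0, 1.0 / 3.0])
assert abs(h - (math.log(3) - (2.0 / 3.0) * math.log(2))) < 1e-12
```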
\begin{figure}
\centering
\includegraphics[angle=90,height=6cm,width=12cm]{figs1/EntropyMSEr}
\caption{{\bf Entropy.} Relative standard error of the estimated
entropy estimators as a function of the sample size $n$. Same
legend as Figure \ref{Fig:Loss}. \label{Fig:Entropy} }
\end{figure}
We then considered the estimation of the probability mass
$p(0)$. Theorem \ref{moments.th} showed that the constrained
estimator $\widehat{p}_n(0)$ is greater than or equal to the empirical
estimator $\widetilde{p}_n(0)$, which is known to be unbiased.
However, Figure \ref{Fig:P0} shows that the constrained estimator
$\widehat{p}_n$ still provides a more accurate estimate of $p_0(0)$
than $\widetilde{p}_n$.
\begin{figure}
\includegraphics[height=6cm, width=12cm]{figs1/P0MSEr}
\caption{{\bf Probability mass at 0.} Relative standard error of the
estimated probability mass at zero as a function of the sample
size $n$. Same legend as Figure \ref{Fig:Loss}. \label{Fig:P0}}
\end{figure}
For all these characteristics, the constrained distribution provides
better estimates than the empirical distribution, provided that the
true distribution is indeed convex.
\subsection{Robustness to non-convexity}
We finally studied the robustness of the constrained estimator to non-convexity, considering again the Poisson distribution with
mean $\lambda$, which is convex as long as $\lambda$ is smaller than
$\lambda^* = 2 - \sqrt{2} \simeq .59$. We studied how
$\widetilde{p}_n$ and $\widehat{p}_n$ behave, in terms of $\ell_2$-loss, when $\lambda$ exceeds $\lambda^*$.
\begin{figure}
\includegraphics[height=6cm, width=12cm]{figs1/L2Poiloss}
\caption{Left: Three different Poisson distributions.
Solid ({\bf --}):$\lambda=\lambda^*$, dashed (-\,-):$\lambda=.8$,
dotted ($\cdots$):$\lambda=1$. Right: empirical $\ell_2$-loss as a function
of $n$. Black: $\widetilde{p}_n$, \textcolor{red}{red}:
$\widehat{p}_n$. \label{Fig:Pois}}
\end{figure}
The left panel of Figure \ref{Fig:Pois} displays the Poisson
distributions with respective means $\lambda^*$, .8 and 1.
Figure \ref{Fig:Pois} (right) shows that the $\ell_2$-loss of the
constrained estimator increases with $\lambda$. However for small
sample sizes, $\widehat{p}_n$ still provides a better fit than
$\widetilde{p}_n$, at least for $\lambda \leq 1$. The performance of
$\widehat{p}_n$ is dramatically altered when the sample size becomes
large and the convexity assumption is strongly violated.
\section{Proofs} \label{Proofs.st}
\subsection{Proof of Theorem~\ref{theo: LSE}}
In order to prove Theorem \ref{theo: LSE}, we first prove in the
following lemma that the minimizer of $Q_n$ over ${\cal K}$ exists and
is unique, where ${\cal K}$ is the set defined in Section \ref{intro.st}. Then, after some intermediate results, we prove in
Lemma \ref{lem: LSE} below that the minimizer of $Q_n$ over ${\cal K}$
belongs to ${\cal C}$. Since ${\cal C}\subset{\cal K},$ Theorem
\ref{theo: LSE} follows from Lemma \ref{lem: min K} combined with Lemma
\ref{lem: LSE}.
\paragraph{Notation}
We denote by $N_n$ the number of distinct values of the
$X_i$'s and by $X_{(1)},\dots,X_{(N_n)}$ these distinct values
rearranged in increasing order, i.e., such that
$X_{(1)}<\dots<X_{(N_n)}$. We set $\widetilde{r}_{n}=X_{(1)}$ and
$\widetilde{s}_{n}=X_{(N_n)}$.
In the
case $\widetilde{s}_{n}=0$, i.e., $\tilde p_n(0)=1$ and $\tilde p_n(i)=0$ for all
$i\geq 1$, the proof of Theorem~\ref{theo: LSE} is
straightforward. Thus, in the sequel, we restrict ourselves
to the case $\widetilde{s}_{n}\geq 1$.
\begin{lem}\label{lem: min K}
There exists a unique $\hat p_n\in{\cal K}$ such that
\begin{equation}\label{eq: min K}
Q_n(\hat p_n)=\inf_{p\in{\cal K}}Q_n(p).
\end{equation}
Moreover, $\hat p_n$ has a finite support.
\end{lem}
\paragraph{Proof.}
To prove the existence and uniqueness of $\widehat{p}_{n}$, we first
establish the following preliminary results, where $q$ denotes a
candidate minimizer of $Q_n$ over ${\cal K}$.
\begin{itemize}
\item[(i)] There exists $c_1=c_1(\omega)<\infty$ that does not depend on $q$ such that $q\leq c_1$.
\item[(ii)] We have $q=\bar{q}$ where
\begin{equation}\label{eq: tilde q}
\bar{q}(i)= \begin{cases}q(i) &\text{ for all } i \in\{0,\dots,\widetilde{s}_{n}\}\\
\max \{q(\widetilde{s}_{n}) + (q(\widetilde{s}_{n})-q(\widetilde{s}_{n}-1))(i-\widetilde{s}_{n})\,,\,0\}&\text{ for all } i > \widetilde{s}_{n}.
\end{cases}
\end{equation}
\end{itemize}
Therefore, minimizing $Q_n$ over ${\cal K}$ amounts to minimizing $Q_n$
over the set of functions $q\in{\cal K}$ such that $q\leq c_1$,
$Q_n(q)\leq Q_n(T_1)$, and $q=\bar{q}$. But for all $q\in{\cal K}$ such that $q=\bar{q}$, we
have
\begin{equation}\label{eq: Qn(q)}
Q_n(q)=\frac{1}{2}\sum_{i=0}^{\widetilde{s}_{n}}q^2(i)+\frac{1}{2}\sum_{i\geq 1}\left(\max\{q(\widetilde{s}_{n})+i(q(\widetilde{s}_{n})-q(\widetilde{s}_{n}-1)),0\}\right)^2-\sum_{i\geq 0}q(i)\tilde p_n(i)
\end{equation}
and therefore, this amounts to minimizing
$$\bar{Q}_n(t)=\frac{1}{2}\sum_{i=0}^{\widetilde{s}_{n}}t^2(i)+\frac{1}{2}\sum_{i\geq 1}\left(\max\{t(\widetilde{s}_{n})+i(t(\widetilde{s}_{n})-t(\widetilde{s}_{n}-1)),0\}\right)^2-\sum_{i\geq 0}t(i)\tilde p_n(i)$$
over the set $K$ of non-increasing convex functions $t:\{0,\dots,\widetilde{s}_{n}\}\to[0,\infty)$ such that $t(0)\leq c_1$ and $\bar{Q}_n(t)\leq Q_n(T_1)$.
The set $K$ is compact and $\bar{Q}_n$ is continuous and strictly convex on $K$, so there exists a unique minimizer of $\bar{Q}_n$ over $K$. This proves that there exists a unique minimizer of $Q_n$ over ${\cal K}$.
It remains to prove results (i) and (ii).
\paragraph{Proof of {\rm (i)}.}
It is easy to see that for all $p \in {\cal K}$,
\begin{equation*}
Q_{n} (p) \geq \frac{1}{2} p^{2}(\widetilde{r}_{n}) - p(\widetilde{r}_{n})
\end{equation*}
using that $p$ is non-increasing. This lower bound tends to infinity
as $p(\widetilde{r}_{n})\to\infty$. But if we consider $T_{1}$, the probability mass function that
puts mass 1 at 0, we have $Q_n(q)\leq
Q_n(T_1)<\infty$, so there exists $c<\infty$ such that $q(\widetilde{r}_{n})<c$. Now,
$Q_n(T_1)\geq Q_n(q)\geq q^2(0)/2-q(\widetilde{r}_{n})$ and therefore, there exists
$c_1<\infty$ such that $q(0)\leq c_1$, which means that $q\leq c_1$.
\paragraph{Proof of {\rm (ii)}.}
By convexity we must have $\bar{q}(i)\leq q(i)$ for all $i\geq \widetilde{s}_{n}$ and
therefore, $$Q_{n}(q)-Q_{n}(\bar{q})=\sum_{i> \widetilde{s}_{n}}(q^2(i)-\bar{q}^2(i))/2\geq
0$$
with a strict inequality in the case $q\neq\bar{q}$. This proves that any
candidate $q$ to be a minimizer of $Q_n$ over ${\cal K}$ should
satisfy $q=\bar{q}$.
Let us now prove that the support of $\widehat{p}_{n}$ is finite.
In the case $\hat p_n(\widetilde{s}_{n})=0$, it is clear that $\hat p_n$ has a
finite support included in $\{0,\dots,\widetilde{s}_{n}-1\}$. Consider the case $\hat
p_n(\widetilde{s}_{n})>0$. Let us first remark that $\hat p_n(\widetilde{s}_{n}-1)>\hat p_n(\widetilde{s}_{n})$, since otherwise, we
would have $\hat p_n(i)=\hat p_n(\widetilde{s}_{n})$ for all $i\geq \widetilde{s}_{n}$ so that
$Q_n(\hat p_n)=\infty$. Then define $\bar{q}$ as in (\ref{eq: tilde q})
where $q$ is
replaced by $\hat p_n$.
From the argument above, we
know that $\hat p_n = \bar{q}$, which has finite support as soon as $\hat
p_n(\widetilde{s}_{n}-1)>\hat p_n(\widetilde{s}_{n})$.\hfill{$\Box$}
\paragraph{~}
The following lemma provides a precise characterization of $\hat p_n$. It is the discrete counterpart of Lemma 2.2 in \cite{GJW01}, which deals with the continuous case. For every $p\in {\cal K},$ we define
\begin{equation}\label{def: FH}
F_p(j)=\sum_{i=0}^j p(i)\text{ and }H_p(j)=\sum_{i=0}^j F_p(i)
\end{equation}
for all integers $j\geq 0$, and $F_p(j)=H_p(j)=0$ for all integers $j<0$. Thus, $F_p$ is a distribution function in the case $p\in{\cal C}$.
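The proofs below repeatedly exchange sums involving $F_p$ and $H_p$; the key identity $H_p(l-1)=\sum_{j=0}^{l-1}F_p(j)=\sum_{i=0}^{l-1}(l-i)p(i)$, used when differentiating $Q_n$ along the perturbations $q_{\varepsilon l}$, is easy to confirm numerically (a Python sketch with an arbitrary pmf):

```python
def F(p, j):
    """F_p(j) = sum_{i <= j} p(i), with F_p(j) = 0 for j < 0."""
    return sum(p[:j + 1]) if j >= 0 else 0.0

def H(p, j):
    """H_p(j) = sum_{i <= j} F_p(i), with H_p(j) = 0 for j < 0."""
    return sum(F(p, i) for i in range(j + 1)) if j >= 0 else 0.0

p = [0.4, 0.3, 0.2, 0.1]               # an arbitrary pmf
for l in range(1, 8):
    lhs = H(p, l - 1)
    rhs = sum((l - i) * p[i] for i in range(min(l, len(p))))  # p(i) = 0 beyond the support
    assert abs(lhs - rhs) < 1e-12
```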
\begin{lem}\label{lem: charact}
Let $\hat p_n$ be the unique function in ${\cal K}$ that satisfies (\ref{eq: min K}). For all $l\geq 1$ we have
\begin{equation}\label{eq: charact}
H_{\hat p_n}(l-1)\geq H_{\tilde p_n}(l-1)
\end{equation}
with an equality if $\hat p_n$ has a change of slope at point $l$, i.e., if
$$\hat p_n(l)-\hat p_n(l-1)<\hat p_n(l+1)-\hat p_n(l).$$
Conversely, if $p\in{\cal K}$ satisfies $H_{p}(l-1)\geq H_{\tilde p_n}(l-1)$ for all $l\geq 1$ with an equality if $p(l)-p(l-1)<p(l+1)-p(l)$, then $p=\hat p_n$.
\end{lem}
\paragraph{Proof.} First, note that $\tilde p_n$ has a finite support by
definition, and Lemma \ref{lem: min K} ensures that $\hat p_n$ has a
finite support as well. Thus, all the sums involved in the proof are
well-defined and finite.
For every $\varepsilon>0$ and $l\geq 1$, define $q_{\varepsilon l}$ by $q_{\varepsilon l}(i)=\hat p_n(i)$ for all $i\geq l$ and
$$q_{\varepsilon l}(i)=\hat p_n(i)+\varepsilon (l-i) $$
for all $i\in\{ 0,\dots, l\}$. Thus, $q_{\varepsilon l}$ is the sum of two convex functions, which implies that $q_{\varepsilon l}\in{\cal K}$ for all $\varepsilon,l$. Since $\hat p_n$ minimizes $Q_n$ over ${\cal K}$, we have $Q_n(q_{\varepsilon l})\geq Q_n(\hat p_n)$ for all $\varepsilon,l$ and therefore,
$$\liminf_{\varepsilon\downarrow0}\frac{1}{\varepsilon}\big(Q_n(q_{\varepsilon l})-Q_n(\hat p_n)\big)\geq 0$$
for all $l\geq 1$. This simplifies to
$$\sum_{i= 0}^{l-1}\hat p_n(i)(l-i)\geq \sum_{i= 0}^{l-1}\tilde p_n(i)(l-i)$$
for all $l\geq 1$ and can be rewritten as
$$\sum_{j=0}^{l-1}\sum_{i= 0}^{j}\hat p_n(i)\geq \sum_{j=0}^{l-1}\sum_{i=0}^{j}\tilde p_n(i)$$
for all $l\geq 1$, which is precisely (\ref{eq: charact}). To prove
the equality case, note that $(1+\varepsilon)\hat p_n\in{\cal K}$ for all
$\varepsilon>-1$. Therefore, for all $\varepsilon>-1$ we have
$$Q_n\big((1+\varepsilon)\hat p_n\big)\geq Q_n(\hat p_n).$$
Distinguishing the cases $\varepsilon>0$ and $\varepsilon<0$ we obtain
$$\liminf_{\varepsilon\downarrow0}\frac{1}{\varepsilon}\big(Q_n((1+\varepsilon)\hat p_n)-Q_n(\hat p_n)\big)\geq 0$$
and
$$\limsup_{\varepsilon\uparrow0}\frac{1}{\varepsilon}\big(Q_n((1+\varepsilon)\hat p_n)-Q_n(\hat p_n)\big)\leq 0.$$
Both limits are equal, so their common value is equal to zero, which can be written as
$$\sum_{i\geq 0}\hat p_n(i)\big(\hat p_n(i)-\tilde p_n(i)\big)=0.$$
Now, noticing that $p(i)=F_p(i)-F_p(i-1)$ for all $p\in{\cal K}$ and $i\in\mathbb{N}$, we arrive at
\[\begin{split}
0=&\sum_{i\geq 0}\hat p_n(i)\Big(F_{\hat p_n}(i)-F_{\hat p_n}(i-1)-F_{\tilde p_n}(i)+F_{\tilde p_n}(i-1)\Big)\\
=&\sum_{i\geq 0}\hat p_n(i)\Big(F_{\hat p_n}(i)-F_{\tilde p_n}(i)\Big)-\sum_{i\geq 1}\hat p_n(i)\Big(F_{\hat p_n}(i-1)-F_{\tilde p_n}(i-1)\Big).\\
\end{split}\]
Rearranging the indices, we have
$$\sum_{i\geq 1}\hat p_n(i)\Big(F_{\hat p_n}(i-1)-F_{\tilde p_n}(i-1)\Big)=\sum_{i\geq 0}\hat p_n(i+1)\Big(F_{\hat p_n}(i)-F_{\tilde p_n}(i)\Big),$$
whence
$$0=\sum_{i\geq 0}\Big(\hat p_n(i)-\hat p_n(i+1)\Big)\Big(F_{\hat p_n}(i)-F_{\tilde p_n}(i)\Big).$$
Now, we notice that $F_p(i)=H_p(i)-H_p(i-1)$ for all $p\in{\cal K}$ and $i\in\mathbb{N}$. A similar change of indices as above then yields
$$0=\sum_{i\geq 0}\Big((\hat p_n(i)-\hat p_n(i+1))-(\hat p_n(i+1)-\hat p_n(i+2))\Big)\Big(H_{\hat p_n}(i)-H_{\tilde p_n}(i)\Big).$$
It follows from (\ref{eq: charact}) that $H_{\hat p_n}(i)\geq H_{\tilde p_n}(i)$ for all $i\geq 0$, and we have
$$\hat p_n(i+1)-\hat p_n(i)\leq\hat p_n(i+2)-\hat p_n(i+1)$$
by convexity of $\hat p_n$. A sum of non-negative numbers is equal to zero if and only if these numbers are all equal to zero, so we conclude that
$$\Big((\hat p_n(i)-\hat p_n(i+1))-(\hat p_n(i+1)-\hat
p_n(i+2))\Big)\Big(H_{\hat p_n}(i)-H_{\tilde p_n}(i)\Big)=0$$
for all $i\geq 0$. Hence, $H_{\hat p_n}(i)=H_{\tilde p_n}(i)$ for all
$i\geq 0$ that satisfy
$$\hat p_n(i+1)-\hat p_n(i)<\hat p_n(i+2)-\hat p_n(i+1).$$
Setting $l=i+1$, this means that we have an equality in (\ref{eq: charact}) if $\hat p_n$ has a change of slope at point $l$.
Conversely, consider $p\in{\cal K}$ such
that $H_{p}(i)\geq H_{\tilde p_n}(i)$ for all $i\geq 0$ with an
equality if $p(i+1)-p(i)<p(i+2)-p(i+1)$. Then we have
\begin{equation}\label{eq: 0=}
0=\sum_{i\geq 0}\big(p(i)-2p(i+1)+p(i+2)\big)\big(H_{p}(i)-H_{\tilde p_n}(i)\big)
\end{equation}
and $p$ has a finite support.
To see this, argue by contradiction and suppose that the support of $p$ is not finite. In such a case, there exists an increasing sequence $(u_l)_{l\in\mathbb{N}}$ such that $u_l$ tends to infinity as $l\to\infty$ and $p$ changes slope at every point $u_l+1$, $l\in\mathbb{N}$. This implies that
$$H_{p}(u_l)-H_p(u_{l-1})= H_{\tilde p_n}(u_l)-H_{\tilde p_n}(u_{l-1})$$
for all $l\geq 1$. Using that $F_p$ is non-decreasing and that $\tilde p_n$ has a finite support, we obtain
$$F_p(u_{l-1})\leq\frac{1}{u_{l}-u_{l-1}}\left(H_{p}(u_l)-H_p(u_{l-1})\right)= F_{\tilde p_n}(u_l)=\sum_{i\geq 0}\tilde p_n(i)$$
for all large enough $l$ and similarly,
$$F_p(u_{l})\geq\sum_{i\geq 0}\tilde p_n(i)$$
for all large enough $l$. Therefore,
$$F_p(u_l)=F_p(u_{l-1})=\sum_{i\geq 0}\tilde p_n(i)$$
for all large enough $l$, which means that $p(i)=0$ for all large
enough $i$. This is in contradiction with the assumption that the
support of $p$ is not finite, which proves that the support of $p$ is
finite.
Now, let $q\in{\cal K}$ be any candidate to be a minimizer of $Q_n$
over ${\cal K}$. We know, see the proof of Lemma \ref{lem: min K},
that $Q_n(q)\leq Q_n(T_1)$, and $q=\bar{q}$, where $\bar{q}$ is defined by
(\ref{eq: tilde q}). In particular, $q$ satisfies (\ref{eq: Qn(q)})
which implies that $q$ has a finite
support. Thus, we can write
\begin{eqnarray}\label{eq: charac egal}
Q_n(q)-Q_n(p)&=&\frac{1}{2}\sum_{i\geq 0}\Big(q^2(i)-p^2(i)-2\tilde p_n(i)\big( q(i)-p(i)\big)\Big)\notag\\
&=&\frac{1}{2}\sum_{i\geq 0}\Big(q^2(i)-p^2(i)+2\big(p(i)-\tilde p_n(i)\big)\big( q(i)-p(i)\big)-2p(i)\big( q(i)-p(i)\big)\Big)\notag\\
&=&\frac{1}{2}\sum_{i\geq 0}\big(q(i)-p(i)\big)^2+\sum_{i\geq 0}\big(p(i)-\tilde p_n(i)\big)\big( q(i)-p(i)\big)\notag\\
&\geq&\sum_{i\geq 0}\big(p(i)-\tilde p_n(i)\big)\big( q(i)-p(i)\big).
\end{eqnarray}
Using that both $q$ and $p-\tilde p_n$ have a finite support
and rearranging the indices as above, we show that
\[\begin{split}
\sum_{i\geq 0}&\big(p(i)-\tilde p_n(i)\big)\big( q(i)-p(i)\big)\\
&=\sum_{i\geq 0}\Big(\big(q(i)-p(i)\big)-2\big(q(i+1)-p(i+1)\big)+\big(q(i+2)-p(i+2)\big)\Big)\big(H_p(i)-H_{\tilde p_n}(i)\big).
\end{split}\]
Combining this with (\ref{eq: 0=}) and (\ref{eq: charac egal}) yields
$$Q_n(q)-Q_n(p)\geq\sum_{i\geq 0}\big(q(i)-2q(i+1)+q(i+2)\big)\big(H_p(i)-H_{\tilde p_n}(i)\big).$$
The right-hand side is non-negative since $H_{p}(i)\geq H_{\tilde p_n}(i)$ for all $i\geq 0$ and $q$ is convex over $\mathbb{N}$, so we conclude that $Q_n(q)\geq Q_n(p)$ for all candidates $q\in{\cal K}$. This means that $p$ minimizes $Q_n$ over ${\cal K}$. \hfill{$\Box$}\\
We are now in a position to prove that $\hat p_n$ is a probability mass function, i.e., $\hat p_n\in{\cal C}.$
\begin{lem}\label{lem: LSE}
Let $\hat p_n$ be the unique function in ${\cal K}$ that satisfies (\ref{eq: min K}).
We have
\begin{equation}\label{eq: Ftilde=Fhat}
F_{\tilde p_n}(\widehat{s}_{n} + 1)=F_{\hat p_n}(\widehat{s}_{n} + 1),
\end{equation}
$\widehat{s}_{n} \geq \widetilde{s}_{n}$ and $\hat p_n\in{\cal C}.$
\end{lem}
\paragraph{Proof.}
Let us first prove that $\widehat{s}_{n}$ is well-defined. Let $k= 1+\min\left\{j:\ \widetilde{p}_{n}(j)
\neq 0\right\}$. It is easy to verify that there exists a
strictly positive $a$ such that $Q_{n}(a T_{k}) < 0$. As $Q_{n}(0)=0$,
$\widehat{p}_{n}$ cannot be
identically zero and $\widehat{s}_{n}$ is well-defined.
By definition of $\widehat{s}_{n}$, $\hat p_n$ has a change of slope at point
$\widehat{s}_{n} + 1$, so it follows from Lemma \ref{lem: charact} that
\begin{equation}\label{eq: egal s-1}
\sum_{j=0}^{\widehat{s}_{n}}F_{\hat p_n}(j)= \sum_{j=0}^{\widehat{s}_{n}}F_{\tilde p_n}(j).
\end{equation}
Using Lemma \ref{lem: charact} again we obtain
$$\sum_{j=0}^{\widehat{s}_{n} + 1}F_{\hat p_n}(j)\geq \sum_{j=0}^{\widehat{s}_{n}+1}F_{\tilde p_n}(j),$$
which, combined with (\ref{eq: egal s-1}) shows that $F_{\hat
p_n}(\widehat{s}_{n}+1)\geq F_{\tilde p_n}(\widehat{s}_{n}+1)$.
Let us first consider the case where $\widehat{s}_{n}\geq 1$. We have
$$\sum_{j=0}^{\widehat{s}_{n}-1}F_{\hat p_n}(j)\geq \sum_{j=0}^{\widehat{s}_{n}-1}F_{\tilde p_n}(j)$$
which, combined with (\ref{eq: egal s-1}) shows that $F_{\hat
p_n}(\widehat{s}_{n})\leq F_{\tilde p_n}(\widehat{s}_{n})$. But $\hat p_n(\widehat{s}_{n}+1)=0$ by
definition of $\widehat{s}_{n}$, so we also have $F_{\hat
p_n}(\widehat{s}_{n}+1)=F_{\hat p_n}(\widehat{s}_{n})$ and therefore,
$$F_{\tilde p_n}(\widehat{s}_{n})\geq F_{\hat p_n}(\widehat{s}_{n}+1)\geq F_{\tilde
p_n}(\widehat{s}_{n}+1).$$
By definition, $F_{\tilde p_n}$ is
non-decreasing, so we conclude that (\ref{eq: Ftilde=Fhat}) holds.
Consider now the case $\widehat{s}_{n}=0$. We have $\tilde
p_n(1)=0$: otherwise, we could modify $\hat p_n$ to a $q\in{\cal
K}$ such that $q (0)=\hat p_n(0)$, $0<q(1)\leq \tilde p_n(1)$ and
$q(i)=0$ for all $i>1$, which is a contradiction since for such a
$q$ we have $Q_n(q)<Q_n(\hat p_n)$. Moreover, in the case
$\widehat{s}_{n}=0$, we have $\hat p_n(0)=\tilde p_n(0)$: otherwise, we
could modify $\hat p_n$ to a $q\in{\cal K}$ such that $q
(0)=\tilde p_n(0)$ and $q(i)=0$ for all $i>0$ which is a
contradiction since for such a $q$ we have $Q_n(q)<Q_n(\hat
p_n)$. Hence,
$$F_{\hat p_n}(1)=\hat p_n(0)=\tilde p_n(0)=F_{\tilde p_n}(1),$$
which completes the proof of (\ref{eq: Ftilde=Fhat}).
For the purpose of proving that $\widehat{s}_{n} \geq \widetilde{s}_{n}$, we argue by
contradiction. Suppose first that $\widehat{s}_{n}=\widetilde{s}_{n}-1$. This means that
$\hat p_n(i)=0$ for all $i\geq \widetilde{s}_{n}$ and $\hat p_n(\widetilde{s}_{n}-1)>0$. In this
case, we can modify $\hat p_n$ to a $q\in{\cal K}$ such that $q
(i)=\hat p_n(i)$ for all $i<\widetilde{s}_{n}$, $0<q(\widetilde{s}_{n})\leq \tilde p_n(\widetilde{s}_{n}),$
and $q(i)=0$ for all $i>\widetilde{s}_{n}$. Then we have
\begin{eqnarray*}
2\big(Q_n(q)-Q_n(\hat p_n)\big)&=&\sum_{i\geq 0}\big (q(i)-\tilde p_n(i)\big)^2-\sum_{i\geq 0}\big (\hat p_n(i)-\tilde p_n(i)\big)^2 \\
&=&\big (q(\widetilde{s}_{n})-\tilde p_n(\widetilde{s}_{n})\big)^2-\big (\tilde p_n(\widetilde{s}_{n})\big)^2\\
&<&0.
\end{eqnarray*}
This is a contradiction since $\hat p_n$ minimizes $Q_n$ and
therefore, $\widehat{s}_{n}\neq \widetilde{s}_{n}-1.$ Assume now that $\widehat{s}_{n}<\widetilde{s}_{n}-1$. Then,
$F_{\tilde p_n}(\widehat{s}_{n}+1)<1$, so (\ref{eq: Ftilde=Fhat}) yields
$$F_{\hat p_n}(j)=F_{\hat p_n}(\widehat{s}_{n}+1)<1$$
for all $j\geq \widehat{s}_{n}+1$. Therefore, for all $l>\widetilde{s}_{n}$ we have
$$\sum_{j=0}^{l-1}\big(F_{\hat p_n}(j)-F_{\tilde
p_n}(j)\big)=\sum_{j=0}^{\widetilde{s}_{n}-1}\big(F_{\hat p_n}(j)-F_{\tilde
p_n}(j)\big)+(l-\widetilde{s}_{n})\big (F_{\hat p_n}(\widehat{s}_{n}+1)-1\big),$$
which tends to $-\infty$ as $l\to\infty$. This is a contradiction since from Lemma \ref{lem: charact}, this has to remain non-negative for all $l$. We conclude that $\widehat{s}_{n}\geq\widetilde{s}_{n}$. Combining this with (\ref{eq: Ftilde=Fhat}) yields
$$F_{\hat p_n}(\widehat{s}_{n}+1)= F_{\tilde p_n}(\widehat{s}_{n}+1)=1.$$
This proves that $\hat p_n$ is a probability mass function and
completes the proof of the lemma. \hfill{$\Box$}
\subsection{Proof of Theorem~\ref{theo: empir/constraint}}
Let us begin with the following lemma that gathers together a number of properties of the
minimizer $\hat p_n$. These
properties parallel those of the constrained least squares
estimator of a convex density function over $[0,\infty)$, see
\cite{GJW01}: in this case the constrained LSE has a bounded support, is piecewise linear,
has no change of slope at the observation points, and has at most one
change of slope between two consecutive observation points. In the
discrete case, the constrained LSE is also piecewise linear with
bounded support. However, due to the fact that $\mathbb{N}$ is a discrete set,
the constrained LSE can have changes of slope at the observation
points and can have two changes of slope between two consecutive
observations.
\begin{lem}\label{lem: affine}
The unique function $\hat p_n\in{\cal K}$ that satisfies (\ref{eq: min
K}) has the following properties:
$\hat p_n$ is linear on the
interval $\{0,\dots, X_{(1)}+1\}$ and also on
$\{\widetilde{s}_{n} -1,\dots,\widehat{s}_{n}\}$; in the case where $N_n$, the number of distinct values of the
$X_i$'s, is greater than or equal to 2, it has at most two changes of slope on $\{X_{(j)},\dots,X_{(j+1)}\}$ for any given $j=1,\dots, N_n-1$, and in the case where it has two changes of slope on this set, these changes occur at consecutive points in $\mathbb{N}$.
\end{lem}
\paragraph{Proof.}
We know, from the proof of Lemma~\ref{lem: min K}, that $\hat p_n =
\bar{q}$, where $\bar{q}$ is defined as in (\ref{eq: tilde q})
where $q$ is
replaced by $\hat p_n$. It follows that $\hat p_n$ is linear on
$\{\widetilde{s}_{n} -1,\dots,\widehat{s}_{n}\}$ in the case $\widehat{s}_{n} \geq \widetilde{s}_{n}$. Consider an
arbitrary candidate $p$ to be a minimizer of $Q_n$ over ${\cal K}$,
fix $j\in\{1,\dots,N_n-1\}$, and define the functions $p_l$ and $p_r$
over $\mathbb{N}$ as follows: $p_l(i)=p(i)$ for all $i\leq X_{(j)}+1$ and all
$i\geq X_{(j+1)}$ and $p_l$ is linear over
$\{X_{(j)},...,X_{(j+1)}-1\}$, whereas $p_r(i)=p(i)$ for all $i\leq
X_{(j)}$ and all $i\geq X_{(j+1)}-1$ and $p_r$ is linear over
$\{X_{(j)}+1,...,X_{(j+1)}\}$. Setting $q(i)=\max\{p_l(i),p_r(i)\}$
for all $i\in\mathbb{N}$, we obtain that $q\in{\cal K}$ is piecewise linear
over $\{X_{(j)},...,X_{(j+1)}\}$ with at most two changes of slopes
over this interval and in case it has two changes of slopes, these
changes occur at consecutive points. We have $q(X_{(j)})=p(X_{(j)})$
for all $j$, and $q\leq p$ by convexity of $p$. Since $\tilde
p_n(i)>0$ if and only if $i=X_{(j)}$ for some $j$, this implies that
$Q_n(q)\leq Q_n(p)$ with a strict inequality if $p\neq q$. Therefore,
$p$ could be a minimizer of $Q_n$ only if $p=q$. This implies that the
minimizer $\hat p_n$ is piecewise linear over
$\{X_{(j)},\dots,X_{(j+1)}\}$ with at most two changes of slopes over
this interval. A similar argument shows that $\hat p_n$ is linear over
the interval $\{0,\dots, X_{(1)}+1\}$. \hfill{$\Box$}\\
\subsubsection*{Proof of Equation~(\ref{eq.theo2})}
We prove that (\ref{eq.theo2}) holds with $p_0$ replaced by any $q \in {\mathcal K}$ that belongs to $l_2$, i.e., that satisfies $\sum_{j\geq 0}q^2(j)<\infty$. Since
$p_{0}$ belongs to $l_1$ as a probability mass function and $l_1\subset l_2$, $p_0$ also belongs to $l_2$, so this is a slightly more general result than (\ref{eq.theo2}).
Consider an arbitrary $q \in {\mathcal K}$ satisfying
$\sum_{j\geq
0}q^2(j)<\infty$.
We have
\[
\sum_{j\geq 0}\big(q(j)-\tilde p_n(j)\big)^2\geq \sum_{j\geq 0}\big(q(j)-\hat p_n(j)\big)^2+2\sum_{j\geq 0}\big(\hat p_n(j)-\tilde p_n(j)\big)\big(q(j)-\hat p_n(j)\big)
\]
with a strict inequality in the case where $\tilde p_n$ is non-convex
since in that case, $\tilde p_n\neq \hat p_n$. Thus, in order to prove
that (\ref{eq.theo2}) holds with $p_0$ replaced by $q$, it suffices to prove that
\begin{equation}\label{eq: empir/constraint}
\sum_{j\geq 0}\big(\hat p_n(j)-\tilde p_n(j)\big)\big(q(j)-\hat p_n(j)\big)\geq0.
\end{equation}
According to Lemma~\ref{lem: affine}, there exist integers
$c_0<\dots<c_m$ such that $c_0=0$, $c_m=\widehat{s}_{n}+1$, $\hat p_n$ is linear
over the interval $\{c_{i-1},\dots,c_{i}\}$ and has a change of slope
at point $c_i$, for all $i=1,\dots,m$. It follows from Theorem
\ref{theo: LSE} that $\widehat{s}_{n} \geq \widetilde{s}_{n}$, so $\tilde p_n(j)=\hat
p_n(j)=0$ for all $j\geq \widehat{s}_{n}+1$ and the sum in (\ref{eq:
empir/constraint}) can be split as follows:
\begin{equation}\label{eq: split}
\sum_{j\geq 0}\big(\hat p_n(j)-\tilde p_n(j)\big)\big(q(j)-\hat
p_n(j)\big)= \sum_{i=1}^{m} \sum_{j=c_{i-1}}^{c_i-1}\big(\hat p_n(j)-\tilde p_n(j)\big)f(j)
\end{equation}
where $f(j)=q(j)-\hat p_n(j)$ for all $j\geq 0$. For all $i=1,\dots,m$ we have
\[\begin{split}
\sum_{j=c_{i-1}}^{c_i-1}&\big(\hat p_n(j)-\tilde p_n(j)\big)f(j)\\
&=\sum_{j=c_{i-1}}^{c_i-1}\left[\big(F_{\hat p_n}(j)-F_{\tilde p_n}(j)\big)-\big(F_{\hat p_n}(j-1)-F_{\tilde p_n}(j-1)\big)\right]f(j)\\
&=\sum_{j=c_{i-1}}^{c_i-1}\big(F_{\hat p_n}(j)-F_{\tilde p_n}(j)\big)f(j)-\sum_{j=c_{i-1}-1}^{c_i-2}\big(F_{\hat p_n}(j)-F_{\tilde p_n}(j)\big)f(j+1)\\
&=\sum_{j=c_{i-1}}^{c_i-1}\big(F_{\hat p_n}(j)-F_{\tilde p_n}(j)\big)\left(f(j)-f(j+1)\right)\\
&\quad +\big(F_{\hat p_n}(c_i-1)-F_{\tilde p_n}(c_i-1)\big)f(c_i)\\
&\quad\quad -\big(F_{\hat p_n}(c_{i-1}-1)-F_{\tilde p_n}(c_{i-1}-1)\big)f(c_{i-1}),
\end{split}\]
where $F_{\hat p_n}$ and $F_{\tilde p_n}$ are defined in (\ref{def: FH}).
By definition, $F_{\tilde p_n}(j)=F_{\hat p_n}(j)=0$ for all $j<c_0$, so summing up over $i$ yields
\begin{eqnarray*}
\sum_{i=1}^{m}\sum_{j=c_{i-1}}^{c_i-1}\big(\hat p_n(j)-\tilde
p_n(j)\big)f(j)&=&\sum_{i=1}^{m}\sum_{j=c_{i-1}}^{c_i-1}\big(F_{\hat
p_n}(j)-F_{\tilde p_n}(j)\big)\left(f(j)-f(j+1)\right)\\
& & +\big(F_{\hat p_n}(c_m-1)-F_{\tilde p_n}(c_m-1)\big)f(c_m),
\end{eqnarray*}
where we recall that $c_m=\widehat{s}_{n}+1$. Now, it follows from the definition
of $\widehat{s}_{n}$ that $\hat p_n(\widehat{s}_{n}+1)=0$ and we also have $\tilde
p_n(\widehat{s}_{n}+1)=0$ since $\widehat{s}_{n}\geq\widetilde{s}_{n}$, see Theorem~\ref{theo: LSE}. Thanks to
(\ref{eq: Ftilde=Fhat}), we conclude that $F_{\tilde
p_n}(\widehat{s}_{n})=F_{\hat p_n}(\widehat{s}_{n})$. Therefore, (\ref{eq:
split}) combined with the preceding display yields
\[\begin{split}\sum_{j\geq 0}&\big(\hat p_n(j)-\tilde p_n(j)\big)\big(q(j)-\hat p_n(j)\big)\\ &=\sum_{i=1}^{m}\sum_{j=c_{i-1}}^{c_i-1}\big(F_{\hat p_n}(j)-F_{\tilde p_n}(j)\big)\left(f(j)-f(j+1)\right).\end{split}\]
Now, $H_{\tilde p_n}(j)=H_{\hat p_n}(j)=0$ for all $j<c_0$ and $F_p(j)=H_p(j)-H_p(j-1)$ for $p=\tilde p_n,\hat p_n$ and all $j$, so we can repeat the same arguments as above to obtain
\[\begin{split}
\sum_{i=1}^m&\sum_{j=c_{i-1}}^{c_i-1}\big(F_{\hat p_n}(j)-F_{\tilde p_n}(j)\big)\left(f(j)-f(j+1)\right)\\
&=\sum_{i=1}^m\sum_{j=c_{i-1}}^{c_i-1}\big(H_{\hat p_n}(j)-H_{\tilde p_n}(j)\big)\left(f(j)-2f(j+1)+f(j+2)\right)\\
&\quad +\big(H_{\hat p_n}(c_m-1)-H_{\tilde p_n}(c_m-1)\big)(f(c_m)-f(c_m+1)).
\end{split}\]
Since $\hat p_n$ has a change of slope at each $c_i$, we deduce from Lemma \ref{lem: charact} that $H_{\hat p_n}(c_i-1)=H_{\tilde p_n}(c_i-1)$ for all $i=1,\dots,m$ and we arrive at
\begin{equation}\label{eq: splitH}\begin{split}
\sum_{j\geq 0}&\big(\hat p_n(j)-\tilde p_n(j)\big)\big(q(j)-\hat p_n(j)\big)\\
&=\sum_{i}\sum_{j=c_{i-1}}^{c_i-2}\big(H_{\hat p_n}(j)-H_{\tilde p_n}(j)\big)\left(f(j)-2f(j+1)+f(j+2)\right),\end{split}
\end{equation}
where the first sum on the right-hand side is taken over those
$i=1,\dots,m$ such that $c_{i-1}\leq {c_i-2}$. For such an $i$, $f$ is
convex over the interval $\{c_{i-1},\dots,c_i\}$ as a sum of a convex
function and a linear function (recall that by definition of the
$c_i$'s, $\hat p_n$ is linear over such an interval). Therefore we get
$$f(j)-2f(j+1)+f(j+2)\geq 0$$
for all $j=c_{i-1},\dots,c_i-2$, see (\ref{def:
convex(slope)}). Moreover, it follows from Lemma \ref{lem: charact}
that $H_{\hat p_n}\geq H_{\tilde p_n}$, which leads to
$$\big(H_{\hat p_n}(j)-H_{\tilde p_n}(j)\big)\left(f(j)-2f(j+1)+f(j+2)\right)\geq 0$$
for all $j=c_{i-1},\dots,c_i-2$. Combining this with (\ref{eq:
splitH}) yields (\ref{eq: empir/constraint}) and completes the proof
of the first part of the theorem.
\subsubsection*{Proof of Equations~(\ref{eq: tilde p non-convex}) and~(\ref{liminf.eq})}
It suffices to prove (\ref{eq: tilde p non-convex}) since the second
assertion follows from (\ref{eq: tilde p non-convex}) and~(\ref{eq.theo2}). To prove (\ref{eq: tilde p
non-convex}), note that
$$\P\Big(\tilde p_n\text{ is non-convex}\Big)\geq \P\left(\tilde p_n(i)-2\tilde p_n(i+1)+\tilde p_n(i+2)<0\right)$$
and that by assumption, we have
$p_0(i)-2p_0(i+1)+p_0(i+2)=0$. Therefore, we have the following inequality:
\[\begin{split}
&\P\Big(\tilde p_n\text{ is non-convex}\Big)\\
&\quad \geq
\P\left(\sqrt n\Big[ (\tilde p_n(i)-p_0(i))-2(\tilde p_n(i+1)-p_0(i+1))+(\tilde p_n(i+2)-p_0(i+2))\Big]<0\right).
\end{split}\]
From the central limit theorem, the random variable
$$\sqrt n\Big[ (\tilde p_n(i)-p_0(i))-2(\tilde p_n(i+1)-p_0(i+1))+(\tilde p_n(i+2)-p_0(i+2))\Big]$$
converges, as $n\to\infty$, to a centered Gaussian variable $X$ with a non-degenerate variance and therefore,
$$\liminf_{n\to\infty}\P\left(\tilde p_n\text{ is non-convex}\right)\geq \P(X\leq 0).$$
The result follows since $\P(X\leq 0)=1/2$. \hfill{$\Box$}
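This asymptotic lower bound is easy to illustrate numerically. The sketch below (our illustration, not part of the proof) draws repeated samples of size $n$ from the triangular pmf $p_0=T_5$, which is linear on its support so that $p_0(1)-2p_0(2)+p_0(3)=0$, and records how often the empirical pmf violates convexity at $i=1$; the observed fraction should be close to $1/2$.

```python
import random

# p0 = T_5: triangular pmf T_j(i) = 2(j - i)/(j(j + 1)) on i = 0, ..., j - 1;
# it is linear on its support, so p0(1) - 2*p0(2) + p0(3) = 0.
j = 5
p0 = [2.0 * (j - i) / (j * (j + 1)) for i in range(j)]

random.seed(0)
n, reps = 1000, 1500
support = list(range(j))
nonconvex = 0
for _ in range(reps):
    counts = [0] * j
    for x in random.choices(support, weights=p0, k=n):
        counts[x] += 1
    # empirical second difference at i = 1 (up to the common factor 1/n)
    if counts[1] - 2 * counts[2] + counts[3] < 0:
        nonconvex += 1

frac = nonconvex / reps
print(frac)  # close to 1/2, as predicted by the central limit theorem
```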
\subsection{Proof of Theorem~\ref{moments.th}}
Let us first note that for any positive concave
function $q$ defined on $\mathbb{N}$, such that $q(\widehat{s}_{n})>0$ and $q(i) =0$ for all $i >
\widehat{s}_{n}$, the function $\widehat{p}_{n} - \varepsilon q$
belongs to ${\cal K}$ as soon as $\varepsilon \leq
\widehat{p}_{n}(\widehat{s}_{n})/q(\widehat{s}_{n})$.
Besides, thanks to Theorem~\ref{theo: LSE}, we know that $\widehat{p}_{n}
=\mbox{ Argmin}_{f \in {\cal K }} Q_n(f)$. Therefore for all $q$ defined as
above,
\begin{eqnarray*}
0 &\leq &\lim_{\varepsilon \searrow 0} \frac{Q_n(\hat p_n - \varepsilon q)-Q_n(\hat p_n)}{\varepsilon}\\
& = &\sum_{i=0}^{\hat s_n }(\tilde p_n(i) -\hat p_n(i))q(i).
\end{eqnarray*}
Let $u\geq 1$, $0\leq a \leq \widehat{s}_{n}$, and take
\begin{equation*}
q(i) = \left(1-
\left(\frac{|i-a|}{\widehat{s}_{n}+1-a}\right)^{u}\right), \mbox{
for } 1 \leq i
\leq \widehat{s}_{n}, \end{equation*}
and $q(i)=0$ for $i > \widehat{s}_{n}$.
Then we get the inequality in Equation~(\ref{moments.ineq}).
The proof of $\sum_{i=1}^{\widetilde{s}_{n}} i
\widetilde{p}_{n}(i)=\sum_{i=1}^{\widehat{s}_{n}} i
\widehat{p}_{n}(i)$ follows from the fact that, for $\varepsilon>0$ small enough, both
$\widehat{p}_{n} + \varepsilon q$ and $\widehat{p}_{n} - \varepsilon q$ belong to ${\cal K}$ for
\begin{equation*}
q(i) = \left(1 - \frac{i}{\widehat{s}_{n}+1}\right) \mbox{
for } 1 \leq i
\leq \widehat{s}_{n}, \mbox{ and } q(i)=0, \mbox{ for } i > \widehat{s}_{n},
\end{equation*}
so that $\sum_{i}\big(\tilde p_n(i)-\hat p_n(i)\big)q(i)=0$; since $\tilde p_n$ and $\hat p_n$ are both probability mass functions, this yields the claimed identity.
It remains to prove that $\hat p_{n}(0)\geq \tilde p_{n}(0)$. Argue by
contradiction and assume that $\hat p_{n}(0)< \tilde p_{n}(0)$. Define
$q(0)=\tilde p_{n}(0)$ and $q(i)=\hat p_{n}(i)$ for all $i\geq
1$. Then, $q\in {\cal K}$ since $\hat p_{n}$ is convex and $q(0)\geq
\hat p_{n}(0)$, and we have $Q_{n}(q)<Q_{n}(\hat p_{n})$. This is a
contradiction since $\hat p_{n}$ minimizes $Q_{n}$ over ${\cal K}$,
see Theorem \ref{theo: LSE}. This completes the proof of the
theorem.
\subsection{Proof of Theorem~\ref{theo: mixture k}}
Assume $f\in{\cal K}$ and consider the function $\pi$ defined by
(\ref{eq: pi/p}). The function $\pi$ takes non-negative values since
$f$ is convex, see (\ref{def: convex(slope)}). Therefore $\pi$
belongs to ${\cal M}$. Moreover, for all $i\in\mathbb{N}$ we have
\begin{eqnarray*}
\sum_{j\geq i+1} \pi_jT_j(i)&=&\sum_{j\geq i+1} \big( f(j+1)+f(j-1)-2f(j)\big)(j-i)\\
&=&\sum_{j\geq i+1} \sum_{k=1}^{j-i}\big( f(j+1)+f(j-1)-2f(j)\big).\\
\end{eqnarray*}
Since all terms in the sum are non-negative and $\lim_{i\to\infty}f(i)=0$, we can write
\begin{eqnarray*}
\sum_{j\geq i+1} \pi_jT_j(i)&=&\sum_{k\geq 1} \sum_{j\geq i+k}\big( f(j+1)+f(j-1)-2f(j)\big)\\
&=&\sum_{k\geq 1}\big( f(i+k-1)-f(i+k)\big)\\
&=&f(i)
\end{eqnarray*}
for all $i\in\mathbb{N}$. Therefore, $\pi\in{\cal M}$ satisfies (\ref{eq:
mixture k}). Conversely, every $f:\mathbb{N}\to [0,\infty)$ satisfying
(\ref{eq: mixture k}) for some $\pi\in{\cal M}$ is clearly convex, so
we obtain the first assertion of the theorem. To prove the second and
the third assertions, we assume that $f\in{\cal K}$. So, in view of the
preceding result, we know that $f$ satisfies (\ref{eq: mixture k}) for
some $\pi\in{\cal M}$. Thus, we have
\begin{eqnarray*}
f(i-1)-f(i)&=&\sum_{j\geq i}\pi_{j} \big (T_{j}(i-1)-T_{j}(i)\big)\\
&=&\sum_{j\geq i}\frac{2\pi_{j}}{j(j+1)}
\end{eqnarray*}
for all $i\geq 1$. By convexity of $f$ we conclude that
\begin{eqnarray*}
0\leq \big(f(i-1)-f(i)\big)-\big(f(i)-f(i+1)\big)=\frac{2\pi_{i}}{i(i+1)}
\end{eqnarray*}
for all $i\geq 1$, which implies that $\pi$ is uniquely defined by (\ref{eq: pi/p}). Moreover,
$$\sum_{i\geq 0}f(i)=\sum_{i\geq 0}\sum_{j\geq i+1}\pi_jT_j(i)=\sum_{i\geq 0}\sum_{j\geq 1}\pi_jT_j(i)$$
since $T_j(i)=0$ for all $j\leq i$. This implies that
$$\sum_{i\geq 0}f(i)=\sum_{j\geq 1}\pi_j\big(\sum_{i\geq 0}T_j(i)\big)$$
where $\sum_{i\geq 0}T_j(i)=1$. This completes the proof of the theorem.
\hfill{$\Box$}
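The inversion formula (\ref{eq: pi/p}) and the mixture representation are easy to check numerically. The snippet below, a sketch with an arbitrarily chosen two-component mixture, recovers the mixing weights from the discrete second differences of $f$ and reconstructs $f$ from them.

```python
def T(j, i):
    """Triangular pmf T_j, supported on {0, ..., j - 1}."""
    return 2.0 * (j - i) / (j * (j + 1)) if 0 <= i < j else 0.0

M = 12  # a grid large enough to contain the support
# an arbitrary convex pmf: f = 0.5 T_3 + 0.5 T_5
f = [0.5 * T(3, i) + 0.5 * T(5, i) for i in range(M)]

# inversion formula: pi_j = (j(j + 1)/2) (f(j + 1) + f(j - 1) - 2 f(j))
pi = {j: j * (j + 1) / 2.0 * (f[j + 1] + f[j - 1] - 2.0 * f[j])
      for j in range(1, M - 1)}

# reconstruction: f(i) = sum_j pi_j T_j(i)
g = [sum(pi[j] * T(j, i) for j in pi) for i in range(M)]

print(pi[3], pi[5])  # both 0.5 up to rounding: the weights are recovered
```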
\subsection{Proof of Theorem~\ref{theo: Algo}} \label{proofAlgo.st}
The theorem is proved following the work of Groeneboom et
al.~\cite{GJW08}. It follows from Lemmas~\ref{lem12} and~\ref{lem13}
given below.
\begin{lem}\label{lem12}
Let $\widetilde{s}_{n}$ be the
maximum of the support
of $\widetilde{p}_{n}$ and $L \geq \widetilde{s}_{n}+1$. Then we have the following
result:
$\widehat{\pi}^{L} = \arg\min_{\mu \in {\mathcal M}^{L}}
\Psi_{n}(\mu)$ is equivalent to
\begin{equation}
\left[d_{j}(\Psi_{n})\right](\widehat\pi^{L}) \geq 0 \; \forall 1
\leq j \leq L, \mbox{ and } \left[d_{j}(\Psi_{n})\right](\widehat\pi^{L}) = 0
\; \forall j \in \mbox{Supp}(\widehat\pi^{L}) \label{P2.eq}
\end{equation}
\end{lem}
\paragraph{Proof.}
Let $\widehat\pi^{L} = \arg\min_{\mu \in {\mathcal M}^{L}}
\Psi_{n}(\mu)$.
For all $1 \leq j \leq L$ and $\varepsilon >0$, $\widehat\pi^{L} +
\varepsilon \delta_{j} \in {\mathcal M}^{L}$, and
$\Psi_{n}(\widehat\pi^{L} + \varepsilon \delta_{j}) \geq
\Psi_{n}(\widehat\pi^{L})$. It follows that
$\left[d_{j}(\Psi_{n})\right](\widehat\pi^{L}) \geq 0$.
If $j \in \mbox{Supp}(\widehat\pi^{L})$, then for $\varepsilon > 0$
small enough, $\widehat\pi^{L} -\varepsilon \delta_{j} \in {\mathcal
M}^{L}$, and $\Psi_{n}(\widehat\pi^{L} - \varepsilon \delta_{j}) \geq
\Psi_{n}(\widehat\pi^{L})$. It follows that
$-\left[d_{j}(\Psi_{n})\right](\widehat\pi^{L}) \geq 0$.
Conversely, assume that Equation~(\ref{P2.eq}) is satisfied, and
take $\pi
\in {\mathcal M}^{L}$. Then $\left[D_{\pi}(\Psi_{n})\right](\widehat\pi^{L})$
is non-negative and
$\left[D_{\widehat\pi^{L}}(\Psi_{n})\right](\widehat\pi^{L})=0$,
thanks to Equation~(\ref{eqD}).
By convexity of $\Psi_{n}$, for $\varepsilon>0$
\begin{equation*}
\Psi_{n}(\pi) - \Psi_{n}(\widehat\pi^{L}) \geq \frac{1}{\varepsilon} \left(
\Psi_{n}(\varepsilon \pi + (1-\varepsilon) \widehat\pi^{L}) - \Psi_{n}(\widehat\pi^{L})\right).
\end{equation*}
Taking the limit when $\varepsilon$ tends to 0, we get
\begin{eqnarray*}
\Psi_{n}(\pi) - \Psi_{n}(\widehat\pi^{L})
&\geq &
\left[D_{\pi-\widehat\pi^{L}}(\Psi_{n})\right](\widehat\pi^{L}) \\
&=& \left[D_{\pi}(\Psi_{n})\right](\widehat\pi^{L}) \geq 0.
\end{eqnarray*}\hfill{$\Box$}
\begin{lem}\label{lem13}
Let us define the following quantities.
\begin{itemize}
\item Let $\pi=\sum_{i=1}^{L-1} a_{i} \delta_{j_{i}}$ be the minimizer
of $\Psi_{n}$ over the set of positive measures spanned by
$\left\{ \delta_{j_{i}},\; 1 \leq i \leq L-1\right\}$.
\item Let $j_{L}$ be an integer such that $j_L \neq j_i$ for all $i=1,
\ldots, L-1$, and $\left[d_{j_{L}}(\Psi_{n})\right](\pi) <0$.
\item Let $\pi^{\star}
= \sum_{i=1}^{L} b_{i} \delta_{j_{i}}$ be the minimizer of $\Psi_{n}$ over the set spanned by
$\left\{ \delta_{j_{i}}, \; 1 \leq i \leq L\right\}$.
\end{itemize}
Then
$b_{L} >0$, and there exists $\varepsilon>0$ such that $\pi +
\varepsilon (\pi^{\star} - \pi)$ is a non-negative measure, and
such that $\Psi_{n}(\pi +\varepsilon (\pi^{\star} - \pi) )< \Psi_{n}(\pi)$.
\end{lem}
\paragraph{Proof.}
Following the same arguments as in the proof of Lemma~\ref{lem12}, we
have $\left[d_{j_{i}}(\Psi_{n})\right](\pi) = 0$ for all $i=1,
\ldots, L-1$. Then
\begin{equation*}
\left[D_{\pi}(\Psi_{n})\right](\pi) = \sum_{i=1}^{L-1}
a_{i} \left[d_{j_{i}}(\Psi_{n})\right](\pi) = 0.
\end{equation*}
Moreover, we have
\begin{equation*}
\Psi_{n}(\pi + \varepsilon \delta_{j_{L}}) = \Psi_{n}(\pi) +
\frac{\varepsilon^{2}}{2} + \varepsilon \left[d_{j_{L}}(\Psi_{n})\right](\pi).\end{equation*}
Therefore, $\left[d_{j_{L}}(\Psi_{n})\right](\pi) <0$ implies that for
$\varepsilon >0$ small enough,
\begin{equation*}
\Psi_{n}(\pi + \varepsilon \delta_{j_{L}}) < \Psi_{n}(\pi).
\end{equation*}
This shows that $\pi^{\star} \neq \pi$.
By convexity of $\Psi_{n}$, we show that
\begin{eqnarray*}
\lim_{\varepsilon \downarrow 0}
\frac{\Psi_{n}((1-\varepsilon)\pi + \varepsilon \pi^{\star}) -
\Psi_{n}(\pi) }{\varepsilon} & \leq &
\lim_{\varepsilon \downarrow 0}
\frac{(1-\varepsilon)\Psi_{n}(\pi) +
\varepsilon \Psi_{n}(\pi^{\star})-\Psi_{n}(\pi)}{\varepsilon} \\
&=&\Psi_{n}(\pi^{\star})-\Psi_{n}(\pi) <0.
\end{eqnarray*}
This shows that for $\varepsilon>0$ small enough, $\Psi_{n}(\pi + \varepsilon
(\pi^{\star}-\pi)) < \Psi_{n}(\pi)$.
Besides, we have
\begin{eqnarray*}
\lim_{\varepsilon \downarrow 0} \frac{1}{\varepsilon} \left(
\Psi_{n}((1-\varepsilon)\pi + \varepsilon \pi^{\star}) -
\Psi_{n}(\pi)\right) & = & \left[D_{\pi^{\star} -
\pi}(\Psi_{n})\right](\pi) \\
& = & \left[D_{\pi^{\star}}(\Psi_{n})\right](\pi) \\
& = & \sum_{i=1}^{L} b_{i} \left[d_{j_{i}}(\Psi_{n})\right](\pi)\\
& = & b_{L} \left[d_{j_{L}}(\Psi_{n})\right](\pi).
\end{eqnarray*}
Because $\left[d_{j_{L}}(\Psi_{n})\right](\pi) <0$, $b_{L}$ is
positive.
It remains to show that there exists $\varepsilon >0$ such that, for
all $1 \leq i \leq L-1$, $a_{i} + \varepsilon (b_{i}-a_{i})$ is
non-negative. This is clearly the case if $b_{i} \geq a_{i}$. If not, take
$\varepsilon \leq \min_{b_{i} <
a_{i}}\left\{a_{i}/(a_{i}-b_{i})\right\}$. \hfill{$\Box$}
\subsection{Proof of Theorem~\ref{theo:AlgoF}} \label{proofAlgoF.st}
Let us begin with the following lemma.
\begin{lem}\label{lem14}
If $\widehat{\pi}^{L} = \arg\min_{\mu \in {\mathcal M}^{L}}
\Psi_{n}(\mu)$, then for all $j \geq 1$,
\begin{equation*}
\left[d_{L+j}(\Psi_{n})\right](\widehat{\pi}^{L}) = b\left(\sum_{i=1}^{L} \widehat{\pi}^{L}_{i}
-1\right),
\end{equation*}
for some positive constant $b$ depending on $j$ and on the maximum of
the support of $\widehat{\pi}^{L}$.
\end{lem}
\paragraph{Proof.}
Let us consider two cases according to whether
$\widehat{\pi}^{L}_{L}$ equals 0 or not.
Suppose that $\widehat{\pi}^{L}_{L} > 0$, and write
\begin{eqnarray*}
\left[d_{L+j}(\Psi_{n})\right]({\widehat\pi}^{L}) & = & \sum_{l=1}^{L+j-1}
T_{L+j}(l) \left( \sum_{j'=l+1}^{L} {\widehat\pi}^{L}_{j'} T_{j'}(l) -
\widetilde{p}_{n}(l)\right) \\
& = & \sum_{l=1}^{L-1}
T_{L+j}(l) \left( \sum_{j'=l+1}^{L}\widehat\pi^{L}_{j'} T_{j'}(l) -
\widetilde{p}_{n}(l)\right).
\end{eqnarray*}
Because for $0 \leq l \leq L-1$, $T_{L+j}(l) = a T_{L}(l) + b $, for
constants $a$ and $b$ depending on $L$ and $j$, we get
\begin{equation*}
\left[d_{L+j}(\Psi_{n})\right](\widehat\pi^{L}) = a
\left[d_{L}(\Psi_{n})\right](\widehat\pi^{L}) + b \left[\sum_{l=1}^{L-1}
\sum_{j'=l+1}^{L} \widehat\pi_{j'}^{L} T_{j'}(l) -1\right].
\end{equation*}
Following Lemma~\ref{lem12}, $\left[d_{L}(\Psi_{n})\right](\widehat\pi^{L})
=0$, and we get
\begin{equation*}
\left[d_{L+j}(\Psi_{n})\right](\widehat\pi^{L}) = b \left( \sum_{i=1}^{L}\widehat\pi^{L}_{i} \sum_{l=0}^{j-1}T_{j}(l)
-1 \right) = b \left( \sum_{i=1}^{L}\widehat\pi^{L}_{i}-1 \right).
\end{equation*}
If $\widehat{\pi}^{L}_{L} =0$, then $\widehat{\pi}^{L} \in {\mathcal M
}^{L_1}$ for some $L_{1} < L$. Thanks to Lemma~\ref{lem12}, we know
that $\widehat{\pi}^{L}$ is the minimizer of $\Psi_{n}$ over
${\mathcal M}^{L_1}$. Then we can show that
$\left[d_{L_{1}+j}(\Psi_{n})\right](\widehat\pi^{L}) =0 $ for all
$j\geq 1$ exactly as we have done in the case $\widehat{\pi}^{L}_{L} > 0$. \hfill{$\Box$}
\paragraph{~}
To conclude the proof of Theorem~\ref{theo:AlgoF}, note first that for all $L'\leq L$, we have ${\mathcal M}^{L'} \subset
{\mathcal M}^{L}$, which implies
\begin{equation}\label{eq: hatpihatL}\Psi_{n}(\widehat{\pi}^{L}) \leq \Psi_{n}(\widehat{\pi}^{L'}).
\end{equation}
Second, it follows from Lemmas~\ref{lem12} and~\ref{lem14} that if $\sum_{i=1}^{L} \widehat{\pi}^{L}_{i}=1$, then for all
$L'\geq L$, $\widehat{\pi}^{L'}=\widehat{\pi}^{L}$. Therefore
Equation~(\ref{eq: hatpihatL})
holds for all positive integers $L'$, which
implies that $$\Psi_{n}(\widehat{\pi}^{L}) \leq \Psi_{n}(\pi)$$ for
all measures $\pi\in\cal M$ with a finite support. Therefore
$\widehat{\pi}^{L} = \widehat{\pi}_{n}$. \hfill{$\Box$}
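Thanks to the mixture representation of Theorem~\ref{theo: mixture k}, the constrained LSE can also be computed by minimizing $Q_n\big(\sum_{j=1}^{L}\pi_jT_j\big)$ over non-negative weights $\pi$, for $L$ large enough. The sketch below does this with plain cyclic coordinate descent; it is a naive stand-in for the support-reduction algorithm analysed above, for illustration only, with the data vector and the grid size $L$ chosen arbitrarily.

```python
def T(j, i):
    # triangular pmf T_j(i) = 2(j - i)/(j(j + 1)) on {0, ..., j - 1}
    return 2.0 * (j - i) / (j * (j + 1)) if 0 <= i < j else 0.0

# a non-convex "empirical" pmf to be projected onto the cone K
p_tilde = [0.25, 0.10, 0.20, 0.15, 0.10, 0.08, 0.06, 0.04, 0.02]
L = 15                                   # grid of triangular components
m = 15                                   # evaluation points 0, ..., m - 1
y = p_tilde + [0.0] * (m - len(p_tilde))
A = [[T(j, i) for j in range(1, L + 1)] for i in range(m)]
H = [sum(A[i][j] ** 2 for i in range(m)) for j in range(L)]

# cyclic coordinate descent on 0.5 * ||A pi - y||^2 subject to pi >= 0;
# each step minimizes exactly in one coordinate, so the objective never increases
pi = [0.0] * L
r = [-v for v in y]                      # residual A pi - y
for _ in range(2000):
    for j in range(L):
        g = sum(A[i][j] * r[i] for i in range(m))
        d = max(0.0, pi[j] - g / H[j]) - pi[j]
        if d != 0.0:
            pi[j] += d
            for i in range(m):
                r[i] += d * A[i][j]

p_hat = [sum(A[i][j] * pi[j] for j in range(L)) for i in range(m)]
print(sum(p_hat))  # close to 1: the fitted LSE is a probability mass function
```

Any non-negative mixture of the $T_j$ is automatically convex, so the fitted $\hat p$ is convex by construction, in line with the characterization above.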
\bibliographystyle{elsarticle-harv}
\section{Introduction}
Gravitational infall is a key part of the massive-star formation process, contributing via competitive accretion \citep{2004MNRAS.349..735B} and core accretion \citep{2003ApJ...585..850M}. These processes occur in high-mass young stellar objects (YSOs) at early evolutionary stages and persist into the later UCHII-region stage \citep{2003ApJ...599.1196K,2005ApJ...630..987S}. As shown by theoretical works \citep[e.g.][]{1996ApJ...462..874J,2002ApJ...569..846Y,2009ApJ...699..230G}, infall motion is critical for initiating high-mass star formation, and it maintains the accretion flows that feed the growing stellar mass during subsequent evolutionary stages. Simulations, theory and observations are converging on the idea that the collapse and outflow phenomenon is universal, covering the full range of stellar masses from brown dwarfs to massive stars. Moreover, infall and outflow motions should be closely related and interact with each other throughout the star formation history \citep{2014arXiv1401.2219L}. Infall candidates therefore serve as good sources in which to study massive star formation and gas dynamics in molecular cores.
Infall-motion surveys of high-mass star-forming regions have been reported in several recent papers \citep{2007ApJ...669L..37W,2007ApJ...663.1092K,2009MNRAS.392..170S,2011ApJ...740...40R,2012A&A...538A.140K,2013A&A...549A...5R}. However, high-mass infall candidates are still few. Further observations are needed to better constrain the physical properties of the infall, including its spatial distribution, mass infall rate and chemical effects, and to understand its relation to other dynamical processes, including outflow, disk accretion and core fragmentation \citep{2012MNRAS.422.1098R}.
In this paper, 405 compact sources have been selected based on the results of the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) \citep{2009A&A...504..415S} and the Millimetre Astronomy Legacy Team 90 GHz (MALT90) survey \citep{2013PASA...30...57J}. Infall motions toward these sources are identified and studied using two optically thick lines, HCO$^{+}$(1-0) and HNC(1-0), and one optically thin line, N$_{2}$H$^{+}$(1-0). The mapping observations of these surveys allow the identification of the most probable infall candidates. A brief introduction to the ATLASGAL survey and the sample selection is presented in Section 2.1, to the MALT90 and Spitzer surveys in Sections 2.2 and 2.3, and to the classification of the sample sources in Section 2.4. In Section 3 the kinematic distances, peak column densities and masses of the sources are calculated, and in Section 4 the infall candidates are identified and investigated. The conclusions are summarized in Section 5.
\section{DATA}
\subsection{The ATLASGAL survey}
ATLASGAL is the first systematic survey of the inner Galactic plane at sub-millimetre wavelengths. Carried out with the 12 m APEX telescope, the survey aimed to study the continuum emission from the highest-density regions of dust at 345 GHz. The angular resolution of the APEX telescope at this frequency is 19$^{\prime\prime}$.2. The typical pointing r.m.s. error was measured to be $\sim$4$^{\prime\prime}$, and the r.m.s. noise of the images is 50-70 mJy beam$^{-1}$ \citep{2009A&A...504..415S}.
\citet{2013A&A...549A..45C} published a catalog containing 6639 compact sources located in the range $330^{\circ}$ $\leq$ $l$ $\leq$ $21^{\circ}$ and $|b|$ $\leq$ $1.5^{\circ}$. The data from this catalog have been complemented with observations from the MALT90 survey (years 1 and 2) \citep{2013PASA...30...57J}. The source sample has been selected using the following criteria: (i) detected N$_{2}$H$^{+}$(1-0), HNC(1-0), and HCO$^{+}$(1-0) emission with a signal-to-noise ratio larger than 3; (ii) a peak flux at 870 $\mu$m above 6$\sigma$, which corresponds to a flux density of $\sim$ 0.4 Jy beam$^{-1}$; and (iii) an angular distance between any two sources larger than the Mopra beam size (36$^{\prime\prime}$ at 90 GHz). The last criterion ensures that the sources are not contaminated by emission from an adjacent clump. Using these criteria, 405 compact sources with well-detected molecular lines have been selected as the sample.
\subsection{The MALT90 survey}
The MALT90 survey is a large international project to obtain molecular line maps in order to characterize the physical and chemical conditions of high-mass star-forming regions over a wide range of evolutionary stages. Its sample is a sub-sample of the ATLASGAL catalog. The angular resolution of the 22 m Mopra radio telescope at 90 GHz is 36$^{\prime\prime}$ \citep{2013PASA...30...57J}. The MALT90 data have been obtained from the online archive\footnote{http://atoa.atnf.csiro.au/MALT90/} and reduced with the GILDAS (Grenoble Image and Line Data Analysis Software) package.
\subsection{The Spitzer surveys}
The Galactic Legacy Infrared Mid-Plane Survey Extraordinaire (GLIMPSE) is a Spitzer/IRAC Legacy survey of the Galactic mid-plane \citep{2003PASP..115..953B,2009PASP..121..213C} at 3.6, 4.5, 5.8, and 8.0 $\mu$m. The angular resolution is better than 2$^{\prime\prime}$ at all wavelengths. The MIPS/Spitzer Survey of the Galactic Plane (MIPSGAL) is a survey of the Galactic plane at 24 and 70 $\mu$m using the Multiband Imaging Photometer for Spitzer (MIPS) aboard the Spitzer Space Telescope. The angular resolutions at 24 and 70 $\mu$m are 6$^{\prime\prime}$ and 18$^{\prime\prime}$, respectively \citep{2009PASP..121...76C}. The highly reliable point source catalogs released by the GLIMPSE survey and the mosaicked 24 $\mu$m images of MIPSGAL have been used in the following analysis.
\subsection{Classification}
Clumps containing objects (in projection) that obey either of the following criteria were assumed to host star-forming activity \citep{2008ApJ...674..336G}: a point source with [4.5] $-$ [5.8] $>$ 1.0 and a detection at 8 $\mu$m, or a point source with [4.5] $-$ [5.8] $>$ 0.7, [3.6] $-$ [4.5] $>$ 0.7 and a detection at 8 $\mu$m. Any clump associated with a 24 $\mu$m point source was also assumed to host a forming star. As a MIPSGAL 24 $\mu$m point source catalog has not been published, the STARFINDER algorithm \citep{2000A&AS..147..335D} has been used to search the 24 $\mu$m MIPSGAL images for point sources, extracting those with a S/N ratio better than 7. This resulted in 280 clumps associated with star formation. Of these 280 star-forming clumps, 78 are associated with IRAS sources, and 64 of these 78 satisfy the criteria for being a UCHII region \citep{1989ApJS...69..831W}. Fifteen of the 280 star-forming clumps were identified as UCHII regions in the literature (see the last column of Table 1 for the corresponding references). The remaining 201 star-forming clumps were classified as proto-stellar clumps. Eighty-four clumps showing no star-formation signatures were classified as pre-stellar clumps. Column 10 of Table 1 lists the evolutionary stages of all clumps in our sample. It should be noted that no attempt has been made to determine the evolutionary stage of 41 clumps associated with saturated 24 $\mu$m sources or photo-dissociation regions; these clumps are indicated as ``Non'' in Table 1.
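For concreteness, the two colour criteria can be transcribed directly into code. The function below is our own transcription (the names are ours); the colours are simple differences of the IRAC band magnitudes, e.g. [4.5] $-$ [5.8].

```python
def hosts_star_formation(m36, m45, m58, detected_8um):
    """Colour criteria used above (after Gutermuth et al. 2008).

    m36, m45, m58: IRAC [3.6], [4.5], [5.8] magnitudes of a point source.
    detected_8um:  whether the source is detected at 8 micron.
    """
    red_1 = (m45 - m58) > 1.0 and detected_8um
    red_2 = (m45 - m58) > 0.7 and (m36 - m45) > 0.7 and detected_8um
    return red_1 or red_2

# a red source detected at 8 micron satisfies the first criterion
print(hosts_star_formation(14.0, 13.0, 11.5, True))   # True
# a source with flat colours does not qualify
print(hosts_star_formation(12.0, 12.0, 12.0, True))   # False
```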
\begin{table*}
\scriptsize
\centering
\begin{minipage}{175mm}
\caption{Examples of the derived clump parameters. The columns are as follows: (1) and (2) ATLASGAL and clump names; (3) peak submillimetre flux density; (4) integrated submillimetre flux density; (5) heliocentric distance; (6) reference for the distance; (7) effective physical radius; (8) column density; (9) clump mass derived from the integrated 870$\,\umu$m emission; (10) Spitzer classification; and (11) reference for the classification. The full table is available online.}
\begin{tabular}{rrrrrrrrrrrrrrrrrrrrrrrr}
\hline
\multicolumn{1}{c}{ATLASGAL} & \multicolumn{1}{c}{Clump$^{\rm{a}}$} & \multicolumn{1}{c}{Peak flux} & \multicolumn{1}{c}{Int. flux} & \multicolumn{2} {c} {Distance} & \multicolumn{1}{c}{Radius} & \multicolumn{1}{c}{Log($N(H_2\,)$)} & \multicolumn{1}{c}{Log($M_{clump}\,$)} & \multicolumn{2}{c}{$Spitzer$ $classification^{\rm{b}}$}\\
\multicolumn{1}{c}{name} & \multicolumn{1}{c}{name} & \multicolumn{1}{c}{($Jy\ beam^{-1}\,$)} & \multicolumn{1}{c}{($Jy$)} & \multicolumn{1}{c}{($kpc$)} & \multicolumn{1}{c}{Ref.} & \multicolumn{1}{c}{($pc$)} & \multicolumn{1}{c}{($cm^{-2}\,$)} & \multicolumn{1}{c}{($M_{\odot}$)\,} & & \multicolumn{1}{c}{Ref.}\\
\multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)} & \multicolumn{1}{c}{(9)} & \multicolumn{1}{c}{(10)} & \multicolumn{1}{c}{(11)}\\
\hline
\input{tableA1.dat}
\hline
\end{tabular}
\medskip
$^{\rm{a}}$ Sources are named by galactic coordinates of ATLASGAL sources: An $\ast$ indicates infall candidates.\\
$^{\rm{b}}$ A $\dagger$ indicates source with IRAS counterpart.\\
References --- Distance: (1) \citet{2013MNRAS.431.1752U}, (2) \citet{2013MNRAS.435..400U}, (3) this paper, (4) \citet{2005A&A...429..945M}, (5) \citet{2013A&A...550A..21S}, (6) \citet{2004A&A...426...97F}, (7) \citet{2012ApJS..202....1H}, (8) \citet{2012A&A...547A..49R}, (9) \citet{2003A&A...397..133R}, (10) \citet{2006MNRAS.366.1096B}, (11) \citet{1998A&AS..132..211H}, (12) \citet{1982ApJS...49..183B}.\\
References --- Spitzer classification: (1) \citet{1994ApJS...91..347B}, (2) \citet{2000ApJ...530..371F}, (3) \citet{1997MNRAS.291..261W}, (4) \citet{2011ApJS..194...32A}, (5) \citet{2001ApJ...549..979K}, (6) \citet{1987A&A...181..378C}, (7) \citet{2009A&A...501..539U}, (8) \citet{2007ApJ...669L..37W}\\
\end{minipage}
\end{table*}
\normalsize
\section{physical properties of the sample}
\subsection{Kinematic distances}
Distances to 132 clumps have been obtained from the literature. For the other 273 clumps, the Galactic rotation model of \citet{2009ApJ...700..137R} and the radial velocities of N$_{2}$H$^{+}$(1-0) were used to estimate kinematic distances. If a source is located outside the solar circle (i.e. $>$ 8.5 kpc from the Galactic Centre) or at a tangent point, a unique distance is obtained. However, if a source is located within the solar circle (i.e. $<$ 8.5 kpc from the Galactic Centre), two possible distances are obtained (one near, one far). This degeneracy is commonly referred to as the kinematic distance ambiguity (KDA). Here the HI self-absorption technique \citep{2006MNRAS.366.1096B} is used to resolve the KDA. For this purpose HI spectra have been extracted from the Southern Galactic Plane Survey \citep[SGPS:][]{2005ApJS..158..178M} and the VLA Galactic Plane Survey \citep[VGPS:][]{2006AJ....132.1158S}. If a clump is at the near distance, the cold and dense HI it contains will absorb the warmer background HI line emission, so the HI spectrum will show self-absorption, whereas a clump at the far distance will not display self-absorption as there is no background emission to absorb \citep{2006MNRAS.366.1096B}. Using this method, the kinematic distances to 257 clumps were determined (see Table 1). Of the remaining 16 clumps, 14 are located within the solar circle but have no HI data, so their KDAs cannot be resolved. For the other two clumps, G331.374$-$0.314 and G353.019+0.976, although the observed N$_{2}$H$^{+}$(1-0) peak lies in a HI trough, HI spikes are present within the trough, making it impossible to resolve their KDAs. As an example of this procedure, Fig. 1 shows an unambiguous near-distance solution, with the N$_{2}$H$^{+}$(1-0) emission line coinciding with a HI absorption feature (right panel), and a far-distance clump with no HI absorption at the N$_{2}$H$^{+}$(1-0) peak velocity (left panel). The N$_{2}$H$^{+}$(1-0) and overlaid HI spectral profiles of all 273 clumps are available online.
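To make the ambiguity explicit, the sketch below solves for the near and far kinematic distances using a flat rotation curve with the Reid et al. (2009) constants ($R_0=8.4$ kpc, $\Theta_0=254$ km s$^{-1}$); this is a simplified stand-in for the full rotation model used in this paper, written for the first-quadrant sign convention.

```python
import math

R0, THETA0 = 8.4, 254.0   # kpc, km/s (Reid et al. 2009 values)

def kinematic_distances(l_deg, v_lsr):
    """Near and far heliocentric distances (kpc) for Galactic longitude
    l_deg (degrees) and LSR velocity v_lsr (km/s), flat rotation curve."""
    sl = math.sin(math.radians(l_deg))
    # Galactocentric radius from v_lsr = Theta0 * sin(l) * (R0/R - 1)
    R = R0 * THETA0 * sl / (THETA0 * sl + v_lsr)
    disc = R * R - (R0 * sl) ** 2
    if disc < 0.0:
        raise ValueError("velocity beyond the tangent point")
    root = math.sqrt(disc)
    d0 = R0 * math.cos(math.radians(l_deg))
    # inside the solar circle (R < R0) both roots are positive: the KDA
    return d0 - root, d0 + root

print(kinematic_distances(30.0, 50.0))  # ~ (2.95, 11.60) kpc
```

The two solutions are symmetric about the tangent-point distance $R_0\cos l$, which is why independent information such as HI self-absorption is needed to choose between them.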
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Distance_1.pdf}
\caption{Representative spectra for the near and far distances obtained using the HI self-absorption technique. The solid line represents the N$_{2}$H$^{+}$(1-0) spectra overlaid with the HI 21-cm data (bold dashed line), with the HI data scaled to the peak of N$_{2}$H$^{+}$(1-0). The vertical solid red line indicates the velocity of the clump. The left panel shows a far-distance object with no HI self-absorption at the N$_{2}$H$^{+}$(1-0) peak velocity, while the right panel shows a clear near-distance solution, with the N$_{2}$H$^{+}$(1-0) emission line coinciding with a HI absorption dip.}
\end{figure}
\subsection{Peak column density}
The peak flux density at 870 $\mu$m of each clump (Table 1) was used to derive the beam-averaged H$_{2}$ column density via $N_{\mathrm{H_2}}=\frac{S_\nu R}{B_\nu(T_D)\,\Omega\,\kappa_\nu\,\mu\,m_H}$, where $S_{\nu}$ is the peak 870 $\mu$m flux density, $R$ is the gas-to-dust mass ratio (assumed to be 100), $\Omega$ is the beam solid angle, $\mu$ is the mean molecular weight of the interstellar medium, assumed to be 2.8, $m_{H}$ is the mass of a hydrogen atom, $B_{\nu}(T_D)$ is the Planck function at dust temperature $T_{D}$, and $\kappa_{\nu}$ is the dust-absorption coefficient, taken as 1.85 cm$^{2}$ g$^{-1}$ \citep[interpolated to 870 $\mu$m from Col. 5 of Table 1 of][]{1994A&A...291..943O}. Following the results of \citet{2013ApJ...777..157H}, the temperatures of pre-stellar, proto-stellar and UCHII clumps are assumed to be 13.9 K, 17.9 K, and 26 K, respectively.
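For reproducibility, the column density formula above can be evaluated numerically as in the following sketch (not part of the original analysis); the $\sim$19.2$''$ ATLASGAL beam FWHM, the rounded SI constants and the example flux density are illustrative assumptions:

```python
import math

# Rounded SI constants (illustrative)
H = 6.626e-34      # Planck constant [J s]
K_B = 1.381e-23    # Boltzmann constant [J/K]
C = 2.998e8        # speed of light [m/s]
M_H = 1.674e-27    # hydrogen atom mass [kg]

def planck(nu_hz, t_k):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(H * nu_hz / (K_B * t_k))

def column_density(s_peak_jy, t_dust_k, beam_fwhm_arcsec=19.2,
                   kappa_cm2_g=1.85, gas_to_dust=100.0, mu=2.8):
    """Beam-averaged N(H2) in cm^-2 from an 870 um peak flux density."""
    nu = C / 870e-6                                     # observing frequency [Hz]
    theta = beam_fwhm_arcsec * math.pi / (180 * 3600)   # FWHM [rad]
    omega = math.pi * theta**2 / (4 * math.log(2))      # Gaussian beam solid angle [sr]
    s_si = s_peak_jy * 1e-26                            # Jy -> W m^-2 Hz^-1
    kappa_si = kappa_cm2_g * 0.1                        # cm^2/g -> m^2/kg
    n_m2 = s_si * gas_to_dust / (planck(nu, t_dust_k) * omega * kappa_si * mu * M_H)
    return n_m2 * 1e-4                                  # m^-2 -> cm^-2

# Median proto-stellar clump, 1.98 Jy/beam at 17.9 K -> ~5.9e22 cm^-2
print(f"{column_density(1.98, 17.9):.2e}")
```

With these assumptions the result lands close to the quoted proto-stellar median of 5.75$\times$10$^{22}$ cm$^{-2}$; small offsets come only from the rounded constants and the assumed beam.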
The distributions of the 870 $\mu$m peak flux densities for the pre-stellar, proto-stellar and UCHII clumps are shown in Fig. 2 (a); their median values are 1.13, 1.98 and 4.59 Jy beam$^{-1}$, respectively. This increase with evolutionary stage may be ascribed to the increasing dust temperature towards the center of each clump, where star-formation processes heat the dust. Fig. 2 (b) and (c) present the H$_{2}$ column density distributions of the pre-stellar, proto-stellar and UCHII region clumps, and of the complete sample, respectively. The median values of these distributions, indicated by the dashed black vertical lines, are 5.01$\times$10$^{22}$ cm$^{-2}$, 5.75$\times$10$^{22}$ cm$^{-2}$, and 7.94$\times$10$^{22}$ cm$^{-2}$, respectively. The H$_{2}$ column densities thus display an increasing trend with evolutionary stage. The mean, median, and standard deviation of each distribution are summarized in Table 5.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{H2_column_density_beam_averaged_all_2.pdf}
\caption{The 870 $\mu$m emission properties. (a): The distributions of 870 $\mu$m peak flux densities for all classified sources separated by evolutionary stage. (b): The distributions of beam-averaged H$_{2}$ column densities derived from 870 $\mu$m dust emission for all classified sources separated by evolutionary stage, where the vertical dashed black line indicates the median value for each stage. (c): The distribution of beam-averaged H$_{2}$ column densities for the whole sample. Median value is indicated by the dashed black vertical lines.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Mass_distribution_all_3.pdf}
\caption{The 870 $\mu$m dust mass determinations. (a): 870 $\mu$m integrated flux densities for all classified sources separated by evolutionary stage. (b): The dust mass distribution for all classified sources separated by evolutionary stage. (c): The dust mass distribution for the whole sample. Median values are indicated by the dashed black vertical lines.}
\end{figure}
\subsection{Clump mass}
Assuming that the dust emission is optically thin in the sub-millimeter continuum, masses of clumps with known distances can be calculated from the dust continuum emission via $M=\frac{D^{2} S_\nu R}{B_\nu(T_D)\,\kappa_\nu}$, where $S_{\nu}$ is the integrated 870 $\mu$m flux density, $D$ is the heliocentric distance to the source, and $R$, $B_{\nu}$, $T_{D}$, and $\kappa_{\nu}$ are the same as in Section 3.2. Fig. 3 (a) shows the distributions of the pre-stellar, proto-stellar and UCHII clumps as a function of the integrated 870 $\mu$m flux density. Their corresponding median values are 12.3, 14.98 and 33.06 Jy, respectively. The median values of the mass distributions of the pre-stellar, proto-stellar and UCHII clumps are 1905.5, 1737.8 and 2138.0 M$_{\odot}$, respectively (Fig. 3 (b)). The masses of the pre-stellar, proto-stellar and UCHII clumps show similar distributions. The statistical parameters of these distributions are summarized in Table 5. More than 96.6\% of all clumps have masses larger than 100 M$_{\odot}$, and the median value is 1905.5 M$_{\odot}$ (Fig. 3 (c)).
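A hedged numerical sketch of this mass estimate follows; the rounded SI constants and the illustrative choice of a 4 kpc distance for the median proto-stellar flux are assumptions, not values taken from Table 1:

```python
import math

H, K_B, C = 6.626e-34, 1.381e-23, 2.998e8   # rounded SI constants
M_SUN = 1.989e30                             # solar mass [kg]
PC = 3.086e16                                # parsec [m]

def planck(nu_hz, t_k):
    """Planck function B_nu(T) in W m^-2 Hz^-1 sr^-1."""
    return 2.0 * H * nu_hz**3 / C**2 / math.expm1(H * nu_hz / (K_B * t_k))

def clump_mass(s_int_jy, d_kpc, t_dust_k,
               kappa_cm2_g=1.85, gas_to_dust=100.0):
    """Clump gas mass in solar masses from the integrated 870 um flux."""
    nu = C / 870e-6                          # observing frequency [Hz]
    d_m = d_kpc * 1e3 * PC                   # distance [m]
    s_si = s_int_jy * 1e-26                  # Jy -> W m^-2 Hz^-1
    kappa_si = kappa_cm2_g * 0.1             # cm^2/g -> m^2/kg
    m_kg = d_m**2 * s_si * gas_to_dust / (planck(nu, t_dust_k) * kappa_si)
    return m_kg / M_SUN

# e.g. the median proto-stellar flux of 14.98 Jy placed at an assumed 4 kpc
# and 17.9 K gives roughly 1.6e3 Msun, the order of the quoted medians
print(round(clump_mass(14.98, 4.0, 17.9)))
```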
\citet{2010ApJ...716..433K} found that high-mass star-forming regions obey the mass-size relationship $m(r)\geq580\,{\rm M}_{\odot}(R_{\rm eff}\,{\rm pc}^{-1})^{1.33}$, which was confirmed by \citet{2013MNRAS.431.1752U}. Hence this relation may provide a suitable criterion for massive star formation. In our sample, 375 (96.4\%) clumps have masses larger than the limiting mass for their size, making them promising sites of high-mass star formation. Fig. 4 displays the mass-size relationship for the 389 clumps, all of which are spatially resolved by the APEX beam. The dashed red line indicates the least-squares fit to these 389 clumps, expressed as the empirical relation $\log(M_{\rm clump})=3.41\pm0.01+(1.78\pm0.03)\times\log(R_{\rm eff})$, with a correlation coefficient of 0.94. The upper and lower red diagonal lines indicate constant surface densities, $\Sigma$(gas), of 1 g cm$^{-2}$ and 0.05 g cm$^{-2}$, respectively. All sources are located in the region $\Sigma$(gas) $>$ 0.05 g cm$^{-2}$, consistent with the result of \citet{2013MNRAS.431.1752U}. Therefore, high-mass star formation may take place when the surface density is larger than 0.05 g cm$^{-2}$.
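The mass-size criterion and the surface-density check can be sketched as follows (an illustrative implementation only; the example clump of 1900 M$_{\odot}$ within 0.5 pc is an assumption chosen near the sample medians, not a source from Table 1):

```python
import math

M_SUN_G = 1.989e33   # solar mass [g]
PC_CM = 3.086e18     # parsec [cm]

def mass_limit(r_eff_pc):
    """Limiting mass (Msun) of the empirical mass-size criterion
    m(r) >= 580 Msun (R_eff / pc)^1.33."""
    return 580.0 * r_eff_pc**1.33

def surface_density(mass_msun, r_eff_pc):
    """Mean gas surface density Sigma = M / (pi R^2) in g cm^-2."""
    return mass_msun * M_SUN_G / (math.pi * (r_eff_pc * PC_CM)**2)

# An assumed clump of 1900 Msun within R_eff = 0.5 pc: it exceeds the
# limiting mass for its size and the 0.05 g cm^-2 surface-density bound
print(mass_limit(0.5) < 1900, surface_density(1900, 0.5) > 0.05)  # -> True True
```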
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Mass-size-relationship_4.pdf}
\caption{The mass-size relationship of 389 clumps that have determined mass values and are spatially resolved by the APEX beam (blue dots). The yellow shaded region shows the part of the parameter space found to be devoid of massive star formation, satisfying the relationship m(r)$\leq$580M$_{\odot}$(R$_{eff}$ pc$^{-1}$)$^{1.33}$ \citep[cf.][]{2010ApJ...723L...7K}. The green shaded region indicates the region in which young massive cluster progenitors are expected to be found \citep[i.e.][]{2012ApJ...758L..28B}. The dashed red line shows the result of the linear least-squares fit to the 389 clumps with calculated mass values. The grey dashed line shows the sensitivity of the ATLASGAL survey, and the upper and lower solid red lines show surface densities of 1 and 0.05 g cm$^{-2}$.}
\end{figure}
Applying the criteria of \citet{2012ApJ...758L..28B} to the current sample results in seven young massive protocluster (MPC) candidates in the green-shaded area of Fig. 4. Of these, G345.504+0.347 and G351.774-0.537 have already been identified by \citet{2010ApJ...712.1137K}, while G008.691-0.401, G333.299-0.351, G338.459+0.024, G348.183+0.482 and G348.759-0.946 are newly identified in this work.
\section{Infall candidates}
\subsection{Identification of infall candidates}
The HCO$^{+}$(1-0), HNC(1-0) and N$_{2}$H$^{+}$(1-0) line profiles have been extracted at the centers of all 405 clumps, as determined by the peak 870 $\mu$m emission of each clump. The line parameters were derived by Gaussian fitting. It should be noted that some clumps show asymmetric profiles that cannot be fitted directly by a single Gaussian function. However, the upper portion of the highest peak of the HCO$^{+}$(1-0) and HNC(1-0) profiles is symmetric and can be fitted by a single Gaussian function. The uncertainties of the velocities at these peaks were estimated from the Gaussian fits. The optically thin N$_{2}$H$^{+}$(1-0) lines were fitted using the hyperfine fitting routines in $CLASS$ to determine their peak velocities and FWHMs. The derived line parameters of all sources are given in Table 2.
\begin{table*}
\scriptsize
\centering
\begin{minipage}{155mm}
\caption{Examples of the derived line parameters and profiles of the observed sources. Quantities in parentheses give the uncertainties in units of 0.01. The columns are as follows: (1) Clump names; (2) peak velocity of $HCO^{+}(1-0)$; (3) peak velocity of $HNC(1-0)$; (4) peak velocity of $N_{2}H^{+}(1-0)$; (5) FWHM of $N_{2}H^{+}(1-0)$; (6) asymmetry of $HCO^{+}(1-0)$; (7) asymmetry of $HNC(1-0)$; (8) profile of $HCO^{+}(1-0)$ and $HNC(1-0)$. The full table is available online. }
\begin{tabular}{rrrrrrrrrrrrrrrrrrrrrrrr}
\hline
\multicolumn{1}{c}{Clump$^{\rm{a}}$} & \multicolumn{1}{c}{$V_{thick}$\,} & \multicolumn{1}{c}{$V_{thick}$\,} & \multicolumn{1}{c}{$V_{thin}$\,} & \multicolumn{1} {c} {$\Delta V$\,} & \multicolumn{1}{c}{$\delta v$\,} & \multicolumn{1}{c}{$\delta v$\,} & \multicolumn{1}{c}{Profile} \\
\multicolumn{1}{c}{name} & \multicolumn{1}{c}{$HCO^{+}(1-0)$\,} & \multicolumn{1}{c}{$HNC(1-0)$\,} & \multicolumn{1}{c}{$N_{2}H^{+}(1-0)$\,} & \multicolumn{1}{c}{$N_{2}H^{+}(1-0)$\,} & \multicolumn{1}{c}{$HCO^{+}(1-0)$\,} & \multicolumn{1}{c}{$HNC(1-0)$\,} & \\
& \multicolumn{1}{c}{$km\ s^{-1}$\,} & \multicolumn{1}{c}{$km\ s^{-1}$\,} & \multicolumn{1}{c}{$km\ s^{-1}$\,} & \multicolumn{1}{c}{$km\ s^{-1}$\,} & & & \\
\multicolumn{1}{c}{(1)} & \multicolumn{1}{c}{(2)} & \multicolumn{1}{c}{(3)} & \multicolumn{1}{c}{(4)} & \multicolumn{1}{c}{(5)} & \multicolumn{1}{c}{(6)} & \multicolumn{1}{c}{(7)} & \multicolumn{1}{c}{(8)}\\
\hline
\input{tableA2.dat}
\hline
\end{tabular}
\medskip
$^{\rm{a}}$ Sources are named by galactic coordinates of ATLASGAL sources: An $\ast$ indicates infall candidates.\\
NOTE. The HCO$^{+}$(1-0), and HNC(1-0) profiles are evaluated as follows: B denotes a blue profile, R denotes a red profile, and N denotes neither blue nor red.\\
\end{minipage}
\end{table*}
\normalsize
Following the method of \citet{1997ApJ...489..719M}, 150 blue profiles were identified using the HCO$^{+}$(1-0) lines and 99 using the HNC(1-0) lines, with 87 sources showing blue profiles in both the HCO$^{+}$(1-0) and HNC(1-0) spectra. The profile asymmetries are given in Table 2. Any source that displays a blue profile is a possible infall candidate. However, rotation might cause a blue asymmetric line profile on one side of the rotation axis while a red asymmetry appears on the other side. Outflows could also cause blue and red asymmetries in opposite directions. Checking the spatial variation of the profile asymmetries of these sources therefore helps us to identify reliable infall candidates.
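For illustration, the asymmetry classification reported in Table 2 can be sketched as below, assuming the customary $\pm$0.25 threshold on the normalized velocity difference $\delta v$ of \citet{1997ApJ...489..719M}; the example velocities are hypothetical:

```python
def classify_profile(v_thick, v_thin, dv_thin, threshold=0.25):
    """Classify a line profile following Mardones et al. (1997):
    delta_v = (V_thick - V_thin) / dV_thin.
    Returns 'B' (blue) if delta_v < -threshold, 'R' (red) if
    delta_v > +threshold, and 'N' (neither) otherwise."""
    delta_v = (v_thick - v_thin) / dv_thin
    if delta_v < -threshold:
        return "B"
    if delta_v > threshold:
        return "R"
    return "N"

# Hypothetical case: HCO+(1-0) peaking 1 km/s blueward of N2H+(1-0),
# whose hyperfine fit gives a 2 km/s FWHM
print(classify_profile(-41.0, -40.0, 2.0))  # delta_v = -0.5 -> 'B'
```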
The source G012.418+0.506 is a typical example of an infall candidate (Fig. 5). The right panel of Fig. 5 shows the N$_{2}$H$^{+}$(1-0), HNC(1-0) and HCO$^{+}$(1-0) spectra detected toward the peak position of this clump. An obvious blue asymmetry is present in both the HCO$^{+}$(1-0) and HNC(1-0) spectra. It can be seen that the HCO$^{+}$(1-0) spectra display a blue asymmetry over the whole mapped region (left panel of Fig. 5). Conversely, the source G337.406-0.402 appears to be a typical example of a blue asymmetry caused by rotation (Fig. 6). The HCO$^{+}$(1-0) spectra detected toward the peak position of this clump display a blue asymmetry (right panel of Fig. 6). However, the profile asymmetry of HCO$^{+}$(1-0) shows spatial variation across the mapped region. Because of the rotation, the extended blue asymmetry of the HCO$^{+}$(1-0) profiles reverses to an extended red asymmetry across the northwest-southeast axis through the center of the clump (left panel of Fig. 6). The figures of the other infall candidates are available online.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Gridmap_5.pdf}
\caption{Example of an infall source G012.418+0.506. The left panel: the HCO$^{+}$(1-0) map grid (gridded to 1/2 beam size) superposed on the 870 $\mu$m continuum emission map (starting from a flux density of 0.4 Jy beam$^{-1}$, which corresponds to a peak flux above 6$\sigma$). The right panel: the extracted spectra of HCO$^{+}$(1-0), HNC(1-0) and N$_{2}$H$^{+}$(1-0) from the central position of this clump (red square on the left panel). The dashed red lines on the profiles indicate the velocities of N$_{2}$H$^{+}$(1-0).}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Gridmap_6.pdf}
\caption{Example of the rotation source G337.406-0.402. Legend is the same as for Figure 5.}
\end{figure}
Finally, a total of 29 infall candidates were identified among the 84 pre-stellar clumps; 78 infall candidates among the 201 proto-stellar clumps; 17 infall candidates among the 79 UCHII region clumps; and seven infall candidates among the 41 clumps that do not have a classification (Table 1). The parameters of all 131 reliable infall candidates are listed in Table 3. The detection rates of the infall candidates are 0.3452, 0.3881 and 0.2152 for pre-stellar, proto-stellar, and UCHII clumps, respectively (Table 4).
\begin{table*}
\scriptsize
\centering
\begin{minipage}{125mm}
\caption{Mass infall rates of infall candidates.}
\begin{tabular}{lrrlrr}
\hline
\multicolumn{1}{c}{Clump} & \multicolumn{1}{c}{$\dot{M}$} & \multicolumn{1}{c}{$Spitzer$} &
\multicolumn{1}{c}{Clump} & \multicolumn{1}{c}{$\dot{M}$} & \multicolumn{1}{c}{$Spitzer$} \\
\multicolumn{1}{c}{name} & \multicolumn{1}{c}{0.01 M$_{\odot}$ yr$^{-1}$} & \multicolumn{1}{c}{classification} &
\multicolumn{1}{c}{name} & \multicolumn{1}{c}{0.01 M$_{\odot}$ yr$^{-1}$} & \multicolumn{1}{c}{classification} \\
\hline
\input{tableA3.dat}
\hline
\end{tabular}
\medskip
NOTE. Columns are (from left to right): clump name, mass infall rate, and Spitzer classification, repeated twice. Mass infall rates are in units of 0.01 M$_{\odot}$ yr$^{-1}$. \\
\end{minipage}
\end{table*}
\normalsize
In order to quantify whether the blue profile dominates in a given sample, a blue excess may be defined as E$=$(N$_{blue}$$-$N$_{red}$)$/$N$_{total}$, where N$_{blue}$ and N$_{red}$ are the numbers of sources that show blue or red profiles, respectively, and N$_{total}$ is the total number of sample sources \citep{1997ApJ...489..719M}. The excess E values of the pre-stellar, proto-stellar and UCHII clumps are 0.2857, 0.2189 and $-$0.1139, respectively. This result is consistent with our expectation that infall motions dominate the early stages of high-mass star formation. The corresponding probability P values \citep[see][and references therein]{2005A&A...442..949F}, which characterize the probability that the blue excess arises by chance, are 0.0003, 0.00005, and 0.13, respectively.
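As an illustrative sketch, the blue excess E and a chance probability P can be computed as follows; the one-sided binomial form (equal odds of blue and red profiles) is our assumption for the test referenced above, chosen because it reproduces the quoted P values:

```python
from math import comb

def blue_excess(n_blue, n_red, n_total):
    """Blue excess E = (N_blue - N_red) / N_total."""
    return (n_blue - n_red) / n_total

def p_chance(n_blue, n_red):
    """One-sided binomial probability of an asymmetry count at least as
    extreme as observed, assuming blue and red are equally likely."""
    n = n_blue + n_red
    k = max(n_blue, n_red)
    return sum(comb(n, i) for i in range(k, n + 1)) / 2.0**n

# Pre-stellar clumps: 35 blue and 11 red profiles out of 84 sources
print(round(blue_excess(35, 11, 84), 4), f"{p_chance(35, 11):.1e}")
```

For the pre-stellar numbers this yields E $=$ 0.2857 and P of a few $\times10^{-4}$, matching the values quoted in the text and Table 4.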
\begin{table*}
\scriptsize
\centering
\begin{minipage}{125mm}
\caption{Blue profile excess and detection rate determined from the HCO$^{+}$(1-0) line profiles for different evolutionary stages.}
\scriptsize
\begin{tabular}{lccccccc}
\hline
Classification & N$_{blue}$ & N$_{red}$ & N$_{total}$ & N$_{infall}$ & E & P & D\\
\hline
Prestellar & 35 & 11 & 84 & 29 & 0.2857 & 0.0003 & 0.3452\\
Protostellar & 85 & 41 & 201 & 78 & 0.2189 & 0.00005 & 0.3881\\
UCHII & 20 & 29 & 79 & 17 & -0.1139 & 0.13 & 0.2152\\
No classification & 6 & 5 & 41 & 7 & $-$ & $-$ & $-$\\
\hline
\end{tabular}
\medskip
NOTE. Columns are (from left to right) number of blue profile clumps, number of red profile clumps, total number of clumps, number of infall candidates, the excess parameter, the probability of the distribution to arise by chance, and the detection rate for infall candidates.
\end{minipage}
\end{table*}
\normalsize
\subsection{Physical properties of the infall candidates}
\subsubsection{Peak column density and clump mass}
Figures 7(a) and (b) show the distributions of the peak 870 $\mu$m flux densities and the peak column densities for the infall candidates (red histogram) at different stages. The median values of the peak 870 $\mu$m flux densities for the pre-stellar, proto-stellar and UCHII clumps with infall are 1.02, 1.9 and 5.18 Jy beam$^{-1}$, respectively. The corresponding median values of the H$_{2}$ column density are 4.47$\times$10$^{22}$ cm$^{-2}$, 5.50$\times$10$^{22}$ cm$^{-2}$ and 8.91$\times$10$^{22}$ cm$^{-2}$, respectively. Both distributions display an increasing trend with the evolutionary stage of the clumps. The median values of the peak 870 $\mu$m flux densities for the non-infall candidates (blue histogram) at the pre-stellar, proto-stellar and UCHII stages are 1.2, 2.24 and 4 Jy beam$^{-1}$, respectively. The corresponding median values of the H$_{2}$ column density are 5.25$\times$10$^{22}$ cm$^{-2}$, 6.46$\times$10$^{22}$ cm$^{-2}$ and 6.92$\times$10$^{22}$ cm$^{-2}$, respectively. These also display an increase with evolutionary stage. It should be noted that the median values of the peak 870 $\mu$m flux densities and peak column densities of the infall candidates at the pre-stellar and proto-stellar stages are lower than the corresponding values of the non-infall candidates. However, the peak 870 $\mu$m flux densities and peak column densities of the infall candidates display a more pronounced increasing trend with evolutionary stage. Fig. 7 (c) presents the H$_{2}$ column density distributions of all the infall candidates (red histogram), the non-infall candidates (blue histogram) and the whole sample (grey histogram). The median values are 5.50$\times$10$^{22}$ cm$^{-2}$ and 6.46$\times$10$^{22}$ cm$^{-2}$ for the infall and non-infall candidates, respectively. This suggests that the peak column densities of the infall candidates are, in general, smaller than those of the non-infall candidates.
The distributions of the integrated 870 $\mu$m flux densities and masses of the infall candidates (red histogram), as separated by evolutionary stage, are shown in Fig. 8 (a) and (b). The median values of the integrated 870 $\mu$m flux densities for the pre-stellar, proto-stellar and UCHII clumps with infall are 10.74, 14.59 and 34.52 Jy, respectively. The corresponding median values of their masses are 1258.9, 1659.6 and 2041.7 M$_{\odot}$, respectively. The distributions of the integrated 870 $\mu$m flux densities and masses of the non-infall candidates (blue histogram) are also shown in Fig. 8 (a) and (b). The median values of the integrated 870 $\mu$m flux densities of the pre-stellar, proto-stellar and UCHII clumps are 12.77, 15.4 and 32.5 Jy, respectively. Their corresponding median masses are 3090.3, 2238.7 and 2187.8 M$_{\odot}$, respectively.
Clearly the masses of the infall candidates increase as the clumps evolve, while those of the non-infall candidates decrease. The median mass values of the infall candidates are smaller than those of the non-infall candidates at the pre-stellar and proto-stellar stages. One reasonable explanation for these results is that a relatively large proportion of non-infall candidates at pre-stellar and proto-stellar stages have large distances, and clumps with larger distances usually have larger masses. The median values of their masses are 1659.6 M$_{\odot}$ and 2290.9 M$_{\odot}$ for all the infall and non-infall candidates, respectively (Fig. 8 (c)). The statistical parameters for each of these distributions are summarized in Table 5.
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{H2-columndensity-infall_candidates_7.pdf}
\caption{The 870 $\mu$m properties and H$_2$ column densities for infall and non-infall sources. (a): The 870 $\mu$m peak flux density distributions for infall candidates (red histogram), non-infall candidates (blue histogram), and all classified clumps (gray histogram) separated by evolutionary stage. The median values of each stage are indicated by dashed red, blue and black vertical lines, respectively. (b): The H$_{2}$ beam-averaged column densities derived from the 870 $\mu$m dust emission for infall candidates (red histogram), non-infall candidates (blue histogram), and all classified clumps (gray histogram) separated by evolutionary stage. The median values of each stage are indicated by dashed red, blue and black vertical lines, respectively. (c): The H$_{2}$ beam-averaged column density distribution for the whole sample (grey histogram), infall candidates (red histogram) and non-infall candidates (blue histogram). Median values are indicated by the dashed black, red and blue vertical lines for the whole sample, infall candidates, and non-infall candidates, respectively.}
\end{figure}
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Mass-distribution-infall_candidates_8.pdf}
\caption{Same as Fig. 7, but showing 870 $\mu$m integrated flux density and clump mass.}
\end{figure}
\subsubsection{Mass infall rate}
For the 131 infall candidates, a rough estimate of the infall rate may be obtained from $\dot{M}_{inf}=4\pi R^{2}V_{inf}\rho$ \citep[Eq. 5;][]{2010A&A...517A..66L}, where $V_{inf}=V_{N_{2}H^{+}}-V_{HCO^{+}}$ is an estimate of the infall velocity, $\rho=M/(\frac{4}{3}\pi R^{3})$ is the average clump volume density, and $R$ is the radius of the clump. Here we used $R$ and $M$ as derived from the dust continuum emission at 870 $\mu$m. The obtained mass infall rates are listed in Table 3. Fig. 9 shows the distributions of the mass infall rates of the infall candidates separated into the pre-stellar, proto-stellar and UCHII stages. The corresponding median values are 0.0078, 0.0077 and 0.0074 M$_\odot$ yr$^{-1}$, respectively. The mass infall rates of the infall candidates thus display no obvious variation with evolutionary stage. The statistical parameters for each of these distributions are summarized in Table 5.
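Since $\dot{M}_{inf}=4\pi R^{2}V_{inf}\rho$ with $\rho=M/(\frac{4}{3}\pi R^{3})$ reduces algebraically to $3MV_{inf}/R$, a rough rate can be sketched as follows (the example clump mass, radius and infall velocity are illustrative assumptions, not entries of Table 3):

```python
M_SUN_KG = 1.989e30   # solar mass [kg]
PC_M = 3.086e16       # parsec [m]
YR_S = 3.156e7        # year [s]

def infall_rate(mass_msun, r_pc, v_inf_kms):
    """Mass infall rate Mdot = 4*pi*R^2*V_inf*rho with
    rho = M / (4/3 pi R^3), i.e. Mdot = 3*M*V_inf/R, in Msun/yr."""
    r_m = r_pc * PC_M
    v_ms = v_inf_kms * 1e3
    mdot_kg_s = 3.0 * mass_msun * M_SUN_KG * v_ms / r_m
    return mdot_kg_s / M_SUN_KG * YR_S

# An assumed 1700 Msun clump of 0.5 pc radius infalling at 1 km/s gives
# ~0.01 Msun/yr, the same order as the median rates shown in Fig. 9
print(f"{infall_rate(1700, 0.5, 1.0):.4f}")
```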
\begin{figure}
\centering
\includegraphics[width=0.45\textwidth]{Mass-infall-rates_9.pdf}
\caption{Mass infall rate distributions for different evolutionary stages. Median values are indicated by the dashed black vertical lines.}
\end{figure}
\begin{table*}
\scriptsize
\centering
\begin{minipage}{125mm}
\caption{Statistical parameters of the derived properties of infall and non-infall candidates.}
\begin{tabular}{lrrrrrrr}
\hline
\multicolumn{1}{c}{Property} & \multicolumn{1}{c}{Group} & \multicolumn{1}{c}{Notes} & \multicolumn{1}{c}{Mean} & \multicolumn{1}{c}{Standard Deviation} & \multicolumn{1}{c}{Minimum} & \multicolumn{1}{c}{Median} & \multicolumn{1}{c}{Maximum} \\
\hline
\input{tableA4.dat}
\hline
\end{tabular}
\medskip
NOTE. Column 3 notes: (1) - infall candidates, (2) - non-infall candidates, (3) - infall + non-infall candidates.\\
\end{minipage}
\end{table*}
\normalsize
\section[]{Conclusions}
A total of 405 compact sources have been selected on the basis of the ATLASGAL and MALT90 survey data. These were then classified as pre-stellar, proto-stellar and UCHII clumps, and the optically thick lines HCO$^{+}$(1-0) and HNC(1-0), and the optically thin N$_{2}$H$^{+}$(1-0) line were used to search for infall candidates.
A total of 96.4\% of our sample sources satisfy the empirical mass-size relationship for massive star formation, and thus have the potential to form high-mass stars. Our result suggests that 0.05 g cm$^{-2}$ is a reliable lower bound on the clump surface density required for massive star formation. Among the sample, five new MPC candidates have been identified: G008.691-0.401, G333.299-0.351, G338.459+0.024, G348.183+0.482 and G348.759-0.946. These five new candidates, as well as the two known MPC candidates (G345.504+0.347 and G351.774-0.537), all have large masses ($>$30\,000 M$_{\odot}$) and, except for G008.691-0.401, large H$_{2}$ column densities ($>$10$^{23}$ cm$^{-2}$). They all lie at far distances ($>$10 kpc), and none of them are infall candidates. The peak 870 $\mu$m flux densities and the column densities of the clumps display an increasing trend with their evolutionary stage.
A total of 131 reliable infall candidates have been identified with a detection rate towards pre-stellar, proto-stellar and UCHII clumps of 0.3452, 0.3881 and 0.2152, respectively. This supports the result that infall motions accompany the high-mass star formation process. The relatively high detection rate of the infall candidates toward the UCHII clumps indicates that many UCHII regions are still accreting matter. The roughly estimated mass infall rates of the infall candidates at pre-stellar, proto-stellar and UCHII stages are 0.0078, 0.0077 and 0.0074 M$_\odot$ yr$^{-1}$, respectively. The peak column densities and masses of the infall candidates, in general, display an increasing trend with evolutionary stage.
\section*{Acknowledgments}
This research has made use of the data products from the Millimetre Astronomy Legacy Team 90 GHz (MALT90) survey, the APEX Telescope Large Area Survey of the Galaxy (ATLASGAL) survey, which is a collaboration between the Max-Planck-Gesellschaft, the European Southern Observatory (ESO) and the Universidad de Chile, and also used NASA/IPAC Infrared Science Archive, which is operated by the Jet Propulsion Laboratory, California Institute of Technology, under contract with the National Aeronautics and Space Administration.
This work was supported by National Basic Research Program of China (973 program) No. 2012CB821802, The National Natural Science Foundation of China under grant Nos. 11373062, 11433008 and 11303081, and The Program of the Light in China's Western Region (LCRW) under grant Nos. RCPY201202 and XBBS-2014-24.
WAB acknowledges the support as a Visiting Professor of the Chinese Academy of Sciences (KJZD-EW-T01).
\section{Introduction}
Research in the field of question answering can be traced back to the earliest examples of AI systems designed to answer questions, such as ELIZA \citep{weizenbaumELIZAComputerProgram1966} developed at MIT in the 1960s; to engineering approaches to complex problems; and to foundational research in psychology on how people understand and interpret language, prioritize and focus on relevant information in context \citep{cuadraOPENINGBLACKBOX1967, wilsonSITUATIONALRELEVANCE1973}, investigate and make decisions, retrieve and use information from memory to answer \citep{normanMemoryKnowledgeAnswering1972}, and innovate. Even Socrates' philosophical approach \citep{jungMaieuticPromptingLogically2022, pagnoniSocraticPretrainingQuestionDriven2022, shridharAutomaticGenerationSocratic2022, zengSocraticModelsComposing2022, varmaSocraticLearningAugmenting2017} to questioning and critical thinking has recently been applied directly.
When a question or problem is too complex to solve as it is, such as ``how does the concept of personal freedom vary between different cultures and societies?'', a common strategy is to break it down into solvable questions, and then combine the solutions to provide an overall answer if there is consensus, or the necessary alternatives and nuances otherwise. Solving a complex question, or a problem formulated as a question, requires an appropriate combination of knowledge, methods, reasoning and task resolution, all of which may vary greatly depending on the field and the question. Neural language models have recently demonstrated their capacity to surpass the average human on different tasks and knowledge fields, and to mimic or uncover appropriate methods for a given problem. However, even the most advanced systems will fail on some questions a toddler could easily answer, or may assert totally false or biased statements without any caution. Involving humans in the loop of this question answering process, and integrating third parties such as programs, symbolic AI or others, can greatly improve those models and help ensure ethical and safe outcomes.
Knowing precisely the strengths and limitations of each language model, alone or combined with humans and third-party components, could progressively allow us to solve increasingly complex problems.
The largest community-edited papers, HELM \citep{liangHolisticEvaluationLanguage2022}, BLOOM \citep{workshopBLOOM176BParameterOpenAccess2022} and BIG \citep{srivastavaImitationGameQuantifying2022}, focus on evaluating, democratizing and improving the capabilities of large language models, particularly on answering questions and on many tasks that will also be useful for more complex questions. HELM provides a comprehensive analysis and benchmark for evaluating the global strengths and weaknesses of reference large language models across a range of different scenarios and complementary metrics. Large language models are trained by resource-rich organizations, are frequently kept inaccessible to the public, and are impossible to train for any individual research group. BigScience, a collaboration of hundreds of international researchers, trained the largest open-source multilingual language model, called BLOOM, for 3.5 months on the French government-funded Jean Zay supercomputer, and shared all models, code and results in the BLOOM paper. BIG focuses on spotting difficult capabilities in order to better assess the current skills and limitations of language models.
To tackle more specific or complex questions or problems, these very large language models require specific architectures, knowledge, skills, tasks, methods, data, human feedback, and more. We will therefore use benchmarks and insights from these collective papers as a baseline, and leverage results from topical papers, depending on the focus of the analysis, to complement the state of the art of hybrid language model architectures and strategies for solving specific and more complex problems or questions.
\subsection{Question answering typical pipeline}
From question capture and refinement to answer generation with possible knowledge capitalisation, a question answering (QA) or complex question answering (CQA) pipeline can follow a variable number of steps depending on its architecture and features. Some steps are explicit and well separated; others can be implicit and fused together in the same model operation. However, we can identify the most frequent steps and options. The IBM DeepQA architecture illustration, dating from 2010, looks dated compared to some apparently end-to-end neural language models, but it defines the major steps well.
\begin{figure}
\includegraphics[width=\linewidth]{DeepQA.jpg}
\caption{IBM DeepQA Architecture (2010) \citep{ferrucciBuildingWatsonOverview2010} and CQA pipeline steps.}
\label{DeepQA Architecture}
\end{figure}
\begin{description}
\item [1] \textbf{Question understanding}: analysis including question and context refinement; parsing and understanding the question, context, and intent (for task identification); and optional dialogue management, to allow the system to interact with the user and manage the conversation.
\item [2] \textbf{Query construction} with optional \textbf{decomposition for complex or multi-step queries}.
\item [3] \textbf{Information retrieval (IR)}, with optional knowledge base expansion to add new knowledge to the system.
\item [4] \textbf{Answer extraction (a); evaluation, scoring, ranking and filtering (b)}.
\item [5] \textbf{Answer generation}, natural language or defined format (e.g. language program, table, ...)
\item [6] \textbf{Feedback loop \& new knowledge capitalisation}: learning and improving from user and model feedback, plus storing linked and generated knowledge for future use.
\end{description}
This process is only a baseline. Please note that answering complex questions can be a dynamic and progressive process, and can also be collaborative.
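The steps above can be sketched as a toy pipeline; every stage here is a hypothetical stub standing in for a real component (parser, retriever, ranker, generator), intended only to make the data flow between the six steps concrete:

```python
def answer(question, knowledge_base=None):
    """Toy skeleton of the six-step QA pipeline described above."""
    kb = knowledge_base if knowledge_base is not None else {}

    # 1. Question understanding: normalise the question and its context.
    question = question.strip()

    # 2. Query construction, with naive decomposition of compound questions.
    sub_queries = [q.strip() for q in question.split(" and ")]

    # 3. Information retrieval against the knowledge base.
    evidence = [kb[q] for q in sub_queries if q in kb]

    # 4. Answer extraction, scoring, ranking and filtering.
    ranked = sorted(evidence, key=len)

    # 5. Answer generation in the requested output format.
    final = ranked[0] if ranked else "no answer found"

    # 6. Feedback loop and knowledge capitalisation for future queries.
    kb[question] = final
    return final

kb = {"capital of France?": "Paris"}
print(answer("capital of France?", kb))  # -> Paris
```

In a real system each stub would be replaced by a learned or symbolic component, and the stages may be fused inside a single language model call.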
\subsection{Survey methodology}
\textbf{First}, we collected all surveys from the last two years related to ``complex question answering'' or ``complex problem solving'' and ``language models''. From these, we systematically extracted the major concepts and their outlines, and merged all the outlines into the single plan of this survey, in order to ensure complete coverage of the major concepts and to adopt a similar methodology. We also extracted the major conferences, challenges and contributing organizations for the second stage.
\textbf{Second}, we gathered:
\begin{enumerate}
\item[--] latest challenges from CLEF QA \citep{CLEFQATrack}, NTCIR \citep{NTCIRNIITestbeds} and SemEval \citep{SemEvalInternationalWorkshop} related to question answering;
\item[--] papers from the latest editions of the conferences SIGIR \citep{SIGIRSpecialInterest}, NeurIPS \citep{NeurIPSConferenceNeural}, NAACL \citep{NAACLNorthAmerican}, EMNLP \citep{EMNLPConferenceEmpirical}, ICLR \citep{ICLRInternationalConference}, AAAI \citep{AAAIAssociationAdvancement}, IJCAI \citep{IJCAIInternationalJoint}, CIKM \citep{CIKMConferenceInformation}, SIGKDD \citep{SIGKDDSpecialInterest}, WSDM \citep{WSDMWebSearch} about "complex question answering" and "language models";
\item[--] papers from the organizations Deepmind \citep{DeepmindResearch}, OpenAI \citep{OpenAIPublications2020}, Google \citep{GoogleResearchPublications}, Microsoft \citep{MicrosoftResearchPublications}, Meta \citep{MetaResearchPublications} related to "question answering" and "language models".
\end{enumerate}
From this enriched data, we identified and clustered the major challenges and solution patterns in the field and state of the art:
\begin{enumerate}
\item[--] Domain adaptation or specialization, and maintenance.
\item[--] Decomposition (including multi-hop/step), question to search action design.
\item[--] Safety and data sensitivity / truthfulness, veracity and hallucinations / explainability.
\item[--] Reasoning (process, methods, chain of thought, code generation, causality).
\item[--] Scalability (inc. routing, space reduction).
\item[--] Alignment to human intentions, expectations and values (e.g. RLHF, clarifications and questions expansion).
\item[--] Long form question answering, long term dependency in reasoning, multi-document summarization.
\item[--] Dialog and context improvement (e.g. clarification question).
\item[--] Multimodal search (e.g. tables, images).
\item[--] Time dimension reasoning.
\item[--] Biases reduction.
\end{enumerate}
\textbf{Third}, we carried out a systematic search in a search engine of scientific articles to identify all the articles published during the last 4 years on the main subject of this investigation (\small{\textit{search queries: "complex question answering" AND "language model"; "question answering" AND "language model architecture"; "question answering" AND "language model" AND "hybrid architecture"; "hybrid language models" OR "hybrid language model" OR "hybrid neural language models"; "language model" "hybrid architecture"}}).
This bibliography collected in all the previous steps was then used throughout this survey to ensure that we summarize the main concepts at the state-of-the-art by systematically searching for articles semantically close to each concept.
Last but not least, we searched for the most cited papers in this bibliography, extended it with recent connected papers and some frequently cited historical papers, and investigated relevant citations. Through this research, we identified 3 recent research papers (HELM, BLOOM, and BIG), each involving hundreds of researchers from many organizations, so we decided to use them as baselines or major references in this study.
\subsection{Contribution}
Through this survey, we provide:
\begin{enumerate}
\item[--] a systematic review of research on complex QA with language models and hybrid architectures, analyzed and linked to an extensive bibliography (\autoref{sec_biblio}).
\item[--] a summary of evaluation metrics and datasets (\autoref{sec_evaluation}) for developing the required skills \& tasks (\autoref{sec_skills_tasks}), and of training methods (\autoref{sec_training}) to best answer with an appropriate prompting strategy (see \autoref{sec_solvingCQA}).
\item[--] a review of state-of-the-art question answering (SOTA QA) and complex subtask capabilities (\autoref{sec_metricsSOTA} and \autoref{sec_hybridLLMpatterns}), extended to complex questions/problems (see section \ref{sec_solvingCQA}).
\item[--] qualitative (\autoref{sec_LLMlimits} and \autoref{sec_limits_and_research}) analysis of limitations of language models to answer complex QA, and on specific complex sub-tasks.
\item[--] a consolidated list of architectural patterns (\autoref{sec_hybridLLMpatterns}) which can enrich language models to answer complex questions.
\item[--] a list of major research challenges (\autoref{sec_limits_and_research}) and focus on some blind spots (e.g. data multi sensitivity, \autoref{sec_data_sensitivity}).
\end{enumerate}
\subsection{Structure of the paper}
This survey covers a wide range of topics related to complex question answering systems. To begin, it defines key concepts (\autoref{sec_core_concepts}) that are necessary for understanding the rest of the survey. Next, it delves into the various tasks and skills (\autoref{sec_skills_tasks}) that complex question answering systems must be able to perform, depending on the target. The survey also presents the evaluation metrics and datasets (\autoref{sec_evaluation}) used to assess the performance of these systems. It then analyses language models and their limitations for QA (\autoref{sec_architectural_patterns}), and the different architectural patterns to address them, with a focus on hybridization. To develop skills and tasks for a given architecture, we explore training techniques (\autoref{sec_training}) for the target usage in complex questions, including methods for dealing with lack of data, poor quality, and adaptation to new tasks and domains. Using the trained model, it then presents current methodologies for solving complex questions (\autoref{sec_solvingCQA}), which often involve breaking down overly complex questions into solvable ones, properly prompting the question, and continually learning to solve them better. Furthermore, the survey discusses tougher challenges, partial solutions that have been identified, and research avenues to overcome them (\autoref{sec_limits_and_research}). Throughout the survey, we provide an extensive bibliography for readers who wish to delve deeper into the subject matter (\autoref{sec_biblio}).
\section{Core concepts / definitions}\label{sec_core_concepts}
In order to help the reader and better define the scope of this study, we pose a set of definitions of the key concepts. As different definitions often exist for the same concept, we choose the one that best fits the scope of this survey.
\begin{description}
\item [Question Answering (QA) system] is a computer program that can answer questions from a specific or general open knowledge base (closed-domain or open-domain QA).
\item [Complex question answering (CQA)] is the task of answering highly complex questions. We focus here on non-factoid, multi-step (requiring decomposition), higher-reasoning (constraints, deduction, induction, abduction), multi-source questions.
\item [Complex problem solving (CPS)] is to overcome barriers between a given state and a desired goal state by means of behavioral and/or cognitive, multistep activities \citep{frenschComplexProblemSolving1995}. A CPS process is split into two phases: knowledge acquisition and knowledge application.
\item [Question complexity] is a multidimensional construct that can be characterized by various aspects such as syntactic, lexical, semantic, discourse, or cognitive complexity. We focus on the latter and consider the cognitive process required to answer a question, such as the level of concept abstraction, the degree of reasoning, the amount and variety of knowledge, and the number of steps needed to find the answer. \textbf{Non-factoid questions} (e.g. "What are the causes of poverty in cities?") are typically considered much more complex than factoid questions (e.g. "What is the capital of Suriname?").
\item [Information Retrieval (IR) \& Semantic search] is the general task of retrieving information for a given query/subject. Semantic search is the IR process of using natural language understanding techniques such as embedding to match the "meaning" of a query (rather than keywords) to the most relevant content in a corpus, regardless of the format of the content (e.g. structured or unstructured text, image...).
\item [Taxonomy of QA formats] is a proposed taxonomy \citep{rogersQADatasetExplosion2022} which covers question format, answer format, and source \& evidence format. Question format can be in the form of natural language questions, queries, cloze format or story completion. Answer format can be in the form of extractive format, multi-choice/categorical format, freeform/generative format, or numerical. The source \& evidence format can be in the form of unstructured text, semi-structured text, structured knowledge, images, audio, video or other combinations, and it can also be characterized by the amount of evidence provided, and whether it is from a single, multiple, partial or no source.
\item [Taxonomy of QA skills] is presented in a further section on elementary tasks \& skills types.
\item [Prompt and instructions engineering VS pre-training and fine-tuning]: pre-training refers to training a model on a large corpus of data to learn a general understanding of knowledge and logic; fine-tuning is the process of adapting a pre-trained model to a specific task with additional training on specific examples; prompt engineering is the technique of optimizing the input question and optional instructions sent to a pre-trained (and potentially fine-tuned) model, without re-training it, to improve the model's answer performance.
\item [End-to-end QA VS pipeline of specialized tasks]: end-to-end QA uses a single call to a model to answer, while a pipeline uses multiple calls, often to specialized models, for different sub-tasks.
\item [Multi-hop/Multi-step QA] is related to questions requiring reasoning over multiple pieces of evidence or multiple steps. It is often done after a question decomposition task or iteratively.
\item [(Neural) Language model] is a statistical model that predicts the likelihood of a sequence of words, or tokens, in a language. It can understand and generate human-like text or code, and perform various language-related tasks and even non-textual tasks leveraging skills extracted from text or code \citep{haoLanguageModelsAre2022}.
\item [(Language model with) Knowledge in/out of model] is the distinction between a model with knowledge explicitly encoded inside the model, and a model using external knowledge (e.g. search engine, data sources, program).
\item [Standard/monomodal vs multimodal QA]: monomodal can search only one type of input (e.g. text only) while multimodal can search multiple types of input (text, image, video, sound...).
\item [Human-in-the-loop] is the practice of involving humans in the process, either to handle a processing task or to provide feedback for improving the system. The most frequent technique is RLHF (reinforcement learning from human feedback).
\item [Hallucinations] are confident generated responses including false information not justified by the model's training data. They are classified as extrinsic, when a model adds information that is not present in the source data, and intrinsic, when the model distorts information present in the source data into a factually incorrect representation \citep{choubeyCaPEContrastiveParameter2022}.
\item [Knowledge graphs (KG) \& ontologies] are structured representations of knowledge (ontologies for concepts, KG for real-world objects) enabling to link and reason over knowledge.
\end{description}
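The "semantic search" concept defined above can be illustrated with a toy example. The sketch below uses a bag-of-words vector as a deliberately crude stand-in for a neural embedding (a real system would use a learned encoder so that paraphrases land close together); the `embed` and `semantic_search` helpers are hypothetical names for illustration:

```python
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy "embedding": bag-of-words counts. Real semantic search uses a
    # neural encoder so meaning, not surface words, drives similarity.
    return Counter(text.lower().split())

def cosine(u: Counter, v: Counter) -> float:
    # Cosine of the angle between two sparse vectors.
    dot = sum(u[t] * v[t] for t in u)
    nu = math.sqrt(sum(c * c for c in u.values()))
    nv = math.sqrt(sum(c * c for c in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def semantic_search(query: str, docs: list[str]) -> str:
    # Return the document whose vector is closest to the query vector.
    q = embed(query)
    return max(docs, key=lambda d: cosine(q, embed(d)))

docs = ["poverty causes in large cities",
        "the capital of suriname is paramaribo"]
print(semantic_search("what is the capital of suriname", docs))
```

The same retrieval loop applies unchanged once `embed` is swapped for a real sentence encoder, which is what distinguishes semantic search from keyword matching.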
\section{QA/CQA skills and tasks}\label{sec_skills_tasks}
In order to design a complex question answering system for a given domain or set of capabilities, investigating the skills and tasks needed by this system is an important first step. This determines precisely what kind of expertise and resources are required to build and assess this hybrid model. This skills identification can also be done "a posteriori", if a system fails to answer properly, in order to identify weak or missing skills.
\subsection{Skills}
Considering the QA/CQA standard pipeline presented in the introduction, a question answering task requires different complementary skills in the domains of machine reading comprehension (MRC), information retrieval (IR), and knowledge capitalisation and reinforcement.
In this section, we leverage a proposed QA and MRC taxonomy \citep{rogersQADatasetExplosion2022} augmented with the "reinforcement and capitalization" skill concept (see fig \ref{QA_MRC_Taxonomy}), which is important for a CQA system. The latter has been shown to be a major skill for enabling calibration or alignment to intent and values, and continuous improvement through usage \citep{baiConstitutionalAIHarmlessness2022, chiuKnowledgeGroundedReinforcementLearning2022}.
\begin{figure}
\includegraphics[width=1.1\linewidth]{CQA_Taxonomy.jpg}
\caption{Updated QA/CQA taxonomy \citep{rogersQADatasetExplosion2022} \tiny{(reuse of the schema with the addition of "reinforcement \& capitalization")}}
\label{QA_MRC_Taxonomy}
\end{figure}
\subsubsection{\textbf{Interpreting \& manipulating input}}
Like humans, machines should capture the meaning of the individual constituent elements of the input (words, numbers) and the global syntax and semantics, and manipulate them in the context of the task and with respect to the language and other shared systems (e.g. mathematics). This requires a set of skills, including:
\begin{enumerate}
\item \textbf{Linguistic skills} - e.g. recognizing word categories and meaning, translating, understanding the context, relationships and implications of words and phrases; it might be decomposed into syntactic, grammatical, and semantic skills.
\item \textbf{Numeric skills} - e.g. performing calculations, dealing with precise and imprecise numbers.
\item \textbf{Operations on sets} - e.g. selecting, combining, intersecting, operating on elements of a set of inputs (e.g. "Alice, Bob and Charlie are in the room. Bob goes out. Who are the persons in the room?").
\end{enumerate}
\subsubsection{(Information) \textbf{Retrieval}}
It can be summarized as determining whether an answer exists and, if so, looking for it and providing the most useful information.
\begin{enumerate}
\item \textbf{Answerability}: ability to identify whether a given query is a valid question and can be answered with the provided information. Optionally, identify what additional information would be needed to answer correctly.
\item \textbf{Where to look for the required knowledge?}: ability to identify the correct source of knowledge to get the best answer. If the required knowledge is in the question itself, the process is to extract the right piece of information. Otherwise, we need to know whether it is a precise fact or non-factual, and then where to look for it. Additionally, a proper answer may require common sense and potentially domain information.
\end{enumerate}
\subsubsection{\textbf{Inference type \& reasoning} (Generation)}
Inference, or reasoning, can be summarized as the process of drawing logical conclusions from available facts or other premises. Inference is used in language models to understand a text and generate responses to the questions posed. There are three main aspects to inference in language models:
\begin{enumerate}
\item \textbf{Inference Strength}: drawing general conclusions from specific facts (inductive), or specific conclusions from general facts (deductive).
\item \textbf{Inference Mechanism}: drawing conclusions from a comparison of two or more elements (analogy), or based on the best explanation for a given situation (best explanation)...
\item \textbf{Inference Direction}: the conclusion follows necessarily from the premises, from the general to the specific (deductive), or is reached through a process of elimination or reasoning from the specific to the general (abductive).
\end{enumerate}
\subsubsection{\textbf{World modeling}} \citep{liLanguageModelingLatent2022}
It can be summarized as the ability to understand the world and make decisions based on that understanding. It is a complex type of question answering skill requiring an understanding of physical and mental states, as well as the relationships between them. It involves the following categories:
\begin{enumerate}
\item \textbf{Spatial} reasoning: understand and reason about objects and their locations in space.
\item \textbf{Temporal} reasoning: understand and reason about event order, event attribution to time, script knowledge, event duration, temporal commonsense knowledge, factoid/news questions with answers where the correct answers change with time, temporal reasoning in multimodal setting.
\item \textbf{Belief states}: understand and track beliefs, opinions, and mental states.
\item \textbf{Causal relations}: understand and reason about the cause-and-effect relationships between events.
\item \textbf{Other relations between events}: understand and reason about relationships between events, such as sub-events, conditionals, and counterfactuals.
\item \textbf{Entity properties and relations}: properties of characters, physical properties, numerical properties, social interactions.
\item \textbf{Tracking entities}: understand and track entities over time, across locations, in coreference chains.
\end{enumerate}
\subsubsection{\textbf{Decomposition \& multi-step}}
Complex questions require decomposition into solvable tasks and resolution through the best chain of action steps. A \textbf{simple} question may use multi-step resolution, but all the necessary knowledge is located in one place. \textbf{Complex} questions rely on several pieces of knowledge, necessitating the combination of information across sentences, paragraphs, documents, or other modalities. This also includes questions that require a combination of context and world knowledge. It can be even broader than simply combining several facts, and could also be taken as combining "skills" from different dimensions and different methods of resolution. This decomposition and multi-step resolution can be carried out inside a single model possessing these and all other skills necessary for the question, or distributed across multiple components.
\subsubsection{\textbf{Reinforcement \& capitalization}}
A complex QA system should be able to permanently improve itself through: \textbf{reinforcement}, by aligning answers to the target intent, format, method, and value expectations, where the expected solution can vary with the requesting person (e.g. knowledge, culture...) or system; \textbf{capitalization}, by storing any new knowledge generated or linked, in order to improve the knowledge base and enable solving more complex problems.
\subsection{Tasks}
A complex question answering task can be solved in one inference incorporating all the skills reviewed in the previous section, or subdivided into several sub-tasks.
\subsubsection{\textbf{Integrated (C)QA task}}
In this case, the CQA system answers the question using only one inference call to the model, which may still include multi-step reasoning inside the model. LLMs (large language models) should therefore embed all the necessary skills and knowledge for interpretation \& manipulation, information retrieval, world knowledge, reasoning \& inference, and decomposition \& multi-step resolution. If this is not the case, the model should be further trained with adapted datasets to acquire the new skills and knowledge, or rely on task decomposition and LLM hybridization.
The papers BIG \citep{srivastavaImitationGameQuantifying2022} and HELM \citep{liangHolisticEvaluationLanguage2022} provide a good overview of current limits of integrated task approach for QA/CQA (see also tables "[HELM] SOTA QA multi-metrics"(\ref{SOTA QA multi-metrics}) and "[BIG] QA complex QA tasks benchmark" (\autoref{Table [BIG] QA complex QA})).
\subsubsection{\textbf{CQA tasks decomposition and primitives}}
When decomposed, answering tasks can be grouped into the following categories:
\begin{itemize}
\item[--] \textbf{Standard sub-tasks} include intent detection, word sense disambiguation, entity recognition (NER) and linking, topic classification, sentiment classification, information extraction, fact retrieval, ranking, and summarization, including query focused summarization...
\item[--] \textbf{Advanced sub-tasks} include multi-hop \& decomposition, domain oriented tasks, sources \& fact checking, code generation \& program synthesis, causal explanation (or possible consequences), temporal explanation...
\item[--] \textbf{External sub-tasks} leverage resources out of model like program synthesis \citep{droriNeuralNetworkSolves2022}, using a solver...
\end{itemize}
Wu et al. \citep{wuAIChainsTransparent2022} propose a taxonomy of \textbf{primitive tasks in decomposed and chained LLMs} which could be applicable to CQA:
\begin{description}
\item[--] \textbf{Validate and categorize input} such as \textbf{classification} which assigns the input to categories. Most useful for branching logic and validation (e.g. is the question answerable?).
\item[--] \textbf{Gather additional information} from the LLM such as \textbf{Factual Query} to ask the LLM for a fact, \textbf{Generation} to ask the LLM to do some creative "hallucination" on the input, \textbf{Ideation} to ask for a list of ideas or examples.
\item[--] \textbf{Re-organize input} such as \textbf{Information extraction} from the context, \textbf{Rewriting (1-1 mapping)} of the input to more machine-readable formats (e.g. JSON to natural language) or other usages (e.g. translation), \textbf{Split Points (1-N mapping)} for splitting contexts and digging into concepts, \textbf{Compose Points (N-1 mapping)} to synthesise, the reverse operation of decomposition, like merging multiple results back together.
\end{description}
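These primitives can be chained. The toy sketch below strings together a classification step, an extraction step, and a composition step; the `llm` function is a canned stub returning fixed answers, standing in for real calls to a language model:

```python
# Toy chain of the primitive operations above:
# classification -> information extraction -> composition.
# `llm` is a canned stub, not a real model call.

def llm(prompt: str) -> str:
    canned = {
        "classify": "answerable",
        "extract": "Bob leaves; Alice and Charlie remain",
        "compose": "Alice and Charlie are in the room.",
    }
    for key, value in canned.items():
        if prompt.startswith(key):
            return value
    return ""

def answer_by_chaining(question: str) -> str:
    # 1. Validate and categorize input (is the question answerable?)
    if llm(f"classify: {question}") != "answerable":
        return "Cannot answer."
    # 2. Re-organize input: extract the relevant facts
    facts = llm(f"extract: {question}")
    # 3. Compose points back into a final answer
    return llm(f"compose: {facts}")

print(answer_by_chaining(
    "Alice, Bob and Charlie are in the room. Bob goes out. "
    "Who is in the room?"))
```

The value of the chain is that each step has a narrow, checkable contract, which is exactly what makes decomposed CQA pipelines more transparent than a single end-to-end call.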
\subsubsection{\textbf{CQA tasks hybrid program decomposition examples}}
To better illustrate how CQA tasks can be decomposed between an LLM and an external software module to better solve complex problems, consider how university-level math problems (CQA) can be solved by splitting the task between an LLM and a Python interpreter \citep{droriNeuralNetworkSolves2022}, and how physical reasoning questions can be solved by splitting the task between an LLM and a physics engine \citep{liuMindEyeGrounded2022}.
These solution designs can be extrapolated to many complex QA challenges by looking into the section about "hybrid LLM patterns" (\autoref{sec_hybridLLMpatterns}). Each of these patterns can be combined to split the sub-tasks of a complex problem and leverage the module most adapted to each task, ensuring the necessary context is provided wherever needed.
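The LLM-plus-interpreter split can be sketched minimally as follows. Here `llm_to_code` is a stub returning a fixed code string, standing in for a code-generation model; the point is that the interpreter, not the model, performs the exact computation (a production system would sandbox the generated code rather than call `eval` directly):

```python
# Sketch of the LLM + program-interpreter pattern: the (stubbed) model
# translates a word problem into code, and the Python interpreter
# computes the exact result -- arithmetic a model alone may get wrong.

def llm_to_code(question: str) -> str:
    # Stub: a real system would prompt a code-generation model here.
    return "sum(i * i for i in range(1, 11))"

def hybrid_solve(question: str) -> int:
    code = llm_to_code(question)
    # Executed by the interpreter, not the model. A real system would
    # sandbox this instead of using bare eval().
    return eval(code)

print(hybrid_solve(
    "What is the sum of the squares of the integers 1 to 10?"))
```

The same split generalizes: swap the interpreter for a physics engine, a solver, or a database, and the LLM's job reduces to translating the question into the external module's input language.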
\section{Evaluation: metrics, cost functions, datasets}\label{sec_evaluation}
The performance of language models on question answering can vary greatly depending on factors such as the domain, question complexity, necessary subtasks, bias, fairness, toxicity, and human expectations. A large language model may perform well overall but struggle with some types of questions or areas of evaluation (see tables "[HELM] SOTA QA multi-metrics" (\ref{SOTA QA multi-metrics}) and "[BIG] QA complex QA tasks benchmark" (\ref{Table [BIG] QA complex QA})), while a model that is specialized for certain questions or domains may perform poorly on more general tasks. The following section examines the different metrics and datasets used for evaluating and training these models.
\subsection{Metrics \& SOTA}\label{sec_metricsSOTA}
\subsubsection{Standard metrics}~ \citep{zhaoDenseTextRetrieval2022}\label{sec_standard_metrics}
There are a variety of metrics that can be used to evaluate the performance of QA models, each with its own strengths and weaknesses. In this section, we will discuss some of the most common metrics used to evaluate QA models:
\begin{description}
\item [Recall@k] Measures the proportion of relevant answers retrieved by the model among the top k answers. It is a measure of the model's ability to find all the relevant answers, regardless of their position in the ranking. The main weaknesses are that it does not take into account the position of the relevant answer in the ranking and does not penalize irrelevant answers that appear in the top k.
\item [Accuracy@k] Measures the proportion of correct answers among the top k answers returned by the model. It is a measure of the model's ability to return the correct answer and can be used to evaluate the performance of a model in a closed-domain QA task. As with recall@k, the main weaknesses are that it does not take into account the position of the correct answer in the ranking and does not penalize irrelevant answers that appear in the top k.
\item [nDCG] Normalized Discounted Cumulative Gain, a measure of ranking quality that takes into account the relevance and position of answers. It is often used in information retrieval and web search to measure the effectiveness of a ranking algorithm. The main weakness is that it does not take into account the number of irrelevant answers in the ranking.
\item [MAP] Mean Average Precision, a measure of the quality of a set of ranked answers. It is a commonly used metric for evaluating QA models in open-domain tasks, where the model must return a list of possible answers. Weakness: it assumes binary relevance and, as an average over queries, can hide poor performance on individual questions.
\afterpage{%
\clearpage
\begin{landscape}
\begin{figure}[ht]
\centering
\includegraphics[width=\linewidth]{Table1QA_and_IR.jpg}
\caption{[HELM] SOTA QA multi-metrics.}
\label{SOTA QA multi-metrics}
\end{figure}
\end{landscape}
\clearpage
}
\item [MRR] Mean Reciprocal Rank, a measure of the quality of a set of ranked answers, with a higher value indicating better performance. It is often used in information retrieval and web search to measure the effectiveness of a ranking algorithm. Weakness: It only considers the position of the first correct answer in the ranking.
\item [CWS] Cross-entropy word-level perplexity, a measure of the model's ability to predict the next word in a sentence. It is often used to evaluate the quality of the language model. Weakness: it cannot be used to assess answer quality, and perplexity does not correlate well with a language model's performance on downstream tasks.
\item [F1 (Macro, Micro)] F1 score, a measure of a model's accuracy that takes into account both precision and recall. It is used to evaluate the model's performance in a classification task. F1 macro averages the per-class F1 scores, used for imbalanced datasets, F1 micro computes metrics globally by counting total true positives, false negatives, and false positives, used for balanced datasets. Weakness: It does not take into account the relative importance of false positives and false negatives.
\item [EM] Exact Match, a binary metric that measures whether the model's answer is exactly the same as the reference answer. It is often used in closed-domain QA tasks where the correct answer is a single word or phrase. Weakness: It is not able to capture the semantic similarity between the model's answer and the reference answer.
\item [Jaccard similarity] measures similarity between two sets of data, based on the size of the intersection divided by the size of the union of the sets. Weakness: It does not take into account the overall size of the sets, which can lead to errors in similarity measurements.
\item [Cosine similarity] measures similarity between two non-zero vectors based on the cosine of the angle between them. Weakness: it ignores vector magnitude and is undefined for zero vectors, which can lead to errors in similarity measurements.
\item [BLEU] Bilingual Evaluation Understudy, a measure of the similarity between a model's answer and a reference answer. It is commonly used in machine translation and natural language generation tasks. Weakness: It does not take into account the meaning of the words (semantic similarity).
\item [ROUGE] Recall-Oriented Understudy for Gisting Evaluation, a measure of the similarity between a model's answer and a reference answer. It is commonly used in natural language summarization tasks. Additionally to BLEU, it takes into account the longest common sequence (LCS). Weakness: it does not take into account the meaning of the words (semantic similarity).
\item [METEOR] Metric for Evaluation of Translation with Explicit ORdering, a measure of the similarity between a model's answer and a reference answer. It is similar to BLEU, but takes into account word alignment and synonymy. Weakness: It is computationally expensive and can be sensitive to the reference translations used.
\item [Human evaluation] human judges provides a subjective measure of the quality of the model's answers. It is considered the gold standard for evaluating QA models. Weakness: it is time-consuming and can vary greatly depending on the individual evaluators and their level of expertise.
\end{description}
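Several of the metrics above have straightforward reference implementations. The sketch below implements single-query versions of Recall@k, MRR, Exact Match, token-level F1, and Jaccard similarity (simplified for clarity: MRR and MAP are normally averaged over many queries, and this `token_f1` ignores duplicate tokens):

```python
def recall_at_k(relevant: set, ranked: list, k: int) -> float:
    """Recall@k: fraction of relevant items found in the top k."""
    return len(relevant & set(ranked[:k])) / len(relevant)

def mrr(relevant: set, ranked: list) -> float:
    """Reciprocal rank for one query: 1/rank of the first relevant hit."""
    for i, item in enumerate(ranked, start=1):
        if item in relevant:
            return 1.0 / i
    return 0.0

def exact_match(prediction: str, reference: str) -> bool:
    """EM: binary, after light normalization."""
    return prediction.strip().lower() == reference.strip().lower()

def token_f1(prediction: str, reference: str) -> float:
    """Token-level F1 between predicted and reference answers."""
    p, r = prediction.lower().split(), reference.lower().split()
    common = len(set(p) & set(r))  # simplification: ignores duplicates
    if common == 0:
        return 0.0
    precision, recall = common / len(p), common / len(r)
    return 2 * precision * recall / (precision + recall)

def jaccard(a: str, b: str) -> float:
    """Jaccard similarity over word sets."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb)

ranked = ["d3", "d1", "d7"]
print(recall_at_k({"d1", "d2"}, ranked, 2))  # d1 found in top 2 -> 0.5
print(mrr({"d1", "d2"}, ranked))             # first hit at rank 2 -> 0.5
print(token_f1("the capital is paramaribo", "paramaribo"))
```

Even these few lines make the listed weaknesses concrete: EM returns 0.0 for a paraphrase, and Recall@k gives the same score wherever the relevant item sits inside the top k.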
\subsubsection{New metrics for free-form/natural language QA}
In the context of free-form QA, standard metrics limit question complexity \citep{chenEvaluatingQuestionAnswering2019} and fail to capture many good answers that are semantically close to the reference. New metrics have therefore been proposed based on pre-trained language models (PLM), with higher correlation to human expectations. The most well-known are:
\begin{description}
\item [BERTScore] \citep{zhangBERTScoreEvaluatingText2020} evaluates the quality of generated text by computing the similarity score between the contextualized embeddings of the generated (answer) text's tokens and those of the reference text.
\item [BARTScore] \citep{yuanBARTScoreEvaluatingGenerated2021} similar to BERTScore, it uses the more recent BART model, pre-trained with a more robust technique (denoising autoencoding) but on a smaller dataset. A BARTScore variant adds faithfulness to the measure.
\item [MAUVE] \citep{pillutlaMAUVEMeasuringGap2021}: measures the similarity between the generated text and the reference text but uses divergence frontiers. It correlates better with human judgement and identifies quality differences.
\item [T5Score] this hybrid metric \citep{qinT5ScoreDiscriminativeFinetuning2022} based on the mT5 model has not yet been compared with MAUVE, but is globally more robust than BERTScore and BARTScore.
\end{description}
However, the paper "\textit{On the Blind Spots of Model-Based Evaluation Metrics for Text Generation}" \citep{heBlindSpotsModelBased2022} (2022) highlights that all these PLM-based metrics have flaws: they can assign a high likelihood to degenerate, repetitive text and can be insensitive to perturbations such as word order shuffling, negation, etc. These blind spots should therefore be taken into account when composing and evaluating \citep{frisoniNLGMetricverseEndtoEndLibrary2022} the best metric for the targeted complex QA application.
\subsubsection{Multi-metrics from HELM}
The Holistic Evaluation of Language Models (HELM \citep{liangHolisticEvaluationLanguage2022}), after referencing a large space of targeted LLM use cases with a focus on QA, identified 7 key categories of metrics required to create useful systems: accuracy, calibration, robustness, fairness, bias, toxicity, and efficiency (speed/cost). For precise definitions of these categories of metrics, please refer to the HELM taxonomy \citep{liangHolisticEvaluationLanguage2022}. From the data provided by this project, we created a new table (see \textbf{[HELM] SOTA QA multi-metrics} \ref{SOTA QA multi-metrics}) assessing the best-performing models on each of these metrics in QA, with a systematic comparison to the current global best performer (REF). We can see that the performance of the models is unequal, and even the best performer may not be the best choice depending on which metrics matter most for your needs. The best-performing model on these metrics was InstructGPT from OpenAI \citep{chiusanoOpenAIInstructGPTBrings2022}. A much smaller model (Anthropic-LM v4-s3) is close to the leading model, which means it should cost much less to operate. The study observed a consistent gap in accuracy between the current open models and non-open models. This gap has been shrinking with the recent release of open models such as BLOOM (176B parameters) \citep{workshopBLOOM176BParameterOpenAccess2022} by BigScience, OPT (175B) \citep{zhangOPTOpenPretrained2022} by Meta, and GLM (130B) \citep{zengGLM130BOpenBilingual2022} by Tsinghua University. The study shows that instruction-tuning can provide a broad set of advantages in terms of accuracy, robustness, and fairness metrics.
The relationship between accuracy and calibration depends on the scenario (task \& domain) and adaptation procedure, with some scenarios showing trade-offs between accuracy and calibration.
Across all scenarios, there is a strong correlation between accuracy, robustness, and fairness, but there are trade-offs where the most accurate model is not the most robust or the fairest.
The study also found performance disparities in models when demographic metadata is available, and low overall average biases and toxicity in model generations, but notes that targeted evaluations are needed to obtain a more detailed characterization.
The study also found no strong trade-off between accuracy and efficiency overall; however, as models become larger, accuracy improves at higher training and inference cost. Only a subset of all models lies on the accuracy-efficiency Pareto frontier for each scenario.
No model leads on all metrics, and QA performance also varies with the scenario (task \& domain) and model, so weighting the 7 metrics, or defining a decision tree among them, and then evaluating on the target scenario is necessary for choosing a model.
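As an illustration, a minimal sketch of such metric weighting with purely hypothetical scores and weights (model names and values are invented for the example):

```python
# Purely hypothetical scores (0-1, higher is better) for two candidate models.
scores = {
    "model_A": {"accuracy": 0.85, "robustness": 0.60, "efficiency": 0.40},
    "model_B": {"accuracy": 0.78, "robustness": 0.75, "efficiency": 0.80},
}
# Weights encode metric preferences for a given deployment scenario.
weights = {"accuracy": 0.5, "robustness": 0.3, "efficiency": 0.2}

def weighted_score(model: str) -> float:
    """Aggregate a model's per-metric scores with scenario weights."""
    return sum(weights[m] * scores[model][m] for m in weights)

# The most accurate model (A) is not the best overall choice here.
best = max(scores, key=weighted_score)
assert best == "model_B"
```

Changing the weights (e.g. putting almost all weight on accuracy) would flip the choice, which is exactly why the weighting must reflect the target scenario.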
\subsubsection{Which metrics for "complex" QA?}
Considering that complex QA is not well defined and that answers depend highly on human expectations and values, we could not identify standard metrics in the literature. It has been proposed to use Bloom's taxonomy to measure question complexity \citep{ullrichUsingBloomTaxonomy2021}, focusing mainly on whether multi-hop reasoning is necessary, but this does not provide a metric for measuring answer relevance. The metrics identified in the previous section for free-form QA can be used when a gold/reference answer exists, but they are likely to be unstable, since a good non-factoid answer can be semantically distant from the gold answer. In this survey we restricted complex questions to those with the following characteristics: non-factoid, multi-step (requiring decomposition), multi-source of knowledge, and higher-order reasoning questions. We could separately measure each skill of a language model on those characteristics to estimate its capacity to answer using decomposition. Using data from BIG bench, we created a summary evaluation of related QA capacities (see \textbf{Fig\ref{Table [BIG] QA complex QA} [BIG] QA complex QA tasks benchmark}). However, this does not assess the end-to-end capacity to provide a relevant final answer.
Current systems like ChatGPT \citep{haqueThinkThisMost2022}, which solve some complex questions with large differences in quality but a clear improvement curve, used human feedback measurements~\citep{baiTrainingHelpfulHarmless2022} (e.g. ranking/preference). A path investigated in the WebGPT \citep{nakanoWebGPTBrowserassistedQuestionanswering2022} and Constitutional AI \citep{baiConstitutionalAIHarmlessness2022} papers is to build one or several Elo scores mapping human preferences competing on different axes (e.g. helpfulness vs harmlessness, compute efficiency) and to maximize the resulting frontiers.
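A minimal sketch of the standard Elo update applied to one pairwise preference judgement (the K-factor and initial ratings are illustrative defaults, not values from the cited papers):

```python
def elo_update(r_winner: float, r_loser: float, k: float = 32.0):
    """Standard Elo update after one pairwise preference judgement."""
    expected_win = 1.0 / (1.0 + 10 ** ((r_loser - r_winner) / 400.0))
    delta = k * (1.0 - expected_win)
    return r_winner + delta, r_loser - delta

# Two candidate answers start at the same rating; annotators prefer A 3 times.
rating_a = rating_b = 1000.0
for _ in range(3):
    rating_a, rating_b = elo_update(rating_a, rating_b)

assert rating_a > rating_b  # A's rating now reflects the preference evidence
```

Maintaining one such rating per axis (helpfulness, harmlessness, efficiency...) yields the competing scores whose frontier the cited papers try to push.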
\subsubsection{Explainability, truthfulness, hallucination metrics}
Recent conferences highlight the recurrent problem of hallucination \citep{jiSurveyHallucinationNatural2022} and call for explainability \citep{leiterExplainableEvaluationMetrics2022, wiegreffeTeachMeExplain2021} and truthfulness \citep{sopranoManyDimensionsTruthfulness2021, linTruthfulQAMeasuringHow2022} when delivering an answer.
Conventional metrics measuring writing quality, including answer quality, are not adequate for quantifying the level of hallucination \citep{jiSurveyHallucinationNatural2022}. Apart from human evaluation, there are no mature metrics; the first proposals are:
\begin{itemize}
\item[--] statistical metrics, which mainly rely on lexical matching \citep{jiSurveyHallucinationNatural2022}, such as PARENT-T, bag-of-vectors sentence similarity (BVSS) \citep{martindaleIdentifyingFluentlyInadequate2019}, and Knowledge F1;
\item[--] model-based metrics, which are expected to handle more complex syntactic and semantic variations but mainly compare the generation to a known gold answer: FEQA \citep{durmusFEQAQuestionAnswering2020}, QAGS \citep{wangAskingAnsweringQuestions2020}, QuestEval~\citep{scialomQuestEvalSummarizationAsks2021};
\item[--] human evaluation, still the most commonly used method given the currently imperfect automatic methods; the most common protocols are (1) scoring, where an annotator rates the hallucination level on a scale, and (2) comparing, where an annotator compares the output texts with baselines or ground-truth references.
\end{itemize}
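To give the flavor of an embedding-based statistical metric such as BVSS, a sketch of bag-of-vectors cosine similarity with invented 2-d word embeddings (real metrics use learned, high-dimensional vectors):

```python
import math

# Toy 2-d embeddings; BVSS-style metrics use learned word vectors instead.
emb = {"paris": (1.0, 0.2), "capital": (0.9, 0.4),
       "france": (0.8, 0.3), "banana": (0.0, 1.0)}

def sentence_vec(tokens):
    """Average ("bag") of the word vectors of the known tokens."""
    vecs = [emb[t] for t in tokens if t in emb]
    n = max(len(vecs), 1)
    return tuple(sum(c) / n for c in zip(*vecs)) if vecs else (0.0, 0.0)

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

answer = "paris capital france".split()
gold = "capital france".split()
off_topic = "banana".split()

# The on-topic answer scores higher than the off-topic one.
assert cosine(sentence_vec(answer), sentence_vec(gold)) > \
       cosine(sentence_vec(answer), sentence_vec(off_topic))
```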
We identified three types of explanation in the literature \citep{wiegreffeTeachMeExplain2021}: highlights, free-text, and structured explanations. These explanations can be intrinsic, explaining the LM's internal logic, or extrinsic, related to external sources and proofs. Metrics for extrinsic explanations can therefore include the number of sources and their authority and reliability. Intrinsic explanations can be measured through their quality: compactness (short and coherent), sufficiency, and comprehensiveness. Some indirect metrics relate to performance on explanation tasks (source identification, fact-checking, coherence...). Explanations are usually evaluated on plausibility and faithfulness (coherence with the decision process); a common approach is to provide a chain of facts that details the reasoning steps taken to reach an answer.
For further details, we invite you to look into main references \citep{wiegreffeTeachMeExplain2021, leiterExplainableEvaluationMetrics2022, jiSurveyHallucinationNatural2022, linTruthfulQAMeasuringHow2022}, the taxonomy in the table~\ref{Table Explainable Evaluation Metrics} "Taxonomy of existing explainable evaluation metrics" \citep{leiterExplainableEvaluationMetrics2022}, and research topics in the further section "Hallucination \& credibility".
\begin{table}[!htp]\centering
\newcolumntype{T}{>{\tiny}c}
\caption{Taxonomy of existing explainable evaluation metrics \citep{leiterExplainableEvaluationMetrics2022}}\label{Table Explainable Evaluation Metrics}
\begin{tabular}{l V{1cm}V{2.5cm}V{1cm}V{1cm}}
\toprule
Work & Type & Method & Goals \\ \midrule
Eval4NLP 2021: (Fomicheva et al. 2021) & Various \\
Rubino, Fujita, and Marie (2021) & FI & Expl. by Design & AL \\
Treviso et al. (2021) & FI & Various & AL \\
SemEval 15/16: (Agirre et al. 2015, 2016) & Various \\
Magnolini, Feltracco, and Magnini (2016) & CAl & Neural Networks & AL, E \\
Yuan, Neubig, and Liu (2021) & CA & Generation Prob. & E \\
Adversarial Attacks (Section 7) & EbE & Perturbations & D, E \\
Kaster, Zhao, and Eger (2021) & EbS/CA & Linear Regression & D, E \\
Sai et al. (2021) & CA & Perturbations & B, D, E \\
\end{tabular}
\caption*{\small{The first column is the research work reference. The second is the explanation type: Concept Attribution (CA), Chunk Alignment (CAl), Feature Importance (FI), Explanation by Example (EbE) and Explanation by Simplification (EbS). The column “Goals” specifies which aspect is measured amongst (B)ias detection, (D)iagnosis, (E)xpressiveness and automated labeling (AL).}}
\end{table}
\subsubsection{Domain/task matrix of performance}
As the performance of a model is unequal across knowledge domains and tasks \citep{srivastavaImitationGameQuantifying2022, workshopBLOOM176BParameterOpenAccess2022, liangHolisticEvaluationLanguage2022}, metrics assessment should be segmented per knowledge domain \& task within a comparison matrix or database, as in the 42 scenarios of HELM \citep{liangHolisticEvaluationLanguage2022}.
\subsection{Cost functions}
Cross-entropy loss is the main objective function used for language models training. This function is adapted to each training objective (see sections \ref{sec_pretraining}, \ref{sec_supervisedtraining}, \ref{sec_PETtraining}, \ref{sec_improvetraining}) and some complex implementations can be done for knowledge distillation \citep{wuOneTeacherEnough2021}.
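As a reminder, a minimal sketch of this loss as the average negative log-likelihood of the target tokens:

```python
import math

def cross_entropy(pred_probs, target_ids):
    """Average negative log-likelihood of the target tokens.

    pred_probs: one probability distribution over the vocabulary per position.
    target_ids: index of the correct token at each position.
    """
    nll = [-math.log(dist[tok]) for dist, tok in zip(pred_probs, target_ids)]
    return sum(nll) / len(nll)

# Two positions over a 3-token vocabulary; the model is fairly confident.
probs = [[0.7, 0.2, 0.1], [0.1, 0.8, 0.1]]
loss = cross_entropy(probs, [0, 1])  # (-ln 0.7 - ln 0.8) / 2 ≈ 0.29
```

In practice frameworks compute this from logits for numerical stability, but the objective being minimized is the same quantity.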
\subsection{Datasets}
To train and evaluate QA/CQA systems, a variety of datasets has been developed covering a wide range of QA tasks and needs. We cover in the following sections:
\begin{enumerate}
\item "QA \& CQA text datasets" (e.g. free text, tables, graphs, databases).
\item "Multimodal QA datasets" (e.g. images, audio, videos, multi format).
\item "(BIG Bench) datasets for Complex QA subtasks" for addressing CQA via complex tasks including decomposition.
\item "Explainable \& truthfulness QA" which helps address hallucination issues.
\item "Local \& multi-lingual datasets" to cover non-English resources.
\item "Reasoning \& instruction fine-tuning (IFT, CoT) datasets"
\item "Dialogue (e.g. Chatbot)"
\item "Generate (e.g. synthesis) or improve datasets".
\end{enumerate}
\subsubsection{QA/CQA text datasets (monomodal)}
Major usage and datasets of QA/CQA focus on text sources:
\begin{description}
\item [MS Marco] (Microsoft Machine Reading Comprehension Dataset): largest publicly available collection of relevance judgments, with 100,000 to 1,000,000 human-generated QA pairs; it has been central to progress in neural IR/QA over the past several years (standard QA task, human performance: Rouge-L: 0.539, BLEU-1: 0.485) \citep{liangHolisticEvaluationLanguage2022, linPretrainedTransformersText2021}.
\item [SQUAD] (Stanford Question Answering Dataset) \citep{rajpurkarSQuAD1000002016}: 100,000+ questions posed by crowdworkers on a set of Wikipedia articles (human performance F1-score: 86.8\%).
\item [SQuAD v2] \citep{rajpurkarKnowWhatYou2018}: add 50,000 unanswerable questions written adversarially by crowdworkers to look similar to answerable ones, to avoid training on unreliable guesses on questions (human performance F1-score: 86.8\%).
\item [TriviaQA] \citep{joshiTriviaQALargeScale2017}: 650K question-answer-evidence triples. TriviaQA includes 95K question-answer pairs authored by trivia enthusiasts and independently gathered evidence documents, six per question on average, with distant supervision for answering. In comparison to other QA datasets in 2017, TriviaQA (1) had relatively complex, compositional questions, (2) considerable syntactic and lexical variability between questions and corresponding answer-evidence sentences, and (3) required more cross sentence reasoning to find answers.
\item [MMLU] (Measuring Massive Multitask Language Understanding)~\citep{hendrycksMeasuringMassiveMultitask2021}: 15,908 multiple-choice questions across 57 different tasks/datasets covering subjects in the humanities, social sciences, hard sciences, and many others (unspecialized human performance accuracy: 34.5\%).
\item [NarrativeQA] \citep{kociskyNarrativeQAReadingComprehension2017}: 46,765 human-generated questions \& answers requiring understanding of stories (books, movie scripts) and summarization abilities (human performance: Bleu-1:44.3, Bleu-4:18.9, Meteor:24.0, Rouge-L:57.1)
\item [NaturalQuestions] (closed-book, open-book) \citep{kwiatkowskiNaturalQuestionsBenchmark2019}: 323,000 questions and documents (the full dataset is 42Gb) consisting of real anonymized, aggregated queries from the Google search engine, providing several long documents (e.g. 5 Wikipedia pages) with a long answer (e.g. a paragraph) and a short answer (one or more entities) if present on the pages, or a null mark if no long/short answer is present (standard human performance: short answers F1: 57.5\%, long answer F1: 73.4\%).
\item [QuAC] \citep{choiQuACQuestionAnswering2018}: >100,000 questions and their corresponding answers, based on dialogues between two persons, where many questions require understanding of the dialog (human performance F1: 80.9\%).
\item [Semi-structured datasets] - semi-structured data with \textbf{tables-and-text} is abundant on the web and in companies. \citet{wangSurveyTableandTextHybridQA2022} list the following datasets: HybridQA, OTT-QA, GeoTSQA, FinQA, TAT-QA, TAT-HQA, MultiHiertt.
For \textbf{table-only or SQL-like data} (not bundled with text), \citet{rogersQADatasetExplosion2022}, in "QA Dataset Explosion: A Taxonomy of NLP Resources for Question Answering and Reading Comprehension", list the following datasets: WikiTableQuestions, TableQA, WikiSQL, WikiOps.
\item [CQA on knowledge bases datasets] - knowledge bases like ontologies and knowledge graphs offer valuable structured symbolic data (e.g. Wikidata, all resources from lod-cloud.net...) that is not always easy for non-experts to query. \citet{lanComplexKnowledgeBase2022} list the following datasets: WebQuestions, ComplexQuestions, WebQuestionsSP, ComplexWebQuestions, QALD series, LC-QuAD, LC-QuAD 2.0, MetaQA Vanilla, CFQ, GrailQA, KQA Pro.
\item [Domain specific datasets] are numerous, such as MedQA \citep{jinWhatDiseaseDoes2020} and TREC-COVID \citep{voorheesTRECCOVIDConstructingPandemic2020} in the medical sector, the latter focusing on Covid-19, or Qasper for QA about NLP research papers \citep{dasigiDatasetInformationSeekingQuestions2021}.
\item [Decomposition and multi-hop datasets] are required for CQA. Table \ref{TableMultiHopDatasets} "Multi-hop QA datasets" \citep{maviSurveyMultihopQuestion2022} lists major multi-hop datasets; BIG \citep{srivastavaImitationGameQuantifying2022} also provides a \textbf{decomposition and multi-step datasets} selections (see section \ref{sec_BIG_Bench_datasets} "BIG Bench datasets for Complex QA subtasks").
\end{description}
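Several of the datasets above report exact match and token-level F1 (SQuAD-style); a minimal sketch of both metrics (the official SQuAD script additionally strips punctuation and articles before matching, omitted here for brevity):

```python
from collections import Counter

def exact_match(prediction: str, gold: str) -> bool:
    """Case-insensitive exact string match."""
    return prediction.strip().lower() == gold.strip().lower()

def token_f1(prediction: str, gold: str) -> float:
    """Harmonic mean of token-level precision and recall."""
    pred, ref = prediction.lower().split(), gold.lower().split()
    common = sum((Counter(pred) & Counter(ref)).values())
    if common == 0:
        return 0.0
    precision, recall = common / len(pred), common / len(ref)
    return 2 * precision * recall / (precision + recall)

assert exact_match("Niels Bohr", "niels bohr")
# Partial credit: 2 of 3 predicted tokens overlap, all gold tokens covered.
assert abs(token_f1("physicist Niels Bohr", "Niels Bohr") - 0.8) < 1e-9
```

Over a dataset, both scores are averaged across questions, taking the maximum over the available gold answers for each question.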
\begin{figure}
\includegraphics[width=\linewidth]{TableMultiHopDatasets.png}
\caption{Multi-hop QA datasets from \citet{maviSurveyMultihopQuestion2022}}
\label{TableMultiHopDatasets}
\end{figure}
\subsubsection{Multimodal QA datasets}
When answering a question, humans build \citep{yangEnhancingMultimodalMultihop2022} a coherent understanding of the world by actively exploring and reasoning over multiple pieces of evidence (multi-hop) from different modalities (multimodal). Natural QA therefore requires leveraging more than text (natural or structured): images, videos, sensors... Training \& evaluation datasets are emerging, such as:
\begin{description}
\item [Image QA datasets]: a recent survey on visual QA and visual reasoning \citep{zakariVQAVisualReasoning2022} provides a full list of image/visual question-answering (VQA) datasets, including reasoning tasks.
\item [Audio QA datasets]: DAQA \citep{fayekTemporalReasoningAudio2019} on audio temporal reasoning, Clotho-AQA \citep{lippingClothoAQACrowdsourcedDataset2022} on binary and multi-choice audio QA.
\item [Video QA datasets]: such as VideoQA \citep{zhongVideoQuestionAnswering2022} for multi-domain QA, MovieQA \citep{tapaswiMovieQAUnderstandingStories2016}/MovieFIB \citep{maharajDatasetExplorationModels2017}/TVQA \citep{leiTVQALocalizedCompositional2019}/KnowIT VQA \citep{garciaKnowITVQAAnswering2019} for movies and shows, MarioQA \citep{munMarioQAAnsweringQuestions2017} for games, PororoQA \citep{kimDeepStoryVideoStory2017} for cartoons, TutorialVQA \citep{colasTutorialVQAQuestionAnswering2020} for tutorials, CLEVRER \citep{maoCLEVRERHumansDescribingPhysical2022} for physical \& causal reasoning.
\item [Multi-modal multi-hop QA datasets]: MultiModalQA/MMQA \citep{talmorMultiModalQAComplexQuestion2021} for multi-modal and multi-hop QA, WebQA \citep{changWebQAMultihopMultimodal2022a} for web multi-modal QA, and MAQA, focusing on negation learning and testing \citep{liMAQAMultimodalQA2023}.
\item [Unified dataset format] is proposed by \citet{xieUnifiedSKGUnifyingMultiTasking2022} to unify multiple formats of different modalities, enabling training, inference and evaluation across multiple tasks and sources.
\end{description}
\label{sec_BIG_Bench_datasets}
\subsubsection{BIG Bench datasets for Complex QA subtasks}
The BIG-Bench benchmark consists of 204 tasks and associated datasets on diverse problems from linguistics, childhood development, math, common-sense reasoning, biology, physics, social bias, software development, and others. Many of these tasks are related to QA and IR, or can be subtasks of a complex question answering pipeline. We created the table "[BIG] QA complex QA tasks benchmark" illustrating such subtasks with the associated metric, average human performance, top human expert performance, maximum state-of-the-art performance with the model name, and the ratio between this model's performance and human performance.
\begin{table}[!htp]\centering
\newcolumntype{T}{>{\tiny}c}
\caption{[BIG] QA complex QA tasks benchmark, focus on decomposition, multi-step, context length, truthful, programmatic, summarization - \tiny{Each task compares "best model vs average human vs expert human" on the metric given in the 2nd column: BLEU and exact (string) match are explained in section \ref{sec_standard_metrics}; "multiple choice grade" is a weighted multiple-choice accuracy between 0-100, where scores for each potential target are specified; "normalized aggregate score" is an aggregation of various metrics on a same baseline}}\label{Table [BIG] QA complex QA}
\scriptsize
\begin{tabular}{l T V{1cm}V{1cm}V{1cm}MV{1cm}V{1cm}V{1cm}}
\toprule
task & metric & avg human & max expert & max model & model conf & delta model / avg h. & delta model / expert \\\midrule
auto\_categorization &BLEU &1 &7 &16 &PaLM-535B-5shots & \textbf{1500\%} & \textbf{129\%} \\
matrixshapes &exact (string) match &4 &60 &35 &PaLM-535B-2shots & \textbf{785\%} &-41\% \\
factuality\_of\_summary &normalized aggregate score &12 &25 &51 &\setstretch{0.3}BIG-G T=0-137B-0shots & \textbf{321\%} & \textbf{102\%} \\
gre\_reading\_comprehension &multiple choice grade &39 &80 &68 &PaLM-535B-1shots & \textbf{74\%} &-15\% \\
gem &normalized aggregate score &23 &30 &39 &PaLM-535B-1shots & \textbf{71\%} & \textbf{31\%} \\
minute\_mysteries\_qa &normalized aggregate score &4 &31 &7 &PaLM-535B-0shots & \textbf{68\%} &-78\% \\
misconceptions &multiple choice grade &64 &90 &81 &PaLM-535B-2shots & \textbf{27\%} &-10\% \\
strategyqa &multiple choice grade &63 &90 &74 &PaLM-535B-5shots & \textbf{17\%} &-18\% \\
fact\_checker &multiple choice grade &72 &89 &84 &PaLM-535B-5shots & \textbf{17\%} &-6\% \\
understanding\_fables &multiple choice grade &67 &100 &77 &PaLM-535B-2shots & \textbf{15\%} &-23\% \\
question\_selection &multiple choice grade &48 &100 &55 &PaLM-535B-1shots & \textbf{14\%} &-45\% \\
vitaminc\_fact\_verification &multiple choice grade &63 &100 &71 &PaLM-535B-2shots & \textbf{12\%} &-29\% \\
logic\_grid\_puzzle &multiple choice grade &40 &100 &44 &PaLM-535B-2shots & \textbf{9\%} &-56\% \\
analytic\_entailment &multiple choice grade &81 &100 &86 &PaLM-535B-5shots & \textbf{6\%} &-14\% \\
what\_is\_the\_tao &multiple choice grade &79 &100 &83 &PaLM-535B-5shots & \textbf{5\%} &-17\% \\
authorship\_verification &multiple choice grade &49 &90 &50 &\setstretch{0.3}BIG-G T=1-137B-0shots & \textbf{3\%} &-44\% \\
misconceptions\_russian &multiple choice grade &65 &100 &59 &PaLM-535B-5shots &-9\% &-41\% \\
boolean\_expressions &multiple choice grade &79 &100 &69 &\setstretch{0.3}BIG-G T=0-137B-0shots &-13\% &-32\% \\
chess\_state\_tracking &normalized aggregate score &57 &97 &48 &PaLM-535B-0shots &-16\% &-51\% \\
evaluating\_information\_ess. &multiple choice grade &39 &70 &28 &\setstretch{0.3}BIG-G sparse-9B-0shots &-28\% &-60\% \\
hhh\_alignment &multiple choice grade &75 &75 &51 &PaLM-535B-5shots &-32\% &-32\% \\
web\_of\_lies &multiple choice grade &81 &100 &54 &\setstretch{0.3}BIG-G T=1-137B-0shots &-33\% &-46\% \\
\textbf{truthful\_qa} &normalized aggregate score &64 &96 &42 &\setstretch{0.3}BIG-G T=0-137B-0shots &-35\% &-56\% \\
spelling\_bee &normalized aggregate score &5 &9 &3 &\setstretch{0.3}BIG-G T=1-137B--1shots &-36\% &-64\% \\
\textbf{multistep\_arithmetic} &normalized aggregate score &10 &25 &6 &\setstretch{0.3}BIG-G T=0-137B-0shots &-43\% &-77\% \\
python\_programming\_challenge &normalized aggregate score &2 &39 &1 &\setstretch{0.3}BIG-G T=1-137B-0shots &-60\% &-98\% \\
long\_context\_integration &normalized aggregate score &5 &17 &2 &GPT-200B-0shots &-62\% &-89\% \\
tracking\_shuffled\_objects &multiple choice grade &65 &100 &24 &PaLM-535B-5shots &-63\% &-76\% \\
checkmate\_in\_one &exact string match &8 &70 &2 &PaLM-535B-2shots &-80\% &-98\% \\
ascii\_word\_recognition &exact string match &86 &100 &15 &PaLM-535B-2shots &-83\% &-85\% \\
\bottomrule
\end{tabular}
\end{table}
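The two delta columns in the table above are relative differences between the best model score and the human baselines; a sketch of the computation (the table was derived from unrounded scores, so recomputing from the rounded integers shown can differ by a point or two on some rows):

```python
def relative_delta(model_score: float, human_score: float) -> int:
    """Percentage gap of the best model relative to a human baseline."""
    return round((model_score - human_score) / human_score * 100)

# gre_reading_comprehension row: model 68 vs average human 39, expert 80.
assert relative_delta(68, 39) == 74   # model beats the average human by 74%
assert relative_delta(68, 80) == -15  # but trails the human expert by 15%
```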
\subsubsection{Explainable \& truthfulness QA datasets}
The veracity and explainability of an answer is a significant challenge for language models, whose answers are mostly provided without evidence, logic, or confidence/trust estimates. Explainability can be trained into models or evaluated through different tasks such as source highlighting, URL provision and importance, claim checking, commonsense checking, answer explanation, logic checking...\newline
\citet{leiterExplainableEvaluationMetrics2022} propose three types of ground-truth explanations: highlights (rationales or feature importance), free-text explanations, and structured explanations. In table \ref{TableExplDatasets} "Explainability tasks datasets" we enriched an existing comparison~\citep{wiegreffeTeachMeExplain2021} listing important \textbf{datasets with explainability tasks} in the field, with their number of instances, mode of creation, explanation type and format, and task.
\citet{rogersQADatasetExplosion2022} proposes an "evidence format" for the explainable part of a dataset composed of Modality (Unstructured text, Semi-structured text, Structured knowledge, Images, Audio, Video, Other combinations) and Amount of evidence (Single source, Multiple sources, Partial source, No sources).
\begin{figure}
\includegraphics[width=\linewidth]{ExplainableNLPv3.png}
\caption{Explainability tasks datasets (data from \citet{wiegreffeTeachMeExplain2021} enriched)}
\label{TableExplDatasets}
\end{figure}
\subsubsection{Local \& multi-lingual datasets}
\subsubsection{Reasoning and Instruction fine-tuning (IFT, CoT) datasets}
Dedicated datasets for specific reasoning abilities \citep{qiaoReasoningLanguageModel2022} have been developed, and existing datasets can be derived to exercise different abilities.
\begin{enumerate}
\item Instruction fine-tuning (IFT) datasets: collections of written instructions mapping an intent declaration to solution logic \& format, which can be model generated (Unnatural Instructions~\citep{honovichUnnaturalInstructionsTuning2022}), built by large community efforts (Super-Natural Instructions~\citep{wangSuperNaturalInstructionsGeneralizationDeclarative2022}), small and high-quality hand-crafted~\citep{wangSelfInstructAligningLanguage2022}, or converted from existing large datasets into instructions~\citep{iyerOPTIMLScalingLanguage2023, weiFinetunedLanguageModels2022}; see also NaturalInstructions~\citep{mishraCrossTaskGeneralizationNatural2022} and PromptSource~\citep{sanhMultitaskPromptedTraining2022}.
\item Decomposition and chain-of-reasoning datasets: in order to improve reasoning capabilities through proper problem decomposition, emerging datasets provide decomposition techniques like chain-of-thought (e.g. the FLAN CoT dataset). In a different manner, the Galactica model was trained on scientific papers where step-by-step reasoning was wrapped between two <WORK> tokens \citep{taylorGalacticaLargeLanguage2022}, both to explicitly learn reasoning and to activate a working memory that standard LLMs lack.
\item Reasoning capabilities datasets:
\begin{enumerate}
\item spatial reasoning: bAbI~ \citep{westonAICompleteQuestionAnswering2015}, SpartQA~ \citep{mirzaeeSPARTQATextualQuestion2021}
\item temporal reasoning: event order (QuAIL~\citep{rogersGettingCloserAI2020}, TORQUE~\citep{ningTORQUEReadingComprehension2020}), event attribution to time (TEQUILA~\citep{jiaTEQUILATemporalQuestion2018}, TempQuestions~\citep{jiaTempQuestionsBenchmarkTemporal2018}), script knowledge (MCScript~\citep{ostermannMCScriptNovelDataset2018}), event duration (MCTACO~\citep{zhouGoingVacationTakes2019}, QuAIL~\citep{rogersGettingCloserAI2020}), temporal commonsense knowledge (MCTACO~\citep{zhouGoingVacationTakes2019}, TIMEDIAL~\citep{qinTIMEDIALTemporalCommonsense2021}), factoid/news questions whose correct answers change with time (ArchivalQA~\citep{wangArchivalQALargescaleBenchmark2022}, SituatedQA~\citep{zhangSituatedQAIncorporatingExtraLinguistic2021}), and temporal reasoning in a multimodal setting (DAQA~\citep{fayekTemporalReasoningAudio2020}, TGIF-QA~\citep{jangTGIFQASpatioTemporalReasoning2017});
\item belief states: Event2Mind~ \citep{rashkinEvent2MindCommonsenseInference2018}, QuAIL~ \citep{rogersGettingCloserAI2020};
\item causal relations: ROPES~ \citep{linReasoningParagraphEffects2019}, QuAIL~ \citep{rogersGettingCloserAI2020}, QuaRTz~ \citep{tafjordQuaRTzOpenDomainDataset2019}, ESTER~ \citep{hanESTERMachineReading2021};
\item other relations between events: subevents, conditionals, counterfactuals etc. ESTER~ \citep{hanESTERMachineReading2021};
\item entity properties and relations: 20 social interactions (SocialIQa~\citep{sapSocialIQaCommonsense2019}), properties of characters (QuAIL~\citep{rogersGettingCloserAI2020}),
physical properties (PIQA~\citep{biskPIQAReasoningPhysical2020}, QuaRel~\citep{tafjordQuaRelDatasetModels2018}), numerical properties (NumberSense~\citep{linBirdsHaveFour2020});
\item tracking entities: across locations (bAbI~\citep{westonAICompleteQuestionAnswering2015}), in coreference chains (Quoref~\citep{dasigiQuorefReadingComprehension2019}, resources in the Winograd Schema Challenge family~\citep{sakaguchiWinoGrandeAdversarialWinograd2019}).
Arguably, the cloze-style resources based on named entities also fall into this category (CBT~\citep{hillGoldilocksPrincipleReading2016}, CNN/DailyMail~\citep{hermannTeachingMachinesRead2015}, WhoDidWhat~\citep{onishiWhoDidWhat2016})
\end{enumerate}
\end{enumerate}
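As an illustration of how existing datasets can be converted into instructions, a minimal templating sketch (the template wordings are invented; real IFT collections such as those cited above mix many diverse templates per task):

```python
# Illustrative templates; real IFT datasets (e.g. FLAN, PromptSource) use
# many hand-written phrasings per task to improve generalization.
TEMPLATES = [
    "Answer the following question.\nQuestion: {question}\nAnswer:",
    "Question: {question}\nGive a short answer.",
]

def to_instruction_examples(question: str, answer: str):
    """Expand one QA pair into one training example per template."""
    return [{"prompt": t.format(question=question), "completion": answer}
            for t in TEMPLATES]

examples = to_instruction_examples("What is the capital of France?", "Paris")
assert len(examples) == len(TEMPLATES)
```

The same expansion applied across thousands of existing datasets is essentially how instruction-converted corpora are built at scale.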
\subsubsection{Dialogue datasets}
The "Dialogue System Technology Challenge" started as an initiative to provide a common testbed for dialog state tracking and is now a reference for dialog datasets \citep{zhangAutomaticEvaluationModeration2021}, with several tracks (challenges) each year. The most recent tracks were released for \href{https://dstc10.dstc.community/tracks}{DSTC10} and \href{https://dstc11.dstc.community/tracks}{DSTC11 in 2022}.
Some other reference datasets are:
\begin{itemize}
\item[--] CommaQA \citep{khotLearningSolveComplex2021} on complex tasks solving by talking to agents.
\item[--] SODA \citep{kimSODAMillionscaleDialogue2022}, with a million dialogue exchanges with social commonsense contextualization.
\item[--] DeliData \citep{karadzhovDeliDataDatasetDeliberation2022} for multi-party problem solving with deliberation.
\item[--] TIMEDIAL \citep{qinTIMEDIALTemporalCommonsense2021} for temporal commonsense reasoning.
\end{itemize}
\subsubsection{Generate (e.g. synthesis) or improve datasets}
Creating datasets is very expensive but often necessary for domain adaptation. A growing trend is the \textbf{generation of synthetic QA datasets} from models~\citep{jeronymoInParsv2LargeLanguage2023} or from unstructured text, using techniques such as ICT \citep{leeLatentRetrievalWeakly2019}, GPL \citep{wangGPLGenerativePseudo2022}, GenQ \citep{thakurBEIRHeterogenousBenchmark2021}, Promptagator \citep{daiPromptagatorFewshotDense2022}, COCO-DR \citep{yuCOCODRCombatingDistribution2022}.
Other techniques, like natural language augmentation \citep{dholeNLAugmenterFrameworkTaskSensitive2022}, aim at enriching existing datasets for more robust training through transformation and data filtering.
An interesting paper from \citet{yuanReStructuredPretraining2022} highlights the "signals" present in datasets for learning knowledge and capabilities. They propose the RST method \citep{yuanReStructuredPretraining2022}, which restructures the pre-training dataset, enriching these signals so that a model learns substantially more from the same data.
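Most of the synthetic-generation techniques above produce candidate questions from passages and then filter inconsistent pairs; a minimal sketch with placeholder functions (both would be LLM calls in a real pipeline, e.g. with round-trip consistency filtering in the style of Promptagator):

```python
# Hypothetical stand-ins: a real pipeline would call an LLM for both steps.
def generate_question(passage: str) -> str:
    """Placeholder question generator (an LLM prompt in a real system)."""
    return "What is this passage about: " + passage.split(".")[0] + "?"

def answerable(question: str, passage: str) -> bool:
    """Placeholder consistency filter (round-trip QA check in a real system)."""
    return passage.split(".")[0] in question

def synthesize_qa(passages):
    """Generate synthetic (question, passage) pairs and keep consistent ones."""
    pairs = [(generate_question(p), p) for p in passages]
    return [pair for pair in pairs if answerable(*pair)]

corpus = ["Transformers use attention. They scale well.",
          "BLEU counts n-gram overlap. It ignores meaning."]
dataset = synthesize_qa(corpus)
assert len(dataset) == 2
```

The filtering step is what keeps a cheap synthetic corpus from degrading the retriever or reader trained on it.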
\section{Architectural patterns}\label{sec_architectural_patterns}
Architectures of QA systems have evolved considerably in recent years with the arrival of transformer architectures and large language models. We first quickly review typical pre-transformer architectures, then transformers with large language models, new trends, and hybrid architectures. Will we move toward gigantic knowledge models, or toward more complex, composed architectures, perhaps networks of smaller specialized models and other components? We do not have the answer, but the following sections give an overview of major trends in augmenting LLMs to answer complex questions.
\subsection{Typical architectures before transformers, a modular QA approach}
Typical architectures of QA systems before transformers could be grouped in the following complementary approaches/methods:
\begin{description}
\item [Rule-based approach/methods] - These systems are based on a set of predefined rules that the system uses to answer questions.
\item [Retrieval-based approach/methods] - These systems use a search engine or database to find answers to questions.
\item [Information extraction approach/methods] - These systems use natural language processing (NLP) to extract relevant information from text documents and often leverage information retrieval (IR) systems.
\item [Knowledge-based approach/methods] - These systems store and retrieve information from a knowledge base.
\item [Case-based reasoning systems] - These systems use a database of previously solved problems to find solutions to new questions.
\item [Hybrid architectures] - The task-oriented approaches above can be assembled (e.g. the "IBM DeepQA Architecture, 2010" \citep{ferrucciBuildingWatsonOverview2010}) to deliver a more advanced QA system, and can be integrated with natural language models, for example to understand the initial question.
\end{description}
\subsection{LLM with transformers breakthrough... and Limits}
The emergence of deep feedforward layers, which can learn and infer a wide range of relationships with the input, followed by the rise of the attention mechanism, which allows a model to selectively focus on certain parts of the input for better understanding and contextualization, has led to language models surpassing humans on certain tasks. Current transformer-based language models can be grouped into three types \citep{lewisNaturalLanguageProcessing2022}:
\begin{description}
\item [Encoders only (e.g. BERT \citep{devlinBERTPretrainingDeep2019}, RoBERTa \citep{liuRoBERTaRobustlyOptimized2019})] encode a sequence of text (input) into a rich representation (vector embedding) which can be used by a task-specific model or function for classification, named entity recognition (NER), semantic similarity measures used in IR and QA, or topic modeling. This is often called bidirectional attention: these models take the context of the words before and after the target word into account, which allows them to perform better on some tasks. BERT is one of the best-known encoder-only models, and RoBERTa is an optimized version of BERT.
\item [Decoders only (e.g. GPT-3 \citep{zongSurveyGPT32022})] complete an input sequence (usually a text prompt) with the most probable next words (generation). This left-to-right generation can be very rich, such as writing a story or answering a question (e.g. as used by ChatGPT). The input prompt can be formatted with appropriate instructions (prompt \& instruction engineering) to use the generated text as a task-specific model (e.g. classification, summarization, decomposition...). This is often called causal or autoregressive attention (non-causal decoders exist but have limited adoption in the literature). The GPT family is among the best-known decoder-only models, known for its ability to generate human-like text.
\item [Encoders-Decoders (e.g. T5 \citep{raffelExploringLimitsTransfer2020}, BART \citep{lewisBARTDenoisingSequencetoSequence2019}, BigBird \citep{zaheerBigBirdTransformers2021})] encode the input text and decode it into a different form (text-to-text mapping); they are suitable for translation, summarization, or generating a response to a question. They consist of both an encoder and a decoder, where the encoder generates a fixed-size representation of the input text and the decoder generates the output text. T5 is known for its multi-task ability, BART is mainly used for text generation and summarization, and BigBird can process much longer text sequences than other models.
\end{description}
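The attention difference between the encoder- and decoder-style models above can be sketched as attention masks. The following minimal illustration is ours (the function names do not come from any library): an encoder sees the whole sequence, a decoder only the past.

```python
def bidirectional_mask(n):
    """Encoder-style (BERT-like) attention: every position may attend
    to every other position, before and after the target token."""
    return [[1] * n for _ in range(n)]

def causal_mask(n):
    """Decoder-style (GPT-like) attention: position i may only attend
    to positions j <= i, which enforces left-to-right generation."""
    return [[1 if j <= i else 0 for j in range(n)] for i in range(n)]
```

For a sequence of length 3, \verb|causal_mask(3)| is the lower-triangular matrix \verb|[[1,0,0],[1,1,0],[1,1,1]]| while the encoder mask is all ones; an encoder-decoder combines both, bidirectional over the input and causal over the output.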
\textbf{Large language models (LLM) limits}\label{sec_LLMlimits}: scaling up language models has been shown to predictably improve performance \citep{weiEmergentAbilitiesLarge2022} on a wide range of downstream tasks. The HELM \citep{liangHolisticEvaluationLanguage2022} and BIG \citep{srivastavaImitationGameQuantifying2022} studies show that the state of the art on most scenarios is led by these very large models, which nonetheless still fall short in several respects (e.g. fairness, robustness across tasks \& domains, reasoning). Complex question answering is even more demanding and requires pushing these limits further. Many additional components are used or investigated to address the limitations of these models through hybridization with an LLM. For example, ChatGPT and InstructGPT \citep{baiTrainingHelpfulHarmless2022} added reinforcement learning with human feedback to the base GPT-3 large language model to markedly improve \citep{mahowaldDissociatingLanguageThought2023} their answer performance (e.g. calibration, human expectation alignment). We therefore decided to cover in this study the improvement of the language model itself (see next section) and the hybridization patterns which can overcome different limits of base models (below). The following challenges have been identified in our systematic review for complex QA using language models:
\begin{enumerate}
\item \textbf{Domain adaptation \& task specialization, and knowledge maintenance.}
\item \textbf{Bias} - many types of bias exist, but the focus is on race, gender and religion.
\item \textbf{Scalability} - both at training and inference.
\item \textbf{Question context improvement} - clarification question, question expansion, dialog.
\item \textbf{Question decomposition strategy, multi-hop/step, question to action design} - key element for complex question answering.
\item \textbf{Reasoning} - logic, process, methods, chain of thought (CoT), causality, learning from code.
\item \textbf{Alignment to human expectation \& values in answer} - capturing \& ensuring alignment, managing trade-off in competing expectations/values, cultural \& social alignment.
\item \textbf{Hallucination / Veracity / Explainability / Security} - sine qua non in many domains.
\item \textbf{Long form question answering, long-term dependency in reasoning, multi-document/source summarization} - most LLMs are designed with important length limitations impacting: size of input knowledge, reasoning length dependencies, answer size.
\item \textbf{Multi-modal search (text, table, images)} - much knowledge and many world models cannot be captured with text only.
\item \textbf{Time dimension} - Time reasoning, knowledge update \& forget, sequence \& workflow understanding.
\item \textbf{Data sensitivity protection} is rarely covered but will be key to leverage and protect data which are not public (privacy, intellectual property, organisation/company sensitive, government sensitive...).
\end{enumerate}
\subsection{LLM hybrid architectural design patterns to augment base LLM}\label{sec_hybridLLMpatterns}
To address the different limits of LLMs and the skills identified for complex QA, we survey architectural components which can be added to a base LLM: another general-purpose or specialized LLM, a fine-tuned model, a search engine, a software application, a code interpreter... To help design the different ways to augment an LLM, at inference or training time, even if some may overlap, we have grouped them into the following table of key hybrid architectural patterns.
\tiny{
\begin{longtable}{p{0.55\linewidth} p{0.45\linewidth}}
\label{Table2LLMArchPatterns}\\
\toprule
\multicolumn{2}{p{\linewidth}}{\textbf{1. LLM base transformer}: model usually trained on a large corpus of web resources to properly model target language(s) and various degrees of knowledge, common sense and reasoning capabilities. It is the core building block of all the architectures below.}. \\\midrule {\textbf{S}trengths: leverages knowledge from large unstructured data, can handle wide and versatile knowledge, some long-range dependencies and reasoning.
\newline \textbf{W}eaknesses:} training is too long to easily acquire new data, hallucinates with confidence, acquiring reasoning capabilities requires very large models, no internal mechanism to protect sensitive data. & \textbf{e.g.} encoders BERT \citep{devlinBERTPretrainingDeep2019} and RoBERTa \citep{liuRoBERTaRobustlyOptimized2019} (optimized BERT); decoders GPT-2/GPT-3 \citep{brownLanguageModelsAre2020}; encoders-decoders T5 \citep{raffelExploringLimitsTransfer2020}, BART, longer sequences with BigBird \citep{zaheerBigBirdTransformers2021}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{2. LLM + Task-specific head}: connected layers or a separate module trained to specialize on a particular task using the output of the LLM.}. \\\midrule {\textbf{S}trengths: SOTA performance on a specific task and adapts a pre-trained LLM to a domain.
\newline \textbf{W}eaknesses:} requires a structured dataset for training, limited to the specific task it has been designed for, may struggle with more general or open-ended tasks. & \textbf{e.g.} BERT with a classification head, GPT-2 with a translation head, RoBERTa with a question answering head, RoBERTa with a sequence labeling head \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{3. LLM + Prompt/instruction tuning module}: discover the best prompt, context and instructions to query an LLM for given tasks and domains. This tuning can occur before or during supervised training, or be dynamically added to a question prompt.}. \\\midrule {\textbf{S}trengths: improves LLM performance on one or many tasks without the high cost of retraining, and can dynamically adapt reasoning skills to context.
\newline \textbf{W}eaknesses:} very sensitive to slight prompt variations, may require a lot of context. & \textbf{e.g.} for prompt tuning see section \ref{sec_PETtraining}, programmatic retrieval-augmented in-context learning \citep{khattabDemonstrateSearchPredictComposingRetrieval2023}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{4. LLM + Question/task decompose, plan, act module}: efficient break down of complex tasks into addressable subtasks following a resolution plan.}. \\\midrule {\textbf{S}trengths: efficient at solving more complex tasks by converting into several manageable subtasks and an efficient resolution plan (a priori, or iterative, recursive). \newline \textbf{W}eaknesses:} more time and resources to implement, depending on implementation may struggle with long context and reasoning dependencies. & \textbf{e.g.} iterated decomposition w/ reasoning process supervision \citep{reppertIteratedDecompositionImproving2023}, links reasoning and acting decomposition \citep{yaoReActSynergizingReasoning2022},
unsupervised QA decomposition \citep{perezUnsupervisedQuestionDecomposition2020}, Text Modular Networks learns to decompose w/ existing models \citep{khotTextModularNetworks2021}, Talk2Data for high-Level QA decomposition \citep{shiTalk2DataHighLevelQuestion2021},
DeepQA uses fact-based QA decomposition \citep{kalyanpurFactbasedQuestionDecomposition2012}, learns to decompose compound QA w/ RL \citep{yangLearningDecomposeCompound2018}, successive prompting decomposition for CQA \citep{duaSuccessivePromptingDecomposing2022}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{5. LLM + Semantic Information Retriever}: incorporate external sources rather than storing everything in the LLM model. Can be improved with an RL mechanism.}. \\\midrule {\textbf{S}trengths: incorporates up-to-date external sources without impacting LLM size, allows much smaller models (RETRO is 1/25 the size of GPT-3 for the same performance), allows control over sources (sensitivity, explainability).
\newline \textbf{W}eaknesses:} may struggle with tasks that require more abstract or creative reasoning, may be limited by the quality and coverage of the external sources. & \textbf{e.g.} Deepmind RETRO \citep{borgeaudImprovingLanguageModels2022}, Facebook DrQA \citep{chenReadingWikipediaAnswer2017}, FiDO \citep{dejongFiDOFusioninDecoderOptimized2022}, Atlas \citep{izacardAtlasFewshotLearning2022}, training RL agents to query external knowledge \citep{liuAskingKnowledgeTraining2022}, Toolformer \citep{schickToolformerLanguageModels2023}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{6. LLM + Symbolic/structured Information Retriever}: leverage symbolic \& structured information (e.g. KG, ontologies, SQL). }. \\\midrule {\textbf{S}trengths: faster cold-start tasks and easier domain adaptation with low data, better ability to follow rules and concepts, more effective handling of evolving information, enriched context (ontologies, thesaurus, taxonomy, syntax, metadata, annotations).
\newline \textbf{W}eaknesses:} neuro-symbolic integration is complex, producing symbolic data is very time-consuming. & \textbf{e.g.} UNIQORN \citep{pramanikUNIQORNUnifiedQuestion2022}, Heterformer \citep{jinHeterformerTransformerArchitecture2022}, UnifiedSKG \citep{xieUnifiedSKGUnifyingMultiTasking2022}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{7. LLM + Program (software or service via API)}: leverage capabilities of external task specialized software tools with associated proven robustness and performance (e.g. math solver, simulation, WWW search engine).}. \\\midrule {\textbf{S}trengths: Leverage capabilities of external modules with proven performance and robustness in IR, logic, world modelling. \newline \textbf{W}eaknesses:} QA end-to-end learning can be difficult. & \textbf{e.g.} physics with MindsEye \citep{liuMindEyeGrounded2022}, WebGPT \citep{nakanoWebGPTBrowserassistedQuestionanswering2022}, SeeKeR \citep{shusterLanguageModelsThat2022}, Toolformer \citep{schickToolformerLanguageModels2023}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{8. LLM + Code interpreter}: generate code to delegate complex tasks well handled by a compiler/solver; can also learn complex logic by learning program input/output.}. \\\midrule {\textbf{S}trengths: leverages robust reasoning \& algorithmic capabilities, and the language ecosystem. \newline \textbf{W}eaknesses:} may struggle with tasks that require a deep understanding of the underlying context or concepts. & \textbf{e.g.} PAL \citep{gaoPALProgramaidedLanguage2023}, solving math problems via cooperative reasoning \citep{zhuSolvingMathWord2022} or program synthesis \citep{droriNeuralNetworkSolves2022}, an LM self-improving its programming capabilities \citep{haluptzokLanguageModelsCan2022}, Codex \citep{chenEvaluatingLargeLanguage2021}, AlphaCode \citep{liCompetitionLevelCodeGeneration2022}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{9. LLM + Human/AI RL feedback}: learn optimal policy for a goal (answer quality, safety, best data sourcing...).}. \\\midrule {\textbf{S}trengths: support in most difficult challenges of LLM: human expectation alignment and personalisation, knowledge enrichment, safety control, quality control. \newline \textbf{W}eaknesses:} complexity/time for human feedback and convergence (could be mitigated with an additional AI feedback learning from human feedback). & \textbf{e.g.} RLHF \citep{ouyangTrainingLanguageModels2022, baiTrainingHelpfulHarmless2022, daniels-kochExpertiseProblemLearning2022, anonymousTaskAmbiguityHumans2022, ganguliRedTeamingLanguage2022, perezDiscoveringLanguageModel2022, baiConstitutionalAIHarmlessness2022, perezDiscoveringLanguageModel2022}, RLHP \citep{menickTeachingLanguageModels2022}, RLAIF \citep{baiConstitutionalAIHarmlessness2022}, RL search algorithms MCTS \citep{huInteractiveQuestionClarification2020, zhuSolvingMathWord2022, laurentLearningFindProofs2022, yangChainThoughtImitation2022} or DiL-piKL \citep{bakhtinMasteringGameNoPress2022} or PPO with human feedback in Diplomacy \citep{bakhtinMasteringGameNoPress2022}, WebGpt \citep{nakanoWebGPTBrowserassistedQuestionanswering2022}, InstructGPT \citep{ouyangTrainingLanguageModels2022}, chatGPT, GopherCite \citep{menickTeachingLanguageModels2022}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{10. Cascaded/chained/looped LLM}: solve complex problems step by step as a pipeline of multiple sequential requests to an LLM; this sequence can also be iterative or recursive.}. \\\midrule {\textbf{S}trengths: allows humans easier design and control over the execution process, can solve problems of higher complexity than core LLM skills by breaking them down, provides a causal chain useful for explainability. \newline \textbf{W}eaknesses:} may be less effective at tasks that require more context and long reasoning dependencies. & \textbf{e.g.} solving by cascading language models \citep{dohanLanguageModelCascades2022}, AI Chains \citep{wuAIChainsTransparent2022}, in a collaborative visual chain of prompts~\citep{wuPromptChainerChainingLarge2022}, logical and robust reasoning with selection-inference~\citep{creswellSelectionInferenceExploitingLarge2022}, human-readable multi-step logical deduction on scientific QA improving accuracy and faithfulness \citep{creswellFaithfulReasoningUsing2022}, iteratively prompting an LLM \citep{wangIterativelyPromptPretrained2022, yangRe3GeneratingLonger2022, duaSuccessivePromptingDecomposing2022}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{11. LLM + Veracity/evidence checker}: provide truthfulness and mitigate the hard LLM hallucination problem by providing sources and veracity assessment.}. \\\midrule {\textbf{S}trengths: ensures the credibility and reliability of information, mitigates the hallucination problem. \newline \textbf{W}eaknesses:} may be limited by the quality and coverage of the external sources. & \textbf{e.g.} GopherCite supports answers with verified quotes \citep{menickTeachingLanguageModels2022}, logic-regularized reasoning for interpretable fact verification \citep{chenLORENLogicRegularizedReasoning2022}, survey on automated fact-checking \citep{guoSurveyAutomatedFactChecking2022}, hallucinated content detection \citep{zhouDetectingHallucinatedContent2021}, RL approach for explainability using entailment trees \citep{liuRLETReinforcementLearning2022}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{12. LLM + Router or Task discriminator}: route task/domain to the best model/expert with adapted instructions.}. \\\midrule {\textbf{S}trengths: can accelerate LLM training, improves inference performance by routing to the most appropriate model with appropriate instructions. \newline \textbf{W}eaknesses:} currently complex to implement and maintain, shared reasoning and long-term dependencies might be weaker. & \textbf{e.g.} universal discriminator for zero-shot generalization \citep{xuUniversalDiscriminatorZeroShot2022}, Branch-Train-Merge for fast parallel training of expert LMs \citep{liBranchTrainMergeEmbarrassinglyParallel2022}, DEMIXLayers \citep{gururanganDEMixLayersDisentangling2021}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{13. LLM + Dialog module}: enable dialog with the requester and long context capture; can include an ontology to help structure task definition and tracking. }. \\\midrule {\textbf{S}trengths: better understanding of complex contexts, problems and concepts through human interaction and progressive refinement and solving. \newline \textbf{W}eaknesses:} might not fit the targeted usage format, more time and resources to implement. & \textbf{e.g.} GODEL \citep{pengGODELLargeScalePreTraining2022}, OPAL \citep{chenOPALOntologyAwarePretrained2022}, CommaQA \citep{khotLearningSolveComplex2021}, ChatGPT \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{14. LLM + Read-write memory}: add an external memory to the LLM, allowing it to process unbounded inputs, improve long-term capacities, and store and pass information between multiple inferences.}. \\\midrule {\textbf{S}trengths: without LLM modification can simulate any algorithm, process arbitrarily large inputs, strengthen controllability, long-term dependencies (for reasoning, dialogs, summarization, retrieval, algorithms...) and robustness by incorporating counterfactual and irrelevant contexts. \newline \textbf{W}eaknesses:} currently does not scale easily with increasing model size. & \textbf{e.g.} universal memory augmented large LM \citep{schuurmansMemoryAugmentedLarge2023}, Recurrent Memory Transformer \citep{bulatovRecurrentMemoryTransformer2022}, long-term open-domain conversations \citep{xuGoldfishMemoryLongTerm2021}, working memory for scientific reasoning in Galactica \citep{taylorGalacticaLargeLanguage2022}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{15. LLM + Generator/Verifier}: generate many potential solutions, checks for consistency, groups and classifies to come up with the best alternative answers.}. \\\midrule {\textbf{S}trengths: solving very complex unsolved tasks by innovative problem solving.
\newline \textbf{W}eaknesses:} resource intensive / costly. & \textbf{e.g.} AlphaCode \citep{liCompetitionLevelCodeGeneration2022}, DiVeRSe \citep{liAdvanceMakingLanguage2022}, CoRe (cooperative reasoning) \citep{zhuSolvingMathWord2022}, self-consistency for chain of thought \citep{wangSelfConsistencyImprovesChain2022}, training verifiers to solve math \citep{cobbeTrainingVerifiersSolve2021}, STaR bootstraps reasoning with reasoning \citep{zelikmanSTaRBootstrappingReasoning2022}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{16. LLM + Multimodal}: search and reason over knowledge beyond text (image, audio, video, sensors...).}. \\\midrule {\textbf{S}trengths: enables leveraging knowledge not present in text and crossing latent knowledge by combining modes.
\newline \textbf{W}eaknesses:} integration complexity (representation, alignment, reasoning, generation...) \& cost; explainability and hallucination are even more challenging. & \textbf{e.g.} foundations and recent trends in multimodal ML \citep{liangFoundationsRecentTrends2022}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{17. LLM Ensembling and LLM composing}: ensemble different LLMs to answer a question or subtask, or automatically construct an architecture composed of multiple components/models.}. \\\midrule {\textbf{S}trengths: improves inference accuracy, generalization and stability.
\newline \textbf{W}eaknesses:} cost \& complexity. & \textbf{e.g.} ensemble learning for validation and explanation \citep{huyAutoencodingLanguageModel2022},
parallel training of expert LLM \citep{liBranchTrainMergeEmbarrassinglyParallel2022},
ensembles of LLM via iterative consensus \citep{liComposingEnsemblesPretrained2022}, automatic neural module networks composition \citep{andreasNeuralModuleNetworks2017}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{18. LLM + (multi) Teacher}: efficiently improve the knowledge of an LLM with one or more teachers expert in given domains/tasks.}. \\\midrule {\textbf{S}trengths: knowledge transfer, knowledge expansion, knowledge adaptation, teaching new reasoning capabilities (e.g. temporal), and multi-task learning. \newline \textbf{W}eaknesses:} cost \& complexity, control of teacher knowledge transfer (e.g. biases, domain). & \textbf{e.g.} Teacher-Student Architecture for Knowledge Learning survey \citep{huTeacherStudentArchitectureKnowledge2022}, better learning from multiple teachers \citep{wuOneTeacherEnough2021}. \\\midrule
\multicolumn{2}{p{\linewidth}}{\textbf{19. LLM + Temporal reasoning}: improve performance on time-related tasks, enhance temporal information understanding, retrieval and reasoning.}. \\\midrule {\textbf{S}trengths: allows temporal estimation, ranking \& clustering, reasoning, and incoherence detection; addresses knowledge forgetting and updating. \newline \textbf{W}eaknesses:} this field is not mature (very few papers). & \textbf{e.g.} TimeBERT: Extending Pre-Trained Language Representations with Temporal Information \citep{wangTimeBERTExtendingPreTrained2022}, Survey of Temporal Information Retrieval and Related Applications \citep{camposSurveyTemporalInformation2014}. \\\midrule
\end{longtable}}
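As a minimal sketch of pattern 5 above (LLM + semantic information retriever), the skeleton below grounds a hypothetical \verb|llm| callable in retrieved passages rather than relying on parametric memory alone. The term-overlap retriever and all names are illustrative assumptions of ours; real systems such as RETRO use learned dense indexes.

```python
def retrieve(query, corpus, k=2):
    """Toy lexical retriever: rank documents by word overlap with the
    query (a stand-in for a dense semantic index; illustrative only)."""
    q = set(query.lower().split())
    ranked = sorted(corpus, key=lambda doc: len(q & set(doc.lower().split())),
                    reverse=True)
    return ranked[:k]

def answer(query, corpus, llm):
    """Retrieval-augmented QA: build a grounded prompt from the top
    passages, then delegate generation to the (hypothetical) `llm`."""
    context = "\n".join(retrieve(query, corpus))
    return llm(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:")
```

The key design choice is that knowledge lives in the corpus, not in the model weights, so sources can be updated, restricted or cited without retraining.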
\small\section{Complex tasks training}\label{sec_training}
Now that we have surveyed the skills to develop, the tasks and challenges to solve, the datasets needed for training, and the specializable blocks on skills of a hybrid architecture on which to dispatch our tasks, how do we train for complex QA? We will see the importance of pre-training, domain adaptation and fine-tuning.
\subsection{Training dataset quality}
Even before pre-training, a very important step is to maximize the quality of the input datasets (e.g. accuracy, completeness, consistency, relevance, uniformity) \citep{budachEffectsDataQuality2022, whangDataCollectionQuality2022}. It is commonly said that a machine learning project can spend up to 80\% of its time on data preparation, with a major part dedicated to data cleaning \citep{pfistererHumanCenteredAutoML2019}. The efficiency of a model on a task is directly and heavily affected by the quality of the training dataset and its improvement \citep{budachEffectsDataQuality2022, whangDataCollectionQuality2022}. We will not dig into this subject because, surprisingly, our review methodology did not preselect any scientific article with the word "quality" in its title, and the word is only anecdotal in abstracts. It might be that this topic is not specific to QA/CQA or language model training but an assumption in any machine-learning-related subject. The developers of GPT-3 spent significant effort filtering for a high-quality training dataset \citep{zongSurveyGPT32022}, and \citet{sunImportanceBuildingHighquality2022} highlight this important preparation task for complex question answering. To improve the quality of your data, please refer to the citations above and to \citet{yuanReStructuredPretraining2022}, which augments the learning signals, a quality aspect, of already high-quality datasets to improve LLM learning.
\subsection{Type of LLM training}
As introduced in the "core concepts" section, training can apply to:
\begin{enumerate}
\item a \textbf{pretrained language model (PLM)} which is trained without supervision (unsupervised or self-supervised), mainly on large text corpora (e.g. Reddit, Wikipedia...), to discover general knowledge and logic. It can then be re-used and augmented with a "head" to be further trained (fine-tuned) on specific tasks (supervised training). It can also be trained on a very large corpus of data (e.g. GPT-3.5) to uncover enough knowledge and logic to be used "as is" without additional training, though often with some additional instructions to better align with requester expectations (e.g. ChatGPT). Kalyan et al (2021) \citep{kalyanAMMUSSurveyTransformerbased2021} identify several types of pre-training: Pretraining from Scratch (PTS), Continual Pretraining (CPT - initialization from an existing PLM), Simultaneous Pretraining (SPT - synchronized mix of general and domain-specific corpus from the beginning), Task Adaptive Pretraining (TAPT - continually adapts the mix of general and specific training examples), Knowledge Inherited Pretraining (KIPT \citep{qinKnowledgeInheritancePretrained2022} - adds knowledge distillation to the process).
\item \textbf{fine-tuned model} on specific task(s) in a supervised manner (each training example is provided with input and expected output solution), by re-purposing a pre-trained language model.
\item adapting an existing model (PLM or fine-tuned model) to a new domain of knowledge (\textbf{domain adaptation} - e.g. COVID19 terms and facts) or to new task(s) (\textbf{knowledge transfer}) leveraging existing knowledge and logic in the model.
\end{enumerate}
\subsection{Pre-training techniques}\label{sec_pretraining}
\subsubsection{Self-supervised learning (SSL)}
This type of machine learning technique, widely used for PLMs, trains on a dataset without any pre-labeled outputs. Instead, the model must learn to generate the correct output itself, using information from the input data. It is often based on an unlabeled training set converted into a supervised coherence task. Kalyan et al (2021) \citep{kalyanAMMUSSurveyTransformerbased2021} identify three major techniques, which can also be combined:
\begin{description}
\item [Generative SSL], depending on the chosen technique, the model learns to predict different targets: (1) the next token based on current tokens (CLM - used by GPT-1, GPT-2, GPT-3 models); (2) masked tokens in a phrase (MLM \citep{devlinBERTPretrainingDeep2019} is the most used technique, but variants exist such as TLM \citep{lampleCrosslingualLanguageModel2019} and Seq2SeqLM - used by RoBERTa, XLM, XLM-R models); (3) reconstructing an original text which has been corrupted (denoising autoencoder, DAE, is used by BART, mBART models).
\item [Contrastive SSL] augments learning by comparison. It is not used alone but to further improve a model like in continual pretraining, to learn sentence-level semantics. Different techniques exist: next-sentence prediction NSP \citep{devlinBERTPretrainingDeep2019}, sentence order prediction SOP \citep{lanALBERTLiteBERT2020}, SimCSE \citep{gaoSimCSESimpleContrastive2022}, SimCLR \citep{chenSimpleFrameworkContrastive2020}, MoCo-v2 \citep{chenImprovedBaselinesMomentum2020}, BYOL \citep{grillBootstrapYourOwn2020}, cross lingual contrastive pretraining XLCo \citep{chiInfoXLMInformationTheoreticFramework2021}.
\item [Adversarial SSL] learns by distinguishing corrupted tokens (replaced or shuffled), can be used alone or in continual pretraining like contrastive. Different techniques exist: replaced token detection (RTD - used by ELECTRA model), multi-lingual replaced token detection (MRTD) is used by XLM-E model, translation replaced token detection (TRTD), shuffled token detection (STD) is used by RoBERTa model.
\item [Hybrid SSL] uses more than one type of SSL - e.g. U-PaLM uses up to 7 denoising objectives (\textbf{mixture-of-denoisers \citep{tayUL2UnifyingLanguage2022}}), BERT uses MLM (generative) and NSP (contrastive), ALBERT uses MLM and SOP (contrastive), InfoXLM uses MLM + TLM (generative) and XLCo (contrastive).
\end{description}
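To make the generative SSL idea concrete, the sketch below turns raw tokens into a masked-prediction training pair. This is our own simplification of BERT's actual scheme (which also keeps or randomly replaces some selected tokens), and the function name is ours.

```python
import random

def mlm_corrupt(tokens, mask_token="[MASK]", p=0.15, seed=0):
    """Generative SSL (MLM sketch): hide roughly a fraction p of the
    tokens; the hidden originals become the labels to recover."""
    rng = random.Random(seed)
    corrupted, labels = [], []
    for tok in tokens:
        if rng.random() < p:
            corrupted.append(mask_token)
            labels.append(tok)    # loss is computed on this position
        else:
            corrupted.append(tok)
            labels.append(None)   # no loss on visible positions
    return corrupted, labels
```

The point is that no human labeling is needed: the raw text supplies both the input and the supervision signal.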
\subsubsection{Transfer learning, domain adaptation, knowledge distillation}
These techniques are also used in supervised learning \& fine-tuning, which we cover in the next section.
\subsubsection{Program execution learning}
This technique \citep{piReasoningProgramExecutors2022} tries to learn, or mimic, how a program works in order to capture either its logic within the specific scope of the program, or more general skills like numerical reasoning, logical reasoning and better multi-hop reasoning. Useful at the pre-training stage, this technique can be viewed as self-supervised learning.
\subsection{Supervised learning \& fine-tuning}\label{sec_supervisedtraining}
Supervised learning is the ancestor and most well-known ML technique. It trains on a labeled dataset to predict the expected output from a given input. This allows the model to learn from the data and make predictions about new, unseen data for a similar task, assuming that the dataset is representative of that new, unseen data. We will see in sections \ref{sec_PETtraining} and \ref{sec_prompting} that task-specific fine-tuning, which can require a lot of compute and examples, can be avoided via complementary strategies like prompt engineering, tuning adapters, soft prompt prefixes, and late prompts.
\subsubsection{(Task specific) Vanilla Fine-Tuning}
Vanilla fine-tuning is commonly used to refer to the basic or standard method of fine-tuning a pre-trained deep learning model based on a task-specific loss: the weights of the few layers near the output (the task-specific head) are updated while the PLM weights are kept fixed. \citet{kalyanAMMUSSurveyTransformerbased2021} highlights that the main drawback is that a PLM, having many parameters, is prone to overfitting on small task-specific datasets, limiting performance. Intermediate fine-tuning or multi-task fine-tuning overcome this.
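This frozen-backbone update can be illustrated with a minimal sketch; the flat parameter dictionary and the \verb|head.| naming convention are our own assumptions, not any library's API.

```python
def finetune_step(params, grads, lr=0.1, trainable_prefix="head."):
    """Vanilla fine-tuning sketch: one gradient step that updates only
    the task-specific head; pre-trained backbone weights stay frozen."""
    return {
        name: (value - lr * grads.get(name, 0.0))
        if name.startswith(trainable_prefix) else value
        for name, value in params.items()
    }
```

In practice the same effect is obtained by marking backbone parameters as non-trainable (e.g. disabling their gradients) before the optimizer step.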
\subsubsection{Intermediate Fine-Tuning (IFT)}
IFT fine-tunes a model using an intermediate dataset with a large number of labeled instances to learn domain knowledge (DAIFT - domain adaptive IFT) or task logic (TAIFT - task adaptive IFT), to avoid overfitting on small final datasets. However, \citet{kalyanAMMUSSurveyTransformerbased2021} warns that IFT may sometimes reduce performance on final tasks \citep{pruksachatkunIntermediateTaskTransferLearning2020}, although the authors showed that it works best for tasks requiring high-level inference and reasoning abilities, such as QA tasks.
\subsubsection{Multi-task learning}
According to \citet{kalyanAMMUSSurveyTransformerbased2021}, training a model to perform \textbf{multiple different tasks} can help the model learn more generalizable features (regularization effect) and improve its performance on multiple tasks (transverse knowledge and skills acquired from multiple datasets). From MQAN \citep{mccannNaturalLanguageDecathlon2018} passing the decathlon of QA to T5 \citep{raffelExploringLimitsTransfer2020} reaching many state-of-the-art benchmarks with one model, the field keeps improving. This learning can be done: \textbf{simultaneously} on all tasks \citep{liuMultiTaskDeepNeural2019}, in \textbf{sequence} \citep{mahajanIdentificationSemanticallySimilar2020}, \textbf{mixed} \citep{piergiovanniAnswerMeMultiTaskOpenVocabulary2022}, or \textbf{optimized per task with a hypernetwork} (e.g. for imbalanced tasks)~\citep{jiPatientOutcomeZeroshot2023}. Multi-task learning can span \textbf{similar tasks from different domains and languages} (e.g. similar summarization tasks \citep{goodwinZeroShotConditionalSummarization2020, baiCrossLingualAbstractiveSummarization2021}), or \textbf{auxiliary tasks} \citep{jinHooksHeadlineLearning2020} to improve different skills.
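The simultaneous-mixing regime can be sketched as a size-proportional task sampler; this toy schedule is our own illustration, not the exact mixing scheme of any cited system.

```python
import random

def mixed_task_schedule(datasets, n_batches, seed=0):
    """Multi-task sketch: pick each training batch's task at random,
    in proportion to dataset size (examples-proportional mixing)."""
    rng = random.Random(seed)
    names = list(datasets)
    weights = [len(datasets[name]) for name in names]
    return [rng.choices(names, weights=weights, k=1)[0]
            for _ in range(n_batches)]
```

Size-proportional mixing is the simplest policy; systems like T5 cap the weight of very large datasets so small tasks are not starved.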
\subsubsection{Instruction fine-tuning} \citep{chungScalingInstructionFinetunedLanguage2022, wangSelfInstructAligningLanguage2022, honovichUnnaturalInstructionsTuning2022}
Recent work from Chung et al \citep{chungScalingInstructionFinetunedLanguage2022}, in addition to multi-task learning, demonstrates the capacity to greatly increase the number of tasks, the reasoning capabilities and the global performance by fine-tuning a pre-trained multi-task model on examples with instructions.
\subsubsection{Meta learning} \citep{debBoostingNaturalLanguage2022, upadhyaySharingLearnLearning2023, wangSuperNaturalInstructionsGeneralizationDeclarative2022} Meta learning is the ability to learn new tasks faster, with less data and time, like "learning to learn" \citep{baxterTheoreticalModelsLearning1998, thrunLearningLearnIntroduction1998}. This ability is well exploited through the capacity to learn instructions addressed to a language model, as exemplified by Tk-INSTRUCT, which supports more than 1600 NLP tasks of 76 types and reaches performance near SOTA supervised models~\citep{wangSuperNaturalInstructionsGeneralizationDeclarative2022}.
\subsubsection{Active learning} \citep{boreshbanImprovingQuestionAnswering2021, jukicSmoothSailingImproving2022, kocielnikCanYouLabel2022, yuAcTuneUncertaintyawareActive2022, buddSurveyActiveLearning2021}
This technique enables a language model to be trained on a small initial set of examples; then, in an iterative manner, the model can request additional labeled data based on its own uncertainty, in order to improve its accuracy on a given task with minimal effort. This greatly reduces the training time and the manual creation of a dataset when required. It can also help to craft better examples and avoid overfitting due to excessive examples on one subject.
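The uncertainty-driven selection step can be sketched as entropy-based uncertainty sampling, one common active learning criterion; \verb|predict_proba| is a hypothetical callable returning the current model's class probabilities, and all names are ours.

```python
import math

def entropy(probs):
    """Shannon entropy of a probability distribution."""
    return -sum(p * math.log(p) for p in probs if p > 0)

def select_for_labeling(pool, predict_proba, budget=2):
    """Active learning sketch: request labels for the pool items the
    current model is least certain about (highest predictive entropy)."""
    return sorted(pool, key=lambda x: entropy(predict_proba(x)),
                  reverse=True)[:budget]
```

The selected items are sent to an annotator, the model is retrained on the enlarged labeled set, and the loop repeats until the budget is spent.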
\subsubsection{Knowledge distillation (KD)} \citep{boreshbanImprovingQuestionAnswering2021}
This technique enables a smaller, more efficient model (student) to be trained to imitate the predictions of a larger, more complex model (teacher), leveraging the knowledge learned by the larger model. For a given QA question/answer pair, the teacher not only provides the answer but can also provide confidence, attention maps and activated features. KD can be used jointly with active learning to reduce even further the number of training examples needed.
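This imitation objective is commonly expressed as a temperature-softened KL divergence between teacher and student outputs; the sketch below over raw logit lists is ours (the standard distillation loss, without the usual mixing with the hard-label loss).

```python
import math

def softmax(logits, T=1.0):
    """Temperature-scaled softmax; higher T softens the distribution."""
    m = max(logits)
    exps = [math.exp((z - m) / T) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(student_logits, teacher_logits, T=2.0):
    """KD sketch: KL(teacher || student) on softened distributions, so
    the student imitates the teacher's confidence over all answers."""
    p = softmax(teacher_logits, T)
    q = softmax(student_logits, T)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)
```

The temperature exposes the teacher's "dark knowledge": the relative probabilities it assigns to wrong answers, which carry more signal than the hard label alone.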
\subsubsection{Multi-view learning} \citep{liLearningDiverseDocument2022, dengMultihopInferenceQuestiondriven2020} This technique learns multiple representations or "views" of the same input data to improve the model's performance on a specific task. The idea is to leverage those different representations to better capture different knowledge facets or aspects of the data, leading to a more nuanced and effective representation for the task at hand. In the case of question-driven summarization, multi-view learning is used to model the relevance to the question and the interrelation among different sentences to generate a more informed and justified summary. This can improve the performance of the model compared to using a single view of the data, as the model is able to capture more diverse and complementary information from multiple perspectives.
\subsubsection{Transfer learning, knowledge \& domain adaptation, continual learning}
These techniques leverage an already trained model that has captured the knowledge and/or logic expected for the target domain or application, which is then fine-tuned using the techniques explained above. This enables faster adaptation to the target usage and makes it possible to address a task even when not enough data is available. Transfer learning can be further divided into inductive (related task) and transductive (same task, new domain) learning, unlabeled-to-labeled transfer (similar to unsupervised pre-training followed by fine-tuning), and feature and parameter transfer (capturing high-level concepts of a domain). A model can be very efficient on a given knowledge domain but later unable to process new questions requiring new or updated facts; this highlights the need for continual knowledge updates in models, which can be covered by continual learning techniques \citep{yuanUnifiedQuestionGeneration2022, scialomFinetunedLanguageModels2022}.
\subsubsection{Reinforcement learning \citep{goyalRetrievalAugmentedReinforcementLearning2022, chiuKnowledgeGroundedReinforcementLearning2022}, Inverse reinforcement learning \citep{zhouInverseReinforcementLearning2020}}
In reinforcement learning, a model learns by interacting with its environment and receiving evaluations of its actions (e.g. rewards vs. punishments). The model can then infer the best policy to maximize the expected result. This technique is used when teaching the model the exact output for each input is non-trivial, or when evaluation is indirect (delayed reward). Reinforcement learning with human feedback is a key technique used to improve the alignment of answers with the expectations and values of the human requester. For example, a QA dialog system can generate possible responses to a user request, and a human moderator then provides feedback on which response is best. This feedback is used to uncover the factors leading to the expected result and to update the system policy so that it produces better responses in the future. This \textbf{key element of future CQA systems design} is further discussed in section \ref{sec_ImprovementLoop_and_kg_capitalization}.
\subsection{Parameter-efficient tuning of a frozen PLM}\label{sec_PETtraining}
As per the HELM study \citep{liangHolisticEvaluationLanguage2022}, larger models lead to better performance, although some architectures allow them to be a bit smaller. These large-scale PLMs are very expensive to retrain or even just fine-tune. The parameter-efficient tuning alternative updates only a small fraction of the model parameters while reaching performance similar to full-model fine-tuning, and sometimes better \citep{dingDeltaTuningComprehensive2022}.
\begin{enumerate}
\item \textbf{Addition-based methods} introduce extra trainable neural modules or parameters that do not exist in the original model or process (Adapters-based Tuning, Prompt-based Tuning).
\begin{itemize}
\item[--] \textbf{Adapters-based Tuning}. It works by adding small adapter layer modules to a PLM and only updating the adapters' parameters when learning a task. He et al. demonstrate that adapter-based tuning outperforms fine-tuning on low-resource and cross-lingual tasks, is more robust to overfitting, and is less sensitive to changes in learning rates \citep{heEffectivenessAdapterbasedTuning2021}. Prompt-free tuning uses task-specific adapters and learnable multi-token label embeddings to enable few-shot fine-tuning with a frozen PLM \citep{mahabadiPERFECTPromptfreeEfficient2022}.
\item[--] \textbf{Prompt-based Tuning}, including \textbf{Prefix tuning} \citep{xieUnifiedSKGUnifyingMultiTasking2022, yuanFewshotQueryFocusedSummarization2022}. This technique integrates additional knowledge for a given task, such as QA or question-driven summarization, into a pre-training strategy without modifying the PLM. The prefix is learnt by training on the task with the same type of dataset, requiring only a few examples; it is then injected as knowledge when running the target task. This can greatly improve the performance of the target task with a small number of trainable parameters (e.g. 0.1\%).
\item[--] Late prompt tuning \citep{liuLatePromptTuning2022}.
\end{itemize}
\item \textbf{Specification-based methods} specify certain parameters in the original model to become trainable while others are frozen; this selection is either heuristic-based or learned.
\item \textbf{Reparameterization-based methods} reparameterize existing parameters into a parameter-efficient form by transformation. One such technique is LoRA \citep{huLoRALowRankAdaptation2021}, which hypothesizes that model tuning has a low intrinsic rank and proposes to optimize a low-rank decomposition of the change to the original weight matrices in the self-attention modules; LoRA matches fine-tuning performance on the GLUE benchmark.
\item Some new approaches emerge, such as hypernetworks \citep{phangHyperTuningAdaptingLarge2022}, which produce soft prompts or adapters through just one forward pass, enabling quick adaptation to unseen tasks.
\end{enumerate}
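As an illustration of the reparameterization idea, a LoRA-style forward pass can be sketched as a frozen weight matrix plus a trainable low-rank update (a minimal pure-Python sketch; dimensions and values are illustrative):

```python
def matvec(M, v):
    """Multiply matrix M (list of rows) by vector v."""
    return [sum(m * x for m, x in zip(row, v)) for row in M]

def lora_forward(x, W, A, B, alpha=1.0, r=1):
    """y = (W + (alpha / r) * B @ A) x. W stays frozen; only the small
    matrices A (r x d_in) and B (d_out x r) are trained."""
    base = matvec(W, x)
    delta = matvec(B, matvec(A, x))
    scale = alpha / r
    return [b + scale * d for b, d in zip(base, delta)]

# With B initialized to zero (as in LoRA), the model starts out
# identical to the frozen PLM.
W = [[1.0, 0.0], [0.0, 1.0]]
x = [3.0, 4.0]
A = [[1.0, 1.0]]            # rank r = 1
B_zero = [[0.0], [0.0]]
B_trained = [[1.0], [2.0]]  # hypothetical values after training
```

Only A and B (2 x r x d parameters instead of d x d) need to be stored per task, which is what makes the method parameter-efficient.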
\subsection{Techniques for improving training}\label{sec_improvetraining}
In addition to the different training techniques above, the following complementary techniques improve different aspects:
\begin{description}
\item [Regularization and early-stopping techniques.]
Like in other deep learning training, dropout, weight decay, and early stopping are important to prevent over-fitting, reduce complexity and help the model learn more generalizable features. Additionally, early-stopping strategies during training reduce time and avoid over-fitting \citep{schusterConfidentAdaptiveLanguage2022}.
\item [Additional mixture-of-denoisers \citep{tayUL2UnifyingLanguage2022} training.]
UL2R proposes to additionally train an already trained LM for a few iterations (0.1\% extra compute) to largely improve its accuracy and reasoning capabilities \citep{tayTranscendingScalingLaws2022}.
\item [Tasks disambiguation for generalization.]
Training on ambiguous examples in different contexts \citep{anonymousTaskAmbiguityHumans2022} can dramatically improve the accuracy of language models trained without large-scale human feedback training (RLHF).
\item [Red teaming for secure/harmless content.]
Automatic red teaming \citep{ganguliRedTeamingLanguage2022, perezRedTeamingLanguage2022} covers different methods to automatically generate test cases that detect offensive content in LMs.
\item [Post pruning.]
A pruning method like SparseGPT \citep{frantarSparseGPTMassiveLanguage2023} can reduce the size of a very large model (GPT family) by 50\% in one shot, without any retraining and with a minimal loss of accuracy.
\item [Cross-lingual learning.] \citep{lampleCrosslingualLanguageModel2019, chiInfoXLMInformationTheoreticFramework2021}
Learning from multiple languages can help the model learn language-agnostic features and knowledge not available in the target language, and improve its performance on tasks that involve multiple languages.
\end{description}
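For instance, the early-stopping logic mentioned above reduces to tracking the validation loss and halting after a patience window (a minimal sketch; the loss values are illustrative):

```python
def best_epoch_with_early_stopping(val_losses, patience=2):
    """Track validation loss per epoch; stop once it has not improved
    for `patience` consecutive epochs, and return the best epoch index."""
    best_loss, best_epoch, bad_epochs = float("inf"), -1, 0
    for epoch, loss in enumerate(val_losses):
        if loss < best_loss:
            best_loss, best_epoch, bad_epochs = loss, epoch, 0
        else:
            bad_epochs += 1
            if bad_epochs >= patience:
                break  # early stop: keep the checkpoint of best_epoch
    return best_epoch
```

In a real training loop, the checkpoint saved at the returned epoch is the one kept.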
\section{Solving complex QA after training? From better asking to always improving}\label{sec_solvingCQA}
Now that we have trained models to acquire the required skills to solve complex QA, let's see how to design questions and instructions (i.e. prompts), and how to create a system able to align with expectations and continually improve its solving capabilities.
\subsection{Asking my question (prompting)}\label{sec_prompting}
In addition to the adage that a "problem well stated is half solved", an LLM can also improve its ability to solve a problem by leveraging information provided with a question. Much key information can be provided when posing a question to an LLM, such as context, additional knowledge, constraints, instructions, examples... Engineering a good prompt (the text provided to an LLM when posing a question) can rival or beat model fine-tuning on QA in many cases~\citep{weiChainThoughtPrompting2022, chowdheryPaLMScalingLanguage2022, weiEmergentAbilitiesLarge2022, kojimaLargeLanguageModels2023}. Prompting techniques are now improving the ability of language models on compositional generalization \citep{drozdovCompositionalSemanticParsing2022, zhouLeasttoMostPromptingEnables2022, duaSuccessivePromptingDecomposing2022}. There are many prompting techniques; we first shortlist the key ones, then expose a taxonomy of prompt reasoning. \\
Shortlist of key prompting techniques:
\begin{description}
\item [Zero-shot prompting] provides a prompt for the task without additional context or guidance.
\item [In-Context learning and Few-shot Prompting] provide a prompt with relevant context for the expected task, helping the LLM to answer better \citep{brownLanguageModelsAre2020, minRethinkingRoleDemonstrations2022, rubinLearningRetrievePrompts2022}. The most well-known example is few-shot learning, which provides a prompt with a few examples of the expected task to help generate the best answer.
\item [Prompt tuning] adjusts the initial prompt, often by trial and error, to improve answer accuracy. This improving process can be manual or automated \citep{jiangInstancewisePromptTuning2022}.
\item [Soft prompt tuning] creates soft prompts \citep{lesterPowerScaleParameterEfficient2021} which are concatenated to the input text. Tokens of this soft prompt are learned vectors optimized end-to-end over a training dataset. See how to train it in section \ref{sec_PETtraining}. Some innovative examples are:
\begin{itemize}
\item[--] {Exploring Universal Intrinsic Task Subspace via Prompt Tuning}: adapt to many NLP tasks with small-scale data by optimizing only a few free parameters in a unified low-dimensional intrinsic task subspace \citep{qinExploringUniversalIntrinsic2022}.
\item[--] {Compositional Task Representations} learns a specific codebook for compositional tasks \citep{shaoCompositionalTaskRepresentations2023}.
\end{itemize}
\item [Chain-of-thought prompting] \citep{weiChainThoughtPrompting2022} provides relevant examples of multi-step reasoning/thoughts up to the solution, to improve reliability or more easily spot errors in the result. With the same model, it largely outperforms state-of-the-art zero- and few-shot results on many advanced natural language processing tasks, as well as fine-tuned models trained with hundreds of times more examples, with the advantage of being interpretable.
\item [Self-consistency] \citep{wangSelfConsistencyImprovesChain2022} samples multiple reasoning paths, then verifies the candidate answers and votes on them \citep{wengLargeLanguageModels2022}.
\item [Least-to-most prompting] \citep{zhouLeasttoMostPromptingEnables2022} improves chain-of-thought with multi-step examples that gradually become more specific or detailed. Chain-of-thought often performs poorly on tasks requiring the solution of problems harder than those in the demonstration examples. To tackle this, LtM first reduces a complex problem into a list of easier subproblems, then sequentially solves these subproblems with gradual complexity. LtM can be combined with self-consistency to improve robustness.
\item [Dynamic least-to-most prompting]~\citep{drozdovCompositionalSemanticParsing2022} is a refinement of least-to-most prompting using the following steps: (1) prompt the LLM to teach it to perform a syntactic parsing of all inputs to create a tree-structured decomposition, (2) use the decomposition to select demonstration examples, (3) linearize the decomposition tree and prompt the model to sequentially generate answers to the subproblems.
\item [Successive prompting] \citep{duaSuccessivePromptingDecomposing2022} develops successive prompting decomposing a complex problem into a first simple problem, with each next subproblem prediction having access to the answers to each previous subproblems.
\item [Maieutic prompting] \citep{jungMaieuticPromptingLogically2022} is inspired by the Socratic way of questioning: it generates a tree of logical explanations down to the truth values that max-satisfy these relations in order to verify veracity. It surpasses many approaches and provides intrinsic interpretations of inference.
\end{description}
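Among these techniques, self-consistency is simple to sketch: sample several reasoning chains, extract their final answers, and take a majority vote (a minimal sketch; the sampled answers are illustrative):

```python
from collections import Counter

def self_consistency_answer(sampled_answers):
    """Majority vote over final answers extracted from independently
    sampled reasoning chains (ties broken by first occurrence)."""
    return Counter(sampled_answers).most_common(1)[0][0]

# Hypothetical final answers from five sampled chains of thought:
chains = ["42", "41", "42", "42", "13"]
```

Each chain is generated with sampling (non-zero temperature), so the diversity of reasoning paths is what makes the vote informative.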
\subsection{Prompting CQA enhancement taxonomy: strategy, knowledge retrieval, reasoning skills}
We suggest re-using the taxonomy from \citet{qiaoReasoningLanguageModel2022} to classify how to enrich the prompting mechanism for complex question answering.
\begin{enumerate}
\item \textbf{CQA / problem solving strategies}
\begin{description}
\item [Prompt Engineering > single-pass] covers how to better design a prompt for solving the question directly
\citep{chenLargeLanguageModels2023, beurer-kellnerPromptingProgrammingQuery2022, zhouTeachingAlgorithmicReasoning2022, fuComplexityBasedPromptingMultiStep2023, rajagopalTemplateFillingControllable2022, paranjapePromptingContrastiveExplanations2021, kojimaLargeLanguageModels2023, zhangAutomaticChainThought2022, shiLanguageModelsAre2022, weiChainThoughtPrompting2022}
\item [Prompt Engineering > multi-step] covers how to better design a prompt for solving the question through the best iterative steps
\citep{zhangImpactSymbolicRepresentations2022, kazemiLAMBADABackwardChaining2022, zhangMultimodalChainofThoughtReasoning2023, reppertIteratedDecompositionImproving2023, wangIterativelyPromptPretrained2022, khotDecomposedPromptingModular2022, jungMaieuticPromptingLogically2022, shaoSyntheticPromptingGenerating2023, creswellSelectionInferenceExploitingLarge2022, duaSuccessivePromptingDecomposing2022, pressMeasuringNarrowingCompositionality2022, creswellFaithfulReasoningUsing2022, zhouLeasttoMostPromptingEnables2022, khattabDemonstrateSearchPredictComposingRetrieval2023}
\item [Process > Self-Optimization] covers self refining processes (e.g. calibrators, filters)
\citep{yeUnreliabilityExplanationsFewshot2022, wiegreffeReframingHumanAICollaboration2021}.
\item [Process > Ensemble-Optimization] encompasses ensembling techniques used to generate more consistent answers by majority vote or ensembling decision process.
\citep{fuComplexityBasedPromptingMultiStep2023, liAdvanceMakingLanguage2022, shaoSyntheticPromptingGenerating2023, wengLargeLanguageModels2022, wangSelfConsistencyImprovesChain2022}
\item [Process > Iterative-Optimization] fine-tunes iteratively to produce better reasoning processes and answers.
\citep{wangIterativelyPromptPretrained2022, huangLargeLanguageModels2022, zelikmanSTaRBootstrappingReasoning2022}
\item [Use of external module > simulator] uses a computational simulator (e.g. physics engine) to simulate processes and aid LMs in real-world reasoning.
\citep{liuMindEyeGrounded2022, jacksonNaturalLanguageSimulations2022}
\item [Use of external module > code interpreter] uses external code interpreters to enrich LLM answers for complex structures and calculations.
\citep{lyuFaithfulChainofThoughtReasoning2023, yeLargeLanguageModels2023, chenProgramThoughtsPrompting2022, madaanLanguageModelsCode2022, gaoPALProgramaidedLanguage2023}
\end{description}
\item \textbf{Enhancing knowledge retrieval}
\begin{description}
\item [Implicit Knowledge] - improves implicit knowledge to enrich LLM answers through few-shot prompting and reinforcement learning \citep{royReasoningQuantitiesNatural2015, liuWhatMakesGood2022, wangPINTOFaithfulLanguage2022, liuRainierReinforcedKnowledge2022, liuGeneratedKnowledgePrompting2022}.
\item [Explicit Knowledge] - retrieves and provides in-context labeled examples in the prompt to improve explicit knowledge, reduce hallucination and enrich LLM answers \citep{luDynamicPromptLearning2022, yangLogicSolverInterpretableMath2022, heRethinkingRetrievalFaithful2022, suSelectiveAnnotationMakes2022}.
\end{description}
\item \textbf{Enhancing reasoning skills}
\begin{description}
\item [Arithmetic] - improves arithmetic reasoning - e.g. constructing a unified benchmark with increased complexity and scale.
\citep{beurer-kellnerPromptingProgrammingQuery2022, chenProgramThoughtsPrompting2022, zhouTeachingAlgorithmicReasoning2022, fuComplexityBasedPromptingMultiStep2023, luDynamicPromptLearning2022, yangLogicSolverInterpretableMath2022, huangLargeLanguageModels2022, liAdvanceMakingLanguage2022, gaoPALProgramaidedLanguage2023, kojimaLargeLanguageModels2023, zhangAutomaticChainThought2022, shiLanguageModelsAre2022, lewkowyczSolvingQuantitativeReasoning2022, zelikmanSTaRBootstrappingReasoning2022, wangSelfConsistencyImprovesChain2022, zhouLeasttoMostPromptingEnables2022, weiChainThoughtPrompting2022}
\item [Commonsense] - improves commonsense reasoning - e.g. prompt incorporating physical and human interactions with general background knowledge and using benchmark datasets such as CommonsenseQA.
\citep{sunTSGPTwoStageGenerative2022, wangPINTOFaithfulLanguage2022, huangLargeLanguageModels2022, madaanLanguageModelsCode2022, liExplanationsLargeLanguage2022, liuRainierReinforcedKnowledge2022, roySolvingGeneralArithmetic2015, liAdvanceMakingLanguage2022, yeUnreliabilityExplanationsFewshot2022, rajagopalTemplateFillingControllable2022, paranjapePromptingContrastiveExplanations2021, jungMaieuticPromptingLogically2022, kojimaLargeLanguageModels2023, zhangAutomaticChainThought2022, pressMeasuringNarrowingCompositionality2022, zelikmanSTaRBootstrappingReasoning2022, wangSelfConsistencyImprovesChain2022, weiChainThoughtPrompting2022, liuGeneratedKnowledgePrompting2022, wiegreffeReframingHumanAICollaboration2021}
\item [Creativity] - supports ideation and creation process like automated diverse prompting ideas \citep{leePromptiverseScalableGeneration2022, rhyscoxDirectedDiversityLeveraging2021, schickPEERCollaborativeLanguage2022, gozalo-brizuelaChatGPTNotAll2023}.
\item [Logical] - improves logical reasoning - e.g. use of examples that contain synthetic rule bases, entailment trees, and diagnostic benchmarks.
\citep{creswellSelectionInferenceExploitingLarge2022, creswellFaithfulReasoningUsing2022}
\item [Symbolic] - improves symbolic reasoning - e.g. use of examples that contain symbolic rationales, rules...
\citep{gaoPALProgramaidedLanguage2023, khotDecomposedPromptingModular2022, kojimaLargeLanguageModels2023, zhangAutomaticChainThought2022, zelikmanSTaRBootstrappingReasoning2022, wangSelfConsistencyImprovesChain2022, zhouLeasttoMostPromptingEnables2022, weiChainThoughtPrompting2022}
\item [Multimodal] - leverages multimodal reasoning - e.g. incorporate existing multimodal reasoning benchmarks such as ScienceQA, ALERT, into the LLM prompting.
\citep{zhangMultimodalAnalogicalReasoning2023, luLearnExplainMultimodal2022}
\end{description}
\end{enumerate}
\subsection{Improvement loop \& knowledge capitalization}\label{sec_ImprovementLoop_and_kg_capitalization}
The last step in the standard pipeline presented in the introduction is the improvement loop and knowledge capitalization: how to learn from each answer to better align with expectations (e.g. maximal usefulness vs. minimal harm), and to improve skills and knowledge. Our survey methodology has identified "reinforcement learning" and "human-in-the-loop" as the main levers.
\subsubsection{Reinforcement learning methods}
Main RL techniques are:
\begin{itemize}
\item Reward-based reinforcement learning: the LM is trained to maximize a reward signal that is provided by a human or some other external source. This could involve providing the model with a fixed reward for generating correct answers, or using more complex reward functions that take into account the quality and specificity of the model's answers.
\item Inverse reinforcement learning: the LM attempts to infer the reward function from human feedback or other forms of guidance. This can be a more flexible approach, as it allows the model to learn from a wider range of feedback signals and to adapt to changing requirements over time.
\item Imitation learning (including procedure cloning): the LM is trained to imitate the behavior of a human expert or some other reference policy. This can be a useful way to incorporate domain knowledge or other types of expertise into the model, and can help the model learn to generate high-quality answers more quickly by mimicking.
\end{itemize}
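To make reward-based reinforcement learning concrete, the sketch below runs a REINFORCE-style update on a toy two-response "bandit" where a hypothetical reward signal prefers the first response (a minimal illustration, not a production RLHF setup):

```python
import math
import random

def softmax(prefs):
    """Turn action preferences into a probability distribution (the policy)."""
    exps = [math.exp(p) for p in prefs]
    z = sum(exps)
    return [e / z for e in exps]

def reinforce_step(prefs, action, reward, lr=0.5):
    """One REINFORCE update: raise the log-probability of actions
    that earned a reward, lower the others."""
    probs = softmax(prefs)
    grads = [(1.0 if i == action else 0.0) - p for i, p in enumerate(probs)]
    return [p + lr * reward * g for p, g in zip(prefs, grads)]

# Two candidate responses; the (hypothetical) reward signal prefers response 0.
random.seed(0)
prefs = [0.0, 0.0]
for _ in range(200):
    action = 0 if random.random() < softmax(prefs)[0] else 1
    reward = 1.0 if action == 0 else 0.0
    prefs = reinforce_step(prefs, action, reward)
```

After training, the policy assigns most of its probability mass to the rewarded response, which is the basic mechanism RLHF scales up with learned reward models.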
\subsubsection{Human-in-the-loop and RLHF} \citep{wangPuttingHumansNatural2021}
Human-in-the-loop is meant to improve the outcome of an event or process via user input in the system loop. Humans can intervene at many steps of a QA system, from task definition or data creation to final answer assessment. We focus here on the QA feedback loops. A human can explicitly validate, rank, correct, or provide guidance for improvement, or implicitly rate via click-through. For each question, the answer is expected to best fit the user's intention \citep{leikeScalableAgentAlignment2018}. Those intentions are explicit on one side (following instructions) and implicit on the other (the answer should be helpful, truthful, and neither biased, toxic, nor harmful) \citep{askellGeneralLanguageAssistant2021, baiTrainingHelpfulHarmless2022}.
We therefore need to:
\begin{itemize}
\item[--] capture user rich explicit and implicit feedback through different human-in-the-loop input feedback.
\item[--] estimate and maximize user explicit and implicit intentions satisfaction for each answer and also in total through diverse reinforcement learning (RL) techniques.
\end{itemize}
This approach is often called RLHF (reinforcement learning with human feedback) \citep{ouyangTrainingLanguageModels2022, baiTrainingHelpfulHarmless2022, daniels-kochExpertiseProblemLearning2022, anonymousTaskAmbiguityHumans2022, ganguliRedTeamingLanguage2022, baiConstitutionalAIHarmlessness2022} and is illustrated in figure \ref{RLHF CAI}, enhanced by an AI supervision process to better scale and reduce human workload and biases.
\begin{figure}
\includegraphics[width=\linewidth]{InstructGPT-RLHF_CAI.png}
\caption{Reinforcement learning with human feedback enhanced with AI supervision (this figure is a composition of 2 figures extracted from \citet{ouyangTrainingLanguageModels2022, baiConstitutionalAIHarmlessness2022})}
\label{RLHF CAI}
\end{figure}
When comparing the current best-in-class language model GPT-3 (175B parameters) to the InstructGPT model (1.3B parameters, more than 100 times smaller), prompt answers from this small model built with RLHF are preferred by humans 85\% of the time \citep{ouyangTrainingLanguageModels2022}.
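A central ingredient of such RLHF pipelines is a reward model fitted on pairwise human preferences. A commonly used pairwise ranking loss can be sketched as follows (a minimal illustration of the general idea, not the exact formulation of the cited works; the reward values are hypothetical):

```python
import math

def preference_loss(reward_chosen, reward_rejected):
    """Pairwise ranking loss for a reward model trained on human
    preference data: -log sigmoid(r_chosen - r_rejected).
    Low when the model scores the preferred answer above the rejected one."""
    margin = reward_chosen - reward_rejected
    return -math.log(1.0 / (1.0 + math.exp(-margin)))
```

The fitted reward model then supplies the scalar reward signal used by the RL step in place of direct human judgments, which is what makes the feedback scalable.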
However, learning from human expertise has limits:
\begin{enumerate}
\item \textbf{scaling cost \& selection}: manual labeling of data is slow and expensive, so it may be restricted to a few wealthy organizations or performed with less expertise. To greatly reduce this, Su et al. \citep{suSelectiveAnnotationMakes2022} propose vote-k, an unsupervised graph-based selective annotation method yielding a drastic reduction in annotation and more robust learning, while Anthropic \citep{baiConstitutionalAIHarmlessness2022} introduces different supervised and RL techniques (SL-CAI, RLAIF (RL with AI feedback), RL-CAI) which can learn from far fewer labelers and generate higher-quality labeling.
\item \textbf{labeler biases}: longer RLHF training can bias a language model towards stronger political views (e.g. on gun rights, immigration) and desires to pursue specific goals (e.g. resource acquisition) and their preservation \citep{perezDiscoveringLanguageModel2022}. AI supervision from Anthropic \citep{baiConstitutionalAIHarmlessness2022}, driven by principles, could allow more controllable and transparent feedback behaviors.
\item \textbf{expertise problem}: even if the same questions are evaluated by different persons and settled by majority agreement, some questions may require specific expertise to be correctly analyzed. The problem formalization and a query-teacher selection solution are discussed in "The Expertise Problem: Learning from Specialized Feedback" \citep{daniels-kochExpertiseProblemLearning2022}.
\item \textbf{harmlessness vs helpfulness trade-off}: "helpfulness tends to increase harmfulness, since models are willing to obey pernicious requests, and conversely models trained to be harmless tend to be more evasive and generally less helpful" \citep{baiConstitutionalAIHarmlessness2022}. This competitive objective solving is well discussed and solutions provided in "Training a Helpful and Harmless Assistant with Reinforcement Learning from Human Feedback" \citep{baiTrainingHelpfulHarmless2022} and recent "Constitutional AI: Harmlessness from AI Feedback" \citep{baiConstitutionalAIHarmlessness2022}.
\end{enumerate}
\subsubsection{Continuous improvement throughout the CQA pipeline}
This reinforcement loop can bring improvement at many stages of the QA pipeline, such as:
\begin{enumerate}
\item {Question understanding and context improvement} demonstrated in reinforced clarification question generation \citep{pyatkinReinforcedClarificationQuestion2022} and within dialogue \citep{huInteractiveQuestionClarification2020}.
\item {Decomposition strategies} in learning to decompose compound questions~ \citep{yangLearningDecomposeCompound2018}, automatic generation of socratic subquestions \citep{shridharAutomaticGenerationSocratic2022}, decision-making with multi-step expert advices on the web \citep{philippDecisionMakingMultiStepExpert2019}.
\item {Query construction (prompting)} in reinforced knowledge introspector for commonsense QA \citep{liuRainierReinforcedKnowledge2022}, optimizing discrete text prompts \citep{dengRLPromptOptimizingDiscrete2022}, improving prompt in-context policy iteration \citep{brooksInContextPolicyIteration2022}.
\item {Information retrieval} in reinforced browser-assisted QA with human feedback \citep{nakanoWebGPTBrowserassistedQuestionanswering2022}, knowledge-grounded QA \citep{chiuKnowledgeGroundedReinforcementLearning2022}, retrieval augmented process \citep{goyalRetrievalAugmentedReinforcementLearning2022}, answering with verified quotes \citep{menickTeachingLanguageModels2022}, querying external knowledge \citep{liuAskingKnowledgeTraining2022}, reasoning and acting in multiple search \citep{yaoReActSynergizingReasoning2022}.
\item {Answer generation} reinforcement mainly with instructions/expectation alignment (quality, safety, ambiguity...) in training a helpful and harmless assistant from human feedback \citep{baiTrainingHelpfulHarmless2022}, improving it with AI feedback \citep{baiConstitutionalAIHarmlessness2022}, aligning with natural language goals \citep{zhouInverseReinforcementLearning2020}, benchmarking with preference \citep{leeBPrefBenchmarkingPreferenceBased2021}.
\item {Knowledge capitalization}: we did not find any articles related to QA progressive capitalization with a reinforcement loop. However, QA entailment tree \citep{ribeiroEntailmentTreeExplanations2022, tafjordEntailerAnsweringQuestions2022, liuRLETReinforcementLearning2022}, self-consistency \citep{huangLargeLanguageModels2022}, compositional solving above model capacity \citep{drozdovCompositionalSemanticParsing2022, shaoCompositionalTaskRepresentations2023} seem potential ways to capitalize from answers to answers building a faithful and truthful explainable tree of information and this kind of tree has already been used in a RL process~ \citep{liuRLETReinforcementLearning2022}.
\end{enumerate}
\section{Limitations and research topics for solving more complex QA and problems}\label{sec_limits_and_research}
In the architectural patterns section, we listed the most frequent topics identified as challenges or limits of LLMs. After reviewing the collected literature and identifying different solutions in this study, some limits stand out as tougher research problems on the way to more complex QA and problem solving:
\begin{itemize}
\item The hallucination problem which limits a clear expectation of credibility/truthfulness in "Alignment to human expectation \& values in answer".
\item The scalability problem which we can extend to compute and costs limits.
\item Data availability \& quality, which limits "domain adaptation \& task specialization" and "bias".
\item Data multi-sensitivity in LLM: this point is nearly uncovered.
\item Question decomposition of very complex problems and its explainability.
\end{itemize}
A recent paper from Meta \citep{mialonAugmentedLanguageModels2023} surveys LLM augmentation with more details on reinforcement learning. It also proposes additional research topics, such as an optimal trade-off between getting knowledge in or out of the model, which would help to better design the modules presented in section \ref{sec_architectural_patterns}, and extending the LLM decomposition and planning module presented in section \ref{sec_architectural_patterns} to be a central orchestrator.
\subsection{Hallucination \& credibility}
Recent debates about Galactica \citep{taylorGalacticaLargeLanguage2022, WhyMetaLatest} and ChatGPT \citep{rudolphChatGPTBullshitSpewer2023} shed light on the limits and credibility of such language models concerning hallucination. They generate plausible-looking statements that are irrelevant or factually incorrect, and predict without giving clues about which part of a false claim goes wrong; even the sources given may not be trustworthy. They even have difficulty learning correct associations between entities from a factual text corpus (e.g. Wikipedia). The explainability of an answer with a supporting citation is a pointer but does not mean it is true. We have identified different avenues of research to address the challenge of hallucinations, such as:
\begin{itemize}
\item More robust training \& prompting (self-consistency, context optimization, prompt tuning, denoising...) \citep{lyuFaithfulChainofThoughtReasoning2023}.
\item Detect hallucination \citep{zhouDetectingHallucinatedContent2021}.
\item Provide references, traceability, faithful explanation logic \citep{chenLORENLogicRegularizedReasoning2022} or the emerging field of entailment tree explanation \citep{ribeiroEntailmentTreeExplanations2022, liuRLETReinforcementLearning2022}.
\item Automated fact-checking \citep{guoSurveyAutomatedFactChecking2022}.
\item Identify faithfulness performance per tasks/domain \citep{liangHolisticEvaluationLanguage2022} and biases, to better ensemble experts \citep{choubeyMoFEMixtureFactual2021}.
\item Contrastive learning to reduce hallucination \citep{sunContrastiveLearningReduces2022}.
\item Reinforcement learning from human feedback, including red teaming, boosted with AI supervision (RLHP, RLHF, RLAIF), seems the strongest area of research for improving QA, including hallucination \citep{baiConstitutionalAIHarmlessness2022, ganguliRedTeamingLanguage2022, baiTrainingHelpfulHarmless2022}.
\end{itemize}
\subsection{Compute, scaling... Costs}
More than 8 million TPUv4 hours is the time taken to train the PaLM 540B-parameter model \citep{tayTranscendingScalingLaws2022}. For the far smaller T5 model (11B parameters) and its variants, the cost of the project is estimated at \$10 million~\citep{sharirCostTrainingNLP2020}. These models continue to scale, so compute time for training is reserved to a few organizations. Operational costs for usage (inference) are less impressive \citep{liangHolisticEvaluationLanguage2022} but limit the use cases considering inference latency and the minimal required hardware. We can apply standard model-size reductions like quantization, distillation, pruning, and early stopping at training. Different research avenues try to reduce the required compute and costs and to invert this scaling law.
\begin{itemize}
\item Frozen PLM techniques: we presented prompt tuning and parameter-efficient tuning in previous sections; there is constant research on related approaches re-using an already trained LLM to avoid re-training it, or to keep re-training minimal.
\item Retrieval augmented LM: keeping a maximum of information out of the model while making it easy to access and update without any re-training has important potential, but is often less efficient and may require equal computation when comparing "total additional answer generation time" vs "training"; new techniques try to close the gap \citep{dejongPrecomputedMemoryOnthefly2023}.
\item Scaling in-context learning: in-context learning greatly improves task efficiency without re-training but is limited by maximum input length constraints due to quadratic complexity in computation and memory; different techniques allow scaling it efficiently \citep{haoStructuredPromptingScaling2022, choPromptAugmentedLinearProbing2022, martinsInftyFormerInfinite2022}.
\item Mixture of denoisers: in "Transcending Scaling Laws with 0.1\% Extra Compute" \citep{tayTranscendingScalingLaws2022}, the same model is trained to the same performance at half the budget using a mixture-of-denoisers ("U-PaLM achieves the same performance as the final PaLM 540B model at around half its computational budget - saving approx. 4.4 million TPUv4 hours"). UL2 proposes a unification of LM learning paradigms \citep{tayUL2UnifyingLanguage2022}.
\item Improved pruning techniques: in "SparseGPT: Massive Language Models Can Be Accurately Pruned in One-Shot" \citep{frantarSparseGPTMassiveLanguage2023}, a model is reduced by more than 50\% of its parameters in one shot, without any retraining and with nearly no loss in accuracy.
\item Mixture of experts, parameter sharing and routing techniques: "Switch Transformers: Scaling to Trillion Parameter Models with Simple and Efficient Sparsity" \citep{fedusSwitchTransformersScaling2022} demonstrated internal routing to expert layers in large language models, limiting compute to part of the whole model so that model size can scale without increasing compute. Many researchers propose new MoE designs: HetuMoE, an efficient trillion-scale MoE distributed training system \citep{nieHetuMoEEfficientTrillionscale2022}, evolutionary EvoMoE using dense-to-sparse gates \citep{nieEvoMoEEvolutionalMixtureofExperts2022}, and FlexMoE with dynamic device placement \citep{nieFlexMoEScalingLargescale2023}.
\item Knowledge distillation improvements \citep{blakeneyReduceReuseRecycle2022, zaheerTeacherGuidedTraining2022, wahleCohesiveDistillationArchitecture2023} and dynamic composition of models \citep{xuSurveyDynamicNeural2022} to compose optimal and smaller models.
\item Adaptive computation and training: during training, all samples receive equal computation, yet easy samples could require less work than hard ones; adaptive computing enables sample-dependent computation and achieved around 2--3x computation savings for CALM \citep{schusterConfidentAdaptiveLanguage2022}. It has also been demonstrated with Chinchilla \citep{hoffmannTrainingComputeOptimalLarge2022} that the inference budget can be greatly reduced, while improving accuracy, by spending the same training budget on a much smaller LLM trained on much more data.
\item Dedicated hardware: language models are typically accelerated on GPUs, but an area of research investigates dedicated hardware (e.g. FPGAs, ASICs) to save energy and costs \citep{hongDFXLowlatencyMultiFPGA2022a}.
\end{itemize}
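As a toy illustration of one of the standard size reductions mentioned at the start of this subsection, quantization maps float weights to small integers with a single scale factor. The sketch below is a hypothetical minimal example, not the implementation of any cited system.

```python
# Minimal sketch of symmetric int8 post-training quantization:
# float weights are mapped to integers in [-127, 127] via one scale factor.
# Hypothetical illustration only, not taken from any cited system.

def quantize_int8(weights):
    """Quantize a list of float weights; returns (int8 values, scale)."""
    scale = max(abs(w) for w in weights) / 127.0 or 1.0  # avoid scale == 0
    q = [max(-127, min(127, round(w / scale))) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from the int8 values."""
    return [v * scale for v in q]

weights = [0.12, -1.27, 0.5, 0.0]
q, scale = quantize_int8(weights)
# Each recovered weight is within one quantization step of the original.
assert all(abs(w - r) <= scale
           for w, r in zip(weights, dequantize(q, scale)))
print(q)  # [12, -127, 50, 0]
```

Each weight is stored in one byte instead of four, trading a bounded rounding error for roughly 4x less memory at inference time.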
\subsection{Data availability \& quality}
The skills and training datasets sections, as well as the human feedback section, highlighted the need for specialized, high-quality data to acquire skills and domain knowledge and to calibrate to requester intents. Large language models require huge volumes of data to develop each targeted skill and domain. Wealthy organizations can spend millions to clean or produce data, but even they are limited.
\begin{itemize}
\item Frugal and rich data with AI supervision: we saw in previous sections techniques like dynamic least-to-most, which requires far less data for training while improving accuracy and skills, and active learning, which identifies the best examples for improvement. Such data could increasingly be generated automatically, as in Auto-CoT \citep{zhangAutomaticChainThought2022} and CAI \citep{baiConstitutionalAIHarmlessness2022}.
\item Simulation, distillation and code interpreters (program execution): simulation \citep{liuMindEyeGrounded2022, jacksonNaturalLanguageSimulations2022}, existing models \citep{wahleCohesiveDistillationArchitecture2023}, and code interpreters \citep{haluptzokLanguageModelsCan2022} can provide unlimited examples in different domains of logic and knowledge. We also saw that code interpreters allow learning logic transferable to many domains.
\item More signals: better leverage all available symbolic data with clear signals, whether already structured (Linked Open Data) or re-structured \citep{yuanReStructuredPretraining2022}.
\item Open datasets: throughout the research reviewed in this survey, it is clear that this field has continuously progressed through the availability of existing and new open datasets \citep{jerniteDataGovernanceAge2022}.
\end{itemize}
\subsection{Data multi-sensitivity usage \& protection}\label{sec_data_sensitivity}
How can a QA system ensure that the protection rules associated with each data sensitivity are enforced?
Sensitive data can be personal, but also business-confidential, regulated, or high-risk; the same data can carry multiple sensitivities and should be restricted to only the persons required to know it. Large language models are currently designed without any access control mechanism for the data embedded in the model. Apart from removing all sensitive data, which makes it impossible for authorized persons to use it legitimately (e.g. a doctor working on a patient's medical data), different techniques and strategies have been identified as avenues of research:
\begin{itemize}
\item Data access control through a retriever: language models using a retriever could avoid embedding sensitive data in the model, instead storing it outside the model in a suitable system (e.g. an RDBMS or EDM) incorporating a data access control mechanism.
\item Models per sensitivity: each type of sensitive data could be stored in a dedicated model, with a common frontend managing access control to each model.
\item Data privacy mechanisms \citep{chenTHEXPrivacyPreservingTransformer2022, haoIronPrivateInference2022, huangTextHideTacklingData2020, kimPrivacypreservingTextEmbedding2022, mireshghallahQuantifyingPrivacyRisks2022, quNaturalLanguageUnderstanding2021, quPrivacyAdaptiveBERTNatural2021, zhouTextFusionPrivacyPreservingPretrained2022, xuPrivacyPreservingMachineLearning2021}: QA can ensure protection of private data by implementing privacy-preserving techniques such as homomorphic encryption, local differential privacy, or secure multiparty computation. These techniques can be applied to the input text, token embeddings, and sequence representations in order to protect the data from malicious actors. In addition, QA can ensure privacy by using secure protocols for matrix multiplication and complex non-linear functions such as Softmax, GELU activations, and LayerNorm. QA can also use TextHide which introduces an encryption step to prevent an eavesdropping attacker from recovering private text data. Finally, QA can use TextFusion which employs an adversarial training regime to privatize token representations and make it difficult to accurately recover the complete plain text.
\item Functional encryption: this technique could use encryption to finely restrict data usage based on defined roles and functions (e.g. the right to query medical data)~\citep{xuPrivacyPreservingMachineLearning2021}.
\item Decentralized (federated) approaches with sensitive data: federated learning could keep sensitive data at the source, avoiding sharing and unauthorized usage \citep{chenFedMatchFederatedLearning2021, xuPrivacyPreservingMachineLearning2021, xiaoOffsiteTuningTransferLearning2023}.
\item other techniques \citep{xuPrivacyPreservingMachineLearning2021} could avoid elimination of data by perturbation or confusion, knowledge transfer with privacy guarantee.
\end{itemize}
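The first avenue above, access control at the retriever layer, can be made concrete with a small sketch: sensitive documents stay outside the language model, and the retriever filters them by the caller's role before anything reaches the model. All names and documents below are hypothetical.

```python
# Sketch of data access control at the retriever layer: sensitive documents
# live outside the LM and are filtered per user role at query time.
# All names and documents here are hypothetical.

DOCS = [
    {"text": "public product FAQ", "roles": {"public", "doctor"}},
    {"text": "patient medical record", "roles": {"doctor"}},
]

def retrieve(query, user_roles):
    """Return only documents that match the query AND the caller's roles."""
    return [d["text"] for d in DOCS
            if d["roles"] & user_roles
            and query.lower() in d["text"].lower()]

print(retrieve("medical", {"doctor"}))  # ['patient medical record']
print(retrieve("medical", {"public"}))  # []
```

The model itself never embeds the restricted records, so the usual training-time leakage channel is closed by construction.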
\subsection{Decomposition of very complex QA and explainability}
We defined complex question answering as high-complexity questions that are non-factoid, multi-step (requiring decomposition), demand higher reasoning (constraints, deduction, induction, abduction), and draw on multiple sources. Decomposition is central to this solving process because it breaks a non-solvable complex question (problem) down into solvable questions. Moreover, as we have seen with chain-of-thought and dynamic least-to-most prompting, these traceable steps improve the solving capacities of a given model but also make the answer auditable, truthful, and explainable in case of errors. However, nearly all decomposition examples in the papers reviewed are factoid questions. \citet{duaSuccessivePromptingDecomposing2022} used "Who kicked the longest field goal in the first half?" as the main example of a complex question; it is a factoid question. What if we ask "What are the most adapted LM hybrid architectures to answer complex questions?" That would be a non-factoid question, requiring multiple sources, reasoning, and a compatible decomposition process aligned with an acceptable and auditable scientific methodology. We could learn this decomposition behaviour by cloning a human process \citep{yangChainThoughtImitation2022} or learn to discover it through human contribution. Iterated decomposition \citep{reppertIteratedDecompositionImproving2023}, a human-in-the-loop workflow for process supervision using a language model, makes it possible to address new types of problems and enables users to inspect and intervene in the LM's reasoning process. These two research avenues seem to have high potential, even though they still require a lot of human expert feedback and domain-specific data, and computation is an important limit. The generalization of such a decomposition process for complex non-factoid questions to many domains and practices could be accelerated by the same techniques identified for scaling RLHF, by the research avenues of the "Data availability \& quality", "Compute, scaling... Costs" and "Hallucination \& credibility" sections, and improved with an adapted shortest-path iterative approach.
\section{Conclusion}
In this paper, we present a comprehensive survey of language model hybrid architectures for answering complex questions. We review the various skills required, typical approaches, the datasets and metrics used, the current limits of large language models for complex QA, the potential of hybrid architectures, and better training and prompting strategies for this goal. We identify the main challenges and research avenues for solving more complex questions, including knowledge capitalization, as well as the need to address multi-sensitivity data in language model architectures and potential approaches for doing so. Finally, we outline remaining research avenues and highlight the potential of exploration in this field. This paper aims to provide a comprehensive and useful resource for readers interested in the development of complex non-factoid question answering.
\section{Introduction}
Cancer is an evolutionary process where cellular sub-populations known as sub-clones compete with each other under conditions of Darwinian natural selection\cite{yates2015subclonal}. Resultant Intra-Tumor Heterogeneity (ITH) has been associated with higher likelihood of relapse and increased resistance to therapy in cancer patients\cite{greaves2012clonal, o2015imaging}. The ability to precisely predict how these sub-clones will evolve over time can help clinicians to develop an effective cancer treatment and reduce treatment failures\cite{chkhaidze2019spatially}.
Tumor evolution has mostly been addressed in the literature through mathematical modelling. One early analytical approach uses models from population genetics to quantify the selective advantage of sub-clones, thus enabling predictions of which sub-clones are more likely to grow \cite{nielsen2003estimating}. Until recently, it was almost impossible to observe clonal growth at different time points due to the invasive nature of tissue biopsies. Therefore, the analysis of tumor growth and mutation frequencies was normally conducted at a single time point - from the excised tumor sample.
Next Generation Sequencing (NGS) methods such as targeted sequencing or full exome sequencing, followed by variant calling algorithms, allow the calculation of Variant Allele Frequencies (VAFs) in a sample. A number of quantitative models have been developed to infer the evolutionary events that generated the resulting distribution of allele frequencies \cite{williams2018quantification, graham2017measuring}. These methods allow us to determine whether there is selection occurring in the tumor by analysing the VAF spectra.
All of these tumor evolution models are validated either by fitting the mathematical model to a single instance of tumor VAFs or by using simulated data; they have not been validated on real-world longitudinal samples.
In this paper, we present a novel data-driven approach to predict cancer evolution for real-world data. Our proposed method is based on the intuition that if we can capture the true characteristics of sub-clones within a tumor and represent it in the form of features, a sophisticated machine learning algorithm can be trained to predict its behavior. Our main challenges are as follows:
\begin{itemize}
\setlength\itemsep{0.5em}
\item Accurately identifying the \textbf{number of sub-clones} and their formation and constitution in a given tumor.
\item Feature-based \textbf{representation of sub-clones} to encapsulate the characteristics of underlying mutations towards tumorigenesis.
\item Gathering \textbf{labelled data across timepoints} for application of machine learning models to predict evolution.
\end{itemize}
\section{Methods}
\begin{figure}[H]
\centering
\includegraphics[width=0.95\linewidth]{foresite-dataflow.jpg}\\
\caption{An overview of the proposed method}
\end{figure}
\subsection{Data acquisition and pre-processing}
We used longitudinal data from the TRACERx lung cancer liquid biopsy study by Abbosh et al.~\cite{Abbosh2017PhylogeneticCA}. This includes data from 24 patients monitored for 1-3 years. For these patients, DNA mutational data was generated by liquid biopsy (2-8 samples per patient) and subclonal information was generated with PyClone~\cite{roth2014pyclone}.
\subsection{Feature extraction}
Features were extracted at three levels: mutation, codon, and gene (Figure \ref{fig:features}).
For mutation-level features we utilized the Ensembl Variant Effect Predictor\cite{McLaren2016TheEV}, which provides individual scores around likely consequences and potential impacts for a given mutation, including scores predicting protein effect such as SIFT and PolyPhen.
For codon-level features we applied a statistical model to data from large-scale genomics projects, The Cancer Genome Atlas \cite{Collins2007TheCG} and Genie \cite{aacr2017aacr}, to give a hotspot score per codon based on frequency of observed mutations against background mutability.
For gene-level features we utilized the various scores and metrics produced by the 20/20+ tool \cite{Tokheim2016EvaluatingTE}, which calculates these as part of a machine-learning-based ratiometric method of driver-gene classification through evaluation of the proportion of inactivating mutations and recurrent missense mutations in a gene of interest.
\begin{figure}[H]
\centering
\includegraphics[width=.85\linewidth]{features_poster_new.jpg}
\caption{Final features extracted}
\label{fig:features}
\end{figure}
\subsection{Feature selection and integration}
As different sub-clones have differing numbers of mutations, statistical features including mean, max, min and median were calculated from the sets of mutations to give fixed length feature vectors to represent the sub-clones. The final form of the features matrix is as follows:
\begin{multicols}{2}
\[features\_matrix=\begin{bmatrix}
x_{11} & \dots & x_{1f} \\
\vdots & \ddots & \vdots \\
x_{c1} & \dots & x_{cf}
\end{bmatrix}\]
\\
$c$ : number of sub-clones\\
$f$ : number of features\\
\end{multicols}
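As a hedged sketch of this aggregation step (the feature name below is illustrative, not the study's exact schema, which uses the mutation-, codon- and gene-level scores of Figure \ref{fig:features}), the fixed-length sub-clone vectors could be built as:

```python
# Sketch of collapsing a variable-length set of per-mutation feature values
# into a fixed-length sub-clone vector via mean/max/min/median.
# The feature name below is illustrative, not the study's exact schema.
from statistics import mean, median

def subclone_vector(mutations):
    """mutations: list of dicts, one per mutation, feature name -> score."""
    vec = []
    for feature in sorted(mutations[0]):
        vals = [m[feature] for m in mutations]
        vec.extend([mean(vals), max(vals), min(vals), median(vals)])
    return vec

# A sub-clone with two mutations and a single per-mutation feature:
print(subclone_vector([{"sift": 0.2}, {"sift": 0.8}]))  # [0.5, 0.8, 0.2, 0.5]
```

Because every sub-clone maps to the same number of summary statistics regardless of its mutation count, the resulting rows can be stacked directly into the $c \times f$ matrix above.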
\subsection{Model training and evaluation}
Clonal evolution within a tumor is predicted using two approaches: a regression analysis and a classification analysis. We evaluate both approaches with a range of algorithms, hyperparameter optimization, and K-fold cross-validation at k=5.
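The K-fold evaluation at k=5 can be sketched as follows; this is an illustrative index generator, not necessarily the library routine used in the study.

```python
# Illustrative 5-fold cross-validation index generator: every sample lands
# in exactly one test fold, and folds differ in size by at most one.

def k_fold_indices(n_samples, k=5):
    """Yield (train_indices, test_indices) pairs, one per fold."""
    sizes = [n_samples // k + (1 if i < n_samples % k else 0)
             for i in range(k)]
    start = 0
    for size in sizes:
        test = list(range(start, start + size))
        train = [i for i in range(n_samples)
                 if not (start <= i < start + size)]
        yield train, test
        start += size

folds = list(k_fold_indices(10, k=5))
# Union of the test folds covers every sample exactly once.
assert sorted(i for _, test in folds for i in test) == list(range(10))
print([test for _, test in folds])  # [[0, 1], [2, 3], [4, 5], [6, 7], [8, 9]]
```

Each model is then fit on the train indices and scored on the held-out test indices, and the k scores are averaged.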
\section{Results}
\subsection{Regression Analysis}
A random forest regression analysis returned promising results (Figure \ref{fig:regression}). Overall, an $R^2$ of 0.16 was achieved. Although there is much to be improved in terms of model performance, this shows a promising signal, indicating better-than-random performance.
\begin{figure}[H]
\centering
\centering\includegraphics[width=.49\linewidth]{pred1.jpg}
\centering\includegraphics[width=.49\linewidth]{pred2.jpg}
\centering\includegraphics[width=.49\linewidth]{pred3.jpg}
\centering\includegraphics[width=.49\linewidth]{pred4.jpg}
\caption{Predicting change in VAF over time (Days) for four randomly chosen sub-clones}
\label{fig:regression}
\end{figure}
\subsection{Classification Analysis}
As a classification problem the approach is simplified greatly, with the goal of simply predicting whether the sub-clonal VAF will increase or decrease. The classification methods used were logistic regression, random forest, support vector machine (SVM) and artificial neural network. We present results for predicting the change of sub-clones, PCA of sub-clones, and driver mutations (Table \ref{tab:classification}).
\bgroup
\def\arraystretch{1.5}
\begin{table}[H]
\centering
\caption{Performance summary of classification models (ROC-AUC; F1 score)\medskip}
\label{tab:classification}
\begin{tabular}{|c|c|c|c|c|}
\hline
& \textbf{\begin{tabular}[c]{@{}c@{}}Logistic Regression\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}Random Forest\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}SVM\end{tabular}} & \textbf{\begin{tabular}[c]{@{}c@{}}ANN\end{tabular}} \\
\hline
Sub-clones & 0.298; 0.399 & 0.354; 0.333 & 0.422; 0.389 & 0.512; 0.623 \\
\hline
PCA sub-clones & 0.456; 0.519 & 0.573; 0.527 & 0.511; 0.476 & 0.505; 0.421 \\
\hline
Driver mutations & 0.618; 0.348 & \textbf{0.785; 0.643} & 0.566; 0.488 & 0.511; 0.364 \\
\hline
\end{tabular}
\end{table}
\egroup
\section{Conclusions \& Future Work}
In this paper, we have shown that it is possible to predict the course of tumor evolution using prior information. This has profound implications for the clinical management of cancer as a chronic disease. Interestingly, our models performed best on 'driver' mutations, i.e. those thought to be primarily responsible for driving clonal growth. This adds weight to the biological significance of the results.
Clearly there is much to be improved in terms of model performance. Current and future work by our group will include greater data collection, applying improved liquid biopsy analytical methods developed by Cambridge Cancer Genomics, and further refinement of the choice of evolutionary model, which was found to greatly influence performance. Overall, this will enable more accurate prediction of cancer evolution through data-driven machine learning approaches, and ultimately greatly improve cancer care.
\bigskip
\section{Introduction}
A 19th century
theorem of Gegenbauer asserts that for each fixed $k$, the set of positive integers not divisible by the $k$th power of an integer larger than $1$ has asymptotic density $\zeta(k)^{-1}$, where $\zeta(s)$ is the familiar Riemann zeta function. Call such an integer \textsf{$k$th-power-free}, or \textsf{$k$-free} for short.
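To fix ideas, in the case $k=2$ Gegenbauer's theorem says that the squarefree integers have asymptotic density
\[ \zeta(2)^{-1} = \frac{6}{\pi^2} \approx 0.6079, \]
so just under 61\% of all positive integers are squarefree.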
In this note we investigate the frequency with which the sum-of-proper-divisors function $s(n):=\sum_{d\mid n,~d<n} d$ assumes $k$-free values. As we proceed to explain, there is a natural guess to make here, formulated below as Conjecture \ref{conj:main}.
Fix $k\ge 2$. If $n$ is not $k$-free, then $p^k \mid n$ for some prime $p$. Moreover, if $y=y(x)$ is any function tending to infinity, then the upper density of $n$ divisible by $p^k$ for some $p > y^{1/k}$ is at most $\sum_{p > y^{1/k}}p^{-k} = o(1)$. Hence, almost always a non-$k$-free number $n$ is divisible by $p^k$ for some $p^k \le y$. To be precise, when we say a statement about positive integers $n$ holds \textsf{almost always}, we mean that it holds for all $n\le x$ with $o(x)$ exceptions, as $x\to\infty$. (Importantly, we allow the statement itself to involve the growing upper bound $x$.)
It was noticed by Alaoglu and Erd\H{o}s \cite{AE44} that whenever $y=y(x)$ tends to infinity with $x$ slowly enough, $\sigma(n)$ is divisible by all of the integers in $[1,y]$ almost always. (We give a proof below with $y:=(\log\log{x})^{1-\epsilon}$; see Lemma \ref{lem:easylem}.) Hence, almost always $n$ and $s(n)=\sigma(n)-n$ share the same set of divisors up to $y$. Putting this together with the observations of the last paragraph, we see that if $n$ is not $k$-free, then $s(n)$ is not $k$-free, almost always. The same reasoning shows that if $n$ is $k$-free, then $s(n)$ is not divisible by $p^k$ for any $p \le y^{1/k}$, almost always. Thus, if it could be shown that almost always $s(n)$ is not divisible by $p^k$ for any prime $p > y^{1/k}$, then we would have established the following conjecture.
\begin{conj}\label{conj:main} Fix $k\ge 2$. On a set of integers $n$ of asymptotic density $1$,
\[ \text{$n$ is $k$-free} \Longleftrightarrow \text{$s(n)$ is $k$-free}. \]
\end{conj}
The case $k=2$ of Conjecture \ref{conj:main} is alluded to by Luca and Pomerance in \cite{LP15} (see Lemma 2.2 there and the discussion following). Their arguments show that $s(n)$ is squarefree on a set of positive lower density (in fact, of lower density at least $\zeta(2)^{-1} \log{2}$). Conjecture \ref{conj:main}, for every $k \ge 2$, would follow from a very general conjecture of Erd\H{o}s--Granville--Pomerance--Spiro \cite{EGPS90} that the $s$-preimage of a density zero set also has density zero; see Remark \ref{rmk:EGPS} below.
Our result is as follows.
\begin{thm} \label{thm:main} Conjecture \ref{conj:main} holds for each $k\ge 4$.
\end{thm}
To prove Conjecture \ref{conj:main} for a given $k$, it is enough (by the above discussion) to show that almost always $s(n)$ is not divisible by $p^k$ for any $p^k > (\log\log{x})^{0.9}$. The range $p \le x^{o(1)}$ can be treated quickly using familiar arguments (versions of which appear, e.g., in \cite{pollack14}). The main innovation in our argument --- and the source of the restriction to $k\ge 4$ --- is the handling of larger $p$ using a theorem of Wirsing \cite{wirsing59} that bounds the ``popularity'' of values of the function $\sigma(n)/n$.
The reader interested in other work on powerfree values of arithmetic functions may consult \cite{pappalardi03,PSS03,BL05,BP06} as well as the survey \cite{pappalardi05}.
\subsection*{Notation and conventions} We reserve the letters $p, q, P$, with or without subscripts, for primes and we write $\log_k$ for the $k$th iterate of the natural logarithm. We write $P^{+}(n)$ and $P^{-}(n)$ for the largest and smallest prime factors of $n$, with the conventions that $P^{+}(1)=1$ and $P^{-}(1)=\infty$. We adopt the Landau--Bachmann--Vinogradov notation from asymptotic analysis, with all implied constants being absolute unless specified otherwise.
\section{Preliminaries}
The following lemma is due to Pomerance (see \cite[Theorem 2]{pomerance77}).
\begin{lem}\label{lem:pomerance} Let $a, k$ be integers with $\gcd(a,k)=1$ and $k > 0$. Let $x\ge 3$. The number of $n\le x$ for which there does not exist a prime $p\equiv a\pmod{k}$ for which $p\parallel n$ is $O(x (\log{x})^{-1/\phi(k)})$.
\end{lem}
The following lemma justifies the claim in the introduction that $\sigma(n)$ is usually divisible by all small primes. It is well-known but, for lack of a suitable reference, we include the short proof.
\begin{lem}\label{lem:easylem} Fix $\epsilon > 0$. Almost always, the number $\sigma(n)$ is divisible by every positive integer $d\le (\log_2 x)^{1-\epsilon}$.
\end{lem}
\begin{proof} Notice that $d\mid \sigma(n)$ whenever there is a prime $p\equiv -1\pmod{d}$ such that $p \parallel n$. For each $d\le (\log_2 x)^{1-\epsilon}$, the number of $n\le x$ for which there is no such $p$ is $O(x \exp(-(\log_2 x)^{\epsilon}))$, by Lemma \ref{lem:pomerance}. Now sum on $d\le (\log_2 x)^{1-\epsilon}$.
\end{proof}
Our next lemma bounds the number of $n \leq x$ for which $n$ and $\sigma(n)$ possess a large common prime divisor.
\begin{lem}\label{gcd(n,sigma(n)) log log x smooth} Almost always, the greatest common divisor of $n$ and $\sigma(n)$ has no prime divisor exceeding $\log \log x$.
\end{lem}
With more effort, it could be shown that $\gcd(n,\sigma(n))$ is almost always the largest divisor of $n$ supported on primes not exceeding $\log\log{x}$. Compare with Theorem 8 in \cite{ELP08}, which is the corresponding assertion with $\sigma(n)$ replaced by $\phi(n)$.
\begin{proof} Put $y:=\log_2 x$. We start by removing those $n\le x$ with squarefull part exceeding $\frac{1}{2}y$. The number of these $n$ is $O(xy^{-1/2})$, which is $o(x)$ and hence negligible.
Suppose that $n$ survives and there is a prime $p > y$ dividing $n$ and $\sigma(n)$. Since $p \mid \sigma(n)$, we can choose a prime power $q^e \parallel n$ for which $p \mid \sigma(q^e)$. Then $y < p \leq \sigma(q^e) < 2q^e$, forcing $e=1$. Hence, $p\mid \sigma(q)=q+1$ and $q \equiv -1 \pmod p$. Since $pq\mid n$, we deduce that the number of $n$ belonging to this case is at most
\[ \sum_{p > y} \sum_{\substack{q \equiv -1\pmod{p} \\ q\le x}} \frac{x}{pq} \ll x\sum_{p > y} \frac{1}{p} \sum_{\substack{q \le x \\ q\equiv -1\pmod{p}}} \frac{1}{q} \ll x\log_2 x\sum_{p > y} \frac{1}{p^2} \ll \frac{x\log_ 2 x}{y\log y} = \frac{x}{\log_3 x}, \]
which is again $o(x)$. Here the sum on $q$ has been estimated by the Brun--Titchmarsh inequality (see, e.g., Theorem 416 on p.\ 83 of \cite{tenenbaum15}) and partial summation. \end{proof}
The next lemma bounds the number of $n\le x$ with two large prime factors that are multiplicatively close.
\begin{lem}\label{MultiplicativelyCloseLargestandSecondLargestPrimeDivisors} For all large $x$, the number of $n\le x$ divisible by a pair of primes $q_1, q_2$ with
\[ x^{1/10\log_3 x} < q_1 \leq x\quad\text{and}\quad q_1 x^{-1/(\log_3 x)^2} \leq q_2 \leq q_1 \]
is $O(x/\log_3 x)$.
\end{lem}
\begin{proof} The number of such $n$ is at most $x\sum_{x^{1/10\log_3 x}< q_1 \leq x} \frac1{q_1} \sum_{ q_1 x^{-1/(\log_3 x)^2} \leq q_2 \leq q_1} \frac1{q_2}$.
By Mertens' theorem, the inner sum is
\begin{align*} \ll \log\left(\frac{\log{(q_1)}}{\log{(q_1 x^{-1/(\log_3 x)^2})}}\right) + \frac{1}{\log{(q_1 x^{-1/(\log_3 x)^2}})} \ll \frac{\log{x}}{(\log{q_1}) (\log_3{x})^2},
\end{align*}
leading to an upper bound for our count of $n$ of
\begin{equation*} \ll \frac{x\log x}{(\log_3 x)^2} \sum_{x^{1/10\log_3 x} < q_1 \leq x} \frac1{q_1 \log q_1} \ll \frac{x\log x}{(\log_3 x)^2} \cdot \frac{\log_3 x}{\log x} = \frac x{\log_3 x}. \end{equation*}
Here the final sum has been estimated by the prime number theorem and partial summation. \end{proof}
We conclude this section by quoting the main result of \cite{wirsing59}.
\begin{lem}[Wirsing]\label{Wirsing}
There exists an absolute constant $\lambda_0>0$ such that
$$\#\left\{ m \leq x : \frac{\sigma(m)}m = \alpha \right\} \leq \exp\left(\lambda_0 \frac{\log x}{\log \log x} \right)$$
for all $x \geq 3$ and all real numbers $\alpha$.
\end{lem}
\section{Proof of Theorem \ref{thm:main}}
As discussed in the introduction, it is enough to establish the following proposition. From now on, $y:=(\log\log{x})^{0.9}$.
\begin{prop}\label{prop:main} Fix $k\ge 4$. Almost always, $s(n)$ is not divisible by $p^k$ for any $p^k > y$. \end{prop}
We split the proof of Proposition \ref{prop:main} into two parts, according to the size of $p$.
\subsection{\dots when $y < p^k \le x^{1/2\log_3{x}}$} The following is a weakened form of Lemma 2.8 from \cite{pollack14}.
\begin{lem}\label{lem:pollack} For all large $x$, there is a set $\mathcal{E}(x)$ having size $o(x)$, as $x\to\infty$, such that the following holds. For all $d\le x^{1/2\log_3 x}$, the number of $n\le x$ not belonging to $\mathcal{E}(x)$ for which $d\mid s(n)$ is $O(x/d^{0.9})$.
\end{lem}
Summing the bound of Lemma \ref{lem:pollack} over $d=p^k$ with $y < p^k \le x^{1/2\log_3{x}}$ gives $o(x)$. It follows that almost always, $s(n)$ is not divisible by $p^k$ for any $p^k \in (y,x^{1/2\log_3 x}]$.
\subsection{\dots when $p^k > x^{1/2\log_3{x}}$}
The treatment of this range of $p$ is based on the following result, which may be of independent interest.
\begin{thm}\label{thm:divthm} For all large $x$, there is a set $\mathcal{E}(x)$ having size $o(x)$, as $x\to\infty$, such that the following holds. The number of $n\le x$ not belonging to $\mathcal{E}(x)$ for which $d\mid s(n)$ is
\[ \ll \frac{x}{d^{1/4}\log x} \]
uniformly for positive integers $d > x^{1/2\log_3 x}$ satisfying $P^{-}(d) > \log_2 x$.\end{thm}
The crucial advantage of Theorem \ref{thm:divthm} over Lemma \ref{lem:pollack} is the lack of any restriction on the size of $d$. Since $k\ge 4$, when we sum the bound of Theorem \ref{thm:divthm} over $d=p^k$ with $x^{1/2\log_3 x} < p^k < x^2$, the result is $O(x \log_2 x/\log{x})$, which is $o(x)$. So the proof of Theorem \ref{thm:main} will be completed once Theorem \ref{thm:divthm} is established.
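To spell out the summation just mentioned: since $k\ge 4$ gives $d^{1/4} = p^{k/4} \ge p$, and $d = p^k < x^2$ forces $p < x^{2/k}$, Mertens' theorem yields
\[ \sum_{x^{1/2\log_3 x} < p^k < x^2} \frac{x}{p^{k/4}\log x} \le \frac{x}{\log x} \sum_{p < x^{2/k}} \frac{1}{p} \ll \frac{x\log_2 x}{\log x}. \]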
Turning to the proof of Theorem \ref{thm:divthm}, let $\mathcal{E}(x)$ denote the collection of $n \leq x$ for which at least one of the following fails:
\begin{enumerate}
\item[(1)] $n> x/\log x$,
\item[(2)] the largest squarefull divisor of $n$ is at most $\log_2 x$,
\item[(3)] $P^+(n) > x^{1/10\log_3 x}$,
\item[(4)] $P^+(n)^2 \nmid n$,
\item[(5)] $P^+(\gcd(n, \sigma(n))) \le \log_2 x$,
\item[(6)] $P^+(n)>P_2^+(n)x^{1/(\log_3 x)^2}$, where $P_2^{+}(n):=P^{+}(n/P^{+}(n))$ is the second-largest prime factor of $n$.
\end{enumerate}
Let us show that only $o(x)$ integers $n\le x$ fail one of (1)--(6). This is obvious for (1). The count of $n\le x$ failing (2) is $\ll x \sum_{r > \log_2 x,~r\text{ squarefull}} 1/r \ll x/\sqrt{\log_2 x}$, and thus is $o(x)$. That the count of $n\le x$ failing (3) is $o(x)$ follows from standard bounds on the counting function of smooth (friable) numbers (e.g., Theorem 5.1 on p.\ 512 of \cite{tenenbaum15}), or Brun's sieve. The set of $n \leq x$ passing (3) but failing (4) has cardinality $\ll x \sum_{r>x^{1/10\log_3 x}}1/r^2 = o(x)$. Condition (5) is handled by Lemma \ref{gcd(n,sigma(n)) log log x smooth}. That the count of $n\le x$ satisfying (1)--(5) and failing (6) is $o(x)$ follows from Lemma \ref{MultiplicativelyCloseLargestandSecondLargestPrimeDivisors}.
Let $d$ be as in Theorem \ref{thm:divthm}. We separate the count of $n\notin \mathcal{E}(x)$ for which $d\mid s(n)$ according to whether $P^+(n) < d^{1/4} (\log x)^2$ or $P^+(n) \geq d^{1/4} (\log x)^2$.
We first consider $n\notin \mathcal{E}(x)$ with $P^{+}(n) \ge d^{1/4}(\log{x})^2$.
Write $n=mP$, where $P:=P^{+}(n)$. Then $\gcd(m,P)=1$, and
\[ x/m \ge d^{1/4} (\log{x})^2. \]
We can rewrite the condition $d\mid s(n)$ as $$ Ps(m) \equiv -\sigma(m) \pmod d.$$ For this congruence to have solutions, we must have $\gcd(s(m)\sigma(m), d)=1$. Indeed, if there exists a prime $q$ dividing both $\sigma(m)$ and $d$, then from $q\mid d$, we have $q> \log_2 x$, whereas since $d \mid s(n)$, we also have $q\mid s(n)$. But then the divisibility $q \mid \sigma(m)\mid \sigma(n)$ leads to $q \mid \gcd(n, \sigma(n))$, contradicting condition (5) above. Since any common prime divisor of $s(m)$ and $d$ would, by the congruence, have to divide $\sigma(m)$ as well, we must indeed have $\gcd(s(m)\sigma(m), d)=1$.
As such, the above congruence condition on $P$ places it in a unique coprime residue class modulo $d$. Hence, given $m$, the number of possible $P$ (and hence possible $n=mP$) is
$$\ll \frac x{md} + 1 \ll \frac x{md} + \frac x{md^{1/4} (\log x)^2},$$
which when summed over $m \leq x$ is $\ll x/{d^{1/4} \log x}$, consistent with Theorem \ref{thm:divthm}. (We use here the lower bound on $d$.)
It remains to count $n\le x$, $n\notin \mathcal{E}(x)$ where $d\mid s(n)$ and $P^{+}(n) < d^{1/4}(\log x)^2$. For this case, we fix a constant
\[ \lambda> 2\lambda_0, \]
where $\lambda_0$ is the constant appearing in Wirsing's bound (Lemma \ref{Wirsing}). We will assume that $d\le x^{3/2}$, since $s(n) \le \sigma(n) < x^{3/2}$ for all $n\le x$, once $x$ is sufficiently large (e.g., as a consequence of the bound $\sigma(n) \ll n \log\log{(3n)}$; see Theorem 323 in \cite{Hw08}).
We write $n=AB$, where $A$ is the least unitary squarefree divisor of $n/P^+(n)$ exceeding $d^{1/4}\exp\left(\frac{\lambda}2 \frac{\log x}{\log_2 x}\right)$. Such a divisor exists as $n>x/\log x$ has maximal squarefull divisor at most $\log_2 x$, whereupon its largest unitary squarefree divisor coprime to $P^+(n)$ must be no less than
$$\frac 1{d^{1/4} (\log x)^2 \log_2 x} \cdot \frac x{\log x} > d^{1/4} \exp\left(\frac{\lambda}2 \frac{\log x}{\log_2 x}\right).$$
(We assume throughout this argument that $x$ is sufficiently large.) Then \begin{equation}\label{eq:Bupper} B \leq \frac xA \leq \frac x{d^{1/4}}\exp\left(-\frac{\lambda}2 \frac{\log x}{\log_2 x}\right).\end{equation} Furthermore,
\[ P^{+}(A) \le P_2^+(n)< P^+(n)x^{-1/(\log_3 x)^2} < d^{1/4} (\log x)^2 x^{-1/(\log_3 x)^2} < d^{1/4}x^{-\lambda/\log_2 x}. \] Since $A/P^{+}(A)$ is a unitary squarefree divisor of $n/P^{+}(n)$, to avoid contradicting the choice of $A$, we must have $A \leq d^{1/2}\exp\left(-\frac{\lambda}2 \frac{\log x}{\log_2 x}\right)$. Then $\sigma(A) \ll A \log\log{A} \ll A\log_2 x$, so that (for large $x$) $\sigma(A) < d^{1/2}$.
For each $B$ as above, we bound the number of corresponding $A$. First of all, since $\gcd(A, B)=1$, the divisibility $d\mid s(n)$ translates to the congruence $\sigma(A)\sigma(B) \equiv AB \pmod d$. Now, we claim that $\gcd(A\sigma(B), d)=1$: indeed, for any prime $q$ dividing both $A$ and $d$, we must have, on one hand, $q \geq P^-(d)>\log_2 x$, while on the other, $q\mid d \mid s(n)$ and $q \mid A \mid n$ imply $q\mid \gcd(n, \sigma(n))$. This contradicts (5). It follows by an analogous argument that $\gcd(\sigma(B), d)=1$, thus proving our claim. Consequently, the above congruence may be rewritten as
$$\frac{\sigma(A)}A \equiv \frac B{\sigma(B)} \pmod d.$$
Now for some $B$, consider any pair of squarefree integers $A_1$ and $A_2$ satisfying the above congruence along with the conditions $\sigma(A_1), \sigma(A_2) < d^{1/2}$. Then $\sigma(A_1)/A_1 \equiv \sigma(A_2)/A_2 \pmod d$, leading to $\sigma(A_1)A_2 \equiv A_1\sigma(A_2) \pmod d$. But also $$|\sigma(A_1)A_2 - A_1\sigma(A_2)| \leq \max\{\sigma(A_1)A_2, A_1\sigma(A_2)\} < d,$$
thereby forcing $\sigma(A_1)/A_1 = \sigma(A_2)/A_2$. This shows that for each $B$, all corresponding $A$ have $\sigma(A)/A$ assume the same value, whereupon Lemma \ref{Wirsing} bounds the number of possible $A$ by $\exp\left(\lambda_0 \frac{\log x}{\log_2 x}\right)$. Keeping in mind the upper bound \eqref{eq:Bupper} on $B$, we deduce that the number of $n$ falling into this case is at most
$$\frac x{d^{1/4}}\exp\left(-\frac{\lambda}2 \frac{\log x}{\log_2 x}\right) \cdot \exp\left(\lambda_0 \frac{\log x}{\log_2 x}\right) = \frac x{d^{1/4}} \exp\left(\left(\lambda_0 - \frac{\lambda}2\right)\frac{\log x}{\log_2 x}\right).$$
Since $\lambda > 2\lambda_0$, this final quantity is $\ll x/(d^{1/4}\log{x})$. This completes the proof of Theorem \ref{thm:divthm}, and so also that of Theorem \ref{thm:main}.
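As a purely empirical illustration of the quantity being bounded (this sketch is ours and plays no role in the proof), one can sieve $\sigma(n)$ up to a small bound and count how often a fixed modulus $d$ divides $s(n) = \sigma(n) - n$; the count is close to $x/d$, in line with the heuristic behind Theorem \ref{thm:divthm}:

```python
# Empirical sketch (not part of the proof): sieve sigma(n) up to N and
# count how often a fixed modulus d divides s(n) = sigma(n) - n.
N, d = 10**5, 101

# Sieve of the sum-of-divisors function sigma(n).
sigma = [0] * (N + 1)
for i in range(1, N + 1):
    for j in range(i, N + 1, i):
        sigma[j] += i

s = [sigma[n] - n for n in range(N + 1)]  # aliquot sum s(n)

# Sanity checks: s(12) = 16, 6 is perfect, and 220/284 are amicable.
assert s[12] == 16 and s[6] == 6 and s[220] == 284 and s[284] == 220

count = sum(1 for n in range(2, N + 1) if s[n] % d == 0)
print(f"#{{n <= {N} : {d} | s(n)}} = {count}  (x/d = {N // d})")
```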
\begin{rmk}\label{rmk:EGPS} Erd\H{o}s, Granville, Pomerance, and Spiro have conjectured \cite[Conjecture 4]{EGPS90} that $s^{-1}(\mathcal{A})$ has density $0$ whenever $\mathcal{A}$ has density $0$. If this holds, then the conclusion of Proposition \ref{prop:main} follows for each $k\ge 2$: take $$\mathcal{A} =\{ n\text{ divisible by $p^k$ for some $p^k > \log_3(100n)$}\}.$$ Unfortunately, very little is known in the direction of the EGPS conjecture. The record result (still quite weak) seems to be that of \cite{PPT18}, where it is shown that $s^{-1}(\mathcal{A})$ has density $0$ whenever $\mathcal{A}$ has counting function bounded by $x^{1/2+o(1)}$, as $x\to\infty$.
\end{rmk}
\section{Introduction}
\label{sec:intro}
General relativity is nowadays the standard physical theory describing gravitational phenomena. From a theoretical and astrophysical point of view, black holes are among its most successful predictions, allowing the theory to be tested in regimes of extreme gravity. This is the case for the gravitational waves produced by the coalescence of two black holes \citep{gw_merger}, the EHT observations of the galaxy M87 \citep{eht}, and the very recent observation of Sagittarius A* \citep{SgrA1, SgrA2, SgrA3, SgrA4, SgrA5, SgrA6}. Those observations were made possible by detecting radiation from the accretion disk around the black hole. Photons that pass marginally close to the event horizon and escape the gravitational pull delineate the shadow of the black hole \citep{Falcke_2000}. That radiation is generated either because the cloud of gas and dust in the accretion disk heats up \citep{thermal} or due to the acceleration of charges inside the disk \citep{non-thermal}.
Accretion disks are intimately linked to black holes, since it is possible to extract information both from the space-time itself and from the matter that makes up the disk, allowing the description of astrophysical scenarios such as active galactic nuclei or microquasars \citep{living-review}. Thus, the study of accretion disks has been a focus of interest for several decades, and it is possible to find in the literature analytical solutions modeling equilibrium accretion disks around black holes \citep{Kozlowski, Fragile_polish, Pugliese}. In the context of accretion disks, the magnetic field is of particular interest, since it could be responsible for generating the instabilities in the disk that give rise to the accretion process \citep{magneto-rotational}. Besides, it could accelerate and collimate the gas into the relativistic jets that emanate from the accretion disk towards the interstellar medium \citep{blandford}. Based on this, several models describe magnetized tori around black holes \citep{okada1989model,komissarov, font_tori, Fragile_magnetized, Soler}; Komissarov's work is particularly noteworthy, as it presents for the first time an analytical solution describing tori in magnetohydrodynamic equilibrium around Kerr black holes. Starting from that result, it has been shown that magnetized disks are unstable to non-axisymmetric perturbations, and that energy dissipation and angular momentum transport are a consequence of the magnetorotational instability \citep{fragile2}.
An interesting aspect to take into account is the degree of magnetic polarization of the material and its effects on the dynamics of the disk. \cite{Oscar} (Pimentel, from now on) established, building on Komissarov's work, a model describing magnetized tori with arbitrary magnetic susceptibility around Kerr black holes. In that paper, they found that the magnetic susceptibility affects the compactness of the disk, which becomes more compact for paramagnetic disks and less compact for diamagnetic ones. Additionally, the impact of the magnetic susceptibility on the development of the magneto-rotational instability in weakly magnetized accretion disks was studied in \citep{Pimentel_2021}. That work shows that paramagnetic disks develop larger turbulent structures than diamagnetic ones. Moreover, the beta-plasma parameter shows that a paramagnetic disk becomes more strongly magnetized than the unpolarized case, which itself is more strongly magnetized than a diamagnetic disk. As can be seen, including the magnetic properties of the material in the dynamics of relativistic accretion disks is relevant to understanding the different physical processes happening in these systems. It is worth mentioning that magnetic polarization has been considered in other astrophysical scenarios such as neutron stars \citep{blandford1982magnetic,suh2010magnetic,chatterjee2015consistent,wang2016diamagnetic}, relativistic waves \citep{Pimentel_2018}, and the Kelvin-Helmholtz instability in relativistic plasmas \citep{Pimentel_2019}, among others.
Inspired by the above, an interest arises in determining the potential impact that magnetic polarization may have on future observations. For this reason, we study the geodesics of the photons that emanate from the torus and manage to escape the gravitational attraction of the black hole. This is where numerical simulations play a relevant role. Nowadays, ray-tracing codes represent an essential tool for testing models of accretion disks around black holes against observations. In the literature, it is possible to find codes such as \texttt{BHAC} \citep{bhac}, \texttt{RAPTOR} \citep{raptor}, and \texttt{HARMRAD} \citep{harm}, among others, characterized by their computing power and their ability to simulate realistic astrophysical scenarios by numerically solving the equations of relativistic magnetohydrodynamics with radiative terms. To determine the impact of the magnetic polarization on the disk radiation intensity map, we use \texttt{OSIRIS} \citep{OSIRIS}, a code of our authorship that is based on the Hamiltonian formalism and solves null geodesics through backward ray tracing. This code has been validated by successfully reproducing thin accretion disks around Kerr black holes, simulating the shadow of compact objects with arbitrary quadrupole moment and of naked singularities, as well as reproducing time-like orbits around these exotic bodies \citep{Arrieta_Villamizar_2020}.
Thus, in this paper, we carry out for the first time simulations of the intensity map and the emission-line profiles of a magnetized torus, taking into account the magnetic polarization of the fluid. We found that the magnetic susceptibility modifies the intensity and the emission-line profiles depending on how magnetized the disk is: it varies the magnetic pressure and imprints the changes in the compactness of the torus on the intensity map. The organization of this article is as follows: in Section 2 we briefly describe the magnetically polarized disk model developed by Pimentel, where a toroidal magnetic field is assumed and the particles that make up the torus move in circular orbits; Section 3 describes the radiative transfer theory and the radiation mechanism employed; Section 4 briefly describes the code used for the simulations and the corresponding numerical configurations; finally, in Section 5 we show the results of our simulations of magnetically polarized tori around a Kerr black hole. Throughout this document, we work with the signature $(-,+,+,+)$ and geometrized units, where $G=c=1$, with $G$ the gravitational constant and $c$ the speed of light. Paramagnetic materials present a magnetic susceptibility $\chi > 0$, and diamagnetic materials, $\chi < 0$.
\section{KOMISSAROV TORUS WITH MAGNETIC POLARIZATION IN KERR SPACE-TIME}
\subsection{Magnetic susceptibility}
For an electron gas, the magnetic polarization is composed of paramagnetic and diamagnetic contributions \citep{magnetism_landau}. Paramagnetism is associated with the intrinsic magnetic moment of electrons and the alignment of the spins due to magnetic torques, while diamagnetism is associated with the orbital motion of electrons around the magnetic field lines; in particular, with the quantization of the orbital angular momentum of the free electrons of a gas around the magnetic field lines. For free atoms, the magnetic susceptibility is written as follows \citep{weber}
\begin{equation}
\chi_m = -\frac{e^2\mu_0}{6m_e}\sum_Z\langle R^2 \rangle, \label{eq:chi_diam}
\end{equation}
where $\mu_0$ is the magnetic permeability of the vacuum, $e$ and $m_e$ are the charge and mass of the electron, respectively, the sum runs over the $Z$ electrons of the atom, and $\langle R^2 \rangle$ is the mean square radius of the electron orbits around the nucleus, assuming spherical symmetry. On the other hand, for paramagnetic materials, the magnetization is calculated as
\begin{equation}
M = ng\mu_{\text{B}}JB_{J}(\alpha),
\end{equation}
with $n$ the density of particles, $\mu_{\text{B}}$ the Bohr magneton and $g$ the Landé factor,
\begin{equation}
g = 1+\frac{J(J+1)+S(S+1)-L(L+1)}{2 J(J+1)},
\end{equation}
with $J$, $S$, and $L$ the total, spin, and orbital angular momentum quantum numbers, respectively. $B_J(\alpha)$ corresponds to the Brillouin function,
\begin{equation}
B_J(\alpha)=\frac{2 J+1}{2 J} \operatorname{coth}\left(\frac{(2 J+1) \alpha}{2 J}\right)-\frac{1}{2 J} \operatorname{coth}\left(\frac{\alpha}{2 J}\right),
\end{equation}
where $T$ is the temperature, and
\begin{equation}
\alpha=\frac{g \mu_0 \mu_{\mathrm{B}} J H}{k_{\mathrm{B}} T}, \label{eq:alpha}
\end{equation}
with $H$ the magnetic field intensity. Thus, the magnetic susceptibility is calculated as $\chi_m = \partial M/\partial H$.
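A minimal numerical sketch of this prescription (our own illustration; the particle density, quantum numbers, and field values below are placeholders, not parameters used in our simulations) evaluates the Brillouin function and obtains $\chi_m = \partial M/\partial H$ by a central finite difference:

```python
import math

MU0 = 4e-7 * math.pi      # vacuum permeability [T m/A]
MU_B = 9.2740100783e-24   # Bohr magneton [J/T]
K_B = 1.380649e-23        # Boltzmann constant [J/K]

def brillouin(J, a):
    """Brillouin function B_J(a)."""
    c1 = (2 * J + 1) / (2 * J)
    return c1 / math.tanh(c1 * a) - 1 / (2 * J * math.tanh(a / (2 * J)))

def magnetization(n, g, J, H, T):
    """M = n g mu_B J B_J(alpha), alpha = g mu0 mu_B J H / (k_B T)."""
    a = g * MU0 * MU_B * J * H / (K_B * T)
    return n * g * MU_B * J * brillouin(J, a)

def chi_m(n, g, J, H, T, dH=None):
    """Paramagnetic susceptibility chi_m = dM/dH (central difference)."""
    dH = dH or 1e-6 * H
    return (magnetization(n, g, J, H + dH, T)
            - magnetization(n, g, J, H - dH, T)) / (2 * dH)

# For J = 1/2 the Brillouin function reduces to tanh(a) exactly.
assert abs(brillouin(0.5, 1.0) - math.tanh(1.0)) < 1e-9
# Illustrative placeholder values: H in A/m, T in K.
print(chi_m(n=1e28, g=2.0, J=0.5, H=1e7, T=100.0))
```

In the linear (small $\alpha$) regime this reproduces the Curie behavior $\chi_m \propto 1/T$ visible in figure \eqref{fig:chi_T}.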
For diamagnetic materials, the magnetic susceptibility depends only on the number of particles. Considering a hydrogen gas, we calculate the susceptibility over a range going from accretion-disk densities ($10^{5} [kg/m^3] - 10^{7} [kg/m^3]$, \cite{cold_plasma}) to neutron-star densities ($10^{19} [kg/m^3]$, \cite{neutron-star}). In the model we are using, the magnetic susceptibility is proportional to the density, as shown in figure \eqref{fig:chi_rho}.
\begin{figure}
\centering
\includegraphics[width=8.5cm]{fig1.pdf}
\caption{Magnetic susceptibility for diamagnetism as a function of density, in logarithmic scale, for a hydrogen plasma. The magnitude of the magnetic susceptibility increases as the density increases, showing a linear dependence. The density values were chosen to range from those of an accretion disk to those of a neutron star.}
\label{fig:chi_rho}
\end{figure}
For accretion disks, the scales for the magnetic susceptibility are of the order of $10^{-3} - 10^{-1}$, while for more compact objects like neutron stars, these scales are of the order of $10^{9} - 10^{11}$. However, it is unknown whether this model applies at such high densities. Nevertheless, it is a first approach to determining the magnetic susceptibility of different diamagnetic astrophysical objects.
On the other hand, for paramagnetic materials, density, temperature, and magnetic field affect magnetic susceptibility. In figure \eqref{fig:chi_T} we show the magnetic susceptibility for different density values as a function of temperature.
\begin{figure*}
\centering
\includegraphics[width=17cm]{fig2.pdf}
\caption{Magnetic susceptibility for paramagnetism as a function of temperature for different densities and magnetic fields, in logarithmic scale. For magnetic fields between $10^{-2} [T]$ and $10^{2} [T]$, the susceptibility decreases as the temperature increases; however, for magnetic fields of $10^{3} [T]$, the susceptibility increases with temperature until it reaches a maximum, after which it begins to decrease. In addition, it is noticeable that the magnetic field does not modify the magnitude of the magnetic susceptibility. One aspect to take into account is the noise that appears in the vicinity of the temperature $10^6 [K]$ for magnetic fields of $10^{-2} [T]$ and $10^{-1} [T]$, which is due to the nature of the Brillouin function: the asymptotic behavior of the hyperbolic cotangent makes it impossible to compute the derivative of the magnetization with precision in the vicinity of the critical temperatures. As the magnetic field increases, the range of admissible temperatures for calculating the magnetic susceptibility expands. This is evidenced by the fact that the noise is shifted to the right in the upper right panel, where it is greatly reduced, but still noticeable. In the remaining panels, such noise is no longer perceived.}
\label{fig:chi_T}
\end{figure*}
For accretion disks, the temperatures of cool accretion flows in some astrophysical scenarios are in the range $10^{2}[K] - 10^{4} [K]$, while hot gas can reach temperatures of $10^7[K] - 10^9 [K]$ \citep{cold_plasma}. For example, it is believed that the nucleus of the X-ray source Cygnus X-1 is a stellar-mass black hole surrounded by a gas with a temperature of $\sim 10^4[K]$. Furthermore, we show the influence of the magnetic field considering scales from $10^{-2}[T]$ up to $10^{3} [T]$. Magnetic fields of these orders of magnitude may start the accretion process, for example, in O-type stars feeding $SgrA*$ or in X-ray binary systems \citep{mag_field}. We found that the magnetic field does not significantly affect the magnitude of the magnetic susceptibility in the range between $10^{-2}[T]$ and $10[T]$, but it amplifies the range of temperatures over which it is possible to calculate the magnetic susceptibility. This is due to a critical value of the temperature that restricts the range of the magnetic susceptibility. This behavior is evidenced in the first row of figure \eqref{fig:chi_T}. Nevertheless, if the magnetic field is stronger, for example in the range $10^{2} [T]- 10^{3} [T]$, the magnetic field does modify the magnitude of the susceptibility. In the last row of figure \eqref{fig:chi_T}, $\chi_m$ increases as $T$ increases until a maximum value is reached, from which $\chi_m$ begins to decrease. In general, for low temperatures it is possible to obtain susceptibilities of order $10^{1} - 10^{-1}$, but for high temperatures and densities we find susceptibilities of order $10^{-1} - 10^{-2}$.
The magnetic properties of plasmas have been studied in quantum collisional and collisionless plasmas \citep{Latyshev} and in the propagation of waves in quantum plasmas \citep{Safdar, Nauman}, considering the Landau diamagnetism and Pauli paramagnetism applicable to astrophysical scenarios like neutron stars, extragalactic jets, or auroral forms, where the acceleration of electrons is generated by magnetic fields. The expression for the magnetic susceptibility in those cases is similar to equation \eqref{eq:chi_diam}, where the relation between $H$ and $T$ is equivalent to equation \eqref{eq:alpha}. For this reason, it is expected that plasmas described by Landau diamagnetism and Pauli paramagnetism would not exhibit a behavior much different from that of the model employed in this work.
\subsection{Geometrical considerations}
The Kerr space-time line element in Boyer-Lindquist coordinates, $\left\lbrace t, r, \theta,\phi \right\rbrace$, reads as follows
\begin{equation}
\begin{split}
& ds^2 = -\left(1-\frac{2Mr}{\Sigma}\right)dt^2 + \frac{\Sigma}{\Delta}dr^2 + \Sigma d\theta^2 +
\\
& \left(r^2 + a^2 + \frac{2Mra^2\sin^2\theta}{\Sigma} \right)\sin^2\theta
d\phi^2 - \frac{4Mra\sin^2\theta}{\Sigma}dtd\phi,
\end{split}
\end{equation}
with $a$ the spin parameter of the black hole and
\begin{equation}
\begin{split}
&\Sigma = r^2 + a^2\cos^2(\theta),
\\
& \Delta = r^2-2Mr+a^2.
\end{split}
\end{equation}
On the other hand, it is useful to describe the torus structure in terms of the specific angular momentum, $l$, and the angular velocity, $\Omega$, defined as
\begin{equation}
\begin{split}
& l = -u_\phi/u_t = -\frac{g_{t\phi} + \Omega g_{\phi\phi} }{g_{tt}+\Omega g_{t\phi}},
\\
& \Omega = u^\phi/u^t = -\frac{g_{t\phi}+lg_{tt}}{g_{\phi\phi}+lg_{t\phi}},
\end{split}
\end{equation}
where $g_{\mu\nu}$ are the covariant components of the metric tensor. We define the center and cusp of the torus, as the places where the pressure gradient equals zero, so the specific angular momentum becomes Keplerian \citep{bardeen1972rotating},
\begin{equation}
l_K = \frac{M^{1/2}[r^2-2a(Mr)^{1/2}+a^2]}{r^{3/2}-2Mr^{1/2}+aM^{1/2}}, \label{eq:keplerian}
\end{equation}
thus, it is possible to calculate the positions on the equatorial plane of the center, $r_{\text{center}}$, and of the cusp, $r_{\text{cusp}}$, by solving $l_K(r) = l_0$ in \eqref{eq:keplerian} for given values of $l_0$ and $a$. Outside the event horizon, $r_{\text{center}}$ and $r_{\text{cusp}}$ are the largest and the smallest of the roots of \eqref{eq:keplerian}, respectively.
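As an illustrative cross-check (a sketch of our own, not part of \texttt{OSIRIS}), the two equatorial roots of $l_K(r) = l_0$ can be found with a simple bisection; with $M=1$, $a=0.9$, and $l_0=2.8$, this recovers the values $r_{\text{center}} \simeq 4.622$ and $r_{\text{cusp}} \simeq 1.583$ quoted later in table \eqref{tab:constants}:

```python
import math

M, a, l0 = 1.0, 0.9, 2.8

def l_K(r):
    """Keplerian specific angular momentum, eq. \eqref{eq:keplerian}."""
    return (math.sqrt(M) * (r * r - 2 * a * math.sqrt(M * r) + a * a)
            / (r ** 1.5 - 2 * M * math.sqrt(r) + a * math.sqrt(M)))

def bisect(f, lo, hi, tol=1e-10):
    """Plain bisection for a sign change of f on [lo, hi]."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        fmid = f(mid)
        if (fmid > 0) == (flo > 0):
            lo, flo = mid, fmid
        else:
            hi = mid
    return 0.5 * (lo + hi)

g = lambda r: l_K(r) - l0
# l_K has a minimum at the ISCO (~2.32 for a = 0.9); one root on each side.
r_cusp = bisect(g, 1.2, 2.32)
r_center = bisect(g, 2.32, 20.0)
print(r_cusp, r_center)   # ~1.583 and ~4.622
```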
\subsection{Magnetohydrodynamic structure of the torus}
In our disk model, we adopt the test-fluid approximation, namely, we ignore the gravitational effects generated by the disk. On the other hand, the total energy-momentum tensor is the sum of the energy-momentum tensors of a perfect fluid, a magnetized fluid, and a magnetically polarized fluid \citep{Oscar}
\begin{equation}
\begin{split}
T^{\mu\nu} = & \left[ w + b^2(1-\chi) \right]u^\mu u^\nu +
\\
& \left[ p+\frac{1}{2}b^2(1-2\chi) \right]g^{\mu\nu}-(1-\chi)b^\mu b^\nu,
\end{split}
\end{equation}
where $w$ is the specific enthalpy; $b^\gamma$ and $b$ are the components and the magnitude of the magnetic field measured in the comoving frame; $u^\gamma$ are the components of the four-velocity of the disk; $p$ is the fluid pressure, and $g^{\mu\nu}$ are the contravariant components of the metric tensor. Furthermore,
\begin{equation}
\chi = \chi_m/(1+\chi_m),
\end{equation}
where $\chi_m$ is the magnetic susceptibility of the medium. In particular, for a magnetized, stationary, and axially-symmetric fluid around a Kerr black hole, the state variables do not depend on the coordinate time, $t$, or azimuthal angle, $\phi$. Besides, we assume that the fluid moves in circular orbits, such that $u^r$ = $u^\theta$ = 0. Furthermore, we consider that $b^r=b^\theta = 0$, which implies that the topology of the magnetic field is purely toroidal.
From the conservation of the energy-momentum tensor, $\nabla_\mu T^{\mu\nu} = 0$, it is possible to show that
\begin{equation}
(\ln|u_t|)_{,i} - \frac{\Omega l_{,i}}{1-l\Omega} + \frac{p_{,i}}{w} - \frac{(\chi p_m)_{,i}}{w} + \frac{\left[ (1-\chi)\mathcal{L}p_m \right]_{,i}}{\mathcal{L}w} = 0, \label{eq:euler1}
\end{equation}
where the subscript $_{,i}$ denotes partial derivatives with respect to $r$ and $\theta$, $p_m = b^2/2$ is the magnetic pressure, and
\begin{equation}
\mathcal{L} = g_{t\phi}^2-g_{tt}g_{\phi\phi}.
\end{equation}
The first term in \eqref{eq:euler1} describes the gravitational interaction between the black hole and the torus, the second one refers to the centrifugal force due to the rotational motion of the disk, the third one is a force due to the pressure gradient, and the last terms are associated with magnetic forces. It is worth mentioning that if $\chi = 0$, \eqref{eq:euler1} reduces to Komissarov's case, where the torus is not magnetically polarized.
In our study, we assume that the surfaces of constant $l$ and constant $\Omega$ coincide, namely $\Omega = \Omega(l)$; besides, the torus rotates with constant specific angular momentum, $l = l_0$, and the fluid obeys the equation of state
\begin{equation}
p = K w^\kappa,
\end{equation}
with $K$ the polytropic constant and $\kappa$ the adiabatic index. Besides, assuming that $\chi$ can be written as a function of $\mathcal{L}$, $\chi = \chi(\mathcal{L})$, equation \eqref{eq:euler1} can be completely integrated. Thus
\begin{equation}
W - W_{\text{in}} + \frac{\kappa}{\kappa - 1}\frac{p}{w} + (1-2\chi)\frac{\eta}{\eta - 1}\frac{p_\text{m}}{w} = 0, \label{eq:solution}
\end{equation}
where we define the effective potential, W, as follows
\begin{equation}
W = \ln|u_t| = \frac{1}{2}\ln|\mathcal{L}/\mathcal{A}|,
\end{equation}
with
\begin{equation}
\mathcal{A} = g_{\phi\phi} + 2lg_{t\phi}+l^2g_{tt}.
\end{equation}
Furthermore, $W_{\text{in}}$ is the effective potential at the inner edge of the disk and $\eta$ is an arbitrary constant. From \eqref{eq:solution} it is possible to calculate $p$ and $p_m$ at the center of the disk:
\begin{equation}
\begin{split}
& p_\text{c} = w_\text{c}(W_{\text{in}} - W_\text{c}) \left( \frac{\kappa}{\kappa - 1} + \frac{\eta}{\eta - 1} \frac{1-2\chi_\text{c}}{\beta_\text{c}} \right)^{-1},
\\
& p_\text{m$_\text{c}$} = \frac{p_\text{c}}{\beta_\text{c}}.
\end{split}
\end{equation}
where the subscript "c" refers to variables evaluated at the center of the disk on the equatorial plane. Now, assuming a magnetic susceptibility of the form
\begin{equation}
\chi = \chi_0 + \chi_1\mathcal{L}^\alpha,
\end{equation}
with $\chi_0$, $\chi_1$, and $\alpha$ constants, we can calculate the magnetic pressure
%
\begin{equation}
p_m = K_m \mathcal{L}^\lambda w^\eta f,
\end{equation}
with
\begin{equation}
\begin{split}
& K_\text{m} = \frac{p_{\text{m}_\text{c}}}{\mathcal{L}_\text{c}^\lambda w_\text{c}^\eta f_\text{c}},
\quad
f = (1 - 2\chi)^{\frac{1-\eta}{2\alpha\left(1 - 2\chi_0\right)} - 1},
\\
& \lambda = \frac{1-\chi_{0}}{1-2 \chi_{0}}(\eta-1).
\end{split}
\end{equation}
Furthermore, physical variables such as the enthalpy, $w$, and the mass density, $\rho$, can be written as follows
\begin{equation}
\begin{split}
&\rho = w - \frac{\kappa p}{\kappa - 1},
\\
&w = \left( \frac{W_{\text{in}} - W} {\frac{\kappa}{\kappa - 1}K + \frac{\eta}{\eta - 1} (1-2\chi)K_\text{m}\mathcal{L}^\lambda f} \right)^{\frac{1}{\eta-1}},
\end{split}
\end{equation}
where
\begin{equation}
K = \frac{p_\text{c}}{w_\text{c}^\eta}.
\end{equation}
Finally, the components of the magnetic field are completely specified as
\begin{equation}
b^{\phi} = \sqrt{\frac{2p_m}{\mathcal{A}}}, \quad b^t = l_0 b^{\phi}.
\end{equation}
In this model, the torus structure has the following free parameters: $l_0$, $\beta_{\text{c}}$, $w_\text{c}$, $\kappa$, $\eta$, $\alpha$, $\chi_m$, $\chi_1$, and $W_\text{in}$, which will be used as conditions for the structure and morphology of the torus in our simulations.
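To make the construction concrete, the following sketch (our own illustration) evaluates the central gas and magnetic pressures for parameter values of the kind listed later in tables \eqref{tab:constants} and \eqref{tab:chis}; note how, at fixed $\beta_\text{c}$, a paramagnetic disk ($\chi_\text{c}>0$) has a larger central pressure than a diamagnetic one:

```python
# Central gas and magnetic pressures of the torus (illustrative sketch):
#   p_c  = w_c (W_in - W_c) / [ k/(k-1) + e/(e-1) (1 - 2 chi_c)/beta_c ],
#   p_mc = p_c / beta_c.
kappa = eta = 4.0 / 3.0
W_in, W_c, w_c = -0.05, -0.103, 1.0   # values from table 1

def central_pressures(chi_c, beta_c):
    denom = kappa / (kappa - 1) + eta / (eta - 1) * (1 - 2 * chi_c) / beta_c
    p_c = w_c * (W_in - W_c) / denom
    return p_c, p_c / beta_c

# diamagnetic / unpolarized / paramagnetic central susceptibilities
for chi_c in (-0.25, 0.0, 1.0 / 6.0):
    p_c, p_mc = central_pressures(chi_c, beta_c=0.1)
    print(f"chi_c = {chi_c:+.3f}: p_c = {p_c:.4e}, p_mc = {p_mc:.4e}")
```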
\section{Radiative Transfer Equations}
\label{sec:RTE}
To get an intensity map of the radiation coming from the disk as a function of the torus parameters, we describe the radiative transfer through the covariant formulation \citep{rybicki}
\begin{equation}
\frac{d}{d\lambda}\left( \frac{I_\nu}{\nu^3} \right) = \left( \frac{j_\nu}{\nu^2} \right) - (\nu\alpha_\nu)\left( \frac{I_\nu}{\nu^3}\right), \label{eq:radtrans}
\end{equation}
where $I_{\nu}$, $j_{\nu}$, and $\alpha_{\nu}$ are the specific intensity, the emission coefficient, and the absorption coefficient, respectively, measured in the comoving frame; $\lambda$ is an affine parameter, and the terms in parentheses are Lorentz invariants. The subscript $\nu$ indicates frequency dependence. To solve \eqref{eq:radtrans}, it is useful to rewrite the equation in terms of the optical depth, $\tau_\nu$, defined as
\begin{equation}
\tau_\nu = \int_{\lambda_0}^{\lambda}\nu\alpha_\nu d\lambda', \label{eq:tau}
\end{equation}
from which we classify the torus as optically thick, in which case we take $\tau_\nu = 0$ and only surface emission contributes, or as optically thin, with $\tau_\nu > 0$ accumulated along the ray inside the disk. Usually, a minus sign appears if the integration is done backward along the path of the photons inside the disk. Rewriting \eqref{eq:radtrans} in terms of $\tau_\nu$, it is possible to find an analytic solution to the differential equation, which reads as follows
\begin{equation}
I_\nu(\tau_\nu) = I_\nu(\tau_{\nu,0})\text{e}^{-(\tau_\nu - \tau_{\nu,0})} + \int_{\tau_{\nu,0}}^{\tau_{\nu}}S_\nu(\tau'_\nu)\text{e}^{-(\tau_\nu - \tau'_\nu)} d\tau'_\nu,
\end{equation}
where $S_\nu = j_\nu/\alpha_\nu$ is known as the source function. If $S_\nu$ is constant with respect to $\tau_\nu$, and taking $\tau_{\nu,0} = 0$,
\begin{equation}
I_\nu(\tau_\nu) = I_\nu(0)\text{e}^{-\tau_\nu} + S_\nu(1 - \text{e}^{-\tau_\nu}). \label{eq:intensity}
\end{equation}
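The constant-source solution \eqref{eq:intensity} can be cross-checked against a direct first-order Euler integration of $dI_\nu/d\tau_\nu = S_\nu - I_\nu$, the same kind of scheme used in our numerical setup; this snippet is a sketch of ours, not the \texttt{OSIRIS} implementation:

```python
import math

def intensity_analytic(I0, S, tau):
    """Constant-source solution: I = I0 e^{-tau} + S (1 - e^{-tau})."""
    return I0 * math.exp(-tau) + S * (1.0 - math.exp(-tau))

def intensity_euler(I0, S, tau, n_steps=10_000):
    """First-order Euler integration of dI/dtau = S - I."""
    h = tau / n_steps
    I = I0
    for _ in range(n_steps):
        I += h * (S - I)
    return I

I0, S, tau = 0.0, 1.0, 1.0
print(intensity_analytic(I0, S, tau), intensity_euler(I0, S, tau))
```

At large optical depth the intensity saturates to the source function, $I_\nu \to S_\nu$, as expected for an optically thick medium.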
Emission and absorption coefficients depend on the radiative process. In our case, we decided to study the effect of non-thermal synchrotron radiation as a power law with constant coefficients
\begin{equation}
j_\nu \propto B^{(\gamma+1)/2}\nu^{(1-\gamma)/2},
\quad
\alpha_\nu \propto B^{(\gamma+2)/2}\nu^{-(\gamma+4)/2}, \label{eq:coefficents}
\end{equation}
where $\gamma = 2s+1$, with $s$ the spectral index, which is adjusted to the observations. The coefficients depend on the magnetic field, which in magnetically polarized materials is modified by the action of the magnetic susceptibility. It is for this reason that synchrotron radiation is an interesting mechanism to probe the effect of the magnetic polarization on the intensity map and the observed flux. Through the Lorentz invariants, it is possible to calculate the specific intensity measured by a distant observer:
\begin{equation}
I_{\nu_{\text{obs}}} = \left( \frac{\nu_{\text{obs}}}{\nu_{\text{em}}} \right)^3 I_{\nu_{\text{em}}} = g^3 I_{\nu_{\text{em}}}, \label{eq:iobs}
\end{equation}
where the subscripts "obs" and "em" refer to intensity and frequency received by the observer and emitted by the disk, respectively, and $g = (1+z)^{-1}$ is the red-shift factor calculated as
\begin{equation}
g = \frac{\nu_{\text{obs}}}{\nu_{\text{em}}} = \frac{-p_{\mu}v^\mu|_{\lambda_\text{obs}}}{-p_{\mu}u^\mu|_\lambda},
\end{equation}
with $\left\lbrace v_\mu \right\rbrace = \left\lbrace-1,0,0,0\right\rbrace$ the covariant components of the four-velocity of the distant observer, and $p_\mu$ the components of the four-momentum of the photons. Without loss of generality, we set $p_{t_{\text{obs}}} = -E_{\text{obs}} = -1$ as a normalization condition.
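As a sanity check of the red-shift factor (our own illustration, for the one special case solvable by hand rather than for the rotating torus), consider a static emitter in the Schwarzschild limit ($a = 0$) seen from infinity: then $u^t = (1-2M/r)^{-1/2}$, $p_t$ is conserved, and the definition of $g$ reduces to the familiar gravitational red-shift $g = \sqrt{1-2M/r}$:

```python
import math

def g_static_schwarzschild(r, M=1.0):
    """Red-shift factor g = (-p_mu v^mu)_obs / (-p_mu u^mu)_em for a
    static emitter at radius r in Schwarzschild, observer at infinity."""
    u_t_contra = 1.0 / math.sqrt(1.0 - 2.0 * M / r)  # u^t of static emitter
    E = 1.0                                          # normalization p_t = -1
    return E / (E * u_t_contra)                      # = sqrt(1 - 2M/r)

print(g_static_schwarzschild(6.0))  # photon emitted from r = 6M: g = sqrt(2/3)
```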
\section{Numerical Setup}
\label{sec:NS}
We carried out numerical simulations using \texttt{OSIRIS} ({\bf O}rbits and {\bf S}hadows {\bf I}n {\bf R}elativ{\bf I}stic {\bf S}pace-times) \citep{OSIRIS}, a code of our authorship based on the backward ray-tracing algorithm for stationary and axially-symmetric space-times. \texttt{OSIRIS} evolves null geodesics ``backward in time'' by solving the equations of motion in the Hamiltonian formalism. Our code is based on the image-plane model, in which photons are launched from the observation screen in the direction of the black hole. Several authors employ this method to simulate shadows around black holes \citep{Johannsen_2013, doi:10.1142/S0218271816410212, disformal}, classifying the orbits of the photons into two groups: those that reach the event horizon and those that escape to infinity. Every pixel on the image plane corresponds to an initial condition for a photon. Thus, when the code assigns a color to each pixel according to the result of the corresponding integration, we obtain an intensity map of the radiation coming from the accretion disk around the black hole.
The null-geodesic integration is performed for two different inclination angles, $\theta_0 = 45^\circ$ and $\theta_0 = 85^\circ$, assuming that the Minkowskian observer is located at $\left\lbrace t_0, r_0, \theta_0, \phi_0 \right\rbrace $ = $\left\lbrace 0, 1000, \theta_0, 0 \right\rbrace$. We define the surface of the torus as the two-dimensional region where $\rho = 0.01$. Once a photon reaches this surface, the integration of the radiative-transfer equation starts and continues while the photon remains inside the torus. For an optically thick disk, $\tau_\nu$ = 0, implying that $I_{\nu_{\text{obs}}} \propto g^3$. For an optically thin disk, the algorithm stores the value of $\lambda_0$ once the photon reaches the disk surface; then, we use an Euler method to solve equation \eqref{eq:tau} along the photon path inside the disk. Finally, the observation screen covers the range $-12 \le x, y \le 12$ with a resolution of $1024\times 1024$ pixels in all simulations of the intensity map of the torus.
\section{Torus spectra}
\label{sec:KTMP}
\subsection{Exploring the space of parameters}
In particular, we study some combinations of parameters for paramagnetic and diamagnetic fluids, comparing them with the case without magnetic polarization; it is therefore interesting to compare the effects of the magnetic polarization on the observed intensity and on the morphology of the torus, which are observable characteristics. For this, we consider a magnetically polarized disk with constant $\chi$, which implies $\chi_1 = 0$ and hence $\chi = \chi_0 = \chi_m/(\chi_m+1)$. Thus, the magnetic susceptibility is constant, which implies that $\chi_c$ and $f_c$ are also constants.
In tables \eqref{tab:constants} and \eqref{tab:chis} we present the parameters that define the structure and morphology of the tori used in this paper.
\\
\begin{table}
\centering
\begin{tabular}
{|c|c|c|c|c|c|}
\hline
$ a = 0.9$ &
$l_0 = 2.8$ &
$r_{\text{center}} = 4.622$ &
$r_{\text{cusp}} = 1.583$ \\
\hline
$W_{\text{in}} = -0.05$ & $W_{\text{c}} = -0.103$ & $\mathcal{L}_{\text{c}}=12.932$ &
$w_{\text{c}} = 1$ \\
\hline
$\kappa = 4/3$ &
$\eta = 4/3$ &
$\alpha = 1$ &
$\chi_1 = 0$ \\ \hline
\end{tabular}
\caption{Set of constant parameters employed in all simulations. Some of these are inspired by the article of Komissarov.} \label{tab:constants}
\end{table}
\begin{table}
\centering
\begin{tabular}{|c|c|c|c|c|c|}
\hline
$\chi_m$ & $-0.4$ & $-0.2$ & $0$ & $0.2$ & $0.4$ \\ \hline
$\chi_\text{c}$ & $-0.666$ & $-0.250$ & $0$ & $0.166$ & $0.286$ \\ \hline
$f_{\text{c}}$ & $0.403$ & $0.637$ & $1$ & $1.660$ & $3.244$ \\ \hline
\end{tabular}
\caption{Values of $\chi_m$ and their corresponding values of $\chi_\text{c}$ and $f_\text{c}$.} \label{tab:chis}
\end{table}
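The $\chi_{\text{c}}$ row of table \ref{tab:chis} follows directly from the constant-susceptibility relation quoted above, $\chi = \chi_m/(\chi_m+1)$, and can be reproduced in a few lines (the table appears to quote truncated three-decimal values; the $f_{\text{c}}$ row comes from the disk-structure equations and is not recomputed here):

```python
def chi_c(chi_m):
    # constant-susceptibility disk: chi = chi_0 = chi_m / (chi_m + 1)
    return chi_m / (chi_m + 1.0)

# reproduce the chi_c row of the table
for chi_m in (-0.4, -0.2, 0.0, 0.2, 0.4):
    print(f"chi_m = {chi_m:+.1f}  ->  chi_c = {chi_c(chi_m):+.3f}")
```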
Furthermore, we select constant emission and absorption coefficients, as well as a spectral index $s = 0.75$ in the power-law synchrotron radiation model, motivated by observations of S-type radio sources \citep{Radio_astro}. This particular value of $s$ leads to $\gamma = 2.5$ in the coefficients of \eqref{eq:coefficents}.
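The quoted correspondence follows from the standard optically thin synchrotron relation between the spectral index and the electron power-law index, $s = (\gamma-1)/2$ (stated here as an assumption, since the text does not write it explicitly):

```python
def electron_index(s):
    # standard optically thin synchrotron relation: s = (gamma - 1) / 2
    return 2.0 * s + 1.0

print(electron_index(0.75))  # s = 0.75 gives gamma = 2.5, as used in the text
```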
On the other hand, the effects of the magnetic polarization should depend on the ``beta-plasma'' parameter, $\beta_c$, because it indicates the relative importance of the magnetic interactions compared to the hydrodynamic forces. Based on this, we carried out a parameter study to establish which combinations of these parameters allow the best visualization of the magnetic-polarization effects. In figure \ref{fig:sp} we show the maximum value of the intensity measured for different combinations of $\chi_m$ and $\beta_{\text{c}}$, for both optically thick and optically thin disks viewed from $\theta_0 = 45^\circ$ and $\theta_0 = 85^\circ$. In this figure it can be seen that for high values of the beta-plasma (disk dominated by fluid pressure) the effects of magnetic polarization are practically negligible; in addition, for this range of values the minimum emitted intensity is obtained in all four cases. On the other hand, for low values of the beta-plasma (disk dominated by magnetic pressure) the effects of the magnetic susceptibility become relevant, with paramagnetic disks reaching the largest maximum values of the emitted intensity. These values also depend on the observation angle and on the optical penetration of the photons into the disk. For an observation angle of $45^\circ$ the maximum value of the intensity is achieved when the disk is optically thin ($I_{max} = 1.66$), while for an inclination of $85^\circ$ the maximum emitted intensity occurs when the disk is optically thick ($I_{max} = 3.0$). In general, the emission is more intense at $85^\circ$ than at $45^\circ$ for both types of disks.
\begin{figure*}
\centering
\includegraphics[width=13.5 cm]{fig3.pdf}
\caption{Maximum values of the intensity emitted by an optically thick (left column) and optically thin (right column) torus around a Kerr black hole with dimensionless rotation parameter $a = 0.9$, seen from observation angles $\theta_0 = 45^\circ$ (top row) and $\theta_0 = 85^\circ$ (bottom row). The effects of magnetic polarization become more relevant as $\beta_{\text{c}}$ decreases, and paramagnetic disks tend to emit with higher intensity than diamagnetic disks. Besides, optically thin disks emit more intensely than optically thick disks when the observation angle is $\theta_0 = 45^\circ$; at $\theta_0 = 85^\circ$, however, the more intense emission comes from the optically thick disk.}
\label{fig:sp}
\end{figure*}
\subsection{Emission-line profiles}
Next, based on the previous results, we use the values at which the effects of the magnetic susceptibility are most appreciable. For $\beta_c = 0.001$, $0.1$ and $1.0$, and $\chi_m = -0.4$, $0.0$ and $0.4$, the emission spectra were calculated by computing the flux from the tori, both optically thick and optically thin, where the flux can be expressed as
\begin{equation}
F_\nu = \int I_\nu d\Omega,
\end{equation}
where $d\Omega$ is the element of solid angle. Since \texttt{OSIRIS} is built on the image-plane model, the solid-angle element can be written as $d\Omega = dx\,dy/r_0^2$ (following a procedure similar to that of \cite{10.1111/j.1365-2966.2007.11855.x}), which is completely specified once the resolution of the simulations is selected. We defined a frequency grid with resolution $d\nu = 0.0375$, with $\nu_i = 0.5$ and $\nu_f = 2.0$ as the minimum and maximum frequencies measured inside the torus, respectively. The flux calculation then reduces to adding the intensity contributions for each frequency. We computed the frequency at each point in space and assumed that a photon was emitted with a given frequency if the difference $|\nu_{\text{mesh}} - \nu_{\text{emitted}}|$ was less than a tolerance $\delta_\nu = 10^{-2}$; in that case, that intensity contributes to the total flux at that particular frequency. In figures \ref{fig:flux_opaque} and \ref{fig:flux_trans} we show the fluxes for the optically thick and optically thin tori, for different values of $\beta_c$, $\chi_m$, and $\theta_0$, as a function of the $g$ parameter.
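A pixel-sum sketch of this flux computation (illustrative only; here a pixel's observed frequency is taken to be its $g$ value for a line emitted at unit frequency, and unit pixel size is assumed, neither of which is spelled out in the text):

```python
import numpy as np

def flux_spectrum(I_map, g_map, r0, nu_i=0.5, nu_f=2.0,
                  dnu=0.0375, tol=1e-2):
    """Sum pixel intensities into a frequency grid, with
    dOmega = dx dy / r0**2 for unit pixels (illustrative sketch)."""
    nu_grid = np.arange(nu_i, nu_f + dnu / 2, dnu)
    d_omega = 1.0 / r0**2
    flux = np.zeros_like(nu_grid)
    for k, nu in enumerate(nu_grid):
        # a pixel contributes if its frequency is within tol of the bin
        mask = np.abs(g_map - nu) < tol
        flux[k] = I_map[mask].sum() * d_omega
    return nu_grid, flux
```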
\begin{figure*}
\centering
\includegraphics[width=17cm]{fig4.pdf}
\caption{Flux spectra from an optically thick torus with $\beta_c=1.0$, $0.1$, $0.001$ and for susceptibility values $\chi_m = -0.4$, $0$, $0.4$, viewed from angles $\theta_0 = 45^\circ$ (top) and $\theta_0 = 85^\circ$ (bottom).}
\label{fig:flux_opaque}
\end{figure*}
\begin{figure*}
\centering
\includegraphics[width=17cm]{fig5.pdf}
\caption{Flux spectra from an optically thin torus with $\beta_c=1.0$, $0.1$, $0.001$ and for susceptibility values $\chi_m = -0.4$, $0$, $0.4$, viewed from angles $\theta_0 = 45^\circ$ (top) and $\theta_0 = 85^\circ$ (bottom).}
\label{fig:flux_trans}
\end{figure*}
First of all, the flux emitted by the paramagnetic torus is lower than that emitted by the Komissarov and diamagnetic tori. This makes sense in light of Pimentel's results, where paramagnetic tori are found to be more compact than the others. In the case of optically thick tori (figure \ref{fig:flux_opaque}), it is natural to expect the number of photons emanating from the surface of a paramagnetic torus to be smaller than for a (less compact) diamagnetic torus, since it covers less surface area and contributes less to the sum of intensities. Thus, a paramagnetic torus yields a lower flux than a diamagnetic one. In the case of optically thin tori (figure \ref{fig:flux_trans}), the optical path followed by the photons inside the paramagnetic torus is shorter (for the same reason), and therefore the contribution of the intensities is lower than for diamagnetic tori, where the photons travel a longer distance inside the torus.
In all cases the effect of the magnetic susceptibility decreases with increasing beta-plasma; remarkably, however, the maximum flux peak occurs for the higher values of the beta-plasma. This behavior can be explained by the synchrotron radiation model that we used. From the coefficients shown in equation \eqref{eq:coefficents} it can be seen that the magnitude of the magnetic field enters as a power law involving the spectral index. However, the source function, $S$, defined as the ratio between the emission and absorption coefficients, does not depend on that index and scales with the magnitude of the magnetic field as $B^{-1/2}$. In this sense, as the magnetic field increases, the contribution of the source function to the total intensity decreases, as can be seen in equation \eqref{eq:intensity}. A smaller beta-plasma implies a larger magnetic pressure, and hence a larger magnetic field; it is therefore consistent that the flux decreases as the beta-plasma decreases, and increases as it increases.
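To make the $B^{-1/2}$ scaling explicit, one can use the textbook power-law synchrotron scalings (normalizations omitted; these standard forms are quoted here as an assumption, since the paper's own coefficients are in equation \eqref{eq:coefficents}): $j_\nu \propto B^{(\gamma+1)/2}\,\nu^{-(\gamma-1)/2}$ and $\alpha_\nu \propto B^{(\gamma+2)/2}\,\nu^{-(\gamma+4)/2}$, so that
\begin{equation}
S_\nu = \frac{j_\nu}{\alpha_\nu} \propto \frac{B^{(\gamma+1)/2}}{B^{(\gamma+2)/2}}\,\frac{\nu^{-(\gamma-1)/2}}{\nu^{-(\gamma+4)/2}} = B^{-1/2}\,\nu^{5/2},
\end{equation}
independently of the index $\gamma$.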
On the other hand, the inclination angle also affects the flux maximum. For $\theta_0 = 45^\circ$ and an opaque torus (top of figure \ref{fig:flux_opaque}), we found that the emission lines are stronger than those of the opaque torus at $85^\circ$ (bottom of figure \ref{fig:flux_opaque}). This behavior can be explained because the self-eclipsing at $85^\circ$ is stronger than at $45^\circ$: if the observer is closer to the equatorial plane, the observable region of the torus is smaller and therefore the observed flux is smaller as well. Another interesting feature is that the emission profiles at $45^\circ$ for the torus (a geometrically thick disk) are similar to the emission profiles of a geometrically thin disk \citep{Schnittman_2004}, where two peaks can be observed: a smaller one to the left, corresponding to red-shifted emission from material moving away from the observer, and a larger one to the right, corresponding to blue-shifted emission from material approaching the observer \citep{fabian}. At $85^\circ$ the red-shifted peak is considerably weaker than the blue-shifted peak due to the self-eclipsing mentioned above, leading to a broad line that looks like a single horn. Besides, it is interesting that the paramagnetic torus at this inclination is blue-shifted in comparison with the other two cases. Moreover, changing the viewing angle produces a blue shift of the peaks, which is consistent with the results of \cite{JOVANOVIC201237}.
Now, for both the $45^\circ$ and $85^\circ$ inclinations of the optically thin torus (figure \ref{fig:flux_trans}), the behavior is similar to the previous cases, with the main difference that the order of magnitude of the flux is the same for both inclinations, the translucent-torus flux being higher than in both opaque cases. This is expected because the optical path followed by the photons inside the optically thin torus is longer than in the optically thick torus, adding more intensity contributions to the total flux; moreover, there are higher-order lensed photons that orbit the black hole more than once, which contribute substantially to the flux. Also, in all cases the blue-shifted peak is more noticeable, the difference being most remarkable at an inclination of $85^\circ$ when comparing the bottom rows of figures \ref{fig:flux_opaque} and \ref{fig:flux_trans}.
\subsection{Intensity map}
Finally, the results of our simulations of the intensity map of optically thick and optically thin emitting tori around a Kerr black hole, under the synchrotron-radiation model, are shown below. In figures \ref{fig:torus_45} and \ref{fig:torus_85} one can see the photons coming from the disk that pass marginally close to the event horizon, near the last circular photon orbit, and manage to escape the gravitational attraction; these photons form, or delimit, the shadow of the black hole.
\begin{figure*}
\includegraphics[width=17cm]{fig6.pdf}
\caption{Optically thick (top panel) and optically thin (bottom panel) magnetized torus around a Kerr black hole with dimensionless spin parameter $a = 0.9$, viewed from $\theta_0 = 45^\circ$, for different values of the magnetic susceptibility $\chi_m$ and $\beta_{\text{c}} = 0.001$.}
\label{fig:torus_45}
\end{figure*}
\begin{figure*}
\includegraphics[width=17cm]{fig7.pdf}
\caption{Optically thick (top panel) and optically thin (bottom panel) magnetized torus around a Kerr black hole with dimensionless spin parameter $a = 0.9$, viewed from $\theta_0 = 85^\circ$, for different values of the magnetic susceptibility $\chi_m$ and $\beta_{\text{c}} = 0.001$.}
\label{fig:torus_85}
\end{figure*}
Optically thick disks exhibit self-eclipsing because the radiation is assumed to come only from photons at the surface: photons inside the disk are absorbed and re-emitted in such a way that they do not reach the surface. In this sense, the image of an optically thick disk corresponds to the surface of the disk and, according to equation \eqref{eq:iobs}, the intensity map corresponds only to the factor $g$. For optically thin disks, it is possible to see the entire shadow of the black hole through the torus; in addition, the rear part of the torus, which in principle should be hidden from the observer, is clearly visible. This effect, caused by the deflection of light in the strong gravitational field of the black hole, becomes more noticeable as the viewing angle increases.
On the other hand, as expected from the results of \cite{Oscar}, it is evident that the paramagnetic tori (left) are more compact than the diamagnetic ones (right), compared with the \cite{komissarov} case (middle). This effect is reflected in the intensity profile, which shows a more concentrated region where the maximum intensity is located. This region lies to the left of the disk, corresponding to the direction of the black hole spin; it is more concentrated for the paramagnetic torus and spreads further along the disk for the diamagnetic torus. Viewed from an $85^\circ$ angle, the torus shape (bright bulge) is clear in the translucent paramagnetic disk, fading as the disk becomes more diamagnetic.
\section{Discussion \& Conclusions} \label{sec:conclusions}
In this paper, we simulate the intensity map and the emission-line profiles of the radiation coming from a torus around a Kerr black hole. We show the change of the magnetic susceptibility as a function of density, temperature, and magnetic field for both the diamagnetic and paramagnetic cases. For diamagnetic plasmas, the susceptibility depends only on the density and exhibits a linear behavior. For paramagnetic plasmas, the magnetic susceptibility decreases as the temperature increases for magnetic fields in the range $10^{-2}\,\mathrm{T} - 10\,\mathrm{T}$, while for fields of $10^{2}\,\mathrm{T} - 10^{3}\,\mathrm{T}$ it increases with temperature at low temperatures and returns to the previous behavior as the temperature keeps increasing. For some configurations of density, temperature, and magnetic field the magnetic susceptibility is of order $10^{-3} - 10$, corresponding to the values used in this work. Our torus is composed of a stationary and axially symmetric fluid with arbitrary magnetic polarization and is threaded by a toroidal magnetic field. Besides, we carry out a systematic study of the observed specific intensity and the observed flux as functions of the magnetic susceptibility, $\chi$, and the degree of magnetization, $\beta_{\text{c}}$. We find that, assuming power-law synchrotron radiation with constant coefficients as the emission mechanism, the effects of magnetic polarization are negligible if the disk is dominated by the hydrostatic pressure, namely $\beta_{\text{c}} > 1$. Moreover, if the disk is paramagnetic and dominated by the magnetic pressure, $\beta_{\text{c}} < 1$, the intensity reaches its highest peaks.
On the other hand, the flux decreases as the degree of magnetization increases, which is observed in the emission-line profiles. This behavior is consistent with the fact that the source function decreases as $B^{-1/2}$. Furthermore, these emission profiles make clear that a diamagnetic fluid emits a higher flux, because a diamagnetic torus is less compact than a paramagnetic one: photons have longer optical paths to travel inside the disk, which increases their contribution to the total observed flux. It is worth highlighting that although an increase in the magnetic field decreases the maximum value of the flux, it does not modify the frequency at which this maximum occurs.
Furthermore, our simulations are consistent with the general results for torus spectra around black holes. For optically thick disks, we found that for a low observation angle the observed flux is similar to that of a geometrically thin and optically thick disk. If the observation angle is near the equatorial plane, the observed flux decreases and the peak of the maximum is blue-shifted. For optically thin disks we found that, in general, the observed flux is higher than that measured in the optically thick case. Here the blue and red horns are more visible, even for observations near the equatorial plane; moreover, the emission lines are stronger for high inclination angles than for low ones.
Another interesting feature is that the magnetic polarization changes the observed flux, decreasing its maximum value as the material becomes more paramagnetic. Besides, the magnetic susceptibility shifts the frequency at which the maximum flux occurs: for paramagnetic materials this maximum is blue-shifted in the emission lines of both optically thick and optically thin disks, which is most noticeable for $\theta_0 = 85^\circ$. These frequency shifts could be an observable signature of the degree of magnetic polarization of the fluid that composes the accretion disk. In this way, the effects of magnetic polarization would not be limited to a renormalization of the magnitude of the magnetic field, but would be potentially relevant when comparing observations with numerical simulations.
\section*{Acknowledgements}
F.D.L-C was supported by the Vicerrectoría de Investigación y Extensión - Universidad Industrial de Santander, under Grant No. 3703.
Heusler compounds are intermetallic compounds that crystallize in one of several stuffed tetrahedral structures. Conventionally the term ``Heusler'' has been reserved for the cubic polymorphs, but here I expand the term to include the hexagonal analogues (Fig. \ref{crystal}). The cubic half Heusler compounds (Fig. \ref{crystal}b, spacegroup $F\bar{4}3m$, $C1_b$ structure) have composition $XYZ$ and consist of a zincblende $[YZ]^{n-}$ sublattice that is ``stuffed'' with $X^{n+}$ at the octahedrally coordinated sites $(\frac{1}{2}, 0, 0)$ \cite{kandpal}. Alternatively, this structure can be viewed as a rocksalt $XZ$ sublattice that is ``stuffed'' with $Y$ in every other tetrahedral interstitial site $(\frac{1}{4}, \frac{1}{4}, \frac{1}{4})$ \cite{ougut1995band,larson2000structural}. Full Heusler compounds, with composition $X Y_2 Z$, contain an additional $Y$ atom in the basis to fill all of the tetrahedral interstitials (Fig. \ref{crystal}c, $L2_1$ structure) \cite{graf2011simple}. Here I adopt the naming convention of ordering the elements by \textit{increasing} electronegativity, i.e. $XYZ$ and $X Y_2 Z$ (TiCoSb and MnNi$_2$Ga), rather than the often adopted $YXZ$ and $Y_2 X Z$ (CoTiSb and Ni$_2$MnGa)\cite{knowlton1912heusler, heusler1903magnetische}, for consistency with standard naming conventions of other compounds
\footnote{Multiple conventions have been used for naming cubic and hexagonal Heusler compounds. The naming of hexagonal Heuslers typically follows the convention of listing constituents in order of increasing electronegativity $XYZ$, e.g. LaCuSn and LiGaGe. The naming of cubic half Heusler compounds often follows this $XYZ$ convention, e.g. LuPtBi, TiCoSb, and TiNiSn; however, the ordering $YXZ$ is also commonly used, e.g. CoTiSb and NiTiSn. This alternate convention exists because full Heusler compounds are more commonly written $Y_2 XZ$ (e.g., Ni$_2$MnGa) rather than $XY_2 Z$ (MnNi$_2$Ga). For consistency across the cubic and hexagonal Heusler compounds, this article adopts the electronegativity convention. When another notation is more commonly used, that name is also mentioned.}.
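As a compact summary of the two cubic structures described above (fractional coordinates on the fcc basis; the site choices follow the text, and the dictionary layout itself is purely illustrative):

```python
# Fractional coordinates (fcc basis) of the stuffed-tetrahedral structures.
HALF_HEUSLER_XYZ = {            # C1_b, spacegroup F-43m
    "Z": (0.00, 0.00, 0.00),    # rocksalt partner of X
    "Y": (0.25, 0.25, 0.25),    # tetrahedral site: zincblende [YZ] sublattice
    "X": (0.50, 0.00, 0.00),    # octahedral "stuffing" site
}
FULL_HEUSLER_XY2Z = dict(
    HALF_HEUSLER_XYZ,
    Y2=(0.75, 0.75, 0.75),      # second Y fills the remaining tetrahedral site
)
```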
Lattice parameters for most cubic Heuslers are spanned by the zincblende III-V semiconductors, from GaP ($5.45$ \AA) to GaAs ($5.653$ \AA) and InSb ($6.479$ \AA). Relaxed ternary buffer layers, e.g., In$_x$Ga$_{1-x}$As, enable the lattice parameter to be tuned exactly (Fig. \ref{crystal}d).
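For instance, the indium fraction $x$ needed to match a given cubic Heusler lattice parameter can be estimated with Vegard's law (a linear-interpolation approximation; the InAs value of $6.058$ \AA\ is a textbook room-temperature number added here, not taken from the text):

```python
A_GAAS, A_INAS = 5.653, 6.058   # room-temperature lattice constants (angstroms)

def ingaas_lattice(x):
    """Vegard's-law estimate of the relaxed In_xGa_(1-x)As lattice parameter."""
    return (1.0 - x) * A_GAAS + x * A_INAS

def indium_fraction(a_target):
    """Composition x whose Vegard lattice parameter equals a_target."""
    return (a_target - A_GAAS) / (A_INAS - A_GAAS)
```

A half Heusler with a $5.85$ \AA\ lattice parameter, say, would call for $x \approx 0.49$ under this approximation.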
\begin{figure*}
\includegraphics[width=7in]{props.pdf}
\caption{Emergent properties at Heusler interfaces. (a) Tuning parameters for epitaxial films and heterostructures. (b) Heusler properties, grouped into four major functionalities: magnetic, electronic, elastic, and caloric. (c) Scanning tunneling microscopy (STM) images of TiCoSb (semiconductor), MnNiSb (half metallic ferromagnet), and LuPtSb (topological semimetal). Each of these compounds can be grown on a III-V substrate. A Dirac dispersion of topological states is expected at the interface between topological semimetals and normal III-V substrates. A gapped Dirac cone and quantum anomalous Hall effect are expected at the interface between topological materials and ferromagnets. TiCoSb STM adapted from J. K. Kawasaki \textit{et al.}, Sci. Adv. 4: eaar5832 (2018) \cite{kawasaki2018simple}. Reprinted with permission from AAAS. MnNiSb STM reprinted figure with permission from P. Turban \textit{et al.}, Phys. Rev. B 65, 134417 (2002) \cite{turban}. Copyright 2002 by the American Physical Society. LuPtSb STM courtesy of N. Wilson and C. Palmstr{\o}m.}
\label{props}
\end{figure*}
Hexagonal polymorphs also exist, in which polar distortions give rise to properties that are not symmetry-allowed for cubic systems, e.g., ferroelectricity. The parent \textit{ZrBeSi}-type structure is nonpolar and consists of planar graphite-like $[YZ]^{n-}$ layers that are ``stuffed'' with $X^{n+}$ (spacegroup $P6_3 / mmc$ Fig. \ref{crystal}g). Unidirectional buckling $d$ of the $[YZ]^{n-}$ layers produces the polar \textit{LiGaGe}-type structure, which can be viewed as a ``stuffed wurtzite'' (spacegroup $P6_3 mc$, Fig. \ref{crystal}f) \cite{casper2008searching, hoffmann2001alb2}. Many insulating $P6_3 mc$ materials are promising as ferroelectrics, with calculated polarization and energy barrier to switching comparable to BaTiO$_3$ \cite{bennett2012hexagonal, garrity2014hyperferroelectrics}, despite being composed of all-metallic constituents. These predictions challenge the conventional notion that good ferroelectrics should be highly ionic materials with large Born effective charges \cite{benedek2014polarization}. In fact, it is precisely this lack of ionicity and stronger tendency towards covalent bonding that is predicted to make many hexagonal Heuslers robust against the depolarizing field, making them \textit{hyperferroelectrics} \cite{garrity2014hyperferroelectrics}. Other polar $P6_3 mc$ materials are natively semimetallic, and are of interest as low resistivity polar metals \cite{benedek2016ferroelectric}. These hexagonal materials are ``stuffed'' versions of wurtzite GaN (polar) and hexagonal BN (nonpolar), and can be lattice matched to zincblende semiconductor substrates in $\{ 111 \}$ orientation (Fig. \ref{crystal}h).
In both hexagonal and cubic polymorphs, $X$ is typically a transition or rare earth metal, $Y$ a transition metal, and $Z$ a main group metal (III, IV, or V). The phase boundary between hexagonal and cubic polymorphs is determined by the relative size of the $X$ cation compared to $Y$ and $Z$, with larger $X$ cations favoring the hexagonal polymorphs and smaller $X$ favoring cubic polymorphs \cite{seibel2015gold, hoffmann2001alb2, casper2008searching, xie2014pressure}. This dependence on chemical pressure suggests that the phase boundary could also be traversed by epitaxial strain, giving the epitaxial grower access to phases that would otherwise be challenging to stabilize and retain by bulk synthetic methods, e.g., hydrostatic pressure. Other structural variants include Jahn-Teller driven cubic to tetragonal distortions, variations in the atomic site ordering (e.g. ``inverse Heusler,'' $D0_3$, and $B2$ cubic variants), and polar vs antipolar layer buckling patterns in the hexagonal variants \cite{seibel2015gold, bennett2013orthorhombic, strohbeen2019electronically}.
The wide array of quantum properties in these materials arises from the large orbital degeneracy and the spatial confinement of $d$ and $f$ orbitals (compared to $s$ and $p$), with rich phenomena that are highly complementary to those observed in the transition metal oxides. Due to their lack of oxygen or other highly electronegative species, Heusler compounds are typically less ionic than oxides and the on-site electron-electron repulsion (Coulomb $U$) is generally weaker. For these reasons, Heusler compounds are unlikely to be a good system for finding new high temperature superconductors \cite{dagotto1994correlated}. On the other hand, magnetic exchange interactions (Hund's $J$) are quite significant, as evidenced by the strong tendency for magnetic ordering with Curie temperatures as large as 1100 K \cite{wurmehl2006investigation}. The lack of oxygen combined with substrate lattice matching makes Heuslers more amenable than oxides to integration with compound semiconductors, since oxygen interdiffusion, reactions, and misfit dislocations pose significant challenges for oxide on semiconductor epitaxy \cite{hubbard1996thermodynamic}. Additionally, many Heusler compounds can be alloyed to form quaternary, quinary, and even higher component alloys, providing a means to continuously tune the lattice parameter and properties of the Heusler compound itself \cite{zeier2016engineering}. Finally, whereas oxides are typically brittle, Heuslers are highly elastic, displaying superelasticity and accommodating strains of several percent without plastic deformation \cite{bungaro2003first, dong2004shape}, making them attractive for flexible magnetoelectronics.
\begin{figure*}
\includegraphics[width=7in]{topological.pdf}
\caption{Topological states in $d$-band Heusler compounds. (a) Schematic density of states for Bi$_2$Se$_3$ and Heusler compounds ScPtSb, ScPtBi, and GdPtBi. Heusler compounds show significant $d$ character from the $X$ (Sc, Gd) and $Y$ (Pt) rare earth and transition metal sites. (b) Schematic energy-momentum dispersions. Starting from the trivial insulator ScPtSb (left), substitution of Sb with the heavy Bi atom leads to a spin-orbit induced band inversion of the $\Gamma_6$ and $\Gamma_8$ bands, creating a topological semimetal with quadratic band touchings (middle). GdPtBi (right), which has partially filled $f^7$ levels, is antiferromagnetic in its ground state \cite{nakajima2015topological}. Upon the application of an external magnetic field, the combination of Zeeman splitting and the exchange field creates a pair of linearly dispersing Weyl nodes with opposite chirality $\chi_{+}$ and $\chi_{-}$ \cite{hirschberger2016chiral}. For $X$Co$_{2}Z$ compounds, which are ferromagnetic in the ground state, Weyl nodes are expected in the absence of an applied magnetic field \cite{wang2016time}. (c) Schematic of topological state hybridization for ultrathin films. Due to the smaller spatial extent of $d$ orbitals than $s$ and $p$ orbitals, the critical thickness for gap opening in Heuslers is expected to be smaller than for Bi$_2$Se$_3$.}
\label{topological}
\end{figure*}
\section*{Opportunities at interfaces}
Many of the most intriguing properties arise when the diverse properties of Heuslers can be combined and manipulated at atomically defined interfaces (Fig. \ref{props}). Such interfaces include Heusler/semiconductor, Heusler/oxide, and interfaces between two different Heusler compounds. Here I highlight several opportunities that lie beyond spintronics, and in the next section I describe the key experimental challenges.
\textbf{Topological states.} Recent angle-resolved photoemission spectroscopy (ARPES) measurements \cite{logan2016observation, liu2016observation} confirm theoretical predictions \cite{lin2010half, chadov2010tunable} of topological surface and interface states in cubic half Heuslers with large spin-orbit coupling compared to the bandwidth ($\lambda_{SO} > W$), e.g. $R$PtBi and $R$PtSb ($R=$ rare earth metal) (Fig. \ref{topological}b). Such states arise at the interfaces between topologically band-inverted materials and normal materials (or the vacuum, i.e. a surface), and are of great importance for dissipationless transport and for discovery of emergent quasiparticles when interfaced with layers of other functionality, e.g., Majorana bound states at topological / superconductor interfaces \cite{fu2008superconducting}.
Compared to first-generation binary topological insulators Bi$_x$Sb$_{1-x}$ and Bi$_2$Se$_3$, Heuslers offer several distinct opportunities. Firstly, whereas in most known topological materials the near-Fermi level states have $s$ and $p$ character, Heuslers have significant transition metal $d$ character with moderate electron-electron correlations \cite{chadov2009electron} (Fig. \ref{topological}a). The interplay between correlations and spin-orbit coupling is predicted to yield rich correlated topological properties in other systems, e.g. axion states and topological Kondo insulators in iridates. \cite{kim2008novel, witczak2014correlated}. Heuslers provide an alternative materials system for realizing such phenomena. Another potential consequence of localized $d$ orbitals is a shorter critical length scale for surface and interface state hybridization (Fig. \ref{topological}c). ARPES measurements of ultrathin Bi$_2$Se$_3$ films reveal that below a critical thickness of six quintuple layers ($\sim 6$ nm), the topological states at top and bottom interfaces hybridize to open a gap \cite{zhang2010crossover}. The smaller spatial extent of $d$ states in Heuslers implies that topological states may survive to smaller critical thicknesses without gapping out.
Secondly, the multifunctionality within the Heusler family enables lattice-matched topological heterostructures, for interfacing topological states with layers of other functionality. For example, topological / superconductor interfaces are predicted to host Majorana bound states, and topological / ferromagnet interfaces are expected to exhibit the quantum anomalous Hall effect \cite{chang2013experimental, liu2016quantum} and axion states \cite{xiao2018realization, mogi2017magnetic}. Lattice matching minimizes the potentially detrimental effects of misfit dislocations and interfacial defect states that could otherwise obscure the property of interest \cite{richardella2015characterizing}, e.g., by acting as parasitic conduction channels.
Finally, Heuslers are a platform for other topological states, including Dirac and Weyl fermions, in both cubic and hexagonal polymorphs. In cubic Heuslers, transport signatures of Weyl nodes have been observed in several $R$PtBi compounds \cite{hirschberger2016chiral, liang2018experimental, shekhar2018anomalous, PhysRevB.99.035110} and MnCo$_2$Ga (also known as Co$_2$MnGa) \cite{sakai2018giant} under an applied magnetic field, and theory predicts Weyl nodes in the magnetic full Heuslers $X$Co$_{2}Z$ without an external field \cite{wang2016time} (Fig. \ref{topological}b). In the hexagonal polymorphs, which break inversion symmetry, DFT calculations predict Weyl nodes \cite{narayan2015class, gao2018dirac} whose momenta are highly sensitive to the magnitude of polar buckling, potentially tunable by epitaxial strain.
\textbf{Interfacial superconductivity.} Heuslers are a platform for novel superconductivity, both at artificially defined interfaces and natively due to strong spin-orbit coupling. Whereas most known superconductors have singlet pairing, triplet superconductivity is predicted at interfaces between conventional superconductors $(S)$ and ferromagnets $(F)$ \cite{bergeret2001long}. Signatures of triplet pairing have been observed experimentally in Heusler-based $S/F/S$ Josephson junctions, where $S=$ Nb and $F=$ MnCu$_2$Al \cite{sprungmann2010evidence}. All-Heusler Josephson junctions offer the potential of realizing such behavior in fully lattice-matched systems that minimize interfacial disorder. Examples of Heusler superconductors include the cubic full Heuslers $X$Pd$_2$Sn ($X = $Sc, Y, Lu) and $X$Pd$_2 Z$ [$X=$ (Zr, Hf), $Z=$ (In, Al)] (also known as Pd$_{2}XZ$) \cite{klimczuk2012superconductivity, winterlik2009superconductivity}; the rare earth containing half Heuslers $R$PdBi \cite{nakajima2015topological, kim2018beyond}; and the hexagonal compounds BaPtAs \cite{kudo2018superconductivity}, SrPtAs \cite{nishikubo2011superconductivity}, and YbGaSi \cite{imai2008superconductivity}. Heusler $S/F/S$ Josephson junctions are also a platform for $\pi$ phase control in an all-epitaxial system, with potential applications as qubits \cite{yamashita2005superconducting}.
Intriguingly, recent theory \cite{brydon2016pairing, venderbos2018pairing} and experiments \cite{kim2018beyond} suggest triplet and higher order pairing may exist \textit{natively} in a subset of topological superconducting half Heuslers with composition $R$PdBi. Here the pairing occurs between $j=3/2$ fermions due to strong spin-orbit coupling. This combination is expected to natively host Majorana states in a single material \cite{yang2017majorana}, in contrast with previous experimental realizations that rely on an interface between a superconductor and a separate high spin-orbit material \cite{mourik2012signatures}.
\textbf{Interface polarization: ferroelectrics and polar metals.} For conventional ferroelectrics, the depolarizing field typically competes with and destroys long range polar order in the limit of ultrathin films. Hexagonal Heusler interfaces offer two potential solutions to this problem. Firstly, a number of insulating $P6_3 mc$ compounds (e.g., LiZnAs, NaZnSb) have been proposed as \textit{hyperferroelectrics}, which are robust against the depolarizing field due to their highly covalent bonding character with small Born effective charges \cite{garrity2014hyperferroelectrics}. Ferroelectric switching and hyperferroelectricity have yet to be experimentally demonstrated in hexagonal Heuslers. A significant challenge is that only a small subset of hexagonal Heuslers is natively insulating, an assumed requirement for switching via applied electric fields \cite{fei2018ferroelectric}. Epitaxial strain, quantum confinement, and Peierls-like distortions \cite{seibel2015gold, strohbeen2019electronically, genser2019coupling} may provide routes for tuning the buckling and opening a gap in polar compounds that are natively metallic.
For those hexagonal compounds that cannot be made insulating, the coexistence of a polar structure and metallicity holds interest in its own right, and may be a second solution to the depolarizing field problem. Polar metals, once assumed to be unstable since free carriers were thought to screen out polar displacements, are not fundamentally forbidden \cite{anderson1965symmetry} and have recently been demonstrated in several transition metal oxides \cite{shi2013ferroelectric, cao2018artificial, kim2016polar}. Hexagonal $P6_3 mc$ Heuslers are another family of polar metals, and are unique in that they are generally more conductive than oxides \cite{kaczorowski1999magnetic, schnelle1997crystal}. One application for polar metals may be to suppress the effects of the depolarizing field by pinning displacements at the polar metal / ferroelectric interface \cite{puggioni2016polar}. Other opportunities for polar metal interfaces may lie in nonlinear optics \cite{wu2017giant}, nonlinear charge transport \cite{kang2019nonlinear, tokura2018nonreciprocal}, and novel superconductivity \cite{edelstein1995magnetoelectric}.
\begin{figure}[h]
\includegraphics[width=3.5in]{polarcatastrophe.pdf}
\caption{Mechanism for polar catastrophe 2DEG at oxide and Heusler interfaces. (a) In bulk LaAlO$_3$, each LaO layer donates half an electron per formula unit to the AlO$_2$ layer above and to the AlO$_2$ layer below, resulting in alternating formal charges of $+1/-1$. This results in an excess charge of half an electron per formula unit at the (001) oriented interface with SrTiO$_3$, which consists of charge neutral SrO and TiO$_2$ planes. (b) A similar mechanism is expected for the half Heusler system TiCoSb / TiNiSn, in which TiCoSb is composed of alternating charged planes while TiNiSn is composed of charge neutral planes.}
\label{polarcatastrophe}
\end{figure}
\textbf{Polar catastrophe.} Interface polarization and charge transfer also provide opportunities for the creation of two-dimensional electron gases (2DEGs) across interfaces. Consider the classic ``polar catastrophe'' 2DEG that emerges at the LaAlO$_3$ / SrTiO$_3$ interface (Fig. \ref{polarcatastrophe}a). In this $3d$ electron system, the (001) stacking sequence of SrTiO$_3$ consists of charge neutral SrO/TiO$_2$ atomic planes, while LaAlO$_3$ consists of LaO/AlO$_2$ atomic planes with alternating $+1 / -1$ charge \cite{ohtomo2004high, janotti2012controlling}. The 2DEG arises from charge transfer of half an electron per formula unit across the interface, from the LaO atomic plane in LaAlO$_3$ to the TiO$_2$ atomic plane in SrTiO$_3$ \cite{ohtomo2004high, janotti2012controlling}. The half Heusler system TiNiSn / TiCoSb contains the same essential ingredients: just as in LaAlO$_3$ / SrTiO$_3$, the near Fermi level orbitals have strong $3d$ character. In (001) orientation the Ni/TiSn atomic planes are formally charge neutral, while the Co/TiSb planes have formal charges $-1 / +1$ \cite{kawasaki2018simple} (Fig. \ref{polarcatastrophe}b). These formal charges are based on an electron counting model \cite{kawasaki2018simple} that accurately predicts the experimentally measured surface reconstructions of TiCoSb (001), and is consistent with the experimental data for LuPtSb \cite{patel2014surface, patel2016surface2}, MnNiSb \cite{bach2003molecular}, and TiNiSn (001) \cite{kawasaki_nitisn}.
In this highly simplified view, a charge transfer 2DEG might also be expected at this Heusler interface \cite{sharan2019formation}. Recent transport measurements on MBE-grown TiCoSb/TiNiSn bilayers show a 1.5 order of magnitude enhancement in conductivity \cite{ricethesis}, consistent with this interfacial charge transfer prediction \cite{sharan2019formation}. Additionally, in LaAlO$_3$ / SrTiO$_3$ the strong spatial confinement of the 2DEG has been suggested to enhance electron-electron correlations and contribute to the emergent superconductivity. What new properties may emerge at Heusler interfaces with enhanced correlations?
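The magnitude of the expected charge transfer can be estimated by simple counting. The sketch below is a back-of-envelope estimate, not taken from the cited references: it assumes exactly half an electron transferred per interface formula unit, a SrTiO$_3$ lattice constant of roughly 0.39 nm with one formula unit per $a^2$, and a half Heusler lattice constant of roughly 0.59 nm with two formula units per $a^2$ in each (001) atomic plane.

```python
# Back-of-envelope sheet density for a polar-catastrophe 2DEG:
# half an electron per interface formula unit, divided by the
# (001) in-plane area occupied by one formula unit.
E_PER_FU = 0.5  # electrons transferred per formula unit (polar catastrophe model)

def sheet_density(a_nm, fu_per_cell_area=1.0):
    """Sheet carrier density in cm^-2 for in-plane lattice constant a (nm)."""
    area_cm2 = (a_nm * 1e-7) ** 2  # area of one (001) surface unit cell
    return E_PER_FU * fu_per_cell_area / area_cm2

# LaAlO3 / SrTiO3: a ~ 0.3905 nm, one formula unit per a^2
print(f"LAO/STO:        {sheet_density(0.3905):.1e} cm^-2")  # ~3e14 cm^-2
# TiCoSb / TiNiSn: a ~ 0.59 nm, assumed two formula units per a^2 per plane
print(f"TiCoSb/TiNiSn:  {sheet_density(0.59, 2.0):.1e} cm^-2")
```

The LAO/STO estimate of a few $10^{14}$ cm$^{-2}$ is the standard polar catastrophe value; the Heusler number is of the same order, which is why a comparable interfacial charge transfer is plausible in this simplified picture.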
\textbf{Interfacial magnetism and skyrmions.} Heusler interfaces offer a platform for enhancing skyrmion phase stability via combined bulk and interfacial inversion symmetry breaking. Magnetic skyrmions are topologically protected vortex-like swirls of spin texture, whose robustness against atomic scale disorder makes them attractive for applications in magnetic memory. They are stabilized \cite{roessler2006spontaneous, binz2006theory, han2010skyrmion} by the Dzyaloshinskii-Moriya (DM) exchange interaction \cite{dzyaloshinsky1958thermodynamic, moriya1960anisotropic}, which results from a combination of broken inversion symmetry and large spin-orbit coupling. To date, most work has focused on two separate strategies to stabilize skyrmions: (1) bulk crystal structures that break inversion, e.g. $B20$ crystals such as FeGe and MnSi \cite{muhlbauer2009skyrmion}, or (2) artificially defined interfaces that break inversion \cite{bogdanov2001chiral}, e.g. Co/Pt interfaces \cite{yang2015anatomy}.
Combining bulk and interfacial DM interactions in a single materials platform is predicted to be a path towards further control and enhancements of skyrmion stability \cite{rowland2016skyrmions}. Heuslers are a strong materials candidate. Recent experiments confirm skyrmions in several Mn$_2 YZ$ compounds that crystallize in the tetragonal inverse Heusler structure ($I\bar{4}m2$) that breaks bulk inversion \cite{nayak2017magnetic, meshcheriakova2014large}. The epitaxial film growth of several of these compounds has recently been demonstrated \cite{jin2018structural, swekis2019topological, rana2016observation, meshcheriakova2015structural, li2018large} providing a path towards further manipulation of the DM interaction in layered heterostructures of these materials. Beyond skyrmion stability, recent theoretical proposals suggest that skyrmion/superconductor interfaces may be another platform for hosting Majorana fermions \cite{PhysRevB.93.224505, PhysRevB.92.214502}, potentially realizable in an all-Heusler system.
\begin{figure}[th]
\includegraphics[width=3.4in]{martensite.pdf}
\caption{Concept for a topological switch, induced by reversible martensitic phase transitions. The shape memory alloy undergoes a displacive transformation from the high symmetry austenite phase to a low symmetry twinned martensite, as a function of temperature or applied magnetic field. Strains across the interface induce a structural distortion in the ultrathin Heusler layer, e.g. $R$PtBi, transforming it from a topological phase to a trivial phase.}
\label{martensite}
\end{figure}
\textbf{Interface strain and shape memory effect.} Shape memory alloys are ferroelastic materials that undergo large, reversible martensitic phase transitions or twin reorientations to revert a macroscopically deformed material back to its original shape. Several Heuslers, including MnNi$_2 Z$ ($Z=$ group III or IV) and MnCo$_2$Ga, exhibit such transitions, driven by temperature and strain (shape memory effect), or by an external magnetic field (magnetic shape memory effect) \cite{dong2004shape, bungaro2003first}. These compounds are also known as Ni$_2$Mn$Z$ and Co$_2$MnGa. Across these transitions the magnetic, caloric, and electrical transport properties change abruptly \cite{manosa2010giant}, and these materials are generally also superelastic, accommodating strains as large as 10 percent by locally undergoing strain-induced martensitic phase transitions or twin reorientations \cite{sozinov2002giant}. This 10 percent strain is an order of magnitude larger than the strains observed in magnetostrictive or piezoelectric materials, with promising applications for microactuation and vibration damping. The large latent heat associated with the phase transition holds promise for applications in refrigeration and thermal energy conversion \cite{srivastava2011direct, song2013thermodynamics}.
Layered heterostructures composed of a shape memory alloy provide an opportunity to couple the large and reversible strains across materials interfaces, to \textit{induce} phase transitions in adjacent functional layers. For example, DFT calculations suggest that the topological band inversion in the $R$PtBi Heuslers can be flipped by strains of approximately 3\% \cite{chadov2010tunable}. One could envision $R$PtBi / shape memory alloy interfaces in which the topological states are switched ``on'' and ``off'' by temperature or magnetic field-induced martensitic phase transitions (Fig. \ref{martensite}). Strains of this magnitude are likely too large to be produced by coupling to magnetostrictive or piezoelectric layers, but are within the limits of shape memory alloys.
Well-defined epitaxial interfaces also provide an idealized test bed for understanding and manipulating the phase transition itself. A key limiting factor in bulk shape memory alloys is that the habit plane, i.e. the interface between austenite and martensite phases, is not guaranteed to be atomically commensurate (epitaxial). As a result, repeated cycling through the martensitic phase transition creates dislocations that lead to slower switching speeds, decreased energy efficiency, and eventually mechanical failure \cite{gall2002cyclic}. A promising materials design route is to engineer materials such that the habit plane is atomically commensurate or near-commensurate, i.e., to satisfy the \textit{compatibility} and \textit{cofactor} conditions \cite{james2000martensitic, james1989theory, bhattacharya1991wedge, ball100i987, gu2018phase}. This condition is met when the middle eigenvalue $\lambda_2$ of the austenite to martensite transformation matrix equals 1. One design route towards the $\lambda_2=1$ criterion is to deliberately fabricate non-stoichiometric samples \cite{cui2006combinatorial, chluba2015ultralow}, such as Mn$_{25+y}$Ni$_{50-x}$Co$_x$Sn$_{25-y}$ (also known as Ni$_{50-x}$Co$_x$Mn$_{25+y}$Sn$_{25-y}$) \cite{srivastava2010hysteresis, bhatti2012small}.
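The $\lambda_2$ criterion itself is straightforward to evaluate once the lattice parameters of the two phases are known. A minimal sketch, assuming an illustrative cubic-to-tetragonal (Bain-type) stretch with hypothetical lattice parameters rather than data for any specific alloy:

```python
import numpy as np

def middle_eigenvalue(U):
    """Middle eigenvalue lambda_2 of the symmetric transformation
    stretch matrix U mapping the austenite lattice to martensite."""
    return np.sort(np.linalg.eigvalsh(U))[1]

# Illustrative cubic -> tetragonal stretch: cubic a0 maps to (a, a, c).
# These lattice parameters are hypothetical, chosen only for demonstration.
a0, a, c = 3.00, 2.90, 3.20  # angstrom
U = np.diag([a / a0, a / a0, c / a0])

lam2 = middle_eigenvalue(U)
print(f"lambda_2 = {lam2:.4f}")  # lambda_2 = 1 would satisfy the compatibility condition
```

Tuning composition (or epitaxial strain) to drive $\lambda_2 \to 1$ is precisely the design strategy behind the low-hysteresis alloys cited above.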
Another route is to engineer the habit plane via film/substrate interface effects in epitaxial thin films, which can be tuned via crystallographic orientation and strain \cite{bhattacharya1999tents, kaufmann2011modulated}. For example, for epitaxial NiTi films grown on (001) oriented MgO \cite{buschbeck2011martensite} and GaAs \cite{buschbeck2011growth} substrates, clamping effects from the substrate force a new transformation pathway in which the habit plane lies parallel to the (001) \cite{buschbeck2011martensite}. Importantly, this transformation occurs via a shear mechanism in which the interface remains atomically coherent, and may provide a general route towards engineering atomically commensurate phase transitions.
\section*{Challenges}
Significant advances have been made on the epitaxial growth and control of Heusler interfaces over the past 20 years, primarily driven by applications in spintronics. These include the development of Heusler molecular beam epitaxy (MBE) \cite{ambrose2000epitaxial, van2000epitaxial, bach2003molecular, dong1999molecular, dong2000epitaxial, turban}, the identification of semi adsorption-controlled growth windows \cite{kawasaki2018simple, patel2014surface, bach, turban, strohbeen2019electronically}, the use of epitaxial diffusion barriers and low temperature seed layers \cite{palmstrom2016heusler, farshchi2013spin, schultz2002eras, buschbeck2011growth}, the use of chemical templating layers \cite{dong2000epitaxial, filippou2018chiral}, and the development of simple theoretical frameworks based on electron counting \cite{kandpal, jung2000study, pashley1989electron} for predicting stability and structural distortions at surfaces and interfaces \cite{kawasaki2018simple, pashley1989electron}.
Despite these advances, the full realization of Heusler properties beyond spintronics will likely require even more stringent control of materials and interface quality. This is because many of the emerging properties in Heuslers depend on bandgaps: bulk bandgaps in topological insulators and ferroelectrics, minority spin gaps in half metals, and pairing gaps in superconductors. Such gaps tend to be highly sensitive to non-stoichiometry, point defects, lattice distortions, and interfacial reconstructions and disorder. Additionally, interfacial properties are often inherently short-range, and therefore can be sensitive to subtle changes in atomic structure across the interface.
\begin{figure}[h]
\includegraphics[width=2.6in]{transport.pdf}
\caption{Carrier mobility and density at 2 K for 18 valence electron half Heuslers, in bulk crystal and epitaxial film form. Legend: MBE-grown films (filled red squares \cite{strohbeen2019electronically, kawasaki_cotisb, patel2014surface,kawasaki_thesis}), sputter-grown films (open red squares, Refs. \cite{jaeger2011epitaxial, wang2012fabrication, narita2015effect, shan2013electronic}), single crystal topological semimetals $R$(Pt,Pd)(Sb,Bi) (filled black circles, Refs.\cite{hou2015high, hirschberger2016chiral, hou2015large, nakajima2015topological}), and bulk semiconductors (open black circles, Refs. \cite{ahilan2004magnetotransport, wu2007thermoelectric}). For the MBE-grown samples, TiCoSb was grown on lattice matched InAlAs/InP(001) \cite{kawasaki_cotisb} and on MgO(001) \cite{kawasaki_thesis}, LuPtSb on InAlSb/GaSb(001) \cite{patel2014surface}, and the hexagonal compounds LaAuSb, LaAuGe, and LaPtSb on Al$_2$O$_3$(0001) \cite{strohbeen2019electronically}. Most sputtered films were grown on MgO(001). For comparison, I also show the oxide SrTiO$_3$, grown by pulsed laser deposition (PLD \cite{kozuka2010dramatic}) and by adsorption-controlled MOMBE \cite{cain2013doped}.}
\label{transport}
\end{figure}
\textbf{Controlling stoichiometry and defects to ``electronic grade.''} Although bandstructure calculations predict a number of half Heuslers to be semiconductors with bandgaps of 1 eV or larger, typical background carrier densities are well above $10^{17}$ cm$^{-3}$ and mobilities below $500$ cm$^2$/Vs, for both bulk crystals and thin films (Fig. \ref{transport}). Flux-grown single crystals of Heusler topological semimetals do have higher mobilities approaching $10^5$ cm$^2$/Vs (filled black circles \cite{hou2015high, hirschberger2016chiral, hou2015large, nakajima2015topological}); however, this higher mobility results in part from the topological protection of surface or bulk Dirac and Weyl states rather than purely a reduction of bulk impurity scattering.
\begin{figure}[th]
\includegraphics[width=2.5in]{adsorption.pdf}
\caption{Thermodynamics of adsorption-controlled growth. (a) Growth window for stoichiometric GaAs, as a function of arsenic partial pressure and sample temperature. Adapted from Ref. \cite{theis1998adsorption}. The bounds of this growth window are determined by the vaporization of arsenic (upper bound) and the decomposition of GaAs into Ga liquid and As$_2$ vapor (lower bound). Within the window, stoichiometric solid GaAs plus As$_2$ vapor is formed. (b) Schematic semi-adsorption-controlled window for antimonide Heuslers. The upper bound is given by the vaporization of Sb, while the lower bound is given by the decomposition of the Heusler phase. One possible decomposition reaction, $XY$Sb$_{(s)}$ $\leftrightarrow XY_{(s)} +$ $\frac{1}{2}$Sb$_{2(g)}$, is shown. The Sb stoichiometry is self limiting; however, the transition metal stoichiometry $X:Y$ is not.}
\label{adsorption}
\end{figure}
The poor transport properties stem largely from challenges in controlling the stoichiometry and resultant defects, which are generally more difficult to control in ternary intermetallics than in simple binary semiconductors. To illustrate this challenge, consider binary GaAs, which shows record high electron mobility when grown in a modulation doped structure by MBE \cite{dingle1978electron, pfeiffer2003role, gardner2016modified}. A major reason for the success of MBE-grown GaAs is the existence of a thermodynamically adsorption-controlled growth window \cite{tsao2012materials}, in which the stoichiometry is self-limiting (Fig. \ref{adsorption}(a)). Due to the high volatility of arsenic, GaAs films are grown with an excess arsenic flux, in which only the stoichiometric As:Ga ratio ``sticks'' and the excess arsenic escapes in the vapor phase. High mobility ternary III-V alloys, e.g. In$_x$Ga$_{1-x}$As, are also routinely grown by MBE in which the As:(In+Ga) stoichiometry is self-limiting. The In:Ga stoichiometry is not self-limiting; however, since both In and Ga have the same valence and incorporate substitutionally on the same lattice sites, slight variations of In:Ga composition result in subtle changes in the bandgap rather than the formation of defect states.
In select cases, ternary Heuslers can be grown in a \textit{semi} adsorption-controlled window (Fig. \ref{adsorption}b), in which the stoichiometry of \textit{one} of the three elements is self-limiting. TiCoSb \cite{kawasaki2018simple, kawasaki_cotisb}, MnNiSb \cite{bach, turban}, LuPtSb \cite{patel2014surface}, and LaAuSb \cite{strohbeen2019electronically} can be grown by MBE with an excess Sb flux, in which the ratio of Sb to $(X+Y)$ is self-limiting. The TiCoSb films grown by this method display the lowest background carrier concentration ($n = 9 \times 10^{17}$ cm$^{-3}$ at 300 K, $2 \times 10^{17}$ cm$^{-3}$ at 2 K) of any gapped Heusler compound to date \cite{kawasaki_cotisb}, including bulk crystals (Fig. \ref{transport}). The electron mobility of $530$ cm$^2$/Vs is similarly large, compared to typical values of less than 100 cm$^2$/Vs for growth by sputtering or arc melting (Fig. \ref{transport}). For the semimetal LaAuSb grown by semi adsorption-controlled MBE, the 2 K mobility is 800 cm$^2$/Vs \cite{strohbeen2019electronically}.
However, it remains an outstanding challenge to control the remaining $X:Y$ transition metal stoichiometry. This is especially important for Heuslers, compared to III-V ternary alloys, since $X$ and $Y$ occupy different lattice sites and typically have different valences. At typical growth temperatures of $300-600^{\circ}$C the sticking coefficients for elemental transition metals are near unity, therefore the film $X:Y$ stoichiometry relies on precise control of $X$ and $Y$ fluxes rather than a self-limiting process. Due to typical flux drifts, these fluxes are difficult to control to better than $1\%$, even when using real-time flux monitoring and feedback approaches such as optical atomic absorption \cite{chalmers1993real, kasai1997atomic}) or x-ray emission (RHEED-TRAXS \cite{hasegawa1985chemical}) spectroscopies. In a worst-case scenario, if all nonstoichiometric defects were electrically active, a $1\%$ deviation in stoichiometry would correspond to an unintentional carrier density of order $10^{20}-10^{21}$ cm$^{-3}$, clearly unacceptable for most electronic applications. At such high concentrations, the defects typically form an impurity band or a ``perturbed host'' band \cite{yonggang2017natural}. While not all defects are electronically active \cite{yonggang2017natural}, experimentally it is found that most polycrystalline half Heuslers have carrier densities greater than $10^{20}$ cm$^{-3}$ \cite{muta2009high, kim2007high, fu2015realizing}. In general only flux-grown single crystals and semi adsorption-controlled MBE films have unintentional densities below $10^{20}$ cm$^{-3}$ (Fig. \ref{transport}). Control of stoichiometry is also critical for half-metallic ferromagnets, since non-stoichiometric defects often produce states within the minority spin gap \cite{picozzi2007polarization}, thus decreasing the spin polarization at the Fermi energy.
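The quoted worst-case density follows directly from the atomic density of the lattice. A quick numerical check, assuming a typical half Heusler lattice constant of roughly 0.59 nm and, as in the worst case above, that every off-stoichiometric atom donates or accepts one carrier:

```python
# Worst-case carrier density from a 1% X:Y flux error in a half Heusler XYZ.
# The conventional cubic cell contains 4 formula units (hence 4 X sites).
a_cm = 0.59e-7             # assumed lattice constant, ~0.59 nm in cm
fu_per_cm3 = 4 / a_cm**3   # formula-unit (= X-site) density, ~2e22 cm^-3
delta = 0.01               # 1% deviation in X:Y stoichiometry

n_defect = delta * fu_per_cm3  # one carrier per defect (worst case)
print(f"n_defect = {n_defect:.1e} cm^-3")  # ~2e20 cm^-3
```

The result of about $2 \times 10^{20}$ cm$^{-3}$ sits at the bottom of the quoted $10^{20}-10^{21}$ cm$^{-3}$ range; multiply-charged defects or slightly larger flux errors push it toward the upper bound.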
One possible solution may be to replace one or both of the transition metal sources with a volatile chemical precursor. For transition metal oxides, fully adsorption-controlled growth of SrTiO$_3$ and SrVO$_3$ thin films has been demonstrated by replacing elemental Ti and V with titanium tetra-isopropoxide (TTIP) and vanadium tetra-isopropoxide (VTIP), respectively. The resulting films exhibit record high electron mobility \cite{cain2013doped, son2010epitaxial} and low residual resistivity \cite{moyer2013highly}, exceeding their bulk counterparts.
This approach is generally called metalorganic molecular beam epitaxy (MOMBE) \cite{putz1985gaas} or chemical beam epitaxy (CBE) \cite{tsang1984chemical}. First developed in the 1980s for the growth of III-Vs, MOMBE was applied a few years later to the growth of superconducting oxides $R$Ba$_2$Cu$_3$O$_{7-x}$ ($R =$ Y, Dy) \cite{king1991situ, endo1991preparation}. For the case of perovskite oxides, this approach has recently been termed \textit{hybrid} MBE (\textit{h}MBE) \cite{jalan2009molecular, jalan2009growth}, where the distinction \textit{hybrid} refers to the combined use of metalorganic$+$elemental$+$gas sources \cite{brahlek2018frontiers}, as opposed to purely metalorganic or metalorganic$+$elemental sources. Given the remarkable success of volatile precursor MBE for transition metal oxide growth, similar advances are anticipated if the approach can be applied to Heuslers. Potential precursors include metalorganics, e.g. the metal cyclopentadienyls, or volatile metal halides. However, such precursors introduce new challenges of potential carbon incorporation and equipment corrosion, respectively.
Ultimately, the degree of stoichiometry control possible by adsorption-control may be limited by the phase diagram of the particular system. For example, rather than existing as pure line compounds, some Heusler compounds have a finite (few percent) phase field along certain directions in composition space. The $x<0.05$ solubility of excess Ni within TiNi$_{1+x}$Sn is one example \cite{douglas2014nanoscale, rice2017structural, douglas2014phase}. For such compounds, the stoichiometry is likely to only be self-limited to within the bounds of the phase field. However, for certain applications deliberately off-stoichiometric compositions are desired, e.g. the $\lambda_2=1$ criterion for low hysteresis shape memory alloys as described in the previous section \cite{james2000martensitic, james1989theory, bhattacharya1991wedge, ball100i987, gu2018phase}.
\textbf{Point defects.} Point defects in Heuslers also remain challenging to understand, measure, and control, in part because a quantitative experimental identification requires relatively low defect density samples. Our understanding is derived primarily from first-principles theory. DFT calculations on cubic full and half Heuslers predict a number of point defects, many with similarly small formation energies ($<1$ eV) \cite{yonggang2017natural, picozzi2007polarization, picozzi2004role}. The hexagonal polymorphs are less explored.
\begin{figure}[th]
\includegraphics[width=3.5in]{phase.pdf}
\caption{Defects and phase diagram for cubic Heuslers. (a) Crystal structure and defects. Half Heusler ($XYZ$) consists of an ordered vacancy sublattice $v_{i}$. A common defect for half Heuslers is a small fraction of $Y_{i}$ interstitials ($Y$ on interstitial $(\frac{3}{4}, \frac{1}{4}, \frac{1}{4})$ sites). The partially filled spheres denote fractional occupancy. In the limit of all vacancy sites being filled with $Y$, the structure is full Heusler ($XY_{2}Z$). For full Heuslers in which $X$ and $Z$ sites are indistinguishable, the structure is $B2$. (b) Ternary phase diagram of Ti-Ni-Sn at 497 $^{\circ}$C, adapted from Refs. \cite{romaka2013phase, stadnyk1991isothermal}. A tie line exists between the full and half Heusler phases. The phase fields for both TiNiSn and TiNi$_2$Sn are finite, and extend towards one another.}
\label{phase}
\end{figure}
For half Heuslers, DFT calculations suggest the defect behavior may be grouped into families based on the chemical identity of the $Y$ site \cite{yonggang2017natural}. For $Y = 3d$ metal, $Y_{i}$ interstitials ($Y$ on interstitial $i$ sites, Fig. \ref{phase}a) are predicted to be the dominant low energy defect \cite{yonggang2017natural}, consistent with previous specific calculations for $Y=$ Ni \cite{larson2000structural, ogut}. These findings are consistent with the structural insight that $Y$ and $i$ sites have the same nearest neighbor coordination, and therefore filling these sites would have similar energies ($0$ to $0.5$ eV, depending on the position of the chemical potential \cite{yonggang2017natural}). In the dilute limit, $Y_i$ interstitials are expected to act as shallow donors \cite{yonggang2017natural}. In the high concentration limit they are expected to decrease the effective bandgap via the formation of an impurity band or a ``perturbed host'' band, which explains why many of the predicted semiconducting Heuslers behave experimentally as metals in transport measurements. The low formation energy for $Y_{i}$ is also proposed to drive a natural tendency for half Heuslers to be $Y$-rich \cite{yonggang2017natural}. This prediction of natural off stoichiometry is consistent with experimental observations that for TiNiSn, the phase field extends towards excess $Y=$ Ni and a thermodynamic tie line exists between half Heusler TiNiSn and full Heusler TiNi$_2$Sn (Fig. \ref{phase}b) \cite{douglas2014nanoscale, rice2017structural, douglas2014phase}. Such a tie line exists in many other half Heusler / full Heusler systems, e.g. ZrNiSn / ZrNi$_2$Sn and TiCoSn / TiCo$_2$Sn. 
Additionally, the electrical transport for TiNi$_{1+x}$Sn is optimized (low carrier density, high mobility) for samples that are slightly $Y=$ Ni rich ($x\sim 0.05$), suggesting that the excess Ni$_i$ compensates electrically for the natural Ni vacancy ($v_{Ni}$) formation \cite{rice2017structural}. A fruitful new direction for theorists would be to identify other electrically compensating point defects, to guide experimental efforts in making half Heuslers that are truly insulating.
For $Y= 5d$ half Heuslers, $Z_X$ antisites are expected to be the dominant defect, which act as acceptors \cite{yonggang2017natural}. General trends for $Y=4d$ are not well established \cite{yonggang2017natural}.
Similar defect calculations for full Heuslers $XY_2 Z$ suggest $Y$ vacancies ($v_Y$) to be a low energy defect \cite{popescu2017native}, complementary to the $Y_i$ for half Heuslers. This prediction is consistent with experimental observations that full Heusler TiNi$_{2-x}$Sn has a finite phase field extending in the Ni-deficient direction \cite{romaka2013phase}. $X_Z$ and $Z_X$ antisites are another proposed defect in both full and half Heuslers \cite{larson2000structural, popescu2017native}, consistent with experimental observations of $B2$-type disorder \cite{khovailo2001order}, in which $X$ and $Z$ sites are indistinguishable, for films grown at low temperature \cite{rath2018reduced}. $X_Y$ and $Y_X$ antisites have also been proposed: in MnCo$_2$Si, first-principles calculations suggest Mn$_{Co}$ (Mn on Co lattice sites) have the lowest formation energy and generally retain half metallic character, but other defects such as Co$_{Mn}$ are close in formation energy and can destroy half metallicity by forming states within the minority spin gap \cite{picozzi2004role}. Given the large zoology of proposed point defects for full Heuslers, many with similar small calculated formation energies ($<1$ eV \cite{picozzi2004role, picozzi2007polarization, yonggang2017natural}, compared to $\sim 3$ eV for self interstitials in Si \cite{rinke2009defect}), feedback between theory and measurements on clean samples is required to determine which defects are present, which are electronically active, and how to control them.
\textbf{Interdiffusion and reconstructions.} Most theoretical predictions assume idealized interfaces in which atoms adopt their bulk-like positions. However, at real materials interfaces there can be strong thermodynamic driving forces to deviate from simple bulk-like termination. This is especially important because interface properties are often inherently short-range. Heusler interfaces -- including Heusler/Heusler, Heusler/III-V, and Heusler/oxide -- are no exception. For Heuslers, the challenges exist at several length scales: interdiffusion and reactions at the several nanometer scale, and interfacial reconstructions and site disorder at the unit cell scale.
Interdiffusion and interfacial reactions pose significant challenges at some Heusler/III-V semiconductor interfaces, particularly those containing Ni or Mn. This stems from the large diffusion coefficients for many transition metals in III-Vs ($D > 10^{-10}$ cm$^2$/s for Mn and Ni in GaAs at 500 $^\circ$C, compared to $D \sim 10^{-15}-10^{-13}$ for typical main group species \cite{wu1991diffusion, lahav1986interfacial, ruckman1986interdiffusion, fisher1998diffusion}), combined with complicated multi-component phase diagrams. These factors can result in interdiffused regions and secondary phases for direct Heusler on III-V growth at elevated temperatures ($>400^\circ$C \cite{gusenbauer2011interdiffusion}). Interdiffusion also limits the sharpness of Heusler/Heusler interfaces (e.g. MnCo$_2$Al / MnFe$_2$Al \cite{brown2018epitaxial}, TiNiSn / Zr$_{0.5}$Hf$_{0.5}$NiSn \cite{jaeger2014thermal}, MnNiSb / MnPtSb \cite{mancoff1999growth}), but is generally less significant at Heusler/oxide interfaces due to the relative stability of many metal-oxides (FeCo$_2$Al / MgO \cite{bai2014magnetocrystalline}).
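The scale of the problem is set by the characteristic diffusion length $L \approx \sqrt{Dt}$. A rough estimate using the diffusion coefficients quoted above, assuming an illustrative one-hour growth at elevated temperature (the growth time is an assumption for illustration, not a value from the cited work):

```python
from math import sqrt

def diffusion_length_nm(D_cm2_s, t_s):
    """Characteristic diffusion length L = sqrt(D * t), returned in nm."""
    return sqrt(D_cm2_s * t_s) * 1e7  # cm -> nm

t = 3600.0  # assumed one-hour growth at ~500 C

# Transition metals (Mn, Ni) in GaAs: D > 1e-10 cm^2/s at 500 C
print(f"Mn/Ni in GaAs: {diffusion_length_nm(1e-10, t):.0f} nm")  # ~6000 nm (6 um)
# Typical main group species: D ~ 1e-15 to 1e-13 cm^2/s
print(f"main group:    {diffusion_length_nm(1e-14, t):.0f} nm")  # ~60 nm
```

A micron-scale diffusion length dwarfs typical film thicknesses of tens of nanometers, which is why direct high-temperature growth of Ni- or Mn-containing Heuslers on III-Vs requires the diffusion barrier, templating, or low temperature seed strategies described below.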
\begin{figure}[h]
\includegraphics[width=3.5in]{diffusion.pdf}
\caption{Strategies for making near atomically abrupt and stable interfaces. (a) Epitaxial $B1$ (rocksalt) diffusion barrier between the Heusler film and III-V substrate. One example diffusion barrier is ErAs, where $X'=$ Er and $Z'=$ As. (b) Epitaxial $B2$ (cesium chloride) chemical templating layer. Example NiGa, where $Y'=$ Ni and $Z'=$ Ga. (c) Low temperature seed layer, which minimizes interdiffusion but typically results in $B2$-type disorder at the interface. (d-f) Example of the low temperature seed layer approach. Reprinted figure with permission from A. Rath \textit{et. al.}, Phys. Rev. B 97, 045304 (2018) \cite{rath2018reduced}. Copyright 2018 by the American Physical Society. (d) Cross sectional high angle annular dark field - scanning transmission electron microscopy (HAADF-STEM) image of the interface between MnCo$_2$Si (also known as Co$_2$MnSi, CMS) and GaAs (001). Within 5 nm of the interface the CMS seed has disordered $B2$ structure, while the top region shows the fully ordered $L2_1$ structure. For this sample the growth temperature was held constant at $270 ^\circ$C. (e,f) Fast Fourier transform of the regrowth and seed regions.}
\label{diffusion}
\end{figure}
One solution is to grow epitaxial diffusion barriers between the Heusler film and III-V substrate (Fig. \ref{diffusion}a). The rare earth monopnictides ($RV$, $R=$ rare earth, $V=$ As, Sb, Bi) are highly effective diffusion barriers for group III and transition metal species \cite{schultz2002eras, palmstrom1993stable}. These materials have cubic rocksalt ($B1$) structure and can be lattice matched by alloying on the rare earth site. Examples include ErAs, Sc$_x$Er$_{1-x}$As, ErSb, and GdSb, which have enabled the epitaxial growth of a variety of intermetallic films on III-Vs at temperatures up to $600^\circ$C \cite{schultz2002eras, buschbeck2011growth, palmstrom2016heusler, palmstrom1993stable}. However, the rare earth monopnictides are generally metallic, magnetic, and require a finite thickness of at least three atomic layers to be effective diffusion barriers. Hence they are not suitable when a direct Heusler/III-V interface is required.
Another approach is to grow thermodynamically stable, chemical templating layers \cite{palmstrom1993stable, filippou2018chiral} (Fig. \ref{diffusion}b). $B2$ interlayers (cesium chloride structure) are good templates for full Heusler growth, since these two structures are ordered variants of one another. Starting from the cubic $B2$ structure, whose basis consists of $Z (Z')$ at the origin and $Y (Y')$ at the body center, the full Heusler $L2_1$ structure is obtained by replacing every other $Z$ site with $X$. One example is to use a $B2$ NiGa interlayer to seed the growth of MnNi$_2$Ga on GaAs (001). NiGa is thermodynamically stable in contact with GaAs \cite{dong2000epitaxial}, thus minimizing the interdiffusion. $B2$ templating can also enhance the $c$-axis ordering in Heuslers \cite{filippou2018chiral}, since the [001] stacking sequence of a $B2$ crystal with composition $Y'Z'$ consists of alternating atomic planes of $Y'$ and $Z'$. This template enhances the $c$ axis ordering of the subsequent Heusler film, due to the local bonding preference of $Y$ on $Z'$ and $XZ$ on $Y'$. However, like the rocksalt $B1$ diffusion barriers, $B2$ template layers are often metallic and require a finite thickness, and are also not suitable when a direct Heusler/III-V interface is required.
For direct Heusler/III-V interfaces or for interfaces between two different Heusler compounds, low temperature seed layers are the method of choice \cite{palmstrom2016heusler, farshchi2013spin} (Fig. \ref{diffusion}c). This strategy consists of nucleating several unit cells of Heusler film at low temperature ($< 300 ^\circ$C) to minimize interdiffusion during the formation of the interface \cite{hashimoto2007atomic, kawasaki_cotisb}. The seed can then be annealed and growth resumed at higher temperatures ($\sim 500^\circ$C) to improve the degree of $L2_1$ ordering \cite{kawasaki_cotisb, hirohata2005structural, hashimoto2007atomic, farshchi2013spin, palmstrom2016heusler}. This strategy relies on the fact that bulk diffusion is generally much slower than surface diffusion during growth. Once the interface is formed at low temperature, interdiffusion is suppressed for subsequent anneals compared to direct growth at higher temperatures, as inferred by reflection high energy electron diffraction and x-ray diffraction \cite{hirohata2005structural, kawasaki_cotisb} or by device performance metrics such as the resistance-area product of a magnetoresistance junction \cite{kubota2017current}. Direct measurements of the interdiffusion, e.g. by Rutherford Backscattering Spectrometry or STEM-EELS, are needed to fully quantify these effects as a function of post-growth anneal temperature.
For Heusler/III-V \cite{rath2018reduced, nedelkoski2016realisation} and Heusler/Heusler \cite{brown2018epitaxial} interfaces formed by low temperature seeds, the chemical intermixing is typically limited to a few atomic layers (Fig. \ref{diffusion}d) \cite{rath2018reduced}. However, due to the low surface diffusion at low temperatures, the seed layer often crystallizes in the disordered $B2$ structure, in which $X$ and $Z$ sites are indistinguishable, rather than the ordered full Heusler $L2_1$ \cite{farshchi2013spin, rath2018reduced}. The effects of such disorder on properties can vary significantly depending on the particular compound and desired property \cite{orgassa2000disorder,farshchi2013spin, palmstrom2016heusler, wang2005magnetic,inomata2008site}. Low temperature growth also impedes the ability to control stoichiometry and point defects, which are better controlled under high temperature, adsorption-controlled growth regimes \cite{kawasaki2018simple, patel2014surface, bach, turban, strohbeen2019electronically}.
Even for highly controlled chemical abruptness, thermodynamic driving forces can cause interfacial layer relaxations, atomic reconstructions, and even layer rearrangements. An extreme example is the MnCo$_2$Si / GaAs (001) interface. The bulk (001) atomic stacking sequence of MnCo$_2$Si (also known as Co$_2$MnSi) consists of alternating atomic layers of MnSi and CoCo; however, photoemission spectroscopy measurements reveal that this interface tends to be Mn and As-rich, independent of whether the MnCo$_2$Si growth on As-terminated GaAs is initiated with a monolayer of MnSi or CoCo \cite{palmstrom2016heusler, patelthesis}. Such atomic layer rearrangements are not unique to Heuslers, for example, they are also observed in layered perovskite oxides \cite{nie2014atomically, lee2014dynamic}.
The strong thermodynamic driving forces place constraints on what interfaces can be synthesized, which is an important consideration since interface electronic states, half metallicity, charge transfer, and other interfacial properties can be highly sensitive to the interface termination \cite{picozzi2007polarization}. Feedback from theory is crucial for identifying which types of interfaces are both stable and host the desired property \cite{curtarolo2003predicting, sun2016thermodynamic, curtarolo2005accuracy, zunger2018realization}. A significant challenge is that interfaces have reduced symmetry and increased atomic degrees of freedom, compared to the bulk. Given this potential complexity, it is often not practical to perform first-principles calculations for all possible interface atomic structures. There are too many candidate structures, and the large size of reconstructed slabs makes first-principles approaches computationally expensive. Simple models based on electron counting have recently been developed to guide the screening of stable structures at surfaces \cite{kawasaki2018simple}, which can be down-selected for more accurate first-principles calculations. I anticipate that their generalization may make the interface problem more tractable.
\section*{Outlook}
Heusler compounds are a remarkable family of interfacial materials, whose broad range of tunable properties is highly complementary to that of the well-studied transition metal oxides. These compounds are lattice-matched to technologically important semiconductor substrates, making them poised for impact in spintronics and beyond. I conclude with a few remarks on the role of theory and experiments going forward.
\textbf{Theory.} To date, theory has done an excellent job at screening for target properties in the bulk \cite{oliynyk2016high, carrete2014finding, garrity2014pseudopotentials, bennett2012hexagonal, garrity2014hyperferroelectrics, roy2012half, anand2019enormous, sanvito2017accelerated} and predicting emergent properties at idealized interfaces, both at the level of first-principles DFT calculations \cite{chadov2010tunable, lin2010half, picozzi2007polarization, picozzi2004role, zhu2015surface, narayan2015class} and model Hamiltonians \cite{timm2017inflated, brydon2016pairing, venderbos2018pairing}. Can such predictions be modified to account for more realistic structural distortions at Heusler interfaces, including relaxations, reconstructions, and point defects? Additionally, can theory aid in identifying which of these compounds and interfaces are thermodynamically stable, or more relevantly, ``stabilizable?'' New theoretical approaches are beginning to consider the path-dependent ``remnant'' metastability of bulk compounds \cite{sun2016thermodynamic}, to identify which compounds have local minima in the free energy landscape that lie not too far above the convex hull \cite{aykol2018thermodynamic, curtarolo2003predicting, curtarolo2005accuracy, zunger2018realization}, and guide possible synthesis routes \cite{sun2016thermodynamic, chen2018understanding}. To what extent can these concepts be applied to Heuslers, and in particular, Heusler interfaces?
\textbf{Experiments.} Heusler compounds today are comparable to semiconductors in the early 20$^{th}$ century. Although field effect transistors were first proposed in the 1920s and 1930s \cite{edgar1930method, edgar1933device, heil1935improvements}, the first experimental demonstrations of point contact transistors \cite{PhysRev.74.230} and field effect transistors \cite{arns1998other, dawon1963electric} were not made until the late 1940s and 1950s. These discoveries were made possible by two major materials and interface innovations: (1) zone refining of germanium and silicon to reduce the background impurities, and (2) methods to prepare clean semiconductor / oxide interfaces, free of trapped charges.
Heusler compounds today are at a similar stage of development: a number of exotic phenomena have been predicted, but their full realization will likely require new advances in materials synthesis and interface control. In this Perspective I outlined a few of the key synthetic challenges and potential solutions. I look forward to the development of new feedback control methods during growth, new chemical precursors for self-limiting stoichiometry, and new methods to probe the properties of buried interfaces. Beyond the significant advances in Heusler spintronics, the broader field of Heusler interfaces is at a stage of relative infancy. I anticipate that the most exotic and impactful properties of Heusler interfaces have yet to be unleashed.
\section{Acknowledgments}
I thank Chris J. Palmstr{\o}m for insightful feedback throughout the preparation of this manuscript and for his mentorship. I also thank Darrell G. Schlom and Jochen Mannhart \cite{mannhart2010oxide} for inspiring the title. I thank Richard D. James, Anderson Janotti, Chang-Beom Eom, Darrell G. Schlom, and Paul M. Voyles for their feedback, fruitful discussions, and collaborations.
This work was primarily supported by the CAREER program of the National Science Foundation (NSF DMR-1752797) and by the Young Investigator Program of the Army Research Office (ARO W911NF-17-1-0254). Additional support came from the NSF via the University of Wisconsin Materials Research Science and Engineering Center (MRSEC, DMR-1720415) and the Wisconsin Alumni Research Foundation (WARF).
\bibliographystyle{apsrev}
1,477,468,751,027 | arxiv | \section{Introduction}
The so-called deterministic compartmental dynamical systems are the simplest among the models of epidemiological dynamics, and a large number of them have been recently considered in relation to the COVID-19 pandemic (see, for example, \cite{MB2020science,VLABHPRYV2020,CG2020size} and references therein). The study of these models typically relies on techniques from dynamical systems theory and numerical studies but, although these techniques allow a deep understanding of the associated dynamics, the simplicity and accuracy provided by exact solutions are indeed helpful both from the mathematical and the epidemiological perspectives (see for instance~\cite{Hethcote1973,Bailey1975,Nucci2004} for the exact solutions of the two-dimensional SIS (susceptible-infective-susceptible) model).
Among three-dimensional models, the well-known SIR (susceptible-infective-recovered) system
\begin{equation}
\label{eq:SIR}
\dot{S} = -\beta\,S\,I, \qquad\qquad\qquad \dot{I} = \beta\,S\,I-\alpha \,I, \qquad\qquad\qquad \dot R = \alpha I , \\
\end{equation}
proposed by Kermack and McKendrick \cite{KM1927sir} is probably the best known one. Despite its apparent simplicity, it has been successfully used to predict relevant features of the dynamics of a number of epidemics, including the current COVID-19 pandemic \cite{Buonomo2020,Postnikov2020sir}. Therefore, the study of exact solutions for this system has been approached from several perspectives:
Painlev\'e analysis and Lie symmetries~\cite{LA2004}, parametric-form solutions~\cite{Harko2014}, asymptotic approximants~\cite{Barlow2020}, Hamiltonian structures~\cite{BBG2020hamiltonianepidemics} and time reparametrization~\cite{Cadoni2020}.
Nevertheless, all these approaches lead to solutions which are either perturbative or given in terms of one implicit (or inverse) function. In this sense it can be said that the system \eqref{eq:SIR} does not admit an `exact analytic solution in closed form', {\em i.e.} a solution that can be expressed in terms of a finite number of ordinary operations among elementary functions.
In contradistinction to this fact, in this paper we present the exact analytical solution in closed form of the modified SIR system~\cite{ Brauer1990}
\begin{equation}
\label{eq:modifiedSIR}
\dot{S} = -\dfrac{\beta\,S\,I}{S+I}, \qquad\qquad\qquad \dot{I} = \dfrac{\beta\,S\,I}{S+I}-\alpha \,I , \qquad\qquad\qquad \dot{R} = \alpha\,I ,
\end{equation}
where $\alpha, \beta \in \mathbb R_+$. We show that, surprisingly enough, the general solution of this modified SIR system is given in terms of generalized logistic and exponential functions, namely
\begin{equation}
\label{eq:sol}
S(t) = S_0 \left( \frac{S_0+I_0}{S_0+ I_0 e^{t/\tau}} \right)^{\beta \tau} , \qquad I(t) = I_0 \left( \frac{S_0 + I_0}{S_0 + I_0 e^{t/\tau}} \right)^{\beta \tau} e^{t/\tau}, \qquad R(t) = 1 - \frac{(S_0 + I_0)^{\beta \tau}}{(S_0+I_0 e^{t/\tau})^{\beta \tau -1}},
\end{equation}
where $\tau=(\beta-\alpha)^{-1}$ and the total population has been normalized to one, so that the expression for $R(t)$ corresponds to initial data with $S_0+I_0=1$ and $R_0=0$. This is, to the best of our knowledge, the first exact solution of a three-dimensional compartmental epidemiological model in closed form. We will also analyze the modified SIR model from a dynamical systems perspective and show that, in some range of the model parameters $(\alpha, \beta)$, the dynamics of \eqref{eq:modifiedSIR} is actually quite close to that of the SIR system \eqref{eq:SIR}.
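The solution \eqref{eq:sol} can also be checked numerically. The following sketch (in Python; it is not part of the derivation, and the parameter values $\alpha=0.1$, $\beta=0.3$ and the initial data are arbitrary illustrative choices) compares a centred finite difference of the closed-form expressions with the right-hand side of \eqref{eq:modifiedSIR}:

```python
import math

def modified_sir_solution(t, S0, I0, alpha, beta):
    """Closed-form (S, I, R) of the modified SIR model, eq. (3),
    assuming beta != alpha and a total population normalized to one."""
    tau = 1.0 / (beta - alpha)
    den = S0 + I0 * math.exp(t / tau)
    S = S0 * ((S0 + I0) / den) ** (beta * tau)
    I = I0 * ((S0 + I0) / den) ** (beta * tau) * math.exp(t / tau)
    R = 1.0 - (S0 + I0) ** (beta * tau) / den ** (beta * tau - 1.0)
    return S, I, R

def rhs(S, I, alpha, beta):
    """Right-hand side of the modified SIR system, eq. (2)."""
    inc = beta * S * I / (S + I)  # incidence term beta*S*I/(S+I)
    return -inc, inc - alpha * I, alpha * I

alpha, beta = 0.1, 0.3            # illustrative parameter values
S0, I0 = 0.99, 0.01
h = 1e-6                          # step for centred finite differences
for t in [0.5, 5.0, 20.0]:
    Sp, Ip, Rp = modified_sir_solution(t + h, S0, I0, alpha, beta)
    Sm, Im, Rm = modified_sir_solution(t - h, S0, I0, alpha, beta)
    S, I, R = modified_sir_solution(t, S0, I0, alpha, beta)
    dS, dI, dR = rhs(S, I, alpha, beta)
    assert abs((Sp - Sm) / (2 * h) - dS) < 1e-6
    assert abs((Ip - Im) / (2 * h) - dI) < 1e-6
    assert abs((Rp - Rm) / (2 * h) - dR) < 1e-6
print("closed-form expressions satisfy the modified SIR equations")
```

The same check works for any $\beta \neq \alpha$; the degenerate case $\beta=\alpha$ requires the limit $\tau \to \infty$ and is not covered by this sketch.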
We recall that the modified SIR model~\eqref{eq:modifiedSIR} has been proposed \cite{Brauer1990,Gumral1993} as the appropriate generalization of the SIR model when the recovered individuals are removed from the population. Therefore, although this model is not directly applicable to the COVID-19 pandemic, it will be certainly meaningful for the study of diseases with a very high death rate (the so-called `fatal diseases' like, for instance, the Feline Infectious Peritonitis (FIP), Bovine Spongiform Encephalopathy (BSE), Leishmaniasis, Rabbit Haemorrhagic Disease, and the Highly Pathogenic Avian Influenza (H5N1)~\cite{Keeling}) or also in cases when very prolonged quarantines are prescribed.
Moreover, the modified SIR model with a time-dependent transmission rate $\beta(t)$ will be also shown to be exactly solvable in closed form for certain realistic $\beta(t)$ functions, and its solutions will be compared with the constant rate system.
It is worth stressing that these time-dependent transmission rate models are well-known to be the appropriate ones for the analysis of modifications of the transmission rate of a given disease, which can be due, for instance, to seasonal effects, to changes in host behaviour or immunity or to modifications in the abundance of vectors/pathogens (see for instance~\cite{Keeling, BacaerGomes, Ponciano, Liu, Pollicott, Mummert} and references therein). As a consequence, $\beta(t)$ models are indeed of the utmost interest for the planning of non-pharmacological control strategies and, again, the existence of exact closed-form solutions for any of them was lacking.
The structure of the paper is the following. In the next Section we derive the exact solution~\eqref{eq:sol} by making use of the fact that any three-dimensional epidemiological model has a conserved quantity, which in turn is straightforwardly derived from the more general result (recently proved in \cite{BBG2020hamiltonianepidemics}) stating that any three-dimensional compartmental epidemiological model is a generalized Hamiltonian system. Moreover, the conserved quantity turns out to be just the Casimir of the Poisson algebra of the underlying Hamiltonian structure. In Section 3 we present the analysis of the modified SIR system~\eqref{eq:modifiedSIR} both from a dynamical systems approach and from a Poisson-algebraic point of view, and we show that the exact solution~\eqref{eq:sol} is helpful in order to obtain some relevant epidemiological quantities in a simple and exact form. The main differences between the SIR and modified SIR dynamical systems are analyzed in Section 4, where we show that these differences can be understood in terms of the zeroes of their respective conserved quantities, which are again the Casimir functions for both models that are obtained through the formalism presented in \cite{BBG2020hamiltonianepidemics}. Finally, in Section 5 we consider the generalization of the modified SIR system \eqref{eq:modifiedSIR} when the transmission rate $\beta (t)$ varies with time. We shall show two instances of transmission rates for which the model is explicitly integrable in closed form, and the numerical solution of the model with periodic transmission rate will also be presented.
\section{Exact solution of the modified SIR model}
In order to find the exact solution of \eqref{eq:modifiedSIR} we make use of the following recent result (see \cite{BBG2020hamiltonianepidemics} for details).
\begin{proposition} \cite{BBG2020hamiltonianepidemics}
\label{prop:EH}
Every epidemiological compartmental model with constant population is a generalized Hamiltonian system, with Hamiltonian function $\mathcal{H}$ given by the total population.
\end{proposition}
For the system \eqref{eq:modifiedSIR} the generalized Hamiltonian structure is thus explicitly provided by the Hamiltonian function
\begin{equation}
\label{eq:H}
\mathcal{H}=S+I+R,
\end{equation}
together with the associated Poisson structure, which is found to be given by the fundamental brackets
\begin{equation}
\label{eq:Poisson}
\{S,I\}=0, \qquad \{S,R\}=-\dfrac{\beta\,S\,I}{S+I}, \qquad \{I,R\}=\dfrac{\beta\,S\,I}{S+I}-\alpha \,I \, ,
\end{equation}
and leads to the system~\eqref{eq:modifiedSIR} through Hamilton's equations
\begin{equation}
\label{eq:Hamilton_eqs}
\dot S = \{S,\mathcal H \} , \qquad
\dot I = \{I,\mathcal H \} , \qquad
\dot R = \{R,\mathcal H \} .
\end{equation}
Since every three-dimensional Poisson structure has a Casimir function $\mathcal{C}$, i.e. a function $\mathcal C : \mathcal U \subseteq \mathbb R^3 \to \mathbb R$ such that $\{S,\mathcal C \} = \{I,\mathcal C \} = \{R,\mathcal C \}=0$, then $\mathcal{C}$ is a conserved quantity for any generalized Hamiltonian system~\eqref{eq:Hamilton_eqs} defined on such a Poisson manifold. Therefore:
\begin{corollary}
\label{prop:}
Every three-dimensional epidemiological compartmental model with constant population has a conserved quantity, which is functionally independent of the Hamiltonian function.
\end{corollary}
Note that if $\mathcal{H}$~\eqref{eq:H} were functionally dependent on $\mathcal{C}$, the dynamics~\eqref{eq:Hamilton_eqs} would be trivial. For the specific Poisson algebra \eqref{eq:Poisson} the Casimir function is found to be
\begin{equation}
\label{eq:C}
\mathcal{C}=S^{-\frac{\alpha}{\beta}}(S+I) .
\end{equation}
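Conservation of $\mathcal{C}$ along the flow of \eqref{eq:modifiedSIR} can be illustrated with a short numerical experiment (a Python sketch with arbitrarily chosen parameters; the Runge-Kutta integrator does not preserve $\mathcal C$ exactly, but the drift should remain at the level of the integration error):

```python
import math

def rhs(state, alpha, beta):
    """Vector field of the modified SIR system."""
    S, I, R = state
    inc = beta * S * I / (S + I)  # incidence term beta*S*I/(S+I)
    return (-inc, inc - alpha * I, alpha * I)

def rk4_step(state, dt, alpha, beta):
    """One classical fourth-order Runge-Kutta step."""
    def shift(u, k, c):
        return tuple(ui + c * ki for ui, ki in zip(u, k))
    k1 = rhs(state, alpha, beta)
    k2 = rhs(shift(state, k1, dt / 2), alpha, beta)
    k3 = rhs(shift(state, k2, dt / 2), alpha, beta)
    k4 = rhs(shift(state, k3, dt), alpha, beta)
    return tuple(u + dt / 6 * (a + 2 * b + 2 * c + d)
                 for u, a, b, c, d in zip(state, k1, k2, k3, k4))

def casimir(S, I, alpha, beta):
    """Casimir C = S^(-alpha/beta) * (S + I) of the Poisson structure."""
    return S ** (-alpha / beta) * (S + I)

alpha, beta = 0.1, 0.3            # illustrative parameter values
state = (0.99, 0.01, 0.0)
c0 = casimir(state[0], state[1], alpha, beta)
for _ in range(3000):             # integrate up to t = 30 with dt = 0.01
    state = rk4_step(state, 0.01, alpha, beta)
drift = abs(casimir(state[0], state[1], alpha, beta) - c0)
assert drift < 1e-6               # conserved up to integration error
print("Casimir drift after t = 30:", drift)
```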
We can use this Casimir function to restrict the dynamics of \eqref{eq:modifiedSIR} to the symplectic leaf defined by the value of $\mathcal{C}$ given by the initial conditions $S(0)=S_0$, $I(0)=I_0$, $R(0)=R_0$, namely
\begin{equation}
\label{eq:C0}
\mathcal C_0=S_0^{-\frac{\alpha}{\beta}}(S_0+I_0) \, .
\end{equation}
This can be also used in order to reduce the system \eqref{eq:modifiedSIR} to a nonlinear ODE, since from \eqref{eq:C} and \eqref{eq:C0} we obtain the phase space equation
\begin{equation}
\label{eq:I_S}
I(S) = (S_0+I_0) \left( \frac{S}{S_0} \right)^{\frac{\alpha}{\beta}} - S ,
\end{equation}
which can be inserted within \eqref{eq:modifiedSIR} in order to get the following nonlinear ODE for the variable $S$:
\begin{equation}
\dot S = - \beta S \left( 1- \frac{S_0^{\alpha/\beta}}{S_0+I_0} S^{1-\alpha/\beta} \right) .
\end{equation}
This ODE suggests the change of variable
\begin{equation}
\label{eq:AS}
A(t) = S(t)^{1-\alpha/\beta} ,
\end{equation}
thus obtaining
\begin{equation}
\dot A = - (\beta - \alpha) A \left( 1- \frac{S_0^{\alpha/\beta}}{S_0+I_0} A \right) .
\end{equation}
If we now set
\begin{equation}
\label{eq:BA}
B(t) = \frac{S_0^{\alpha/\beta}}{S_0+I_0} A (t) ,
\end{equation}
we obtain
\begin{equation}
\label{eq:ODE_B}
\dot B = - (\beta - \alpha) B \left( 1- B \right) .
\end{equation}
The general solution to this ODE is a logistic function with characteristic time $\tau = (\beta-\alpha)^{-1}$, i.e.
\begin{equation}
B(t) = \frac{1}{1+e^{(\beta-\alpha)t +d}} = \frac{1}{1+e^{t / \tau+d}} .
\end{equation}
The integration constant $d$ is fixed by the initial condition $B(0)=\frac{S_0}{S_0+I_0}$, thus obtaining $e^d=\frac{I_0}{S_0}$. Therefore we can write
\begin{equation}
B(t) = \frac{1}{1+ \frac{I_0}{S_0} e^{t/\tau}} \, .
\end{equation}
Now, inverting the change of variables \eqref{eq:BA} we get
\begin{equation}
A(t) = \frac{(S_0+I_0)S_0^{1/\beta\tau}}{S_0+ I_0 e^{t/\tau}} ,
\end{equation}
and finally, from \eqref{eq:AS}, we obtain
\begin{equation}
\label{eq:St}
S(t) = S_0 \left( \frac{S_0+I_0}{S_0+ I_0 e^{t/\tau}} \right)^{\beta \tau} .
\end{equation}
From the phase space equation \eqref{eq:I_S} we directly get
\begin{equation}
\label{eq:IS}
I(S) = (S_0+I_0) \left( \frac{S}{S_0} \right)^{\alpha/\beta} - S ,
\end{equation}
and inserting \eqref{eq:St} we are able to obtain $I(t)$ without any further integration. Finally, we have that
\begin{equation}
\label{eq:It}
I(t) = I_0 \left( \frac{S_0 + I_0}{S_0 + I_0 e^{t/\tau}} \right)^{\beta \tau} e^{t/\tau} .
\end{equation}
Note that $I(t)$ is related to $S(t)$ by
\begin{equation}
\label{eq:ISexp}
I(t) = \frac{I_0}{S_0} S(t) e^{t/\tau} \, ,
\end{equation}
and from the conservation of the total population, we find
\begin{equation}
\label{eq:Rt}
R(t) = 1 -S(t) - I(t) =1 - \frac{(S_0 + I_0)^{\beta \tau}}{(S_0+I_0 e^{t/\tau})^{\beta \tau -1}} .
\end{equation}
Summarizing, equation \eqref{eq:St} shows that the susceptible population follows a generalized logistic function, or Richards' curve, with characteristic time $\tau$ and the relevant constants chosen so that $S(0) = S_0$ and $\lim_{t \to \infty} S(t) = 0$. Moreover, the dynamics of the infective population given by \eqref{eq:It} is essentially this same function multiplied by an exponential with the same characteristic time. This is indeed a very natural dynamics for infective processes and, as we will see in the sequel, this dynamics strongly resembles the one described by the famous SIR model \eqref{eq:SIR}, provided that the range of values for the parameters $\alpha$ and $\beta$ is similar to the one found in actual epidemics.
\vspace{-0.5cm}
\paragraph{Remark 1}
It is worth stressing that the method here presented is indeed applicable to any three-dimensional compartmental model, provided we are able to find the Casimir function of the associated Poisson structure. Nevertheless, the distinctive feature of the system \eqref{eq:modifiedSIR} is that the resulting ODE admits a closed-form solution. We recall that in \cite{BBG2020hamiltonianepidemics} such a Casimir function approach was used in order to find the solution for some epidemiological models in terms of an inverse function.
\vspace{-0.5cm}
\paragraph{Remark 2}
Solution~\eqref{eq:sol} suggests that the new variable $y=I/S$ is worth considering, since $y(t)$ is an exponential function. In fact, by taking $(S,y,R)$ as new dynamical variables, the modified SIR system reads
\begin{equation}
\label{eq:MSIRy}
\dot{S} = -\frac{\beta}{1+y}\,S\,y, \qquad\qquad\qquad \dot{y} = (\beta-\alpha) \,y, \qquad\qquad\qquad \dot R = \alpha\,S\,y , \\
\end{equation}
in which the equation for $y$ is linearized, as expected. Note that the original SIR system is written in terms of $y$ as
\begin{equation}
\label{eq:SIRy}
\dot{S} = -\beta\,S^2\,y, \qquad\qquad\qquad \dot{y} = (\beta\,S-\alpha) \,y
+ \beta\,S\,y^2, \qquad\qquad\qquad \dot R = \alpha\,S\,y , \\
\end{equation}
which is quite different from~\eqref{eq:MSIRy} as a dynamical system, as it will be shown in Section 4.
\section{Analysis of the modified SIR dynamics}
In this Section we briefly analyze the main features of the modified SIR system \eqref{eq:modifiedSIR}. Without any loss of generality we can assume that $R_0=0$, so $S_0 + I_0 =1$, and the solution of \eqref{eq:modifiedSIR} reads
\begin{equation}
\label{eq:solsimply}
S(t) = \frac{S_0}{\left( S_0+ I_0 e^{t/\tau} \right)^{\beta \tau} } , \qquad\qquad I(t) = \frac{I_0 \, e^{t/\tau}}{\left( S_0 + I_0 e^{t/\tau} \right)^{\beta \tau}} , \qquad\qquad R(t) = 1 - \frac{1}{ \left( S_0+I_0 e^{t/\tau} \right)^{\beta \tau -1}} .
\end{equation}
As we have previously stated, the behavior of $S(t)$ is that of a generalized logistic function, while the evolution of the infective population $I(t)$ is given by a generalized logistic function times an exponential. This means that, since $\beta \tau > 1$, the logistic term dominates for large times and therefore $\lim_{t \to \infty} I(t) = 0$. However, during the first stage of the outbreak the exponential term is the dominating one, and thus the model presents the characteristic infection peak (for appropriate values of the parameters $\alpha$ and $\beta$). The behavior of the functions $S(t)$ and $I(t)$ for different values of $\alpha$ and $\beta$ is shown in Figure \ref{fig:comp_traj}.
A fundamental question to be answered by any epidemiological model is whether, for given values of the parameters, there will be an outbreak. For the modified SIR system we see, simply by evaluating the second equation from \eqref{eq:modifiedSIR} at $t=0$,
\begin{equation}
\frac{d}{dt}\bigg |_{t=0} I(t) = I_0 \left( \frac{\beta S_0}{S_0 + I_0} - \alpha \right) ,
\end{equation}
and the outbreak will exist if and only if $\beta S_0 > \alpha (S_0 + I_0)$, or equivalently
\begin{equation}
\label{eq:peak_cond}
\frac{\beta}{\alpha}-1 > \frac{I_0}{S_0} ,
\end{equation}
which in the case $S_0 + I_0 =1$ means that
\begin{equation}
\label{eq:peak_cond2}
S_0 > \frac{\alpha}{\beta}.
\end{equation}
Obviously, this same result can be obtained by checking the condition for which $I(t)$ has a maximum. Moreover, the analytic solution allows us to determine exactly the time $t_{\text{peak}}$ at which the infection peak is reached, and we obtain
\begin{equation}
\label{eq:tpeak}
t_{\text{peak}} = \tau \log \left( \frac{S_0}{I_0 (\beta \tau - 1)} \right) = \tau \log \left( \frac{S_0}{I_0} \left( \frac{\beta}{\alpha} - 1 \right) \right) \, ,
\end{equation}
which is positive if and only if \eqref{eq:peak_cond} holds. The fraction of infected population at the infection peak reads
\begin{equation}
\label{eq:Ipeak}
I(t_{\text{peak}}) = \left( \frac{\beta \tau -1}{S_0} \right)^{\beta \tau - 1} \left( \frac{S_0 + I_0}{\beta \tau} \right)^{\beta \tau} =
S_0 \left( \frac{\beta}{\alpha}-1 \right) \left( \frac{\alpha}{\beta} \left( 1+\frac{I_0}{S_0} \right) \right)^{\beta/(\beta-\alpha)} .
\end{equation}
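Both \eqref{eq:tpeak} and \eqref{eq:Ipeak} are easy to validate against the closed-form curve $I(t)$. The sketch below (illustrative parameter values with $S_0 > \alpha/\beta$, so that an outbreak occurs; it uses the second form of \eqref{eq:Ipeak}) checks that the analytic peak is indeed the maximum of $I(t)$:

```python
import math

def infectives(t, S0, I0, alpha, beta):
    """Closed-form I(t) of the modified SIR model, with S0 + I0 = 1."""
    tau = 1.0 / (beta - alpha)
    return I0 * math.exp(t / tau) / (S0 + I0 * math.exp(t / tau)) ** (beta * tau)

alpha, beta = 0.1, 0.3            # illustrative values (outbreak: S0 > alpha/beta)
S0, I0 = 0.99, 0.01
tau = 1.0 / (beta - alpha)

t_peak = tau * math.log(S0 / I0 * (beta / alpha - 1.0))
I_peak = S0 * (beta / alpha - 1.0) * (
    alpha / beta * (1.0 + I0 / S0)) ** (beta / (beta - alpha))

# The analytic peak must coincide with the maximum of the closed-form curve.
assert abs(infectives(t_peak, S0, I0, alpha, beta) - I_peak) < 1e-12
assert infectives(t_peak - 0.1, S0, I0, alpha, beta) < I_peak
assert infectives(t_peak + 0.1, S0, I0, alpha, beta) < I_peak
print("t_peak =", t_peak, " I(t_peak) =", I_peak)
```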
A relevant epidemiological quantity is the well-known basic reproduction number $\mathcal R_0 $, defined as the average number of secondary cases produced by one infected individual introduced into a population of susceptible individuals during the mean infectious time $T$ (see~\cite{Hethcote1976,VandenDriessche2017,Hethcote2000} and references therein). It is easy to see that the value of $\mathcal R_0$ for the modified SIR model \eqref{eq:modifiedSIR} is exactly the same as the $\mathcal R_0$ for the SIR model \eqref{eq:SIR}, \emph{i.e.} $\mathcal R_0 = \beta / \alpha$ for both models. Note that this is in full agreement with \eqref{eq:peak_cond} in the sense that when $\mathcal R_0 > 1$ the infection survives, but when $\mathcal R_0 < 1$ the infection spontaneously disappears (see Section 4 for a more careful analysis of the fixed-point structure of both models).
The identification between basic reproduction numbers for both models is a direct consequence of the fact that, for initial conditions $S_0 \approx 1$ and $I_0 \approx 0$, the early dynamics of systems \eqref{eq:SIR} and \eqref{eq:modifiedSIR} are similar. In more detail, it is well-known (see for instance \cite{VandenDriessche2017}) that the initial dynamics of the SIR model under such initial conditions is given by
\begin{equation}
I(t) \approx I_0 e^{\alpha (\mathcal R_0 - 1) t} ,
\end{equation}
while for the modified SIR the closed-form solution \eqref{eq:solsimply} shows that
\begin{equation}
I(t) = \frac{I_0 \, e^{t/\tau}}{\left( S_0 + I_0 e^{t/\tau} \right)^{\beta \tau}} \approx I_0 \, e^{t/\tau} = I_0 e^{\alpha (\mathcal R_0 - 1) t},
\end{equation}
where we have used that $I_0 \ll S_0$, $S_0 \approx 1$ and $t \ll \tau$.
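This early-time agreement is easy to quantify with the closed-form solution. In the following sketch (parameter values, time window and tolerance are arbitrary illustrative choices) the exact $I(t)$ and the exponential approximation $I_0\, e^{\alpha(\mathcal R_0-1)t}$ agree to within a couple of percent while $I_0 e^{t/\tau} \ll S_0$:

```python
import math

def infectives(t, S0, I0, alpha, beta):
    """Closed-form I(t) of the modified SIR model, with S0 + I0 = 1."""
    tau = 1.0 / (beta - alpha)
    return I0 * math.exp(t / tau) / (S0 + I0 * math.exp(t / tau)) ** (beta * tau)

alpha, beta = 0.1, 0.3
S0, I0 = 0.999, 0.001             # almost fully susceptible population
basic_R0 = beta / alpha
for t in [0.0, 1.0, 2.0, 5.0, 10.0]:
    exact = infectives(t, S0, I0, alpha, beta)
    approx = I0 * math.exp(alpha * (basic_R0 - 1.0) * t)
    # the two expressions agree closely while I0*exp(t/tau) << S0
    assert abs(exact - approx) / exact < 0.02
print("early-time growth matches I0*exp(alpha*(R0-1)*t)")
```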
Moreover, the closed-form solution \eqref{eq:solsimply} allows the computation of a generalization of the basic reproduction number, the so-called replacement number $\mathcal R (t)$ \cite{Hethcote2000}. The function $\mathcal R (t)$ is defined as the average number of secondary cases produced by one infected individual during the mean infectious time $T$, where the infected individual is introduced in a population that is in an arbitrary state of the infection outbreak. In our case, since the rate $r$ of secondary infections is given by the term ${\beta\,S\,I}/({S+I})$ in~\eqref{eq:modifiedSIR}, taking into account that $T=1/\alpha$, we obtain
\begin{equation}
\label{eq:rn}
{\cal{R}}(t)=\frac{r\, T}{I}=\frac{\beta}{\alpha} \left( \dfrac{S}{S+I} \right)=\frac{\beta}{\alpha} \,\left( \frac{1}{1+ \frac{I_0}{S_0} e^{t/\tau}} \right)
\, .
\end{equation}
It is clear that $\mathcal R_0 > \mathcal R (t)$ for all $t \in \mathbb R$. Moreover, $\mathcal R_0$ is given by~\eqref{eq:rn} when $t \ll \tau$ and $I_0 \ll S_0$. In fact, we could also say that
\begin{equation}
\mathcal R_0 = \lim_{(S,I) \to (1,0)} \mathcal R (t) = \lim_{t \to -\infty} \mathcal R (t) = \frac{\beta}{\alpha} \, ,
\end{equation}
where this expression should be thought of as a way of reversing the dynamics towards the point $S \approx 1$ and $I \approx 0$. Graphically, this means that we are moving along the solution depicted in Figure \ref{fig:pd_modSIR} in the direction opposite to the flow, in order to arrive to the $S$-axis. Note that this limit is independent of the arbitrary time origin used to define the initial conditions for the system of ODEs.
Another interesting insight is gained by computing the area below the infective curve $I(t)$. In order to do that, we do not even need to perform the integration of $I(t)$, since from the third equation in \eqref{eq:modifiedSIR} we get
\begin{equation}
\mathrm{Area}(I) = \int^\infty_0 I(t)\, dt = \frac{1}{\alpha} \int^\infty_0 \dot R(t)\, dt = \lim_{t \to \infty} R(t) - R(0) = \frac{S_0 + I_0}{\alpha} = \frac{1}{\alpha}.
\end{equation}
This result is especially interesting from a parameter estimation point of view, since it allows one to obtain a value for $\alpha$ directly from the data. Afterwards, assuming that $S_0$ and $I_0$ are known, $\beta$ can be obtained, for instance, from \eqref{eq:tpeak}. Thus, the exact solution in closed form greatly simplifies the fitting with actual data. Moreover, as we will see below, since the dynamics of the SIR and modified SIR systems are quite close (for a realistic range of the parameters), this procedure for the determination of the parameters of the modified model provides a good approximation for the parameters of the SIR one.
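The area identity can itself be confirmed by numerical quadrature of the closed-form $I(t)$ (a Python sketch; the truncation point $T=300$, the number of Simpson subintervals and the parameter values are arbitrary illustrative choices):

```python
import math

def infectives(t, S0, I0, alpha, beta):
    """Closed-form I(t) of the modified SIR model, with S0 + I0 = 1."""
    tau = 1.0 / (beta - alpha)
    return I0 * math.exp(t / tau) / (S0 + I0 * math.exp(t / tau)) ** (beta * tau)

def simpson(f, a, b, n):
    """Composite Simpson rule on [a, b] with an even number n of subintervals."""
    h = (b - a) / n
    s = f(a) + f(b)
    for k in range(1, n):
        s += (4.0 if k % 2 else 2.0) * f(a + k * h)
    return s * h / 3.0

alpha, beta = 0.1, 0.3
S0, I0 = 0.99, 0.01
# For large t the integrand decays like e^{-alpha t}, so truncating the
# improper integral at T = 300 leaves a negligible tail.
area = simpson(lambda t: infectives(t, S0, I0, alpha, beta), 0.0, 300.0, 30000)
assert abs(area - 1.0 / alpha) < 1e-8
print("area under I(t):", area, " expected:", 1.0 / alpha)
```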
A related interesting quantity from the epidemiological point of view is the removal rate, defined by $\dot R(t)=\alpha I$. While for the SIR model it can only be approximated by a closed-form expression in certain limits (see \cite{Bailey1975}), in the modified SIR system it can obviously be computed exactly. Therefore, the behaviour of the removal rate (divided by $\alpha$) for the modified SIR system can be directly extracted from Figure \ref{fig:comp_traj}.
For any epidemic outbreak, it is also enlightening to analyze the intersection of the susceptible $S(t)$, infective $I(t)$ and recovered $R(t)$ functions. The closed-form solution of the modified SIR model allows us to get some exact results in this respect, which we write down in the following
\begin{proposition}
\label{eq:crossSIR}
For the modified SIR system given by \eqref{eq:modifiedSIR}, with $\beta > \alpha$ and initial conditions $S(0)=S_0$, $I(0)=I_0$, $R(0)=0$ such that $S_0 > I_0 > 0$ and $S_0 > \alpha/\beta$, any two of the curves $S(t)$, $I(t)$ and $R(t)$ always intersect exactly once, regardless of the exact values of the initial conditions and parameters of the system.
Furthermore:
\begin{itemize}
\item[i)] The curves $S(t)$ and $I(t)$ intersect before the infection peak if $\beta > 2 \alpha$, exactly at the infection peak if $\beta = 2 \alpha$ and after the infection peak if $\beta < 2 \alpha$.
\item[ii)] The three curves $S(t)$, $I(t)$ and $R(t)$ intersect in a common point if and only if $\frac{\beta}{\alpha} < \frac{\log 3}{\log 3 - \log 2}$ and $S_0 = \frac{1}{3} \left( \frac{3}{2} \right)^{\beta / \alpha}$.
\item[iii)] The three curves $S(t)$, $I(t)$ and $R(t)$ intersect exactly at the infection peak if and only if $\beta = 2 \alpha$ and $S_0 = \frac{3}{4}$.
\end{itemize}
\end{proposition}
\begin{proof}
The solution of \eqref{eq:modifiedSIR} when $R(0)=0$ is given by \eqref{eq:solsimply}. In particular, the unique time at which the curves $S(t)$ and $I(t)$ intersect can be explicitly computed from \eqref{eq:ISexp}, and it reads
\begin{equation}
\label{eq:tSI}
t_{SI} = \tau \log \left( \frac{S_0}{I_0} \right) .
\end{equation}
Since we are assuming $S_0 > I_0$, this time is always positive.
From \eqref{eq:solsimply} we can also compute the times $t_{SR}$ and $t_{IR}$ such that $R(t_{SR}) = S(t_{SR})$ and $R(t_{IR}) = I(t_{IR})$. It is easy to check that these times are given by the common expression
\begin{equation}
\label{eq:tSRIRX}
t^* = \tau \log \left( \frac{X-S_0}{I_0} \right) ,
\end{equation}
where $X$ is a solution of the equation
\begin{equation}
\label{eq:XtSR}
X^{\beta \tau} - X - S_0 = 0
\end{equation}
in the case of $t_{SR}$, while $X$ is a solution of the equation
\begin{equation}
\label{eq:XtIR}
X^{\beta \tau} - 2X + S_0 = 0
\end{equation}
in the case of $t_{IR}$. An elementary computation shows that equation \eqref{eq:XtSR} has only one solution, and therefore $t_{SR}$ is unique. Equation \eqref{eq:XtIR} has two solutions, the first lying in $(0,1)$ and the second in $(1,\infty)$. The first of these solutions results in a negative or complex time, so $t_{IR}$ is defined by the unique solution of \eqref{eq:XtIR} in $(1,\infty)$. Therefore, we have proved that $t_{SI}$, $t_{SR}$ and $t_{IR}$ are unique, which means that any two of the curves $S(t)$, $I(t)$ and $R(t)$ intersect exactly once.
Now, statement $i)$ derives straightforwardly from a comparison between \eqref{eq:tSI} and \eqref{eq:tpeak}.
To prove $ii)$ we first note that $t^* = t_{SI}$ if and only if $X=2 S_0$, which is a solution of both \eqref{eq:XtSR} and \eqref{eq:XtIR} if and only if
\begin{equation}
\label{eq:S0triple}
S_0 = \frac{1}{3} \left( \frac{3}{2}\right)^{\beta/\alpha} .
\end{equation}
Given that $S_0 < 1$, then $\frac{1}{3} \left( \frac{3}{2}\right)^{\beta/\alpha} < 1$, and therefore $\frac{\beta}{\alpha} < \frac{\log 3}{\log (3/2)}$.
Statement $iii)$ is a consequence of $i)$ and $ii)$, since in order for the intersection to coincide with the infection peak we need $\beta = 2 \alpha$, and substituting this into the condition for triple intersection \eqref{eq:S0triple} we get $S_0 = \frac{3}{4}$. Equivalently, note that if $\beta = 2 \alpha$, then by \eqref{eq:Ipeak}, $S(t_{\text{peak}}) = I(t_{\text{peak}}) = \frac{1}{4 S_0}$, so $R(t_{\text{peak}}) = 1 - \frac{1}{2 S_0}$, and by imposing that they coincide we get $S_0 = \frac{3}{4}$.
\vspace{-0.5cm}
\end{proof}
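The statements of the Proposition can be checked numerically. The sketch below again assumes the standard-incidence form $\dot S=-\beta SI/(S+I)$, $\dot I=\beta SI/(S+I)-\alpha I$ for \eqref{eq:modifiedSIR} and takes the triple-intersection case $\beta=2\alpha$, $S_0=3/4$, for which all three curves should meet at the infection peak with common value $1/3$:

```python
import math

alpha, beta = 0.5, 1.0          # beta = 2*alpha, triple-intersection regime
S, I = 0.75, 0.25               # S0 = 3/4, so S = I = R = 1/3 at the peak
tau = 1.0 / (beta - alpha)

def f(S, I):
    inc = beta * S * I / (S + I)
    return -inc, inc - alpha * I

h, traj, t = 0.001, [], 0.0
for _ in range(4000):           # classical RK4 up to t = 4
    traj.append((t, S, I))
    k1 = f(S, I)
    k2 = f(S + h / 2 * k1[0], I + h / 2 * k1[1])
    k3 = f(S + h / 2 * k2[0], I + h / 2 * k2[1])
    k4 = f(S + h * k3[0], I + h * k3[1])
    S += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    I += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += h

t_cross, S_cross, I_cross = min(traj, key=lambda p: abs(p[1] - p[2]))
I_max = max(p[2] for p in traj)
t_SI = tau * math.log(0.75 / 0.25)   # predicted crossing time tau*log(S0/I0)
print(t_cross, t_SI, S_cross, I_cross)
```

Both the crossing time $\tau\log(S_0/I_0)$ and the common value $1/3$ of the three curves at the peak are reproduced by the integrator, and the crossing indeed occurs at the maximum of $I(t)$.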
\vspace{-0.7cm}
\paragraph{Remark 3}
The condition $S_0 > \alpha / \beta$ in the previous Proposition is just the condition for the existence of an infectious peak \eqref{eq:peak_cond2} but we are not using it explicitly in the proof. Therefore, all the previous results which do not involve the infection peak are true if this condition is removed.
\vspace{-0.5cm}
\paragraph{Remark 4}
The fact that the functions $S(t)$, $I(t)$ and $R(t)$ always intersect is a striking difference with respect to the original SIR system of Kermack and McKendrick \eqref{eq:SIR}.
\section{Fixed points and comparison with the SIR system}
Finally, it is worth performing a more detailed analysis of the dynamics of the modified SIR system \eqref{eq:modifiedSIR} when compared to the original SIR model \eqref{eq:SIR}. Although the exact closed-form solution here obtained in the modified case is valid for any values of $\alpha$ and $\beta$, for the sake of brevity from now on we only consider the case $\beta > \alpha$ (recall from \eqref{eq:peak_cond} that this is the regime in which an actual outbreak does exist), or equivalently, $\mathcal R_0 > 1$.
Qualitatively, the most significant difference between both dynamical systems is the stability of their fixed points. From equations \eqref{eq:SIR} and \eqref{eq:modifiedSIR} we see that $I=0$ is a line of non-isolated fixed points for both models. The stability of any of these fixed points $p=(S,0)$ is defined simply by the trace of the Jacobian evaluated at this point, $\mathbf{J} (p)$. For the SIR system this trace is
\begin{equation}
\label{eq:trSIR}
\mathrm{Tr}\; \mathbf{J} (p) = \beta S - \alpha ,
\end{equation}
while for the modified SIR system it reads
\begin{equation}
\label{eq:trmodSIR}
\mathrm{Tr}\; \mathbf{J} (p) = \beta - \alpha .
\end{equation}
Therefore, for the modified SIR system all the points belonging to the line $I=0$ (with the exception of $(S,I)=(0,0)$) are unstable (recall that we only consider the epidemiologically relevant case $\beta > \alpha$). Note that this agrees with the value for the basic reproduction number given in Section 3. Meanwhile, for the SIR system points such that $S>\alpha/\beta$ are unstable, while points with $S<\alpha/\beta$ are stable. This can be clearly appreciated in Figures \ref{fig:pd_SIR} and \ref{fig:pd_modSIR}, where the corresponding flows are presented for both systems. Colored curves correspond to the phase space equation $I(S)$ for each model. Trajectories of the system starting at any point of the appropriate curve will follow this curve (in the direction of the flow) in order to reach the relevant fixed point.
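The two traces can be checked directly with finite differences. The sketch below assumes the incidence terms $\beta SI$ for \eqref{eq:SIR} and $\beta SI/(S+I)$ for \eqref{eq:modifiedSIR}:

```python
alpha, beta = 0.2, 1.0

def f_sir(S, I):
    return (-beta * S * I, beta * S * I - alpha * I)

def f_mod(S, I):
    return (-beta * S * I / (S + I), beta * S * I / (S + I) - alpha * I)

def trace_at(f, S, h=1e-6):
    # Tr J(S,0) = d f1/dS + d f2/dI, by central differences at the fixed point (S, 0)
    df1_dS = (f(S + h, 0.0)[0] - f(S - h, 0.0)[0]) / (2 * h)
    df2_dI = (f(S, h)[1] - f(S, -h)[1]) / (2 * h)
    return df1_dS + df2_dI

print(trace_at(f_sir, 0.5), trace_at(f_mod, 0.5))
```

For the SIR field the trace changes sign at $S=\alpha/\beta$, while for the modified system it equals the constant $\beta-\alpha$, independently of $S$.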
The differences regarding the fixed point structure of these two systems can also be analyzed algebraically. For the SIR system \eqref{eq:SIR}, it is well-known that the phase space equation is
\begin{equation}
I(S) = \frac{\alpha}{\beta} \log S - S + \mathcal C,
\end{equation}
where $\mathcal C$ is a constant (this is the equation of the curves in Figure \ref{fig:pd_SIR}, for different values of $\mathcal C$). In fact,
\begin{equation}
\label{eq:C_SIR}
\mathcal C = S + I - \frac{\alpha}{\beta} \log S
\end{equation}
is the Casimir function for the associated Poisson structure (see \cite{BBG2020hamiltonianepidemics} for details). It is easy to prove that the equation $I(S) = 0$ always has a solution $S \in (0,\alpha/\beta)$. However, the phase space equation $I(S) = 0$ for the modified SIR system, where $I(S)$ is given by \eqref{eq:I_S}, always has $S=0$ as a solution (see Figure \ref{fig:pd_modSIR}). This equation is directly obtained from the Casimir function \eqref{eq:C}, and it is interesting to compare this Casimir function \eqref{eq:C} with the exponential of \eqref{eq:C_SIR}.
\medskip
\paragraph{Remark 5}
The previous discussion shows that the different qualitative behaviour of the systems \eqref{eq:SIR} and \eqref{eq:modifiedSIR} can be algebraically understood through the differences between the Casimir functions \eqref{eq:C} and \eqref{eq:C_SIR}, and in particular, the different structure and location of their zeroes within the phase space.
From the epidemiological point of view, the existence of stable fixed points (different from $(0,0)$) on the $I=0$ axis explains the well-known fact that in the original SIR model of Kermack and McKendrick the whole population is not infected during the evolution of the infection. While these results can be obtained from a dynamical systems approach, it is interesting to note their direct connection with the algebraic and geometric structure of the Poisson manifold underlying the description of epidemiological models as generalized Hamiltonian systems.
\vspace{-0.5cm}
\paragraph{Remark 6} For the modified SIR system, the closed-form analytical solution \eqref{eq:sol} contains all the previous information. Solutions with $I_0=0$ are constant functions, and
\begin{equation}
\lim_{t \to \infty} S(t) = 0, \qquad \lim_{t \to \infty} I(t) = 0, \qquad \lim_{t \to \infty} R(t) = 1 ,
\end{equation}
for any initial conditions such that $I_0 \neq 0$. This shows that the only stable fixed point is $(S,I) = (0,0)$. Note that for the modified SIR system it takes an infinite time to reach this fixed point, regardless of the initial conditions.
In Figure \ref{fig:comp_traj}, plots for some trajectories $S(t)$ and $I(t)$ contained in the phase space orbits from Figures \ref{fig:pd_SIR} and \ref{fig:pd_modSIR} are depicted. In the left column $\beta = 1$ and $\alpha = 0.2$ while in the right column $\beta = 1$ and $\alpha = 0.6$. Each plot contains four different curves: coloured ones correspond to $I(t)$ while black ones correspond to $S(t)$, and solid ones correspond to the modified SIR system while dashed ones correspond to the original SIR system. The first row shows the dynamics for initial conditions such that the outbreak rapidly extinguishes. The second row shows the limiting case given by $S_0 = \alpha / \beta$ (note that this value is the same for both models since $S_0+I_0 = 1$). The third row shows the typical behavior for values of the parameters and initial conditions for which there is an actual outbreak, and therefore $I(t)$ has a maximum. The fourth row shows a situation such that at the beginning of the outbreak a fraction of the total population is immunized.
\begin{figure}[H]
\begin{center}
\includegraphics[width=.4\textwidth, height=.3\textwidth]{./C1a.pdf} \includegraphics[width=.4\textwidth, height=.3\textwidth]{./C1b.pdf} \\
\includegraphics[width=.4\textwidth, height=.3\textwidth]{./C2a.pdf} \includegraphics[width=.4\textwidth, height=.3\textwidth]{./C2b.pdf} \\
\includegraphics[width=.4\textwidth, height=.3\textwidth]{./C3a.pdf} \includegraphics[width=.4\textwidth, height=.3\textwidth]{./C3b.pdf} \\
\includegraphics[width=.4\textwidth, height=.3\textwidth]{./C4a.pdf} \includegraphics[width=.4\textwidth, height=.3\textwidth]{./C4b.pdf} \\
\end{center}
\vspace{-0.5cm}
\caption{\small{ $S(t)$ (black) and $I(t)$ (colored) functions for the SIR system (dashed) and the modified SIR system (solid). $\beta = 1$. Left: $\alpha=0.2$. Right: $\alpha=0.6$.
Blue: $S_0 = 0.1$, $I_0 = 0.9$. Orange: $S_0 = \alpha / \beta$, $I_0 = 1- \alpha / \beta$. Cyan: $S_0 = 0.9$, $I_0 = 0.1$. Magenta: $S_0 = 0.5$, $I_0 = 0.001$.
\label{fig:comp_traj}
}}
\end{figure}
\hspace{.02\textwidth}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.4\textwidth, height=.4\textwidth]{./PD_SIR1.pdf}
\includegraphics[width=.4\textwidth, height=.4\textwidth]{./PD_SIR2.pdf}
\end{center}
\vspace{-0.5cm}
\caption{\small{ Phase space for the SIR system \eqref{eq:SIR}. $\beta = 1$. Left: $\alpha=0.2$. Right: $\alpha=0.6$.
Blue line: $S_0 = 0.1$, $I_0 = 0.9$. Orange line: $S_0 = \alpha / \beta$, $I_0 = 1- \alpha / \beta$. Cyan line: $S_0 = 0.9$, $I_0 = 0.1$. Magenta line: $S_0 = 0.5$, $I_0 = 0.$ Red: Stable points. Green: Unstable points.
\label{fig:pd_SIR}
}}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.4\textwidth, height=.4\textwidth]{./PD_modSIR1.pdf}
\includegraphics[width=.4\textwidth, height=.4\textwidth]{./PD_modSIR2.pdf}
\end{center}
\vspace{-0.5cm}
\caption{\small{ Phase space for the modified SIR system \eqref{eq:modifiedSIR}. $\beta = 1$. Left: $\alpha=0.2$. Right: $\alpha=0.6$.
Blue line: $S_0 = 0.1$, $I_0 = 0.9$. Orange line: $S_0 = \alpha / \beta$, $I_0 = 1- \alpha / \beta$. Cyan line: $S_0 = 0.9$, $I_0 = 0.1$. Magenta line: $S_0 = 0.5$, $I_0 = 0.$
\label{fig:pd_modSIR}
}}
\end{figure}
These plots show all possible qualitatively different dynamics for the SIR and modified SIR systems. Essentially, as the ratio $\alpha/\beta$ grows, stronger differences between the two models arise. In the left column we have $\alpha/\beta = 1/5$ and the dynamics of both systems are quite close (for the first three rows). In the right column $\alpha/\beta = 3/5$ and stronger differences appear, especially for $S(t)$. The most striking difference between both systems can be appreciated in the picture located at the last row, second column, which corresponds to a small perturbation of the case when initially half of the population is immunized and the other half is susceptible to the infection. In this case, the SIR system predicts no outbreak (it is a stable fixed point), while the modified SIR system does predict it. Obviously, this is due to the fact that in the modified SIR system we are assuming that the recovered population has been removed (death, quarantine, etc.) and therefore does not interact (thus not contributing to the so-called `herd immunity', which is of course not attainable in this model). All these considerations can also be deduced from the phase space representation in Figures \ref{fig:pd_SIR} and \ref{fig:pd_modSIR}.
However, it is important to stress that, very often, the most relevant scenario from the epidemiological viewpoint is the one in which there are no immunized individuals at the beginning (or there are very few of them), which is given by the third row (cyan). We can conclude that in this scenario, especially when the ratio $\alpha/\beta$ is smaller, the SIR and modified SIR systems present similar features, with the modified SIR model always predicting a larger infection peak than the SIR one.
\section{Modified SIR system with a time-dependent transmission rate}
We will now consider the modified SIR system where the transmission rate $\beta(t)$ becomes time-dependent, while we keep the recovery rate $\alpha$ as a constant value. As we mentioned in the introduction, the variability of the transmission rate along the evolution of a given disease can usually be related to modifications in the behavior of either the host or the pathogen/vector, which can in some cases be due to seasonal effects. Therefore, it seems natural to wonder whether such time-dependent modified SIR models with smoothly varying $\beta(t)$ also admit exact solutions in closed form (models with abrupt changes in time of the transmission rate are also considered in the literature~\cite{Liu}).
This question can be answered in the affirmative by recalling Remark 2 in Section 2, where the new variable $y(t) = {I(t)}/{S(t)} $ was considered. It is straightforward to check that the modified SIR model with $\beta(t)$ leads to a formally equivalent system~\eqref{eq:MSIRy} (and the same would happen when a time-dependent recovery rate $\alpha(t)$ is simultaneously considered), namely
\begin{equation}
\label{eq:MSIRydept}
\dot{S} = -\frac{\beta(t)}{1+y}\,S\,y, \qquad\qquad\qquad \dot{y} = (\beta(t)-\alpha) \,y, \qquad\qquad\qquad \dot R = \alpha\,S\,y ,
\end{equation}
for any time-dependent transmission rate function $\beta(t)$. Therefore, the equation for $y(t)$ can always be integrated,
\begin{equation}
\label{eq:MSIRyt}
y(t) = \frac{I(t)}{S(t)} = \frac{I_0}{S_0}e^{\int_0^t \left( \beta (s) - \alpha \right) \mathrm d s} ,
\end{equation}
and its solution will be given in closed form provided that the function $\beta(t)$ admits a well-defined primitive function. As a consequence,
we can write the explicit solution for $S(t)$ in the form
\begin{equation}\label{eq:SinMSIRydept}
S(t) = S_0 e^{-\int_0^t \frac{\beta(s) y(s)}{1+y(s)} \mathrm d s} = S_0 \exp \left( - \int_0^t \beta(s) \left( \frac{\frac{I_0}{S_0} e^{\int_0^s \left( \beta (u) - \alpha \right) \mathrm d u}}{1 + \frac{I_0}{S_0} e^{\int_0^s \left( \beta (u) - \alpha \right) \mathrm d u}} \right) \mathrm d s \right) ,
\end{equation}
and finally we have
\begin{equation}
I(t) = S(t) \,y(t) , \qquad \qquad R(t) = \alpha \int_0^t I(s)\, \mathrm d s .
\end{equation}
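The closed-form expression for $y(t)$ can be verified against a direct numerical integration. The sketch below uses an arbitrary smooth transmission rate $\beta(t)=0.8+0.3\cos t$ (an illustrative choice, not one of the models below), for which $\int_0^t(\beta(s)-\alpha)\,\mathrm ds = (0.8-\alpha)t+0.3\sin t$, and again assumes the incidence term $\beta(t)SI/(S+I)$:

```python
import math

alpha = 0.2
S0, I0 = 0.9, 0.1
beta = lambda t: 0.8 + 0.3 * math.cos(t)     # illustrative smooth beta(t)

def f(t, S, I):
    inc = beta(t) * S * I / (S + I)
    return -inc, inc - alpha * I

S, I, t, h = S0, I0, 0.0, 0.001
for _ in range(3000):                        # classical RK4 up to t = 3
    k1 = f(t, S, I)
    k2 = f(t + h / 2, S + h / 2 * k1[0], I + h / 2 * k1[1])
    k3 = f(t + h / 2, S + h / 2 * k2[0], I + h / 2 * k2[1])
    k4 = f(t + h, S + h * k3[0], I + h * k3[1])
    S += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    I += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += h

y_numeric = I / S
y_closed = (I0 / S0) * math.exp((0.8 - alpha) * t + 0.3 * math.sin(t))
print(y_numeric, y_closed)
```

The ratio $I/S$ obtained from the integrator matches the closed-form $y(t)$ to integrator accuracy.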
As we will show in the sequel, there do exist (a priori realistic) functions $\beta(t)$ such that~\eqref{eq:SinMSIRydept} can also be given in closed form. In particular, we will work out two models: the first one with a monotonically decreasing transmission rate, and the second one corresponding to a disease in which the transmission rate has a maximum at $t\neq 0$. Finally, for the sake of completeness, an oscillating transmission rate representing periodic seasonal effects will also be considered, although in this case~\eqref{eq:SinMSIRydept} cannot be given in closed form.
\subsection{A model with a monotonically decreasing transmission rate}
Let us consider the transmission rate function
\begin{equation}
\beta (t) = \alpha + \frac{1/\tau}{1+t/\tau}, \qquad\mbox{with}\qquad \tau = 1/(\beta_m - \alpha) \, .
\end{equation}
This function presents a maximum with value $\beta_m$ at $t=0$, and then monotonically decreases towards $\alpha$ (see Figure \ref{fig:comp_beta}). This function could model a situation in which the population becomes aware of the presence of the virus at $t = 0$ and takes progressive measures in order to prevent its spread, such as social distancing, mask wearing, etc. In this case the closed-form solution of the modified SIR model is given by
\begin{equation}
y(t) = \frac{I_0}{S_0} \left( 1 + \frac{t}{\tau}\right) ,
\end{equation}
\begin{equation}
S(t) = S_0 \left( \frac{S_0 + I_0}{S_0 + I_0 (1+t/\tau)} \right)^{1-\frac{S_0}{I_0} \alpha \tau} e^{-\alpha t} ,
\end{equation}
and
\begin{equation}
I(t) = I_0 \left( 1 + \frac{t}{\tau}\right) \left( \frac{S_0 + I_0}{S_0 + I_0 (1+t/\tau)} \right)^{1-\frac{S_0}{I_0} \alpha \tau} e^{-\alpha t} .
\end{equation}
As we can appreciate in Figure \ref{fig:comp_betat}, in this model the height of the peak of the outbreak is smaller than in the modified SIR model with $\beta$ constant. This peak takes place at the time
\begin{equation}
t_{peak} = \tau \left( \sqrt{\frac{S_0}{I_0 \alpha \tau}} - 1 \right) \, ,
\end{equation}
which can be given in terms of the parameters of the model due to the closed form for $I(t)$.
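As a consistency check, the closed-form expressions above can be compared with a direct numerical integration of the system with this $\beta(t)$ (assuming, as before, the incidence $\beta(t)SI/(S+I)$; the values $\alpha=0.2$, $\beta_m=1$, $S_0=0.9$, $I_0=0.1$ are those used in the figures):

```python
import math

alpha, beta_m = 0.2, 1.0
S0, I0 = 0.9, 0.1
tau = 1.0 / (beta_m - alpha)
beta = lambda t: alpha + (1.0 / tau) / (1.0 + t / tau)

def S_closed(t):
    p = 1.0 - (S0 / I0) * alpha * tau
    return S0 * ((S0 + I0) / (S0 + I0 * (1 + t / tau))) ** p * math.exp(-alpha * t)

def I_closed(t):
    return (I0 / S0) * (1 + t / tau) * S_closed(t)

def f(t, S, I):
    inc = beta(t) * S * I / (S + I)
    return -inc, inc - alpha * I

S, I, t, h = S0, I0, 0.0, 0.001
for _ in range(8000):                        # RK4 up to t = 8 (past the peak)
    k1 = f(t, S, I)
    k2 = f(t + h / 2, S + h / 2 * k1[0], I + h / 2 * k1[1])
    k3 = f(t + h / 2, S + h / 2 * k2[0], I + h / 2 * k2[1])
    k4 = f(t + h, S + h * k3[0], I + h * k3[1])
    S += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    I += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += h

t_peak = tau * (math.sqrt(S0 / (I0 * alpha * tau)) - 1.0)   # equals 6.25 here
print(S, S_closed(8.0), I, I_closed(8.0), t_peak)
```

The numerical and closed-form trajectories agree to integrator accuracy, and the predicted peak time is indeed a local maximum of $I(t)$.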
\subsection{A model with a maximum transmission rate for $t\neq 0$}
Let us now consider the transmission rate function (see Figure \ref{fig:comp_beta})
\begin{equation}
\beta (t) = \alpha + \frac{2 t/\tau^2}{1+t^2/\tau^2}, \qquad
\mbox{with}
\qquad \tau = 1/(\beta_m - \alpha).
\end{equation}
This function presents a maximum of the transmission rate with value $\beta_m$ at $t=\tau$, which would model a strong initial intensification of the transmission rate followed by its monotonic decrease. This second example would provide a toy model for a situation such as an abrupt cancellation of a confinement, where the population incorrectly assumes that the disease is no longer present, therefore resuming its usual interactions very quickly, and afterwards progressively correcting this behavior. The exact solution of the model in closed form is given by
\begin{equation}
y(t) = \frac{I_0}{S_0} \left( 1 + \frac{t^2}{\tau^2}\right) ,
\end{equation}
\begin{equation}
S(t) = S_0 \left( \frac{S_0 + I_0}{S_0 + I_0 (1+t^2/\tau^2)} \right)
\exp \left[
\alpha\left(
\dfrac{S_0}{S_0+I_0}
\right)\left(\dfrac{\arctan \left(\sqrt{\frac{I_0}{S_0+I_0}}\frac{t}{\tau}\right)}{\sqrt{\frac{I_0}{S_0+I_0}}\frac{1}{\tau}}\right)
\right]
e^{- \alpha t},
\end{equation}
and
\begin{equation}
I(t) = I_0 \left( 1 + \frac{t^2}{\tau^2}\right) \left( \frac{S_0 + I_0}{S_0 + I_0 (1+t^2/\tau^2)} \right)
\exp \left[
\alpha\left(
\dfrac{S_0}{S_0+I_0}
\right)\left(\dfrac{\arctan \left(\sqrt{\frac{I_0}{S_0+I_0}}\frac{t}{\tau}\right)}{\sqrt{\frac{I_0}{S_0+I_0}}\frac{1}{\tau}}\right)
\right]
e^{- \alpha t} .
\end{equation}
As shown in Figure \ref{fig:comp_betat}, for the second model the infection peak is higher than the one corresponding to the first model, although the maximum value of $\beta$ is the same in both cases. This shows that for this kind of model
not only the maximum value of the transmission rate function is relevant, but also the time at which the peak of $\beta(t)$ appears.
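The arctan solution can be checked in the same way (same incidence assumption and parameter values as in the previous sketch):

```python
import math

alpha, beta_m = 0.2, 1.0
S0, I0 = 0.9, 0.1
tau = 1.0 / (beta_m - alpha)
beta = lambda t: alpha + (2 * t / tau**2) / (1.0 + (t / tau) ** 2)
k = math.sqrt(I0 / (S0 + I0))

def S_closed(t):
    pref = (S0 + I0) / (S0 + I0 * (1 + (t / tau) ** 2))
    arc = alpha * (S0 / (S0 + I0)) * math.atan(k * t / tau) * tau / k
    return S0 * pref * math.exp(arc) * math.exp(-alpha * t)

def I_closed(t):
    return (I0 / S0) * (1 + (t / tau) ** 2) * S_closed(t)

def f(t, S, I):
    inc = beta(t) * S * I / (S + I)
    return -inc, inc - alpha * I

S, I, t, h = S0, I0, 0.0, 0.001
for _ in range(5000):                        # RK4 up to t = 5
    k1 = f(t, S, I)
    k2 = f(t + h / 2, S + h / 2 * k1[0], I + h / 2 * k1[1])
    k3 = f(t + h / 2, S + h / 2 * k2[0], I + h / 2 * k2[1])
    k4 = f(t + h, S + h * k3[0], I + h * k3[1])
    S += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    I += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += h
print(S, S_closed(5.0), I, I_closed(5.0))
```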
\begin{figure}[H]
\begin{center}
\includegraphics[width=.6\textwidth, height=.3\textwidth]{./beta1_beta2.pdf}
\end{center}
\vspace{-0.5cm}
\caption{\small{ Solid: $\beta (t) = \alpha + \frac{1/\tau}{1+t/\tau}$. Dashed: $\beta (t) = \alpha + \frac{2 t/\tau^2}{1+t^2/\tau^2}$. $\beta_m = 1, \alpha = 0.2$.
\label{fig:comp_beta}
}}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.48\textwidth, height=.33\textwidth]{./betat_1.pdf} \includegraphics[width=.48\textwidth, height=.33\textwidth]{./betat_2.pdf} \\
\end{center}
\vspace{-0.5cm}
\caption{\small{ Left: Modified SIR with $\beta$ constant (dotted, red) and modified SIR with $\beta (t) = \alpha + \frac{1/\tau}{1+t/\tau}$ (black). Right: Modified SIR with $\beta$ constant (dotted, red) and modified SIR with $\beta (t) = \alpha + \frac{2 t/\tau^2}{1+t^2/\tau^2}$ (black). $\beta = \beta_m = 1, \alpha = 0.2$. $S_0 = 0.9$, $I_0 = 0.1$.
\label{fig:comp_betat}
}}
\end{figure}
\begin{figure}[H]
\begin{center}
\includegraphics[width=.48\textwidth, height=.33\textwidth]{./betatomega2.pdf} \includegraphics[width=.48\textwidth, height=.33\textwidth]{./betatomega5.pdf} \\
\end{center}
\vspace{-0.5cm}
\caption{\small{ Modified SIR with $\beta$ constant (dotted, red) and modified SIR with $\beta (t ) = \beta_0 (1 + \gamma\, \sin (\omega\, t))$ (black) and $\gamma=1, \beta = \beta_0 = 1, \alpha = 0.2$; $S_0 = 0.9$, $I_0 = 0.1$.
Left: $\omega=2$. Right: $\omega=5$.
\label{fig:periodicbeta}
}}
\end{figure}
\subsection{A model with oscillating transmission rate}
In the case of diseases with periodic outbreaks due, for instance, to seasonal variability, the transmission rate function is often assumed to take a sinusoidal form (see~\cite{Keeling, Pollicott, Mummert})
\begin{equation}
\beta (t ) = \beta_0 (1 + \gamma\, \sin (\omega\, t))\,,
\end{equation}
where $0<\gamma\leq 1$ and $T=2\pi/\omega$ is the seasonality period.
In this case $y(t)$~\eqref{eq:MSIRyt} is straightforwardly obtained, but~\eqref{eq:SinMSIRydept} cannot be obtained in closed form, although its numerical integration can be easily performed. In Figure \ref{fig:periodicbeta} two instances of periodic $\beta(t)$ are presented, where the relevance of the forcing period of $\beta(t)$ on the populations is neatly reflected (see~\cite{Mummert} for the fitting of actual influenza data with a SIR model endowed with an oscillating transmission rate, where the same type of forcing effect shown in Figure \ref{fig:periodicbeta} can be appreciated).
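Even though \eqref{eq:SinMSIRydept} has no closed form here, it reduces to a one-dimensional quadrature once $y(t)$ is known: for $\beta(t)=\beta_0(1+\gamma\sin\omega t)$ one has $\int_0^t(\beta(s)-\alpha)\,\mathrm ds=(\beta_0-\alpha)t+\beta_0\gamma(1-\cos\omega t)/\omega$. The sketch below (with the parameter values of Figure \ref{fig:periodicbeta}, $\omega=2$, and the usual incidence assumption) evaluates $S(t)$ by composite Simpson quadrature and compares it with an RK4 integration:

```python
import math

alpha, beta0, gamma, omega = 0.2, 1.0, 1.0, 2.0
S0, I0 = 0.9, 0.1
beta = lambda t: beta0 * (1.0 + gamma * math.sin(omega * t))

def y(t):   # closed-form ratio I/S
    return (I0 / S0) * math.exp((beta0 - alpha) * t
                                + beta0 * gamma * (1.0 - math.cos(omega * t)) / omega)

def S_quad(t, n=2000):
    # composite Simpson rule for the exponent in the quadrature formula for S(t)
    g = lambda s: beta(s) * y(s) / (1.0 + y(s))
    h = t / n
    acc = g(0.0) + g(t)
    for i in range(1, n):
        acc += (4 if i % 2 else 2) * g(i * h)
    return S0 * math.exp(-acc * h / 3.0)

def f(t, S, I):
    inc = beta(t) * S * I / (S + I)
    return -inc, inc - alpha * I

S, I, t, h = S0, I0, 0.0, 0.001
for _ in range(4000):                        # RK4 up to t = 4
    k1 = f(t, S, I)
    k2 = f(t + h / 2, S + h / 2 * k1[0], I + h / 2 * k1[1])
    k3 = f(t + h / 2, S + h / 2 * k2[0], I + h / 2 * k2[1])
    k4 = f(t + h, S + h * k3[0], I + h * k3[1])
    S += h / 6 * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0])
    I += h / 6 * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1])
    t += h
print(S, S_quad(4.0))
```

The quadrature and the direct integration of the ODE system agree, confirming that the lack of a closed form is purely a matter of the final integral, not of the reduction itself.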
Summarizing, in this paper we have shown that the modified SIR model in which recovered individuals are removed from the population can be exactly solved in closed form by making use of its underlying Hamiltonian structure, as well as some of its generalizations with time-dependent transmission rates. Although SIR models are the most schematic ones for describing compartmental epidemiological dynamics, their features are rich enough to model many of the dynamical essentials. Therefore, having their solutions in closed form at hand is helpful, since this provides explicit expressions for the relevant epidemiological quantities in terms of the parameters of the model. In particular, the search for other exactly solvable models with time-dependent transmission rates would certainly be helpful in the field of non-pharmacological control strategies, and is worth investigating.
\section*{Acknowledgements}
This work has been partially supported by Ministerio de Ciencia e Innovaci\'on (Spain) under grants MTM2016-79639-P (AEI/FEDER, UE) and PID2019-106802GB-I00/AEI/10.13039/501100011033, and by Junta de Castilla y Le\'on (Spain) under grants BU229P18 and BU091G19.
\section{Introduction}
In this paper we consider polynomials orthogonal with respect to the \textit{generalised higher order Freud weight}
\begin{equation} \label{freudg}
\omega(x;t,\lambda)=|x|^{2\lambda+1}\exp\left(tx^2-x^{2m}\right)
\qquad x,t \in\mathbb{R}, \qquad m=2,3,\dots,\end{equation}
with $\lambda>-1$ a parameter.
Let $\omega(x)$ be a positive weight function defined on the real line $\mathbb{R}$ for which all the moments \begin{equation} \label{eq:moment}
\mu_k=\int_{-\infty}^{\infty} x^k\omega(x)\,{\rm d} x, \qquad k=0,1,\dots,\end{equation} exist. Then the sequence of monic orthogonal polynomials $\big\{P_{n}(x)\big\}_{n \in \mathbb{N}}$, where $P_{n}(x)$ is a polynomial of degree $n$ in $x$, is given by \begin{equation} \nonumber\int_{-\infty}^{\infty} P_m(x)P_{n}(x)\,\omega(x)\,{\rm d} x = h_{n}\delta_{m,n},\qquad h_{n}>0,\label{eq:norm}\end{equation}
where $\delta_{m,n}$ denotes the Kronecker delta.
A fundamental property of orthogonal polynomials is that they satisfy a three-term recurrence relation of the form
\begin{equation} \label{eq:3trr}
P_{n+1}(x)=(x-\alpha_{n})P_{n}(x)-\beta_{n}P_{n-1}(x),
\end{equation}
with $\beta_{n}>0$ and initial values $P_{-1}(x)=0$ and $P_{0}(x)=1$. The recurrence coefficients $\alpha_{n}$ and $\beta_{n}$ are given by the integrals
\[
\alpha_{n} = \frac1{h_{n}}\int_{-\infty}^{\infty} xP_{n}^2(x)\,\omega(x)\,{\rm d} x,\qquad \beta_{n} = \frac1{h_{n-1}}\int_{-\infty}^{\infty} xP_{n-1}(x)P_{n}(x)\,\omega(x)\,{\rm d} x.
\]
The coefficient $\beta_{n}$ in the recurrence relation \eqref{eq:3trr} can also be expressed in terms of the Hankel determinant \begin{equation}\label{eq:detsDn}
\Delta_{n}=\mathop{\rm det}\nolimits\big[\mu_{j+k}\big]_{j,k=0}^{n-1}=\left|\begin{matrix} \mu_0 & \mu_1 & \ldots & \mu_{n-1}\\
\mu_1 & \mu_2 & \ldots & \mu_{n}\\
\vdots & \vdots & \ddots & \vdots \\
\mu_{n-1} & \mu_{n} & \ldots & \mu_{2n-2}\end{matrix}\right|,\qquad n\geq1,\end{equation}
with $\Delta_0=1$, $\Delta_{-1}=0$, whose entries are given in terms of the moments \eqref{eq:moment} associated with the weight $\omega(x)$. Specifically
\begin{equation}\label{def:bn}
\beta_{n} = \frac{\Delta_{n+1}\Delta_{n-1}}{\Delta_{n}^2}.\end{equation}
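As a numerical illustration of \eqref{def:bn} (an added sketch, not part of the original text): for the Hermite weight $\omega(x)=\exp(-x^2)$ the even moments are $\mu_{2k}=\Gamma(k+\tfrac12)$, the odd moments vanish, and the monic recurrence coefficients are known to be $\beta_n=n/2$, which the Hankel-determinant formula reproduces:

```python
import math
import numpy as np

# Moments of the Hermite weight exp(-x^2): mu_{2k} = Gamma(k + 1/2), odd moments vanish.
def mu(j):
    return math.gamma((j + 1) / 2) if j % 2 == 0 else 0.0

N = 6
Delta = [1.0]                                  # Delta_0 = 1
for n in range(1, N + 1):
    H = np.array([[mu(j + k) for k in range(n)] for j in range(n)])
    Delta.append(np.linalg.det(H))

beta = [Delta[n + 1] * Delta[n - 1] / Delta[n] ** 2 for n in range(1, N)]
print(beta)    # expected: n/2 for n = 1,...,5
```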
The monic polynomial $P_{n}(x)$ can be uniquely expressed as the determinant
\begin{equation}\nonumber P_{n}(x)=\frac1{\Delta_{n}}\left|\begin{matrix} \mu_0 & \mu_1 & \ldots & \mu_{n}\\
\mu_1 & \mu_2 & \ldots & \mu_{n+1}\\
\vdots & \vdots & \ddots & \vdots \\
\mu_{n-1} & \mu_{n} & \ldots & \mu_{2n-1}\\
1 & x & \ldots &x^n\end{matrix}\right|,\label{eq:Pndet}\end{equation}
and the normalisation constants as
\begin{equation}\label{def:norm}h_{n}=\frac{\Delta_{n+1}}{\Delta_{n}},\qquad h_0=\Delta_1=\mu_0.
\end{equation}
Also from \eqref{def:bn} and \eqref{def:norm}, we see that the relationship between the recurrence coefficient $\beta_{n}$ and the normalisation constants $h_{n}$ is given by
\begin{equation}\label{bn:hn}\nonumber h_{n}=\beta_{n} h_{n-1}.\end{equation}
For symmetric weights, since $\omega(x)=\omega(-x)$, it follows that $\alpha_{n}=0$ in \eqref{eq:3trr}. Hence, for symmetric weights, the sequence of monic orthogonal polynomials $\big\{P_{n}(x)\big\}_{n\in\mathbb{N}}$ satisfies the three-term recurrence relation
\begin{equation}\label{eq:srr}
P_{n+1}(x)=xP_{n}(x)-\beta_{n}P_{n-1}(x).
\end{equation}
The monic orthogonal polynomials $P_{n}(x)$ associated with symmetric weights are also symmetric, i.e. $P_{n}(-x)=(-1)^nP_{n}(x)$. This implies that each $P_{n}$ contains only even or only odd powers of $x$ and we can write
\begin{align*} P_{2n}(x)&=x^{2n}+\sum_{k=1}^{n}c_{2n-2k}^{(2n)} x^{2n-2k}
=x^{2n}+c_{2n-2}^{(2n)}x^{2n-2}+\dots+c_{0}^{(2n)},\\
P_{2n+1}(x)&=x^{2n+1}+\sum_{k=1}^{n}c_{2n-2k+1}^{(2n+1)}x^{2n-2k+1}
=x^{2n+1}+c_{2n-1}^{(2n+1)}x^{2n-1}+\dots+c_{1}^{(2n+1)}x.
\end{align*} Substituting these expressions into the recurrence relation \eqref{eq:srr} and comparing the coefficients on each side, we obtain
\begin{align}
\beta_{2n}&=c_{2n-2}^{(2n)}-c_{2n-1}^{(2n+1)},\qquad \label{bodd}
\beta_{2n+1}=-\frac{c_0^{(2n+2)}}{c_0^{(2n)}}=-\frac{P_{2n+2}(0)}{P_{2n}(0)}.\end{align}
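These coefficient relations are easy to verify numerically. A small added sketch, using monic Hermite polynomials generated from the recurrence \eqref{eq:srr} with the known coefficients $\beta_n=n/2$:

```python
import numpy as np

# Monic Hermite polynomials from P_{n+1} = x P_n - (n/2) P_{n-1}.
N = 8
x = np.poly1d([1.0, 0.0])
P = [np.poly1d([1.0]), x]
for n in range(1, N):
    P.append(x * P[n] - (n / 2) * P[n - 1])

# Coefficients are stored highest degree first, so P[2n].c[2] is c_{2n-2}^{(2n)}
# and P[2n+1].c[2] is c_{2n-1}^{(2n+1)}.
for n in (1, 2, 3):
    print(P[2 * n].c[2] - P[2 * n + 1].c[2], -P[2 * n + 2](0) / P[2 * n](0))
```

The printed pairs reproduce $\beta_{2n}=n$ and $\beta_{2n+1}=(2n+1)/2$, i.e. $\beta_n=n/2$.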
It follows from \eqref{eq:moment} that, for symmetric weights, $\mu_{2k-1}=0$, $k=1,2,\ldots\ $ and hence
it is possible to write the Hankel determinant $\Delta_{n}$ given by \eqref{eq:detsDn} in terms of the product of two Hankel determinants obtained by matrix manipulation, interchanging columns and rows. The product decomposition, depending on $n$ even or odd, is given by
\begin{equation} \Delta_{2n}=\mathcal{A}_{n}\mathcal{B}_{n},\qquad \Delta_{2n+1}=\mathcal{A}_{n+1}\mathcal{B}_{n},\label{res:lemma21}\end{equation}
where $\mathcal{A}_{n}$ and $\mathcal{B}_{n}$ are the Hankel determinants
\begin{align}\label{def:AnBn} \mathcal{A}_{n}
=\left|\begin{matrix}
\mu_0 & \mu_2 & \ldots & \mu_{2n-2}\\
\mu_2 & \mu_4 & \ldots & \mu_{2n} \\
\vdots & \vdots & \ddots & \vdots \\
\mu_{2n-2} & \mu_{2n}& \ldots & \mu_{4n-4}
\end{matrix}\right|,\qquad
\mathcal{B}_{n}
=\left|\begin{matrix}
\mu_2 & \mu_4 & \ldots & \mu_{2n}\\
\mu_4 & \mu_6 & \ldots & \mu_{2n+2} \\
\vdots & \vdots & \ddots & \vdots \\
\mu_{2n} & \mu_{2n+2}& \ldots & \mu_{4n-2}
\end{matrix}\right|,
\end{align}
with $\mathcal{A}_0=\mathcal{B}_0=1$. Consequently, for a symmetric weight, substituting \eqref{res:lemma21} into \eqref{def:bn}, the recurrence coefficient $\beta_{n}$ is given by
\begin{equation} \nonumber\beta_{2n} = \frac{\mathcal{A}_{n+1}\mathcal{B}_{n-1}}{\mathcal{A}_{n}\mathcal{B}_{n}},\qquad
\beta_{2n+1}= \frac{\mathcal{A}_{n}\mathcal{B}_{n+1}}{\mathcal{A}_{n+1}\mathcal{B}_{n}}.\label{def:betan}
\end{equation}
Semiclassical orthogonal polynomials are natural generalisations of classical orthogonal polynomials and were introduced by Shohat in \cite{refShohat39}. Maroni provided a unified theory for semiclassical orthogonal polynomials (cf.~\cite{refMaroni2, refMaroni3}). The weights of classical orthogonal polynomials satisfy a first-order ordinary differential equation, the \textit{Pearson equation}
\begin{equation*}\label{eq:Pearson}
\deriv{}{x}(\sigma(x)\omega(x))=\tau(x)\omega(x),
\end{equation*}
where $\sigma(x)$ is a monic polynomial of degree at most $2$ and $\tau(x)$ is a polynomial with degree $1$. For semiclassical orthogonal polynomials, the weight function $\omega(x)$ satisfies a Pearson equation
\eqref{eq:Pearson}
with either deg$(\sigma(x))>2$ or deg$(\tau(x))\neq1$ (cf.~\cite{refHvR,refMaroni}).
The generalised higher order Freud weight given by \eqref{freudg}
is a symmetric weight that satisfies
\begin{equation}\label{w diff eq}
\deriv{}{x}\left\{\sigma(x)\omega(x;t,\lambda)\right\}=\tau(x)\omega(x;t,\lambda),
\end{equation}
with $\sigma(x)=x$ and $\tau(x)=2(tx^2-mx^{2m}+\lambda+1)$ and therefore is a semiclassical weight.
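The Pearson equation \eqref{w diff eq} can be confirmed numerically by a finite-difference check of $\tfrac{\rm d}{{\rm d}x}\{x\,\omega(x;t,\lambda)\}=2(tx^2-mx^{2m}+\lambda+1)\,\omega(x;t,\lambda)$; the parameter values below are arbitrary illustrative choices:

```python
import math

lam, t, m = 0.3, 0.5, 2            # illustrative parameters: lambda > -1, m >= 2

def w(x):
    # generalised higher order Freud weight |x|^(2*lambda+1) * exp(t*x^2 - x^(2m))
    return abs(x) ** (2 * lam + 1) * math.exp(t * x * x - x ** (2 * m))

def lhs(x, h=1e-6):
    # central difference of d/dx [ x * w(x) ]
    return ((x + h) * w(x + h) - (x - h) * w(x - h)) / (2 * h)

def rhs(x):
    return 2 * (t * x * x - m * x ** (2 * m) + lam + 1) * w(x)

for x in (0.4, 0.7, 1.3):
    print(lhs(x), rhs(x))
```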
In \S \ref{sec:Freud4weight} we consider the moments of the generalised higher order Freud weight, obtaining a closed form expression for the first moment. The recurrence coefficients in the three term recurrence relation satisfied by polynomials orthogonal with respect to generalised higher order Freud weights are investigated in \S \ref{sec:rcoef}. We prove structure relations and mixed recurrence relations satisfied by generalised higher order Freud polynomials in \S \ref{sec:relations}. The asymptotic behaviour of the recurrence coefficients proved in \S \ref{sec:rcoef} determines the limiting distribution of the zeros and this, as well as other properties of the zeros, is investigated in \S \ref{sec:zeros}. We conclude with the quadratic decomposition of the generalised higher order Freud weight in \S \ref{sec:qdecomp}.
\section{Moments of the generalised higher order Freud weights}\label{sec:Freud4weight}
The existence of the first moment $\mu_0(t;\lambda,m)$ associated with the generalised higher order Freud weight \eqref{freudg} follows from the fact that, as $|x|\to\infty$, the integrand decays like $\exp(-x^{2m})$, while, at $x=0$, the integrand behaves like $|x|^{2\lambda+1}$ which, for $\lambda>-1$, is integrable.
\begin{lemma} \label{hypex}
Let $x\in \mathbb{R}, ~\lambda>-1,~ t\in \mathbb{R}$ and $m=2,3,\dots$. Then, for the generalised higher order Freud weight \eqref{freudg},
the first moment is given by
\begin{align*}\mu_0(t;\lambda,m) &=\int_{-\infty}^{\infty} |x|^{2\lambda+1}\exp(tx^2-x^{2m})\,{\rm d} x = \int_0^\infty s^{\lambda}\exp(ts-s^m)\, {\rm d} s \nonumber\\
&= \displaystyle \frac{1}{m}\sum_{k=1}^m\frac{t^{k-1}}{{(k-1)}!} \Gamma\left(\frac{\lambda+ k}{m}\right) \;\HyperpFq{2}{m}\left(\frac{\lambda+ k}{m},1;\frac km,\frac{k+1}{m},\dots,\frac{m+k-1}{m};\left(\frac{t}{m}\right)^m\right)\\
&=\displaystyle \frac{1}{m} \Gamma\left(\frac{\lambda+1}{m}\right) \;\HyperpFq{1}{m-1}\left(\frac{\lambda+1}{m};\frac 1m,\dots,\frac{m-1}{m};\left(\frac{t}{m}\right)^m\right)\\
&\qquad+\displaystyle \frac{1}{m} \sum_{k=2}^{m-1}\frac{t^{k-1}}{{(k-1)}!} \Gamma\left(\frac{\lambda+ k}{m}\right)\nonumber\\ &\qquad\qquad\times\HyperpFq{1}{m-1}\left(\frac{\lambda+ k}{m};\frac km,\frac{k+1}{m},\dots,\frac{m-1}{m},\frac{m+1}{m},\dots,\frac{m+k-1}{m};\left(\frac{t}{m}\right)^m\right)\nonumber \\
&\qquad+\displaystyle \frac{t^{m-1}}{m!} \Gamma\left(\frac{\lambda}{m}+1\right) \;\HyperpFq{1}{m-1}\left(\frac{\lambda}{m}+1;\frac{m+1}{m},\frac{m+2}{m},\dots,\frac{2m-1}{m};\left(\frac{t}{m}\right)^m\right),\nonumber
\end{align*}
where $\HyperpFq pq(a;b;z)$ is the generalised hypergeometric function (cf.~\cite[(16.2.1)]{DLMF}).
\end{lemma}\begin{proof}
Using the power series expansion of the exponential function, we obtain
\[ \begin{split}\mu_0(t;\lambda,m) &=\int_{-\infty}^{\infty} |x|^{2\lambda+1}\exp(tx^2-x^{2m})\,{\rm d} x = \int_0^\infty s^{\lambda}\exp(ts-s^m)\, {\rm d} s \\%[10pt]
&= \displaystyle \int_0^\infty s^{\lambda}\exp(-s^m)\sum_{n=0}^{\infty}\frac{(ts)^n}{n!}\, {\rm d} s\\
&= \displaystyle \sum_{n=0}^{\infty}\frac{t^n}{n!}\int_0^\infty s^{n+\lambda}\exp(-s^m)\, {\rm d} s\\
&= \displaystyle\frac 1m \sum_{n=0}^{\infty}\frac{t^n}{n!} \int_0^\infty y^{(n+\lambda-m+1)/m}\exp(-y)\, {\rm d} y\\
&= \displaystyle \frac 1m\sum_{n=0}^{\infty} \frac{t^n}{n!}\,\Gamma \left(\frac{\lambda+n+1}{m}\right),
\end{split}\] where $\Gamma(x)$ denotes the Gamma function defined in \cite[(5.2.1)]{DLMF}; the fourth equality follows from Lebesgue's Dominated Convergence Theorem, which justifies the interchange of summation and integration, and the fifth from the substitution $y=s^m$.
Letting $n=mk+j$ for $j=0,1,\dots,m-1$, we can write
\[ \mu_0(t;\lambda,m) =\displaystyle \frac 1m\sum_{k=0}^{\infty}\sum_{j=0}^{m-1}\Gamma\left(\frac{\lambda+j+1}{m}+k\right)\;\frac{t^{mk+j}}{(mk+j)!}.\] Using the Gauss multiplication formula \cite[(5.5.6)]{DLMF} yields
\[(mk+j)!=j!\,m^{mk} \prod_{\ell=1}^{m}\left(\frac{j+\ell}{m}\right)_k\]
where $(a)_k$ denotes the Pochhammer symbol (cf.~\cite[\S5.2(iii)]{DLMF}), while it follows from \cite[(5.5.1)]{DLMF} that
\[\Gamma\left(\frac{\lambda+j+1}{m}+k\right)=\left(\frac{\lambda+j+1}{m}\right)_k\Gamma\left(\frac{\lambda+j+1}{m}\right),\]
and hence we have
\[\begin{split} \mu_0(t;\lambda,m) &=\displaystyle \frac 1m\sum_{k=0}^{\infty}\sum_{j=0}^{m-1}\frac{\left(\frac{\lambda+j+1}{m}\right)_k\Gamma\left(\frac{\lambda+j+1}{m}\right)}{m^{mk}\left(\frac{j+1}{m}\right)_k\left(\frac{j+2}{m}\right)_k\cdots\left(\frac{j+m}{m}\right)_k}\,\frac{t^{mk+j}}{j!}\\
&=\displaystyle \frac 1m\sum_{j=0}^{m-1}\Gamma\left(\frac{\lambda+j+1}{m}\right)\frac{t^j}{j!}\sum_{k=0}^{\infty}\frac{\left(\frac{\lambda+j+1}{m}\right)_k}{\left(\frac{j+1}{m}\right)_k\left(\frac{j+2}{m}\right)_k\cdots\left(\frac{j+m}{m}\right)_k}\left(\frac{t}{m}\right)^{mk}\\
&= \displaystyle \frac 1m\sum_{j=0}^{m-1}\Gamma\left(\frac{\lambda+j+1}{m}\right)\frac{t^j}{j!}\;\HyperpFq{2}{m}\left(\frac{\lambda+j+1}{m},1;\frac{j+1}{m},\frac{j+2}{m},\dots,\frac{m+j}{m};\left(\frac{t}{m}\right)^m\right).
\end{split}\]
\end{proof}
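To illustrate Lemma \ref{hypex} in the simplest case, when $m=2$ only the terms $k=1,2$ arise and the ${}_2F_2$ functions reduce to Kummer functions, giving
\[\mu_0(t;\lambda,2)=\tfrac12\Gamma\left(\tfrac12(\lambda+1)\right)\HyperpFq{1}{1}\left(\tfrac12(\lambda+1);\tfrac12;\tfrac14t^2\right)
+\tfrac12t\,\Gamma\left(\tfrac12\lambda+1\right)\HyperpFq{1}{1}\left(\tfrac12\lambda+1;\tfrac32;\tfrac14t^2\right).\]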
\begin{remark}
In our earlier studies of semiclassical orthogonal polynomials, we proved special cases of Lemma \ref{momentDE} and Lemma \ref{hypex}, namely for $m=2$ in \cite{refCJK} and for $m=3,4,5$ in \cite{refCJ21b}.
\end{remark}
\begin{lemma}\label{momentDE} Let $x\in \mathbb{R}, ~\lambda>-1,~ t\in \mathbb{R}$ and $m=2,3,\dots$. Then, for the generalised higher order Freud weight \eqref{freudg},
the first moment
\[\mu_0(t;\lambda,m) =\int_{-\infty}^{\infty} |x|^{2\lambda+1}\exp(tx^2-x^{2m})\,{\rm d} x = \int_0^\infty s^{\lambda}\exp(ts-s^m)\, {\rm d} s, \]
satisfies the ordinary differential equation
\begin{equation}\label{eq6} m\deriv[m]{\varphi}{t} - t \deriv{\varphi}{t} - (\lambda+1)\,\varphi=0. \end{equation}
\end{lemma}
\begin{proof}
Following \cite{refMul77} and \cite{refCJ21b}, we look for a solution of \eqref{eq6} in the form
\begin{equation} \label{eq7}\varphi(t)=\int_0^\infty \e^{st} \,v(s)\,{\rm d} s.\end{equation}
In order for \eqref{eq7} to satisfy \eqref{eq6}, it is necessary that
\[\deriv[m]{\varphi}{t}-\frac{t}{m}\deriv{\varphi}{t}-\frac{\lambda+1}{m}\varphi=\int_0^\infty \e^{st} \left(s^m-\frac{ts}{m}-\frac{\lambda+1}{m}\right)v(s) \,{\rm d} s=0.\]
Using integration by parts, this is equivalent to
\[\int_0^\infty \e^{st} \left\{ s^mv(s) +\frac1m v(s)+\frac{s}m \deriv{v}{s}-\frac{\lambda+1}{m}v(s)\right\}{\rm d} s=0,\]
under the assumption that $\lim_{s\to\infty} sv(s)\e^{st}=0$ and $\lim_{s\to0^{+}} sv(s)\e^{st}=0$.
Hence, for $\varphi(t)$ to be a solution of \eqref{eq6}, we need to choose $v(s)$ so that
\[ (ms^m-\lambda)v(s)+s\deriv{v}{s}=0 .\] One solution of this equation is $v(s) = s^{\lambda}\exp(-s^m)$, for which \eqref{eq7} gives $\varphi(t)=\mu_0(t;\lambda,m)$.\end{proof}
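For example, when $m=2$, equation \eqref{eq6} reads $2\deriv[2]{\varphi}{t}-t\deriv{\varphi}{t}-(\lambda+1)\varphi=0$ and the substitution $z=\tfrac14t^2$, $\varphi(t)=f(z)$, transforms it into Kummer's equation
\[z\deriv[2]{f}{z}+\left(\tfrac12-z\right)\deriv{f}{z}-\tfrac12(\lambda+1)f=0,\]
in accordance with Lemma \ref{hypex}.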
For the generalised higher order Freud weight \eqref{freudg}, the even moments can be written in terms of derivatives of the first moment as follows:
\begin{align} \label{momd}\mu_{2k}(t;\lambda,m)&= \int_{-\infty}^{\infty} x^{2k} |x|^{2\lambda+1} \exp(tx^2-x^{2m})\,{\rm d} x \nonumber\\
&= \deriv[k]{}{t} \int_{-\infty}^{\infty} |x|^{2\lambda+1}\exp(tx^2-x^{2m})\,{\rm d} x\nonumber \\&=\deriv[k]{}{t}\mu_0(t;\lambda,m),\qquad k=0,1,2,\ldots\ ,\end{align}
where the interchange of integration and differentiation is justified by Lebesgue's Dominated Convergence Theorem.
Furthermore, from the definition we have
\begin{align} \label{momd2}\mu_{2k+2}(t;\lambda,m)&=
\mu_{2k}(t;\lambda+1,m),\qquad k=0,1,2,\ldots\ . \end{align}
\section{Recurrence coefficients for generalised higher order Freud weights}\label{sec:rcoef}
\begin{lemma}For the generalised higher order Freud weight \eqref{freudg}, the recurrence coefficient $\beta_{n}$ is given by
\begin{equation} \beta_{2n} = \deriv{}{t} \ln \frac{\mathcal{B}_{n}}{\mathcal{A}_{n}},\qquad
\beta_{2n+1}= \deriv{}{t} \ln\frac{\mathcal{A}_{n+1}}{\mathcal{B}_{n}}, \label{def:betant}
\end{equation} with $\mathcal{A}_0=\mathcal{B}_0=1$ and \begin{equation} \mathcal{A}_{n} =\operatorname{Wr}\left(\mu_0,\deriv{\mu_0}{t},\ldots,\deriv[n-1]{\mu_0}{t} \right),
\qquad \mathcal{B}_{n} =\operatorname{Wr}\left(\deriv{\mu_0}{t},\deriv[2]{\mu_0}{t},\ldots,\deriv[n]{\mu_0}{t} \right),
\label{def:AnBnW}\end{equation}
where
\[\mu_0= \mu_0(t;\lambda,m) =\int_{0}^{\infty} s^{\lambda}\exp(ts-s^m)\,{\rm d}{s},\]
and $\operatorname{Wr}(\varphi_1,\varphi_2,\ldots,\varphi_{n})$ denotes the Wronskian given by
$$\operatorname{Wr}(\varphi_1,\varphi_2,\ldots,\varphi_{n})= \left|\begin{matrix}
\varphi_1 & \varphi_2 & \ldots & \varphi_{n}\\
\varphi_1^{(1)} & \varphi_2^{(1)} & \ldots & \varphi_{n}^{(1)}\\
\vdots & \vdots & \ddots & \vdots \\
\varphi_1^{(n-1)} & \varphi_2^{(n-1)} & \ldots & \varphi_{n}^{(n-1)}
\end{matrix}\right|,\qquad \varphi_j^{(k)}=\deriv[k]{\varphi_j}{t}.$$
\end{lemma}
\begin{proof} It follows from \eqref{def:AnBn} and \eqref{momd} that $\mathcal{A}_{n}$ and $\mathcal{B}_{n}$ can be written in terms of the Wronskians given by \eqref{def:AnBnW}. Furthermore, \begin{align}\label{eq:dodgson} &\mathcal{A}_{n}\deriv{\mathcal{B}_{n}}{t}-\mathcal{B}_{n}\deriv{\mathcal{A}_{n}}{t}=\mathcal{A}_{n+1}\mathcal{B}_{n-1},\qquad
\mathcal{B}_{n}\deriv{\mathcal{A}_{n+1}}{t}-\mathcal{A}_{n+1}\deriv{\mathcal{B}_{n}}{t}=\mathcal{A}_{n}\mathcal{B}_{n+1}\end{align} (cf.~\cite[\S6.5.1]{refVeinDale}) and \eqref{eq:dodgson}, together with \eqref{def:betan}, yields \eqref{def:betant}.
\end{proof}
\begin{lemma} Let $\omega_0(x)$ be a symmetric positive weight on the real line for which all the moments exist and let $\omega(x;t)=\exp(tx^2)\,\omega_0(x)$, with $t\in\mathbb{R}$, be a weight for which all the moments exist. Then the recurrence coefficient $\beta_{n}(t)$ satisfies the Volterra, or Langmuir lattice, equation
\begin{equation} \nonumber\deriv{\beta_{n}}{t} = \beta_{n}(\beta_{n+1}-\beta_{n-1}). \end{equation}
\end{lemma}
\begin{proof} {See, for example, Van Assche \cite[Theorem 2.4]{refWVAbk}.}
\end{proof}
\begin{lemma} For the generalised higher order Freud weight \eqref{freudg}, the associated monic polynomials $P_{n}(x)$ satisfy the recurrence relation
\begin{equation} \label{eq:3rr} P_{n+1}(x) = xP_{n}(x) - \beta_{n}(t;\lambda) P_{n-1}(x),\qquad n=0,1,2,\ldots\ ,\end{equation}
with $P_{-1}(x)=0$ and $P_0(x)=1$, where
\begin{align*} \beta_{2n}(t;\lambda)
= \frac{\mathcal{A}_{n+1}(t;\lambda)\mathcal{A}_{n-1}(t;\lambda+1)}{\mathcal{A}_{n}(t;\lambda)\mathcal{A}_{n}(t;\lambda+1)}=\deriv{}{t}\ln \frac{\mathcal{A}_{n}(t;\lambda+1)}{\mathcal{A}_{n}(t;\lambda)},\\
\beta_{2n+1}(t;\lambda)
= \frac{\mathcal{A}_{n}(t;\lambda)\mathcal{A}_{n+1}(t;\lambda+1)}{\mathcal{A}_{n+1}(t;\lambda)\mathcal{A}_{n}(t;\lambda+1)}=\deriv{}{t}\ln \frac{\mathcal{A}_{n+1}(t;\lambda)}{\mathcal{A}_{n}(t;\lambda+1)},
\end{align*}
where $\mathcal{A}_{n}(t;\lambda)$ is the Wronskian given by \eqref{def:AnBnW}
with
\begin{align*}
\mu_0(t;\lambda,m)
&= \displaystyle \frac{1}{m}\sum_{k=1}^m\frac{t^{k-1}}{{(k-1)}!} \Gamma\left(\frac{\lambda+ k}{m}\right) \;\HyperpFq{2}{m}\left(\frac{\lambda+ k}{m},1;\frac km,\frac{k+1}{m},\dots,\frac{m+k-1}{m};\left(\frac{t}{m}\right)^m\right).
\end{align*}
\end{lemma}
\begin{proof}
It follows from substituting \eqref{momd2} into the expression for $\mathcal{B}_{n}(t;\lambda)$ given in \eqref{def:AnBnW} that $\mathcal{B}_{n}=\mathcal{A}_{n}(t;\lambda+1)$ and then the result immediately follows from \eqref{def:betan} and \eqref{def:betant}. \end{proof}
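For example, taking $n=0$ in the expression for $\beta_{2n+1}$ and using \eqref{momd2} gives
\[\beta_1(t;\lambda)=\frac{\mathcal{A}_{1}(t;\lambda+1)}{\mathcal{A}_{1}(t;\lambda)}=\frac{\mu_0(t;\lambda+1,m)}{\mu_0(t;\lambda,m)}=\frac{\mu_2(t;\lambda,m)}{\mu_0(t;\lambda,m)},\]
the familiar expression for the first recurrence coefficient of a symmetric weight.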
\subsection{Nonlinear recursive relations}
We follow the approach found in \cite[\S7]{refMaroni2} whose key results are summarised in \cite[Proposition 3.1]{Mar99}.
Note that for a given $m\geq 1$, we can write
\begin{equation}\label{x2mPn}
x^{2m} P_{n}(x)
= \sum_{\ell=-m}^{m} C^{(2m)}_{n,n+2 \ell} P_{n+2 \ell}(x),
\end{equation}
where
\[
C^{(2m)}_{n,n+2 \ell} = \frac{1}{h_{n+2 \ell}} \int_{-\infty}^\infty x^{2m} P_{n+2 \ell}(x)P_{n}(x)\,\omega(x)\,{\rm d} x
\quad \text{for} \quad \ell = -m, \ldots, m.
\]
Observe that
\(
C^{(2m)}_{n,n+k} = C^{(2m)}_{n+k,n}=0 \) for \( |k|\geq 2m+1 \)
and \[
C^{(2m)}_{n,n+2\ell} =\frac{h_{n}}{h_{n+2\ell}}C^{(2m)}_{n+2\ell,n}
= \frac{1}{\beta_{n+1}\cdots \beta_{n+2\ell}} C^{(2m)}_{n+2\ell,n} \, \quad \text{for} \quad \ell=1,\ldots, m .
\]
From the recurrence relation \eqref{eq:srr} it follows that
\begin{equation}\label{x2Pn}
x^2P_{n}(x) =
P_ {n+2}(x) + \left(\beta_{n}+\beta_ {n+1}\right) P_{n}(x)+\beta_ {n-1} \beta_{n} P_ {n-2}(x), \quad n\geq 0.
\end{equation}
In particular, one has $C_{n,n}^{(2)}=\beta_{n}+\beta_{n+1}$, $C_{n,n-2}^{(2)}=\beta_{n-1}\beta_{n}$ and $C_{n,n+2}^{(2)} = 1$. The computation of the coefficients \( \big\{C_{n-2\ell,n}^{(2m+2)}\big\}_{\ell=0}^{m+1}\) can be derived from the coefficients \( \big\{C_{n-2\ell,n}^{(2m)}\big\}_{\ell=0}^{m}\) as follows
\begin{equation}\label{Ckn2m}
C_{n-2\ell,n}^{(2m+2)}
= \beta_{n+2}\beta_{n+1} C_{n-2\ell,n+2}^{(2m)} + (\beta_{n}+\beta_{n+1}) C_{n-2\ell,n}^{(2m)}
+ C_{n-2\ell,n-2}^{(2m)} , \quad \ell= 0, \ldots , m,
\end{equation}
which is a direct consequence of \eqref{x2mPn} multiplied by \(x^2\) and \eqref{x2Pn}.
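As a consistency check, taking $m=1$ and $\ell=0$ in \eqref{Ckn2m}, together with $C_{n,n+2}^{(2)}=1$, $C_{n,n}^{(2)}=\beta_{n}+\beta_{n+1}$ and $C_{n,n-2}^{(2)}=\beta_{n-1}\beta_{n}$, gives
\[C_{n,n}^{(4)}=\beta_{n+1}\beta_{n+2}+\left(\beta_{n}+\beta_{n+1}\right)^2+\beta_{n-1}\beta_{n},\]
which agrees with the direct evaluation of $\displaystyle\frac{1}{h_{n}}\int_{-\infty}^\infty x^4P_{n}^2(x)\,\omega(x)\,{\rm d} x$ obtained by squaring \eqref{x2Pn}.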
\begin{proposition}\label{lm34} The recurrence coefficient $\beta_{n}$ for the generalised higher-order Freud weight \eqref{freudg}
satisfies the discrete equation
\begin{equation}\label{Vn2m}
2m V_{n\phantom{1}}^{(2m)} - 2t \beta_{n} = n+ (\lambda +\tfrac12) [1-(-1)^n],
\end{equation}
where
\begin{equation} \label{Vn2m exp}
V_{n\phantom{1}}^{(2m)}= C_{n,n-2}^{(2m-2)}+\beta_{n}C_{n,n}^{(2m-2)}.
\end{equation}
\end{proposition}
Alternatively, \eqref{Vn2m exp} can be written as
\[
V_{n\phantom{1}}^{(2m)}=
\frac{1}{h_{n-2}} \int_{-\infty}^\infty x^{2m-2} P_{n-2}(x)P_{n}(x)\,\omega(x)\,{\rm d}}\def\e{{\rm e}}\def\i{{\rm i} x
+\frac{\beta_{n}}{h_{n}} \int_{-\infty}^\infty x^{2m-2} P^2_{n}(x)\,\omega(x)\,{\rm d}}\def\e{{\rm e}}\def\i{{\rm i} x .
\]
\begin{proof}
For any monic polynomial sequence $\big\{P_{n}(x)\big\}_{n\geq0}$, one can always write
\[
x \deriv{P_{n}}{x}(x)= \sum_{j=0}^{n} \rho_{n,j} P_{n-j}(x), \quad \text{for} \quad n\geq 1,
\]
with $\rho_{n,0}=n$.
The assumption that $\big\{P_{n}(x)\big\}_{n\geq0}$ is orthogonal with respect to the semiclassical weight $\omega(x)$ satisfying the differential equation \eqref{w diff eq} with $\sigma(x)=x$ and $\tau(x)=2(tx^2-mx^{2m}+\lambda+1)$ gives, using integration by parts,
\begin{align*}
\rho_{n,j} h_{n-j}&= \int_{-\infty}^\infty x \deriv{P_{n}}{x}(x) P_{n-j}(x)\,\omega(x)\,{\rm d} x\\
&= - \int_{-\infty}^{\infty} \left\{ \tau(x)P_{n-j}(x) + x \deriv{P_{n-j}}{x}(x) \right\}
P_{n}(x)\omega(x)\,{\rm d} x,
\end{align*} where $\displaystyle h_k=\int_{-\infty}^{\infty}P_k^2(x)\,\omega(x)\,{\rm d} x>0.$
Therefore $\rho_{n,j} =0 $ for any $j\geq 2m+1$, while the symmetry of the weight implies \(\rho_{n,j}=0\) for any \(j\) odd. Hence we have
\begin{equation}\label{eq: struct Pn}
x \deriv{P_{n}}{x}(x) = \sum_{\ell=0}^{m} \rho_{n,2\ell}\, P_{n-2\ell}(x), \quad \text{for} \quad n\geq 0.
\end{equation}
Recall \eqref{x2Pn} to write
\[
\frac{1}{h_{n}}\int_{-\infty}^\infty x^2 P^2_{n}(x)\,\omega(x)\,{\rm d} x = (\beta_{n} + \beta_{n+1})
\quad \text{and} \quad
\frac{1}{h_{n-2}}\int_{-\infty}^\infty x^2 P_{n-2}(x)P_{n}(x)\,\omega(x)\,{\rm d} x = \beta_{n} \beta_{n-1}, \]
and hence
\begin{equation}\label{rhosys1}
\rho_{n,2\ell} =
\begin{cases}
\displaystyle
\frac{2m}{h_{n}} \int_{-\infty}^\infty x^{2m} P^2_{n}(x)\,\omega(x)\,{\rm d} x
- 2t (\beta_{n} + \beta_{n+1})
- \left(2\lambda+2 +n\right),
& \text{if} \quad \ell=0, \\[0.25cm]
\displaystyle \frac{2m}{h_{n-2}} \int_{-\infty}^\infty x^{2m} P_{n-2}(x) P_{n}(x)\,\omega(x)\,{\rm d} x
- 2t \, \beta_{n}\beta_{n-1},
& \text{if} \quad \ell=1, \\[0.25cm]
\displaystyle\frac{2m}{h_{n-2\ell}} \displaystyle \int_{-\infty}^\infty x^{2m} P_{n-2\ell}(x)P_{n}(x)\,\omega(x)\,{\rm d} x,
& \text{if} \quad 2 \leq \ell \leq m-1, \\[0.25cm]
\displaystyle {2m} \, \beta_{n} \cdots \beta_{n-2m+1},
& \text{if} \quad \ell=m, \\[0.25cm]
0, & \text{otherwise}.
\end{cases}
\end{equation}
Take \(\ell=0\) in \eqref{Ckn2m} and note that $C_{n,n-2}^{(2m-2)} = C_{n-2,n}^{(2m-2)} \beta_{n}\beta_{n-1}$ to get
\[
C_{n,n}^{(2m)}
= \beta_{n+1} \left( C_{n,n+2}^{(2m-2)} \beta_{n+2} + C_{n,n}^{(2m-2)} \right)
+ \beta_{n} \left( C_{n-2,n}^{(2m-2)} \beta_{n-1} + C_{n,n}^{(2m-2)} \right) .
\]
The recurrence relation \eqref{eq:srr} for the symmetric orthogonal polynomials implies that
\[
P_{n+2}(x)P_{n}(x)
= P_{n+1}^2(x)+ \beta_{n} P_{n-1}(x) P_{n+1}(x) - \beta_{n+1} P_{n}^2(x),
\]
which gives the relation
\begin{equation} \label{rel cns 2m}
C_{n,n+2}^{(2m-2)} \beta_{n+2} + C_{n,n}^{(2m-2)}
= C_{n-1,n+1}^{(2m-2)} \beta_{n} + C_{n+1,n+1}^{(2m-2)} ,
\end{equation}
and consequently we have
\begin{equation} \label{cnn2m}
C_{n,n}^{(2m)}
= V_{n+1}^{(2m)} + V_{n\phantom{1}}^{(2m)}
\quad \text{where}\quad
V_{n\phantom{1}}^{(2m)}=\beta_{n} \left( \beta_{n-1} C_{n-2,n}^{(2m-2)}+ C_{n,n}^{(2m-2)}\right).
\end{equation}
On the other hand, expressions for the coefficients \(\rho_{n,2\ell}\) can be obtained in a purely algebraic way, and therefore expressed recursively. To do so, we differentiate the recurrence relation \eqref{eq:srr} with respect to $x$ and use the structure relation \eqref{eq: struct Pn} to get
\[
P_{n}(x) + \sum_{\ell=0}^{m} \rho_{n,2\ell} P_{n-2\ell}(x)
= \deriv{P_{n+1}}{x}(x) + \beta_{n} \deriv{P_{n-1}}{x}(x).
\]
Multiplying the latter by $x$ and using \eqref{eq: struct Pn} again, followed by \eqref{eq:srr}, gives a linear combination of the polynomials $\big\{P_{n}(x)\big\}_{n\geq0}$:
\[
\begin{aligned}
P_{n+1}(x) &+ \beta_{n} P_{n-1}(x)
= \sum_{\ell=0}^{m+1} \left( \rho_{n+1,2\ell}- \rho_{n,2\ell}
+\beta_{n} \, \rho_{n-1,2\ell-2}- \beta_{n-2\ell+2}\,\rho_{n,2\ell-2}\right) P_{n-2\ell +1}(x).
\end{aligned}
\]
Since the polynomials are linearly independent, we equate the coefficients of $P_{n+1}, P_{n}, \ldots , P_{n-2m-1}$ to get
\begin{equation}\label{rhosys2}
\begin{cases}
\rho_{n,0} = n, \\
\rho_{n+1,2}- \rho_{n,2} = 2 \beta_{n}, \\
\rho_{n+1,2\ell}- \rho_{n,2\ell}
= \beta_{n-2\ell+2}\,\rho_{n,2\ell-2}- \beta_{n} \, \rho_{n-1,2\ell-2},
& \text{for}\quad \ell=2, \ldots , m,\\
\beta_{n-2m}\,\rho_{n,2m} = \beta_{n} \,\rho_{n-1,2m}, &\text{for}\quad \ell=m+1.
\end{cases}
\end{equation}
Combining \eqref{rhosys1} with \eqref{rhosys2}, the equation for \(\ell=0\) gives
\begin{equation*}
m V_{n+1}^{(2m)} + m V_{n\phantom{1}}^{(2m)}
- t (\beta_{n} + \beta_{n+1})
= n + \left(\lambda+1 \right) ,
\end{equation*}
that is, $u_{n}+u_{n+1}=2(n+\lambda+1)$ with $u_{n}=2mV_{n\phantom{1}}^{(2m)}-2t\beta_{n}$, and solving this first order difference equation yields \eqref{Vn2m}.
\end{proof}
The expressions for $V_{n\phantom{1}}^{(2m)}$ can then be obtained recursively using \eqref{Vn2m exp}, \eqref{Ckn2m}
and \eqref{cnn2m} to write
\begin{align}
V_{n\phantom{1}}^{(2m)} & = \beta_{n} \left( V^{(2 m -2)}_ {n+1}+V^{(2 m -2)}_{n}\right)
+\left( \beta_{n}+\beta_ {n+1}\right) V^{(2 m -2)}_{n}\nonumber\\
&\qquad
-\beta_{n} \left(\beta_{n}+\beta_ {n+1}\right)
\left( V_{n\phantom{1}}^{(2 m -4)}+ V_{ n +1}^{(2 m -4)}\right)
+\beta_{n} \beta_ {n-1} \left( V_ {n-2}^{(2 m -4)}
+ V_{ n -1}^{(2 m -4)}\right)\nonumber\\
&\qquad
+ \beta_{n} \beta_ {n-1} \beta_ {n+1} \beta_ {n+2}C^{(2m-4)}_{n -2,n +2}. \label{Vn2m exp1}
\end{align}
Combining \eqref{rel cns 2m} with \eqref{cnn2m} gives
\[ V_{n}^{(2m)} - V_{n-1\phantom{1}}^{(2m)}
= \beta_{n} \left(V_{n+1}^{(2m-2)} + V_{n}^{(2m-2)} \right) - \beta_{n-1} \left( V_{n-1\phantom{1}}^{(2m-2)} + V_{n-2}^{(2m-2)} \right).
\]
Using the latter relation, we replace the term \(\left( \beta_{n}+\beta_ {n+1}\right) V^{(2 m -2)}_{n}\) in \eqref{Vn2m exp1} to get
\begin{align}
V_{n\phantom{1}}^{(2m)} &= \beta_{n} \left( V_{n+1}^{(2m-2)}+V_{n\phantom{1}}^{(2m-2)}+V_{n-1}^{(2m-2)}\right) + \beta_{n+1} V_{n-1\phantom{1}}^{(2m-2)} \nonumber\\
&\qquad - \beta_{n+1}\beta_{n-1} \left( V_{n-1}^{(2m-4)}+V_{n-2\phantom{1}}^{(2m-4)}\right)+ \beta_{n+2}\beta_{n+1}\beta_{n}\beta_{n-1} C^{(2m-4)}_{n-2,n+2}.
\label{Vn2m exp2}
\end{align}
Replacing \(n\rightarrow n-1\) and \(m\rightarrow m-1\) in the latter expression, and substituting the result back into \eqref{Vn2m exp2}, yields
\begin{align*}
V_{n\phantom{1}}^{(2m)} &= \beta_{n} \left( V_{n+1}^{(2m-2)}+V_{n\phantom{1}}^{(2m-2)}+V_{n-1}^{(2m-2)}\right)
+
\beta_{n-1}\beta_{n+1} V_{n}^{(2m-4)}
+ \beta_{n} \beta_{n+1}V_{n-2\phantom{1}}^{(2m-4)}\\
&\qquad
- \beta_{n+1}\beta_{n}\beta_{n-2} \left( V_{n-2}^{(2m-6)}+V_{n-3\phantom{1}}^{(2m-6)}\right)
+ \beta_{n+1}\beta_{n}\beta_{n-1} \left(
\beta_{n+1}\beta_{n-2} C^{(2m-6)}_{n-3,n+1}
+ \beta_{n+2} C^{(2m-4)}_{n-2,n+2}\right).
\end{align*}
Replacing the term \(V_{n-2\phantom{1}}^{(2m-4)}\) by the corresponding expression given by the latter relation, and successively continuing the process, one can deduce the following expressions for \(V_{n\phantom{1}}^{(2m)}\):
\begin{align*}
V_{n\phantom{1}}^{(2)} &= \beta_{n},\\
V_{n\phantom{1}}^{(4)} &=V_{n\phantom{1}}^{(2)}\left(V_{n+1}^{(2)}+V_{n\phantom{1}}^{(2)}+V_{n-1}^{(2)}\right),\\
V_{n\phantom{1}}^{(6)}
&= V_{n\phantom{1}}^{(2)}\left( V_{n+1}^{(4)}+V_{n\phantom{1}}^{(4)} + V_{n-1}^{(4)} + V_{n+1}^{(2)}V_{n-1}^{(2)} \right).
\end{align*}
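For instance, the expression for $V_{n\phantom{1}}^{(4)}$ is readily confirmed from \eqref{Vn2m exp}: since $C_{n,n-2}^{(2)}=\beta_{n-1}\beta_{n}$ and $C_{n,n}^{(2)}=\beta_{n}+\beta_{n+1}$,
\[V_{n\phantom{1}}^{(4)}=\beta_{n-1}\beta_{n}+\beta_{n}\left(\beta_{n}+\beta_{n+1}\right)=\beta_{n}\left(\beta_{n-1}+\beta_{n}+\beta_{n+1}\right),\]
in agreement with $V_{n\phantom{1}}^{(2)}\left(V_{n+1}^{(2)}+V_{n\phantom{1}}^{(2)}+V_{n-1}^{(2)}\right)$ with $V_{n\phantom{1}}^{(2)}=\beta_{n}$.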
For higher orders we compute the coefficients $V_{n\phantom{1}}^{(2m)}$ recursively as stated below; we opt not to give the expressions in terms of $\beta_{n}$ since these are rather long. For $m=4,5$, we have
\begin{align*}
V_{n\phantom{1}}^{(8)} &= V_{n\phantom{1}}^{(2)}\left( V_{n+1}^{(6)}+V_{n\phantom{1}}^{(6)} + V_{n-1}^{(6)} \right)+V_{n\phantom{1}}^{(4)}V_{n+1}^{(2)}V_{n-1}^{(2)} +V_{n+1}^{(2)}V_{n\phantom{1}}^{(2)}V_{n-1}^{(2)}\left(V_{n+2}^{(2)}+V_{n-2}^{(2)} \right),\\
V_{n\phantom{1}}^{(10)}
&= V_{n\phantom{1}}^{(2)} \left(V^{(8)}_ {n+1}+V_{n\phantom{1}}^{(8)}+V^{(8)}_ {n-1}\right)+V_{n\phantom{1}}^{(6)} V^{(2)}_ {n+1} V^{(2)}_ {n-1}
+V^{(2)}_ {n+1} V_{n\phantom{1}}^{(2)} V^{(2)}_ {n-1} \left(V^{(4)}_{n+2}+V^{(4)}_{n-2}\right)\\
&\qquad +V^{(2)}_{n+1} V^{(2)}_{n\phantom{1}} V^{(2)}_{n-1} \left\{\left(V^{(2)}_{n\phantom{1}}+V^{(2)}_ {n-1}\right)V^{(2)}_ {n+2} +\left(V^{(2)}_ {n+1}+V^{(2)}_{n\phantom{1}}\right) V^{(2)}_ {n-2}+V^{(2)}_{n+2} V^{(2)}_{n-2} \right\}.
\end{align*}
\begin{remark}
For the case when $\lambda=-\tfrac12$, Proposition \ref{lm34} was proved by Benassi and Moro \cite{refBM20}, using a result in \cite{refBMX92}. Although it is straightforward to modify the proof presented therein to the case when $\lambda\not=-\tfrac12$, we have presented an alternative approach that depends purely on the structure relation of the semiclassical polynomials.
\end{remark}
When $m=2$ the discrete equation is
\begin{equation} 4\beta_{n} \big(\beta_{n-1} + \beta_{n} + \beta_{n+1}\big)-2t\beta_{n}=n+(\lambda+\tfrac12)[1-(-1)^n], \label{eq:rr42} \end{equation}
which is dP$_{\rm I}$
and when $m=3$ the discrete equation is
\begin{align}6\beta_{n} \big(\beta_{n-2} \beta_{n-1} &+ \beta_{n-1}^2 + 2 \beta_{n-1} \beta_{n} + \beta_{n-1} \beta_{n+1}
+ \beta_{n}^2 + 2 \beta_{n}\beta_{n+1} + \beta_{n+1}^2 + \beta_{n+1} \beta_{n+2}\big) \nonumber\\
&-2t\beta_{n}=n+(\lambda+\tfrac12)[1-(-1)^n],\label{eq:rr62}\end{align}
which is a special case of dP$_{\rm I}^{(2)}$, the second member of the discrete Pain\-lev\'e\ I hierarchy. For further information about the discrete Pain\-lev\'e\ I hierarchy, see \cite{refCJ99a,refCJ99b}. Equations \eqref{eq:rr42} and \eqref{eq:rr62} with $t=0$ were derived by Freud \cite{refFreud76}; see also \cite{refMagnus85,refWVAbk}.
Furthermore, equations \eqref{eq:rr42} and \eqref{eq:rr62} with $\lambda=-\tfrac12$ are also known as ``string equations'' and arise in important physical applications such as two-dimensional quantum gravity, cf.~\cite{refDS,refGMig,refFIK91,refFIK92,refPS}.
\subsection{Asymptotics for the recurrence coefficients as {$n\to\infty$}}
In 1976, Freud \cite{refFreud76} conjectured that the asymptotic behaviour of the recurrence coefficients $\beta_{n}$ in the recurrence relation \eqref{eq:srr} satisfied by monic polynomials $\{ p_{n} (x) \}_{n=0}^{\infty}$ orthogonal with respect to the weight
\begin{equation*}\omega(x) = |x|^{\rho}\exp(-|x|^{m}),\end{equation*}
with $x \in \mathbb{R}$, $\rho>-1$, $m>0$ could be described by
\begin{equation}\label{Freudconj}
\displaystyle \lim_{n\rightarrow \infty} \frac{\beta_{n}}{n^{2/m}}= \left[ \frac{\Gamma(\tfrac{1}{2}m)\, \Gamma(1+\tfrac{1}{2}m)}{\Gamma(m+1)}\right]^{2/m}.
\end{equation}
Freud stated the conjecture for orthonormal polynomials, proved it for $m=2,4,6$ and also showed that \eqref{Freudconj} is valid whenever the limit on the left-hand side exists. Magnus \cite{refMagnus85} proved Freud's conjecture for the case when $m$ is an even positive integer and also for weights
\begin{equation*}w(x)=\exp\{-Q(x)\}, \end{equation*}
where $Q(x)$ is an even degree polynomial with positive leading coefficient. We refer the reader to \cite[\S4.18]{refNevai86} for a detailed history of solutions to Freud's conjecture up to that point. The conjecture was settled by Lubinsky, Mhaskar and Saff in \cite{refLubinskyMS} as a special case of a more general result for recursion coefficients of exponential weights, see also \cite{refLubinskyMS86}. In \cite{refLB88}, Lubinsky and Saff introduced the class of {\it very smooth Freud weights}
of order $\alpha$ with conditions on $Q$ that are satisfied when $Q$ is of the form $x^{\alpha},$ $\alpha>0$. Associated with each weight in this class, one can define $a_{n}$ as the unique, positive root of the equation (cf.~\cite[p.\ 67]{refLubinskyMS} and references therein)
\begin{equation}\label{LMSnumber} n= \frac{2}{\pi} \displaystyle \int_{0}^{1} \frac{a_{n}s\, Q'(a_{n}s)}{\sqrt{1-s^2}}\,{\rm d}{s}.\end{equation}
\begin{theorem}\label{FC} Consider the generalised higher order Freud weight \eqref{freudg}. Then the recurrence coefficients $\beta_{n}$ associated with this weight satisfy
\begin{equation*}\label{eq415}\lim_{n\to \infty} \frac{\beta_{n}(t;\lambda)}{n^{1/m}} = \frac 14\left(\frac{(m-1)!}{(\tfrac12)_m}\right)^{\!1/m}.
\end{equation*}
\end{theorem}
\begin{proof}Let $Q(x)=\tfrac{1}{2}x^{2m}$, then evaluating \eqref{LMSnumber} yields
\begin{align*} n&= \frac{2ma_{n}^{2m}}{\pi}\displaystyle \int_{0}^{1} \frac{s^{2m}}{\sqrt{1-s^2}}\,{\rm d} {s}
= \frac{ma_{n}^{2m}}{\pi}\displaystyle \int_{0}^{1} \frac{u^{m-1/2}}{\sqrt{1-u}}\,{\rm d}{u}\\
&=\frac{ma_{n}^{2m}}{\pi}B(m+\tfrac12,\tfrac12)
=\frac{ma_{n}^{2m}}{\pi}\frac{\Gamma(m+\tfrac12)\,\Gamma{(\tfrac12)}}{\Gamma(m+1)}
=\displaystyle \frac{a_{n}^{2m}(\tfrac12)_m}{(m-1)!},\end{align*}
where $B(p,q)$ denotes the Beta function (cf.~\cite[(5.12.1)]{DLMF}). Hence $\displaystyle a_{n}^2=\left( \frac{(m-1)!}{(\tfrac12)_m}\, n\right)^{\!1/m}$ and the result follows from \cite[Theorem 2.3]{refLubinskyMS} taking $W(x)=\exp\{-Q(x)\}$, $w=|x|^{\lambda+1/2}$, $P(x)=\tfrac{1}{2}tx^2$ and $\Psi(x)=1$.
\end{proof}
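Note that Theorem \ref{FC} is consistent with Freud's conjecture \eqref{Freudconj} applied with exponent $2m$, since $(\tfrac12)_m=\dfrac{(2m)!}{4^m\,m!}$ gives
\[\left[\frac{\Gamma(m)\,\Gamma(1+m)}{\Gamma(2m+1)}\right]^{1/m}=\left[\frac{(m-1)!\,m!}{(2m)!}\right]^{1/m}=\frac14\left(\frac{(m-1)!}{(\tfrac12)_m}\right)^{\!1/m}.\]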
\begin{remark}
Taking $m=2$ in Theorem \ref{FC}, we recover \cite[Corollary 4.2 (ii)]{refCJ18} for the recurrence coefficients associated with the generalised quartic Freud weight $|x|^{2\lambda+1}\exp(tx^2-x^4)$ which satisfy
\[\lim_{n\to \infty} \frac{\beta_{n}(t;\lambda)}{\sqrt{n}} = \frac{1}{\sqrt{12}},\]
while, for $m=3$, the recurrence coefficients associated with the generalised sextic Freud weight $|x|^{2\lambda+1}\exp(tx^2-x^6)$ satisfy
\[\lim_{n\to \infty}\frac{\beta_{n}(t;\lambda)}{\sqrt[3]{n}} = \frac1{\sqrt[3]{60}},\] as shown in \cite[Corollary 4.8]{refCJ21b}.
\end{remark}
\section{Generalised higher order Freud polynomials}\label{sec:relations}
\subsection{Differential equations}
The second order differential equations satisfied by generalised higher order Freud polynomials can be obtained by using ladder operators as was done for the special cases $m=2$ and $m=3$ in \cite[Theorem 6]{refCJK} and \cite[Theorem 4.3]{refCJ21a}, respectively. An alternative approach is given by Maroni in \cite{refMaroni} and \cite{refMaroni3}.
\begin{proposition}
The polynomial sequence $\big\{P_{n}(x)\big\}_{n\geq 0}$ orthogonal with respect to the generalised higher order Freud weight \eqref{freudg} is a solution to the differential equation
\begin{equation*}
J(x;n) \deriv[2]{P_{n+1}}{x}(x) + K(x;n) \deriv{P_{n+1}}{x}(x) + L(x;n) P_{n+1}(x) =0,
\end{equation*}
where
\begin{align*}
& J(x;n) = x D_{n+1}(x) , \\
& K(x;n)= C_0(x) D_{n+1}(x) - x \deriv{D_{n+1}}{x}(x) + D_{n+1}(x), \\
& L(x;n) = \mathcal{W}\left(\tfrac{1}{2} (C_{n+1}(x) - C_0(x) ), D_{n+1}(x) \right) - D_{n+1}(x)\sum_{j=0}^n \frac{1}{\beta_j} D_j(x),
\end{align*}
with
\begin{align*}
& C_{n+1} (x) = - C_{n}(x) + \frac{2x}{\beta_{n}} D_{n}(x), \qquad
D_{n+1}(x) = -x + \frac{\beta_{n}}{\beta_{n-1}} D_{n-1}(x) + \frac{x^2}{\beta_{n}} D_{n}(x) - x C_{n}(x),
\end{align*}
subject to the initial conditions $C_0(x) = -1 + 2(tx^2-mx^{2m}+\lambda+1)$, $D_{-1}(x) = 0$ and
\[D_0(x) = 2x \left\{m \sum_{j=1}^{m} \mu_{2j-2}(t,\lambda) x^{2m-2j}
-t \mu_0(t,\lambda)\right\}.
\]
\end{proposition}
\subsection{Mixed recurrence relations}
\begin{lemma}\label{mrec} Let $\big\{P_{n}(x;\lambda)\big\}_{n=0}^{\infty}$ be the sequence of monic generalised higher order Freud polynomials orthogonal with respect to the weight \eqref{freudg}, then, for $m,n$ fixed,
\begin{subequations}
\begin{align}\label{rreven}
xP_{2n}(x;\lambda+1)&=P_{2n+1}(x;\lambda), \\%[5pt]
\label{rrodd}
x^2P_{2n-1}(x;\lambda+1)&=xP_{2n}(x;\lambda)-\left\{\beta_{2n}(\lambda)+\displaystyle \frac{P_{2n+1}'(0;\lambda)}{P_{2n-1}'(0;\lambda)}\right\}P_{2n-1}(x;\lambda).
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}Let $P_{n}(x;\lambda+1)$ be the polynomials associated with the even weight function
\begin{align*}\omega(x;\lambda+1)&=|x|^{2\lambda+3}\exp\big(tx^2-x^{2m}\big)
=x^2\omega(x;\lambda), \quad m=2,3,\dots\ .\end{align*}
The factor $x^2$ by which the weight $\omega(x;\lambda)$ is modified has a double zero at the origin and therefore Christoffel's formula (cf.~\cite[Theorem 2.5]{refSzego}), applied to the monic polynomials $P_{n}(x;\lambda+1)$, is
\[
x^2P_{n}(x;\lambda+1)=\frac{1}{P_{n}(0;\lambda)P_{n+1}'(0;\lambda)-P_{n}'(0;\lambda)P_{n+1}(0;\lambda)}\left|\begin{matrix} P_{n}(x;\lambda) & P_{n+1}(x;\lambda) & P_{n+2}(x;\lambda)\\
P_{n}(0;\lambda) & P_{n+1}(0;\lambda) & P_{n+2}(0;\lambda)\\
P_{n}'(0;\lambda) & P_{n+1}'(0;\lambda) & P_{n+2}'(0;\lambda)\\\end{matrix}\right|.\]
Since the weight $\omega(x;\lambda)$ is even, we have that $P_{2n+1}(0;\lambda)=P_{2n}'(0;\lambda)=0$ while $P_{2n}(0;\lambda)\neq0$ and $P_{2n+1}'(0;\lambda)\neq0$, hence
\[
x^2P_{n}(x;\lambda+1)=\frac{-1}{ P_{n}'(0;\lambda)P_{n+1}(0;\lambda)}\left|\begin{matrix} P_{n}(x;\lambda) & P_{n+1}(x;\lambda) & P_{n+2}(x;\lambda)\\
0 & P_{n+1}(0;\lambda) &0\\
P_{n}'(0;\lambda) & 0 & P_{n+2}'(0;\lambda)\\\end{matrix}\right|,
\] for $n$ odd, while, for $n$ even,
\[
x^2P_{n}(x;\lambda+1)=\frac{1}{P_{n}(0;\lambda)P_{n+1}'(0;\lambda) }\left|\begin{matrix} P_{n}(x;\lambda) & P_{n+1}(x;\lambda) & P_{n+2}(x;\lambda)\\
P_{n}(0;\lambda) & 0& P_{n+2}(0;\lambda)\\
0 & P_{n+1}'(0;\lambda) &0\\\end{matrix}\right|.
\]
This yields
\begin{equation}\label{even}
x^2P_{n}(x;\lambda+1)=P_{n+2}(x;\lambda)-a_{n}P_{n}(x;\lambda),
\end{equation} where
\begin{equation} \nonumber a_{n}=\begin{cases} \displaystyle\frac{P_{n+2}(0;\lambda)}{P_{n}(0;\lambda)},\quad &\text{for}\quad n\quad\text{even},\\[6pt]
\displaystyle\frac{P_{n+2}'(0;\lambda)}{P_{n}'(0;\lambda)},\quad &\text{for}\quad n\quad\text{odd}.\end{cases}\end{equation} Using the three-term recurrence relation \eqref{eq:srr} to eliminate $P_{n+2}(x;\lambda)$ in \eqref{even}, we obtain
\begin{align*}
x^2P_{n}(x;\lambda+1)=xP_{n+1}(x;\lambda)-(\beta_{n+1}(\lambda)+a_{n})P_{n}(x;\lambda).
\end{align*} It follows from \eqref{bodd} that, for $n$ even, $\beta_{n+1}(\lambda)+a_{n}=0$, so that dividing by $x$ yields \eqref{rreven}, while, for $n$ odd, the identity is precisely \eqref{rrodd}.
\end{proof}
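Lemma \ref{mrec} can be verified numerically. The sketch below is illustrative only: the quadrature grid, the truncation of the integrals to $[-3,3]$ and the parameter values $\lambda=\tfrac12$, $t=1$, $m=3$ are ad hoc choices. It builds the monic polynomials by the discretised Stieltjes procedure and checks \eqref{rreven} and \eqref{rrodd} for $2n=4$.

```python
import numpy as np

def monic_freud(lam, t=1.0, m=3, nmax=7):
    """Monic OPs for |x|^(2*lam+1)*exp(t*x^2 - x^(2m)) via the discretised
    Stieltjes procedure; alpha_n = 0 since the weight is even."""
    x = np.linspace(-3.0, 3.0, 200001)   # the weight is negligible beyond |x| = 3
    dx = x[1] - x[0]
    w = np.abs(x) ** (2 * lam + 1) * np.exp(t * x**2 - x ** (2 * m))
    P = [np.poly1d([1.0]), np.poly1d([1.0, 0.0])]
    h = [np.sum(w) * dx]                 # h_0 = mu_0
    beta = [np.nan]                      # beta_0 is not used below
    for n in range(1, nmax):
        h.append(np.sum(P[n](x) ** 2 * w) * dx)
        beta.append(h[n] / h[n - 1])
        P.append(np.poly1d([1.0, 0.0]) * P[n] - beta[n] * P[n - 1])
    return P, beta

lam = 0.5
P0, beta0 = monic_freud(lam)        # parameter lambda
P1, _ = monic_freud(lam + 1)        # parameter lambda + 1

# (rreven) with 2n = 4:  x P_4(x; lam+1) = P_5(x; lam)
lhs_even = np.poly1d([1.0, 0.0]) * P1[4]
assert np.allclose(lhs_even.coeffs, P0[5].coeffs, atol=1e-6)

# (rrodd) with 2n = 4:  x^2 P_3(x; lam+1)
#   = x P_4(x; lam) - (beta_4(lam) + P_5'(0; lam)/P_3'(0; lam)) P_3(x; lam)
a4 = P0[5].deriv()(0) / P0[3].deriv()(0)
lhs_odd = np.poly1d([1.0, 0.0, 0.0]) * P1[3]
rhs_odd = np.poly1d([1.0, 0.0]) * P0[4] - (beta0[4] + a4) * P0[3]
assert np.allclose(lhs_odd.coeffs, rhs_odd.coeffs, atol=1e-6)
```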
\begin{lemma}\label{qoeq}
For a fixed $m=2,3,\dots$, let $\big\{P_{n}(x;\lambda)\big\}_{n=0}^{\infty}$ be the sequence of monic generalised higher order Freud polynomials orthogonal with respect to the weight \eqref{freudg}. Then, for $n$ fixed,
\begin{subequations}
\begin{align}\label{qorodd}
P_{2n+1}(x;\lambda)&=P_{2n+1}(x;\lambda+1)+\beta_{2n}(\lambda+1)P_{2n-1}(x;\lambda+1),\\\label{qoreven}
P_{2n}(x;\lambda)&=P_{2n}(x;\lambda+1)-\frac{\beta_{2n}(\lambda)\beta_{2n-1}(\lambda+1)P_{2n-1}'(0;\lambda)}{P_{2n+1}'(0;\lambda)}P_{2n-2}(x;\lambda+1).
\end{align}
\end{subequations}
\end{lemma}
\begin{proof}
Substitute \eqref{rreven} into the three-term recurrence relation \begin{equation}\label{nrr}P_{2n+1}(x;\lambda)=xP_{2n}(x;\lambda)-\beta_{2n}(\lambda)P_{2n-1}(x;\lambda),\end{equation} to eliminate $P_{2n+1}(x;\lambda)$ and obtain
\[xP_{2n}(x;\lambda+1)=xP_{2n}(x;\lambda)-\beta_{2n}(\lambda)P_{2n-1}(x;\lambda).\] Let $\displaystyle a_{2n}=\frac{P_{2n+1}'(0;\lambda)}{P_{2n-1}'(0;\lambda)}$. Next, use \eqref{rrodd} to eliminate $P_{2n-1}(x;\lambda)$ in this identity and obtain
\begin{align}\label{nn}xP_{2n}(x;\lambda+1)&=xP_{2n}(x;\lambda)- \frac{\beta_{2n}(\lambda)}{\beta_{2n}(\lambda)+a_{2n}}\left(xP_{2n}(x;\lambda)-x^2P_{2n-1}(x;\lambda+1)\right)
.\end{align} Simplification and rearrangement of terms in \eqref{nn} yields \[\left(1-\frac{\beta_{2n}(\lambda)}{\beta_{2n}(\lambda)+a_{2n}}\right)P_{2n}(x;\lambda) = P_{2n}(x;\lambda+1)-\frac{\beta_{2n}(\lambda)}{\beta_{2n}(\lambda)+a_{2n}}xP_{2n-1}(x;\lambda+1),\] then, using the three-term recurrence relation to eliminate $xP_{2n-1}(x;\lambda+1)$, we obtain
\begin{align*}\left(1-\frac{\beta_{2n}(\lambda)}{\beta_{2n}(\lambda)+a_{2n}}\right)P_{2n}(x;\lambda)&= \left(1-\frac{\beta_{2n}(\lambda)}{\beta_{2n}(\lambda)+a_{2n}}\right)P_{2n}(x;\lambda+1)\\& \qquad-\frac{\beta_{2n}(\lambda)}{\beta_{2n}(\lambda)+a_{2n}}\,\beta_{2n-1}(\lambda+1)P_{2n-2}(x;\lambda+1),\end{align*} which simplifies to \eqref{qoreven}.
Substituting \eqref{rreven} into the three-term recurrence relation \[P_{2n+1}(x;\lambda+1)=xP_{2n}(x;\lambda+1)-\beta_{2n}(\lambda+1)P_{2n-1}(x;\lambda+1),\] yields \eqref{qorodd}.
\end{proof}
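The identities in Lemma \ref{qoeq} admit the same kind of numerical check. The sketch below (again with the ad hoc choices $\lambda=\tfrac12$, $t=1$, $m=3$ and a truncated quadrature grid) verifies \eqref{qorodd} and \eqref{qoreven} for $2n=4$.

```python
import numpy as np

def monic_freud(lam, t=1.0, m=3, nmax=7):
    # monic OPs for |x|^(2*lam+1)*exp(t*x^2 - x^(2m)), Stieltjes procedure
    x = np.linspace(-3.0, 3.0, 200001)
    dx = x[1] - x[0]
    w = np.abs(x) ** (2 * lam + 1) * np.exp(t * x**2 - x ** (2 * m))
    P = [np.poly1d([1.0]), np.poly1d([1.0, 0.0])]
    h = [np.sum(w) * dx]
    beta = [np.nan]
    for n in range(1, nmax):
        h.append(np.sum(P[n](x) ** 2 * w) * dx)
        beta.append(h[n] / h[n - 1])
        P.append(np.poly1d([1.0, 0.0]) * P[n] - beta[n] * P[n - 1])
    return P, beta

lam = 0.5
P0, beta0 = monic_freud(lam)
P1, beta1 = monic_freud(lam + 1)

# (qorodd), 2n+1 = 5:  P_5(lam) = P_5(lam+1) + beta_4(lam+1) P_3(lam+1)
rhs = P1[5] + beta1[4] * P1[3]
assert np.allclose(P0[5].coeffs, rhs.coeffs, atol=1e-6)

# (qoreven), 2n = 4:
# P_4(lam) = P_4(lam+1) - beta_4(lam)*beta_3(lam+1)*P_3'(0;lam)/P_5'(0;lam)*P_2(lam+1)
coef = beta0[4] * beta1[3] * P0[3].deriv()(0) / P0[5].deriv()(0)
rhs = P1[4] - coef * P1[2]
assert np.allclose(P0[4].coeffs, rhs.coeffs, atol=1e-6)
```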
\subsection{Quasi-orthogonality for {$\lambda\in(-2,-1)$}}
Lemma \ref{qoeq} yields the quasi-orthogonality of generalised higher order Freud polynomials for $-2<\lambda<-1$.
\begin{theorem} Suppose $-2<\lambda<-1$. For each fixed $m=2,3,\dots$, the generalised higher order Freud polynomial $P_{n}(x;\lambda)$ is quasi-orthogonal of order $2$ on $\mathbb{R}$ with respect to the weight \begin{equation}\nonumber |x|^{2\lambda+3}\exp\big(tx^2-x^{2m}\big),\qquad t \in\mathbb{R}.\end{equation}
\end{theorem}
\begin{proof}
Suppose $-2<\lambda<-1$, then $\lambda+1>-1$.
When $n$ is even, we have from \eqref{qoreven} that \begin{align}\nonumber\int_{-\infty}^{\infty} &x^kP_{n}(x;\lambda)\,\,|x|^{2\lambda+3}\exp\big(tx^2-x^{2m}\big)\,{\rm d}x \\&\label{oreven}= \int_{-\infty}^{\infty} x^kP_{n}(x;\lambda+1)\,\,|x|^{2\lambda+3}\exp\big(tx^2-x^{2m}\big)\,{\rm d}x\\&\qquad-\frac{\beta_{n}(\lambda)\beta_{n-1}(\lambda+1)P_{n-1}'(0;\lambda)}{P_{n+1}'(0;\lambda)}\int_{-\infty}^{\infty} x^kP_{n-2}(x;\lambda+1)\,\,|x|^{2\lambda+3}\exp\big(tx^2-x^{2m}\big)\,{\rm d}x, \nonumber
\end{align}
while, for $n$ odd, it follows from \eqref{qorodd} that\begin{align}
\nonumber\int_{-\infty}^{\infty} &x^kP_{n}(x;\lambda)\,\,|x|^{2\lambda+3}\exp\big(tx^2-x^{2m}\big)\,{\rm d}x \\&\label{orodd}= \int_{-\infty}^{\infty} x^kP_{n}(x;\lambda+1)\,\,|x|^{2\lambda+3}\exp\big(tx^2-x^{2m}\big)\,{\rm d}x\\&\qquad+\beta_{n}(\lambda+1)\int_{-\infty}^{\infty} x^kP_{n-2}(x;\lambda+1)\,\,|x|^{2\lambda+3}\exp\big(tx^2-x^{2m}\big)\,{\rm d}x. \nonumber
\end{align} Since $\lambda+1>-1$, it follows from the orthogonality of the generalised higher order Freud polynomials that \[\int_{-\infty}^{\infty} x^kP_{n}(x;\lambda+1)\,\,|x|^{2\lambda+3}\exp\big(tx^2-x^{2m}\big)\,{\rm d}x=0\qquad\text{for}\quad k=0,\dots,n-1,\] and we see that all the integrals on the right-hand side of \eqref{oreven} and \eqref{orodd} are equal to zero for $k=0,\dots,n-3$.
\end{proof}
\section{Zeros of generalised higher order Freud polynomials}\label{sec:zeros}
\subsection{Asymptotic zero distribution}
The asymptotic behaviour of the recurrence coefficients of generalised higher order Freud polynomials orthogonal with respect to \eqref{freudg}, satisfying Freud's conjecture, given by \eqref{Freudconj}, is independent of the values of $t$ and $\lambda$. The asymptotic behaviour implies that the recurrence coefficients are regularly varying, irrespective of $t$ and $\lambda$. To consider the asymptotic distribution of the zeros of generalised higher order Freud polynomials orthogonal with respect to the weight \eqref{freudg} as $n\to \infty$, we use an appropriate scaling and apply the property of regular variation as detailed in \cite{refKVA1999}.
\begin{theorem}\label{realasymp} Let $\phi(n)=n^{1/(2m)}$ and assume that $n,N$ tend to infinity in such a way that the ratio $n/N\to \ell$. Then, for the sequence of scaled monic polynomials $P_{n,N}(x)=(\phi(N))^{-n}P_{n}(\phi(N)x)$ associated with the generalised higher order Freud weight \eqref{freudg}, the asymptotic zero distribution, as $n\to \infty$, has density \begin{equation}\label{dens}a_m(\ell)=\frac{2m}{c\pi (2m-1)}\left(1-{x^2}/{c^2}\right)^{\!1/2}{}_2F_1\left(1,1-m;\tfrac32-m;{x^2}/{c^2}\right),\end{equation} where \begin{equation*}c=2a\ell^{1/(2m)} \quad \text{with} \quad a= \tfrac12\left(\frac{(m-1)!}{\!(\tfrac12)_m}\right)^{\!1/(2m)},\end{equation*} defined on the interval
$(-2 a \ell^{1/(2m)},2a\ell^{1/(2m)})$.\end{theorem}
\begin{proof} The scaled monic polynomials $P_{n,N}(x)=(\phi(N))^{-n}P_{n}(\phi(N)x)$ associated with the generalised higher order Freud weight \eqref{freudg} have recurrence coefficients $\beta_{n,N}(t;\lambda)=\frac{\beta_{n}(t;\lambda)}{\left(\phi(N)\right)^2}$. Since $\phi:\mathbb{R}^+\to \mathbb{R}^+$ and, for every $\ell>0$, we have \[\lim_{x\to\infty}\frac{\phi(x\ell)}{\phi(x)}=\ell^{1/(2m)},\] $\phi$ is regularly varying at infinity with exponent of variation $\tfrac{1}{2m}$ (cf.~\cite{refVAG1989}). Since it follows from \eqref{eq415} that \[\lim_{n/N\to \ell}\sqrt{\beta_{n,N}(t;\lambda)}=\lim_{n/N\to\ell}\frac{\sqrt{\beta_{n}(t;\lambda)}}{\phi(n)}\frac{\phi(n)}{\phi(N)}=a\ell^{1/(2m)},\] the recurrence coefficients $\beta_{n,N}(t;\lambda)$ are regularly varying at infinity with index $\tfrac{1}{2m}$ (cf.~\cite[Section 4.5]{refKVA1999}). From the property of regular variation, using \cite[Theorem 1.4]{refKVA1999}, it follows that the asymptotic zero distribution has density \begin{align*}\frac {1}{\pi \ell}\int_0^{\ell}s^{-1/(2m)}&\left(2a-x s^{-1/(2m)}\right)^{\!-1/2}\left(2a+x s^{-1/(2m)}\right)^{\!-1/2}\,{\rm d}s\\&=\frac{m}{a\pi \ell}\int_0^{\ell^{1/(2m)}}y^{2m-2}\left(1-\left(\frac{x}{2ay}\right)^{\!2} \right)^{\!-1/2}\,{\rm d}y\nonumber\\&=\frac{m}{a\pi\ell}\int_0^{\ell^{1/(2m)}}y^{2m-2}\sum_{k=0}^{\infty}\frac{\left(\tfrac12\right)_k}{k!}\left(\frac{x}{2a y}\right)^{\!2k}\,{\rm d}y\nonumber
\\&=\frac{m}{a\pi\ell}\sum_{k=0}^{\infty}\frac{\left(\tfrac12\right)_k}{k!}\left(\frac{x}{2a}\right)^{\!2k}\int_0^{\ell^{1/(2m)}}y^{2m-2k-2}\,{\rm d}y\nonumber
\\&=\frac{m}{a\pi\ell^{1/(2m)}}\sum_{k=0}^{\infty}\frac{\left(\tfrac12\right)_k}{k!}\frac{1}{2m-2k-1}\left(\frac{x}{2a\ell^{1/(2m)}}\right)^{\!2k}\nonumber\\&=\frac{m}{a\pi\ell^{1/(2m)}(2m-1)}\sum_{k=0}^{\infty}\frac{\left(\tfrac12\right)_k\left(\tfrac12-m\right)_k}{\left(\tfrac 32-m\right)_k k!}\left(\frac{x}{2a\ell^{1/(2m)}}\right)^{\!2k}\nonumber
\\&=\frac{m}{a\pi(2m-1)\ell^{1/(2m)}}{}_2F_1\left(\tfrac12,\tfrac12-m;\tfrac32-m;\left(\frac{x}{2a\ell^{1/(2m)}}\right)^{\!2}\right).
\end{align*}
\end{proof}
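Since $(1-m)_k$ vanishes for $k\geq m$, the ${}_2F_1$ in \eqref{dens} is a polynomial of degree $m-1$ in $x^2/c^2$, so the density is elementary. As an illustrative consistency check (not part of the proof), the following sketch integrates \eqref{dens} over $(-c,c)$ and confirms that it is a probability density for $m=2$ and $m=3$ with $\ell=1$.

```python
import math
import numpy as np

def poch(a, k):
    # Pochhammer symbol (a)_k
    out = 1.0
    for j in range(k):
        out *= a + j
    return out

def endpoint(m, ell=1.0):
    a = 0.5 * (math.factorial(m - 1) / poch(0.5, m)) ** (1.0 / (2 * m))
    return 2 * a * ell ** (1.0 / (2 * m))          # c = 2 a ell^(1/(2m))

def density(x, m, ell=1.0):
    # a_m(ell) of the theorem; the 2F1 terminates because (1-m)_k = 0 for k >= m
    c = endpoint(m, ell)
    z = (x / c) ** 2
    F = sum(poch(1, k) * poch(1 - m, k) / (poch(1.5 - m, k) * math.factorial(k)) * z ** k
            for k in range(m))
    return 2 * m / (c * math.pi * (2 * m - 1)) * np.sqrt(np.clip(1 - z, 0, None)) * F

for m in (2, 3):
    c = endpoint(m)
    # substitute x = c*sin(theta) so the integrand is smooth at the endpoints
    theta = np.linspace(-math.pi / 2, math.pi / 2, 40001)
    vals = density(c * np.sin(theta), m) * c * np.cos(theta)
    total = np.sum((vals[1:] + vals[:-1]) / 2 * np.diff(theta))
    assert abs(total - 1.0) < 1e-6   # the asymptotic zero distribution has total mass 1
```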
Figure \ref{density} shows the zeros and the asymptotic distribution according to Theorem \ref{realasymp}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=2.5in]{zeros}
\caption{\label{density}The zeros of $P_{n,N}(x)$ ({\color{red}{red}}) for $\lambda =0.5$, $t=1$, $m=3$, $n=N=10$ and $\ell=1$ with the corresponding limiting distribution (\ref{dens}) ({\color{dkb}{blue}}) and endpoints $(-2a,0)$ and $(2a ,0)$ ({\color{dkg}{green}}).}
\end{center}
\end{figure}
Figure \ref{figure2} shows the asymptotic distribution of zeros according to Theorem \ref{realasymp} for various values of $\ell$.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=2.5in]{Density_functions}
\caption{\label{figure2}The limiting distribution of the zeros $a_3(\ell)$ for $\ell =0.5$ ({\color{dkg}{green}}), $\ell=1$ ({\color{dkb}{blue}}) and $\ell=2$ ({\color{red}{red}}).}
\end{center}
\end{figure}
\remark Note that the formula on \cite[p. 189, line 22]{refKVA1999} should be $\displaystyle\frac 1t\int_0^t\frac{1}{s^{\lambda}}\omega'_{[b-2a,b+2a]}(xs^{-\lambda})\,{\rm d}s$.
\subsection{Bounds for the extreme zeros} From the three-term recurrence relation \eqref{eq:3rr}, we obtain bounds for the extreme zeros of monic generalised higher order Freud polynomials.
\begin{theorem} For each $n=2,3,\dots,$ the largest zero, $x_{1,n}$, of monic generalised higher order Freud polynomials $P_{n}(x)$ orthogonal with respect to the weight \eqref{freudg}, satisfies
\[0<x_{1,n} <\max_{1\leq k\leq n-1}\sqrt{c_{n}\beta_k(t;\lambda)},\]
where $\displaystyle c_{n}=4\cos^2\left(\frac{\pi}{n+1}\right)+\varepsilon$, $\varepsilon>0$.
\end{theorem}
\begin{proof}
The upper bound for the largest zero $x_{1,n}$ follows by applying \cite[Theorems 2 and 3]{refIsmailLi}, based on the Wall--Wetzel Theorem, to the three-term recurrence relation \eqref{eq:3rr}.
\end{proof}
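Since the zeros of the monic $P_n$ are the eigenvalues of the $n\times n$ Jacobi matrix with zero diagonal and off-diagonal entries $\sqrt{\beta_k}$, the bound is easy to test numerically. The sketch below (with the illustrative choices $\lambda=\tfrac12$, $t=1$, $m=3$, $n=6$ and a truncated quadrature grid) computes the recurrence coefficients by the discretised Stieltjes procedure and compares the largest zero with the bound.

```python
import numpy as np

lam, t, m, n = 0.5, 1.0, 3, 6
x = np.linspace(-3.0, 3.0, 200001)
dx = x[1] - x[0]
w = np.abs(x) ** (2 * lam + 1) * np.exp(t * x**2 - x ** (2 * m))

# recurrence coefficients beta_1, ..., beta_{n-1} (Stieltjes procedure, alpha_k = 0)
P = [np.poly1d([1.0]), np.poly1d([1.0, 0.0])]
h = [np.sum(w) * dx]
beta = []
for k in range(1, n):
    h.append(np.sum(P[k](x) ** 2 * w) * dx)
    beta.append(h[k] / h[k - 1])
    P.append(np.poly1d([1.0, 0.0]) * P[k] - beta[-1] * P[k - 1])

# zeros of P_n = eigenvalues of the Jacobi matrix with off-diagonals sqrt(beta_k)
J = np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
x1n = np.max(np.linalg.eigvalsh(J))

eps = 1e-8
cn = 4 * np.cos(np.pi / (n + 1)) ** 2 + eps
bound = np.max(np.sqrt(cn * np.array(beta)))
assert 0 < x1n < bound
```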
\subsection{Monotonicity of the zeros}
\begin{theorem}
\label{mono1} Consider $0<x_{\lfloor n/2 \rfloor,n}<\dots<x_{2,n} <x_{1,n} $, the positive zeros of the monic polynomials $P_{n}(x)$ orthogonal with respect to the generalised higher order Freud weight \eqref{freudg}, where $\lfloor k \rfloor$ denotes the largest integer less than or equal to $k$.
Then, for $\lambda>-1$, $t\in \mathbb{R}$ and for a fixed value of $\nu\in \{1,2,\dots, \lfloor n/2 \rfloor\}$, the $\nu$-th zero $x_{\nu,n}$ increases when (i) $\lambda$ increases and (ii) $t$ increases.
\end{theorem}
\begin{proof} This follows from \cite[Lemma 4.5]{refCJ21a}, taking $C(x)=x$, $D(x)=x^2$, $\rho=2\lambda+1$ and $\omega_0(x)=\exp(-x^{2m})$.
\end{proof}
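The monotonicity can be observed numerically: the sketch below (illustrative parameters and the same quadrature scheme as before) computes the largest zero of $P_6$ via the Jacobi matrix for nearby parameter values and checks that it increases in both $\lambda$ and $t$.

```python
import numpy as np

def largest_zero(lam, t, m=3, n=6):
    # largest zero of P_n for |x|^(2*lam+1)*exp(t*x^2 - x^(2m)):
    # top eigenvalue of the Jacobi matrix built from the Stieltjes procedure
    x = np.linspace(-3.0, 3.0, 200001)
    dx = x[1] - x[0]
    w = np.abs(x) ** (2 * lam + 1) * np.exp(t * x**2 - x ** (2 * m))
    P = [np.poly1d([1.0]), np.poly1d([1.0, 0.0])]
    h = [np.sum(w) * dx]
    beta = []
    for k in range(1, n):
        h.append(np.sum(P[k](x) ** 2 * w) * dx)
        beta.append(h[k] / h[k - 1])
        P.append(np.poly1d([1.0, 0.0]) * P[k] - beta[-1] * P[k - 1])
    J = np.diag(np.sqrt(beta), 1) + np.diag(np.sqrt(beta), -1)
    return np.max(np.linalg.eigvalsh(J))

z_base = largest_zero(0.5, 1.0)
assert z_base < largest_zero(1.5, 1.0)   # x_{1,6} increases with lambda
assert z_base < largest_zero(0.5, 2.0)   # x_{1,6} increases with t
```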
\subsection{Interlacing of the zeros}
Next, for fixed $\lambda>-1$, $t\in \mathbb{R}$ and $k\in (0,1]$, we consider the relative positioning of the zeros of the monic generalised higher order Freud polynomials $\big\{P_{n}(x;\lambda)\big\}$ orthogonal with respect to the weight \eqref{freudg}, and the zeros of $\big\{P_{n}(x;\lambda+k)\big\}$ orthogonal with respect to the weight \begin{equation} \nonumber
\omega(x;t,\lambda)=|x|^{2\lambda+2k+1}\exp\left(tx^2-x^{2m}\right), \qquad m=2,3,\dots\ .\end{equation}
The zeros of monic generalised higher order Freud polynomials $\big\{P_{n}(x;\lambda)\big\}$ orthogonal with respect to the symmetric weight
\eqref{freudg} are symmetric around the origin.
We denote the positive zeros of $P_{2n} (x;\lambda)$ by \[0<x_{n,2n}^{\lambda}<x_{n-1,2n}^{\lambda}<\dots<x_{2,2n}^{\lambda}<x_{1,2n}^{\lambda},\] and the positive zeros of $P_{2n+1} (x;\lambda)$ by \[0<x_{n,2n+1}^{\lambda}<x_{n-1,2n+1}^{\lambda}<\dots<x_{2,2n+1}^{\lambda}<x_{1,2n+1}^{\lambda}.\]
\begin{theorem} \label{int} Let $\lambda>-1$ and $t\in \mathbb{R}$. Let $\big\{P_{n}(x;\lambda)\big\}$ be the monic generalised higher order Freud polynomials orthogonal with respect to the weight
\eqref{freudg}.
Then, for $\ell\in\{1,\dots,n-1\}$ and $k\in(0,1]$, we have
\begin{eqnarray}\label{inteven}
x_{\ell+1,2n}^{\lambda}<x_{\ell,2n-1}^{\lambda}<x_{\ell,2n-1}^{\lambda+k}<x_{\ell,2n-1}^{\lambda+1}<x_{\ell,2n}^{\lambda},
\end{eqnarray}
and
\begin{eqnarray}\label{intodd}x_{\ell+1,2n+1}^{\lambda}<x_{\ell,2n}^{\lambda}<x_{\ell,2n}^{\lambda+k}<x_{\ell,2n}^{\lambda+1}=x_{\ell,2n+1}^{\lambda}.
\end{eqnarray}\end{theorem}
\begin{proof} The zeros of two consecutive polynomials in the sequence of generalised higher order Freud orthogonal polynomials are interlacing, that is,
\begin{equation}\label{3.13a}
0<x_{n,2n}^{\lambda}<x_{n-1,2n-1}^{\lambda}<x_{n-1,2n}^{\lambda}<\dots<x_{2,2n}^{\lambda}<x_{1,2n-1}^{\lambda}<x_{1,2n}^{\lambda}
\end{equation} and
\begin{equation}\label{3.13b}
0<x_{n,2n}^{\lambda}<x_{n,2n+1}^{\lambda}<x_{n-1,2n}^{\lambda}<\dots<x_{2,2n+1}^{\lambda}<x_{1,2n}^{\lambda}<x_{1,2n+1}^{\lambda}.
\end{equation} On the other hand, we proved in Theorem \ref{mono1} that the positive zeros of generalised higher order Freud polynomials monotonically increase as the parameter $\lambda$ increases. This implies that, for each fixed $\ell\in\{1,2,\dots,n\}$ and $k\in(0,1)$,
\begin{equation}\label{moe} x_{ \ell ,2n}^{\lambda}<x_{\ell,2n}^{\lambda+k}<x_{\ell,2n }^{\lambda+1},\end{equation} and
\begin{equation}\label{moo} x_{ \ell,2n-1}^{\lambda}<x_{\ell,2n-1}^{\lambda+k}<x_{\ell,2n-1}^{\lambda+1}.\end{equation}
Next, we prove that the zeros of $P_{2n}(x;\lambda)$ interlace with those of $P_{2n-1}(x;\lambda+1)$. From \eqref{rrodd},
\begin{equation}\label{l+22}
P_{2n-1}(x;\lambda+1)=\frac{xP_{2n}(x;\lambda)-(\beta_{2n}(\lambda)+P_{2n+1}'(0;\lambda)/P_{2n-1}'(0;\lambda))P_{2n-1}(x;\lambda)}{x^2}.
\end{equation}
Evaluating \eqref{l+22} at consecutive zeros $x_{\ell}=x_{\ell,2n}^{\lambda}$ and $x_{\ell+1}=x_{\ell+1,2n}^{\lambda}$, $\ell=1, 2, \ldots, n-1$, of $P_{2n}(x;\lambda)$, we obtain
\[\begin{split}
P_{2n-1}&(x_{\ell};\lambda+1)P_{2n-1}(x_{\ell+1};\lambda+1)\\&=\frac{(\beta_{2n}(\lambda)+P_{2n+1}'(0;\lambda)/P_{2n-1}'(0;\lambda) )^{\!2}P_{2n-1}(x_{\ell};\lambda)P_{2n-1}(x_{\ell+1};\lambda)}{x_{\ell}^2 x_{\ell+1}^2}<0,
\end{split}\]
since the zeros of $P_{2n}(x;\lambda)$ and $P_{2n-1}(x;\lambda)$ separate each other. So there is at least one positive zero of $P_{2n-1}(x;\lambda+1)$ in the interval $(x_{\ell+1}, x_{\ell})$ for each $\ell=1, 2, \ldots, n-1$.
Since $P_{2n-1}(x;\lambda+1)$ has exactly $n-1$ positive zeros, this implies that
\begin{equation}
0<x_{n,2n}^{\lambda}<x_{n-1,2n-1}^{\lambda+1}<x_{n-1,2n}^{\lambda}<x_{n-2,2n-1}^{\lambda+1}<
\dots<x_{2,2n-1}^{\lambda+1}<x_{2,2n}^{\lambda}<x_{1,2n-1}^{\lambda+1}<x_{1,2n}^{\lambda}\label{3.14b}.
\end{equation}
Equations \eqref{3.13a}, \eqref{moo} and \eqref{3.14b} yield \eqref{inteven}. To prove \eqref{intodd}, we note that by \eqref{rreven} the $n$ positive zeros of $P_{2n}(x;\lambda+1)$ and $P_{2n+1}(x;\lambda)$ coincide, i.e.\ $x_{\ell,2n}^{\lambda+1}=x_{\ell,2n+1}^{\lambda}$ for $\ell\in\{1,2,\dots,n\}$, and the result follows using \eqref{3.13b} and \eqref{moe}.\end{proof}
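The interlacing \eqref{3.14b} established in the proof can also be observed numerically; the sketch below (same illustrative parameters and quadrature as before, with $2n=6$) compares the positive zeros of $P_6(x;\lambda)$ with those of $P_5(x;\lambda+1)$.

```python
import numpy as np

def monic_freud(lam, t=1.0, m=3, nmax=7):
    # monic OPs for |x|^(2*lam+1)*exp(t*x^2 - x^(2m)), Stieltjes procedure
    x = np.linspace(-3.0, 3.0, 200001)
    dx = x[1] - x[0]
    w = np.abs(x) ** (2 * lam + 1) * np.exp(t * x**2 - x ** (2 * m))
    P = [np.poly1d([1.0]), np.poly1d([1.0, 0.0])]
    h = [np.sum(w) * dx]
    for n in range(1, nmax):
        h.append(np.sum(P[n](x) ** 2 * w) * dx)
        P.append(np.poly1d([1.0, 0.0]) * P[n] - (h[n] / h[n - 1]) * P[n - 1])
    return P

lam = 0.5
P6 = monic_freud(lam)[6]          # P_6(x; lam)
P5 = monic_freud(lam + 1)[5]      # P_5(x; lam+1)

# keep only the positive zeros (P_5 also has a zero at the origin)
z6 = np.sort([r.real for r in P6.r if r.real > 1e-8])   # 3 positive zeros
z5 = np.sort([r.real for r in P5.r if r.real > 1e-8])   # 2 positive zeros

# 0 < x_{3,6}^lam < x_{2,5}^{lam+1} < x_{2,6}^lam < x_{1,5}^{lam+1} < x_{1,6}^lam
assert z6[0] < z5[0] < z6[1] < z5[1] < z6[2]
```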
Figure \ref{inte} shows the interlacing of the zeros of polynomials orthogonal with respect to the generalised higher order Freud weight \eqref{freudg} for $m=3$ as described in \eqref{inteven} of Theorem \ref{int}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=4in]{Interlacing_zeros_even}
\caption{\label{inte}The zeros of $P_{7}(x;\lambda)$ ({\color{dkg}{green}}), $P_{7}(x;\lambda+1)$ ({\color{red}{red}}) and $P_{8}(x;\lambda)$ ({\color{dkb}{blue}}) for $\lambda =0.5$ and $t=1$.}
\end{center}
\end{figure}
Figure \ref{into} illustrates the interlacing of the zeros of polynomials orthogonal with respect to the generalised higher order Freud weight \eqref{freudg} for $m=3$ as described in \eqref{intodd} of Theorem \ref{int}.
\begin{figure}[ht]
\begin{center}
\includegraphics[width=4in]{Interlacing_zeros_odd}
\caption{\label{into}The zeros of $P_{8}(x;\lambda)$ ({\color{dkg}{green}}), $P_{8}(x;\lambda+0.5)$ ({\color{red}{red}}), $xP_8(x;\lambda+1)$ ({\color{dkb}{blue}}) and $P_{9}(x;\lambda)$ ({\color{dkb}{blue}}) for $\lambda =1.5$ and $t=2.3$.}
\end{center}
\end{figure}
\section{Quadratic decomposition of the generalised higher order Freud weight}\label{sec:qdecomp}
Suppose
\[
P_{2n} (x;t,\lambda) = B_{n}(x^2;t,\lambda), \qquad
P_{2n+1} (x;t,\lambda) = xR_{n}(x^2;t,\lambda), \qquad \text{for all}\quad n\geq 0,
\]
then from the recurrence relation \eqref{eq:srr} we have
\begin{align*}
B_{n+1} (x;t,\lambda) &= R_{n+1}(x;t,\lambda) + \beta_{2n+2} R_{n}(x;t,\lambda), \\
x R_{n} (x;t,\lambda) &= B_{n+1}(x;t,\lambda) + \beta_{2n+1} B_{n}(x;t,\lambda) ,
\end{align*}
and this gives second order recurrence relations for both $\{B_{n}\}_{n\geq0}$ and $\{R_{n}\}_{n\geq0}$ as follows:
\begin{align*}
& \begin{cases}
B_{n+1}(x;t,\lambda) = \left(x - \beta_{2n}-\beta_{2n+1}\right) B_{n}(x;t,\lambda) - \beta_{2n-1}\beta_{2n} B_{n-1}(x;t,\lambda),\ n\geq 1, \\
B_1(x;t,\lambda) = x -\beta_1, \ B_0(x;t,\lambda)=1,
\end{cases}
\\
& \begin{cases}
R_{n+1}(x;t,\lambda) = \left(x - \beta_{2n+2}-\beta_{2n+1}\right) R_{n}(x;t,\lambda) - \beta_{2n+1}\beta_{2n} R_{n-1}(x;t,\lambda),\ n\geq 1, \\
R_1(x;t,\lambda) = x -\beta_1-\beta_2, \ R_0(x;t,\lambda)=1.
\end{cases}
\end{align*}
Furthermore, $\{B_{n}\}_{n\geq 0}$ and $\{R_{n}\}_{n\geq 0}$ satisfy the orthogonality relations
\begin{align*}
& \int_0^{\infty} B_k(x;t,\lambda) B_{n}(x;t,\lambda) \, x^{\lambda} \exp(tx-x^{m})\, {\rm d}x = h_{2n}(t,\lambda) \delta_{n,k},\\
& \int_0^{\infty} R_k(x;t,\lambda) R_{n}(x;t,\lambda) \, x^{\lambda+1} \exp(tx-x^{m})\, {\rm d}x = h_{2n+1}(t,\lambda) \delta_{n,k},\quad n,k\geq 0.
\end{align*}
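The recurrence relations above are purely algebraic consequences of \eqref{eq:srr} and therefore hold for an arbitrary positive sequence $(\beta_n)$; the sketch below checks the quadratic decomposition for randomly chosen (hypothetical) recurrence coefficients.

```python
import numpy as np

rng = np.random.default_rng(0)
beta = np.concatenate(([np.nan], rng.uniform(0.5, 2.0, 10)))   # hypothetical beta_1..beta_10

# monic P_n from the three-term recurrence P_{n+1} = x P_n - beta_n P_{n-1}
P = [np.poly1d([1.0]), np.poly1d([1.0, 0.0])]
for n in range(1, 10):
    P.append(np.poly1d([1.0, 0.0]) * P[n] - beta[n] * P[n - 1])

# B_n and R_n from the stated second order recurrences
B = [np.poly1d([1.0]), np.poly1d([1.0, -beta[1]])]
R = [np.poly1d([1.0]), np.poly1d([1.0, -beta[1] - beta[2]])]
for n in range(1, 4):
    B.append(np.poly1d([1.0, -beta[2*n] - beta[2*n+1]]) * B[n]
             - beta[2*n-1] * beta[2*n] * B[n-1])
    R.append(np.poly1d([1.0, -beta[2*n+1] - beta[2*n+2]]) * R[n]
             - beta[2*n] * beta[2*n+1] * R[n-1])

# quadratic decomposition: P_{2n}(x) = B_n(x^2) and P_{2n+1}(x) = x R_n(x^2)
xs = np.linspace(-2.0, 2.0, 101)
for n in range(5):
    assert np.allclose(P[2*n](xs), B[n](xs**2))
    assert np.allclose(P[2*n+1](xs), xs * R[n](xs**2))
```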
\section*{Acknowledgement}
{PAC and KJ gratefully acknowledge the support of a Royal Society Newton Advanced Fellowship NAF$\backslash$R2$\backslash$180669.}
\section{Introduction}
\blfootnote{
$2010$ \emph{Mathematics Subject Classification.} Primary $57\rm S25$; Secondary $55\rm M35$.
\emph{\textcolor{white}{ggf}Keywords and phrases.} induced character, fixed point, Smith problem, smooth action.}
P. A. Smith raised in $1960$ the following question for finite groups \cite[p. 406, the footnote]{Smith1960}.
\begin{question}[Smith question]
Is it true that for a finite group acting smoothly on a sphere with exactly two fixed points, the tangent spaces at the fixed points have always isomorphic group module structures defined by differentiation of the action?
\end{question}
Depending on the acting group, there are affirmative, as well as negative answers to this question. Much of the work on this problem has been done by Atiyah and Bott \cite{AtiyahBott1968}, Petrie and his students and collaborators \cite{Petrie1978}, \cite{Petrie1979}, \cite{Petrie1982} and \cite{Petrie1983}, Cappell and Shaneson \cite{CappellShaneson1980}, \cite{Cappell1982}, Bredon \cite{Bredon1969}, Illman \cite{Illman1982}, Milnor \cite{Milnor1966}, and Laitinen, Morimoto, Pawałowski, Solomon and Sumi \cite{LaitinenPawalowski1999}, \cite{Morimoto1998}, \cite{Morimoto2010}, \cite{PawalowskiSolomon2002}, \cite{MorimotoPawalowski2003}, \cite{Sumi2016}. For a comprehensive survey on the Smith problem, we refer the reader to the work of Pawałowski \cite{Pawalowski2018}.
Assume $G$ is a finite group. Let us call two $\mathbb{R}G$-modules $U$ and $V$ \emph{Smith equivalent} if $U\cong T_x(\Sigma)$ and $V\cong T_y(\Sigma)$ for a smooth action of $G$ on a homotopy sphere $\Sigma$ with exactly two fixed points $x$ and $y$. We say that the \emph{Laitinen Condition} is satisfied for $G$ acting smoothly on a homotopy sphere $\Sigma$ with $\Sigma^G=\{x,y\}$ if $\Sigma^g$ is connected for any $g\in G$ of order $2^k$, where $k\geq 3$. The \emph{real conjugacy class} of an element $g\in G$ is the union $(g)^{\pm}=(g)\cup(g^{-1})$. The \emph{primary number} of $G$ which we denote by $\operatorname{prim}(G)$ is the number of real conjugacy classes of $G$ containing elements whose order is divisible by at least two distinct primes. We call $G$ an \emph{Oliver group} if there does not exist a sequence of subgroups $P\trianglelefteq H\trianglelefteq G$ such that $P$ and $G/H$ are of prime power orders and $H/P$ is cyclic.
The Laitinen Conjecture proposes negative answers to the Smith question concerning actions on homotopy spheres. The conjecture reads as follows.
\begin{conjecture}\cite[Appendix]{LaitinenPawalowski1999}\label{conjecture:laitinen}
If $G$ is an Oliver group with $\operatorname{prim}(G)\geq 2$, then there exist non-isomorphic $\mathbb{R}G$-modules $U$ and $V$ which are Smith equivalent and the action of $G$ on the homotopy sphere in question satisfies the Laitinen Condition.
\end{conjecture}
The converse conclusion is always true \cite{LaitinenPawalowski1999} and Conjecture \ref{conjecture:laitinen} is known to be true in the following cases, \cite{Pawalowski2018}.
\begin{enumerate}[(1)]
\item $G$ is of odd order (and thus, by the Feit-Thompson Theorem, $G$ is solvable).
\item $G$ has a cyclic quotient of odd composite order (for example, $G$ is a nilpotent group with three or more noncyclic Sylow subgroups).
\item $G$ is a nonsolvable group not isomorphic to $\operatorname{Aut}(A_6)$ (in the case where $G = {\rm Aut}(A_6)$, the Laitinen Conjecture is false by \cite{Morimoto2008}).
\item $G$ satisfies the Sumi $G^{\rm nil}$-condition (the condition is defined below).
\end{enumerate}
For a prime $p$, let us use the notation $\mathcal{O}^p(G)$ for the smallest normal subgroup of $G$ with $G/\mathcal{O}^p(G)$ a $p$-group. A subgroup $H$ of a group $G$ is called \emph{large} if $\mathcal{O}^p(G)\leq H$ for some prime $p$. We denote by $\mathcal{L}(G)$ the family of all large subgroups of $G$. Let us call $G$ a \emph{gap group} if there exists an $\mathbb{R}G$-module $V$ such that for any $P<H\leq G$ with $P$ of prime power order, we have $\dim V^P>2\dim V^H$ and for any $L\in\mathcal{L}(G)$, $\dim V^L=0$ holds. Denote by $G^{\rm nil}$ the smallest normal subgroup of $G$ such that $G/G^{\rm nil}$ is nilpotent. We say that $G$ satisfies the \emph{Sumi $G^{\rm nil}$-condition} if there exist two elements $a,b\in G$ of composite order which are not real conjugate in $G$, the equality $aG^{\rm nil}=bG^{\rm nil}$ holds and at least one of the following statements holds.
\begin{itemize}
\item $|a|$ and $|b|$ are even and the involutions of the cyclic subgroups $\langle a\rangle$ and $\langle b\rangle$ are conjugate in $G$.
\item $a$ and $b$ belong to the same gap subgroup of $G$.
\end{itemize}
Therefore, in checking the Laitinen Conjecture, we shall focus on finite solvable Oliver groups $G$ of even order, such that each cyclic quotient of $G$ is either of even or of prime power order and $G$ does not satisfy the Sumi $G^{\rm nil}$-condition. We refer to such a group $G$ as a \emph{special Oliver group}. In general, however, Conjecture \ref{conjecture:laitinen} is not true. It fails for example for $G=\operatorname{Aut}(A_6)$ \cite{Morimoto2008} or $G=S_3\times A_4$ (see \cite{PawalowskiSumi2009} for more counterexamples).
We say that two $\mathbb{R}G$-modules $U$ and $V$ are \emph{$\mathcal{P}$-matched} if for any subgroup $P\leq G$ of prime power order, the restrictions $\operatorname{Res}^G_P(U)$ and $\operatorname{Res}^G_P(V)$ are isomorphic as $P$-modules.
In $2018$, Pawałowski \cite{Pawalowski2018} proposed the following problem.
\begin{problem}\label{question:pawalowski}
For which special Oliver groups $G$ with $\operatorname{prim}(G)\geq 2$, there exist $\mathcal{P}$-matched and Smith equivalent $\mathbb{R}G$-modules $U$ and $V$ which are not isomorphic?
\end{problem}
Some examples of special Oliver groups $G$ with $\operatorname{prim}(G)\geq 2$ such that no $\mathbb{R}G$-modules in question exist were already given in \cite{PawalowskiSumi2009}. We present here, for the first time, a certain infinite family of special Oliver groups with primary numbers at least $2$, possessing pairs of $\mathcal{P}$-matched Smith equivalent $\mathbb{R}G$-modules which are not isomorphic.
Suppose $p$ and $q$ are odd prime numbers such that $q|(p-1)$. Let $D_{2pq}$ be the dihedral group of order $2pq$ and $C_q$ be the cyclic group of order $q$. These groups have the following presentations.
$$
\begin{tabular}{ccc}
$
D_{2pq}=\langle a,b|a^{pq}=b^2=1,bab=a^{-1}\rangle
$&
and&
$
C_q=\langle c|c^q=1\rangle.
$
\end{tabular}
$$
Let $v$ be a primitive root modulo $p$ which is not divisible by $q$ (in case $q|v$, just take $p+v$ instead of $v$, which is also a primitive root modulo $p$). Put $i=v^{(p-1)(q-1)/q}\Mod{pq}$ and note that $i\equiv 1\Mod{q}$ and the order of $i$ modulo $p$ is $q$. Therefore $i\not\equiv 1\Mod{pq}$ and $i^q\equiv 1\Mod{pq}$ by the Chinese Remainder Theorem. Consider the automorphism $\tau$ of $D_{2pq}$ given by $\tau(a)=a^i$ and $\tau(b)=b$. The order of $\tau$ is $q$. Thus, we have a homomorphism $\varphi\colon C_q\rightarrow\operatorname{Aut}(D_{2pq})$, $c\mapsto\tau$. Define $G_{p,q}$ as the following semidirect product.
$$
G_{p,q}=D_{2pq}\rtimes_{\varphi}C_q
$$
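To illustrate the construction, consider the smallest case $p=7$, $q=3$ (an illustration only, not used in the proofs). The number $3$ is a primitive root modulo $7$, but it is divisible by $q=3$, so we take $v=p+3=10$. Then
$$
i=v^{(p-1)(q-1)/q}=10^4\equiv 16^2\equiv 4\Mod{21},
$$
and indeed $4\equiv 1\Mod{3}$, the order of $4$ modulo $7$ is $3$, and $4^3=64\equiv 1\Mod{21}$. Hence $\tau(a)=a^4$ and $G_{7,3}=D_{42}\rtimes_{\varphi}C_3$ is a group of order $126$.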
The main theorem of the article can be stated as follows.
\begin{theorem}\label{theorem:main}
For any odd primes $p$ and $q$ with $q|(p-1)$, $G_{p,q}$ is a special Oliver group with $\operatorname{prim}(G_{p,q})\geq 2$. Moreover, there exist non-isomorphic $\mathcal{P}$-matched Smith equivalent $\mathbb{R}G_{p,q}$-modules $U$ and $V$.
\end{theorem}
\begin{remark}\emph{Note that the theorem above confirms the Laitinen Conjecture for $G_{p,q}$'s since the Laitinen Condition is naturally satisfied due to the lack of elements of order divisible by $8$ in $G_{p,q}$'s.}
\end{remark}
\begin{remark}
\emph{In the case where $q=2$, $N_p=\{(a^{qs},1)|s=0,...,p-1\}$ is a normal subgroup of $G_{p,q}$ isomorphic to the cyclic group of order $p$, such that the quotient $G_{p,q}/N_p$ is a $2$-group. Thus, for $q = 2$, $G_{p,q}$ is not an Oliver group. Moreover, any nontrivial element of $G_{p,q}$ is of order 2, 4, or $p$, where $p$ is an odd prime. Therefore, by elementary character theory arguments and the result of Atiyah and Bott [1, Thm. 7.15], any two Smith equivalent $\mathbb{R}G_{p,q}$-modules are isomorphic.}
\end{remark}
Fix odd primes $p$ and $q$ such that $q|(p-1)$. For a clearer presentation of the material, let us additionally introduce the following symbols and concepts ($G$ denotes a finite group).
\begin{itemize}
\item $\operatorname{RO}(G)$ - the real representation group of $G$. It consists of formal differences $U-V$ of $\mathbb{R}G$-modules, where we identify $U-V$ with $U'-V'$ if there exists an $\mathbb{R}G$-module $W$ such that $U\oplus V'\oplus W\cong U'\oplus V\oplus W$. The addition is induced by the direct sum operation.
\item $\operatorname{PO}(G)$ - the subgroup of $\operatorname{RO}(G)$ containing elements $U-V$ such that $U$ and $V$ are $\mathcal{P}$-matched.
\item An $\mathbb{R}G$-module $V$ is said to satisfy the \emph{weak gap condition} if for any $P<H\leq G$ such that $P$ is of prime power order, we have $\dim V^P\geq 2\dim V^H$.
\item $\operatorname{PO}^{\mathcal{L}}_{\rm w}(G)$ - the subgroup of $\operatorname{PO}(G)$ containing elements which can be written as $U-V$ for some $\mathbb{R}G$-modules $U$ and $V$ satisfying the weak gap condition and such that $\dim W^L=0$ for any $L\in\mathcal{L}(G)$ and $W=U,V$.
\item $N_{pq^2}$ - the unique subgroup of $G_{p,q}$ of index $2$.
\item $\operatorname{Ind}^G_H\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G)$ - the induction homomorphism defined for any subgroup $H\leq G$ by the formula $U-V\mapsto\operatorname{Ind}^G_H(U)-\operatorname{Ind}^G_H(V)$, where $\operatorname{Ind}^G_H(W)$ denotes the induced $\mathbb{R}G$-module from the $\mathbb{R}H$-module $W$. This is a well-defined map since, if $U$ and $V$ are $\mathcal{P}$-matched $\mathbb{R}H$-modules, then so are the $\mathbb{R}G$-modules $\operatorname{Ind}^G_H(U)$ and $\operatorname{Ind}^G_H(V)$ (we comment on this fact in the subsequent part).
\end{itemize}

The paper is organized as follows. First, we show that $\operatorname{PO}^{\mathcal{L}}_{\rm w}(N_{pq^2})\neq 0$. In the next section, we prove that $G_{p,q}$ is a special Oliver group with $\operatorname{prim}(G_{p,q})\geq 2$. The third section provides, for any finite groups $H\leq G$, a necessary and sufficient condition for $\operatorname{Ind}^G_H\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G)$ to be a monomorphism. Finally, we prove Theorem \ref{theorem:main} using the properties of the induction from $N_{pq^2}$ to $G_{p,q}$.
From now on, all groups considered in this paper are assumed to be finite.
\section{$\operatorname{PO}^{\mathcal{L}}_{\rm w}(N_{pq^2})$ is nonzero}
Note that $N_{pq^2}=\{(a^l,c^m)|l=0,...,pq-1,\ m=0,...,q-1\}$. We have
$$
(1,c)(a,1)(1,c)^{-1}=(1,c)(a,1)(1,c^{-1})=(1,c)(a,c^{-1})=(a^i,1).
$$
Thus, under the identifications $a\leftrightarrow(a,1)$ and $c\leftrightarrow(1,c)$, $N_{pq^2}$ can be presented as $$N_{pq^2}=\langle a,c|a^{pq}=c^q=1,cac^{-1}=a^i\rangle.$$
Let
$$
N_{pq^2}'=\langle \alpha,\beta,\gamma|\alpha^{q}=\beta^p=\gamma^q=1,\gamma\beta\gamma^{-1}=\beta^i,\alpha\beta=\beta\alpha,\alpha\gamma=\gamma\alpha\rangle.
$$
Then $N'_{pq^2}$ is isomorphic to the direct product of $C_q=\langle\alpha\rangle$ with the Frobenius group $F_{p,q}$ generated by $\beta$ and $\gamma$.
\begin{lemma}\label{lemma:isom}
Let $f\colon N'_{pq^2}\rightarrow N_{pq^2}$ be given by $f(\alpha)=a^p$, $f(\beta)=a^q$ and $f(\gamma)=c$. Then $f$ is a group isomorphism.
\end{lemma}
\begin{proof}
Note that $f$ is a well-defined group homomorphism. Indeed, $f(\alpha^q)=a^{pq}=1$, $f(\beta^p)=a^{pq}=1$, $f(\gamma^q)=c^q=1$, $f(\gamma\beta\gamma^{-1})=ca^qc^{-1}=(cac^{-1})^q=a^{iq}=f(\beta^i)$, $f(\alpha\beta)=a^{p+q}=a^{q+p}=f(\beta\alpha)$, $f(\gamma\alpha\gamma^{-1})=ca^pc^{-1}=a^{pi}=a^p=f(\alpha)$. The equality $a^{pi}=a^p$ follows from the fact that $pq|p(i-1)$ since $i\equiv 1\Mod{q}$.
Take any $a^lc^m\in N_{pq^2}$. Since $p$ and $q$ are different primes, we can find $x,y\in\mathbb{Z}$ such that $1=xp+yq$ and
$$
f(\alpha^{lx}\beta^{ly}\gamma^m)=a^{plx}a^{qly}c^m=a^{l(xp+yq)}c^m=a^lc^m.
$$
Hence $f$ is surjective. Let us prove that it is injective as well. Suppose $f(\alpha^x\beta^y\gamma^m)=1$. Then $a^{px+qy}c^m=1$, which is the case only if $pq|(px+qy)$ and $m=0$. Since $p|px$ and $q|qy$, we must then have $p|qy$ and $q|px$, but this means $p|y$ and $q|x$. As a consequence, $\alpha^x\beta^y\gamma^m=1$ and $f$ has trivial kernel.
\end{proof}
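To illustrate the surjectivity argument above, take $p=7$ and $q=3$ (an illustration only): here $1=1\cdot 7+(-2)\cdot 3$, so for any $a^lc^m\in N_{pq^2}$ we get
$$
f(\alpha^{l}\beta^{-2l}\gamma^m)=a^{7l}a^{-6l}c^m=a^lc^m.
$$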
Put $u=i\Mod{p}$, $r=(p-1)/q$ and $\mathbb{Z}_p^*/\langle u\rangle=v_1\langle u\rangle\cup...\cup v_r\langle u\rangle$. Following \cite{Liebeck2001}[25.10 Theorem] the conjugacy classes of $F_{p,q}$ are as follows.
\begin{table}[H]
$$
\begin{tabular}{c|ccc}
class&$(1)$&$(\beta^{v_j})$&$(\gamma^n)$ \\
\hline
representative order&$1$&$p$&$q$\\
\hline
size&$1$&$q$&$p$\\
\hline
\# of classes of a given type&$1$&$r$&$q-1$
\end{tabular}
$$
\caption{\label{table:conjugacyClassesFrobenius}Conjugacy classes of $F_{p,q}$.}
\end{table}
where $(\beta^{v_j})=\{\beta^{v_js}|s\in\langle u\rangle\}$ and $(\gamma^n)=\{\beta^m\gamma^n|0\leq m\leq p-1\}$ for all $1\leq j\leq r$ and $1\leq n\leq q-1$. Let $\sigma_{t,x}=\sum_{s\in\langle u\rangle}e^{2\pi iv_txs/p}$ for $t=1,...,r$ and $x=0,...,p-1$. We have $r$ nonlinear irreducible characters of $F_{p,q}$ given by $\chi_t(\beta^x)=\sigma_{t,x}$ and $\chi_t(\gamma^n)=0$ for $x=0,...,p-1$ and $n=1,...,q-1$. They are presented in the table below.
\begin{table}[H]
$$
\begin{tabular}{c|ccc}
&$(1)$&$(\beta^{v_j})$&$(\gamma^n)$ \\
\hline
$\chi_1$&$q$&$\sigma_{1,v_j}$&$0$\\
\hline
$\vdots$&$\vdots$&$\vdots$&$\vdots$\\
\hline
$\chi_r$&$q$&$\sigma_{r,v_j}$&$0$
\end{tabular}
$$
\caption{\label{table:charactersFrobenius}Nonlinear irreducible characters of $F_{p,q}$.}
\end{table}
The irreducible characters of $C_q$ are $\rho_s\colon C_q\rightarrow\mathbb{C}$, $\alpha\mapsto\zeta_q^s$ for $s=0,...,q-1$, where $\zeta_q=e^{2\pi i/q}$. Since the irreducible characters of direct products are products of irreducible characters of the factor groups \cite{Liebeck2001}[19.18 Theorem], the following table contains the nonlinear irreducible characters of $N_{pq^2}\cong C_q\times F_{p,q}$.
\begin{table}[H]
$$
\begin{tabular}{c|cccccc}
$g$&$(1,1)$&$(1,\beta^{v_j})$&$(1,\gamma^{n})$&$(\alpha^l,\beta^{v_j})$&$(\alpha^l,\gamma^n)$&$(\alpha^l,1)$ \\
\hline
$|g|$&$1$&$p$&$q$&$pq$&$q$&$q$\\
$|(g)|$&$1$&$q$&$p$&$q$&$p$&$1$\\
\# $(g)$&$1$&$r$&$q-1$&$(q-1)r$&$(q-1)^2$&$q-1$\\
\hline
$\psi_{s,t}=\rho_s\times\chi_t$&$q$&$\sigma_{t,v_j}$&$0$&$\zeta_q^{ls}\sigma_{t,v_j}$&$0$&$q\zeta_q^{ls}$
\end{tabular}
$$
\caption{\label{table:charactersNpq2}Nonlinear irreducible characters of $N_{pq^2}$.}
\end{table}
Let $N_p=\{(1,\beta^s)|s=0,...,p-1\}$. Obviously, $N_p$ is a normal subgroup of $N_{pq^2}$ isomorphic to $C_p$.
\begin{lemma}\label{lemma:largeNpq2}
$\mathcal{O}^q(N_{pq^2})=N_p$ and $\mathcal{O}^p(N_{pq^2})=N_{pq^2}$. As a result, all $L\in\mathcal{L}(N_{pq^2})$ contain $N_p$ as a subgroup.
\end{lemma}
\begin{proof}
It is obvious that $\mathcal{O}^q(N_{pq^2})=N_p$. We show that there is no normal subgroup of $N_{pq^2}$ of order $q^2$, which will conclude the proof.
Suppose to the contrary that $N$ is a normal subgroup of $N_{pq^2}$ of order $q^2$. There exists $g\in N$ of order $q$. Since $N\trianglelefteq N_{pq^2}$, we have $(g)\subseteq N$. We know by Table \ref{table:charactersNpq2} that $g$ belongs to one of the following conjugacy classes: $((1,\gamma^n))$, $((\alpha^l,\gamma^n))$ or $((\alpha^l,1))$. Suppose $g\in ((1,\gamma^{n_0}))$ for some $n_0\in\{1,...,q-1\}$. Since $\{(1,\gamma^n)|n=0,...,q-1\}=\langle(1,\gamma^{n_0})\rangle\leq N$, it follows that each class $((1,\gamma^n))$ is contained in $N$. This yields at least $p(q-1)>q^2$ elements in $N$. A contradiction. Let $g\in((\alpha^{l_0},\gamma^{n_0}))$ for some $l_0,n_0\in\{1,...,q-1\}$. Then, similarly as before, considering $\langle(\alpha^{l_0},\gamma^{n_0})\rangle\leq N$ yields at least $p(q-1)>q^2$ elements in $N$, and we can exclude this case as well. Thus, all elements of order $q$ of $N$ belong to the classes $((\alpha^l,1))$. It follows from Table \ref{table:charactersNpq2} that there are $q-1$ elements in these classes. Moreover, every element of $N$ different from the identity is of order $q$. This yields $|N|\leq q$, which is also a contradiction.
\end{proof}
Since characters of any group $G$ determine $FG$-modules up to isomorphism for $F=\mathbb{R},\mathbb{C}$, we shall use the same symbols for the characters and the $FG$-modules determined by them. Moreover, if $\chi$ is the character of $G$ determined by some $FG$-module, then by $\dim\chi^H$ we mean the fixed point dimension over $F$ for a subgroup $H$ acting on this $FG$-module. Note that if $\chi$ is the character of some $\mathbb{R}G$-module, then all such fixed point dimensions computed over $\mathbb{R}$ coincide with those computed over $\mathbb{C}$, as we may treat $\chi$ as the character of a $\mathbb{C}G$-module as well.
\begin{lemma}\label{lemma:zeroDimension}
Let $s\neq 0$ and $H$ be a subgroup of $N_{pq^2}$ of order $p$ or $q^2$. Then $\dim\psi_{s,t}^H=0$ for any $t=1,...,r$.
\end{lemma}
\begin{proof}
Suppose $|H|=p$. Then $H=N_p$ and it follows from Table \ref{table:charactersNpq2} that
\begin{eqnarray*}
\dim\psi_{s,t}^H&=&\frac{1}{|H|}\sum_{h\in H}\psi_{s,t}(h)=\frac{1}{p}\Big(q+\sum_{x=1}^{p-1}\sigma_{t,x}\Big)=\frac{1}{p}\Big(q+\sum_{x=1}^{p-1}\sum_{w\in\langle u\rangle}e^{2\pi iv_txw/p}\Big)\\
&=&\frac{1}{p}\Big(q+\sum_{w\in\langle u\rangle}\sum_{x=1}^{p-1}e^{2\pi iv_txw/p}\Big)=\frac{1}{p}\Big(q+\sum_{w\in\langle u\rangle}(-1)\Big)=\frac{1}{p}(q-q)=0.
\end{eqnarray*}
If $|H|=q^2$, then $H$ is a Sylow $q$-subgroup of $N'_{pq^2}$ and, since $\langle\alpha\rangle$ is central of order $q$, we have $\{(\alpha^l,1)|l=0,...,q-1\}\leq H$. By Table \ref{table:charactersNpq2}, the only nonzero values of $\psi_{s,t}$ on elements of order $q$ are taken on the classes $((\alpha^l,1))$, whence
$$
\dim\psi_{s,t}^H=\frac{1}{q^2}\Big(q+\sum_{l=1}^{q-1}q\zeta_q^{ls}\Big)=\frac{1}{q^2}(q-q)=0,
$$
since $s\neq 0$ yields $\sum_{l=1}^{q-1}\zeta_q^{ls}=-1$.
\end{proof}
\begin{corollary}\label{corollary:weakGapNpq2}
If $s\neq 0$, then $2\Rea\psi_{s,t}$ is an $\mathbb{R}N_{pq^2}$-module satisfying the weak gap condition and such that $\dim(2\Rea\psi_{s,t})^L=0$ for any $L\in\mathcal{L}(N_{pq^2})$.
\end{corollary}
\begin{proof}
From the properties of real and complex irreducible representations, we know that $2\Rea\psi_{s,t}$ is the character of a real irreducible $N_{pq^2}$-module since $\psi_{s,t}$ is not real-valued. Take any $L\in\mathcal{L}(N_{pq^2})$. We know by Lemma \ref{lemma:largeNpq2} that $N_p\leq L$. Thus, by Lemma \ref{lemma:zeroDimension}, we get
$$
\dim(2\Rea\psi_{s,t})^L=\dim(\psi_{s,t}+\overline{\psi_{s,t}})^L=2\dim\psi_{s,t}^L\leq 2\dim\psi_{s,t}^{N_p}=0.
$$
It remains to show that $2\Rea\psi_{s,t}$ satisfies the weak gap condition. By means of Lemma \ref{lemma:zeroDimension}, this boils down to proving that
$$
\dim(2\Rea\psi_{s,t})\geq 2\dim(2\Rea\psi_{s,t})^H
$$
for any subgroup $H\leq N_{pq^2}$ of order $q$. Using, once again, the information from Table \ref{table:charactersNpq2}, we distinguish two cases. If $H=\langle(\alpha^l,1)\rangle$ for some $l\neq 0$, then
$$
\dim\psi_{s,t}^H=\frac{1}{q}\Big(q+\sum_{n=1}^{q-1}q\zeta_q^{nls}\Big)=0
$$
since $ls\not\equiv 0\Mod{q}$. Otherwise, every nontrivial element of $H$ lies in a class of the form $((1,\gamma^n))$ or $((\alpha^l,\gamma^n))$, on which $\psi_{s,t}$ vanishes, so $\dim\psi_{s,t}^H=\frac{1}{q}\cdot q=1$. In both cases
$$
\dim(2\Rea\psi_{s,t})=2q\geq 6>4\geq 2\dim(2\Rea\psi_{s,t})^H.
$$
\end{proof}
\begin{lemma}\label{lemma:noznzeroPO}
Let $s\neq 0$. Then, for any $t=1,...,r$, the $\mathbb{R}N_{pq^2}$-modules $U=2\Rea\psi_{s,t}$ and $V=2\Rea\psi_{q-s,t}$ are not isomorphic and $\mathcal{P}$-matched. As a result, $\operatorname{PO}_{\rm w}^{\mathcal{L}}(N_{pq^2})\neq 0$.
\end{lemma}
\begin{proof}
It follows from Table \ref{table:charactersNpq2} that $U$ and $V$ are $\mathcal{P}$-matched. Note that $\psi_{s,t}=\rho_s\times\chi_t$ and $\psi_{q-s,t}=\overline{\rho_s}\times\chi_t$. By a computation similar to the one in the proof of Lemma \ref{lemma:zeroDimension}, we establish the Frobenius--Schur indicator of the character $\chi_t$.
\begin{eqnarray*}
\iota(\chi_t)&=&\frac{1}{|F_{p,q}|}\sum_{g\in F_{p,q}}\chi_{t}(g^2)=\frac{1}{pq}\Big(q+\sum_{|g|=p}\chi_{t}(g^2)\Big)=\frac{1}{pq}\Big(q+\sum_{|g|=p}\chi_t(g)\Big)\\
&=&\frac{1}{pq}\Big(q+\sum_{x=1}^{p-1}\sigma_{t,x}\Big)=0.
\end{eqnarray*}
Thus, $\chi_t$ is not real-valued and, since $\chi_t$ vanishes outside $\langle\beta\rangle$, we can take $x\in\{1,...,p-1\}$ such that $\Ima(\chi_t(\beta^x))\neq 0$. Now, take any $l\in\{1,...,q-1\}$ and put $g=(\alpha^l,\beta^{x})$. Clearly, $g$ is an element of order $pq$. Then, the character of $U$ evaluated at $g$ equals
\begin{eqnarray*}
\chi_U(g)&=&2\Rea\psi_{s,t}(g)=2\Rea(\rho_s(\alpha^l)\chi_t(\beta^x))\\
&=&2(\Rea(\rho_s(\alpha^l))\Rea(\chi_t(\beta^x))-\Ima(\rho_s(\alpha^l))\Ima(\chi_t(\beta^x))).
\end{eqnarray*}
Analogously, the character of $V$ evaluated on $g$ is equal to
\begin{eqnarray*}
\chi_V(g)&=&2\Rea\psi_{q-s,t}(g)=2\Rea(\overline{\rho_s(\alpha^l)}\chi_t(\beta^x))\\
&=&2(\Rea(\rho_s(\alpha^l))\Rea(\chi_t(\beta^x))+\Ima(\rho_s(\alpha^l))\Ima(\chi_t(\beta^x)))\\
&\neq& \chi_U(g),
\end{eqnarray*}
where the inequality holds because $\Ima(\rho_s(\alpha^l))=\Ima(\zeta_q^{ls})\neq 0$ (as $ls\not\equiv 0\Mod{q}$ and $q$ is odd) and $\Ima(\chi_t(\beta^x))\neq 0$. Hence $U\not\cong V$ and, by Corollary \ref{corollary:weakGapNpq2}, $U-V$ is a nonzero element of $\operatorname{PO}^{\mathcal{L}}_{\rm w}(N_{pq^2})$.
\end{proof}
\section{$G_{p,q}$ is a special Oliver group with $\operatorname{prim}(G_{p,q})\geq 2$}
We divide the material contained in this section into three parts. In the first, we determine conjugacy classes of $G_{p,q}$. Using this, we show that $\operatorname{prim}(G_{p,q})\geq 2$. In the next part, we establish all normal subgroups of $G_{p,q}$ and infer the necessary information concerning the quotients of $G_{p,q}$. Finally, we use the performed computations to prove that $G_{p,q}$ is an example of a special Oliver group.
\subsection{Conjugacy classes of $G_{p,q}$}
Any element of $G_{p,q}$ is either of the form $x_1=(ba^l,c^m)$ or $x_2=(a^l,c^m)$ for some $l=0,...,pq-1$ and $m=0,...,q-1$. In the first case, the inverse equals $x_1^{-1}=(ba^{li^{-m}},c^{-m})$, while in the second, $x_2^{-1}=(a^{-li^{-m}},c^{-m})$.
Let $g\in G_{p,q}$. We have the following possibilities.
\begin{enumerate}[(1)]
\item $g=(ba^{l_0},c^{m_0})$. Then
$$
\begin{tabular}{ccc}
$x_1gx_1^{-1}=(ba^{l(1+i^{m_0})-l_0i^m},c^{m_0})$&and&
$x_2gx_2^{-1}=(ba^{-l(1+i^{m_0})+l_0i^m},c^{m_0}).$
\end{tabular}
$$
Note that, already for $m=0$, the expression $l(1+i^{m_0})-l_0$ takes every remainder modulo $pq$ as $l$ runs over $0,...,pq-1$. Indeed, $1+i^{m_0}$ is invertible modulo $p$ (the order of $i$ modulo $p$ is odd, so $i^{m_0}\not\equiv -1\Mod{p}$), hence $l(1+i^{m_0})-l_0$ attains every remainder modulo $p$, while $l(1+i^{m_0})-l_0\equiv 2l-l_0\Mod{q}$ attains every remainder modulo $q$ since $q$ is odd. As the residues of $l$ modulo $p$ and modulo $q$ can be prescribed independently, the Chinese Remainder Theorem shows that for any $l'=0,...,pq-1$ there exists $l$ with $l(1+i^{m_0})-l_0\equiv l'\Mod{pq}$. Therefore
$$
(g)=\{(ba^l,c^{m_0})|l=0,...,pq-1\}.
$$
Note that $(b,c^{m_0})^n=(b^n,c^{nm_0})$. Hence $|g|=2q$ if $m_0\neq 0$ and $|g|=2$ if $m_0=0$.
\item $g=(a^{l_0},c^{m_0})$, where $m_0\neq 0$. Then
$$
\begin{tabular}{ccc}
$x_1gx_1^{-1}=(a^{l(i^{m_0}-1)-l_0i^m},c^{m_0})$&and&
$x_2gx_2^{-1}=(a^{-l(i^{m_0}-1)+l_0i^m},c^{m_0}).$
\end{tabular}
$$
We have $l(i^{m_0}-1)-l_0i^m\equiv -l_0\Mod{q}$ and, substituting subsequent values for $l$, we can achieve all remainders modulo $p$ of $l(i^{m_0}-1)-l_0i^m$, since $i^{m_0}-1$ is invertible modulo $p$ for $m_0\neq 0$. If $r_0$ is the remainder of $l_0$ modulo $q$, it follows that
$$
(g)=\{(a^{r_0+lq},c^{m_0}),(a^{-r_0+lq},c^{m_0})|l=0,...,p-1\}.
$$
For any $n\geq 0$
$$
g^n=(a^{l_0(1+i^{m_0}+...+i^{(n-1)m_0})},c^{nm_0})=(a^{l_0\cdot\frac{1-i^{nm_0}}{1-i^{m_0}}},c^{nm_0}).
$$
Thus $q\mid|g|$. On the other hand, $p\mid\frac{1-i^{qm_0}}{1-i^{m_0}}$ since $1-i^{qm_0}$ is divisible by $pq$ and $p\nmid 1-i^{m_0}$. Moreover, $1+i^{m_0}+...+i^{(q-1)m_0}\equiv q\equiv 0\Mod{q}$, so $pq\mid\frac{1-i^{qm_0}}{1-i^{m_0}}$ and $|g|=q$.
\item $g=(a^{l_0},1)$. The computations of conjugacy class elements reduce then to
$$
\begin{tabular}{ccc}
$x_1gx_1^{-1}=(a^{-l_0i^m},1)$&and&
$x_2gx_2^{-1}=(a^{l_0i^m},1).$
\end{tabular}
$$
If $p\nmid l_0$, then all the numbers from the set $S_{l_0}=\{\pm l_0i^m|m=0,...,q-1\}$ give different remainders modulo $p$ - this follows from the definition of $i$. Thus, we have $(p-1)/2$ such conjugacy classes, each with $2q$ elements and $$(g)=\{(a^{l_0i^m},1),(a^{-l_0i^m},1)|m=0,...,q-1\}$$
Moreover, for any $n\geq 0$, $g^n=(a^{nl_0},1)$, so $|g|=pq$ if $q\nmid l_0$ and $|g|=p$ if $q|l_0$.
If $p|l_0$ and $q\nmid l_0$, then the set $S_{l_0}$ reduces to two elements, $(a^{l_0},1)$ and $(a^{-l_0},1)$. We have $(q-1)/2$ such classes and
$$
\begin{tabular}{ccc}
$(g)=\{(a^{l_0},1),(a^{-l_0},1)\}$&and&
$|g|=q.$
\end{tabular}
$$
Finally, the last class left is the class of the identity element, $(g)=\{(1,1)\}$.
\end{enumerate}
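These computations can be checked directly for $p=7$ and $q=3$, where one may take $i=4$ (an illustration only, not used in the proofs): the element $g=(a,c)$ falls under case (2) and
$$
g^3=(a^{1+4+4^2},c^3)=(a^{21},1)=(1,1),
$$
so $|g|=3=q$, while $(ba,c)$ falls under case (1) and has order $2q=6$.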
The following table summarizes the information about the conjugacy classes of $G_{p,q}$ and orders of its elements.
\begin{table}[H]
$$
\begin{tabular}{c|cccccccc}
$g$&$(1,1)$&$B$&$E_s$&$C_{m}$&$D_{r,m}$&$F_s$&$B_m$&$A_l$ \\
\hline
$|g|$&$1$&$2$&$p$&$q$&$q$&$q$&$2q$&$pq$\\
$|(g)|$&$1$&$pq$&$2q$&$p$&$2p$&$2$&$pq$&$2q$\\
\# $(g)$&$1$&$1$&$\frac{1}{2}r$&$q-1$&$\frac{1}{2}(q-1)^2$&$\frac{1}{2}(q-1)$&$q-1$&$\frac{1}{2}(q-1)r$
\end{tabular}
$$
\caption{\label{table:classesGpq}Conjugacy classes of $G_{p,q}$.}
\end{table}
where
$$
\begin{tabular}{cc}
$B=(b,1),$&$E_s=(a^{qs},1), s=1,...,p-1,$\\
$C_m=(1,c^m), m=1,...,q-1,$&$D_{r,m}=(a^r,c^m), m=1,...,q-1, q\nmid r,$\\
$F_s=(a^{ps},1), s=1,...,q-1,$&$B_m=(b,c^m), m=1,...,q-1,$\\
$A_l=(a^l,1), p,q\nmid l.$
\end{tabular}
$$
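As a consistency check, for $p=7$ and $q=3$ (so $r=2$), Table \ref{table:classesGpq} predicts $1+1+1+2+2+1+2+2=12$ conjugacy classes, and the class sizes sum to
$$
1+21+1\cdot 6+2\cdot 7+2\cdot 14+1\cdot 2+2\cdot 21+2\cdot 6=126=|G_{7,3}|,
$$
as they should.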
\begin{lemma}\label{lemma:primaryNumber}
$\operatorname{prim}(G_{p,q})=\frac{1}{2}(q-1)(r+1)$. Thus $\operatorname{prim}(G_{p,q})\geq 2$.
\end{lemma}
\begin{proof}
We first determine the real conjugacy classes of $G_{p,q}$ whose elements are not of prime power order. Let $g\in G_{p,q}$ be such an element; obviously, we may assume that $g$ is one of the conjugacy class representatives from Table \ref{table:classesGpq}. It follows from Table \ref{table:classesGpq} that $g\in (B_m)$ or $g\in (A_l)$ for some $m=1,...,q-1$ and $l$ not divisible by $p$ and $q$. In the first case, $g=(b,c^m)$ and $g^{-1}=(b,c^{-m})$, so $(g)\neq (g^{-1})$. This yields $(q-1)/2$ real conjugacy classes of the form $(B_m)^{\pm}=(B_m)\cup(B_{q-m})$ for $m=1,...,(q-1)/2$. In the case $g\in (A_l)$, we have $g=(a^l,1)$ and $g^{-1}=(a^{-l},1)$, and $g$ is conjugate to $g^{-1}$,
$$
(b,1)(a^l,1)(b,1)^{-1}=(a^{-l},1).
$$
Thus, each of the classes $(A_l)$ constitutes a real conjugacy class. Therefore $$\operatorname{prim}(G_{p,q})=\frac{1}{2}(q-1)+\frac{1}{2}(q-1)r=\frac{1}{2}(q-1)(r+1).$$
\end{proof}
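For instance, for $p=7$ and $q=3$ (so $r=2$), Lemma \ref{lemma:primaryNumber} gives
$$
\operatorname{prim}(G_{7,3})=\frac{1}{2}\cdot 2\cdot 3=3,
$$
the real conjugacy classes in question being $(B_1)^{\pm}$ and the two classes of the form $(A_l)$.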
\subsection{Normal subgroups and quotients of $G_{p,q}$}
\begin{lemma}\label{lemma:ordersNormalSubgroups}
If $N\trianglelefteq G_{p,q}$, then $|N|\in\{1,p,q,pq,2pq,pq^2,2pq^2\}$.
\end{lemma}
\begin{proof}
$|G_{p,q}|=2pq^2$ has the following set of divisors
$$
\{1,2,p,q,2p,2q,pq,q^2,2pq,2q^2,pq^2,2pq^2\}.
$$
We show that $|N|\notin \{2,2p,2q,q^2,2q^2\}$. Assume that $2$ divides $|N|$. Then there is some element of order $2$ in $N$. Since $N$ is a normal subgroup of $G_{p,q}$, it follows from Table \ref{table:classesGpq} that $(B)\subseteq N$ and thus $|N|\geq pq$. Observe that $pq$ exceeds each of $2$, $2p$, $2q$ and $2q^2$, the last because $q|(p-1)$ forces $p\geq 2q+1$. Hence $|N|\notin\{2,2p,2q,2q^2\}$.
Now, suppose $|N|=q^2$. We conclude from Table \ref{table:classesGpq} that $N\trianglelefteq N_{pq^2}$. However, this possibility was already excluded in the proof of Lemma \ref{lemma:largeNpq2}.
\end{proof}
Consider the following subgroups of $G_{p,q}$.
$$
\begin{tabular}{cc}
$N_{2pq}=\{(b^{\varepsilon}a^l,1)|\varepsilon=0,1, l=0,...,pq-1\}$
&$N_p=\{(a^{qs},1)|s=0,...,p-1\}$\\
$N_q=\{(a^{ps},1)|s=0,...,q-1\}$&$N_{pq}^1=\{(a^l,1)|l=0,...,pq-1\}$
\end{tabular}
$$
and
$$
N_{pq}^2=\{(a^{qs},c^m)|s=0,...,p-1, m=0,...,q-1\}.
$$
\begin{lemma}\label{lemma:uniqueNormal}
$N_{pq^2}$, $N_{2pq}$, $N_{pq}^1$, $N_{pq}^2$, $N_p$ and $N_q$ are the only nontrivial proper normal subgroups of $G_{p,q}$. Moreover, $N_{2pq}\cong D_{2pq}$, $N_{pq}^1\cong C_{pq}$, $N_{pq}^2\cong F_{p,q}$, $N_p\cong C_p$ and $N_q\cong C_q$.
\end{lemma}
\begin{proof}
It follows from Table \ref{table:classesGpq} that all the subgroups mentioned in the lemma are unions of conjugacy classes of $G_{p,q}$ and thus are normal. Clearly, $N_{2pq}\cong D_{2pq}$, $N^1_{pq}\cong C_{pq}$, $N^2_{pq}\cong F_{p,q}$ (since $N_{pq}^2$ is not abelian and the unique nonabelian group of order $pq$ is $F_{p,q}$), $N_p\cong C_p$ and $N_q\cong C_q$. We show that there are no other nontrivial proper normal subgroups in $G_{p,q}$.
Assume to the contrary that there exists a proper normal subgroup $N$ of $G_{p,q}$ such that $N\notin\{N_p,N_q,N_{pq}^1,N_{pq}^2,N_{2pq},N_{pq^2}\}$. From Lemma \ref{lemma:ordersNormalSubgroups}, we have $$|N|\in\{p,q,pq,2pq,pq^2\}.$$
If $|N|=pq^2$, then the only possibility is $N=N_{pq^2}$ which is a contradiction.
Suppose $|N|=2pq$. Then there exists an element of order $2$ contained in $N$. Thus, $(B)\subseteq N$. Since $N\neq N_{2pq}$, it follows that $g=(x,c^m)\in N$ for some $x\in D_{2pq}$ and $m\neq 0$. Since $(1,1)\in N$, $(B)\subseteq N$ and $|\{(1,1)\}\cup(B)|=pq+1$, it follows that $|(g)|<pq$. We conclude then from Table \ref{table:classesGpq} that $g\in(C_m)$ or $g\in(D_{r,m})$ for some $r$ not divisible by $q$. Thus $C_m\in N$ or $D_{r,m}\in N$. Suppose $D_{r,m}\in N$. Then, for $n\geq 0,$
$$
D_{r,m}^n=(a^{r(1+i^m+...+i^{(n-1)m})},c^{nm})
$$
and $D_{r,m}^n\in (D_{r_n,\,nm\Mod{q}})$ for any $n=1,...,q-1$, where $r_n\neq 0$. Hence $S=(D_{r,m})\cup(D_{r_2,\,2m\Mod{q}})\cup...\cup(D_{r_{q-1},\,(q-1)m\Mod{q}})\subseteq N$. However, $|S|=2p(q-1)>pq$. A contradiction. This means that $C_m\in N$. Therefore $\langle C_m\rangle\leq N$ and thus, for any $m=1,...,q-1$, $(C_m)\subseteq N$. On the other hand, $|\{(1,1)\}\cup(B)\cup(C_1)\cup...\cup(C_{q-1})|=1+pq+p(q-1)=2pq-p+1<2pq$, so $N$ contains an element from one of the classes $(D_{r,m})$, $(E_s)$ or $(A_l)$. The first case has already been excluded. In the remaining two cases, $N$ contains $(b,1)$ and $(1,c)$, hence also $(b,c)=(b,1)(1,c)$, so $(B_1)\subseteq N$ and $|N|\geq 1+pq+p(q-1)+pq>2pq$. A contradiction.
Assume $|N|=pq$. Then $N$ has no element of order $2$ and, since $N\neq N_{pq}^1$, $C_m\in N$ or $D_{r,m}\in N$ for some $m\neq 0$ and $q\nmid r$. The latter case implies $|N|>pq$, as before. If $C_m\in N$, then $\langle C_m\rangle\leq N$ and thus $(C_1)\cup...\cup(C_{q-1})\subseteq N$. Since $|\{(1,1)\}\cup(C_1)\cup...\cup(C_{q-1})|=1+p(q-1)<pq$, it follows that $N$ contains an element of one of the types $A_l$, $F_s$ or $E_t$, where $p,q\nmid l$, $s=1,...,q-1$ and $t=1,...,p-1$. If $A_l\in N$, we obtain a contradiction, for $\langle A_l\rangle=N_{pq}^1$ leads to $|N|>pq$. If $F_s\in N$, then $(F_1)\cup...\cup(F_{q-1})\subseteq N$. On the other hand, by Cauchy's theorem, there exists an element of order $p$ in $N$ and we conclude from Table \ref{table:classesGpq} that $(E_1)\cup...\cup(E_{p-1})\subseteq N$. Thus,
\begin{eqnarray*}
|N|&\geq&|\{(1,1)\}\cup\bigcup_{r=1}^{q-1}(C_r)\cup\bigcup_{s=1}^{q-1}(F_s)\cup\bigcup_{t=1}^{p-1}(E_t)|\\
&=&1+p(q-1)+2\cdot\frac{1}{2}(q-1)+2q\cdot\frac{1}{2}r=pq+q-1>pq.
\end{eqnarray*}
Thus, $F_s\notin N$ for any $s=1,...,q-1$ and, counting elements ($1+p(q-1)+(p-1)=pq$),
$$
N=\{(1,1)\}\cup(C_1)\cup...\cup(C_{q-1})\cup(E_1)\cup...\cup(E_{p-1})=N^2_{pq},
$$
which contradicts our assumption.
Let $|N|=q$ and $g\in N$ be an element of order $q$. If $g\in (C_m)$ or $g\in (D_{r,m})$ for some $m\neq 0$ and $q\nmid r$, we conclude from Table \ref{table:classesGpq} that this implies $|N|>q$. Thus, $g\in (F_s)$, which forces $N=\langle g\rangle=N_q$, contrary to our assumption.
If $|N|=p$, then every nontrivial element of $N$ is of order $p$ and thus lies in one of the classes $(E_s)$, so $N\leq\{(1,1)\}\cup(E_1)\cup...\cup(E_{p-1})=N_p$ and, comparing orders, $N=N_p$. A contradiction.
\end{proof}
\begin{corollary}\label{corollary:largeGpq}
$\mathcal{O}^p(G_{p,q})=G_{p,q},\:\mathcal{O}^q(G_{p,q})=N_{2pq}$ and $\mathcal{O}^2(G_{p,q})=N_{pq^2}$. Thus $\mathcal{L}(G_{p,q})=\{N_{2pq},N_{pq^2},G_{p,q}\}$.
\end{corollary}
Since $G_{p,q}$ is the semidirect product of $D_{2pq}$ and $C_q$, it can be presented as follows:
$$
G_{p,q}=\langle a,b,c|a^{pq},b^2,bab^{-1}=a^{-1},c^q,cac^{-1}=a^i,cbc^{-1}=b\rangle,
$$
and we can identify $a$ with $(a,1)$, $b$ with $(b,1)$ and $c$ with $(1,c)$.
\begin{lemma}\label{lemma:quotientsGpq}
$G_{p,q}/N^1_{pq}\cong C_{2q}$, $G_{p,q}/N_{pq}^2\cong D_{2q}$, $G_{p,q}/N_p\cong C_q\times D_{2q}$ and $G_{p,q}/N_q$ is a group not of prime power order which is not nilpotent.
\end{lemma}
\begin{proof}
Define $\varphi_{pq}^1\colon G_{p,q}\rightarrow C_{2q}=\langle d|d^{2q}=1\rangle$ by $\varphi_{pq}^1(a)=1$, $\varphi_{pq}^1(b)=d^q$, $\varphi_{pq}^1(c)=d^2$. Obviously
$$
\varphi_{pq}^1(a^{pq})=\varphi_{pq}^1(b^2)=\varphi_{pq}^1(c^q)=\varphi_{pq}^1(baba)=1
$$
and
$$
\varphi_{pq}^1(cac^{-1})=\varphi_{pq}^1(a^i)=\varphi_{pq}^1(cbc^{-1}b^{-1})=1
$$
and $\varphi_{pq}^1$ is a well-defined group homomorphism. Since $\gcd(2,q)=1$, the elements $d^2$ and $d^q$ generate $C_{2q}$, so $\varphi_{pq}^1$ is surjective. Moreover, $\varphi_{pq}^1(b^{\varepsilon}a^lc^m)=d^{\varepsilon q+2m}=1$ if and only if $\varepsilon=0$ and $m=0$, so $\operatorname{Ker}\varphi_{pq}^1=N^1_{pq}$. Thus $G_{p,q}/N^1_{pq}\cong C_{2q}$.
Let $\varphi_{pq}^2\colon G_{p,q}\rightarrow D_{2q}=\langle d,e|d^q=e^2=1,ede=d^{-1}\rangle$ be given by $\varphi_{pq}^2(a)=d$, $\varphi_{pq}^2(b)=e$ and $\varphi_{pq}^2(c)=1$. We have
$$
\varphi_{pq}^2(a^{pq})=\varphi_{pq}^2(b^2)=\varphi_{pq}^2(c^q)=\varphi_{pq}^2(baba)=\varphi_{pq}^2(cbc^{-1}b^{-1})=1
$$
and
$$
\varphi_{pq}^2(cac^{-1})=d=d^i=\varphi_{pq}^2(a^i).
$$
Thus $\varphi_{pq}^2$ is a well-defined epimorphism. Moreover, $$b^{\varepsilon}a^lc^m\in\operatorname{Ker}\varphi_{pq}^2\Leftrightarrow e^{\varepsilon}d^l=1\Leftrightarrow \varepsilon=0,q|l,m=0,...,q-1$$
and $\operatorname{Ker}\varphi_{pq}^2=\{(a^{qs},c^m)|s=0,...,p-1,m=0,...,q-1\}=N^2_{pq}$.
Put $\varphi_{p}\colon G_{p,q}\rightarrow C_q\times D_{2q}=\langle d|d^q=1\rangle\times\langle e,f|e^q=f^2=1,fef=e^{-1}\rangle$, $\varphi_{p}(a)=(1,e)$, $\varphi_{p}(b)=(1,f)$ and $\varphi_{p}(c)=(d,1)$. Obviously,
$$
\varphi_{p}(a^{pq})=\varphi_{p}(b^2)=\varphi_{p}(c^q)=\varphi_{p}(baba)=\varphi_{p}(cbc^{-1}b^{-1})=(1,1).
$$
Moreover, $\varphi_{p}(cac^{-1})=(1,e)$ and $\varphi_{p}(a^i)=(1,e^i)$. Since $i\equiv 1\Mod{q}$, it follows that $(1,e^i)=(1,e)$. Hence $\varphi_{p}$ is a well-defined homomorphism. Obviously, $\varphi_{p}$ is surjective and $b^{\varepsilon}a^lc^m\in\operatorname{Ker}(\varphi_{p})\Leftrightarrow m=0,\varepsilon=0,q|l$. Therefore $\operatorname{Ker}(\varphi_{p})=\{(a^{qs},1)|s=0,...,p-1\}=N_p$.
Since the order of $G_{p,q}/N_q$ equals $2pq$, $G_{p,q}/N_q$ is not of prime power order. Suppose to the contrary that $G_{p,q}/N_q$ is nilpotent. Then $G_{p,q}/N_q$ is the direct product of its Sylow subgroups and, since $|G_{p,q}/N_q|$ is the product of three distinct primes, these Sylow subgroups are cyclic of prime orders, so $G_{p,q}/N_q$ would have to be cyclic. Let $dN_q$ be a generator of $G_{p,q}/N_q$. If $d=(ba^l,c^m)$ for some $l=0,...,pq-1$ and $m=0,...,q-1$, then $|d|\leq 2q$ and it follows that $(dN_q)^{2q}=1$ in $G_{p,q}/N_q$. This contradicts the fact that $G_{p,q}/N_q$ is of order $2pq>2q$. Thus, $d=(a^l,c^m)$. If $m\neq 0$, then $d^q=(1,1)$ and we obtain a contradiction. Hence, $d=(a^l,1)$. In this case, however, $d^p=(a^{pl},1)\in N_q$, which leads, again, to a contradiction.
\end{proof}
\subsection{$G_{p,q}$ is a special Oliver group}
We will need the following results of Sumi.
\begin{theorem}\label{theorem:Sumi2012}\cite{Sumi2012}[Theorem 1.2]
Let $G$ be a group with no large subgroup of prime power order. Suppose, moreover, that $[G\colon\mathcal{O}^2(G)]=2$, that $\mathcal{O}^{p_0}(G)\neq G$ for a unique odd prime $p_0$, that $G$ has no element of order divisible by $4$, and that there is an element $g\in G$ of order $2$ not belonging to $\mathcal{O}^2(G)$ such that $2|\mathcal{O}^2(C_G(g))|\geq |C_G(g)|$. Then $G$ is not a gap group.
\end{theorem}
\begin{lemma}\label{lemma:Sumi2004}\cite{Sumi2004}[p.35, first paragraph]
If $G$ is a group which has a large subgroup of prime power order, then $G$ is not a gap group.
\end{lemma}
\begin{lemma}\label{lemma:sumi2001}\cite{Sumi2001}[pp. 982,984]
For any $n\geq 3$, the dihedral group $D_{2n}$ is not a gap group.
\end{lemma}
Now, we can prove the following.
\begin{lemma}\label{lemma:notGapSubgroups}
$N_{pq}^1$, $N_{2pq}$, $N_{pq^2}$ and $G_{p,q}$ are not gap groups.
\end{lemma}
\begin{proof}
Let us prove that $G_{p,q}$ is not a gap group by means of Theorem \ref{theorem:Sumi2012}. By Corollary \ref{corollary:largeGpq}, $G_{p,q}$ has no large subgroup of prime power order, $[G_{p,q}\colon\mathcal{O}^2(G_{p,q})]=2$ and $\mathcal{O}^{p_0}(G_{p,q})\neq G_{p,q}$ precisely for the odd prime $p_0=q$. Since, moreover, $G_{p,q}$ has no element of order divisible by $4$, it suffices to show that there exists an element $g\in G_{p,q}$ of order $2$ not belonging to $\mathcal{O}^2(G_{p,q})=N_{pq^2}$ such that $2|\mathcal{O}^2(C_{G_{p,q}}(g))|\geq |C_{G_{p,q}}(g)|$. We show that this holds for $g=(b,1)$. We have
\begin{eqnarray*}
(b^{\varepsilon}a^l,c^m)(b,1)=(b,1)(b^{\varepsilon}a^l,c^m)\Leftrightarrow (b^{\varepsilon}a^lb,c^m)=(b^{1+\varepsilon}a^l,c^m)
\end{eqnarray*}
which holds if and only if $l=0$. Thus $C_{G_{p,q}}(g)=\{(b^{\varepsilon},c^m)|\varepsilon=0,1,m=0,...,q-1\}\cong C_{2q}$. Obviously $\mathcal{O}^2(C_{2q})\cong C_q$ and therefore the inequality $2|\mathcal{O}^2(C_G(g))|\geq |C_G(g)|$ holds. Hence $G_{p,q}$ is not a gap group.
Note that both $N_{pq}^1$ and $N_{pq^2}$ contain $N_p$ as a normal subgroup (since $N_p\trianglelefteq G_{p,q}$). Thus $\mathcal{O}^q(N_{pq}^1)=\mathcal{O}^q(N_{pq^2})=N_p$ and $N_p$ is a large subgroup of prime power order of both $N_{pq}^1$ and $N_{pq^2}$. Hence, by Lemma \ref{lemma:Sumi2004}, $N_{pq}^1$ and $N_{pq^2}$ are not gap groups.
The statement for $N_{2pq}\cong D_{2pq}$ is a direct corollary of Lemma \ref{lemma:sumi2001}.
\end{proof}
\begin{lemma}\label{lemma:notGNil}
$G_{p,q}$ has no cyclic quotient of odd composite order and $G_{p,q}$ does not satisfy the Sumi $G_{p,q}^{\rm nil}$-condition.
\end{lemma}
\begin{proof}
By Lemmas \ref{lemma:uniqueNormal} and \ref{lemma:quotientsGpq}, $G_{p,q}$ has no cyclic quotient of odd composite order. Moreover, it follows from Lemma \ref{lemma:quotientsGpq} that $G_{p,q}^{\rm nil}=N^1_{pq}$. Assume $xN^1_{pq}=yN_{pq}^1$ for some elements $x,y\in G_{p,q}$ of even order. This means that $x=(ba^l,c^m)$ and $y=(ba^{l'},c^{m'})$ for some $l,l'=0,...,pq-1$ and $m,m'=0,...,q-1$ and
$$
xy^{-1}=(a^{l'i^{m-m'}-l},c^{m-m'})\in N_{pq}^1.
$$
Thus $m'=m$ and $(x)=(y)$ by Table \ref{table:classesGpq}; in particular, any two elements of even order representing the same coset of $G_{p,q}^{\rm nil}$ are real conjugate in $G_{p,q}$.
Suppose now that there exist $x',y'\in G_{p,q}$ of composite order such that one of them, say $x'$, is of odd order. Then $x'$ is of order $pq$ and, by Table \ref{table:classesGpq}, $\langle x'\rangle=N_{pq}^1$. Thus, any subgroup of $G_{p,q}$ containing both $x'$ and $y'$ must have $N_{pq}^1$ as a subgroup. These subgroups are precisely $N_{pq}^1$, $N_{2pq}$, $N_{pq^2}$ and $G_{p,q}$. We showed in Lemma \ref{lemma:notGapSubgroups} that they are not gap groups. This shows that $G_{p,q}$ does not satisfy the Sumi $G_{p,q}^{\rm nil}$-condition.
\end{proof}
\begin{lemma}\label{lemma:normalNorCyclicQuotients}
$N_{pq^2}$ has no normal subgroup $P$ of prime power order such that the quotient $N_{pq^2}/P$ is cyclic. The same statement holds for $N_{2pq}$.
\end{lemma}
\begin{proof}
Suppose that $P\trianglelefteq N_{pq^2}$ and $N_{pq^2}/P$ is cyclic. Then $|P|\in\{1,p,q,q^2\}$. If $|P|=1$, then $N_{pq^2}/P\cong N_{pq^2}$ is not abelian and we obtain a contradiction. The case $|P|=q^2$ is not possible by the proof of Lemma \ref{lemma:largeNpq2}. Let $|P|=q$. A normal subgroup of order $q$ is a union of conjugacy classes and, by Table \ref{table:charactersNpq2}, the classes of elements of order $q$ have sizes $1$ or $p$, so $P$ consists of the identity and $q-1$ central elements, that is, $P=N_q$. Suppose $N_{pq^2}/P=\langle(a^l,c^m)P\rangle$. If $m\neq 0$, then $(a^l,c^m)$ has order $q$, while if $m=0$, then $(a^l,1)^p=(a^{pl},1)\in N_q=P$. In both cases $|N_{pq^2}/P|\leq\max(p,q)<pq$, a contradiction. Therefore $|P|=p$. In this case, it follows from Lemma \ref{lemma:isom} and Table \ref{table:charactersNpq2} that $P=N_p$. Suppose $N_{pq^2}/P=\langle(a^l,c^m)P\rangle$. Since $(a^l,c^m)^q=(1,1)$ for $m\neq 0$ and $(a^l,1)^q=(a^{ql},1)\in N_p=P$ for $m=0$, we get $|N_{pq^2}/P|\leq q<q^2$, which is not possible.
Assume that there exists $P\trianglelefteq N_{2pq}$ of prime power order such that $N_{2pq}/P$ is cyclic. Then $|P|\in\{1,2,p,q\}$. Obviously, $P$ cannot be the trivial subgroup and, since there is no normal subgroup of order $2$ in $N_{2pq}$, it follows that $|P|\in\{p,q\}$. This means that $P$ is a subgroup of $N_{pq}^1$. If $|P|=p$, then $P=\{(a^{qs},1)|s=0,...,p-1\}$. Since $|(ba^l,1)|=2$ for any $l=0,...,pq-1$, it follows that $N_{2pq}/P=\langle (a^l,1)P\rangle$. Suppose $(a^l,1)^nP=(ba^{l'},1)P$ for some $n\geq 0$ and $l'=0,...,pq-1$. This means that $(ba^{l'-l},1)=(ba^{l'},1)(a^l,1)^{-1}\in P$, a contradiction, which implies that $N_{2pq}/P$ is not cyclic. The case $|P|=q$ is analogous.
\end{proof}
\begin{lemma}\label{lemma:specialOliver}
$G_{p,q}$ is a special Oliver group.
\end{lemma}
\begin{proof}
Obviously, $G_{p,q}$ is not of odd order. Since $D_{2pq}$ and $C_q$ are solvable groups, it follows that $G_{p,q}$, as the semidirect product of $D_{2pq}$ and $C_{q}$, is solvable as well. Moreover, by Lemma \ref{lemma:notGNil}, we know that $G_{p,q}$ has no cyclic quotient of odd composite order and does not satisfy the Sumi $G_{p,q}^{\rm nil}$-condition. Thus, we only have to show that $G_{p,q}$ is an Oliver group. Suppose, to the contrary, that this is not true. Then, there exist subgroups $P\trianglelefteq H\trianglelefteq G_{p,q}$ such that $G_{p,q}/H$ and $P$ are of prime power orders and $H/P$ is cyclic. Then, by Lemma \ref{lemma:ordersNormalSubgroups}, $|H|\in\{2pq,pq^2,2pq^2\}$ and thus, by Lemma \ref{lemma:uniqueNormal}, $H\in\{N_{2pq},N_{pq^2},G_{p,q}\}$. However, by Lemmas \ref{lemma:uniqueNormal}, \ref{lemma:quotientsGpq} and \ref{lemma:normalNorCyclicQuotients}, it follows that none of the groups $N_{2pq}$, $N_{pq^2}$ and $G_{p,q}$ has a normal subgroup of prime power order such that the quotient by it is cyclic. This concludes the proof.
\end{proof}
\section{When is $\operatorname{Ind}^G_H\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G)$ a monomorphism?}
Assume $H$ is a subgroup of a group $G$ and consider the induction homomorphism $$\operatorname{Ind}_H^G\colon\operatorname{RO}(H)\rightarrow\operatorname{RO}(G),\; U-V\mapsto \operatorname{Ind}^G_H(U)-\operatorname{Ind}^G_H(V).$$
\begin{theorem}\label{theorem:inducedCharacterFormula}\cite[21.23. Theorem]{Liebeck2001}
Let $\chi$ be a character of $H$ and $g\in G$. Then, we have two possibilities.
\begin{enumerate}[(1)]
\item if $H\cap (g)=\emptyset$, then $\operatorname{Ind}_H^G(\chi)(g)=0$.
\item if $H\cap (g)\neq\emptyset$, then
$$
\operatorname{Ind}_{H}^G(\chi)(g)=|C_G(g)|\Big(\frac{\chi(h_1)}{|C_H(h_1)|}+\ldots+\frac{\chi(h_m)}{|C_H(h_m)|}\Big),
$$
\end{enumerate}
where $C_K(x)$ denotes the centralizer of the element $x$ of the group $K$ and $h_1,\ldots,h_m$ are the representatives of all the distinct conjugacy classes in $H$ of the elements of the set $H\cap (g)$.
\end{theorem}
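The formula can be sanity-checked by brute force on a small case. The following Python sketch (an illustration only; the group, the character, and all function names are ad hoc choices, not part of the paper) compares the centralizer formula of the theorem with the direct definition $\operatorname{Ind}_H^G(\chi)(g)=\frac{1}{|H|}\sum_{x\in G,\,x^{-1}gx\in H}\chi(x^{-1}gx)$ for $G=S_3$, $H=A_3\cong C_3$ and a nontrivial character $\chi$ of $H$.

```python
import cmath
from itertools import permutations

# G = S3 as permutations of {0,1,2}; composition is (p*q)(i) = p[q[i]].
G = [tuple(p) for p in permutations(range(3))]

def mul(p, q):
    return tuple(p[q[i]] for i in range(3))

def inv(p):
    r = [0, 0, 0]
    for i, pi in enumerate(p):
        r[pi] = i
    return tuple(r)

# H = A3, cyclic of order 3; chi a nontrivial character chi(r^k) = omega^k.
r3 = (1, 2, 0)
H = [(0, 1, 2), r3, mul(r3, r3)]
omega = cmath.exp(2j * cmath.pi / 3)
chi = {H[0]: 1, H[1]: omega, H[2]: omega ** 2}

def centralizer(K, g):
    return [x for x in K if mul(x, g) == mul(g, x)]

def conj_class(K, g):
    return {mul(mul(x, g), inv(x)) for x in K}

def induced_definition(g):
    """Direct definition: (1/|H|) * sum of chi(x^-1 g x) over x with x^-1 g x in H."""
    return sum(chi[mul(mul(inv(x), g), x)]
               for x in G if mul(mul(inv(x), g), x) in chi) / len(H)

def induced_theorem(g):
    """Centralizer formula of the theorem, summed over H-class representatives."""
    meet = conj_class(G, g) & set(H)
    if not meet:
        return 0
    reps, seen = [], set()
    for h in meet:
        if h not in seen:
            reps.append(h)
            seen |= conj_class(H, h)
    return len(centralizer(G, g)) * sum(chi[h] / len(centralizer(H, h)) for h in reps)

# The two computations agree on every element of G.
for g in G:
    assert abs(induced_definition(g) - induced_theorem(g)) < 1e-9
```

For $g$ a 3-cycle both formulas give $\omega+\omega^2=-1$, the value of the 2-dimensional character of $S_3$, while for a transposition, whose class misses $H$, both give $0$.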
The above theorem shows that we have a well-defined restriction
$$
\operatorname{Ind}^G_H\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G).
$$
Let $s$ denote the number of real conjugacy classes of $G$ which have nonempty intersection with $H$ and whose elements are not of prime power order. Put $t=\operatorname{prim}(H)$ and let $m$ be the number of real conjugacy classes of $G$. Obviously, $s\leq t$.
Consider the image $\Ima(\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G))$. Clearly, it is a torsion-free subgroup of $\operatorname{PO}(G)$ and it has a well-defined rank as the unique $r\geq 0$ such that $\Ima(\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G))\cong\mathbb{Z}^r$. Thus, $\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G)$ is a monomorphism if and only if $r=t$.
If $A$ is a matrix with entries in the field $K$, then we denote by $\operatorname{rank}_K(A)$ the rank of $A$ over $K$.
\begin{lemma}\label{lemma:trace}
Assume $A\in\operatorname{GL}(n,\mathbb{C})$ is of finite order. Then $\operatorname{tr}(A^{-1})=\overline{\operatorname{tr}(A)}$.
\end{lemma}
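The lemma holds because a finite-order matrix is diagonalizable with all eigenvalues roots of unity, and $\lambda^{-1}=\overline{\lambda}$ whenever $|\lambda|=1$. A quick numerical illustration in Python (the matrices $D$ and $P$ below are arbitrary choices made only for this check):

```python
import cmath

def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def mat_inv(A):
    # Inverse of an invertible 2x2 complex matrix.
    (a, b), (c, d) = A
    det = a * d - b * c
    return [[d / det, -b / det], [-c / det, a / det]]

def trace(A):
    return A[0][0] + A[1][1]

# A matrix of finite order: a conjugate of diag(i, omega), where omega is a
# primitive cube root of unity; then A^12 is the identity.
omega = cmath.exp(2j * cmath.pi / 3)
D = [[1j, 0], [0, omega]]
P = [[1, 1], [0, 1]]
A = mat_mul(mat_mul(P, D), mat_inv(P))

# tr(A^-1) equals the complex conjugate of tr(A).
assert abs(trace(mat_inv(A)) - trace(A).conjugate()) < 1e-12
```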
\begin{lemma}\label{lemma:boundForRank}
The rank of $\Ima(\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G))$ is at most $s$, that is $r\leq s$.
\end{lemma}
\begin{proof}
Pick bases $\epsilon=\{\varepsilon_1,...,\varepsilon_{t}\}$ and $\epsilon'=\{\varepsilon_1',...,\varepsilon_{t'}'\}$ of $\operatorname{PO}(H)$ and $\operatorname{PO}(G)$, respectively ($t'=\operatorname{prim}(G)$). We use the column convention for elements of $\operatorname{PO}(H)$ and $\operatorname{PO}(G)$: we represent them as $t\times 1$ and $t'\times 1$ vectors, respectively, with coordinates given by the bases $\epsilon$ and $\epsilon'$ accordingly. The induction map is linear; denote by $M$ its matrix in the bases $\epsilon$ and $\epsilon'$.
Let $(g_1)^{\pm},...,(g_{t'})^{\pm}$ be the ordered list of all real conjugacy classes of $G$ whose elements are not of prime power order and let $\chi$ be the map which evaluates the characters of the elements of $\operatorname{PO}(G)$ on the classes $(g_i)^{\pm}$ for $i=1,...,t'$. Note by Lemma \ref{lemma:trace} that $\chi$ is well-defined and
$$
\chi\colon\operatorname{PO}(G)\rightarrow\mathbb{R}^{t'}
$$
$$
\varepsilon_j'\mapsto\begin{pmatrix}
\varepsilon_j'(g_1) \\
\vdots \\
\varepsilon_j'(g_{t'})
\end{pmatrix}.
$$
Let $X=(\chi_{ij})_{1\leq i,j\leq t'}$ be the matrix of $\chi$, that is $\chi_{ij}=\varepsilon_j'(g_i)$. Clearly, $\operatorname{rank}_{\mathbb{R}}(X)=t'$. Consider the composition
$$
\chi\circ\operatorname{Ind}^G_H\colon\operatorname{PO}(H)\xrightarrow{\operatorname{Ind}_H^G}\operatorname{PO}(G)\xrightarrow{\chi}\mathbb{R}^{t'}.
$$
The matrix of $\chi\circ\operatorname{Ind}^G_H$ is a $t'\times t$ matrix $A=(a_{ij})_{1\leq i\leq t',1\leq j\leq t}$ given by $A=XM$. Thus, $a_{ij}=\operatorname{Ind}^G_H(\varepsilon_j)(g_i)$ for $1\leq i\leq t'$ and $1\leq j\leq t$. It follows from Theorem \ref{theorem:inducedCharacterFormula} that $\operatorname{rank}_{\mathbb{R}}(A)\leq s$. On the other hand, since $\operatorname{rank}_{\mathbb{R}}(X)=t'$, it follows that $\operatorname{rank}_{\mathbb{R}}(M)=\operatorname{rank}_{\mathbb{R}}(A)$. Therefore $\operatorname{rank}_{\mathbb{R}}(M)\leq s$.
Now, since $M$ is an integer matrix and $\mathbb{R}$ is an extension of $\mathbb{Q}$, we conclude that the real rank of $M$ equals its rational rank, that is $\operatorname{rank}_{\mathbb{R}}(M)=\operatorname{rank}_{\mathbb{Q}}(M)=r'$. We show that $r=r'$ which would mean that $r\leq s$ and would complete the proof.
Obviously, $r\geq r'$. Let $V=\langle \varepsilon_1',...,\varepsilon_{t'}'\rangle$. Take any $r'+1$ elements $v_1,...,v_{r'+1}$ from $\Ima(\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G))$. They can be considered as vectors in $V$. Note that they are linearly dependent (over $\mathbb{Q}$), since the dimension of $\Ima(\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G))$ considered as a subspace of $V$ equals $r'$. Let
\begin{equation}\label{equation:linearCombination}
\alpha_1v_1+...+\alpha_{r'+1}v_{r'+1}=0
\end{equation}
be a nontrivial combination. Suppose $\{\alpha_{i_1},...,\alpha_{i_k}\}$ is the set of all nonzero coefficients and $\alpha_{i_j}=p_j/q_j$, where $p_j,q_j\in\mathbb{Z}\setminus\{0\}$ for $1\leq j\leq k$. Multiplying both sides of equality (\ref{equation:linearCombination}) by $q_1\cdots q_k$, we get a nontrivial integer combination of the $v_j$'s. Thus $r\leq r'$ and, as a result, $r=r'$.
\end{proof}
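The step $\operatorname{rank}_{\mathbb{R}}(M)=\operatorname{rank}_{\mathbb{Q}}(M)$ used in the proof reflects the general fact that Gaussian elimination on a matrix with rational entries never leaves $\mathbb{Q}$. The following Python sketch (an illustration only; the sample matrix is an arbitrary choice) runs the same elimination once exactly over $\mathbb{Q}$ and once in floating point:

```python
from fractions import Fraction

def rank(M):
    """Row-reduce a copy of M over the field of its entries and count pivots."""
    M = [row[:] for row in M]
    rows, cols = len(M), len(M[0])
    r = 0
    for c in range(cols):
        pivot = next((i for i in range(r, rows) if M[i][c] != 0), None)
        if pivot is None:
            continue
        M[r], M[pivot] = M[pivot], M[r]
        for i in range(rows):
            if i != r and M[i][c] != 0:
                f = M[i][c] / M[r][c]
                M[i] = [x - f * y for x, y in zip(M[i], M[r])]
        r += 1
    return r

M = [[2, 4, 1], [1, 2, 3], [3, 6, 4]]  # integer matrix; third row = first + second
rank_Q = rank([[Fraction(x) for x in row] for row in M])  # exact rank over Q
rank_R = rank([[float(x) for x in row] for row in M])     # rank computed over R
assert rank_Q == rank_R == 2
```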
\begin{lemma}\label{lemma:indMonoGeneral}
$\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G)$ is a monomorphism if and only if $(h)_G^{\pm}\cap H=(h)_H^{\pm}$ for any $h\in H$ not of prime power order.
\end{lemma}
\begin{proof}
Suppose that for any $h\in H$ not of prime power order we have $(h)^{\pm}_G\cap H=(h)^{\pm}_H$. Let $x_1$ and $x_2$ be two different elements of $\operatorname{PO}(H)$. We must show that $\operatorname{Ind}_H^G(x_1)\neq\operatorname{Ind}_H^G(x_2)$. There exists $h\in H$ not of prime power order with $x_1(h)\neq x_2(h)$. We have two possibilities. The first one is when $(h)_H=(h^{-1})_H=(h)_G^{\pm}\cap H$. Thus
$$(h)_G\cap H=(h^{-1})_G\cap H=(h)^{\pm}_G\cap H=(h)^{\pm}_H=(h)_H$$
and it follows by Theorem \ref{theorem:inducedCharacterFormula} that
$$
\begin{tabular}{ccc}
$
\operatorname{Ind}_H^G(x_1)(h)=\frac{|C_G(h)|}{|C_H(h)|}x_1(h)
$
&
and
&
$
\operatorname{Ind}_H^G(x_2)(h)=\frac{|C_G(h)|}{|C_H(h)|}x_2(h).
$
\end{tabular}
$$
Therefore $\operatorname{Ind}_H^G(x_1)(h)\neq\operatorname{Ind}_H^G(x_2)(h)$ since $x_1(h)\neq x_2(h)$. In the second possibility, we have $(h)_H\neq(h^{-1})_H$. If $(h)_G\cap H=(h)_H$, we have already proved the assertion. Assume $(h)_H\subsetneq(h)_G\cap H$. Note that $$((h)_G\cap H)\cup((h^{-1})_G\cap H)=(h)_G^{\pm}\cap H=(h)_H\cup(h^{-1})_H.$$ Clearly $(h^{-1})_H\subseteq(h^{-1})_G\cap H$, which, together with $(h)_H\subsetneq(h)_G\cap H$ and the equalities above, gives $(h)_G\cap H=(h^{-1})_G\cap H=(h)_H\cup(h^{-1})_H$. Note that $|C_H(h)|=|C_H(h^{-1})|$. Thus by Theorem \ref{theorem:inducedCharacterFormula} and Lemma \ref{lemma:trace}, we get
$$
\begin{tabular}{ccc}
$
\operatorname{Ind}_H^G(x_1)(h)=2\frac{|C_G(h)|}{|C_H(h)|}x_1(h)
$
&
and
&
$
\operatorname{Ind}_H^G(x_2)(h)=2\frac{|C_G(h)|}{|C_H(h)|}x_2(h).
$
\end{tabular}
$$
Thus $\operatorname{Ind}_H^G(x_1)(h)\neq\operatorname{Ind}_H^G(x_2)(h)$.
We now prove the converse. Suppose $\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G)$ is a monomorphism. Assume to the contrary that there exists $h\in H$ not of prime power order with $(h)^{\pm}_G\cap H\neq(h)^{\pm}_H$. Then $(h)^{\pm}_H\subsetneq(h)^{\pm}_G\cap H$ and thus $s<t$. Hence, it follows by Lemma \ref{lemma:boundForRank} that $\operatorname{rank}(\Ima(\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G)))<t$, and so $\operatorname{Ind}_H^G\colon\operatorname{PO}(H)\rightarrow\operatorname{PO}(G)$ is not injective, contradicting our assumption.
\end{proof}
\begin{corollary}\label{corollary:inducedNormal}
Assume $N$ is a normal subgroup of $G$. Then $\operatorname{Ind}_N^G\colon\operatorname{PO}(N)\rightarrow\operatorname{PO}(G)$ is a monomorphism if and only if $(n)_G^{\pm}=(n)_N^{\pm}$ for any $n\in N$ not of prime power order.
\end{corollary}
\begin{corollary}\label{corollary:inductionNpq2}
$\operatorname{Ind}_{N_{pq^2}}^{G_{p,q}}\colon\operatorname{PO}(N_{pq^2})\rightarrow\operatorname{PO}(G_{p,q})$ is a monomorphism.
\end{corollary}
\begin{proof}
By Corollary \ref{corollary:inducedNormal}, it suffices to show that for any $n\in N_{pq^2}$ not of prime power order, we have $(n)^{\pm}_{G_{p,q}}=(n)^{\pm}_{N_{pq^2}}$. We know by Table \ref{table:classesGpq} that $n$ has to be of order $pq$ and $n=(a^l,1)$ for some $l$ divisible by neither $p$ nor $q$. From the proof of Lemma \ref{lemma:primaryNumber}, we know that $n^{-1}=(a^{-l},1)$ and $n$ are conjugate in $G_{p,q}$. Thus $(n)^{\pm}_{G_{p,q}}=(n)_{G_{p,q}}$. On the other hand, $n$ and $n^{-1}$ are not conjugate in $N_{pq^2}$. Otherwise, there would exist $(a^{l'},c^{m'})$ such that
$$
(a^{l'},c^{m'})(a^l,1)(a^{l'},c^{m'})^{-1}=(a^{-l},1).
$$
Thus $(a^{li^{m'}},1)=(a^{-l},1)$, which cannot be true since $li^{m'}\equiv l\not\equiv -l\Mod{q}$ as $q\nmid l$. Therefore $(n^{-1})_{N_{pq^2}}\neq(n)_{N_{pq^2}}$ and it follows by Table \ref{table:charactersNpq2} that $|(n)^{\pm}_{N_{pq^2}}|=2q$. On the other hand, $|(n)^{\pm}_{G_{p,q}}|=|(n)_{G_{p,q}}|=2q$, and the assertion follows.
\end{proof}
\section{Proof of Theorem \ref{theorem:main}}
We use the following result of Morimoto.
\begin{theorem}\cite[Theorem 1.9]{Morimoto2012}\label{theorem:mor}
Let $H$ be a subgroup of an Oliver group $G$. If $U-V\in\operatorname{PO}_{\rm w}^{\mathcal{L}}(H)$, then there exists an $\mathbb{R}G$-module $W$ such that $\operatorname{Ind}^G_H(U)\oplus W$ and $\operatorname{Ind}^G_H(V)\oplus W$ are Smith equivalent $\mathbb{R}G$-modules.
\end{theorem}
We can prove now the main theorem of this paper.
\begin{proof}[Proof of Theorem \ref{theorem:main}]
Lemmas \ref{lemma:primaryNumber} and \ref{lemma:specialOliver} tell us that $G_{p,q}$ is a special Oliver group with primary number at least $2$. By Lemma \ref{lemma:noznzeroPO}, there exist non-isomorphic $\mathbb{R}N_{pq^2}$-modules $U$ and $V$ with $U-V\in\operatorname{PO}_{\rm w}^{\mathcal{L}}(N_{pq^2})$. Thus, by Corollary \ref{corollary:inductionNpq2}, $\operatorname{Ind}^{G_{p,q}}_{N_{pq^2}}(U)$ and $\operatorname{Ind}^{G_{p,q}}_{N_{pq^2}}(V)$ are not isomorphic $\mathbb{R}G_{p,q}$-modules. Moreover, by Theorem \ref{theorem:mor}, it follows that there exists an $\mathbb{R}G_{p,q}$-module $W$ such that $\operatorname{Ind}^{G_{p,q}}_{N_{pq^2}}(U)\oplus W$ and $\operatorname{Ind}^{G_{p,q}}_{N_{pq^2}}(V)\oplus W$ are Smith equivalent. Obviously, these modules are not isomorphic.
\end{proof}
\section*{Acknowledgements}
The author would like to thank the referee for his valuable remarks. Also, I would like to thank Prof. Krzysztof Pawałowski for his comments and interest in the results presented here. I am also grateful to Dr. Marek Kaluba for remarks which substantially improved the presentation of this paper. I would like to thank all the participants of the algebraic topology seminar held at Adam Mickiewicz University for helpful comments.
\newpage
\bibliographystyle{acm}
\input{elsarticle-template-1-num.bbl}
\textcolor{white}{fsf}\\
\emph{Faculty of Mathematics and Computer Science}\\
\emph{Adam Mickiewicz University in Poznań}\\
\emph{ul Uniwersytetu Poznańskiego 4}\\
\emph{61-614 Poznań, Poland}\\
\emph{Email address:} piotr.mizerka@amu.edu.pl
\end{document}
\section{Preliminaries}
However, before proving Theorem \ref{maintheorem}, we recall several concepts and results which we shall need, and we prove a lemma.
A graph $G$ is an \emph{edge-critical block} if $\kappa(G)=2$ and $\kappa(G-e)=1$ for every edge $e$ of $G$. Let $D(G)$ be the set of edges $uv$ with $d_{G}(u), d_{G}(v)\geq 3$. If $D(G)=\emptyset$, then every edge of $G$ is incident to a vertex of degree 2; we call such a graph a \emph{DT-graph}, and a block which is a DT-graph a \emph{DT-block}.
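The set $D(G)$ and the DT property are directly computable from the degree sequence. A minimal Python sketch (graphs are given as ad hoc vertex/edge lists; $C_4$ and $K_4$ serve only as illustrations):

```python
def degrees(V, E):
    d = {v: 0 for v in V}
    for u, w in E:
        d[u] += 1
        d[w] += 1
    return d

def D(V, E):
    """Edges joining two vertices of degree at least 3."""
    d = degrees(V, E)
    return {e for e in E if d[e[0]] >= 3 and d[e[1]] >= 3}

def is_DT(V, E):
    """D(G) is empty: every edge has an endpoint of degree less than 3."""
    return not D(V, E)

# C4 (all degrees 2) is a DT-graph; K4 (all degrees 3) is not.
C4 = ([0, 1, 2, 3], [(0, 1), (1, 2), (2, 3), (3, 0)])
K4 = ([0, 1, 2, 3], [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)])
assert is_DT(*C4) and not is_DT(*K4)
assert len(D(*K4)) == 6
```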
\begin{theorem}\emph{\textbf{\cite{Fle1}}}
\label{The1}
Let $G$ be an edge-critical block. Then exactly one of the following two statements is true:
\begin{itemize}
\item[1)] $G$ is a DT-block.
\item[2)] There is an edge $f$ in $D(G)$ such that at least one of the endblocks of $G-f$ is a DT-block.
\end{itemize}
\end{theorem}
The basic result about hamiltonicity of the square of a 2-block is given by the following theorem.
\begin{theorem}\emph{\textbf{\cite{Fle2}}}
\label{2-blockcycle}
Suppose $v$ and $w$ are two arbitrarily chosen vertices of a $2$-block $G$.
Then $G^2$ contains a hamiltonian cycle $C$ such that the edges of $C$ incident to $v$ are in $G$
and at least one of the edges of $C$ incident to $w$ is in $G$. Furthermore, if $v$ and $w$ are
adjacent in $G$, then these are three different edges.
\end{theorem}
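For small 2-blocks, the conclusion of Theorem \ref{2-blockcycle} can be confirmed by exhaustive search. The following Python sketch checks, for the 2-block $K_{2,3}$ (a test case chosen only for illustration), that $G^2$ has a hamiltonian cycle whose two edges incident with a prescribed vertex $v$ both lie in $G$; it does not attempt to verify the full statement involving the second vertex $w$.

```python
from itertools import permutations

def square_edges(V, E):
    """Edge set of G^2: pairs of distinct vertices at distance at most 2 in G."""
    adj = {v: set() for v in V}
    for u, w in E:
        adj[u].add(w)
        adj[w].add(u)
    E2 = set()
    for u in V:
        for w in adj[u]:
            E2.add(frozenset((u, w)))
            for x in adj[w]:
                if x != u:
                    E2.add(frozenset((u, x)))
    return E2

def ham_cycle_both_edges_at_v(V, E, v):
    """Find a hamiltonian cycle of G^2 whose two edges at v are edges of G."""
    E1 = {frozenset(e) for e in E}
    E2 = square_edges(V, E)
    rest = [u for u in V if u != v]
    for p in permutations(rest):
        cycle = (v,) + p
        edges = [frozenset((cycle[i], cycle[(i + 1) % len(cycle)]))
                 for i in range(len(cycle))]
        if all(e in E2 for e in edges) and edges[0] in E1 and edges[-1] in E1:
            return cycle
    return None

# K_{2,3} is a 2-block; both cycle edges at 'a' can be chosen in G.
V = ['a', 'b', 'x', 'y', 'z']
E = [('a', 'x'), ('a', 'y'), ('a', 'z'), ('b', 'x'), ('b', 'y'), ('b', 'z')]
assert ham_cycle_both_edges_at_v(V, E, 'a') is not None
```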
Let $\mbox{bc}(G)$ denote the block-cutvertex graph of $G$. Blocks corresponding to leaves of $\mbox{bc}(G)$ are called \emph{endblocks}. Note that a block in a graph $G$ is either a 2-block or a bridge of $G$. The graph $G$ is called a \emph{blockchain} if $\mbox{bc}(G)$ is a path. Let $G$ be a blockchain. We denote its blocks $B_{1}, B_{2},...,B_{k}$ and cutvertices $c_{1},c_{2},...,c_{k-1}$ such that $c_{i}\in V(B_{i})\cap V(B_{i+1})$, for $i=1,2,...,k-1$. A blockchain $G$ is called \emph{trivial} if $E(\mbox{bc}(G))=\emptyset$; otherwise it is called \emph{non-trivial}. Note that only $B_{1}$ and $B_{k}$ are endblocks of a non-trivial blockchain $G$. An \emph{inner block} is a block of $G$ containing exactly 2 cutvertices. An \emph{inner vertex} is a vertex of $G$ which is not a cutvertex of $G$.
The first author proved in \cite{Ekst} the following theorem dealing with the hamiltonicity of the square of a blockchain.
\begin{theorem}\emph{\textbf{\cite{Ekst}}}
\label{blockchaincycle}
Let $G$ be a blockchain and let $u_{1}$, $u_{2}$ be arbitrary inner vertices which are contained in different endblocks of~$G$.\\
Then $G^{2}$ contains a hamiltonian cycle $C$ such that, for $i=1,2$,
\begin{mathitem}
\item[$\bullet$] if $u_{i}$ is contained in a 2-block, then both edges of $C$ incident with
$u_{i}$ are in $G$, and
\item[$\bullet$] if $u_{i}$ is not contained in a 2-block, then exactly one edge of $C$
incident with $u_{i}$ is in $G$.
\end{mathitem}
\end{theorem}
Let $G$ be a connected graph. By a \emph{$uv$-path} we mean a path from $u$ to $v$ in $G$. If a $uv$-path is hamiltonian, we call it a \emph{$uv$-hamiltonian path}. Let $A=\{x_{1},x_{2},...,x_{k}\}$ be a set of $k$ ($\geq 3$) distinct vertices in $G$. An $x_{1}x_{2}$-hamiltonian path
in $G^{2}$ which contains $k-2$ distinct edges $x_{i}y_{i}\in E(G), i=3,...,k$, is said to be $\mathcal{F}_{k}$. A graph $G$ is said to have the $\mathcal{F}_{k}$ property if, for any set $A=\{x_{1},x_{2},...,x_{k}\}\subseteq V(G)$, there is an $\mathcal{F}_{k}$ $x_{1}x_{2}$-hamiltonian path in $G^{2}$.
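The $\mathcal{F}_{4}$ property can likewise be tested exhaustively on small examples. The following Python sketch (function names and the test graph $K_{2,3}$ are ad hoc choices for illustration) searches $G^{2}$ for an $x_1x_2$-hamiltonian path containing distinct edges $x_3y_3,x_4y_4\in E(G)$:

```python
from itertools import permutations

def square_adj(V, E):
    """Adjacency of G^2: vertices at distance at most 2 in G."""
    adj = {v: set() for v in V}
    for u, w in E:
        adj[u].add(w)
        adj[w].add(u)
    return {v: (adj[v] | {x for w in adj[v] for x in adj[w]}) - {v} for v in V}

def has_F4_path(V, E, x1, x2, x3, x4):
    """Search G^2 for an x1x2-hamiltonian path containing distinct
    G-edges incident with x3 and with x4, as in the F_4 property."""
    G1 = {frozenset(e) for e in E}
    G2 = square_adj(V, E)
    inner = [v for v in V if v not in (x1, x2)]
    for p in permutations(inner):
        path = (x1,) + p + (x2,)
        pairs = list(zip(path, path[1:]))
        if any(w not in G2[u] for u, w in pairs):
            continue
        e3 = {frozenset(e) for e in pairs if x3 in e} & G1
        e4 = {frozenset(e) for e in pairs if x4 in e} & G1
        if any(a != b for a in e3 for b in e4):
            return path
    return None

# K_{2,3}: an xy-hamiltonian path in G^2 with G-edges at a and at b exists.
V = ['a', 'b', 'x', 'y', 'z']
E = [('a', 'x'), ('a', 'y'), ('a', 'z'), ('b', 'x'), ('b', 'y'), ('b', 'z')]
assert has_F4_path(V, E, 'x', 'y', 'a', 'b') is not None
```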
\begin{theorem}\emph{\textbf{\cite{FleChia}}}
\label{F_4}
Let $G$ be a 2-block. Then $G$ has the $\mathcal{F}_{4}$ property.
\end{theorem}
A graph $G$ is said to have the strong $\mathcal{F}_{3}$ property if, for any set of 3 vertices $\{x_{1},x_{2},x_{3}\}$ in $G$, there is an
$x_{1}x_{2}$-hamiltonian path in $G^{2}$ containing distinct edges $x_{3}z_{3},x_{i}z_{i}\in E(G)$ for a given $i\in\{1,2\}$. Such an
$x_{1}x_{2}$-hamiltonian path in $G^{2}$ is called a strong $\mathcal{F}_{3}$ $x_{1}x_{2}$-hamiltonian path.
\begin{theorem}\emph{\textbf{\cite{FleChia}}}
\label{strongF_3}
Every 2-block has the strong $\mathcal{F}_{3}$ property.
\end{theorem}
The following lemma is frequently used in the proofs below.
\begin{lemma}
\label{blockchainpath}
Let $G$ be a non-trivial blockchain. We choose
\begin{itemize}
\item $c_{0}\in V(B_{1})$, $c_{k}\in V(B_{k})$ which are not cutvertices;
\item $u_{i}\in V(B_{i})$ (if any) which is not a cutvertex and $v_{i}\in V(B_{i})$ such that
$u_{i}\neq v_{i}$, $u_{1}\neq c_{0}$ and $u_{k}\neq c_{k}$, for $i=1,2,...k$.
\end{itemize}
Then $G^{2}$ contains a $c_{0}c_{k}$-hamiltonian path $P$ such that there exist distinct edges $u_{i}u'_{i}, v_{i}v'_{i}\in E(B_{i})\cap E(P)$ (if $u_{i}$ exists), $i=1,2,...,k$.
\end{lemma}
\begin{proof}
If $B_{i}$ is 2-connected, then let $P_{i}$ be an $\mathcal{F}_{4}$ $c_{i-1}c_{i}$-hamiltonian path in $B_{i}^{2}$ containing 2 distinct edges $u_{i}u'_{i},v_{i}v'_{i}\in E(B_{i})$ for $v_{i}\notin \{c_{i-1},c_{i}\}$ by Theorem~\ref{F_4}; and let $P_i$ be a strong $\mathcal{F}_{3}$ $c_{i-1}c_{i}$-hamiltonian path in $B_{i}^{2}$ containing 2 distinct edges $u_{i}u'_{i},v_{i}v'_{i}\in E(B_{i})$ for $v_{i}\in \{c_{i-1},c_{i}\}$ by Theorem~\ref{strongF_3}, respectively.
If $B_{i}=c_{i-1}c_{i}$, then we set $P_{i}=B_{i}$. Note that in this case $u_{i}$ does not exist and $v_{i}\in \{c_{i-1},c_{i}\}$.
Then $P=\cup_{i=1}^{k}P_{i}$ is a $c_{0}c_{k}$-hamiltonian path in $G^{2}$ as required.
\end{proof}
The concept of EPS-graphs plays a central role in proofs of hamiltonicity of the square of a DT-graph (see \cite{Fle}). We use this concept also in one part of the proof of Theorem \ref{maintheorem}. Let $G$ be a graph. An \emph{EPS-graph} is a spanning connected subgraph $S$ of $G$ which is the edge-disjoint union of an eulerian graph $E$ (which may be disconnected) and a linear forest $P$. For $S=E\cup P$, let $d_{E}(v)$, $d_{P}(v)$ denote the degree of $v$ in $E$, $P$, respectively.
Fleischner and Hobbs introduced in \cite{FleHob} the concept of $W$-soundness of a cycle. Let $W$ be a set of vertices of $G$. A cycle $K$ is called
\emph{$W$-maximal} if $|V(K')\cap W|\leq|V(K)\cap W|$ for any cycle $K'$ of $G$. Let $K$ be a cycle of $G$ and let $W$ be a set of vertices of $G$.
A blockchain $P$ of $G-K$ is a \emph{$W$-separated $K$-to-$K$ blockchain based on vertex $x$} if a vertex of $W$ is a cutvertex of $P$, both endblocks $B$ and $B'$ of $P$ include vertices of $K$, $V(B)\cap V(K)=\{x\}$, no vertex of $K$ is a cutvertex of $P$, and $(V(P)\cap V(K))-\{x\}\subseteq V(B').$ For a given path $p=v_1,v_2,...,v_{n-1},v_n$ we let $F(p)=v_1$, $L(p)=v_n$.
\begin{definition}
\label{sound}
A cycle $K$ in $G$ is \emph{$W$-sound} if it is $W$-maximal, $|W|=5$ and the following hold:
\begin{itemize}
\item[(1)] $|V(K)\cap W|\geq 4$; or
\item[(2)] $|V(K)\cap W|=3$ and the following situation does not prevail: there are two $W$-separated $K$-to-$K$ blockchains $P$ and $Q$ of $G-K$ based on
a vertex $w$ of $W$ such that $V(P)\cap V(Q)=\{w\}$ and if $p$ is a shortest path in $P$ from $w$ to a vertex of $K$ different from $w$ and $q$
is the same for $Q$, then there is a subsequence $w,w',L(p),L(q),w'',w$ of $K$ where $w'$ and $w''$ are in $W-\{w\}$; or
\item[(3)] $|V(K)\cap W|=2$ and the following situation does not prevail: there are three $W$-separated $K$-to-$K$ blockchains $P_1,P_2$ and $P_3$ of $G-K$
based on a single vertex $a$ of $V(K)-W$, such that $V(P_i)\cap V(P_j)=\{a\}$ whenever $i$ and $j$ are distinct elements of $\{1,2,3\}$, and if
$p_i$ is a shortest path in $P_i$ from $a$ to a vertex of $K$ different from $a$ for each $i\in \{1,2,3\}$, then there is a subsequence
$a,w',L(p_1),L(p_2),L(p_3),w'',a$ of $K$ where $\{w',w''\}=V(K)\cap W$.
\end{itemize}
\end{definition}
We observe that Definition \ref{sound} is basically the content of Lemma 1 in \cite{FleHob}. That is, said lemma guarantees that for every choice $W\subseteq V(G)$ with $|W|=5$ in a 2-block $G$ of order at least 5, there is a $W$-sound cycle in $G$.
\begin{theorem}\emph{\textbf{\cite{FleHob}}}
\label{The2}
Let $G$ be a 2-block and $W$ a set of five distinct vertices in $G$, and let $K$ be a $W$-sound cycle in $G$. Then there is an EPS-graph
$S=E\cup P$ of $G$ such that $K\subseteq E$ and $d_{P}(w)\leq 1$ for every $w\in W$.
\end{theorem}
\section{Proof of Theorem \ref{maintheorem}}
\begin{proof}
First we prove that $G$ has the $\mathcal{H}_{4}$ property. We proceed by contradiction, choosing a counterexample $G$ with $|V(G)|+|E(G)|$ minimal. It follows that $G$ is an edge-critical block and, in particular, $|V(G)|\geq 5$. We distinguish cases by the number of edges in $D(G)$. The reader is advised to draw figures where he/she deems it necessary to follow our case distinctions.
\noindent
\emph{Case 1.} $|D(G)|>0.$
By Theorem~\ref{The1}, let $f=x'x\in D(G)$ be an edge such that $d_{G}(x'), d_{G}(x)\geq 3$.
Then $G-f$ is a blockchain and both endblocks $B',B$ of $G-f$ are 2-blocks. Set
$X=\{x_1,x_2,x_3,x_4\}$. Without loss of generality assume that $|X\cap(V(B)-y)|\leq 2$
(otherwise we consider $B'$ instead of $B$); i.e., at most $x_{1},x_2\in V(B)-y$, say, where
$x, y\in V(B)$ and $y$ is a cutvertex of $G-f$. We distinguish the following 3 subcases.
\medskip
\emph{Subcase 1.1}: $|X\cap(V(B)-y)|=2$; i.e., $x_1, x_2\in V(B)-y.$
Then $B^2$ has an $xy$-hamiltonian path $P_1$ containing different edges $x_1y_1$, $x_2y_2$ of
$E(G)$ for certain $y_1,y_2$ by Theorem \ref{F_4} or by Theorem \ref{strongF_3} if $x_1=x$ or
$x_2=x$; and $(G-B)^2$ has an $xy$-hamiltonian path $P_2$ containing different edges
$x_3y_3,x_4y_4$ of $E(G)$ for certain $y_3,y_4$ by Lemma \ref{blockchainpath}. Now $P_1 \cup P_2$
is a required hamiltonian cycle in $G^2$, a contradiction. Note that $x_3,x_4\in V(B')-y'$
where $y'\in V(B')$ is a cutvertex of $G-f$, otherwise we can use $B'$ instead of $B$ and
$x_3$ or $x_4$ instead of $x_1$ or $x_2$ (see \emph{Subcase 1.2} or \emph{Subcase 1.3} below).
\medskip
\emph{Subcase 1.2}: $|X\cap(V(B)-y)|=1$; i.e., $x_1 \in V(B)-y$ and $x_2 \notin V(B)-y.$
\emph{(1.2.1)} Assume that $x_2,x_3,x_4$ are not inner vertices of $G$ in the same block of $G-B$.
We proceed very similar as in \emph{Subcase 1.1}; we use only the strong $\mathcal{F}_{3}$
property in $B$, and $G-B$ is a non-trivial blockchain. Hence we can apply Lemma
\ref{blockchainpath} except if $x=x_1$, some $x_i=y$ for $i\in \{2,3,4\}$, say $i=2$, and
$x_3,x_4$ are inner vertices in the same endblock of $G-B$ which also contains $x_2$.
If $x=x_1$, $x_2=y$, and $x_3,x_4$ are inner vertices in the same endblock of $G-B$ which also
contains $x_2$, then $B^2$ has an $x_2x_1$-hamiltonian path $P_1$ containing different edges
$x_2y_2$, $uv$ of $E(G)$ for certain $y_2,u,v$ by Theorem~\ref{strongF_3}, and $(G-B)^2$ has an
$x_2x_1$-hamiltonian path $P_2$ containing different edges $x_1x',x_3y_3,x_4y_4$ of $E(G)$ for
certain $y_3,y_4$ by Lemma \ref{blockchainpath}. Again, $P_1 \cup P_2$ is a required hamiltonian
cycle in $G^2$, a contradiction.
\emph{(1.2.2)} Assume that $x_2,x_3,x_4$ are inner vertices of $G$ in the same block $B^*$
of $G-B$.
Clearly, $B^2$ contains a hamiltonian cycle $H_{B}$ containing 3 different edges $y'y,x'_1x_1,x''x$ of $E(B)$ for certain vertices $y',x'_1,x''$ by Theorem \ref{F_4} (starting with a corresponding $\mathcal{F}_{4}$ $x''x$-hamiltonian path in $B^{2}$) if $x\neq x_1$, and $y'y,x'_1x,x''x$ of $E(B)$ for certain vertices $y',x'_1,x''$ by Theorem \ref{2-blockcycle} if $x=x_1$.
Let $G_1$ be the component of $G-B^*-xx'$ containing $B$ and $y^*=V(B^*)\cap V(G_1)$. Note that
$G_1$ is a trivial or non-trivial blockchain.
(a) If $y^*=y$, then $G_1=B$ and we set $H_{G_1}=H_B$ (see above).
(b) If $y^*\neq y$, then either $G_1-B=y^*y$ or $(G_1-B)^2$ contains a hamiltonian cycle $C$ containing edges $y^*_1y^*$, $y''y$ of $E(G_1-B)$ for certain $y^*_1,y''$ by applying Theorem \ref{2-blockcycle} or Theorem \ref{blockchaincycle}.
Now we set $$H_{G_1}=(H_B-y'y)\cup y'y^*$$ and $y_1^*=y$ if $G_1-B=y^*y$; and
$$H_{G_1}=(H_B\cup C-\{y'y,y''y\})\cup y'y''$$ if $G_1-B\neq y^*y$.
Note that the edge $y_1^*y^*\in E(G_1)$ is contained in $H_{G_1}$ in both cases.
\medskip
Clearly, $|V(B^*)|+|E(B^*)|<|V(G)|+|E(G)|$. Hence $(B^*)^2$ contains a hamiltonian cycle $H_{B^*}$ containing four different edges $y_2^*y^*,x_2x'_2,x_3x'_3,x_4x'_4$ of $E(B^*)$ for certain vertices $y_2^*,x'_i$, $i=2,3,4$.
\medskip
Let $z\in V(B^*)$ be the cutvertex of $G-x'x$ different from $y^*$.
(A) $x'=z$. Then $$(H_{G_1}\cup H_{B^*}-\{y_2^*y^*,y_1^*y^*\})\cup\{y_1^*y_2^*\}$$ is a required hamiltonian cycle in $G^2$ containing four different edges $x_ix'_i,$ of $E(G)$, $i=1,2,3,4$, a contradiction.
(B) $x'\neq z$.
If $d_{G-B^*}(z)=1$, then we set $G_2=G-G_1-B^*-z_1z$ where $z_1$ is the unique neighbour of
$z$ in $G-B^*$; otherwise we set $G_2=G-G_1-B^*$. Note that $G_2$ is a trivial or non-trivial
blockchain and $G_2=x'x$ is not possible since $d_G(x')>2$.
We apply Theorem \ref{blockchaincycle} such that either $(G_2)^2$ contains a hamiltonian cycle $H_{G_2}$ with $x'x\in E(H_{G_2})$ if $z\notin V(G_2)$, or $(G_2)^2$ contains a hamiltonian cycle $H$ containing the edge $x'x$ and different edges $z_1z,z_2z$ of $G_2$ for certain $z_1,z_2$ if $z\in V(G_2)$. In the latter case we set $H_{G_2}=(H-\{z_1z,z_2z\})\cup z_1z_2$. Then $$(H_{G_1}\cup H_{G_2}\cup H_{B^*}-\{y_2^*y^*,y_1^*y^*,x'x,x''x\})\cup\{y_1^*y_2^*,x''x'\}$$ is again a hamiltonian cycle in $G^2$ containing four different edges $x_ix'_i$ of $E(G)$, $i=1,2,3,4$, a contradiction.
\medskip
\emph{Subcase 1.3}: $|X\cap(V(B)-y)|=0$; i.e., $x_1, x_2\notin V(B)-y.$
Let $G_1$ be a graph which arises from $G$ by replacing $B$ with a path $p$ of length 3,
say $p=x,a,b,y$. Then $|V(G_1)|+|E(G_1)|<|V(G)|+|E(G)|$ since $B$ is not a triangle because $G$
is edge-critical. Hence $(G_1)^2$ contains a hamiltonian cycle $H_1$ containing four different
edges $x_iy_i$ of $E(G_1)$ for certain vertices $y_i$, $i=1,2,3,4$, and as many edges as possible
of $G_1$.
In the following we shall proceed in a manner very similar to the proof that the square of a 2-block is hamiltonian, \cite{Fle1}. However, in order to avoid total dependence of the reader on the knowledge or study of \cite{Fle1}, we shall describe and partially repeat the procedure employed in that paper. In particular, we shall quote the cases with the numbering of \cite{Fle1}.
This yields the consideration of 13 cases of how the hamiltonian cycle $H_1$ traverses the vertices of the path $p$. As in \cite{Fle1}, Cases 3, 4, 12, and 13 contradict the maximality of the number of edges of $G_1$ belonging to $H_1$; and Case 6 can be reduced to Case 10, Case 8 to Case 7, Case 10 to Case 9, and Case 11 to Case 5. Note that by these reductions we preserve the existence of the edges $x_iy_i$ even if $x_i\in\{x',y\}$ for $i\in\{1,2,3,4\}$.
The remaining 5 cases are (using the labeling of vertices $x',x,a,b,y$ instead of $x,w,a,b,v$ in \cite{Fle1}):
\emph{Case 1.} $H_1=...,x,a,b,y,...$
\emph{Case 2.} $H_1=...,x,a,b,y',...$
\emph{Case 5.} $H_1=...,x',a,b,x,...$
\emph{Case 7.} $H_1=...,x',a,y,...,y',b,x$
\emph{Case 9.} $H_1=...,x',a,y,b,x...$;
\noindent
and $y'y$ is an edge of $G$.
In order to extend $H_1$ to $H$ in $G^2$ in these five cases with $H$ having the required property, one can proceed in the same way as it has been done in~\cite{Fle1}. However, we deem it necessary to show explicitly that no problems arise under the stronger condition of this theorem (similarly as in \cite{Fle2}).
\emph{Case 1.} By Theorem \ref{strongF_3}, $B^2$ has an $xy$-hamiltonian path $P$ starting with an edge $yy^*$ of $E(B)$ and containing an edge $uv$ of $B$ for certain vertices $u,v$. Replace in $H_1$ the path $p$ with a hamiltonian path $P$ and we get a hamiltonian cycle $H$ as required.
\emph{Case 2.} Take $P$ as in Case 1 and replace in $H_1$ the path $x,a,b,y'$ with $(P-yy^*)\cup y'y^*$ and again we get a hamiltonian cycle $H$ as required. Note that $H$ contains all edges of $G$ belonging to $H_1$.
\emph{Case 5.} By Theorem \ref{2-blockcycle}, $B^2$ contains a hamiltonian cycle $H_B$ such that both edges of $H_B$ incident to $y$ (say $yy^*,yy^{**}$) are in $B$ and at least one of the edges of $H_B$ incident to $x$ (say $xx^*$) is in $B$. We set $$H^*=(H_B-\{yy^*,yy^{**}\})\cup y^*y^{**}$$ which does not contain $y$, and replace in $H_1$ the path $x',a,b,x$ with $(H^*-xx^*)\cup x'x^*$, thus obtaining a hamiltonian cycle $H$ in $G^2$ which has the same behavior at all vertices of $G_1-\{a,b\}\subset G$ as $H_1$.
\emph{Case 7.} Take $H_B$ as in Case 5 and replace in $H_1$ the path $x',a,y$ with the path $P_1\cup x^*x'$ where $P_1\subset H_B$ is the path from $y$ to $x^*$ and does not contain $x$; and
replace in $H_1$ the path $y',b,x$ with the path $P_2\cup y't$ where $t\in \{y^*,y^{**}\}$ and $P_2\subset H_B$ is the path from $x$ to $t$ and does not contain any of $y$, $x^*$. Again we get a hamiltonian cycle $H$ as required.
\emph{Case 9.} Take $H_B$ as in Case 5 and replace in $H_1$ the path $x',a,y,b,x$ with $(H_B-xx^*)\cup x'x^*$, thus obtaining a hamiltonian cycle $H$ in $G^2$ which has the same behavior at all vertices of $G_1-\{a,b,y\}\subset G$ as $H_1$ and both edges of $H$ incident to $y$ are in $G$.
In all cases we obtained a hamiltonian cycle $H$ in $G^2$ containing four different edges $x_ix'_i,$ of $E(G)$ (in most cases we have $x'_i=y_i$; see the first paragraph of this subcase 1.3), $i=1,2,3,4$, a contradiction.
\medskip
\noindent
\emph{Case 2.} $|D(G)|=0.$ That is, $G$ is a $DT$-graph.
\noindent
a) Suppose $N(x_{i})\subseteq V_{2}(G)$ for every $i=1,2,3,4$.
Set $W'=\{x_1,x_2,x_3,x_4\}$ and let $K$ be a $W'$-maximal cycle in $G$. Observe that $|V(K)|\geq 4$ since an edge-critical block on at least 4 vertices cannot contain a triangle.
If $|W'\cap V(K)|=4$, then we choose $x_5$ arbitrarily in $V(G)-W'$. If $|W'\cap V(K)|=3$, then we choose $x_5$ arbitrarily in $V(K)-W'$. If $|W'\cap V(K)|=2$, then we choose an arbitrary 2-valent vertex $x_5$ in $V(K)-W'$, which exists because all neighbours of the $x_i$ are 2-valent.
We set $W=W'\cup\{x_5\}$. Then $K$ is $W$-sound in $G$ unless $|W\cap V(K)|=3$ and forbidden situation (2) in Definition \ref{sound} arises. That is, without loss of generality $x_1,x_2\in V(K)$ and there exist $W$-separated $K$-to-$K$ blockchains $P$, $Q$ based on $x_i$, $i\in \{1,2\}$, $P\cap Q=x_i$, and paths $p,q$ in $P,Q$, respectively, such that there is a subsequence $x_i,w',L(p),L(q),w'',x_i$, where $\{w',w''\}=\{x_{3-i},x_5\}$ and $x_3,x_4\in V(p)\cup V(q)$. Then there is a cycle $K'$ containing $x_i,x_3,x_4$, a contradiction to the $W'$-maximality of $K$.
By Theorem \ref{The2}, $G$ contains an EPS-graph $S=E\cup P$ such that $K\subseteq E$ and $d_{P}(w)\leq 1$ for every $w\in W$. If there is no adjacent pair $x_{i},x_{j}$ for $i,j\in \{1,2,3,4\}$, we use $S$ and an algorithm in \cite{Fle} to obtain a hamiltonian cycle in $G^{2}$ with the required properties, a contradiction. However, if there is an adjacent pair, say $x_1,x_2$, then $d_G(x_1)=d_G(x_2)=2$ and $d_P(x_1)=d_P(x_2)=0$ and we can proceed with the cycle $K$ containing $x_1,x_2,x_3$ to obtain a required hamiltonian cycle in $G^2$ as before, a contradiction.
\noindent
b) Without loss of generality suppose that $N(x_{4})\nsubseteq V_{2}(G)$.
Hence $d_{G}(x_{4})=2$. Let $P_{4}=y_{4}x_{4}z_{1}...z_{k}$ be the unique path in $G$ such that $d_{G}(y_{4})>2$, $d_{G}(z_{k})>2$ and
$d_{G}(z_{i})=2$, for $i=1,2,...,k-1$. We set $G^-=G-\{x_{4},z_{1},...,z_{k-1}\}$, where $z_0=x_4$ if $k=1$.
b1) Assume that $G^-$ is 2-connected.
If $x_i\in V(G^-)-\{y_4,z_k\}$ for $i=1,2,3$, then $|V(G)|+|E(G)|>|V(G^-)|+|E(G^-)|$ and hence $(G^-)^{2}$ has a hamiltonian cycle $H^-$ containing different edges $x_{i}y_{i},z_{k}w_{4}\in E(G)$, $i=1,2,3$. It is easy to see that we can extend $H^-$ to a hamiltonian cycle $H$ in $G^{2}$ such that $H$ contains edges $x_{i}y_{i}$, $x_{4}z_{1}$, for $i=1,2,3$, a contradiction.
Suppose $x_3\notin V(G^-)-\{y_4,z_k\}$. If $\{x_1,x_2,x_3\}\cap\{y_4,z_k\}\neq\emptyset$, then without loss of generality $x_3\in\{y_4,z_k\}$. By Theorem~\ref{F_4} or Theorem~\ref{strongF_3}, $(G^-)^{2}$ contains a $y_{4}z_{k}$-hamiltonian path $P^-$ and $P^-$ contains distinct edges $x_{i}y_{i}$ of $G$ if $x_i\in V(G^-)$ for $i=1,2$. Then $P^{-}\cup P_{4}$ is a hamiltonian cycle in $G^{2}$ with the required properties, a contradiction.
b2) Assume that $G^-$ is not 2-connected.
\noindent
Then $G^-$ is a non-trivial blockchain with $y_{4},z_{k}$ in distinct endblocks and $y_{4},z_{k}$ are not cutvertices.
Assume not all $x_{1},x_{2},x_{3}$ are inner vertices in the same block. Then we apply Lemma~\ref{blockchainpath} to get a $y_{4}z_{k}$-hamiltonian path $P^-$ in $(G^-)^{2}$ with distinct edges $x_{i}y_{i}\in E(G^-)$, $i=1,2,3$. Note that $x_{i}$ could be $y_{4}$ or $z_{k}$. Then again
$P^-\cup P_{4}$ is a hamiltonian cycle in $G^{2}$ with the required properties, a contradiction.
Now assume that $x_{1},x_{2},x_{3}$ are inner vertices in the same block $B$. Then there exists an end block $B^*$ of $G^-$ such that $x_{i}\notin V(B^*)$, $i=1,2,3$. A graph $G'$ arises from $G$ by the replacement of $B^*$ by a path $p$ of length 3. Hence $|V(G)|+|E(G)|>|V(G')|+|E(G')|$ and we denote by $H'$ a hamiltonian cycle in $(G')^{2}$ containing edges $x_{i}w_{i}$, $i=1,2,3,4$, and as many edges of $G'$ as possible.
We proceed in the same manner as in Subcase 1.3 (note that in this case none of $x_i$, $i=1,2,3,4$, is on $p$) to get a hamiltonian cycle in $G^{2}$ with required properties, a contradiction.
\bigskip
Finally we want to show that Theorem \ref{maintheorem} is best possible; i.e., we construct an infinite family of graphs which do not satisfy the $\mathcal{H}_{5}$ property. For this purpose, start with an arbitrary 2-block $G$ and fix two distinct vertices $x_1, x_2\in V(G)$.
Define $$H=G\cup \{y_1,y_2,...,y_t;t\geq 3\}\cup \{x_iy_j: 1\leq i\leq 2,1\leq j\leq t\},$$
where $\{y_1,...,y_t\}\cap V(G)=\emptyset$.
Then $H$ is a 2-block. However, $H$ does not have the $\mathcal{H}_{5}$ property: indeed, there is no hamiltonian cycle $C$ in $H^2$ containing edges of $H$ incident to $x_1,x_2,y_1,y_2,y_3$ because the neighbours of $y_1,y_2,y_3$ in $H$ are $x_1$ and $x_2$ only; that is, $x_1$ or $x_2$ would be incident to three edges of $C\cap H$, which is impossible.
\end{proof}
\section{Conclusion}
We introduced the concept of the $\mathcal{H}_{k}$ property and proved that every 2-block has the $\mathcal{H}_{4}$ property but not the $\mathcal{H}_{5}$ property in general. Similarly in \cite{FleChia} it is proved that every 2-block has the $\mathcal{F}_{4}$ property but not the $\mathcal{F}_{5}$ property in general. Moreover, a 2-block $G$ having the $\mathcal{F}_{k}$ property implies that $G$ has the $\mathcal{H}_{k-1}$ property for $k=3,4,...$. Hence we conclude that Theorem \ref{maintheorem} and Theorem \ref{F_4} are best possible with respect to hamiltonicity and hamiltonian connectedness in the square of a 2-block.
\medskip
\noindent
{\bf Acknowledgements}.
\noindent
This publication was supported by the project LO1506 of the Czech Ministry of Education, Youth and Sports, and by FWF project P27615-N25.
\section{Introduction}
A stochastic process $\bfxi=(\xi_t)_{t\in[0,1]}$ on the interval $[0,1]$,
whose sample paths belong to the space $C[0,1]$ of continuous functions on
$[0,1]$, is called \emph{max-stable} (MSP), if there are norming functions
$a_n,b_n\in C[0,1]$, $a_n>0$, such that the distribution of the process
$\max_{1\le i\le n}\left(\bfxi^{(i)}-b_n\right)/a_n$ coincides with that of
$\bfxi$ for each $n\in\N$. By $\bfxi^{(1)},\bfxi^{(2)},\dots$ we denote
independent copies of $\bfxi$.
\citet{aulfaho11} established a characterization of the distribution of an
MSP via a norm on $E[0,1]$, the space of bounded functions on $[0,1]$ which
have finitely many discontinuities. This norm is called $D$-norm and it is
defined by means of a so-called generator process.
An MSP $\bfeta=(\eta_t)_{t\in[0,1]}\in C[0,1]$ with standard negative
exponential margins $P(\eta_t\le x)=\exp(x)$, $x\le 0$, will be called a
\emph{standard max-stable} process (SMSP). As the distribution of a
stochastic process on $[0,1]$ is determined by its finite dimensional
marginal distributions, a process $\bfeta\in C[0,1]$ with identical marginal
distribution function (df) $F(x)=\exp(x)$, $x\le 0$, is standard max-stable
if and only if
\begin{equation} \label{eqn:standard_max-stable}
P\left(\bfeta\le \frac fn\right)^n=P(\bfeta\le f),\qquad f\in E^-[0,1],\,n\in\N,
\end{equation}
where $E^-[0,1]$ denotes the subset of those functions in $E[0,1]$, which
attain only non positive values. Note that in equation \eqref{eqn:standard_max-stable} it would be sufficient to consider $f$ in the set $C^-[0,1]$ of non positive continuous functions. The extension to $E^-[0,1]$, however, provides the
inclusion of the finite dimensional marginal distributions (fidis) of
$\bfeta$, as
\[
P(\eta_{t_i}\le x_i,\ 1\le i\le d) = P(\bfeta\le f)
\]
where $0\le t_1<\dots<t_d\le1$ and $f\in E^-[0,1]$ is given by $f(t_i) =
x_i<0$ for $i\in\set{1,\dots,d}$ and $f(t) = 0$ for
$t\in[0,1]\setminus\set{t_1,\dots,t_d}$.
From \citet{aulfaho11} we know that \eqref{eqn:standard_max-stable} is
equivalent with the condition that there is some norm $\norm\cdot_D$ on
$E[0,1]$, called \emph{$D$-norm}, satisfying
\begin{equation}\label{eqn:df_of_smsp}
P(\bfeta\le f)=\exp\left(-\norm f_D\right),\qquad f\in E^-[0,1].
\end{equation}
Precisely, there exists a stochastic process $\bfZ=(Z_t)_{t\in[0,1]}\in
C[0,1]$ with
\[
0\le Z_t\le m,\quad E(Z_t)=1,\qquad t\in[0,1],
\]
for some number $m\ge 1$, such that
\[
\norm f_D=E\left(\sup_{t\in[0,1]}(\abs{f(t)}Z_t)\right),\qquad f\in E[0,1].
\]
The condition $Z_t\le m$, $t\in[0,1]$, can be weakened to
$E\left(\sup_{t\in[0,1]}Z_t\right)<\infty$. Observe that property
\eqref{eqn:df_of_smsp} corresponds to the spectral representation of a
max-stable process given in \citet{resroy91} and \citet{dehaan84} since
$P\left(-\eta_t^{-1}\le y\right) = \exp\left(-y^{-1}\right)$ for $y>0$ and
$t\in[0,1]$.
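For intuition, a $D$-norm evaluated at finitely many points can be approximated by Monte Carlo simulation of a generator process. The following sketch is purely illustrative and not part of the formal development: it uses the hypothetical two-point generator $Z_i=2B_i$ with independent Bernoulli$(1/2)$ rvs $B_1,B_2$, which satisfies $0\le Z_i\le m=2$ and $E(Z_i)=1$, and for which the closed form $\norm{(x,y)}_D=\left(\abs x+\abs y+\max(\abs x,\abs y)\right)/2$ is elementary to verify.

```python
import numpy as np

def d_norm_mc(f_vals, gen_sampler, n=200_000, seed=0):
    """Monte Carlo estimate of ||f||_D = E( max_i |f(t_i)| Z_{t_i} )
    for a function observed at finitely many points t_1, ..., t_d."""
    rng = np.random.default_rng(seed)
    Z = gen_sampler(rng, n)  # shape (n, d): each row is one generator sample
    return np.mean(np.max(np.abs(f_vals) * Z, axis=1))

# Hypothetical generator: Z_i = 2 * Bernoulli(1/2), independent, so that
# 0 <= Z_i <= m = 2 and E(Z_i) = 1, as required of a generator process.
def bernoulli_gen(rng, n):
    return 2.0 * rng.integers(0, 2, size=(n, 2))

x, y = -1.0, -0.5
est = d_norm_mc(np.array([x, y]), bernoulli_gen)
exact = (abs(x) + abs(y) + max(abs(x), abs(y))) / 2  # closed form for this generator
print(est, exact)  # the estimate should be close to exact = 1.25
```

The same estimator applies to any generator sampler, e.g. a discretized path of a continuous generator process.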
Based on this characterization, \citet{aulfaho11} introduced a functional
domain of attraction approach for stochastic processes in terms of
convergence of their \emph{distribution functions}, which is more general
than the one based on \emph{weak convergence} as investigated in
\citet{dehal01}. In Section \ref{sec:characterization_of_fmda} of the present
paper we will carry \citeauthor{dehal01}'s (\citeyear{dehal01})
characterization of max-domain of attraction
for stochastic processes in $C[0,1]$ in terms of weak convergence over to
our domain of attraction approach based on convergence of df.
\citet{buihz08} suggested the definition of \emph{generalized Pareto
processes} (GPP), which extends the multivariate approach to function spaces.
This particular approach was investigated and settled in \citet{ferrdh12},
\citet{aulfaho11} and \citet{domri13}. In Section
\ref{sec:Spectral_delta_Neighborhood_of_a_standard_GPP} we will introduce
certain $\delta$-neighborhoods of GPP, which can be characterized by their
rate of convergence towards a max-stable process. This is in complete
accordance with the multivariate case.
Finally, we establish the concept of differentiability in distribution of an
SMSP in Section 4. To this end, we investigate some properties of SMSP such
as the partial derivatives of a $D$-norm, the distribution of the increments
of an SMSP and the conditional distribution of an SMSP given one point being
observed.
To improve the readability of this paper we use bold face such as $\bfxi$,
$\bfX$ for stochastic processes and default font $f$, $a_n$ etc. for non
stochastic functions. Operations on functions such as $\bfxi<f$ or
$(\bfxi-b_n)/a_n$ are meant pointwise. The usual abbreviations \textit{iid,
a.s.} and \textit{rv} for the terms \textit{independent and identically
distributed, almost surely} and \textit{random variable}, respectively, are
used.
\section{A Characterization of Max-Domain of Attraction}\label{sec:characterization_of_fmda}
In the multivariate framework, it is well-known that a rv $(X_1,\dots,X_d)$
is in the domain of attraction of a multivariate max-stable distribution if
and only if its copula has this property and the distribution of $X_i$ is in
the univariate domain of attraction of a max-stable distribution for each
$i=1,\dots,d$. We refer to \citet{gal78}, \citet{deheu78,deheu84} and
\citet{aulbf11} for details.
\Citet{dehal01} extended this result to stochastic processes, where
\emph{domain of attraction} is now meant in the sense of weak convergence;
condition~\eqref{eqn:final_equivalent_crucial_condition_on_marginal_df}, see
below, is part of their characterization. We will carry
\citeauthor{dehal01}'s (\citeyear{dehal01}) result over to our domain of
attraction approach based on convergence of dfs of stochastic processes.
Let $\bfX=(X_t)_{t\in[0,1]}\in C[0,1]$ be a stochastic process with
continuous marginal df $F_t(x)=P(X_t\le x)$, $x\in\R$, $t\in[0,1]$, and let
$\bfxi=(\xi_t)_{t\in[0,1]}\in C[0,1]$ be an MSP with marginal df $G_t$,
$t\in[0,1]$. Suppose that there exist norming functions $a_n, b_n\in C[0,1]$,
$a_n>0$, $n\in\N$, such that
\begin{equation}\label{eqn:final_equivalent_crucial_condition_on_marginal_df}
\sup_{t\in[0,1]}\abs{n\big(F_t(a_n(t)f(t)+b_n(t)) - 1\big)-\log\big(G_t(f(t))\big)}\to_{n\to\infty} 0
\end{equation}
for each $f\in E[0,1]$ with $\inf_{t\in[0,1]}G_t(f(t))>0$. This is
essentially condition (3.11) in \citet{dehal01}. Using Taylor expansion
$\log(1+\varepsilon)=\varepsilon + O(\varepsilon^2)$ as $\varepsilon\to 0$,
condition \eqref{eqn:final_equivalent_crucial_condition_on_marginal_df} in
particular implies weak convergence of the univariate margins
\[
F_t(a_n(t)x+b_n(t))^n\to_{n\to\infty} G_t(x),\qquad x\in\R, t\in[0,1].
\]
Put $\bfU:=(U_t)_{t\in[0,1]}:=(F_t(X_t))_{t\in[0,1]}$, which is the
\emph{copula process} corresponding to $\bfX$. Let
$\bfU^{(1)},\bfU^{(2)},\dots$ be independent copies of $\bfU$, and let
$\bfX^{(1)},\bfX^{(2)},\dots$ be independent copies of $\bfX$. The following
theorem is the main result of this section.
\begin{theorem}\label{thm:equivalence_of_convergence}
We have under condition
\eqref{eqn:final_equivalent_crucial_condition_on_marginal_df}
\begin{equation}\label{eqn:functional_domain_of_attraction_assumption}
P\left(\max_{1\le i\le n}\frac{\bfX^{(i)}-b_n}{a_n}\le f\right) \to_{n\to\infty} P(\bfxi\le f), \qquad f\in E[0,1],
\end{equation}
if and only if
\begin{equation}\label{eqn:copula_domain_of_attraction_assumption}
P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right) \to_{n\to\infty} P(\bfeta\le g), \qquad g\in E^-[0,1],
\end{equation}
where for the implication
$\eqref{eqn:functional_domain_of_attraction_assumption}\implies
\eqref{eqn:copula_domain_of_attraction_assumption}$ we set
$\eta_t:=\log(G_t(\xi_t))$, $t\in[0,1]$, and for the reverse conclusion
$\xi_t:=G_t^{-1}(\exp(\eta_t))$, $t\in [0,1]$. In both cases the processes
$\bfxi:=(\xi_t)_{t\in[0,1]}$ and $\bfeta:=(\eta_t)_{t\in[0,1]}\in C[0,1]$ are
max-stable, $\bfeta$ being an SMSP.
\end{theorem}
By Lemma 1 in \citet{aulfaho11} or the elementary arguments as in the proof
of Theorem 9.4.1 in \citet{dehaf06}, one obtains that
$P(G_t(\xi_t)=0\mathrm{\ for\ some\ }t\in[0,1])=0$, i.e., the processes
$\bfeta$ and $\bfxi$ are well defined.
\begin{proof}
As $\bfX$ has continuous sample paths, we have continuity of the function
$[0,1]\ni t\mapsto G_t(x)$ for each $x\in\R$ and, thus, continuity of the
function $[0,1]\times\R\ni (t,x)\mapsto G_t(x)$ as well as its monotonicity
in $x$ for a fixed $t$.
We first establish the implication
$\eqref{eqn:functional_domain_of_attraction_assumption}\implies
\eqref{eqn:copula_domain_of_attraction_assumption}$. Choose $g\in E^-[0,1]$
with $\sup_{t\in[0,1]}g(t)<0$ and put $f(t):=G_t^{-1}(\exp(g(t)))$. Then $f\in
E[0,1]$ and we obtain from assumption
\eqref{eqn:functional_domain_of_attraction_assumption}
\begin{equation}\label{eqn:variant_of_fda_assumption}
P\left(\max_{1\le i\le n}\bfX^{(i)}\le a_n f+b_n\right)\to_{n\to\infty}P(\bfxi\le f)=P(\bfeta\le g)=\exp\left(-\norm g_D\right),
\end{equation}
where $\norm\cdot_D$ is the $D$-norm corresponding to the SMSP $\bfeta$.
We have, on the other hand, by condition
\eqref{eqn:final_equivalent_crucial_condition_on_marginal_df}
\begin{align*}
&P\left(\max_{1\le i\le n}\bfX^{(i)}\le a_n f+b_n\right)\\
&= P\left(n\max_{1\le i\le n}\left(U_t^{(i)}-1\right)\le n\big(F_t(a_n(t) f(t)+b_n(t))-1\big),\,t\in[0,1]\right)\\
&= P\left(n\max_{1\le i\le n}\left(U_t^{(i)}-1\right)\le g(t) + r_n(t),\,t\in[0,1]\right),
\end{align*}
where $r_n(t)=o(1)$ as $n\to\infty$, uniformly for $t\in[0,1]$. We claim that
\begin{equation}\label{eqn:fda_of_copula process_for_g_<_0}
P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right) \to_{n\to\infty} P(\bfeta\le g).
\end{equation}
Replace $g$ by $g+\varepsilon$ and $g-\varepsilon$ for $\varepsilon>0$ small
enough such that $g+\varepsilon<0$, and put
\[
f_\varepsilon(t):= G_t^{-1}(\exp(g(t)+\varepsilon)),\quad f_{-\varepsilon}(t):= G_t^{-1}(\exp(g(t)-\varepsilon)),\qquad t\in[0,1].
\]
Then $f_\varepsilon, f_{-\varepsilon}\in E[0,1]$, and we obtain from
condition \eqref{eqn:final_equivalent_crucial_condition_on_marginal_df} and
equation \eqref{eqn:variant_of_fda_assumption} for $n\ge n_0$
\begin{align*}
&P\left(n\max_{1\le i\le n}\left(U_t^{(i)}-1\right)\le n\big( F_t(a_n(t)f_\varepsilon(t)+b_n(t))-1\big),\,t\in[0,1]\right)\\
&\ge P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right)\\
&\ge P\left(n\max_{1\le i\le n}\left(U_t^{(i)}-1\right)\le n\big( F_t(a_n(t)f_{-\varepsilon}(t)+b_n(t))-1\big),\,t\in[0,1]\right),
\end{align*}
where the upper bound converges to $\exp\left(-\norm{g+\varepsilon}_D\right)$
and the lower bound to $\exp\left(-\norm{g-\varepsilon}_D\right)$. As both
converge to $\exp\left(-\norm g_D\right)$ as $\varepsilon\to 0$, we have
established \eqref{eqn:fda_of_copula process_for_g_<_0}.
Next we claim that \eqref{eqn:fda_of_copula process_for_g_<_0} is true for
each $g\in E^-[0,1]$, i.e., we drop the assumption $\sup_{t\in[0,1]}g(t)<0$.
We prove this by a contradiction. Suppose first that there exists $g\in
E^-[0,1]$ such that
\[
\liminf_{n\to\infty} P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right)\le \exp\left(-\norm g_D\right)-\delta
\]
for some $\delta>0$. From \eqref{eqn:fda_of_copula process_for_g_<_0} we
deduce that for each $\varepsilon>0$
\[
\lim_{n\to\infty} P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g-\varepsilon\right)=\exp\left(-\norm{g-\varepsilon}_D\right)
\]
and, thus,
\begin{align*}
&\exp\left(-\norm g_D\right)-\delta\\
&\ge \liminf_{n\to\infty} P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right)\\
&\ge \liminf_{n\to\infty} P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g-\varepsilon\right)\\
&= \exp\left(-\norm{g-\varepsilon}_D\right).
\end{align*}
As $\varepsilon > 0$ was arbitrary, we have reached a contradiction and,
thus, we have established that
\[
\liminf_{n\to\infty} P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right)\ge \exp\left(-\norm g_D\right),\qquad g\in E^-[0,1].
\]
Suppose next that there exists $g\in E^-[0,1]$ such that
\[
\limsup_{n\to\infty} P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right)\ge \exp\left(-\norm g_D\right)+\delta
\]
for some $\delta>0$. We have by \eqref{eqn:fda_of_copula process_for_g_<_0}
for $\varepsilon>0$
\[
\lim_{n\to\infty} P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le -\varepsilon\right) =\exp\left(-\varepsilon\norm 1_D\right)\to_{\varepsilon\downarrow 0} 1,
\]
and, thus,
\begin{align*}
&\exp\left(-\norm g_D\right)+\delta\\
&\le \limsup_{n\to\infty} P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right)\\
&\le \limsup_{n\to\infty}\Biggl( P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g,\, n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le -\varepsilon\right)\\
&\hspace*{5cm}+ P\left(\left(n \max_{1\le i\le n}\left(\bfU^{(i)}-1\right) \le -\varepsilon\right)^\complement\right)\Biggr)\\
&= \exp\left(-\norm{(\min(g(t),-\varepsilon))_{t\in[0,1]}}_D\right) + 1-\exp\left(-\varepsilon \norm 1_D\right)
\end{align*}
by \eqref{eqn:fda_of_copula process_for_g_<_0}. As the first term in the
final line above converges to $\exp\left(-\norm g_D\right)$ as
$\varepsilon\downarrow 0$ and the second one to zero, we have established
another contradiction and, thus,
\[
\limsup_{n\to\infty} P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right)\le \exp\left(-\norm g_D\right),\qquad g\in E^-[0,1].
\]
This proves equation \eqref{eqn:fda_of_copula process_for_g_<_0} for
arbitrary $g\in E^-[0,1]$ and completes the proof of the conclusion
$\eqref{eqn:functional_domain_of_attraction_assumption}\implies
\eqref{eqn:copula_domain_of_attraction_assumption}$.
Next we establish the implication
$\eqref{eqn:copula_domain_of_attraction_assumption}\implies
\eqref{eqn:functional_domain_of_attraction_assumption}$. Choose $f\in E[0,1]$
with $\inf_{t\in[0,1]}G_t(f(t))>0$ and put $g(t):=\log(G_t(f(t)))$,
$t\in[0,1]$. From the assumption
\eqref{eqn:copula_domain_of_attraction_assumption} we obtain
\[
P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right)\to_{n\to\infty} P(\bfeta\le g)=P(\bfxi\le f)=\exp\left(-\norm g_D\right).
\]
On the other hand, we have by condition
\eqref{eqn:final_equivalent_crucial_condition_on_marginal_df}
\begin{align*}
&P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g\right)\\
&= P\left(n\max_{1\le i\le n}\left(U_t^{(i)}-1\right)\le n\big(F_t(a_n(t) f(t)+b_n(t))-1\big) + r_n(t),\,t\in[0,1]\right),
\end{align*}
where $r_n(t)=o(1)$ as $n\to\infty$, uniformly for $t\in[0,1]$. We claim that
\begin{align}\label{eqn:fda_of_copula_processes}
&P\left(n\max_{1\le i\le n}\left(U_t^{(i)}-1\right)\le n\big(F_t(a_n(t) f(t)+b_n(t))-1\big),\,t\in[0,1]\right)\nonumber\\
&\to_{n\to\infty} P\left(\bfeta\le g\right)\nonumber\\
&=\exp\left(-\norm g_D\right).
\end{align}
Replace $g$ by $\min(g+\varepsilon,0)$ and $g-\varepsilon$, where
$\varepsilon>0$ is arbitrary. Then we obtain from
\eqref{eqn:copula_domain_of_attraction_assumption} and condition
\eqref{eqn:final_equivalent_crucial_condition_on_marginal_df} for $n\ge
n_0=n_0(\varepsilon)$
\begin{align*}
&P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le \min(g+\varepsilon,0)\right)\\
&=P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g+\varepsilon\right)\\
&\ge P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le n\big(F_t(a_n(t) f(t)+b_n(t))-1\big),\,t\in[0,1]\right)\\
&\ge P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le g-\varepsilon\right).
\end{align*}
Since
\begin{equation*}
P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le \min(g+\varepsilon,0)\right)\to_{n\to\infty}\exp\left(-\norm{\min(g+\varepsilon,0)}_D\right)
\end{equation*}
as well as
\begin{equation*}
P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le
g-\varepsilon\right)\to_{n\to\infty}
\exp\left(-\norm{g-\varepsilon}_D\right)
\end{equation*}
and $\varepsilon>0$ was arbitrary, \eqref{eqn:fda_of_copula_processes}
follows.
From the equality
\begin{align*}
&P\left(n\max_{1\le i\le n}\left(\bfU^{(i)}-1\right)\le n\big(F_t(a_n(t) f(t)+b_n(t))-1\big),\,t\in[0,1]\right)\\
&=P\left(\max_{1\le i\le n}\bfX^{(i)}\le a_nf+b_n\right)
\end{align*}
and from \eqref{eqn:fda_of_copula_processes} we obtain
\begin{equation}\label{eqn:fda_for_X}
\lim_{n\to\infty} P\left(\max_{1\le i\le n}\bfX^{(i)}\le a_nf+b_n\right) =P(\bfxi\le f)
\end{equation}
for each $f\in E[0,1]$ with $\inf_{t\in[0,1]}G_t(f(t))>0$. If
$\inf_{t\in[0,1]}G_{t}(f(t))=0$, then, for $\varepsilon >0$, there exists
$t_0\in[0,1]$ such that $G_{t_0}(f(t_0))\le\varepsilon$. We, thus, have
$P(\bfxi\le f)\le P\left(\xi_{t_0}\le f(t_0)\right)=G_{t_0}(f(t_0))\le
\varepsilon$ and, by condition
\eqref{eqn:final_equivalent_crucial_condition_on_marginal_df},
\begin{align*}
P\left(\max_{1\le i\le n}\bfX^{(i)}\le a_nf+b_n\right)&\le P\left(\max_{1\le i\le n} X_{t_0}^{(i)}\le a_n(t_0)f(t_0)+ b_n(t_0)\right)\\
&\to_{n\to\infty} G_{t_0}(f(t_0))
\le \varepsilon.
\end{align*}
As $\varepsilon>0$ was arbitrary, we have established
\[
\lim_{n\to\infty} P\left(\max_{1\le i\le n}\bfX^{(i)}\le a_nf+b_n\right)=0=P(\bfxi\le f)
\]
in that case, where $\inf_{t\in[0,1]}G_t(f(t))=0$ and, thus,
\eqref{eqn:fda_for_X} for each $f\in E[0,1]$. This completes the proof of
Theorem \ref{thm:equivalence_of_convergence}.
\end{proof}
\begin{cor}\label{cor:equivalence_of_fda}
Let $\bfX=(X_t)_{t\in[0,1]}\in C[0,1]$ be a stochastic process with identical
continuous marginal df $F(x)=P(X_t\le x)$, $x\in\R$, $t\in[0,1]$, and let
$\bfxi=(\xi_t)_{t\in[0,1]}\in C[0,1]$ be an MSP with identical marginal df
$G$. Denote by $\bfU=(U_t)_{t\in[0,1]}:=(F(X_t))_{t\in[0,1]}$ the copula
process pertaining to $\bfX$. Then we have $\bfX\in\mathcal D(\bfxi)$ if and
only if $\bfU\in\mathcal D(\bfeta)$ and $F\in \mathcal D(G)$.
\end{cor}
For a characterization of the condition $\bfU\in \mathcal D(\bfeta)$ in terms
of certain neighborhoods of generalized Pareto processes see Proposition
\ref{prop:characterization_of_D(eta)} below.
\begin{proof}[Proof of Corollary \ref{cor:equivalence_of_fda}]
The assumption $F\in\mathcal D(G)$ yields
$\sup_{x\in\R}|F^n(a_nx+b_n)-G(x)|\to_{n\to\infty}0$ for some sequence of
norming constants $a_n>0$, $b_n\in\R$, $n\in\N$. Taking logarithms and using
Taylor expansion of $\log(1+x)$ for $x\in[x_0,x_1]$ with $0<x_0\le x_1$
implies
\[
\sup_{x\in[x_0,x_1]}\abs{n(F(a_nx+b_n)-1)-\log(G(x))}\to_{n\to\infty}0
\]
and, thus, condition
\eqref{eqn:final_equivalent_crucial_condition_on_marginal_df} is satisfied.
Corollary \ref{cor:equivalence_of_fda} is now an immediate consequence of
Theorem \ref{thm:equivalence_of_convergence} together with the fact that the
assumption $\bfX\in\mathcal D(\bfxi)$ implies in particular that
$F\in\mathcal D(G)$.
\end{proof}
We conclude this section with a short remark. Choose $f\in E[0,1]$ which is
not the constant function zero. If $\bfeta\in C^-[0,1]$ is an SMSP, then
the univariate rv
\[
\eta_f:= \sup_{t\in [0,1]} \frac{\eta_t}{\abs{f(t)}}
\]
is by equation \eqref{eqn:df_of_smsp} negative exponentially distributed with
parameter $\norm f_D$. In particular we obtain that
$P\left(\sup_{t\in[0,1]}\eta_t<0\right)=1$.
\section{\texorpdfstring{$\delta$}{Delta}-Neighborhood of a Generalized Pareto Process} \label{sec:Spectral_delta_Neighborhood_of_a_standard_GPP}
First we recall some facts from \citet{aulfaho11}. A univariate generalized
Pareto distribution (GPD) $W$ is simply given by $W(x)=1+\log(G(x))$,
$G(x)\ge 1/e$, where $G$ is a univariate extreme value distribution (EVD). It
was established by \citet{pick75} and \citet{balh74} that, roughly, the
maximum of $n$ iid univariate observations, linearly standardized, converges
in distribution to an EVD as $n$ increases if, and only if, the exceedances
above an increasing threshold follow a GPD.
The multivariate analogue is due to \citet{roott06}. It was observed by
\citet{buihz08} that a $d$-dimensional GPD $W$ with ultimately standard
Pareto margins can be represented in its upper tail as
$W(\bfx)=P(U^{-1}\bfZ\le\bfx)$, $\bfx_0\le \bfx\le\bfzero\in\R^d$, where the
rv $U$ is uniformly distributed on $(0,1)$ and independent of the rv
$\bfZ=(Z_1,\dots,Z_d)$ with $0\le Z_i\le m$ for some $m\ge 1$ and $E(Z_i)=1$,
$1\le i\le d$. The following definition extends this approach to function
spaces.
\begin{defi}\upshape
Let $U$ be a rv which is uniformly distributed on $[0,1]$ and independent of
the generator process $\bfZ\in C[0,1]$ that is characterized by the two
properties
\begin{equation*}
0\le Z_t\le m\quad\mathrm{\ and\ }\quad E(Z_t)=1,\qquad t\in[0,1],
\end{equation*}
for some constant $m\ge 1$. Then the stochastic process
\[
\bfY:=\frac 1U \bfZ\ \in C^+[0,1]
\]
is a \emph{generalized Pareto process} (GPP) \citep{buihz08,ferrdh12,
domri13}.
\end{defi}
The univariate margins $Y_t$ of $\bfY$ have ultimately standard Pareto tails:
\begin{align*}
P(Y_t\le x)&= P\left(\frac 1 x Z_t\le U\right)\\
&=\int_0^m P\left(\frac 1x z\le U\right)\,(P*Z_t)(dz)\\
&=1 - \frac 1 x \int_0^m z\,(P*Z_t)(dz)\\
&=1 - \frac 1 x E(Z_t)\\
&= 1- \frac 1 x,\qquad x\ge m,\,0\le t\le 1.
\end{align*}
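This ultimately standard Pareto tail is easy to check by simulation. The sketch below is purely illustrative: it uses the hypothetical generator margin $Z_t=2B$ with $B$ Bernoulli$(1/2)$, so that $m=2$ and $P(Y_t\le x)=1-1/x$ should hold for $x\ge 2$.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500_000
U = rng.uniform(size=n)                # U uniform on (0, 1)
Z = 2.0 * rng.integers(0, 2, size=n)   # hypothetical generator margin: 0 <= Z <= m = 2, E(Z) = 1
with np.errstate(divide="ignore"):     # guard against the (measure-zero) event U = 0
    Y = Z / U                          # one margin of the generalized Pareto process Y = Z / U
for x in (2.0, 3.0, 5.0, 10.0):        # the standard Pareto tail holds for x >= m = 2
    print(x, np.mean(Y <= x), 1 - 1 / x)
```

For $x<m$ the empirical df need not equal $1-1/x$; the Pareto behaviour is a statement about the upper tail only.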
Put $\bfV:=\max(-1/\bfY, M)$, where $M<0$ is an arbitrary constant, which
ensures that $\bfV>-\infty$. Then, by Fubini's Theorem,
\begin{align*}
P\left(\bfV\le f\right)&=P\left(\sup_{t\in[0,1]}\left(\abs{f(t)}Z_t\right)\le U\right)\\
&=1-\int_0^1 P\left(\sup_{t\in[0,1]}\left(\abs{f(t)}Z_t\right)>u\right)\,du\\
&=1- E\left(\sup_{t\in[0,1]}\left(\abs{f(t)}Z_t\right)\right)\\
&=1-\norm f_D
\end{align*}
for all $f\in\barE$ with $\norm f_\infty\le \min(1/m,\abs M)$, i.e., $\bfV$
has the property that its functional df is in its upper tail equal to
\begin{align}
W(f)&:=P\left(\bfV\le f\right)\notag\\
&=1-\norm f_D\notag\\
&=1+\log\left(\exp\left(-\norm f_D\right)\right)\notag\\
&=1+\log(G(f)),\qquad f\in \barE,\,\norm f_\infty\le \min(1/m,\abs M), \label{uppertail_funct_df_of_GPD}
\end{align}
where $G(f)=P(\bfeta\le f)$ is the functional df of the MSP $\bfeta$ with
$D$-norm $\norm\cdot_D$ and generator $\bfZ$. This representation of the
upper tail of a GPP in terms of $1+\log(G)$ is in complete accordance with
the uni- and multivariate case \citep[see, for example,][Chapter
5]{fahure10}. We write $W=1+\log(G)$ in short notation and call $\bfV$ a GPP
as well.
\begin{rem}\upshape
Due to representation \eqref{uppertail_funct_df_of_GPD}, the GPD process
$\bfV$ is obviously in the functional domain of attraction of an SMSP
$\bfeta$ with $D$-norm $\norm\cdot_D$ and generator $\bfZ$: Take $a_n=1/n$
and $b_n=0$. We have for $f\in E^-[0,1]$ and large enough $n\in\N$
\[
P\left(\bfV\le \frac 1n f\right)^n =\left(1-\frac 1n \norm f_D\right)^n \to_{n\to\infty} \exp\left(-\norm f_D\right)=P(\bfeta\le f).
\]
\end{rem}
The following result is a functional version of the well-known fact that the
spectral df of a GPD random vector is equal to a uniform df in a neighborhood
of zero.
\begin{lemma}
We have for $f\in\barE$ with $\norm f_\infty\le m$ and some $s_0<0$
\[
W_f(s):=P\left(\bfV\le s\abs f\right)=1+s\norm f_D,\qquad s_0\le s\le 0.
\]
\end{lemma}
Let $\bfU$ be a copula process. The next result is established in
\citet{aulfaho11}.
\begin{prop}\label{prop:characterization_of_D(eta)}
The property $\bfU\in \mathcal{D}(\bfeta)$ is equivalent with the condition
\begin{equation}\label{eqn:tail_equivalence_of_H_and_W}
\lim_{s\uparrow 0} \frac{1-H_f(s)}{1-W_f(s)}=1, \qquad f\in\barE,
\end{equation}
i.e., the spectral df $H_f(s)=P(\bfU-1\le s\abs f),\;s\le 0,$ of $\bfU-1$ is
\textit{tail equivalent} with that of the GPD $W_f=1+\log(G_f)$,
$G(\cdot)=\exp\left(-\norm{\cdot}_D\right)$.
\end{prop}
This characterization of the domain of attraction of an SMSP in terms of a
certain GPP suggests focusing on the following standard case. Recall that Section~\ref{sec:characterization_of_fmda} justified considering SMSPs in place of general MSPs.
\begin{defi}\upshape
A stochastic process $\bfV\in \barC$ is a \emph{standard generalized Pareto
process} (SGPP), if there exists a $D$-norm $\norm\cdot_D$ on $E[0,1]$ and
some $c_0>0$ such that
\[
P(\bfV\le f)=1-\norm f_D
\]
for all $f\in\barE$ with $\norm f_\infty\le c_0$.
\end{defi}
The same arguments as at the end of Section
\ref{sec:characterization_of_fmda} suggest considering the rv
\[
V_f := \sup_{t\in[0,1]} \frac{V_t}{\abs{f(t)}}
\]
if we want to test whether a given process $\bfV\in C^-[0,1]$ actually is an
SGPP. Again $f\in E[0,1]$ is not the constant function zero. In the case that
$\bfV$ is an SGPP we obtain
\[
P\left(V_f>x\right)=\norm f_D \abs x,\qquad -1\le x\le 0,
\]
if $\norm f_\infty\le c_0$, i.e., $V_f$ follows in its upper tail, precisely
on $(-1,0)$, a uniform distribution.
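The uniform upper tail of $V_f$ can again be checked by simulation in a finite-dimensional toy case. The sketch below is illustrative only: it builds $\bfV=\max(-1/\bfY,M)$ from $\bfY=\bfZ/U$ with the same hypothetical two-point Bernoulli generator as before, for which $\norm f_D=1.25$ when $f$ takes the values $(-1,-0.5)$, and compares the empirical tail of $V_f$ with $\norm f_D\abs x$.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 500_000
U = rng.uniform(size=n)
Z = 2.0 * rng.integers(0, 2, size=(n, 2))    # hypothetical generator, m = 2
M = -50.0                                    # arbitrary cap ensuring V > -infinity
with np.errstate(divide="ignore"):           # Z = 0 gives -1/Y = -inf, then capped at M
    V = np.maximum(-U[:, None] / Z, M)       # V_t = max(-1/Y_t, M) with Y_t = Z_t / U
f = np.array([-1.0, -0.5])
V_f = np.max(V / np.abs(f), axis=1)          # V_f = sup_t V_t / |f(t)|
d_norm = 1.25                                # ||f||_D for this generator (closed form)
for x in (-0.1, -0.2, -0.4):
    print(x, np.mean(V_f > x), d_norm * abs(x))  # uniform upper tail of V_f
```

The agreement holds for $x$ close enough to $0$; for large $\abs x$ the cap $M$ and the bound $c_0$ come into play.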
Using the spectral decomposition of a stochastic process in $\barC$, we can
easily extend the definition of a spectral $\delta$-neighborhood of a
multivariate GPD as in \citet[Section 5.5]{fahure10} to the spectral
$\delta$-neighborhood of an SGPP. Denote by
$E^-_1[0,1]=\set{f\in\barE:\,\norm f_\infty=1}$ the unit sphere in $\barE$.
\begin{defi}\upshape
We say that a stochastic process $\bfY\in\barC$ belongs to the
\textit{spectral $\delta$-neighborhood} of the SGPP $\bfV$ for some $\delta
\in(0,1]$, if we have uniformly for $f\in E^-_1[0,1]$ the expansion
\begin{align*}
1-P(\bfY\le c f)&=(1-P(\bfV\le cf))\left(1+O\left(c^\delta\right)\right)\\
&=c\norm f_D \left(1+O\left(c^\delta\right)\right)
\end{align*}
as $c\downarrow 0$.
\end{defi}
An SMSP is, for example, in the spectral $\delta$-neighborhood of the
corresponding GPP with $\delta=1$.
The following result extends Theorem 5.5.5 in \citet{fahure10} on the rate of
convergence of multivariate extremes. It shows that $\delta$-neighborhoods
collect, roughly, all processes which have a polynomial rate of convergence
towards an SMSP.
\begin{prop}
Let $\bfY$ be a stochastic process in $\barC$, $\bfV$ an SGPP with $D$-norm
$\norm\cdot_D$ and $\bfeta$ a corresponding SMSP.
\begin{itemize}
\item[\textit{(i)}] Suppose that $\bfY$ is in the spectral
$\delta$-neighborhood of $\bfV$ for some $\delta\in(0,1]$. Then we
have
\[
\sup_{f\in\barE}\abs{P\left(\bfY\le \frac f n\right)^n-P(\bfeta\le f)}=O\left(n^{-\delta}\right).
\]
\item[\textit{(ii)}] Suppose that $H_f(c)=P(\bfY\le c\abs f)$ is
differentiable with respect to $c$ in a left neighborhood of $0$ for
any $f\in E^-_1[0,1]$, i.e., $h_f(c):=(\partial/\partial c) H_f(c)$
exists for $c\in(-\varepsilon,0)$ and any $f\in E^-_1[0,1]$. Suppose,
moreover, that $H_f$ satisfies the von Mises condition
\[
\frac{-ch_f(c)}{1-H_f(c)}=:1+r_f(c)\to_{c\uparrow 0} 1,\qquad f\in E^-_1[0,1],
\]
with remainder term $r_f$ satisfying
\[
\sup_{f\in E^-_1[0,1]}\abs{\int_c^0\frac{r_f(t)} t\,dt}\to_{c\uparrow 0} 0.
\]
If
\[
\sup_{f\in\barE}\abs{P\left(\bfY\le \frac f n\right)^n-P(\bfeta\le f)}=O\left(n^{-\delta}\right)
\]
for some $\delta\in(0,1]$, then $\bfY$ is in the spectral
$\delta$-neighborhood of the GPP $\bfV$.
\end{itemize}
\end{prop}
\begin{proof}
Note that
\begin{align*}
&\sup_{f\in\barE}\abs{P\left(\bfY\le \frac f n\right)^n-P(\bfeta\le f)}\\
&= \sup_{f\in\barE}\abs{P\left(\bfY\le \frac{\norm f_\infty} n \frac f{\norm f_\infty}\right)^n- P\left(\bfeta\le \norm f_\infty \frac f{\norm f_\infty}\right)}\\
&=\sup_{c < 0}\sup_{f\in E^-_1[0,1]}\abs{P\left(\bfY\le \frac cn\abs f\right)^n- P\left(\bfeta\le c\abs f\right)}\\
&=\sup_{f\in E^-_1[0,1]} \sup_{c < 0}\abs{P\left(\bfY\le \frac cn\abs f\right)^n- P\left(\bfeta\le c\abs f\right)}.
\end{align*}
The assertion now follows by repeating the arguments in the proof of Theorem
1.1 in \citet{falr02}, where the bivariate case has been established.
\end{proof}
\section{Distributional Differentiability of an SMSP}\label{sec:differentiability_smsp}
In this section, our aim is to establish the concept of distributional
differentiability of an SMSP $\bm\eta=(\eta_t)_{t\in[0,1]}$, that is, the
convergence in distribution of the difference quotient
$(\eta_{t+h}-\eta_t)/h$ to some rv on the real line as $h\to0$. To this end,
we need to determine the distribution of the increments $\eta_s-\eta_t$,
$s\neq t$. This can be achieved by the calculation of the conditional
distribution of $\bm\eta$, given $\{\eta_{t_0}=x\}$ for some $t_0\in[0,1]$
and $x<0$, which is part of an interesting and challenging open problem by
itself: Suppose the distribution of the SMSP $(\eta_t)_{t\in[0,1]}$ is known,
and the process has already been observed at some points
$\{\eta_{t_k}=x_k,~k=1,\dotsc,n\}$, how can one determine the conditional
distribution of $(\eta_t)_{t\in[0,1]}$ given these observations?
As an auxiliary result, we first compute the partial derivatives of a
functional $D$-norm $\norm\cdot_D$. For this purpose, we need the following
definition. Let $\mathcal X$ be a normed function space, and $J:\mathcal
X\to\R$ a functional. The \emph{first variation} (or the \emph{G\^{a}teaux
differential}) \emph{of $J$ at $u\in\mathcal X$ in the direction
$v\in\mathcal X$} is defined as
\begin{equation*}
\nabla J(u)(v):=\lim_{\eps\to0}\frac{J(u+\eps v)-J(u)}{\eps}=\frac{d}{d\eps}J(u+\eps v)\Big|_{\eps=0}.
\end{equation*}
Moreover, the \emph{right-hand} (\emph{left-hand}) first variation of $J$ at
$u$ in the direction $v$ is defined as
\begin{equation*}
\nabla^+ J(u)(v):=\lim_{\eps\downarrow0}\frac{J(u+\eps v)-J(u)}{\eps}\quad\text{and}\quad\nabla^- J(u)(v):=\lim_{\eps\downarrow0}\frac{J(u)-J(u-\eps v)}{\eps}.
\end{equation*}
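As a quick numerical illustration of these one-sided difference quotients (not part of the formal development), the snippet below checks the first variation of a smooth toy functional, $J(u)=\sum_i u_i^2$ on a grid, against its closed form $\nabla J(u)(v)=2\sum_i u_i v_i$; the functional and all names are illustrative choices.

```python
# Illustrative check of the Gateaux differential (first variation):
# for J(u) = sum(u**2), nabla J(u)(v) = 2*sum(u*v), and the
# difference quotient d/deps J(u + eps*v) at eps = 0 recovers it.
import numpy as np

def J(u):
    return float(np.sum(u ** 2))

def first_variation_fd(J, u, v, eps=1e-6):
    """Central finite-difference approximation of d/deps J(u + eps*v) at eps=0."""
    return (J(u + eps * v) - J(u - eps * v)) / (2 * eps)

rng = np.random.default_rng(0)
u = rng.standard_normal(50)
v = rng.standard_normal(50)

fd = first_variation_fd(J, u, v)
analytic = 2.0 * float(np.sum(u * v))
print(abs(fd - analytic))  # small
```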
Considering a $D$-norm $\norm\cdot_D$ as a functional on the space $E[0,1]$, we
can calculate the first variation of $\norm\cdot_D$. The choice of the space
$E[0,1]$ allows us to incorporate the fidis and therefore yields the
partial derivatives of a \emph{multivariate} $D$-norm. This
finite-dimensional version of the following result has already been observed
by \citet{einkraseg12}. Note that as a norm is a convex function, a
multivariate $D$-norm $\norm{\bfx}_D$ is for almost every
$\bfx<\bfzero\in\R^d$ continuously differentiable.
Denote further by $\sgn$ the sign function, i.\,e. $\sgn(x)=1$ for $x>0$ and
$\sgn(x)=-1$ for $x<0$.
\begin{lemma}\label{lem:dnorm_firstvariation}
Let $\norm\cdot_D$ be a $D$-norm on the function space $E[0,1]$ with
generator $\bm Z=(Z_t)_{t\in[0,1]}\in C[0,1]$. Let $t_0\in[0,1]$ and
$1_{\{t_0\}}\in E[0,1]$ be the indicator function of the one point set
$\{t_0\}$. Then for every $f\in E[0,1]$ with $f(t_0)\neq0$
\begin{align*}
\nabla^+\norm f_D\left(1_{\{t_0\}}\right)&=\lim_{\eps\downarrow0}\frac{\norm{f+\eps1_{\{t_0\}}}_D-\norm{f}_D}{\eps}\\
&=\sgn\left(f(t_0)\right) E\left(1_{\{\sup_{t\neq t_0}\abs{f(t)}Z_t\leq\abs{f(t_0)}Z_{t_0}\}}Z_{t_0}\right)\\
&=\sgn\left(f(t_0)\right) E\left(1_{\{\sup_{t\in[0,1]}\abs{f(t)}Z_t=\abs{f(t_0)}Z_{t_0}\}}Z_{t_0}\right)
\end{align*}
and
\begin{align*}
\nabla^-\norm f_D\left(1_{\{t_0\}}\right)&=\lim_{\eps\downarrow0}\frac{\norm{f}_D-\norm{f-\eps1_{\{t_0\}}}_D}{\eps}\\
&=\sgn\left(f(t_0)\right) E\left(1_{\{\sup_{t\neq t_0}\abs{f(t)}Z_t<\abs{f(t_0)}Z_{t_0}\}}Z_{t_0}\right).
\end{align*}
In particular, the left-hand first variation of $\norm f_D$ in the direction
$1_{\{t_0\}}$ always equals zero if $f$ is continuous in $t_0$.
\end{lemma}
\begin{proof}
Let $f\in E[0,1]$ with $f(t_0)>0$. We have
\begin{equation*}
\norm f_D=E\left(\sup_{t\in[0,1]}\abs{f(t)}Z_t\right).
\end{equation*}
First we calculate the right-hand first variation of $\norm\cdot_D$. For
$\eps>0$ there exists a disjoint decomposition of the underlying probability
space $(\Omega,\mathcal A,P)$ via
\begin{equation*}
\Omega=A_1+A_2^{\eps}+A_3^{\eps}
\end{equation*}
with
\begin{align*}
A_1:&=\left\{\sup_{t\neq t_0}\abs{f(t)}Z_t\leq f(t_0)Z_{t_0}=\sup_{t\in[0,1]}\abs{f(t)}Z_t\right\},\\
A_2^{\eps}:&=\left\{f(t_0)Z_{t_0}<\sup_{t\neq t_0}\abs{f(t)}Z_t=\sup_{t\in[0,1]}\abs{f(t)}Z_t\leq (f(t_0)+\eps)Z_{t_0}\right\}\downarrow_{\eps\downarrow0}\emptyset,\\
A_3^{\eps}:&=\left\{(f(t_0)+\eps)Z_{t_0}<\sup_{t\neq t_0}\abs{f(t)}Z_t=\sup_{t\in[0,1]}\abs{f(t)}Z_t\right\}.
\end{align*}
Therefore we obtain by the dominated convergence theorem
\begin{align*}
\nabla^+&\norm f_D\left(1_{\{t_0\}}\right)=\lim_{\eps\downarrow0}\frac{\norm{f+\eps1_{\{t_0\}}}_D-\norm{f}_D}{\eps}\\
=&\lim_{\eps\downarrow0}E\left(\frac1\eps\left(\max\left(\sup_{t\neq t_0}\abs{f(t)}Z_t,(f(t_0)+\eps)Z_{t_0}\right)-\sup_{t\in[0,1]}\abs{f(t)}Z_t\right)\right)\\
=&\lim_{\eps\downarrow0}E\left(\frac1\eps\Big((f(t_0)+\eps)Z_{t_0}-f(t_0)Z_{t_0}\Big)\cdot 1_{A_1}\right)\\
&+\lim_{\eps\downarrow0}E\left(\frac1\eps\Big((f(t_0)+\eps)Z_{t_0}-\sup_{t\neq t_0}\abs{f(t)}Z_t\Big)\cdot 1_{A_2^{\eps}}\right)\\
&+\lim_{\eps\downarrow0}E\left(\frac1\eps\Big(\sup_{t\neq t_0}\abs{f(t)}Z_t-\sup_{t\neq t_0}\abs{f(t)}Z_t\Big)\cdot 1_{A_3^{\eps}}\right)\\
=&E\left(Z_{t_0}\cdot1_{A_1}\right)
\end{align*}
since the second summand after the second to last equality vanishes as
\begin{align*}
\frac1\eps\Big((f(t_0)+\eps)Z_{t_0}-\sup_{t\neq t_0}\abs{f(t)}Z_t\Big)\cdot 1_{A_2^{\eps}}&<\frac1\eps\Big((f(t_0)+\eps)Z_{t_0}-f(t_0)Z_{t_0}\Big)\cdot 1_{A_2^{\eps}}\\
&\to_{\eps\downarrow0}Z_{t_0}\cdot 1_{\emptyset}=0.
\end{align*}
Note that
\begin{equation*}
\sup_{t\neq t_0}\abs{f(t)}Z_t\leq f(t_0)Z_{t_0}\iff \sup_{t\in[0,1]}\abs{f(t)}Z_t=f(t_0)Z_{t_0}.
\end{equation*}
In order to calculate the left-hand first variation of $\norm\cdot_D$, we
find for $\eps>0$ a disjoint decomposition of $(\Omega,\mathcal A,P)$ via
\begin{equation*}
\Omega=B_1^{\eps}+B_2^{\eps}+B_3
\end{equation*}
with
\begin{align*}
B_1^{\eps}:&=\left\{\sup_{t\neq t_0}\abs{f(t)}Z_t\leq (f(t_0)-\eps)Z_{t_0}\right\}\uparrow_{\eps\downarrow0}\left\{\sup_{t\neq t_0}\abs{f(t)}Z_t< f(t_0)Z_{t_0}\right\}=:B_1,\\
B_2^{\eps}:&=\left\{(f(t_0)-\eps)Z_{t_0}<\sup_{t\neq t_0}\abs{f(t)}Z_t\leq f(t_0)Z_{t_0}\right\}\downarrow_{\eps\downarrow0}\emptyset,\\
B_3:&=\left\{f(t_0)Z_{t_0}<\sup_{t\neq t_0}\abs{f(t)}Z_t\right\}.
\end{align*}
Hence we obtain again by the dominated convergence theorem
\begin{align*}
\nabla^-&\norm f_D\left(1_{\{t_0\}}\right)=\lim_{\eps\downarrow0}\frac{\norm{f}_D-\norm{f-\eps1_{\{t_0\}}}_D}{\eps}\\
=&\lim_{\eps\downarrow0}E\left(\frac1\eps\left(\sup_{t\in[0,1]}\abs{f(t)}Z_t-\max\left(\sup_{t\neq t_0}\abs{f(t)}Z_t,(f(t_0)-\eps)Z_{t_0}\right)\right)\right)\\
=&\lim_{\eps\downarrow0}E\left(\frac1\eps\Big(f(t_0)Z_{t_0}-(f(t_0)-\eps)Z_{t_0}\Big)\cdot 1_{B^{\eps}_1}\right)\\
&+\lim_{\eps\downarrow0}E\left(\frac1\eps\Big(f(t_0)Z_{t_0}-\sup_{t\neq t_0}\abs{f(t)}Z_t\Big)\cdot 1_{B_2^{\eps}}\right)\\
&+\lim_{\eps\downarrow0}E\left(\frac1\eps\Big(\sup_{t\neq t_0}\abs{f(t)}Z_t-\sup_{t\neq t_0}\abs{f(t)}Z_t\Big)\cdot 1_{B_3}\right)\\
=&E\left(Z_{t_0}\cdot1_{B_1}\right)
\end{align*}
since the second summand after the second to last equality vanishes by the
same argument as above. The case $f(t_0)<0$ works analogously.
\end{proof}
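The formula in Lemma \ref{lem:dnorm_firstvariation} can be checked by simulation in the bivariate case. The sketch below uses the illustrative generator $(Z_1,Z_2)=(2U,2(1-U))$ with $U$ uniform on $(0,1)$ (an assumption; any generator with $Z_i\geq0$ and $E(Z_i)=1$ would do) and compares a one-sided finite difference of the $D$-norm with the expectation from the lemma.

```python
# Monte Carlo sanity check of the first-variation formula in the bivariate case:
# for ||x||_D = E(max(|x_1| Z_1, |x_2| Z_2)), the right-hand variation in
# direction e_2 at f with f_2 > 0 should equal E(1{|f_1| Z_1 <= f_2 Z_2} Z_2).
import numpy as np

rng = np.random.default_rng(42)
n = 200_000
u = rng.uniform(size=n)
z1, z2 = 2 * u, 2 * (1 - u)        # generator with Z_i >= 0 and E(Z_i) = 1

def d_norm(x1, x2):
    return np.mean(np.maximum(abs(x1) * z1, abs(x2) * z2))

f1, f2 = 1.0, 1.0
eps = 1e-4

# One-sided finite difference using common random numbers.
fd = (d_norm(f1, f2 + eps) - d_norm(f1, f2)) / eps

# Closed-form expression from the lemma (sgn(f_2) = +1 here).
formula = np.mean((abs(f1) * z1 <= f2 * z2) * z2)

print(fd, formula)  # both close to 3/4 for this generator
```

For this generator the formula evaluates analytically to $2\int_0^{1/2}(1-u)\,du=3/4$, which the two estimates reproduce.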
The first variation (or the partial derivatives, respectively) of a $D$-norm
emerges in the simplest case of the so-called \emph{prediction problem}, cf.
\citet{wansto11}, \citet{domeyri12} and \citet{domeyimin12}. Suppose one
knows the distribution of an SMSP $\bm\eta$ and the point $\{\eta_{t_0}=x\}$,
$x<0$, has already been observed. We are interested in the conditional
distribution of $\bm\eta$, given $\{\eta_{t_0}=x\}$. The finite-dimensional
version of the following Lemma is part of Proposition 5 in
\citet{domeyimin12}.
\begin{lemma}\label{lem:conditional_singlepoint_functional}
Let $\bm\eta=(\eta_t)_{t\in[0,1]}$ be an SMSP with $D$-norm $\norm\cdot_D$
generated by $\bm Z=(Z_t)_{t\in[0,1]}$. Choose an arbitrary $t_0\in[0,1]$.
Then for every $f\in\bar E^-[0,1]$ with $f(t_0)=0$ and almost all $x<0$
\begin{equation*}
P\left(\bm\eta\leq f\big|\eta_{t_0}=x\right)=\exp\left(-\left(x+\norm{f+x1_{\{t_0\}}}_D\right)\right)\cdot E\left(1_{\{\sup_{t\in[0,1]}\abs{f(t)}Z_t\leq\abs{x}Z_{t_0}\}}Z_{t_0}\right).
\end{equation*}
\end{lemma}
\begin{proof}
The random variable $\eta_{t_0}$ has Lebesgue-density $e^y$, $y\leq 0$.
Therefore, we have by basic rules of conditional distributions for almost all
$x<0$
\begin{align*}
P\left(\bm\eta\leq f\big|\eta_{t_0}=x\right)&=\lim_{\eps\downarrow0}\frac{\eps^{-1}P(\bm\eta\leq f,\eta_{t_0}\in(x,x+\eps])}{\eps^{-1}P(\eta_{t_0}\in(x,x+\eps])}\\
&=\exp(-x)\lim_{\eps\downarrow0}\frac{P(\bm\eta\leq f,\eta_{t_0}\leq x+\eps)-P(\bm\eta\leq f,\eta_{t_0}\leq x)}{\eps}.
\end{align*}
Now define the function $g\in\bar E^-[0,1]$ by $g(t)=f(t)$, $t\neq t_0$, and
$g(t_0)= x$. Then we have by Lemma \ref{lem:dnorm_firstvariation}
\begin{align*}
P\left(\bm\eta\leq f\big|\eta_{t_0}=x\right)&=\exp(-x)\lim_{\eps\downarrow0}\frac{\exp\left(-\norm{g+\eps1_{\{t_0\}}}_D\right)-\exp\left(-\norm{g}_D\right)}{\eps}\\
&=-\exp(-x)\exp\left(-\norm{g}_D\right)\cdot\nabla^+\norm g_D\left(1_{\{t_0\}}\right)\\
&=\exp\left(-\left(x+\norm{f+x1_{\{t_0\}}}_D\right)\right)\cdot E\left(1_{\{\sup_{t\in[0,1]}\abs{g(t)}Z_t=\abs{x}Z_{t_0}\}}Z_{t_0}\right)\\
&=\exp\left(-\left(x+\norm{f+x1_{\{t_0\}}}_D\right)\right)\cdot E\left(1_{\{\sup_{t\in[0,1]}\abs{f(t)}Z_t\leq\abs{x}Z_{t_0}\}}Z_{t_0}\right).
\end{align*}
\end{proof}
The preceding result can be used for the derivation of the distribution of
the increments of an SMSP, which is the content of the next lemma.
\begin{lemma}\label{lem:increments_smsp}
Consider an SMSP $\bm\eta=(\eta_t)_{t\in[0,1]}$ with generator process $\bm
Z=(Z_t)_{t\in[0,1]}$ and choose arbitrary $s,t\in[0,1]$, $s\neq t$. Denote by
$\norm\cdot_D$ the $D$-norm pertaining to $(\eta_s,\eta_t)$. Then for every
$x\in\R$
\begin{align*}
&P\left(\eta_s-\eta_t\leq x\right)\\
&=\begin{cases}\displaystyle\int_{-\infty}^0\exp\left(-\norm{(x+y,y)}_D\right)\cdot E\left(1_{\{yZ_t\leq(x+y)Z_s\}}Z_t\right)~dy,&\; x<0 \\ \displaystyle\int_{-\infty}^{-x}\exp\left(-\norm{(x+y,y)}_D\right)\cdot E\left(1_{\{yZ_t\leq(x+y)Z_s\}}Z_t\right)~dy+1-\exp(-x),&\; x\geq0. \end{cases}
\end{align*}
\end{lemma}
\begin{proof}
We have by basic rules of conditional distributions for every $x\in\R$
\begin{equation*}
P(\eta_s\leq x+\eta_t)=\begin{cases}\displaystyle\int_{-\infty}^0P\left(\eta_s\leq x+y\big|\eta_t=y\right)\exp(y)~dy&\; x<0 \\ \displaystyle\int_{-\infty}^{-x}P\left(\eta_s\leq x+y\big|\eta_t=y\right)\exp(y)~dy+\int_{-x}^0\exp(y)~dy&\; x\geq0. \end{cases}
\end{equation*}
On the other hand, we obtain from Lemma
\ref{lem:conditional_singlepoint_functional} for almost all $y<0$ and every
$x\in\R$ with $x+y\leq 0$
\begin{align*}
P\big(\eta_s&\leq x+y\big|\eta_t=y\big)\\
&=\exp\left(-\left(y+\norm{(x+y,y)}_D\right)\right)\cdot E\left(1_{\{\max(\abs{x+y}Z_s,\abs yZ_t)=\abs{y}Z_t\}}\cdot Z_t\right)\\
&=\exp\left(-\left(y+\norm{(x+y,y)}_D\right)\right)\cdot E\left(1_{\{(x+y)Z_s\geq yZ_t\}}\cdot Z_t\right),
\end{align*}
which completes the proof.
\end{proof}
The preceding lemma allows us to introduce the following differentiability
concept. We call a stochastic process $(X_t)_{t\in[0,1]}$
\emph{differentiable in distribution in $t_0\in[0,1]$} if the difference
quotient $(X_{t_0+h}-X_{t_0})/h$ converges in distribution to some rv on the
real line as $h\to0$.
\begin{prop}[Differentiability in Distribution of SMSP]\label{prop:differentiabiliy_of_smsp}
Let $\bfeta=(\eta_t)_{t\in[0,1]}$ be an SMSP with generator process
$\bfZ=(Z_t)_{t\in[0,1]}\in C[0,1]$. Suppose that for some $t_0\in[0,1]$
\begin{equation}\label{eqn:local_differentiability_of_generator_process}
\frac{Z_{t_0+h}-Z_{t_0}}{h} \to_{h\to 0} \xi_{t_0}\quad\textrm{a.s.}
\end{equation}
Then we have for $x\in\R$
\[
P\left(\frac{\eta_{t_0+h}-\eta_{t_0}}{h}\le x\right)\to_{h\to 0} H(x):= \int_{-\infty}^0 \exp(y) E\left( 1_{\left\{\xi_{t_0}\le -\tfrac xy Z_{t_0}\right\}}Z_{t_0}\right)\,dy.
\]
\end{prop}
Condition \eqref{eqn:local_differentiability_of_generator_process} means
that $(\partial/\partial t) Z_t$ exists for $t=t_0$ a.s. and, therefore,
$\xi_{t_0}=Z'_{t_0}$.
\begin{proof}[Proof of Proposition \ref{prop:differentiabiliy_of_smsp}]
We have for $x\in\R$ and $h>0$ by Lemma \ref{lem:increments_smsp}
\begin{align*}
&P\left(\eta_{t_0+h}-\eta_{t_0}\le hx\right)\\
&= \int_{-\infty}^{-h\abs x} \exp\left(-\norm{(hx+y,y)}_{D(h)}\right)\cdot E\left(1_{\{yZ_{t_0}\leq(hx+y)Z_{t_0+h}\}}Z_{t_0}\right)\,dy + o(1)
\end{align*}
as $h\downarrow0$, where $\norm{\cdot}_{D(h)}$ is the $D$-norm generated by
$(Z_{t_0+h},Z_{t_0})$. Now we obtain for almost all $y<-h\abs x$
\begin{align*}
&E\left( 1_{\left\{y Z_{t_0} \leq (hx+y) Z_{t_0+h}\right\}}Z_{t_0}\right)\\
&=E\left(1_{\left\{y \tfrac{Z_{t_0}-Z_{t_0+h}}{h} \leq x Z_{t_0+h}\right\}}Z_{t_0} \right)\\
&=E\left(1_{\left\{ \tfrac{Z_{t_0+h}-Z_{t_0}}{h} \leq -\tfrac xy Z_{t_0+h}\right\}}Z_{t_0} \right)\\
&\to_{h\downarrow 0} E\left( 1_{\left\{ \xi_{t_0} \le -\tfrac xy Z_{t_0}\right\}}Z_{t_0}\right)
\end{align*}
by condition \eqref{eqn:local_differentiability_of_generator_process}, which
implies the assertion as $h\downarrow0$. On the other hand, we have for
$x\in\R$ and $h<0$ by Lemma \ref{lem:increments_smsp}, condition
\eqref{eqn:local_differentiability_of_generator_process}, and the fact that
$E(Z_{t_0})=1$
\begin{align*}
&P\left(\eta_{t_0+h}-\eta_{t_0}\ge hx\right)=1-P\left(\eta_{t_0+h}-\eta_{t_0}< hx\right)\\
&=1- \int_{-\infty}^{h\abs x} \exp\left(-\norm{(hx+y,y)}_{D(h)}\right)\cdot E\left(1_{\{yZ_{t_0}\leq(hx+y)Z_{t_0+h}\}}Z_{t_0}\right)\,dy + o(1)\\
&\to_{h\uparrow0}1-\int_{-\infty}^0 \exp(y) E\left( 1_{\left\{\xi_{t_0}\ge -\tfrac xy Z_{t_0}\right\}}Z_{t_0}\right)\,dy\\
&=1-\int_{-\infty}^0\exp(y)~dy+\int_{-\infty}^0 \exp(y) E\left( 1_{\left\{\xi_{t_0}\le -\tfrac xy Z_{t_0}\right\}}Z_{t_0}\right)\,dy\\
&=\int_{-\infty}^0 \exp(y) E\left( 1_{\left\{\xi_{t_0}\le -\tfrac xy Z_{t_0}\right\}}Z_{t_0}\right)\,dy.
\end{align*}
\end{proof}
Proposition \ref{prop:differentiabiliy_of_smsp} does not imply
differentiability of the path of $\bfeta$ at $t_0$. But if $\bfeta$ is
differentiable at $t_0$, then $H$ is the df of the derivative
$(\partial/\partial t)\eta_t$ of $\bfeta$ at $t=t_0$. We, therefore, denote
by $\eta_{t_0}'$ a rv which follows the df $H$.
Suppose that $(\partial/\partial t) Z_t$ exists for $t=t_0$ a.s. Then
\[
\mathbb{F}_{t_0}(x):= E\left(1_{\left\{Z_{t_0}'\le x Z_{t_0}\right\}}Z_{t_0} \right),\qquad x\in\R,
\]
defines a df on $\R$. Denote by $\zeta_{t_0}$ a rv which follows this df and
is independent of $\eta_{t_0}$. Then we obtain the equation
\[
H(x) =P\left(-\eta_{t_0} \zeta_{t_0}\le x\right),\qquad x\in\R,
\]
i.e., we have
\[
\eta_{t_0}'=_D -\eta_{t_0} \zeta_{t_0}.
\]
The pathwise derivative of $\bfeta$ at $t_0$, if it exists, coincides,
therefore, in distribution with $-\eta_{t_0} \zeta_{t_0}$.
\begin{lemma}
Suppose that $E\left(Z_{t_0}'\right)$ exists. Then the mean value of $\mathbb
F_{t_0}$ exists as well and coincides with $E\left(Z_{t_0}'\right)$.
\end{lemma}
\begin{proof}
The expectation of an arbitrary rv $\xi$ exists iff $\int_0^\infty
P(\xi>x)\,dx+\int_{-\infty}^0 P(\xi<x)\,dx<\infty$, and in this case
\[
E(\xi)= \int_0^\infty P(\xi>x)\,dx - \int_{-\infty}^0 P(\xi<x)\,dx.
\]
As a consequence we obtain from Fubini's theorem
\begin{align*}
&\int x\,\mathbb F_{t_0}(dx)\\
&= \int_0^\infty \left(1-E\left(1_{\left\{Z_{t_0}'\le x Z_{t_0}\right\}}Z_{t_0} \right)\right)dx - \int_{-\infty}^0 E\left(1_{\left\{Z_{t_0}'\le x Z_{t_0}\right\}}Z_{t_0} \right)\,dx\\
&= \int_0^\infty E\left(1_{\left\{Z_{t_0}'> x Z_{t_0}\right\}}Z_{t_0} \right)\,dx - \int_{-\infty}^0 E\left(1_{\left\{Z_{t_0}'\le x Z_{t_0}\right\}}Z_{t_0} \right)\,dx\\
&= \int z \int_0^\infty 1_{\{z'>x z\}}\,dx\,\left(P*\left(Z_{t_0},Z_{t_0}'\right)\right)(d(z,z'))\\
&\hspace*{2cm} -\int z \int_{-\infty}^0 1_{\{z'\le x z\}}\,dx\,\left(P*\left(Z_{t_0},Z_{t_0}'\right)\right)(d(z,z'))\\
&= \int z\max\left(\frac{z'} z,0\right)\left(P*\left(Z_{t_0},Z_{t_0}'\right)\right)(d(z,z'))\\
&\hspace*{2cm} + \int z\min\left(\frac{z'} z,0\right)\left(P*\left(Z_{t_0},Z_{t_0}'\right)\right)(d(z,z'))\\
&= E\left(Z_{t_0}'\right).
\end{align*}
\end{proof}
As a consequence we obtain in particular
\[
E\left(\eta_{t_0}'\right) = - E\left(\eta_{t_0}\zeta_{t_0}\right) = - E\left(\eta_{t_0}\right) E\left(\zeta_{t_0}\right) = E\left(Z_{t_0}'\right).
\]
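This identity lends itself to a simulation check: when $Z_{t_0}>0$, $\mathbb F_{t_0}$ is the law of $Z_{t_0}'/Z_{t_0}$ under the $Z_{t_0}$-weighted measure, so its mean can be estimated by a self-normalized weighted average. The generator $Z_t=1+W(t-1/2)$ with $W$ uniform on $(-1,1)$ is an illustrative assumption, chosen so that $Z_t\geq 1/2>0$, $E(Z_t)=1$ and $Z_{t_0}'=W$ a.s.

```python
# Monte Carlo check that the mean of F_{t0} equals E(Z'_{t0}).
# Illustrative generator (an assumption, not from the text):
# Z_t = 1 + W (t - 1/2), W ~ Uniform(-1, 1).
import numpy as np

rng = np.random.default_rng(1)
w = rng.uniform(-1.0, 1.0, size=500_000)
t0 = 0.8
z_t0 = 1.0 + w * (t0 - 0.5)       # Z_{t0}, positive, mean 1
z_prime = w                        # pathwise derivative Z'_{t0}

# Self-normalized Z_{t0}-weighted average of Z'_{t0}/Z_{t0}.
mean_F = np.sum(z_t0 * (z_prime / z_t0)) / np.sum(z_t0)
print(mean_F, np.mean(z_prime))    # both close to E(W) = 0
```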
\section{Introduction}
So far, most large-scale machine learning systems are trained in batch mode, where data samples from all classes must be available during training. As pointed out by~\cite{rebuffi2016icarl,li2016learning}, natural learning systems are inherently incremental: new knowledge is continuously learned over time while existing knowledge is maintained. Furthermore, many computer vision applications in the real world require incremental learning capabilities because batch-mode training does not scale well when new classes need to be learned frequently over time.
One main problem with incremental learning is catastrophic forgetting~\cite{McCloskey-Cohen-PLM-1989}, where the classification accuracy for the existing classes may quickly deteriorate as new classes are added. Catastrophic forgetting is mainly caused by the fact that the past data are not available during training.
The key challenge is how to integrate the old classifier and the new data to learn the representation efficiently. Existing solutions \cite{li2016learning, rebuffi2016icarl} either use the old classifier as a regularizer or select a few exemplars to represent the old training data. However, neither of them can retain the classification performance on the old classes while balancing the old and new classes. In addition, selecting exemplars has other limitations (e.g., limited scalability and privacy concerns).
We propose a new loss function that integrates the cross-entropy loss and the distillation loss on both old exemplars and new training samples. This allows accurate classification within both the old and the new classes. To balance the old and new classes, we found that a scalar can efficiently represent the bias towards the new data. Thus, we can simply estimate the bias on a validation set and remove it from the model. The experimental results show that the bias estimate is very stable across the validation and test datasets. This method outperforms the state-of-the-art iCaRL \cite{rebuffi2016icarl} on CIFAR-100 \cite{krizhevsky2009learning}.
We also investigate using GANs\cite{goodfellow2014generative,ACGAN2017,CGAN-2014,RadfordMC15,ArjovskyCB17} to replace the exemplars, as this is more scalable and raises fewer privacy issues than keeping the past data. Although GANs have limitations in image quality and suffer from mode dropping, our method outperforms the state-of-the-art LwF \cite{li2016learning}, which does not use any exemplars, on the CIFAR-100 \cite{krizhevsky2009learning}, Flower-102 \cite{nilsback2008automated} and MS-Celeb-1M-Base \cite{lowshotface,guo2016msceleb} datasets.
\begin{figure*}[h]
\begin{center}
\includegraphics[width=\linewidth]{compareFramework.pdf}
\end{center}
\caption{Comparison of our framework with LwF \cite{li2016learning} and iCaRL \cite{rebuffi2016icarl}. LwF applies the distillation loss to the new data and iCaRL keeps a small set of exemplars from the old data. Our method utilizes GANs to generate old data. }
\label{fig:compareFramework}
\end{figure*}
\section{Related Work}
Incremental learning has been a long-standing problem in machine learning~\cite{cauwenberghs2001incremental,polikar2001learn++,mensink2013distance,kuzborskij2013n}. Before deep learning took off, incremental learning techniques were developed by leveraging linear classifiers, ensembles of weak classifiers, nearest neighbor classifiers, etc. Recently, thanks to the exciting progress in deep learning, there has been a lot of research on incremental learning with deep neural network models. The work can be roughly divided into two categories depending on whether they require the old data or not.
The methods in the first category do not require any old data. \cite{jung2016less} presented a method for domain transfer learning. They try to maintain the performance on the old task by freezing the final layer of the old tasks and discouraging the shared weight parameters in the feature extraction layers from changing. \cite{kirkpatrick2017overcoming} proposed a technique to remember old tasks by constraining the important weight parameters to stay close to their old values while looking for a solution to a new task in the neighborhood of the old one. One limitation of the approach is that there will be conflicting constraints for the weight parameters which are shared between old and new tasks. \cite{li2016learning} presented a method that uses a technique called knowledge distillation~\cite{hinton2015distilling} to maintain the performance of the new classifier on the old tasks. \cite{Shmelkov-et-al-ICCV-2017} also uses knowledge distillation, but applies it to object detection instead of classification. Our method also belongs to this category in that we do not require the old data. The key difference is that we use GANs \cite{goodfellow2014generative,ArjovskyCB17,RadfordMC15} to represent the old data. Even though the data generated by GANs are not as good as the real data, our experiments show that the generated data significantly improve the incremental learning performance compared to techniques that do not use any old data at all.
The methods in the second category require part or all of the old data. \cite{rebuffi2016icarl} proposed a method to select a small number of exemplars from each old class, and the selected exemplars are stored for the incremental learning when adding new classes. \cite{lopez2017gradient} proposed a continuous learning framework where the training samples for different tasks are input into the model one by one during training. \cite{xiao2014error} proposed a training method that grows a network hierarchically as new training data are added. The key difference between our method and these techniques is that we do not keep any real data of the old classes. In real world applications, due to the privacy and legal concerns, real data may not be allowed to be stored for a long period of time. Since GANs generate data purely from a neural network without copying any real image patches, we expect GANs to have much less or even no privacy concerns. The situation is similar to the speech synthesis where the speech data generated by model-based speech synthesis methods has no privacy or legal concerns while the speech data generated by exemplar-based speech synthesis methods, which copies exemplar speech segments, are considered as having the same privacy concerns as the original real data.
\section{Incremental Classifier Learning \label{secincrementallearning}}
We define the incremental classification as follows: given a pre-trained classifier $\hat{f}^n$ on $n$ classes and a new labeled dataset for additional $m$ classes, how to train a new model $f^{n+m}$ to perform classification on $n+m$ classes? Let us denote the new dataset as $X^m=\{(x_i, y_i), 1 \leq i \leq M, y_i \in [n+1,..,n+m]\}$, where $M$ is the size of the dataset, $x_i$ and $y_i$ are the image and the label, respectively.
Although strict incremental learning does not allow using the old $n$ class data, which is used to train the old model $\hat{f}^n$, iCaRL \cite{rebuffi2016icarl} shows that keeping a small amount of old data significantly improves the performance. Let us denote the selected old data as $\hat{X}^n=\{(\hat{x}_j, \hat{y}_j), 1 \leq j \leq N_s, \hat{y}_j \in [1,..,n]\}$, where $N_s$ is the number of selected old images.
\subsection{Design of Loss Function}
Incremental learning has two major challenges - (a) maintaining the classification performance on the old $n$ classes, and (b) balancing the old $n$ classes and the new $m$ classes.
LwF \cite{li2016learning} used distilling with the new data to handle the first challenge and relied on the weight decay to address the balance. However, one limitation of this solution is that distilling with the new data may not guarantee that the new classifier makes predictions similar to the old classifier on the old data. In addition, finding the optimal weight decay is difficult. These two issues can be significantly alleviated by selecting a few real exemplars from the old data. We will provide a solution later.
iCaRL \cite{rebuffi2016icarl} solved the first challenge by carefully selecting a few exemplars from the old data, and handled the imbalance issue by using a binary cross-entropy loss for each class. However, this loss function is not as effective as the cross entropy in handling the relationship between classes.
We propose a new loss function by leveraging the strengths of both LwF \cite{li2016learning} and iCaRL \cite{rebuffi2016icarl}. We borrow the exemplar idea from iCaRL to keep a few exemplars and use them in both the distillation and cross-entropy losses. The introduction of the old data improves the distillation by encouraging similar predictions of the old and new classifiers on the old $n$ classes. Although the exemplars also help balance the old $n$ classes and the new $m$ classes, as the cross entropy is computed across all $n+m$ classes, it is still difficult to find the balance point since the variations in the new data are significantly larger.
We present a simple representation of the imbalance: a scalar applied to the predictions of the new $m$ classes.
In addition, we can simply estimate this bias on the validation set and apply the correction to the new classifier. Experimental results show that our approach not only outperforms LwF and iCaRL, but is also not sensitive to the exemplar selection. Furthermore, our new loss function and bias estimation also work well on GAN-generated data. We illustrate the difference between our model, LwF and iCaRL in Figure \ref{fig:compareFramework}.
\subsection{Loss Function}
In this section, we show the details of the proposed loss function and bias estimation. Let us denote the output of the old and new classifiers as $\hat{f}^n(x)=[\hat{f}_1(x),...,\hat{f}_n(x)]$ and $f^{n+m}(x)=[f_1(x),...,f_n(x), f_{n+1}(x),...,f_{n+m}(x)]$, respectively. This is illustrated in Figure \ref{fig:overview}.
The distilling loss is formulated as follows:
\begin{align}
L_d = \sum_{x\in \hat{X}^n \cup X^m} \sum_{k=1}^n- \frac{\hat{f}_k(x)}{T} \log (\frac{f_k(x)}{T}),
\end{align}
where $T$ is the temperature scalar and is set to 2.
The cross-entropy loss is written as:
\begin{align}
L_c = \sum_{(x,y) \in \hat{X}^n \cup X^m} \sum_{k=1}^{n+m}- \delta_{y=k} \log [f_k(x)].
\end{align}
The two losses are combined linearly as follows:
\begin{align}
L = \lambda L_d + (1-\lambda)L_c, \label{eqloss}
\end{align}
where the scalar $\lambda$ is used to balance the two terms.
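The combined loss of Equation (\ref{eqloss}) can be sketched for a mini-batch as below. Following common practice for distillation, the sketch applies the temperature $T$ inside a softmax over logits; this normalization, and every name in the snippet, is an illustrative assumption, and the actual training code would of course use an autograd framework.

```python
# Sketch of the combined distillation + cross-entropy loss (illustrative only).
import numpy as np

def softmax(z, T=1.0):
    z = z / T
    z = z - z.max(axis=1, keepdims=True)   # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

def incremental_loss(new_logits, old_logits, labels, n_old, T=2.0, lam=0.5):
    """new_logits: (B, n+m); old_logits: (B, n) from the frozen old model."""
    p_old = softmax(old_logits, T)                # soft targets on the old n classes
    p_new = softmax(new_logits[:, :n_old], T)     # new model restricted to old classes
    L_d = -np.mean(np.sum(p_old * np.log(p_new + 1e-12), axis=1))

    q = softmax(new_logits)                       # cross entropy over all n+m classes
    L_c = -np.mean(np.log(q[np.arange(len(labels)), labels] + 1e-12))
    return lam * L_d + (1 - lam) * L_c

rng = np.random.default_rng(0)
B, n_old, m = 8, 5, 3
loss = incremental_loss(rng.standard_normal((B, n_old + m)),
                        rng.standard_normal((B, n_old)),
                        rng.integers(0, n_old + m, size=B), n_old)
print(loss)  # a positive scalar
```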
\subsection{Bias Removal}
Due to the unbalanced data between the old $n$ classes and the new $m$ classes (i.e. $N_s \ll M$), the new classifier $f^{n+m}$ is biased towards the new $m$ classes. This is confirmed by the experimental results.
We simply represent the bias as
a multiplier $\beta$ between 0 and 1 that is applied to the outputs of the new $m$ classes:
\begin{align}
f^{n+m}(x)=[f_1(x),...,f_n(x), \beta f_{n+1}(x),...,\beta f_{n+m}(x)]. \label{eqbeta}
\end{align}
We can use the validation set to estimate the bias and apply it to the classifier to remove the bias.
Results show that the bias is very consistent across the validation set and the test set, and our new loss with bias estimation outperforms the state-of-the-art iCaRL \cite{rebuffi2016icarl} on CIFAR-100.
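The bias-removal step can be sketched as a one-dimensional search: apply the multiplier $\beta$ of Equation (\ref{eqbeta}) to the new-class outputs and keep the value that maximizes validation accuracy. The grid, the synthetic validation data and all names below are illustrative assumptions, not the paper's actual protocol.

```python
# Sketch of bias estimation by grid search on a (synthetic) validation set.
import numpy as np

def apply_bias(scores, n_old, beta):
    out = scores.copy()
    out[:, n_old:] *= beta         # scale the new-class outputs, Eq. (4)
    return out

def estimate_beta(val_scores, val_labels, n_old, grid=np.linspace(0.1, 1.0, 19)):
    accs = [(b, np.mean(apply_bias(val_scores, n_old, b).argmax(1) == val_labels))
            for b in grid]
    return max(accs, key=lambda t: t[1])[0]

# Toy validation set in which the new-class scores are inflated by 2x:
rng = np.random.default_rng(0)
n_old, m, N = 5, 5, 2000
labels = rng.integers(0, n_old + m, size=N)
scores = rng.normal(size=(N, n_old + m))
scores[np.arange(N), labels] += 2.0    # correct class tends to score highest...
scores[:, n_old:] *= 2.0               # ...but the new classes are biased upward

beta = estimate_beta(scores, labels, n_old)
print(beta)  # well below 1, compensating the inflation
```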
\begin{figure}
\begin{center}
\includegraphics[width=1.1\linewidth]{overview.pdf}
\end{center}
\caption{Overview of our incremental learning framework. The loss function contains two terms, the distilling loss and the cross-entropy loss. }
\label{fig:overview}
\end{figure}
\section{Generating Exemplars Using GANs \label{secsampling}}
The approach of choosing exemplars has a major drawback: the gap between the distribution of the chosen exemplars and the distribution of the entire training data for the old $n$ classes.
Since in theory GANs \cite{goodfellow2014generative} can learn the distribution of the real dataset, GANs could potentially be a better representation of the old data. Furthermore, we expect GANs to have less privacy concerns than the real data because GANs do not copy any real image patches. Therefore, we propose training a GANs to represent the old data, and using the samples generated by the GANs for incremental learning.
\subsection{Combining GANs and Classifier}
We use all the training data of the old $n$ classes to train a generator $x_g=G(z)$, where $z$ is a random noise input. Then a label is assigned to the generated image $x_g$ using the classifier $\hat{f}^n$ as follows:
\begin{align}
y_g = \underset{1 \leq k \leq n}{\arg\max}\hat{f}_k(x_g).
\end{align}
We use the vanilla GANs rather than the conditional GANs \cite{CGAN-2014}, since conditional GANs are difficult to train when the number of classes $n$ is large (e.g. 10k for face data) and each class has limited samples \cite{ACGAN2017}. We use the DCGANs \cite{RadfordMC15} architecture for the generator and use the earth-mover distance in WGANs \cite{ArjovskyCB17} as the loss function to encourage more data variations and coverage.
\subsection{Data Selection \label{secganseect}}
Even though there has been exciting progress in GANs, there are still limitations in the existing techniques. GANs cannot guarantee that a generated image
$x_g$ always looks like the images from the old training data. The generator sometimes produces images which are mixtures of multiple classes, or images which are totally different from any of the classes.
Therefore, a selection process is necessary in order to remove unrelated samples. We simply select generated images $x_g$ such that its maximum likelihood over $n$ classes is above a pre-specified threshold $\theta$, i.e.
$\underset{1 \leq k \leq n}{\max}{\hat{f}_k(x_g)} > \theta$. Once the generated images are selected, we use the same incremental learning approach in Section \ref{secincrementallearning} to train the new classifier $f^{n+m}$ by replacing the real exemplars with generated images and labels $\{x_g, y_g\}$.
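The generate-label-filter pipeline above can be sketched as follows; `generator` and `old_classifier` are random stand-ins (assumptions) for the trained WGAN generator and the old classifier $\hat{f}^n$, not the paper's actual models.

```python
# Sketch of the label-and-filter step for GAN samples: a generated image keeps
# the label argmax_k f_k(x_g) only if its top confidence exceeds theta.
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_samples, theta = 10, 1000, 0.6
W = rng.normal(size=(32, n_classes))       # fixed stand-in classifier weights

def generator(batch):                      # stand-in for x_g = G(z)
    return rng.normal(size=(batch, 32))

def old_classifier(x):                     # stand-in for f_hat^n: returns probs
    logits = x @ W
    e = np.exp(logits - logits.max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

x_g = generator(n_samples)
probs = old_classifier(x_g)
conf = probs.max(axis=1)
keep = conf > theta                        # discard ambiguous generations
x_sel, y_sel = x_g[keep], probs.argmax(axis=1)[keep]
print(len(x_sel), "of", n_samples, "samples kept")
```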
\section{Experiments}
In this section, we first introduce the datasets and describe implementation details. We then show comparisons with the state-of-the-art methods, followed by the discussions on the model parameters.
\subsection{Datasets }
Experiments are conducted on three datasets:\\
\textbf{CIFAR-100} \cite{krizhevsky2009learning} contains 100 object classes. Each class has 500 training images and 100 testing images. Following the class-incremental benchmark protocol of iCaRL \cite{rebuffi2016icarl} on this dataset, the 100 classes are arranged in a fixed random order and come in as $P$ parts, each containing $C=100/P$ classes. A multi-class classifier is built with the first part that contains $C$ classes. Then this classifier is adapted to recognize all 100 classes. We have performed experiments for $P$=2, 5, 10 and 20. \\
\textbf{Flower-102 } \cite{nilsback2008automated} consists of 102 flower categories. The original split of this dataset has 1,020 images in the training set and 6,149 images in the test set. To use more training images for the CNN model, we take the larger set of 6,149 images as training and test on the smaller set of 1,020 images. Similar to CIFAR-100, 102 classes are split into two parts and are trained incrementally. Each part contains 51 classes. \\
\textbf{MS-Celeb-1M-Base} \cite{lowshotface} has 20,000 classes with a total of 1.2 million aligned face images, which is a smaller yet nearly noise-free version of the MS-Celeb-1M \cite{guo2016msceleb} dataset. For each person, 80\% of the images (up to 30) are randomly selected for training and the rest are used for testing. The 20,000 classes are divided randomly and equally into two parts to be incrementally trained.
\subsection{Implementation Details }
We used the TensorPack package\footnote{\url{https://github.com/ppwwyyxx/tensorpack}} based on TensorFlow \cite{abadi2016tensorflow}. The implementation details are listed as follows:\\
\textbf{CIFAR-100}: we follow the iCaRL setting which used a 32-layer ResNet \cite{he2016deep}. Each training step has 70 epochs. The learning rate starts from 0.1 initially and reduces to 0.01 and 0.001 after 49 and 63 epochs, respectively. The weight decay is set to 0.0002 and the batchsize is 128. \\
\textbf{Flower-102}: an 18-layer ResNet \cite{he2016deep} is fine-tuned from a model that is trained with IMAGENET ILSVRC 2012 \cite{russakovsky2015imagenet} dataset. 90 epochs are used for each training. The learning rate is set to 0.1 and reduces to 1/10 of the previous learning rate every 30 epochs. The weight decay is 0.0001 and the batchsize is 256. \\
\textbf{MS-Celeb-1M-Base}: a 34-layer ResNet \cite{he2016deep} is utilized. The learning rate, weight decay, batchsize and training epochs are the same as Flower-102.
The standard Top-1 accuracy is reported as the evaluation metric.
Our code will be released on GitHub.
\subsection{Comparisons and Results}
We compare with other alternative methods, including:\\
\textbf{Finetuning} learns an ordinary multi-class network without taking any measures to prevent catastrophic forgetting. It learns a multi-class classifier for new incoming classes by fine-tuning the previously learned multi-class classification network.\\
\textbf{Learning without Forgetting (LwF)} \cite{li2016learning} utilizes distillation loss to prevent catastrophic forgetting. But it does not use any method to recover the old data like iCaRL or ours. \\
\textbf{iCaRL} \cite{rebuffi2016icarl} keeps an exemplar set to store a small part of old data and uses distillation loss.
The results of our method include:\\
\textbf{Ours-Real} keeps a few exemplars like what iCaRL does but is trained with our loss function defined in Equation (\ref{eqloss}). \\
\textbf{Ours-GANs} uses GANs generated data to replace the exemplars in Ours-Real.
The GANs model is trained by using the data in the first part, and the data in the first part are not used anymore during the incremental learning.
We compare Ours-Real to iCaRL when using real exemplars, and compare Ours-GANs to LwF and Finetuning without using exemplars.
\subsubsection{Ours-Real vs iCaRL with Real Exemplars}
We compare Ours-Real to iCaRL \cite{rebuffi2016icarl}, the state of the art for incremental learning with real exemplars. The results for $P$=2 ($C$=50) are shown in Table \ref{tablecifaricarl}. We can see that Ours-Real obtains a 2.07\% gain in accuracy over iCaRL during incremental learning from 50 to 100 classes. This demonstrates the superiority of our method.
On the CIFAR-100 dataset, Ours-Real and iCaRL use different implementations of the 32-layer ResNet \cite{he2016deep}. When training a classifier on the training data of all 100 classes, our network achieves 69.09\% top-1 accuracy, slightly higher than iCaRL (68.6\% as reported in \cite{rebuffi2016icarl}). However, when training a classifier on the first 50 classes, our network obtains 75.22\% accuracy, which is lower than the 76.40\% of iCaRL. Given these two results, we consider the two implementations to be on par. Thus, the 2.07\% gain of Ours-Real over iCaRL comes from our loss function and unbalance removal.
We further compare iCaRL with Ours-Real on Flower-102 and MS-Celeb-1M-Base. Since results on these two datasets are not reported in \cite{rebuffi2016icarl}, we implemented their method ourselves. Results are shown in Table \ref{talcompare}.
On Flower-102, as suggested by \cite{rebuffi2016icarl}, the learning rate is initialized to 2 and decreased to 0.4 and 0.08 after 30 and 60 epochs, respectively. The total training epochs are 90. The size of the exemplar set is 255, which is the same as Ours-Real. From the results, we can see that Ours-Real outperforms iCaRL by 1\%.
On MS-Celeb-1M-Base, the training with the sigmoid cross entropy loss on the old data of 10K classes does not converge regardless of whether we train it from scratch or fine-tune from the model that is trained using Softmax loss.
This is probably because the sigmoid cross entropy loss does not enforce a strong discriminative boundary between classes.
Thus we adopt a two-stage fine-tuning strategy to make the binary cross entropy loss work on MS-Celeb-1M-Base. During the first stage, we fix the feature extractor and only fine-tune the classifier with the sigmoid cross entropy loss. After this stage, the top-1 accuracy on the validation set of the first 10K classes is 92\%. For the second stage, we fine-tune the whole network with a smaller learning rate for 40 epochs (training with larger learning rates at the second stage does not converge).
After the second stage, we achieve 97\% top-1 accuracy, which is still lower than the 99.34\% of the model trained with softmax. For the incremental learning with new data (another 10K classes), the convergence problem persists, so we adopt the same two-stage training strategy to make it converge. From the results, we can see that Ours-Real outperforms iCaRL by a large margin, which shows that our method is better than iCaRL at handling incremental learning with a large number of classes.
\begin{table}[!t]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Method & CIFAR-100 & Flower-102 & MS-Celeb\\
\hline\hline
iCaRL \cite{rebuffi2016icarl}&0.6132 & 0.9301 &0.9341 \\
Ours-Real & \textbf{0.6339} & \textbf{0.9402} &\textbf{0.9903} \\
\hline
\end{tabular}
\end{center}
\caption{Comparison of iCaRL and Ours-Real. Ours-Real outperforms iCaRL in all three datasets. On MS-Celeb-1M-Base that has 20,000 classes, Ours-Real is significantly better than iCaRL, which shows that our incremental learning method is more effective in handling a large number of classes.} \label{talcompare}\label{tablecifaricarl}
\end{table}
The experimental results on CIFAR-100 with $P=5\,(C=20)$, $10\,(C=10)$, and $20\,(C=5)$ are shown in Tables \ref{tableicarl20}, \ref{tableicarl10}, and \ref{tableicarl5}, respectively.
Our method outperforms iCaRL \cite{rebuffi2016icarl} for all incremental batches by a clear margin.
The softmax-based loss outperforms the sigmoid-based one by a large margin at the beginning. Although the margin becomes smaller after a few batches, our method still performs better across all batches.
The average gains of our method over iCaRL \cite{rebuffi2016icarl} are $2.36$\%, $2.03$\% and $2.97$\% for the splits of 20, 10 and 5 classes, respectively. These results illustrate that our method using cross entropy and bias removal is effective and robust.
\begin{table}[!t]
\begin{center}
\scalebox{1}{
\begin{tabular}{|l|c|c|c|c|c|}
\hline
& 20 & 40 & 60 & 80 & 100\\
\hline\hline
iCaRL \cite{rebuffi2016icarl}& 81.33& 72.19&{65.21}&{59.43} & {54.38}\\
Ours-Real & \textbf{81.55} &\textbf{74.45} & \textbf{67.82} & \textbf{61.86}& \textbf{56.53} \\
\hline
\end{tabular}
}
\end{center}
\caption{Results on CIFAR-100 with a batch of 20 classes.} \label{tableicarl20}
\end{table}
\begin{table}[!t]
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|}
\hline
& 10 & 20 & 30 & 40 & 50\\
\hline
iCaRL \cite{rebuffi2016icarl}& 86.17& 78.60 & 73.60 &67.06 & 63.99 \\
Ours-Real & \textbf{90.09}& \textbf{80.05} & \textbf{74.87}& \textbf{68.70} & \textbf{65.06} \\
\hline
\hline\hline
& 60 & 70 & 80 & 90 & 100\\
\hline
iCaRL \cite{rebuffi2016icarl}& 59.58 & 56.80 & 53.76 & 51.22 & 49.23 \\
Ours-Real & \textbf{61.70} & \textbf{59.13} & \textbf{56.03} & \textbf{53.16} & \textbf{51.38} \\
\hline
\end{tabular}
\end{center}
\caption{Results on CIFAR-100 with a batch of 10 classes.} \label{tableicarl10}
\end{table}
\begin{table}[!t]
\begin{center}
\scalebox{1}{
\begin{tabular}{|l|c|c|c|c|c|c|c|c|c|c|}
\hline
& 5 & 10 & 15 & 20 & 25\\
\hline
iCaRL \cite{rebuffi2016icarl}& 87.80 & 82.50 & 78.06 &74.21 & 70.96 \\
Ours-Real & \textbf{95.80}& \textbf{90.30} & \textbf{81.07}& \textbf{75.95} & \textbf{73.68} \\
\hline
\hline\hline
& 30 & 35 & 40 & 45 & 50\\
\hline
iCaRL \cite{rebuffi2016icarl}& 66.68& 64.16 & 62.18 &61.14 & 59.75 \\
Ours-Real & \textbf{71.80}& \textbf{68.12} & \textbf{65.45}& \textbf{62.89} & \textbf{61.90} \\
\hline
\hline\hline
& 55 & 60 & 65 & 70 & 75\\
\hline
iCaRL \cite{rebuffi2016icarl}& 58.09& 56.58 & 54.73 &52.62 & 51.62 \\
Ours-Real & \textbf{59.37}& \textbf{57.27} & \textbf{56.30}& \textbf{55.92} & \textbf{54.04} \\
\hline
\hline\hline
& 80 & 85 & 90 & 95 & 100\\
\hline
iCaRL \cite{rebuffi2016icarl}& 50.28 & 48.75 & 47.00 & 45.77 & 44.36 \\
Ours-Real & \textbf{52.63} & \textbf{49.75} & \textbf{49.68} & \textbf{48.16} & \textbf{46.91} \\
\hline
\end{tabular}
}
\end{center}
\caption{Results on CIFAR-100 with a batch of 5 classes.} \label{tableicarl5}
\end{table}
We compare the confusion within and across batches. Figure \ref{fig:confusion} shows the batch confusion on the split of 5, 10, 20 and 50 classes. The confusion is balanced inside and across batches overall.
\begin{figure*}[!t]
\begin{center}
\includegraphics[width=1\linewidth]{confusionsAll.pdf}
\end{center}
\caption{Confusion matrix across batches for (a) the split of 5 classes, (b) the split of 10 classes, (c) the split of 20 classes and (d) the split of 50 classes.}
\label{fig:long}
\label{fig:confusion}
\end{figure*}
Another experiment is designed to compare our model with iCaRL \cite{rebuffi2016icarl} in the extreme case, i.e., using all the data of the old 50 classes as exemplars. As shown in Table \ref{tableicarlalldata}, iCaRL and our model have similar performance when training a classifier in batch mode to recognize the 100 classes.
In the incremental learning setting, our method achieves the same performance as batch training, since the loss function for batch training is a special case of our method with $\lambda$ set to 0. The incremental learning result of our method is actually slightly better, because the model trained on the first 50 classes is used as the initialization. The results of iCaRL cannot reach the upper bound of its own batch training, because the loss function used by iCaRL during incremental learning differs from that of its batch-trained classifier. These results illustrate that our loss function (cross-entropy loss with softmax) is more effective than the binary cross-entropy loss with a sigmoid activation function used in iCaRL.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
&Method & Top-1 accuracy \\
\hline
\multirow{ 2}{*}{Batch training} &iCaRL \cite{rebuffi2016icarl}& 0.6860\\
&Ours-Real& 0.6909\\
\hline
\multirow{ 2}{*}{Incremental learning}&iCaRL \cite{rebuffi2016icarl}& 0.6687 \\
&Ours-Real& 0.7004 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison between our method and iCaRL with all data available on CIFAR-100. Two methods achieve similar performance when using all the 100 classes at the batch training. Ours-Real outperforms iCaRL in the incremental learning setting.} \label{tableicarlalldata}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
Method & CIFAR-100&Flower-102 & MS-Celeb \\
\hline\hline
Finetuning & 0.3950 & 0.4912 & 0.4901 \\
LwF \cite{li2016learning} &0.5250 & 0.7461&0.9848 \\
Ours-GANs & \textbf{0.5490} & \textbf{0.9000} & \textbf{0.9891}\\
\hline\hline
Ours-Real & {0.6339}& {0.9402} &{0.9903}\\
\hline
\end{tabular}
\end{center}
\caption{Top 1 accuracy on CIFAR-100, Flower-102 and MS-Celeb datasets without using exemplars. Ours-GANs outperform LwF in all datasets.
Training with all data achieves top-1 accuracy 0.6909 on CIFAR-100, 0.9750 on Flower-102 and 0.9933 on MS-Celeb-1M-Base.
Results of Ours-Real are also given in the last row for reference. } \label{tableflowerface}
\end{table}
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{gan-cifar-flower-face.pdf}
\end{center}
\caption{Illustration of selected GANs images and real images of the corresponding classes. We select three classes from each dataset and manually pair similar images between the sampled GANs data and the real images. Five pairs per class are presented. Although the quality of the GANs images is much lower than that of the real images, some shapes and colors look similar between the real and GANs images. }
\label{figgandata}
\end{figure*}
\subsubsection{Ours-GANs vs LwF without Real Exemplars}
We compare Ours-GANs to LwF \cite{li2016learning}, the state of the art for incremental learning without any real exemplars. As shown in Table \ref{tableflowerface}, Ours-GANs outperforms LwF \cite{li2016learning} on all three datasets.
In particular, on Flower-102, Ours-GANs is 15.39\% better than LwF, which is a huge margin. This shows the effectiveness of our method and demonstrates that distilling with the new data alone cannot guarantee that the new classifier gives predictions similar to the old classifier on the old data.
Compared with using real images, the performance of using GANs drops from 63.39\% to 54.90\% on CIFAR-100. This is due to the difficulty of training GANs on the CIFAR-100 dataset. In Figure \ref{figgandata}, we can see that the GANs generated images have poor quality. However, the generated images are still useful, since they provide better results (54.90\%) than LwF (52.50\%). We believe that as GANs continue to improve, the gap between using real images and GANs will shrink.
To visually verify the correlation between GANs generated data and real exemplars, we compare real images and GANs generated images for the three datasets in Figure \ref{figgandata}. For each dataset, we select 3 classes with 5 real and 5 generated images per class. Clearly, the GANs generated images are correlated with the real images, which validates the idea of using GANs to represent the old training data. However, we also observe a clear difference between real and generated images, which explains the performance drop from using real exemplars to using GANs.
\subsection{Model Analysis}
In this subsection, we analyze the sensitivity of our loss function to (a) the exemplar selection, (b) the scalar $\lambda$ that balances the cross entropy loss and the distillation loss in Equation (\ref{eqloss}), and (c) the bias scalar $\beta$ in Equation (\ref{eqbeta}). The analysis is performed on the CIFAR-100 dataset with an increment from 50 to 100 classes. We also discuss how to sample the GANs generated data.
\subsubsection{Sensitivity to Exemplar-Selection}
We evaluate our method with two different strategies to select real exemplars: (a) the exemplar management strategy proposed by iCaRL \cite{rebuffi2016icarl}, and (b) random selection. We keep the size of the exemplar set to 2,000. Random selection is run 5 times and an average result is reported. Results are shown in Table \ref{tableexemplar}. Compared with iCaRL exemplar management, randomly selected samples can achieve similar performance, which demonstrates that our loss function is not sensitive to the exemplar-selection.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
&Top-1 accuracy \\
\hline
iCaRL Exemplars \cite{rebuffi2016icarl}& 0.6339\\
\hline
Random Selection & 0.6361 \\
\hline
\end{tabular}
\end{center}
\caption{Comparison between iCaRL exemplar management strategy and random selection.}
\label{tableexemplar}
\end{table}
\subsubsection{Loss Scalar $\lambda$ }
The parameter $\lambda$ in Equation (\ref{eqloss}) controls the balance between the distillation loss and the cross-entropy loss. The value is set by grid search on a validation set.
Note that our validation data is split from the training set, not the test set.
We first investigate $\lambda$ with real data. In Table \ref{tablelambdareal}, we observe that the results are stable when varying $\lambda$ from 0.3 to 0.7. Thus we set $\lambda$ to 0.5; this setting is used for all the experiments with real data.
The results show that the balance between the distilling loss and the cross entropy loss is important, since it provides the optimal trade-off between the old classifier and new data.
Note that when $\lambda$ is equal to 0.0, our loss function is equivalent to the cross-entropy loss. Using cross-entropy loss alone struggles with the data unbalance problem as it does not leverage the good performance of the old classifier. Using the distilling loss alone ($\lambda=$1.0), is not a good choice either since the labels of neither old nor new data are used.
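The interpolation between the two losses can be sketched as follows. This is a hedged Python sketch, not the paper's exact implementation of Equation (\ref{eqloss}): the distillation temperature `T` and the convex-combination form are our assumptions.

```python
import numpy as np

def softmax(z, T=1.0):
    # numerically stable softmax with temperature T
    e = np.exp(z / T - (z / T).max(axis=1, keepdims=True))
    return e / e.sum(axis=1, keepdims=True)

def combined_loss(logits, labels, old_logits, lam, T=2.0):
    """(1 - lam) * cross-entropy + lam * distillation (a sketch of Eq. (eqloss)).

    logits:     (N, n_old + n_new) outputs of the new classifier.
    old_logits: (N, n_old) outputs of the old classifier on the same inputs.
    """
    p = softmax(logits)
    ce = -np.log(p[np.arange(len(labels)), labels]).mean()   # cross-entropy
    q_old = softmax(old_logits, T)                           # soft targets
    q_new = softmax(logits[:, :old_logits.shape[1]], T)      # old-class slice
    distill = -(q_old * np.log(q_new)).sum(axis=1).mean()    # distillation
    return (1 - lam) * ce + lam * distill
```

At $\lambda=0$ the old classifier's outputs are ignored (pure cross-entropy); at $\lambda=1$ the labels are ignored (pure distillation), matching the two limiting cases discussed above.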
Estimating $\lambda$ when using GANs generated data is difficult due to the apparent differences between the GANs generated images and the real images.
Intuitively, GANs data require a larger $\lambda$ than real images, since the outputs of the old classifier are more reliable than the pseudo labels. We empirically set $\lambda$ to 0.9 in all the experiments with GANs data.
\begin{table}
\begin{center}
\begin{tabular}{|l|c|c|c|c|c|}
\hline
$\lambda$ & Validation & Test \\
\hline
1.0 & 0.482& 0.3703\\
0.9 & 0.536& 0.5055 \\
0.7 & 0.560 & 0.5576\\
0.5 & \textbf{0.562} & 0.566\\
0.3 & 0.558 & \textbf{0.5714} \\
0.1 & 0.552 & 0.5546\\
0.0 & 0.532 & 0.5352\\
\hline
\end{tabular}
\end{center}
\caption{Effect of $\lambda$ on the incremental learning performance when the exemplars are selected from the real old data set. } \label{tablelambdareal}
\end{table}
\begin{table}
\begin{center}
\begin{tabular}{|c|c|c||c|c|c|}
\hline
Old data & \multicolumn{2}{|c||}{Exemplars} & \multicolumn{2}{|c|}{GANs}\\
\hline
$\beta$ & Validation & Test & Validation & Test \\
\hline
0.0 & 0.4240 & 0.3594 & 0.4970 & 0.3740\\
0.1 & 0.4240& 0.3594 & 0.5000& 0.3742\\
0.2 & 0.4260& 0.3611 & 0.5080& 0.3848\\
0.3 & 0.4420 & 0.3836& 0.5340 & 0.4130\\
0.4 & 0.5040 & 0.4483& 0.5960 & 0.4574\\
0.5 & 0.5680 & 0.5286& 0.6510 & 0.5042\\
0.6 & 0.6400 & 0.6000& 0.7060 & 0.5472\\
0.7 & \textbf{0.6720} & \textbf{0.6339}& 0.7400 & \textbf{0.5628}\\
0.8 & 0.6580 & 0.6319 & \textbf{0.7570} & 0.549\\
0.9 & 0.6240 & 0.6064& 0.7490 & 0.5166\\
1.0 & 0.5740 & 0.5667& 0.7140 & 0.4720\\
\hline
\end{tabular}
\end{center}
\caption{
Incremental learning performance over $\beta$ on exemplars of real images and GANs generated data. The corresponding accuracy-$\beta$ curve is shown in Figure \ref{figexemplargancurve}.
} \label{tablebetarealReal}
\end{table}
\begin{figure}
\begin{center}
\includegraphics[width=0.9\linewidth]{betaselection.pdf}
\end{center}
\caption{ Incremental learning accuracy-$\beta$ curve. We can observe a correlation between the validation set and the test set with both real data and GANs generated data. }
\label{figexemplargancurve}
\end{figure}
\begin{figure*}
\begin{center}
\includegraphics[width=\linewidth]{ganvalfigV2.pdf}
\end{center}
\caption{Confusion matrix with different $\beta$ on the validation set with real data (a) (b) (c) (d) and GANs data (e) (f) (g) (h). We can observe that the bias moves gradually between the old 50 classes and the new 50 classes with different $\beta$. A proper $\beta$ can alleviate the bias in the classifier.}
\label{figganval}
\end{figure*}
\subsubsection{Bias Scalar $\beta$}
The parameter $\beta$ in Equation (\ref{eqbeta}) is a multiplier applied to the outputs of the new classes, which controls the balance between old and new classes. It can be estimated on the same validation set used for estimating $\lambda$ in the previous subsection.
We first analyze the effect of $\beta$ on real image exemplars.
We randomly select 5 samples per class from the training dataset for validation and keep the rest for training.
For the first 50 classes, the 5 samples per class are selected from the 40 exemplars per class. For the newly arrived 50 classes, the 5 samples are selected from the training samples.
Results on the validation set are shown in Table \ref{tablebetarealReal}, with $\beta$ varying from 0.0 to 1.0 in increments of 0.1. The accuracy-$\beta$ curve is shown in Figure \ref{figexemplargancurve}. We find that $\beta=0.7$ performs best, with 9.8\% and 6.7\% gains on the validation set and the test set, respectively. This shows the effectiveness of our bias removal.
Compared to iCaRL, even though we use less training data because part of the training set is held out for validation, our bias-removed classifier outperforms iCaRL by 2.07\% in accuracy, as shown in Table \ref{talcompare}.
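The bias removal and the grid search for $\beta$ can be sketched as follows (a Python sketch under our assumptions: we scale the new-class outputs by $\beta$ at prediction time and pick the $\beta$ with the best validation accuracy; function names are ours):

```python
import numpy as np

def debiased_predict(logits, n_old, beta):
    """Multiply the outputs of the new classes by beta (sketch of Eq. (eqbeta))."""
    out = logits.copy()
    out[:, n_old:] *= beta        # suppress the bias towards new classes
    return out.argmax(axis=1)

def search_beta(val_logits, val_labels, n_old, grid=np.arange(0.0, 1.01, 0.1)):
    """Grid-search beta on a held-out validation set."""
    accs = [(beta, (debiased_predict(val_logits, n_old, beta) == val_labels).mean())
            for beta in grid]
    return max(accs, key=lambda t: t[1])[0]   # beta with best validation accuracy
```

The selected $\beta$ is then applied unchanged at test time.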
We also analyze the selection of $\beta$ with GANs data.
Similar to using the real exemplars, we select 10 samples from every class for validation.
Results on the validation set and test set are shown in Table \ref{tablebetarealReal}.
Using the best estimate of the bias, $\beta=0.8$, achieves 54.90\% top-1 accuracy, which is 7.7\% higher than the classifier without bias removal. We also observe that the validation and test sets are not perfectly aligned on the GANs generated data, i.e. a smaller bias $\beta=0.7$ achieves the best result of 56.28\% on the test set. This is because the GANs generated data do not follow the same distribution as the test set. Nevertheless, the bias estimated on the validation set is reasonable, with only a 1.38\% drop in performance.
To further illustrate the bias problem in the incremental learning, we plot the confusion matrix with different $\beta$ on the validation set for both using exemplars and using GANs generated data in Figure \ref{figganval}.
We observe the same trend on both real exemplars and GANs generated data. When $\beta=1$, the bias is clearly on the new classes. As $\beta$ decreases, the bias gradually moves towards the old classes. We also observe a difference between using real exemplars and GANs generated data - the confusion between the old 50 classes is much smaller on the GANs generated data. This is due to a classic drawback of GANs: generated data have smaller variations than the real data.
\subsubsection{ GANs Data Sampling}
As described in section \ref{secsampling}, we utilize the threshold $\theta$ to select good images. If the number of images with higher scores than $\theta$ is large,
we select the top-$K$ images. $\theta$ and $K$ are set empirically.
On CIFAR-100, $\theta=0.95, K=50$. On Flower-102, $\theta=0.3, K=60$. On MS-Celeb-1M-Base, $\theta=0.3, K=10$.
The $\theta$ on CIFAR-100 is very different from that on Flower-102 and MS-Celeb-1M-Base because the CIFAR-100 classifier is not as accurate as the other two.
\section{Conclusions and Future Work}
In this paper, we address three issues in incremental classifier learning: (a) the inefficiency of the loss function used to integrate the old classifier and the new data, (b) the unbalance between old and new classes, and (c) the scalability and privacy problems of storing exemplars.
First, we propose a new loss function that combines the distillation loss and the cross-entropy loss over all old and new classes. Second, we find that the unbalance between old and new classes can be compensated by a multiplier on the predictions of the new classes; this unbalance estimate is very stable across the validation set, the test set, and even data generated using GANs. Finally, we propose using GANs to generate the past data. Our method achieves excellent results, outperforming the state of the art on the three datasets CIFAR-100, Flower-102 and MS-Celeb-1M-Base by a large margin.
In the future, we plan to investigate how to improve the GANs generator to narrow the gap between GANs data and real images.
\balance
{\small
\bibliographystyle{ieee}
}
\section{Introduction}
Topological quantum computation has recently emerged as one of the most intriguing paradigms
for the storage and manipulation of quantum information~\cite{Nayak_2008, Pachos_2012}.
The defining features of topological order, namely the existence of degenerate ground states which
(i) share the same thermodynamic properties and (ii) can only be distinguished by a global measurement,
portend for a true many-body protection of quantum information.
Additionally, the non-Abelian anyons which typically appear in these models are crucial
for the active manipulation of the information, to be accomplished through
their adiabatic braiding~\cite{Kitaev_2003, Dennis_2002}.
Among the several systems featuring topological order,
free p-wave superconducting systems
with symmetry protected topological properties
have lately attracted a significant amount of attention~\cite{Alicea_2012, Beenakker_2013, DasSarma_2015}.
On the one hand, they are exactly-solvable fermionic models which help building
a clear physical intuition of some aspects of topological order~\cite{Kitaev_2001, Kitaev_2006}.
On the other one, they are physically relevant, and several articles have recently reported
experimental evidences to be linked to p-wave-like superconductors featuring zero-energy
Majorana modes~\cite{Mourik_2012, Das_2012, Churchill_2013, Finck_2013, NadjPerge_2014}.
Whereas up to now these experimental results have been obtained in solid-state setups,
it is natural to ask whether such physics might as well be observed in cold
atomic gases~\cite{Radzihovsky_2007}, which owing to their well-controlled microscopic physics should allow
for a more thorough understanding of these peculiar phases of matter.
Important theoretical efforts have thus proposed a variety of schemes which exploit
in different ways several properties of such
setups~\cite{Sato_2009, DasSarma_2011, Jiang_2011, Diehl_2011, Nascimbene_2013, Kraus_2013, Buhler_2014}.
Among these ideas, that of a dissipative preparation of interesting many-body quantum states~\cite{Diehl_2008, Verstraete_2009}
is particularly appealing: rather than suffering from some unavoidable open-system dynamics,
such as three-body losses or spontaneous emission, one tries to take advantage of it
(see Refs.~\cite{Roncaglia_2010, Diehl_2011, Bardyn_2013, Budich_2015,Weimer_2010}
for the case
of states with topological order, such as p-wave superconductors).
The key point is the engineering of an environment that in the long-time limit drives
the system into the desired quantum state. This approach has the remarkable advantage of being
a workaround to the ultra-low temperatures necessary for the observation of important
quantum phenomena which constitute a particularly severe obstacle in fermionic systems.
The hope is thus that the mentioned ``non-equilibrium cooling'' may open the path towards the experimental investigation of currently unattainable states, e.g.~those characterized by p-wave superconductivity.
In this article we discuss the dissipative engineering of a p-wave superconductor
with a fixed number of particles, an important constraint in cold-atom experiments.
We consider two different setups:
(i) A single quantum wire, introduced in Ref.~\cite{Diehl_2011};
this system displays the typical features of a p-wave superconductor but it is not topological in its number conserving variant.
(ii) A two-leg ladder~\cite{Fidkowski_2011, Sau_2011, Cheng_2011, Kraus_2013, Ortiz_2014, Iemini_2015, Lang_2015}, supporting a dissipative dynamics which entails a two-dimensional steady state space characterized by p-wave superconducting order with boundary Majorana modes for every fixed particle number.
We identify the p-wave superconducting nature of the steady states by studying the proper correlators, which saturate to a finite value in the long distance limit.
Their topological properties are best discussed using a mathematical connection between dark states of the Markovian dynamics and ground states of a suitable parent Hamiltonian.
In both setups we demonstrate that the dissipative gap closes at least polynomially in the system size and thus that the typical decay time to the steady state diverges in the thermodynamic limit.
This contrasts with the case where number conservation is not enforced, for which the decay time is typically finite in the thermodynamic limit~\cite{Diehl_2011, Bardyn_2013}. The divergence reflects the presence of dynamical slow modes related to particle-number conservation~\cite{Cai_2013,Hohenberg_1977}, which also exist in non-equilibrium systems (see also Refs.~\cite{Kastoryano_2012,Buchhold_2015,Caspar_2015}).
Our exact analytical findings are complemented by a numerical study based on a matrix-product-operator
representation of the density matrix~\cite{Verstraete_2004, Zwolak_2004},
one of the techniques for open quantum systems which are recently attracting an increasing
attention~\cite{Orus_2008, Prosen_2009, Hartmann_2009, Daley_2014, Cui_2015, Biella_2015, Finazzi_2015, Mascarenhas_2015, Werner_2015}.
These methods are employed to test the robustness of these setups to perturbations, which is thoroughly discussed.
The article is organized as follows: in Sec.~\ref{sec:I} we review the key facts behind the idea
of dissipative state preparation using the dark states of a many-body problem, and exemplify them
by recalling the problem studied in Ref.~\cite{Diehl_2011}. A simple criterion signalling the divergence of the decay time with the system size is also introduced.
In Sec.~\ref{sec:N:1} we present the exact analytical study of the single-wire protocol,
and in Sec.~\ref{sec:Numerics1} a numerical analysis complements the previous discussion with the characterization of the robustness to perturbations of these setups.
In Sec.~\ref{sec:N:2} we discuss the protocol based on the ladder geometry.
Finally, in Sec.~\ref{sec:conc} we present our conclusions.
\section{Dissipative state preparation of Majorana fermions: known facts}
\label{sec:I}
\subsection{Dark states and parent Hamiltonian of Markovian dynamics} \label{sec:I:A}
The dissipative dynamics considered in this article is Markovian and, in the absence
of a coherent part, can be cast in the following Lindblad form:
\begin{equation}
\frac{\partial}{\partial t}\hat \rho =
\mathcal L[\hat \rho] =
\sum_{j=1}^{m} \left[ \hat L_j \hat \rho \hat L_j^\dagger - \frac 12 \{ \hat L_j^\dagger \hat L_j, \hat \rho \} \right],
\label{eq:master:equation}
\end{equation}
where $\mathcal L$ is the so-called Lindbladian super-operator and the $\hat L_j$ are the (local)
Lindblad operators.
We now discuss a fact which will be extensively used in the following.
Let us assume that a pure state $\ket{\Psi}$ exists, with the property:
\begin{equation}
\hat L_j \ket{\Psi} = 0; \quad \forall \, j = 1,\ldots,m.
\label{eq:dark:state}
\end{equation}
A simple inspection of Eq.~\eqref{eq:master:equation} shows that $\ket{\Psi}$
is a steady state of the dynamics, and it is usually referred to as \textit{dark state}.
Although the existence of a state satisfying Eq.~\eqref{eq:dark:state} is usually not guaranteed,
in this article we will mainly consider master equations which enjoy this property.
A remarkable feature of dark states is that they can be searched through the minimization
of a \textit{parent Hamiltonian}. Let us first observe that Eq.~\eqref{eq:dark:state} implies that
$\bra {\Psi} \hat L_j^\dagger \hat L_j \ket{\Psi} = 0$ and since every operator
$\hat L_j^\dagger \hat L_j$ is positive semi-definite, $\ket {\Psi}$ minimizes it.
Consequently, $\ket{\Psi}$ is a ground state of the parent Hamiltonian:
\begin{equation}
\hat {\mathcal H}_{p} = \sum_{j=1}^m \hat L^\dagger_j \hat L_j.
\label{eq:parent:Hamiltonian}
\end{equation}
Conversely, every zero-energy ground state $\ket{\Phi}$ of Hamiltonian~\eqref{eq:parent:Hamiltonian}
is a steady state of the dynamics~\eqref{eq:master:equation}.
Indeed, $\hat {\mathcal H}_p \ket {\Phi} = 0$ implies that
$\bra {\Phi} \hat L_j^\dagger \hat L_j \ket{\Phi} = 0$ for all $j =1,\ldots,m$.
The last relation means that the norm of the states $\hat L_j \ket{\Phi}$ is zero,
and thus that the states themselves are zero: $\hat L_j \ket{\Phi}=0$.
As we have already shown, this is sufficient to imply that $\ket {\Phi}$ is a steady-state of the dynamics.
In order to quantify the typical time-scale of the convergence to the steady state,
it is customary to consider the right eigenvalues of the super-operator $\mathcal L$,
which are defined through the secular equation $\mathcal L [\hat \theta_\lambda] = \lambda \hat \theta_{\lambda}$.
The asymptotic decay rate for a finite system is defined as
\begin{equation}
\lambda_{\rm ADR} = \inf_{\substack{\lambda \text{ is eigenvalue of } \mathcal L\\ \Re(\lambda) \neq 0}} \{ -\Re (\lambda)\}.
\label{eq:ADR}
\end{equation}
The minus sign in the previous equation follows from the fact that the real parts of the eigenvalues
of a Lindbladian super-operator satisfy the inequality $\Re(\lambda) \leq 0$.
Remarkably, for every eigenvalue $\xi$ of $\hat {\mathcal H}_p$ there is an eigenvalue
$\lambda = - \xi/2$ of $\mathcal L$ which is at least two-fold degenerate.
Indeed, given the state $\ket {\psi_\xi}$ such that $\hat {\mathcal H}_p \ket{\psi_\xi} = \xi \ket{\psi_\xi}$,
the operators made up of the dark state $\ket{\Psi}$ and of $\ket{\psi_\xi}$
\begin{equation}
\hat \theta_{- \xi/2}^{(1)} = \ket{\Psi} \hspace{-0.05cm} \bra{\psi_\xi},
\quad
\hat \theta_{- \xi/2}^{(2)} = \ket{\psi_\xi} \hspace{-0.05cm} \bra{\Psi}
\end{equation}
satisfy the appropriate secular equation.
This has an important consequence: if $\hat {\mathcal H}_p$ is gapless,
then $\lambda_{\rm ADR} \xrightarrow{L \to \infty} 0$ in the thermodynamic limit, where $L$ is the
size of the system. Indeed:
\begin{equation}
0 < \lambda_{\rm ADR} \leq \frac \xi2,
\end{equation}
for every eigenvalue $\xi$ of $\hat {\mathcal H}_p$; if $\xi$ closes as $L^{-\alpha}$ ($\alpha>0$),
then the dissipative gap closes at least polynomially in the system size.
Note that this argument also implies that if $\mathcal L$ is gapped,
then the parent Hamiltonian is gapped as well.
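The spectral relation $\lambda = -\xi/2$ can be verified directly on the same decaying-qubit toy model: the parent Hamiltonian has the single nonzero eigenvalue $\xi = 1$, and the Lindbladian spectrum indeed contains the two-fold degenerate eigenvalue $-\xi/2$. This is a minimal sketch; the column-stacking vectorization is an implementation choice:

```python
import numpy as np

# Decaying qubit: L = sigma_minus, parent Hamiltonian H_p = L^dag L
# with excited-state energy xi = 1.
L = np.array([[0.0, 1.0],
              [0.0, 0.0]])
Hp = L.conj().T @ L
I = np.eye(2)

# Lindbladian as a matrix acting on vec(rho) (column stacking,
# using vec(A X B) = (B^T kron A) vec(X)), with gamma = 1.
Lind = (np.kron(L.conj(), L)
        - 0.5 * np.kron(I, Hp)
        - 0.5 * np.kron(Hp.T, I))

lams = np.sort(np.linalg.eigvals(Lind).real)
print(lams)  # [-1.  -0.5 -0.5  0. ]: -xi/2 = -1/2 appears twice
assert np.allclose(lams, [-1.0, -0.5, -0.5, 0.0])
```

The two $-1/2$ eigenvalues correspond precisely to the operators $\ket{0}\!\bra{1}$ and $\ket{1}\!\bra{0}$ built from the dark state and the excited state.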
It is important to stress that the spectral properties of the parent Hamiltonian $\hat {\mathcal H}_p$
do not contain all the information concerning the long-time dissipative dynamics.
As an example, let us assume that the Markovian dynamics in Eq.~\eqref{eq:master:equation}
(i) supports at least one dark state and (ii) has an associated parent Hamiltonian which is gapped.
If the Lindblad operators are Hermitian, then the fully-mixed state is a steady state
of the master equation too. The presence of such stationary state is not signaled
by the parent Hamiltonian, which is gapped and only detects the pure steady states of the dynamics.
Whereas some of the above relations have often been pointed out in the
literature~\cite{Diehl_2008, Verstraete_2009}, to the best of our knowledge the remarks
on the relation between the spectral properties of $\mathcal L$ and $\hat {\mathcal H}_p$ are original.
\subsection{The Kitaev chain and the dissipative preparation of its ground states} \label{sec:I:B}
Let us now briefly review the results in Ref.~\cite{Diehl_2011} and use them to exemplify
how property~\eqref{eq:dark:state} can be used as a guideline for dissipative state preparation in the number non-conserving case. This will be valuable for our detailed studies of its number-conserving variant below.
The simplest model displaying zero-energy unpaired Majorana modes is the one-dimensional
Kitaev model at the so-called ``sweet point''~\cite{Kitaev_2001}:
\begin{equation}
\hat {\mathcal H}_{\rm K} =
-J \sum_j \left[ \hat a^\dagger_j \hat a_{j+1} + \hat a_j \hat a_{j+1} + \mathrm{H.c.} \right],
\quad J>0,
\end{equation}
where the fermionic operators $\hat a_j^{(\dagger)}$ satisfy canonical anticommutation relations
and describe the annihilation (creation) of a spinless fermion at site $j$.
The model can be solved with the Bogoliubov-de-Gennes transformation, and, when considered
on a chain of length $L$ with open boundaries, it takes the form:
\begin{equation}
\hat {\mathcal H}_{\rm K} = E_0
+ \frac J2 \sum_{j=1}^{L-1} \hat {\ell}_j^\dagger \hat \ell_j \, ,
\label{eq:Kitaev:Hamiltonian}
\end{equation}
with
\begin{eqnarray}
\hat \ell_j &=& \hat C_j^\dagger + \hat A_j, \\
\hat C^\dag_j &=& \hat a^\dag_j+\hat a^\dag_{j+1}, \quad \hat A_j = \hat a_j - \hat a_{j+1}.
\end{eqnarray}
The ground state has energy $E_0$ and is two-fold degenerate: there are two linearly
independent states $\ket {\psi_e}$ and $\ket {\psi_o}$ which satisfy:
\begin{equation}
\hat \ell_j \ket{\psi_\sigma}=0; \quad
\forall \, j = 1,\ldots, L-1;
\qquad \sigma =e,o.
\label{eq:key:property}
\end{equation}
The quantum number distinguishing the two states is the parity of the number of fermions,
$\hat P = (-1)^{\sum \hat a_j^\dagger \hat a_j}$, which is a symmetry of the model
(the subscripts $e$ and $o$ stand for \textit{even} and \textit{odd}).
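This structure can be checked by brute force for a short chain by building the model in the full Fock space through a Jordan-Wigner representation. The sketch below, with system size and $J$ chosen arbitrarily, verifies the decomposition in Eq.~\eqref{eq:Kitaev:Hamiltonian} with $E_0 = -J(L-1)$ and the two-fold degenerate dark ground space:

```python
import numpy as np
from functools import reduce

def fermion_ops(L):
    """Jordan-Wigner matrices for the annihilation operators a_j on L sites."""
    a = np.array([[0.0, 1.0], [0.0, 0.0]])   # single-mode annihilation
    Z = np.diag([1.0, -1.0])                 # JW string factor
    I = np.eye(2)
    return [reduce(np.kron, [Z] * j + [a] + [I] * (L - j - 1)) for j in range(L)]

L_sites, J = 4, 1.0
a = fermion_ops(L_sites)
dim = 2 ** L_sites

# Kitaev Hamiltonian at the sweet point, hard-wall boundaries.
H_K = sum(-J * (a[j].conj().T @ a[j + 1] + a[j] @ a[j + 1])
          for j in range(L_sites - 1))
H_K = H_K + H_K.conj().T

# Quasiparticle operators ell_j = C_j^dag + A_j.
ell = [a[j].conj().T + a[j + 1].conj().T + a[j] - a[j + 1]
       for j in range(L_sites - 1)]

# Decomposition H_K = E_0 + (J/2) sum_j ell_j^dag ell_j, with E_0 = -J(L-1).
H_sum = (J / 2) * sum(l.conj().T @ l for l in ell) - J * (L_sites - 1) * np.eye(dim)
assert np.allclose(H_K, H_sum)

# The ground space is two-fold degenerate and annihilated by every ell_j.
evals, evecs = np.linalg.eigh(H_K)
ground = evecs[:, np.abs(evals - evals[0]) < 1e-10]
print(ground.shape[1])                       # 2
assert all(np.linalg.norm(l @ ground) < 1e-10 for l in ell)
```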
Both states $\ket{\psi_{\sigma}}$ are p-wave superconductors, as can be explicitly proven
by computing the expectation value of the corresponding order parameter:
\begin{equation}
\bra{\psi_\sigma} \hat a_j \hat a_{j+1} \ket{\psi_\sigma}
\xrightarrow{L \rightarrow \infty} \frac{1}{4}.
\label{eq:order:par}
\end{equation}
It is thus relevant to develop a master equation which features $\ket{\psi_e}$ and $\ket {\psi_o}$
as steady states of the dynamics~\cite{Diehl_2011, Bardyn_2013}.
Property~\eqref{eq:key:property} provides the catch: upon identification of the $\hat \ell_j$ operators
with the Lindblad operators of a Markovian dynamics, Eq.~\eqref{eq:dark:state} ensures that
the states $\ket{\psi_\sigma}$ are steady states of the dynamics and that in the long-time limit
the system evolves into a subspace described in terms of p-wave superconducting states.
This becomes particularly clear once it is noticed that the parent Hamiltonian of this Markov process
coincides with $\hat {\mathcal H}_{\rm K}$ in Eq.~\eqref{eq:Kitaev:Hamiltonian} apart from an additive constant.
Let us conclude by mentioning that the obtained dynamics satisfies an important physical requirement,
namely \textit{locality}.
The Lindblad operators $\hat \ell_j$ only act on two neighboring fermionic modes;
this fact makes the dynamics both physical and experimentally feasible.
On the other hand, they do not conserve the number of particles,
thus making their engineering quite challenging with cold-atom experiments.
The goal of this article is to provide dissipative schemes with Lindblad operators
which commute with the number operator and feature the typical properties of a p-wave superconductor.
\section{Single wire: Analytical results}
\label{sec:N:1}
The simplest way to generalize the previous results to systems where the number of particles is conserved
is to consider the master equation induced by the Lindblad operators~\cite{Diehl_2011, Bardyn_2013}:
\begin{equation}
\hat L'_j = \hat C_j^\dagger \hat A_j,
\quad \forall \, j =1,\ldots,L-1,
\label{eq:Lind:N:1}
\end{equation}
for a chain with hard-wall boundaries and spinless fermions:
\begin{equation}
\frac{\partial}{\partial t}\hat \rho =
\mathcal L'[\hat \rho] = \gamma
\sum_{j=1}^{L-1} \left[ \hat L_j' \hat \rho \hat L_j'^\dagger - \frac 12 \{ \hat L_j'^\dagger \hat L'_j, \hat \rho \} \right];
\quad \gamma > 0;
\label{eq:master:equation:N:1}
\end{equation}
where $\gamma$ is the damping rate.
This Markovian dynamics has already been considered in Refs.~\cite{Diehl_2011, Bardyn_2013}.
Using the results presented in Ref.~\cite{Iemini_2015}, where the parent Hamiltonian related to the dynamics in Eq.~\eqref{eq:master:equation:N:1} is considered, it is possible to conclude that for a chain with periodic boundary conditions (i) there is a unique dark state for every particle number density $\nu = N/L$, and (ii) this state is a p-wave superconductor.
A remarkable point is that the $\hat L'_j$ are local and do not change the number of particles:
their experimental engineering is discussed in Ref.~\cite{Diehl_2011}, see also \cite{MullerReview}.
Here we clarify that, for the master equation of a single wire with hard-wall boundaries, the steady states are not topological and do not feature Majorana edge physics, although they still display the bulk properties of a p-wave superconductor (instead, the two-wire version studied below \textit{has} topological properties associated to dissipative Majorana zero modes).
The asymptotic decay rate of the master equation is also characterized.
An extensive numerical study of the stability of this protocol is postponed to Sec.~\ref{sec:Numerics1}.
\subsection{Steady states}
In order to characterize the stationary states of the dynamics,
let us first observe that Eq.~\eqref{eq:key:property} implies~\cite{Iemini_2015}
\begin{equation}
\hat C^\dagger_j \ket{\psi_\sigma} = - \hat A_j \ket{\psi_\sigma},
\end{equation}
so that:
\begin{equation}
\hat L'_j \ket {\psi_\sigma} =
\hat C_j^\dagger \hat A_j \ket{\psi_\sigma} =
- \hat C_j^\dagger \hat C_j^\dagger \ket{\psi_\sigma} = 0.
\end{equation}
Thus, $\ket{\psi_\sigma}$ are steady states of the dynamics.
Let us define the states
\begin{equation}
\ket{\psi_N} = \hat \Pi_N \ket{\psi_\sigma},
\label{eq:proj:N:stst}
\end{equation}
where $\hat \Pi_N$ is the projector onto the subspace of the global Hilbert (Fock) space
with $N$ fermions ($\hat \Pi_N \ket{\psi_\sigma} = 0$ when the parity of $N$ differs from $\sigma$
and thus we avoid the redundant notation $\ket{\psi_{\sigma,N}}$).
Since $[\hat L'_j, \hat N]=0$, where $\hat N = \sum_j\hat a^\dagger_j \hat a_j$ is the particle-number
operator, it holds that $\hat L'_j \ket{\psi_N}=0$ for all $j = 1, \ldots, L-1$
and thus the $\ket{\psi_N}$ are dark states.
Let us show that there is only one dark state $\ket{\psi_N}$ once the value of $N$ is fixed.
To this end, we consider the parent Hamiltonian~\eqref{eq:parent:Hamiltonian} associated
to the Lindblad operators~\eqref{eq:Lind:N:1}:
\begin{equation}
\hat {\mathcal H}_{p}' \! = \!
2 J \sum_{j=1}^{L-1} \! \left[
\hat n_j \! + \! \hat n_{j+1} \! - \! 2 \hat n_j \hat n_{j+1}
\! - \! \hat a_{j+1}^\dagger \hat a_{j} \! - \! \hat a_j^\dagger \hat a_{j+1}
\right] \! , \;
\end{equation}
where $\hat n_j \equiv \hat a_j^\dagger \hat a_j$
and $J>0$ is a typical energy scale setting the units of measurement.
Upon application of the Jordan-Wigner transformation, the model $\hat {\mathcal H}_p'$
is unitarily equivalent to the following spin-$1/2$ chain model:
\begin{equation}
\hat {\mathcal H}_{p,\rm spin}' = J
\sum_{j=1}^{L-1} \left[ 1
+ \hat \sigma_j^x \hat \sigma_{j+1}^x
+ \hat \sigma_j^y \hat \sigma_{j+1}^y
- \hat \sigma_j^z \hat \sigma_{j+1}^z \right] ,
\end{equation}
where $\hat \sigma_j^{\alpha}$ are Pauli matrices.
Apart from a constant proportional to $L-1$, $\hat {\mathcal H}_{p, \rm spin}'$ is the ferromagnetic Heisenberg model.
The particle-number conservation corresponds to the conservation of the total magnetization
along the $\hat z$ direction. It is a well-known fact that this model has a highly degenerate
ground state but that there is only one ground state for each magnetization sector,
both for finite and infinite lattices.
Thus, this state corresponds to the state $\ket {\psi_N}$ identified above; therefore,
the possibility that the ground state of $\hat {\mathcal H}_p'$ is two-fold degenerate (as would be required for the existence of Majorana modes)
for fixed number of fermions and hard-wall boundary conditions is ruled out.
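The uniqueness statement can be verified by exact diagonalization of the spin chain for a short system: the zero-energy ground space of $\hat{\mathcal H}'_{p,\rm spin}$ has dimension $L+1$, i.e. exactly one state per magnetization (particle-number) sector. A small sketch, with an arbitrarily chosen size:

```python
import numpy as np
from functools import reduce

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Y = np.array([[0.0, -1.0j], [1.0j, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

def site(op, j, L):
    """Embed a single-site operator at position j of an L-site chain."""
    return reduce(np.kron, [op if k == j else I for k in range(L)])

L, J = 4, 1.0
H = sum(J * (np.eye(2 ** L)
             + site(X, j, L) @ site(X, j + 1, L)
             + site(Y, j, L) @ site(Y, j + 1, L)
             - site(Z, j, L) @ site(Z, j + 1, L))
        for j in range(L - 1))

evals = np.linalg.eigvalsh(H)
n_zero = int(np.sum(np.abs(evals) < 1e-8))
print(n_zero)                 # 5 = L + 1: one dark state per magnetization sector
assert n_zero == L + 1
assert evals.min() > -1e-8    # H is positive semi-definite
```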
Summarizing, the dynamics induced by the Lindblad operators in~\eqref{eq:Lind:N:1}
conserves the number of particles and drives the system into a quantum state with the properties
of a p-wave superconductor (in the thermodynamic limit $\ket{\psi_e}$ and $\hat \Pi_N \ket{\psi_e}$
have the same bulk properties, as explicitly checked in Refs.~\cite{Diehl_2011, Bardyn_2013},
but see also the discussion below).
Since the steady states of the system for open boundary conditions are unique, they do not display
any topological edge property.
\subsection{P-wave superconductivity}
Let us explicitly check that the states $\ket{\psi_N}$ have the properties of a p-wave superconductor.
Since each state has a definite number of fermions, the order parameter defined in Eq.~\eqref{eq:order:par}
is zero by symmetry arguments.
In a number-conserving setting, we thus rely on the p-wave pairing correlations:
\begin{equation}
G^{(p)}_{j,l} = \bra{\psi_N} \hat O_j^{(p)\dagger} \hat O^{(p)}_l \ket{\psi_N}
= \bra{\psi_N} \hat a_j^\dagger \hat a_{j+1}^\dagger \hat a_{l+1} \hat a_{l} \ket{\psi_N} .
\label{eq:observable:pairing}
\end{equation}
If in the long-distance limit, $|l-j| \to \infty$, the expectation value saturates to a finite value
or shows a power-law behavior, the system displays p-wave superconducting (quasi-)long-range order.
If the decay is faster, e.g. exponential, the system is disordered.
In this specific case, the explicit calculation shows a saturation at large distance
(see also Ref.~\cite{Iemini_2015}):
\begin{equation}
G^{(p)}_{j,l} \xrightarrow{|j-l| \to \infty} \nu^2 (1-\nu)^2
\label{eq:order:corr}
\end{equation}
in the thermodynamic limit.
The saturation to a finite value captures the p-wave superconducting nature of the states.
Note that the breaking of a continuous symmetry in a one-dimensional system signaled
by Eq.~\eqref{eq:order:corr} is a non-generic feature: a perturbation of Hamiltonian $\hat {\mathcal H}_p'$
would turn that relation into a power-law decay to zero as a function of $|j-l|$ (see Ref.~\cite{Iemini_2015} for an explicit example).
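The saturation value can be recovered from a simple counting argument if one assumes that the fixed-$N$ dark state is, up to Jordan-Wigner phases (which cancel for the adjacent-pair operators involved here; we state this as an assumption rather than prove it), the uniform superposition of all configurations of $N$ fermions on $L$ sites. The pairing operator then simply moves a pair from sites $(l,l+1)$ to $(j,j+1)$, and the correlator reduces to a ratio of binomials:

```python
from math import comb

# Counting argument: the matrix element selects configurations with sites
# (l, l+1) occupied and (j, j+1) empty, i.e. N-2 particles on the other
# L-4 sites, out of all comb(L, N) configurations.
def G_pairing(L, N):
    return comb(L - 4, N - 2) / comb(L, N)

L, nu = 2000, 0.5
N = int(nu * L)
print(G_pairing(L, N))        # close to nu^2 (1-nu)^2 = 0.0625
assert abs(G_pairing(L, N) - nu**2 * (1 - nu)**2) < 1e-3
```

Indeed, $\binom{L-4}{N-2}/\binom{L}{N} = N(N-1)(L-N)(L-N-1)/[L(L-1)(L-2)(L-3)] \to \nu^2(1-\nu)^2$ in the thermodynamic limit.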
\subsection{Dissipative gap}
An interesting feature of $\hat {\mathcal H}_{p,\rm spin}'$ is that it is gapless; the gap closes
as $L^{-2}$ due to the fact that the low-energy excitations have energy-momentum relation $\omega_q\sim q^2$,
as follows from well-known properties of the ferromagnetic Heisenberg model.
The Jordan-Wigner transformation preserves the spectral properties, and thus $\hat {\mathcal H}_{p}'$
is also gapless. Hence, according to the discussion in Sec.~\ref{sec:I:A},
the asymptotic decay rate $\lambda_{\rm ADR}'$ associated to the Lindbladian $\mathcal L'$
closes in the thermodynamic limit. This is true both for periodic and hard-wall boundary conditions.
This fact has two important consequences.
The first is that the dissipative preparation of a fixed-number p-wave superconductor through
this method requires at least a typical time $\tau'$ that scales like $L^2$.
In Sec.~\ref{sec:Numerics1} we numerically confirm this polynomial scaling.
Although this requires an effort which is polynomial in the system size, and which is thus efficient,
it is a slower dynamical scenario than that of the non-number-conserving dynamics considered
in Refs.~\cite{Diehl_2011, Bardyn_2013} and summarized in Sec.~\ref{sec:I:B},
where $\tau$ does not scale with $L$ (the super-operator $\mathcal L$ in that case is gapped), and thus the approach to stationarity is exponential in time. The difference can be traced to the presence of dynamical slow modes related to exact particle number conservation, a property which is abandoned in the mean field approximation of Refs.~\cite{Diehl_2011, Bardyn_2013}.
The second consequence is that a gapless Lindbladian $\mathcal L$ does not ensure
an \textit{a priori} stability of the dissipative quantum state preparation.
Roughly speaking, even a small perturbation $\epsilon \mathcal M'$ ($\epsilon \ll 1$)
to the Lindbladian $\mathcal L'$ such that the dynamics is ruled by
$\mathcal L'+ \epsilon \mathcal M'$ has the potential to qualitatively change the physics of the steady-state
(see Refs.~\cite{Syassen_2008, Li_2014, Ippoliti_2015} for some examples where the presence of a gap
is exploited for a perturbative analysis of the steady states). This concerns, in particular, the long-distance behavior of correlation functions.
To further understand this last point, in Sec.~\ref{sec:Numerics1} we have analyzed the effect of several perturbations through numerical simulations.
In the case in which the steady state has topological properties, they may still be robust. We further elaborate on this point in Sec.~\ref{sec:N:2}, where we study the ladder setup.
Notwithstanding the gapless nature of the Lindbladian $\mathcal L'$, we can show that waiting
for longer times is beneficial to the quantum state preparation.
If we define $p_0(t) = \text{tr} \big[ \hat P_0 \hat \rho(t) \big]$, where $\hat P_0$ is the projector
onto the ground space of the parent Hamiltonian $\hat {\mathcal H}'_p$, then the following monotonicity property holds:
\begin{equation}
\frac{\mathrm d}{\mathrm dt} p_0(t) \geq0.
\label{eq:positive:derivative}
\end{equation}
Indeed, $\frac{\mathrm d}{\mathrm dt} p_0(t) = \text{tr} \big[ \hat P_0 \mathcal L'[\hat \rho(t)] \big] =
\text{tr} \big[ \mathcal L'^*[\hat P_0] \hat \rho(t) \big]$,
where $\mathcal L'^*$ is the adjoint Lindbladian.
It is easy to see that $\mathcal L'^*[\hat P_0] = \gamma \sum_j \hat L_j'^{\dagger} \hat P_0 \hat L'_j$,
which is a non-negative operator because for any state $\ket{\phi}$ it holds that:
\begin{align}
\langle \phi |\mathcal L'^*[\hat P_0]| \phi \rangle
=& \gamma \sum_j \langle \phi |\hat L_j'^{\dagger} \hat P_0 \hat L'_j | \phi \rangle \nonumber \\
=& \gamma \sum_{j, \alpha} |\langle \psi_\alpha |\hat L'_j | \phi \rangle|^2 \geq 0,
\end{align}
where $\{ \ket {\psi_\alpha}\}$ is a basis of the ground space of the parent Hamiltonian $\hat {\mathcal H}'_p$.
If we consider the spectral decomposition of
$\hat \rho (t) = \sum_\beta p_\beta \ket{\phi_\beta} \hspace{-0.05cm} \bra{\phi_\beta}$, with $p_\beta>0$,
we obtain Eq.~\eqref{eq:positive:derivative}.
\section{Single wire: Numerical results}
\label{sec:Numerics1}
Although the previous analysis, based on the study of the dark states of the dynamics, has already identified many distinguishing properties of the system, several features lie outside its reach: for instance, the exact size scaling of the asymptotic decay rate and the resilience of the scheme to perturbations.
In order to complement the analysis of the dissipative dynamics with this information, we now turn to a numerical approach.
The numerical analysis that we are going to present is restricted to systems with hard-wall boundary conditions.
In order to characterize the time evolution described by the master equation~\eqref{eq:master:equation:N:1},
we use two different numerical methods. The first is a Runge-Kutta (RK) integration for systems
of small size (up to $L=10$)~\cite{bookPress}.
This method entails an error due to inaccuracies in the numerical integration,
but the density matrix is represented without any approximation.
On the contrary, the second method, based on a Matrix-Product-Density-Operator (MPDO) representation
of the density matrix, allows the study of longer systems through an efficient approximation
of $\hat \rho$~\cite{Verstraete_2004, Zwolak_2004, Prosen_2009}.
The time evolution is performed through the Time-Evolving Block Decimation (TEBD) algorithm,
which is essentially based on the Trotter decomposition of the Liouville super-operator $e^{t \mathcal L'}$.
Although this method has been shown to be able to reliably describe problems with
up to $\sim 100$ sites~\cite{Biella_2015}, in this case we are not able to consider lengths
beyond $L=22$ because of the highly-entangled structure of the states encountered during the dynamics.
It is an interesting perspective to investigate whether algorithms based on an MPDO representation
of the density matrix, which compute the steady state through maximization of the Lindbladian
super-operator $\mathcal L'$, might prove more fruitful in this context~\cite{Cui_2015, Mascarenhas_2015}.
Finally, we have also performed Exact-Diagonalization (ED) studies of system sizes up to $L=5$
in order to access properties of $\mathcal L'$, such as its spectrum,
which cannot be observed with the time-evolution.
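As a minimal illustration of the RK approach, the master equation can be integrated in vectorized form with a hand-rolled fourth-order Runge-Kutta stepper. Here a single decaying qubit stands in for the fermionic problem, and the step size and evolution time are arbitrary choices:

```python
import numpy as np

# RK4 integration of a master equation in vectorized form, illustrated on a
# single decaying qubit (L = sigma_minus, gamma = 1); the exact steady state
# is the dark state |0><0|.
Lop = np.array([[0.0, 1.0], [0.0, 0.0]])
H = Lop.conj().T @ Lop
I2 = np.eye(2)
Lind = (np.kron(Lop.conj(), Lop)
        - 0.5 * np.kron(I2, H)
        - 0.5 * np.kron(H.T, I2))       # vec(A X B) = (B^T kron A) vec(X)

def rk4_step(v, dt):
    k1 = Lind @ v
    k2 = Lind @ (v + 0.5 * dt * k1)
    k3 = Lind @ (v + 0.5 * dt * k2)
    k4 = Lind @ (v + dt * k3)
    return v + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)   # initial |+><+|
v = rho.flatten(order='F')                                # column stacking
for _ in range(4000):
    v = rk4_step(v, 0.01)                                 # evolve to t = 40
rho_ss = v.reshape(2, 2, order='F')
print(rho_ss.real.round(6))               # [[1. 0.] [0. 0.]]
assert np.allclose(rho_ss, np.diag([1.0, 0.0]), atol=1e-6)
```

For the actual simulations the same scheme acts on the full many-body density matrix, which limits this method to small sizes.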
\subsection{Asymptotic decay rate}
\begin{figure}
\centering
\includegraphics[scale=0.45]{Fig1A.pdf}
\includegraphics[scale=0.45]{Fig1B.pdf}
\caption{(Color online) (Top) Runge-Kutta time evolution of the pairing correlator $G^{(p)}_{1,L-1}(t)$
for the largest available system size, $L=10$.
The inset shows that upon subtraction of the steady value, an exponential decay is observed,
from which $\lambda_{\rm ADR}$ is extracted.
(Bottom) Time evolution of $G^{(p)}_{1,L-1}(t) - \big[ G^{(p)}_{1,L-1} \big]_{\rm ss}$ for several system sizes.
The inset shows the scaling of $\lambda_{\rm ADR}$ with $L$, which is fitted by an algebraic function.
}
\label{scaling.diss.gap}
\end{figure}
Let us first assess that the asymptotic decay rate of the system closes polynomially
with the system size (from the previous analysis we know that it closes \textit{at least} polynomially).
As we discuss in Appendix~\ref{app:Spectral}, in the asymptotic limit,
it is possible to represent the expectation value of any observable $\hat A$ as:
\begin{equation}
\langle \hat{A}\rangle(t) - \langle \hat{A}\rangle_{\rm ss} \sim \kappa e^{-\lambda_{\rm ADR} t} + \ldots
\label{eq:steady:observable}
\end{equation}
where $\langle \hat A \rangle(t) = {\rm tr} \big[ \hat A \, \hat \rho(t) \big]$,
$\langle \hat{A}\rangle_{\rm ss} = \lim_{t \to \infty} \langle \hat{A}\rangle(t)$
and $\kappa$ is a non-universal constant.
The notation $-\lambda_{\rm ADR}$ is due to the fact that $\lambda_{\rm ADR}$ is positive, being defined through the additive inverse of the real part of the eigenvalues, see Eq.~\eqref{eq:ADR}.
It is possible to envision situations where $\kappa=0$ and thus the long-time decay
is dominated by eigenvalues of $\mathcal L'$ with smaller real part.
The study of the long-time dependence of any observable can be used to extract
the value of $\lambda_{\rm ADR}$; among all the possible choices, we employ the pairing correlator
$G^{(p)}_{j,l}(t) = \langle \hat O_j^{(p)\dagger} \hat O^{(p)}_l \rangle(t)$ [see Eq.~\eqref{eq:observable:pairing}]
because of its special physical significance.
In Fig.~\ref{scaling.diss.gap}(top), we consider $L=10$ and plot the time evolution of $G^{(p)}_{j,l}(t)$
for $j=1$ and $l = L-1$ (no relevant boundary effects have been observed as far as the estimation
of $\lambda_{\rm ADR}$ is concerned). The calculation is performed through RK integration of the master equation.
The initial state of the evolution is given by the ground state of the non-interacting Hamiltonian,
$\hat {\mathcal H}_0 = - J
\sum_j \hat a^{\dagger}_{j} \hat a_{j+1} + {\rm H.c.}$
($N=L/2$ for $L$ even, and $N=(L+1)/2$ for $L$ odd).
In order to benchmark the reliability of the RK integration for getting the steady state,
we compare the expectation value
of several observables (in particular of pairing correlators) with the exactly-known results
(Sec.~\ref{sec:N:1} provides the exact wavefunction of the steady state, from which several observables
can be computed). In all cases the absolute differences are below $10^{-6}$.
Similar results are obtained for smaller system sizes, where it is even possible to compute
the trace-distance of the RK steady-state from the $\lambda=0$ eigenstate of the Liouvillian computed with ED.
In the long-time limit, the observable~\eqref{eq:observable:pairing} displays a clear stationary behavior,
$\big[ G^{(p)}_{j,l} \big]_{\rm ss} = \lim_{\tau \to \infty} G^{(p)}_{j,l} (\tau)$,
consistently with Eq.~\eqref{eq:steady:observable}.
Once such stationary value is subtracted,
it is possible to fit $\lambda_{\rm ADR}$ from the exponential decay of
\begin{equation}
G^{(p)}_{j,l}(t) - \big[ G^{(p)}_{j,l} \big]_{\rm ss} \, .
\label{eq:subtraction}
\end{equation}
The subtraction is possible to high precision because the value of $\big[ G^{(p)}_{j,l} \big]_{\rm ss}$ is known
from the previous analytical considerations.
Moreover, as we have already pointed out, the evolution continues up to times such that
$G^{(p)}_{j,l} (t)$ differs in absolute terms from
the analytical value by $\lesssim 10^{-6}$, which makes the whole procedure reliable.
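The subtraction-and-fit procedure can be sketched on synthetic data; the numbers below are arbitrary placeholders, while in the actual analysis the role of the observable is played by $G^{(p)}_{1,L-1}(t)$ and the stationary value is known analytically:

```python
import numpy as np

# Extract lambda_ADR: subtract the known stationary value and fit the
# exponential tail on a log scale (linear least squares on log data).
lam_true, A_ss, kappa = 0.37, 0.25, 0.8          # synthetic parameters
t = np.linspace(5.0, 30.0, 200)                  # long-time window
A = A_ss + kappa * np.exp(-lam_true * t)         # mock observable A(t)

slope, intercept = np.polyfit(t, np.log(A - A_ss), 1)
lam_fit = -slope
print(lam_fit)   # recovers lam_true
assert abs(lam_fit - lam_true) < 1e-8
```

Repeating the fit for several chain lengths then yields the size scaling of $\lambda_{\rm ADR}$.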
In Fig.~\ref{scaling.diss.gap}(bottom) we display the quantity in~\eqref{eq:subtraction}
for various lattice sizes $L$. It is clear that the convergence of the observable requires an amount
of time which increases with $L$.
A systematic fit of $\lambda_{\rm ADR}$ for several chain lengths allows for an estimate
of its dependence on $L$ [see Fig.~\ref{scaling.diss.gap}(bottom)]: the finite-size dissipative gap scales as
\begin{equation}
\lambda_{\rm ADR}\propto L^{-2.13 \pm 0.05 } \; .
\label{eq:lambda:ADR:scaling}
\end{equation}
The exact diagonalization (ED) of the Liouvillian up to $L=5$ allows a number of further considerations.
First, the Liouvillian eigenvalues with largest real part ($\Re(\lambda) \lesssim 0$) are independent of the number of particles (the check has been performed for every value of $N=1,\ldots,5$).
Second, comparing the ED with the previous analysis, we observe that
the $\lambda_{\rm ADR}$ in Eq.~\eqref{eq:lambda:ADR:scaling} coincides with the second eigenvalue
of the Liouvillian, rather than with the first [here the generalized eigenvalues
are ordered according to the additive inverse of their real part $-\Re(\lambda)$].
Numerical inspection of small systems (up to $L=5$) shows that the first excited eigenvalue
of $\mathcal L'$ is two-fold degenerate and takes the value $- \xi/2$, where $\xi$ is the energy
of the first excited state of $\hat {\mathcal H}'_p$ (see the discussion in Sec.~\ref{sec:I:A}).
Our numerics suggests that it does not play any role in this particular dissipative evolution,
hinting at the fact that the chosen $\hat \rho(0)$ does not overlap with the eigensubspace
relative to $-\xi/2$. In this case, the value of $\kappa$ in Eq.~\eqref{eq:steady:observable} is zero.
\subsection{Perturbations}
In order to test the robustness of the dissipative scheme for the preparation of a p-wave superconductor,
we now consider several perturbations of the Lindbladian $\mathcal L'$ of both dissipative and Hamiltonian form.
The robustness of the dissipative state preparation of the p-wave superconductor is probed
through the behavior of the correlations $G^{(p)}_{j,l}(t)$, which define such phase.
\subsubsection{Perturbations of the Lindblad operators}
\begin{figure}
\centering
\includegraphics[scale=0.45]{Fig2.pdf}
\caption{(Color online) Steady-state values of $\big[ G'^{(p)}_{4,j} \big]_{\rm ss}$ [see Eq.~\eqref{eq:gprime}]
for a lattice with $L=22$ sites at half-filling, $\nu=1/2$, computed with MPDO for different values
of $\epsilon$ in $\hat L'_{j,\epsilon}$ [see Eq.~\eqref{eq:pert:exc}].
The inset displays the steady-state values of the local number of fermions
$\langle \hat n_j \rangle_{\rm ss}$ for the same systems.
}
\label{excitation.perturbation.L22}
\end{figure}
Let us define the following perturbed Lindblad operator:
\begin{equation}
\hat L'_{j,\epsilon} = \hat C_{j}^\dagger \hat A_{j,\epsilon};
\quad
\hat A_{j,\epsilon} = \hat a_j-(1-\epsilon)\hat a_{j+1};
\quad \epsilon \in \mathbb R,
\label{eq:pert:exc}
\end{equation}
which allows for slight asymmetries in the action of the dissipation
between sites $j$ and $j+1$. The continuity equation associated to the dynamics, $\partial_t \hat n_i = -(\hat j_i - \hat j_{i-1})$,
is characterized by the following current operator: $\hat j_i = \hat n_i - (1- \epsilon)^2 \hat n_{i+1} + (\epsilon^2 - 2 \epsilon) \hat n_i \hat n_{i+1}$. When $\epsilon \neq 0$, $\hat j_{i}$ is no longer odd under space reflection around the link between sites $i$ and $i+1$, so that in the stationary state a non-zero current can flow even if the density profile is homogeneous (and even under the previous space-inversion transformation); this is quite intuitive given the explicit breaking of inversion symmetry in this problem.
We employ the MPDO method to analyze the steady-state properties
of a system with size $L=22$ initialized in the ground state of the free Hamiltonian
$\hat {\mathcal H}_{0}$ for $N=11$ and subject to such dissipation.
The results in the inset of Fig.~\ref{excitation.perturbation.L22} show that the steady state
is not homogeneous and that a relatively high degree of inhomogeneity
$\frac{\langle \hat n_{L} \rangle - \langle \hat n_{1} \rangle }{\langle \hat n_{L/2} \rangle} \approx 1$
is found also for small perturbations $\epsilon = 0.05$.
This is not to be confused with the phase-separation instability which characterizes
the ferromagnetic parent Hamiltonian $\hat {\mathcal H}'_{p, \mathrm{spin}}$.
Indeed, if PBC are considered, the system becomes homogeneous and a current starts flowing in it
(not shown here).
P-wave superconducting correlations are affected by such inhomogeneity.
Whereas for $\epsilon=0$ the correlations $\big[ G^{(p)}_{j,l} \big]_{\rm ss}$
do not show a significant dependence on $|j-l|$, this is not true even
for small perturbations $\epsilon \leq 0.05$.
In order to remove the effect of the inhomogeneous density,
in Fig.~\ref{excitation.perturbation.L22} we show the value of properly rescaled p-wave correlations:
\begin{equation}\label{eq:gprime}
\big[ G'^{(p)}_{j,l} \big]_{\rm ss} \equiv \langle \, \hat O'^{(p) \dagger}_j \, \hat O'^{(p)}_l \, \rangle_{\rm ss} =
\frac{(N/L)^{4}\, \langle \hat O^{(p) \dagger}_j \hat O^{(p)}_l \rangle_{\rm ss}}
{\langle \hat n_j \rangle_{\rm ss} \langle \hat n_{j+1} \rangle_{\rm ss} \langle \hat n_l \rangle_{\rm ss} \langle \hat n_{l+1} \rangle_{\rm ss} }
\end{equation}
where $\hat O'^{(p)}_j =(N/L)^2 \hat O^{(p)}_j / (\langle \hat n_j \rangle_{\rm ss} \langle \hat n_{j+1} \rangle_{\rm ss})$.
An exponential decay behavior appears as a function of $|j-l|$, which becomes more pronounced when $\epsilon$ is increased.
Even if the simulation is performed on a finite, short system, for significant perturbations, $\epsilon = 0.1$, the value of $\big[ G'^{(p)}_{j,l} \big]_{\rm ss}$ decays over almost two decades, so that the exponential behavior is identified with reasonable certainty.
In Appendix~\ref{app:ParentHamiltonian} we discuss some interesting analogies of these results
with the properties of the ground state of the parent Hamiltonian
$\hat {\mathcal H}'_{p, \epsilon} = J \sum_j \hat L_{j, \epsilon}'^\dagger \hat L'_{j, \epsilon}$.
It should be stressed that, since $\hat {\mathcal H}'_{p, \epsilon}$ does not have a zero-energy
ground state, there is no exact correspondence between its ground state
and the steady states of $\mathcal L_{\epsilon}'$.
Concluding, we mention that a similar analysis can be done introducing an analogous perturbation
in the operator $\hat C_j^\dagger$; our study did not observe any qualitative difference (not shown).
\subsubsection{Perturbations due to unitary dynamics}
\begin{figure}
\centering
\includegraphics[scale=0.4]{Fig3A.pdf}
\includegraphics[scale=0.4]{Fig3B.pdf}
\caption{(Color online) (Top) Pairing correlations
$\big[ G^{(p)}_{4,j} \big]_{\rm ss}$ for the steady state of the dynamics
in the presence of a Hamiltonian perturbation~\eqref{eq:Lind:pert:Ham}.
The calculation of the steady state is performed with MPDO technique for $L=22$ and $N=11$.
(Bottom) The decay of $\big[ G^{(p)}_{4,j} \big]_{\rm ss}$
is exponential in $j$ (here, $\epsilon=0.1$).
}
\label{free.fermion.perturbation}
\end{figure}
An alternative way of perturbing the dynamics of $\mathcal L'$ in Eq.~\eqref{eq:master:equation:N:1}
is to introduce a Hamiltonian into the system, chosen for simplicity to be the already-introduced
free Hamiltonian $\hat {\mathcal H}_0$:
\begin{equation}
\frac{\partial}{\partial t}\hat \rho =
-i [ \epsilon \hat {\mathcal H}_0, \hat \rho] + \mathcal L[\hat \rho].
\label{eq:Lind:pert:Ham}
\end{equation}
Using the MPDO method to characterize the steady state of the dynamics, we analyze
the spatial decay of the pairing correlations for $L=22$ and at half-filling ($N=11$);
the initial state is set in the same way as in the previous section.
In Fig.~\ref{free.fermion.perturbation} (top) we display the results:
even for very small perturbations the pairing correlator
$\big[ G^{(p)}_{4,j} \big]_{\rm ss}$ decays rapidly in space.
The long-distance saturation observed in the absence of perturbations is lost,
and the behavior is qualitatively different from the unperturbed result.
In Fig.~\ref{free.fermion.perturbation} (bottom) we highlight that the decay is exponential.
Summarizing, in all the cases that we have considered, the p-wave pairing correlations of the stationary state $\big[ G^{(p)}_{j,l} \big]_{\rm ss}$ are observed to decay as a function of $|j-l|$. Due to the interplay between the targeted dissipative dynamics and the perturbations, which do not support a p-wave ordered dark state, the steady state is mixed, similar to a finite-temperature state. From this intuition, the result is easily rationalized: any (quasi-)long-range order is destroyed in one-dimensional systems at finite temperature. We note that the true long-range order found in the unperturbed case (correlators saturating at large distance, as opposed to the more generic quasi-long-range order defined by algebraic decay) is non-generic in one-dimensional systems and a special feature of our model; see Ref.~\cite{Iemini_2015} for a thorough discussion. However, the destruction of any such order via effective finite-temperature effects must be expected on general grounds. The absence of quasi-long-range p-wave superconducting order, which in one dimension only occurs at zero temperature for pure states, is likely connected with this fact.
\subsubsection{Perturbation strength}
\begin{figure}
\centering
\includegraphics[scale=0.45]{Fig4.pdf}
\caption{ (Color online) $\big[ G'^{(p)}_{2,L-2} \big]_{\rm ss}$ in the presence of a perturbed Lindblad operator as a function of the perturbation strength $\epsilon$.
The perturbation is considered both for the $\hat A_j$ (top) and $\hat C^\dagger_j$ (bottom) operators (see text for the definitions).
The calculation is done with RK integration of the equation of motion for $L=8$ and $N=4$.
}
\label{hamiltonian.lindblad.perturb}
\end{figure}
Finally, we perform a quantitative investigation of the dependence of the pairing correlations on the perturbation strength, $\epsilon$.
{\it{Lindblad perturbation}} -- In Fig.~\ref{hamiltonian.lindblad.perturb} we plot the p-wave superconducting
correlation $\big[ G^{(p)}_{2,L-2} \big]_{\rm ss}$
of a system of length $L=8$ as a function of the intensity of the perturbation
$\epsilon$ in $\hat L'_{j, \epsilon}$ (for completeness, the complementary case
$\hat L'^{(2)}_{j,\epsilon} = \hat C_{j,\epsilon}^\dagger \hat A_{j}$, with
$\hat C_{j,\epsilon}^\dagger = \hat a_j^\dagger+(1-\epsilon)\hat a^\dagger_{j+1}$, is also included).
Our data confirm that correlations undergo a clear suppression for $\epsilon \neq 0$, which is exponential in $\epsilon$ in one case and in $\epsilon^2$ in the other.
The calculation is performed through RK integration of the dynamics.
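The RK integration referred to here acts on the vectorized master equation, $\partial_t\,\mathrm{vec}(\hat\rho) = \mathcal L\,\mathrm{vec}(\hat\rho)$. A minimal self-contained sketch of this strategy (fourth-order Runge--Kutta on a single decaying two-level system rather than the fermionic chain, purely for illustration; the Hamiltonian and rate are assumptions of the toy example):

```python
import numpy as np

def liouvillian(H, jumps):
    """Matrix L such that d vec(rho)/dt = L @ vec(rho), column-stacked vec."""
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for J in jumps:
        JdJ = J.conj().T @ J
        L += np.kron(J.conj(), J) - 0.5 * (np.kron(I, JdJ) + np.kron(JdJ.T, I))
    return L

def rk4_evolve(L, rho0, dt, steps):
    """Standard RK4 time stepping of the vectorized density matrix."""
    v = rho0.flatten(order='F')          # column stacking
    for _ in range(steps):
        k1 = L @ v
        k2 = L @ (v + 0.5 * dt * k1)
        k3 = L @ (v + 0.5 * dt * k2)
        k4 = L @ (v + dt * k3)
        v = v + (dt / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)
    d = rho0.shape[0]
    return v.reshape((d, d), order='F')

# Illustration: qubit with jump operator sqrt(gamma) |g><e|, starting in |e>
gamma = 1.0
H = np.diag([0.5, -0.5]).astype(complex)
sm = np.array([[0, 1], [0, 0]], dtype=complex)
rho0 = np.diag([0.0, 1.0]).astype(complex)
rho_ss = rk4_evolve(liouvillian(H, [np.sqrt(gamma) * sm]), rho0, 0.01, 2000)
# rho_ss approaches |g><g|; the trace stays equal to 1 along the evolution
```

For the many-body problem the same scheme applies, with the Liouvillian never built as a dense matrix but applied term by term.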
\begin{figure}
\centering
\includegraphics[scale=0.45]{Fig5.pdf}
\caption{
(Color online) $\big[ G^{(p)}_{2,L-2} \big]_{\rm ss}$
in the presence of a perturbing Hamiltonian as a function of the perturbation strength $\epsilon$.
We consider $\hat {\mathcal H}_0$, $\hat {\mathcal H}_{\rm nn}$ and $\hat {\mathcal H}_{\rm pair}$ (see text for the definitions). The inset highlights the exponential decay with $\epsilon$.
}
\label{hamiltonian.lindblad.perturb.2}
\end{figure}
{\it{Hamiltonian perturbation}} -- We begin with the two cases:
$\hat {\mathcal H}_{0}$ and $\hat {\mathcal H}_{\rm nn} = - J \sum_j \hat n_j \hat n_{j+1}$.
Fig.~\ref{hamiltonian.lindblad.perturb.2} shows, in both cases, an exponential decay
to zero of $\big[ G'^{(p)}_{2,L-2} \big]_{\rm ss}$ when $\epsilon$ is increased.
In contrast, a Hamiltonian which introduces p-wave correlations in the system, such as
\begin{equation}
\hat {\mathcal H}_{\rm pair} = -J \sum_{j,l} (\hat a^\dagger_{j} \hat a^\dagger_{j+1} \hat a_{l+1} \hat a_l + {\rm H.c.}),
\label{pairing.hamiltonian:2}
\end{equation}
changes the value and the sign of $\big[ G'^{(p)}_{2,L-2} \big]_{\rm ss}$, leaving it different from zero.
Concluding, we have shown that in all the considered cases, perturbations of both dissipative
and Hamiltonian form are detrimental to the creation of a p-wave superconductor.
This is rationalized by the mixedness of the stationary state in the perturbed case, which parallels a finite-temperature situation.
Note that any generic, algebraically ordered system at $T=0$ possesses gapless modes.
\section{Two wires}\label{sec:N:2}
An intuitive explanation of why the dissipative setup discussed in Sec.~\ref{sec:N:1} does not show topological dark states with fixed number of particles is the fact that this constraint fixes the parity of the state, and thus no topological degeneracy can occur.
It has already been realized in several works that a setup with two parallel wires can overcome
this issue~\cite{Fidkowski_2011, Sau_2011, Cheng_2011, Kraus_2013, Ortiz_2014, Iemini_2015, Lang_2015}.
In this case it is possible to envision a number-conserving p-wave superconducting Hamiltonian
which conserves the parity of the number of fermions in each wire: such symmetry can play the role
of the parity of the number of fermions for $\hat {\mathcal H}_{\rm K}$ in Eq.~\eqref{eq:Kitaev:Hamiltonian}.
Several equilibrium models have already been discussed in this context; here we consider
the novel possibility of engineering a topological number-conserving p-wave superconductor with Markovian dynamics.
\subsection{Steady states}
Let us study a system composed of two wires with spinless fermions described by the canonical
fermionic operators $\hat a_j^{(\dagger)}$ and $\hat b_j^{(\dagger)}$.
For this model we consider three kinds of Lindblad operators:
\begin{subequations}
\label{eq:Lind:N:2}
\begin{align}
& \hat L''_{a,j} = \hat C_{a,j}^\dagger \hat A_{a,j}; \label{eq:Lind_1-wire}
\\
& \hat L''_{b,j} = \hat C_{b,j}^\dagger \hat A_{b,j}; \label{eq:Lind_2-wire}
\\
& \hat L''_{I,j} = \hat C_{a,j}^\dagger \hat A_{b,j} + \hat C^\dagger_{b,j}\hat A_{a,j}.
\label{eq:Lind_Int}
\end{align}
\end{subequations}
We now characterize the dark states of the Markovian dynamics induced by these
operators for a two-leg ladder of length $L$ with hard-wall boundary conditions:
\begin{equation}
\frac{\partial}{\partial t}\hat \rho =
\mathcal L''[\hat \rho] = \gamma
\sum_{j=1}^{L-1} \sum_{\Lambda= a,b,I} \left[ \hat L_{\Lambda,j}'' \hat \rho \hat L_{\Lambda,j}''^\dagger - \frac 12 \{ \hat L_{\Lambda,j}''^\dagger \hat L''_{\Lambda,j}, \hat \rho \} \right].
\label{eq:master:equation:N:2}
\end{equation}
In particular, we will show that, for every fermionic density different from the completely empty and
filled cases, there are always two steady states.
It is easy to identify the linear space $\mathcal S_N$ of states which are annihilated by the $\hat L''_{a,j}$ and $\hat L''_{b,j}$ and have a total number of particles $N$:
\begin{equation}
\mathcal S_N = \text{span} \{
\ket{\psi_{a,0}} \hspace{-0.05cm} \ket{\psi_{b,N}},
\ket{\psi_{a,1}} \hspace{-0.05cm} \ket{\psi_{b,N-1}},
\ldots,
\ket{\psi_{a,N}} \hspace{-0.05cm} \ket{\psi_{b,0}}
\},
\end{equation}
where the states $\ket{\psi_{\alpha,N}}$ are those defined in Eq.~\eqref{eq:proj:N:stst}
for the wire $\alpha=a,b$. Let us consider a generic state in $\mathcal S_N$:
\begin{equation}
\ket{\psi} = \sum_{m = 0}^{N} \alpha_{m}
\ket{\psi_{a,m}} \ket{\psi_{b,N-m}},
\quad \sum_{m = 0}^{N} |\alpha_{m}|^2 =1.
\end{equation}
From the condition $\hat C^\dagger_j \ket{\psi_\sigma} = - \hat A_j \ket{\psi_\sigma}$ we obtain:
\begin{subequations}
\begin{align}
\hat C^\dagger_j \ket{\psi_{N-1}} &= - \hat A_j \ket{\psi_{N+1}}, \quad N \in (0,2L) \\
0 &= - \hat A_j \ket{\psi_{1}} , \\
\hat C^\dagger_j \ket{\psi_{2L-1}} &= 0 ,
\end{align}
\end{subequations}
and when we impose the condition $\hat L''_{I,j} \ket \psi = 0$:
\begin{widetext}
\begin{align}
\hat L''_{I,j} \ket{\psi}
=& \sum_{m = 0}^{N-1} \alpha_{m}
\hat C^{\dagger}_{a,j}\hat A_{b,j} \ket{\psi_{a,m}} \ket{\psi_{b,N-m}} +
\sum_{m = 1}^{N} \alpha_{m} \hat C^\dagger_{b,j}\hat A_{a,j} \ket{\psi_{a,m}} \ket{\psi_{b,N-m}}
= \nonumber \\
=& \sum_{m = 0}^{N-1} \alpha_{m}
\hat C^{\dagger}_{a,j}\hat A_{b,j} \ket{\psi_{a,m}} \ket{\psi_{b,N-m}} - \sum_{m = 2}^{N+1} \alpha_{m} \hat C^\dagger_{a,j}\hat A_{b,j} \ket{\psi_{a,m-2}} \ket{\psi_{b,N-m+2}}=0.
\end{align}
\end{widetext}
The result is $\alpha_m = \alpha_{m+2}$, so that two linearly independent states
can be constructed which are annihilated by all the Lindblad operators in~\eqref{eq:Lind:N:2}:
\begin{subequations}
\label{eq:states:ee}
\begin{align}
\ket{\psi_{N,ee}} =& \frac{1}{\mathcal N_{N,ee}^{1/2}}\sum_m \ket{\psi_{a,2m}}\ket{\psi_{b,N-2m}}, \\
\ket{\psi_{N,oo}} =& \frac{1}{\mathcal N_{N,oo}^{1/2}}\sum_m \ket{\psi_{a,2m-1}}\ket{\psi_{b,N-2m+1}}.
\end{align}
\end{subequations}
The subscripts $ee$ and $oo$ refer to the fermionic parities in the first
and second wire assuming that $N$ is even; $\mathcal N_{N,ee}$ and $\mathcal N_{N,oo}$
are normalization constants~\cite{Iemini_2015}. For $N$ odd one can similarly construct
the states $\ket{\psi_{N,eo}}$ and $\ket{\psi_{N,oe}}$.
By construction, the states that we have just identified are the only dark states of the dynamics.
It is an interesting fact that at least two parent Hamiltonians are known
for the states in~\eqref{eq:states:ee}, as discussed in Refs.~\cite{Iemini_2015, Lang_2015}.
We refer the reader interested in the full characterization of the topological properties
of these steady-states to those articles.
Finally, let us mention that the form of the Lindblad operators in~\eqref{eq:Lind:N:2}
is not uniquely defined. For example one could replace $\hat L''_{I,j}$ in Eq.~\eqref{eq:Lind_Int}
with the following:
\begin{equation}
\hat L''_{I,j} = \left( \hat C_{a,j}^\dagger + \hat C_{b,j}^\dagger \right) \left( \hat A_{a,j} + \hat A_{b,j} \right),
\label{eq:Lind_Int2}
\end{equation}
without affecting the results~\cite{Iemini_2015}.
The latter operator is most realistic for an experimental implementation, as we point out below.
\subsection{P-wave superconductivity}
Let us now check that the obtained states are p-wave superconductors.
Similarly to the single-wire protocol discussed in Eq.~\eqref{eq:order:corr},
the explicit calculation~\cite{Iemini_2015} shows that p-wave correlations
saturate to a final value at large distances in the thermodynamic limit
[for the two-leg ladder we consider $\nu=N/(2L)$]
\begin{equation}
\bra{\psi_{N,ee}} \hat O_j^{(p)\dagger} \hat O^{(p)}_l \ket{\psi_{N,ee}}
\xrightarrow{|j-l| \rightarrow \infty} \nu^2(1-\nu)^2.
\end{equation}
This relation clearly highlights the p-wave superconducting nature of the states.
\subsection{Dissipative gap}
In order to demonstrate that the asymptotic decay rate $\lambda_{\rm ADR}$ associated
to $\mathcal L''$ tends to $0$ in the thermodynamic limit, we consider the parent Hamiltonian of the model:
\begin{align}
\label{eq:hamiltonian}
\hat {\mathcal H}_p'' \! = \! & - 4 J \!\! \sum_{\substack{j=1 \\ \alpha =a,b}}^{L-1} \!\! \Big[ (\hat \alpha^\dagger_{j}\hat \alpha_{j+1} \! + \! \text{H.c.}) - \! (\hat n_j^{\alpha} + \hat n_{j+1}^{\alpha}) + \! \hat n_j^{\alpha} \hat n_{j+1}^{\alpha} \Big] \nonumber \\
& -2 J \sum_{j=1}^{L-1} \Big[ (\hat n_j^a + \hat n_{j+1}^a)(\hat n_j^b + \hat n_{j+1}^b) - (\hat a^\dagger_{j}\hat a_{j+1} \hat b^\dagger_{j} \hat b_{j+1} \nonumber\\
& \hspace{0.65cm} + \hat a_{j}^\dagger \hat a_{j+1} \hat b^\dagger_{j+1} \hat b_{j} - 2 \hat b^\dagger_{j} \hat b^\dagger_{j+1} \hat a_{j+1} \hat a_{j} + {\rm H.c.})
\Big] ,
\end{align}
where $J>0$ is a typical energy scale setting the units of measurement.
This Hamiltonian has been extensively analyzed in Ref.~\cite{Iemini_2015}.
Numerical simulations performed with the density-matrix renormalization-group algorithm
indicate that $\hat {\mathcal H}_p''$ is gapless, with a finite-size gap closing as $1/L^2$.
According to the discussion in Sec.~\ref{sec:I:A}, the asymptotic decay rate $\lambda_{\rm ADR}$
associated to the Lindbladian $\mathcal L''$ therefore closes in the thermodynamic limit
at least as fast as $L^{-2}$.
This is true both for periodic and hard-wall boundary conditions.
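The $1/L^2$ closing quoted above is extracted from finite-size data by a log-log fit; a minimal sketch (with synthetic gaps standing in for the DMRG data, and an illustrative amplitude):

```python
import numpy as np

def fit_power_law(sizes, gap):
    """Fit gap ~ c * sizes**(-alpha) via log-log regression; return (c, alpha)."""
    slope, intercept = np.polyfit(np.log(sizes), np.log(gap), 1)
    return np.exp(intercept), -slope

# Synthetic stand-in for the finite-size gaps of the parent Hamiltonian
L_sizes = np.array([8, 12, 16, 20, 24, 32], dtype=float)
gaps = 3.7 / L_sizes**2          # amplitude 3.7 is an illustrative assumption

c, alpha = fit_power_law(L_sizes, gaps)   # alpha ~ 2 signals a 1/L^2 closing
```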
\subsection{Experimental implementation}
The Lindblad operators in Eqs.~\eqref{eq:Lind_1-wire}, \eqref{eq:Lind_2-wire} and~\eqref{eq:Lind_Int2}
lend themselves to a natural experimental implementation.
The engineering of terms like $\hat L''_{a,j}$ and $\hat L''_{b,j}$ has been extensively discussed in Ref.~\cite{Diehl_2011} starting from ideas originally presented in Ref.~\cite{Diehl_2008}. As we will see, the Lindblad operator $\hat L''_{I,j}$ in Eq.~\eqref{eq:Lind_Int2} is just a simple generalization.
The idea is as follows: a superlattice is imposed which introduces additional
higher-energy auxiliary sites located in the middle of each square formed by the sites of the (lower-lying) target lattice.
Driving lasers are then applied to the system, whose phases are chosen such that the excitation
to the auxiliary sites happens only for states $\ket{\varphi}$ such that
$(\hat A_{a,j} + \hat A_{b,j}) \ket{\varphi} \neq 0$.
If the whole system is immersed into, e.g., a Bose-Einstein condensate reservoir, atoms located
in the auxiliary sites can decay to the original wire by emission
of a Bogoliubov phonon of the condensate.
This process is isotropic and, for a wavelength of the emitted phonons comparable to the lattice spacing, gives rise to the four-site creation part
with relative plus sign: $\hat C^\dagger_{a,j} + \hat C^\dagger_{b,j}$.
\subsection{Perturbations}
An important property of topological Hamiltonians is the robustness of their edge physics to local perturbations. Similar features have been highlighted in the case of topological superconductors where the setup is not number conserving~\cite{Diehl_2011, Bardyn_2013}. The goal of this section is to probe the resilience of the twofold-degenerate steady states of $\mathcal L''$. A conclusive analysis is beyond our current numerical possibilities; here we present some preliminary results obtained via exact diagonalization methods.
We consider the natural choice of Lindblad operators Eqs.~(\ref{eq:Lind_1-wire},\ref{eq:Lind_2-wire},\ref{eq:Lind_Int2}), subject to perturbations:
\begin{subequations}
\label{eq:pert:exc:2}
\begin{align}
& \hat L''_{a,j,\epsilon} = \hat C_{a,j}^\dagger \hat A_{a,j,\epsilon};
\quad
\hat A_{a,j,\epsilon} = \hat a_j-(1-\epsilon)\hat a_{j+1};
\\
& \hat L''_{b,j,\epsilon} = \hat C_{b,j}^\dagger \hat A_{b,j,\epsilon};
\quad
\hat A_{b,j,\epsilon} = \hat b_j-(1-\epsilon)\hat b_{j+1};
\\
& \hat L''_{I,j} = \left( \hat C_{a,j}^\dagger + \hat C_{b,j}^\dagger \right) \left( \hat A_{a,j,\epsilon} + \hat A_{b,j,\epsilon} \right) ; \quad \epsilon \in \mathbb R
\end{align}
\end{subequations}
These operators define a perturbed Lindbladian $\mathcal L''_{\epsilon}$; they are a simple generalization of those defined in Eq.~\eqref{eq:pert:exc} for the single-wire setup.
We begin our analysis by showing that for small sizes $L \sim 6$ the degeneracy of the $\epsilon=0$ steady space is broken.
First, recall that for $\epsilon=0$ the steady space is four-fold degenerate; a possible parameterization is:
\begin{align}
\mathcal B = \{ & \ket{\psi_{N,ee}} \hspace{-0.05cm} \bra{\psi_{N,ee}} ,
\quad
\ket{\psi_{N,ee}} \hspace{-0.05cm} \bra{\psi_{N,oo}} ,\\
&\ket{\psi_{N,oo}} \hspace{-0.05cm} \bra{\psi_{N,ee}} ,
\quad
\ket{\psi_{N,oo}} \hspace{-0.05cm} \bra{\psi_{N,oo}} \}.
\end{align}
A direct inspection of the eigenvalues of $\mathcal L''_{\epsilon}$ shows that this degeneracy is broken once $\epsilon \neq 0$. The results, shown in Fig.~\ref{two.wires.perturbation.L6N6} for a fixed lattice size $L=6$ and $N=6$, display a quadratic splitting of the steady-state degeneracy with the perturbation strength.
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{Fig6.pdf}
\caption{ (Color online) Real part of the first six eigenvalues of the Lindbladian operator $\mathcal L''_{\epsilon}$ for $L=6$ and $N=6$ as a function of $\epsilon$. Eigenvalues $\lambda_j$ are sorted according to increasing $- \Re (\lambda_j)$. The plot highlights the presence of a $\lambda=0$ eigenvalue (within numerical accuracy $10^{-15}$), of three eigenvalues which scale as $\epsilon^{2}$ and of other eigenvalues of magnitude $\sim 1$. }
\label{two.wires.perturbation.L6N6}
\end{figure}
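The "direct inspection of the eigenvalues" amounts to diagonalizing the Liouvillian as a matrix on vectorized density matrices and sorting the spectrum by increasing $-\Re(\lambda_j)$. A minimal sketch on a two-level toy model (decay plus dephasing; the operators and rates are illustrative assumptions, not those of the ladder):

```python
import numpy as np

def liouvillian_matrix(H, jumps):
    # d vec(rho)/dt = L @ vec(rho), column-stacked vectorization
    d = H.shape[0]
    I = np.eye(d)
    L = -1j * (np.kron(I, H) - np.kron(H.T, I))
    for J in jumps:
        JdJ = J.conj().T @ J
        L += np.kron(J.conj(), J) - 0.5 * (np.kron(I, JdJ) + np.kron(JdJ.T, I))
    return L

sm = np.array([[0, 1], [0, 0]], dtype=complex)       # decay |e> -> |g>
sz = np.diag([1.0, -1.0]).astype(complex)            # dephasing
Lmat = liouvillian_matrix(0.5 * sz, [0.8 * sm, 0.3 * sz])

evals = np.linalg.eigvals(Lmat)
evals = evals[np.argsort(-evals.real)]   # steady state (lambda = 0) comes first
# Here Re(evals) = 0, -1/2, -1/2, -0.64: a unique steady state plus a gap
```

For the ladder, the same procedure is applied in a fixed particle-number sector, with $\hat L''_{\Lambda,j,\epsilon}$ as jump operators.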
Let us now check how the lowest eigenvalues behave as the system size is increased.
In order to obtain a reasonable number of data points, we make the extreme choice of setting $N=2$ in all simulations, which allows us to analyze system sizes up to $L=20$. The results in Fig.~\ref{two.wires.perturbation.fixedN2} (top) show that the Liouvillian eigenvalues related to the steady-state degeneracy display an algebraic scaling $\lambda_{\rm ADR}\sim L^{-1}$ in the accessible regime of system sizes for small perturbations ($\epsilon=10^{-2}$), while they are gapped for larger perturbations ($\epsilon=10^{-1}$).
Note that, for the system sizes which could be accessed, larger eigenvalues clearly display an algebraic decay, as shown in Fig.~\ref{two.wires.perturbation.fixedN2} (bottom), also for $\epsilon=0.1$. The scaling of the eigenvalues related to the steady-state degeneracy is not exponential and thus in principle should not be connected to the topological properties of the system. However, these preliminary considerations suffer from three significant biases: (i) the small system sizes considered, (ii) the fact that the simulations are not performed at exactly fixed density, and (iii) the very low filling. A more thorough analysis is left for future work.
\begin{figure}[t]
\centering
\includegraphics[scale=0.45]{Fig7A.pdf}
\includegraphics[scale=0.45]{Fig7B.pdf}
\caption{(Color online) Real part of the eigenvalues $j=2$, $3$ and $4$ (top) and $j=5$ and $6$ (bottom) of the Lindbladian operator $\mathcal L''_{\epsilon}$ for $N=2$ as a function of $L$ (here, $L\leq20$).
The two values $\epsilon=0.1$ and $\epsilon=0.01$ are considered.
In the top panel, the values of the eigenvalues relative to $\epsilon=0.1$ have been rescaled by $0.01$ in order to facilitate the readability of the plot.}
\label{two.wires.perturbation.fixedN2}
\end{figure}
\section{Conclusions}\label{sec:conc}
In this article we have discussed the dissipative quantum state preparation of a p-wave superconductor in one-dimensional fermionic systems with fixed number of particles.
In particular, we have presented two protocols which have been fully characterized in the presence
of hard-wall boundaries. Whereas the former does not display topological properties,
the latter features a two-dimensional steady space to be understood in terms of boundary
Majorana modes for any number of fermions. Through the analysis of a related parent Hamiltonian,
we are able to make precise statements about the gapless nature of the Lindbladian
super-operators associated to both dynamics.
The peculiar form of the master equations considered in this article allows for the exact
characterization of several properties of the system, and in particular of the steady state,
even if the dynamics is not solvable with the methods of fermionic linear
optics~\cite{Prosen_2008, Bravyi_2012} exploited in Refs.~\cite{Diehl_2011, Bardyn_2013}.
This result is very interesting \textit{per se}, as such examples are usually rare
but can drive physical intuition into regimes inaccessible without approximations.
It is a remarkable challenge to investigate which of the properties presented so far are general
and survive modifications of the environment, and which ones are peculiar to this setup.
Using several numerical methods for the study of dissipative many-body systems, we have presented a detailed analysis of the robustness to perturbations of these setups. Through the calculation of the proper p-wave correlations we have discussed how external perturbations can modify the nature of the steady state. In the ladder setup, where the steady states are topological, we have presented preliminary results on the stability of the degenerate steady-space of the system.
The analysis presented here has greatly benefited from exact mathematical relations
between the properties of the Lindbladian and of a related parent Hamiltonian.
Since the study of closed systems is much more developed than that of open systems
both from the analytical and from the numerical points of view, a more detailed understanding of the relations between Lindbladians and associated parent Hamiltonian operators
stands as a priority research program.
\acknowledgments
We acknowledge enlightening discussions with C. Bardyn, G. De Palma, M. Ippoliti and A. Mari.
F. I. acknowledges financial support by the Brazilian agencies FAPEMIG, CNPq, and
INCT- IQ (National Institute of Science and Technology for Quantum Information).
D. R. and L. M. acknowledge the Italian MIUR through FIRB Project No. RBFR12NLNA.
R. F. acknowledges financial support from the EU projects SIQS and QUIC and from Italian MIUR via PRIN Project No. 2010LLKJBX.
S. D. acknowledges support via the START Grant No. Y 581-N16, the German Research Foundation through ZUK 64, and through the Institutional
Strategy of the University of Cologne within the German Excellence Initiative (ZUK 81).
L. M. is supported by LabEX ENS-ICFP: ANR-10-LABX-0010/ANR-10-IDEX-0001-02 PSL*.
This research was supported in part by the National Science Foundation under Grant No. NSF PHY11-25915.
\section{Introduction}
\label{intro}
Holographic correlators are correlation functions of single trace operators in CFTs with holographic duals, such that the large $N$ limit of the CFT correlators is dual to scattering of gravitons or KK modes in weakly coupled AdS gravity. One reason to study these correlators is to learn about quantum gravity on curved spacetime. Another motivation is that there is a concrete limit \cite{Penedones:2010ue,Polchinski:1999ry,Susskind:1998vk,Giddings:1999jq,Fitzpatrick:2011hu,Fitzpatrick:2011jn} of the AdS correlator that gives the flat space S-matrix, so we can use CFT to study quantum gravity in flat space. For CFTs with M-theory duals, which is the subject of this paper, this motivation is especially compelling, because the M-theory S-matrix cannot be computed from flat space methods even in principle beyond the lowest few terms in the small Planck length expansion \cite{Green:1997as, Russo:1997mk, Green:2005ba}. Instead, \cite{Chester:2018aca} proposed that the M-theory S-matrix could be defined to all orders by first computing the dual CFT correlator at large $N$ and then taking the flat space limit.
Holographic correlators were originally computed at large $N$ using Witten diagrams derived from the explicit AdS supergravity action, see e.g. \cite{Arutyunov:2000py,Arutyunov:2002ff,Arutyunov:2002fh,DHoker:1999bve,DHoker:1999mqo,DHoker:1999kzh}. Recently, a more powerful analytic bootstrap approach was introduced in \cite{Rastelli:2017udc} that computes tree level holographic correlators in the leading supergravity approximation purely based on crossing symmetry, analyticity, superconformal symmetry, and the flat space limit. This approach was generalized to higher derivative corrections to tree level supergravity in \cite{Alday:2014tsa,Goncalves:2014ffa} following the general discussion in \cite{Heemskerk:2009pn}, where the coefficients of these corrections are no longer completely determined by symmetry, but can be fixed using theory-specific inputs like supersymmetric localization \cite{Chester:2018aca,Binder:2018yvd,Binder:2019jwn,Binder:2019mpb,Chester:2019jas,Chester:2020dja,Chester:2020vyz,Chester:2019pvm} for 3d and 4d CFTs or protected sectors \cite{Chester:2018dga} for 6d CFTs.
The cutting edge of the large $N$ analytic bootstrap is 1-loop, which can be computed up to a finite number of contact terms by ``squaring'' the tree level CFT data of all double-trace operators in the correlator \cite{Aharony:2016dwx}. More precisely, we apply crossing symmetry to this data to compute the double-discontinuity, which according to the Lorentzian inversion formula \cite{Caron-Huot:2017vep} determines the entire correlator up to a finite number of contact terms discussed in \cite{Heemskerk:2009pn} that contribute to CFT data of low spins.\footnote{At finite $N$ the Lorentzian inversion formula in fact converges for CFT data of all spins \cite{Caron-Huot:2017vep,Caron-Huot:2018kta,Lemos:2021azv}, but this convergence gets worse in the large $N$ expansion.} This program was carried out for 1-loop diagrams involving the supergravity $R$ and first higher derivative correction $R^4$ vertices for 4d $SU(N)$ $\mathcal{N}=4$ SYM \cite{Alday:2017xua,Alday:2018pdi,Alday:2018kkw,Alday:2019nin,Alday:2017vkk,Aprile:2017bgs,Aprile:2017qoy,Aprile:2019rep,Drummond:2019hel,Drummond:2020dwr,Aprile:2020luw}, which is dual to Type IIB string theory on $AdS_5\times S^5$, and 6d $(2,0)$ theory \cite{Alday:2020tgi}, which is dual to $AdS_7\times S^4$ for the $A_{N-1}$ theories and $AdS_7\times S^4/\mathbb{Z}_2$ for the $D_N$ theories.\footnote{For CFTs dual to higher spin gravity, 1-loop terms were also computed in 3d in \cite{Aharony:2018npf}, see also \cite{Binder:2021cif,Turiaci:2018nua,Li:2019twz,Silva:2021ece} for tree level results.} The $R|R$ diagram has a single contact term ambiguity that was fixed using localization for $\mathcal{N}=4$ SYM in \cite{Chester:2019pvm}. However, higher-loop diagrams such as $R|R^4$ have too many ambiguities to be fixed using localization, and for 6d $(2,0)$ theory there are no localization results available.
These ambiguities also cannot be fixed by comparing to the known S-matrix in the flat space limit, which was used to fix tree level higher derivative terms in \cite{Goncalves:2014ffa,Binder:2019jwn,Chester:2020dja,Chester:2019jas,Chester:2020vyz}, since the contact term ambiguities are purely AdS features that disappear in the flat space limit.
In this paper we will extend the 1-loop analytic bootstrap to 3d maximally supersymmetric ABJ(M) theory \cite{Aharony:2008ug,Aharony:2008gk} with Chern-Simons level $k=2$, which is holographically dual to M-theory on $AdS_4\times S^7/\mathbb{Z}_2$, and propose how to fix 1-loop contact term ambiguities using an analytic continuation of the Lorentzian inversion formula. There are two such maximally supersymmetric CFTs with gauge groups $U(N)_2\times U(N)_{-2}$ or $U(N+1)_2\times U(N)_{-2}$, which are called ABJM or ABJ respectively, but they are identical when expanded at large central charge\footnote{The central charge is defined in \eqref{stress} as the coefficient of the canonically-normalized stress-tensor two point function, which has been calculated to all orders in $1/N$ through supersymmetric localization \cite{Chester:2014fya} using the results of \cite{Jafferis:2010un} and \cite{Closset:2012vg}.} $c_T\sim N^{\frac32}$,\footnote{This is because from the M-theory point of view, the two theories differ by a torsion flux, i.e.~a discrete holonomy of the 3-form field on a torsion 3-cycle of $S^7 / \mathbb{Z}_2$ \cite{Aharony:2008gk}. This torsion flux affects the CFT data only through non-perturbative effects.} so we will refer to both theories collectively as ABJ(M). We will study the stress tensor multiplet correlator $\langle 2222\rangle$, where $\langle pppp\rangle$ denotes the correlator of the bottom component of the $p$th lowest single trace half-BPS multiplet, which are dual to the corresponding $p$th lowest scalar KK mode in the dimensional reduction of M-theory on AdS. We will find it convenient to work with the Mellin transform $M(s,t;\sigma,\tau)$ of $\langle2222\rangle$, where $\sigma,\tau$ parameterize the $R$-symmetry dependence and $s,t$ are Mellin variables that are related to the 11d Mandelstam variables in the flat space limit. The large $c_T$ expansion of $M$ is constrained by the analytic bootstrap to take the form
\es{M2222}{
M(s,t;\sigma,\tau)&=c_T^{-1}M^R+c_T^{-\frac53}B^{R^4}_4M^4+c_T^{-2}(M^{R|R}+B^{R|R}_4M^4)\\
&+c_T^{-\frac73}(B^{D^6R^4}_4M^4+B^{D^6R^4}_6M^6+B^{D^6R^4}_7M^7)\\
&+c_T^{-\frac{23}{9}}(B^{D^8R^4}_4M^4+B^{D^8R^4}_6M^6+B^{D^8R^4}_7M^7+B^{D^8R^4}_8M^8)\\
&+c_T^{-\frac83}(M^{R|R^4}+B^{R|R^4}_4M^4+B^{R|R^4}_6M^6+B^{R|R^4}_7M^7+B^{R|R^4}_8M^8)+\dots\,,
}
where the $M$'s are functions of $s,t,\sigma,\tau$ with numerical coefficients $B$ that can depend on $k$. The tree level terms\footnote{Since $c_T$ is the only expansion parameter, we can only distinguish between tree and loop terms at low orders where they have different powers of $c_T$.} at orders $c_T^{-1}$, $c_T^{-\frac53}$, and $c_T^{-\frac73}$ were previously computed for both $k=1,2$ ABJ(M) in \cite{Zhou:2017zaw}, \cite{Chester:2018aca}, and \cite{Binder:2018yvd}, respectively, while this paper will focus on the 1-loop terms $R|R$ at order $c_T^{-2}$ and $R|R^4$ at order $c_T^{-\frac83}$ for $k=2$ ABJ(M).\footnote{We will also show some results for the $R^4|R^4$ term at order $c_T^{-{10}/{3}}$, but we will not consider the $R|D^6R^4$ term that contributes at the same order.}
As in the 4d and 6d cases, double trace long operators are degenerate in the generalized free field theory (GFFT) in the $c_T\to\infty$ limit, so their tree level CFT data at orders $c_T^{-1}$ for $R$ and $c_T^{-\frac53}$ for $R^4$ must be unmixed to get the 1-loop corrections we consider. For $k=2$ ABJ(M), this unmixing requires the average of GFFT OPE coefficients obtained from $\langle pppp\rangle$ for even $p$, as well as the average of $c_T^{-1}$ and $c_T^{-\frac53}$ anomalous dimensions obtained from $\langle 22pp\rangle$ for even $p$. For $k=1$ ABJM, the 1-loop double discontinuity would also receive contributions from the OPE coefficients of double trace long operators with odd twists, which generically contribute to the 1-loop double discontinuity of large $N$ 3d CFTs \cite{Aharony:2018npf}. These degenerate contributions must be similarly unmixed using GFFT OPE coefficients in $\langle pppp\rangle$ for odd $p$ as well as the average of $c_T^{-1}$ and $c_T^{-\frac53}$ OPE coefficients from $\langle 22pp\rangle$ for odd $p$. These odd $p$ terms do not contribute to the $k=2$ theory, because they are projected out by the orbifold.\footnote{In the 4d and 6d cases, all double trace multiplets have even twists, so only anomalous dimensions contribute to the 1-loop double discontinuity. The only difference between the orbifold and the non-orbifold cases is that we restrict the sum over tree level anomalous dimensions to even $p$ for the orbifold cases.} We computed the average GFFT OPE coefficients from $\langle pppp\rangle$ for general $p$ by computing the full superconformal block expansion up to $p=9$ using the superconformal Ward identities in \cite{Dolan:2004mu} and then guessing the general $p$ formula, similar to what we did in \cite{Alday:2020tgi} for the 6d case.
For the tree $\langle 22pp\rangle$ data for even $p$, we extracted the average anomalous dimension from the $R$ correlator given in \cite{Alday:2020dtb} as well as the $R^4$ correlator that we compute here using the known M-theory term in the flat space limit. The odd $p$ case is technically much harder, because the tree level supergravity $\langle 22pp\rangle$ cannot be written in terms of a finite number of $\bar D_{r_1,r_2,r_3,r_4}(U,V)$ functions, unlike the even $p$ case in 3d or the general $p$ case in 4d and 6d.\footnote{In 4d and 6d, the correlator can be written in terms of just a few $\bar D_{r_1,r_2,r_3,r_4}(U,V)$ with integer arguments for any $p$. In 3d and even $p$, the number of $\bar D_{r_1,r_2,r_3,r_4}(U,V)$ grows with $p$ and has negative and half-integer arguments, as we will show in the main text.} We will thus only discuss $k=2$ ABJ(M) in this paper, where we use the even $p$ data to compute $R|R$, $R|R^4$ and $R^4|R^4$ using the Lorentzian inversion formula up to the finite number of contact terms discussed above, extract the low-lying CFT data for spins unaffected by these contact terms, and successfully compare to the M-theory S-matrix terms computed in \cite{Russo:1997mk,Green:1997as,Alday:2020tgi} in the flat space limit, which is unaffected by these contact terms.
We then analytically continue the Lorentzian inversion formula to extract CFT data of spins that are affected by the contact term ambiguities. For $R|R$, we find that this analytic continuation works for all CFT data, and in particular allows us to fix the contact term $B^{R|R}_4M^4$ in \eqref{M2222} to zero, where $M^{R|R}$ is defined so that its CFT data is analytic in spin for all values. For $R|R^4$, we find that the analytic continuation of the inversion formula works for all CFT data except that affected by the $B^{R|R^4}_4M^4$ contact term, which allows us to fix the other three contact terms in \eqref{M2222}. We then apply two localization constraints from \cite{Chester:2018aca,Agmon:2017xes} and \cite{Binder:2018yvd,Binder:2019mpb} to further constrain the amplitude. For $R|R$ we find that one of the localization constraints independently fixes $B^{R|R}_4=0$,\footnote{We are only able to write $M^{R|R}$ up to a finite number of polynomial in $s,t$ ambiguities that are in principle fixed by superconformal symmetry, but are difficult to fix in practice. As such we could not impose the localization constraint in \cite{Binder:2018yvd,Binder:2019mpb} that requires us to integrate $M^{R|R}$, but we were still able to impose the localization constraint from \cite{Chester:2018aca,Agmon:2017xes} that simply fixes certain protected OPE coefficients, which can be extracted from $R|R$ using the Lorentzian inversion formula without knowing the explicit Mellin amplitude as we will discuss in the main text. For $M^{R|R^4}$, we managed to compute the complete Mellin amplitude, so we could impose both localization constraints.} which confirms the result from the conjectured analytic continuation, while for $R|R^4$ we use one constraint to fix the remaining $B^{R|R^4}_4$ coefficient and the second constraint as a nontrivial check.
Finally, we compare the CFT data extracted from the 1-loop Mellin amplitudes to the numerical bootstrap. In \cite{Agmon:2017xes}, it was conjectured that the $k=2$ ABJ(M) theory saturates the numerical bootstrap bounds for $\langle2222\rangle$, which was motivated by comparing the bounds to the all orders in $1/c_T$ calculation of short operator OPE coefficients computed from supersymmetric localization \cite{Agmon:2017xes,Nosaka:2015iiw,Dedushenko:2016jxl,Kapustin:2009kz}. This conjecture was further checked in \cite{Chester:2018lbz}, where the $c_T^{-1}$ correction to all CFT data in $\langle2222\rangle$ was found to saturate the bootstrap bounds at sufficiently large $c_T$, but this correction does not depend on $k$. To compare higher order in $1/c_T$ corrections that depend on $k$, one must keep in mind that the large $c_T$ expansion is asymptotic, so either one must look at very high values of $c_T$, which require extremely high numerical bootstrap accuracy, or focus on terms in the $1/c_T$ expansion where the asymptotic expansion still converges (i.e. subsequent terms have decreasing coefficients). We focus on the $c_T^{-2}$ terms in the OPE coefficients of semishort operators, which are the most converged corrections beyond $O(c_T^{-1})$, and find that these corrections noticeably improve the saturation of the bootstrap bounds, which we compute at much higher accuracy than the previous studies \cite{Chester:2014fya,Chester:2014mea,Agmon:2017xes,Agmon:2019imm}. We do not have enough accuracy yet to extract sufficient CFT data to fix the $D^8R^4$ term in \eqref{M2222}, which would fulfill the goal of \cite{Chester:2018aca} of deriving new terms in the M-theory S-matrix from CFT. 
Nevertheless, this is the first successful comparison of analytic $c_T^{-2}$ terms to the numerical bootstrap, which improves on the many tree level checks performed in various other contexts in \cite{Beem:2016wfs,Beem:2015aoa,Chester:2018lbz,Binder:2021cif,Binder:2020ckj}, and is a necessary step toward the goal of deriving $D^8R^4$.
The rest of this paper is organized as follows. In Section~\ref{4points} we compute the explicit superblock decomposition of $\langle qqpp\rangle$ for $q\leq p$ and $p\leq9$. We use these superblock expansions to extract the average GFFT OPE coefficients and tree level anomalous dimensions that we will need to compute the 1-loop data for $k=2$ ABJM. In Section~\ref{1loop}, we use this data to compute the 1-loop corrections to the stress tensor correlator up to contact term ambiguities. We then match these 1-loop terms to the M-theory S-matrix in the flat space limit. We also extract CFT data from the 1-loop correlators using the Lorentzian inversion formula as well as a projection method. In Section \ref{1loopContact}, we fix the contact term ambiguities at orders $c_T^{-2}$ and $c_T^{-\frac83}$ by combining constraints from supersymmetric localization with an analytic continuation of the Lorentzian inversion formula. In Section \ref{numerics} we compare some of these 1-loop analytic results to bounds from the numerical conformal bootstrap, which we compute at much higher accuracy than previous studies. Finally, in Section~\ref{conc} we discuss future directions. Several technical details are given in various Appendices. We also include an ancillary \texttt{Mathematica} notebook, which includes many of our more complicated explicit results.
\section{Four-point functions at large $c_T$}
\label{4points}
We start by discussing the large $c_T\sim N^{3/2}$ expansion of four-point functions of the dimension $\frac p2$ scalar bottom components of half-BPS supermultiplets in $\mathcal{N}=8$ ABJ(M) theory, and derive the data needed for the 1-loop terms for $k=2$ ABJ(M) in the following sections. First we will review the constraints of the 3d $\mathcal{N}=8$ superconformal algebra $\mathfrak{osp}(8|4)\supset\mathfrak{so}(3,2)\oplus\mathfrak{so}(8)_R$ on $\langle qqpp\rangle$ following \cite{Agmon:2019imm}, and we explicitly perform the superblock expansion for $p,q\leq9$. We then discuss the generalized free field theory (GFFT) that describes the $c_T\to\infty$ limit, which we use to compute average OPE coefficients of double-trace singlet long multiplets in $\langle qqpp\rangle$. Afterwards, we consider tree level corrections to $\langle 22pp\rangle$ for even $p$, which we use to derive the average anomalous dimensions of singlet long multiplets at orders $1/c_T$ and $1/c_T^{5/3}$ that correspond to tree level supergravity and $R^4$, respectively. Finally, we discuss higher orders in the large $c_T$ expansion of $\langle2222\rangle$, which will be our main focus in the rest of the paper.
\subsection{Block expansion of $\langle qqpp\rangle$}
\label{qqpp}
We consider half-BPS superconformal primaries $S_p$ in 3d $\mathcal{N}=8$ SCFTs that are scalars with $\Delta=\frac p2$ and transform in the $[00p0]$ of $\mathfrak{so}(8)_R$,\footnote{The convention we use in defining these multiplets is that the supercharges transform in the ${\bf 8}_v = [1000]$ irrep of $\mathfrak{so}(8)_R$.} where $p=1,2,\dots$. The first such interacting operator is $S_2$, which is the bottom component of the stress tensor multiplet. We can denote these operators as traceless symmetric tensors $S_{I_1\dots I_p}( x)$ of $\mathfrak{so}(8)_R$, where $I_i=1,\dots8$. We can avoid indices by introducing an auxiliary polarization vector $Y^I$ that is constrained to be null, $Y_i\cdot Y_i=0$, and then define
\es{S}{
S_p( x,Y)\equiv S_{I_1\dots I_p}Y^{I_1}\cdots Y^{I_p}\,.
}
Consider the four-point functions $\langle qqpp\rangle$ of two $S_q(x,Y)$'s and two $S_p(x,Y)$'s, where $q\leq p$. Conformal and $\mathfrak{so}(8)_R$ symmetry fixes these correlators to take the form
\es{4point}{
\langle S_q( x_1,Y_1)S_q( x_2,Y_2)S_p( x_3,Y_3)S_p( x_4,Y_4) \rangle=\frac{(Y_1\cdot Y_2)^q(Y_3\cdot Y_4)^p}{| x_{12}|^{q}| x_{34}|^{p}}\mathcal{G}_{qp}(U,V;\sigma,\tau)\,,
}
where we define
\es{uvsigmatauDefs}{
U \equiv \frac{{x}_{12}^2 {x}_{34}^2}{{x}_{13}^2 {x}_{24}^2} \,, \qquad
V \equiv \frac{{x}_{14}^2 {x}_{23}^2}{{x}_{13}^2 {x}_{24}^2} \,, \qquad
\sigma\equiv\frac{(Y_1\cdot Y_3)(Y_2\cdot Y_4)}{(Y_1\cdot Y_2)(Y_3\cdot Y_4)}\,,\qquad \tau\equiv\frac{(Y_1\cdot Y_4)(Y_2\cdot Y_3)}{(Y_1\cdot Y_2)(Y_3\cdot Y_4)} \,,
}
with $x_{ij}\equiv x_i-x_j$. Since \eqref{4point} is a degree $q$ polynomial in each of $Y_1,Y_2$ and a degree $p$ polynomial in each of $Y_3,Y_4$, the quantity $\mathcal{G}_{qp}(U,V;\sigma,\tau)$ is a polynomial in $\sigma$ and $\tau$ of degree $q$. We parametrize these polynomials in terms of eigenfunctions $Y_{[0\,m\,\,2n-2m\,0]}(\sigma, \tau)$ of the $\mathfrak{so}(8)_R$ quadratic Casimir for the irreps $[0\,m\, 2n-2m\,0]$ that appear in the tensor product $[00q0]\otimes[00q0]$, so that $n=0,1,\dots,q$ and $m=0,\dots,n$. The polynomials $Y_{[0\,m\, 2n-2m\,0]}(\sigma, \tau)$ can be computed explicitly \cite{Nirschl:2004pa}, and we give their explicit values for $q=2,\dots, 9$ in the attached \texttt{Mathematica} file. We then expand $\mathcal{G}_{qp}(U,V;\sigma,\tau)$ in this basis as
\es{Ybasis}{
\mathcal{G}_{qp}(U,V;\sigma,\tau)=\sum^{q}_{n=0}\sum_{m=0}^{n}Y_{[0\,m\, 2n-2m\,0]}(\sigma, \tau) \mathcal{G}_{qp}^{[0\,m\, 2n-2m\,0]}(U,V)\,,
}
and furthermore expand $\mathcal{G}_{qp}^{[0\,m\, 2n-2m\,0]}(U,V)$ in conformal blocks $G_{\Delta,\ell}(U,V)$ as
\es{blockExp}{
\mathcal{G}_{qp}^{[0\,m\, 2n-2m\,0]}(U,V)=\sum_{\Delta,\ell}\lambda_{qq{\mathcal O}_{\Delta,\ell,[0\,m\, 2n-2m\,0]}}\lambda_{pp{\mathcal O}_{\Delta,\ell,[0\,m\, 2n-2m\,0]}}G_{\Delta,\ell}(U,V)\,,
}
where ${\mathcal O}_{\Delta,\ell,[0\,m\, 2n-2m\,0]}$ are conformal primaries with scaling dimension $\Delta$ and spin $\ell$ in irrep $[0\,m\, 2n-2m\,0]$ that appear in $S_{q}\times S_{q}$ with OPE coefficient $\lambda_{qq{\mathcal O}_{\Delta,\ell,[0\,m\, 2n-2m\,0]}}$. The 3d conformal blocks were computed in various series expansions in \cite{Dolan:2011dv,Dolan:2003hv,Kos:2013tga}, which we review in our conventions in Appendix \ref{block3dApp}.
The correlator is further constrained by the superconformal Ward identities \cite{Dolan:2004mu}:
\es{ward}{
\left[z\partial_z - \frac12\alpha \partial_\alpha\right] \mathcal{G}_{qp}(z,\bar{z};\alpha, \bar{\alpha})|_{\alpha = \frac1z} =
\left[\bar{z}\partial_{\bar{z}} - \frac12{\bar\alpha} \partial_{\bar{\alpha}}\right] \mathcal{G}_{qp}(z,\bar{z};\alpha, \bar{\alpha})|_{\bar{\alpha}=\frac{1}{\bar{z}}} &= 0\,,
}
where $z,\bar z$ and $\alpha,\bar\alpha$ are written in terms of $U,V$ and $\sigma,\tau$, respectively, as
\es{UVtozzbar}{
U=z\bar z\,,\quad V=(1-z)(1-\bar z)\,,\qquad\qquad \sigma=\alpha\bar\alpha\,,\quad \tau=(1-\alpha)(1-\bar\alpha)\,.
}
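The map \eqref{UVtozzbar} is easily inverted: since $z\bar z=U$ and $z+\bar z=1+U-V$, the variables $z,\bar z$ are the two roots of $w^2-(1+U-V)w+U=0$, and the same quadratic applies to $\alpha,\bar\alpha$ in terms of $\sigma,\tau$. A minimal numerical sketch of this inversion (the function names are ours, not part of the ancillary files):

```python
import math

def to_UV(z, zb):
    """Cross ratios of eq. (UVtozzbar): U = z*zbar, V = (1-z)(1-zbar)."""
    return z * zb, (1 - z) * (1 - zb)

def from_UV(U, V):
    """Invert the map: z + zbar = 1 + U - V and z*zbar = U, so z, zbar
    are the roots of w^2 - (1 + U - V) w + U = 0 (real roots assumed)."""
    s = 1 + U - V
    disc = math.sqrt(s * s - 4 * U)
    return (s - disc) / 2, (s + disc) / 2

z, zb = 0.3, 0.7
U, V = to_UV(z, zb)              # U = 0.21, V = 0.21
z2, zb2 = from_UV(U, V)
assert abs(z2 - z) < 1e-12 and abs(zb2 - zb) < 1e-12
```

The identical inversion with $(U,V)\to(\sigma,\tau)$ recovers $\alpha,\bar\alpha$.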
We can satisfy these constraints by expanding $ \mathcal{G}_{qp}$ in superconformal blocks as
\es{SBDecomp}{
\mathcal{G}_{qp}(U,V;\sigma,\tau)=\sum_{\mathcal{M}\in S_{q}\times S_{q}}\lambda_{qq\mathcal{M}} \lambda_{pp\mathcal{M}} \mathfrak{G}_{\mathcal{M}}(U,V;\sigma,\tau)\,,
}
where $\mathfrak{G}_{\mathcal{M}}$ are superblocks for each supermultiplet $\mathcal{M}$ that appears in $S_{q}\times S_{q}$ (and $S_{p}\times S_{p}$) with OPE coefficients $\lambda_{qq\mathcal{M}}$ (and $\lambda_{pp\mathcal{M}}$). The multiplets that can appear in the OPE are \cite{Ferrara:2001uj,Agmon:2019imm}:
\es{opemultEq}{
&S_q\times S_q=\text{Id}+\sum_{n=1}^q{(B,+)}_{n,0}^{[0\,0\,2n\,0]}+\sum_{n=1}^q\sum_{m=2,4,\dots,n}{(B,2)}_{n,0}^{[0\,m\,2n-2m\,0]}\\
&+\sum_{\ell=0,2,\dots}\sum_{n=1}^{q-1}{(A,+)}_{n+1+\ell,\ell}^{[0\,0\,2n\,0]}+\sum_{n=1}^{q-1}\Big[\sum_{\ell=0,2,\dots}\sum_{m=2,4,\dots,n}{(A,2)}_{n+1+\ell,\ell}^{[0\,m\,2n-2m\,0]}+\sum_{\ell=1,3,\dots}\sum_{m=1,3,\dots,n}{(A,2)}_{n+1+\ell,\ell}^{[0\,m\,2n-2m\,0]}\Big]\\
&+\sum_{n=0}^{q-2}\Big[\sum_{\ell=0,2,\dots}\sum_{m=0,2,\dots,n}{(A,0)}_{\Delta>n+1+\ell,\ell}^{[0\,m\,2n-2m\,0]}+\sum_{\ell=1,3,\dots}\sum_{m=1,3,\dots,n}{(A,0)}_{\Delta>n+1+\ell,\ell}^{[0\,m\,2n-2m\,0]}\Big]\,,
}
where we denote superconformal multiplets other than the identity $\text{Id}$ by $X_{\Delta,\ell}^{[a_1\, a_2 \, a_3 \, a_4]}$, with $(\Delta,\ell)$ and $[a_1\, a_2 \, a_3 \, a_4]$ representing the $\mathfrak{so}(3, 2)$ and $\mathfrak{so}(8)_R$ quantum numbers of the superconformal primary, while $X$ denotes the type of shortening condition. The $(A,0)^{[0\,m\,2n-2m\,0]}_{\Delta> n+\ell+1,\ell}$ multiplets that appear here are unprotected. When $\Delta$ saturates the bound, we in general get semishort multiplets $(A,2)^{[0\,m\,2n-2m\,0]}_{ n+\ell+1,\ell}$ or $(A,+)^{[0\,0\,2n\,0]}_{ n+\ell+1,\ell}$, which are $\frac14$ or $\frac18$ BPS, respectively. The only exception is $(A,0)^{[0000]}_{\Delta> \ell+1,\ell}$, whose unitarity bound gives conserved currents that cannot appear in the interacting theories we consider. Finally, the $(B,2)^{[0\,m\,2n-2m\,0]}_{ n,0}$ and $(B,+)^{[0\,0\,2n\,0]}_{n,0}$ are short multiplets, where the former are $\frac14$ BPS, while the latter are the half-BPS multiplets whose bottom component we called $S_p$. The lowest such multiplet is always the stress tensor multiplet $(B,+)_{1,0}^{[0020]}$, whose OPE coefficient squared is fixed by the conformal Ward identity \cite{Osborn:1993cr} to be inversely proportional to the coefficient of the canonically normalized stress tensor two-point function:
\es{stress}{
\langle T_{\mu\nu}(\vec{x}) T_{\rho \sigma}(0) \rangle = \frac{c_T}{64} \left(P_{\mu\rho} P_{\nu \sigma} + P_{\nu \rho} P_{\mu \sigma} - P_{\mu\nu} P_{\rho\sigma} \right) \frac{1}{16 \pi^2 \vec{x}^2} \,, \qquad P_{\mu\nu} \equiv \eta_{\mu\nu} \nabla^2 - \partial_\mu \partial_\nu \,,
}
where $c_T$ is normalized so that $c_T^\text{free}=16$ for the free $\mathcal{N}=8$ theory of eight massless real scalars and Majorana fermions. In this normalization we get the precise relationship
\es{cTolam}{
\lambda_{pp{(B,+)}_{1,0}^{[0020]}}=\frac{8p}{{c_T}}\,.
}
We will be mostly interested in $\langle2222\rangle$, whose multiplets are summarized in Table \ref{opemult}. Note that we introduce simpler notation for these multiplets, e.g. $(B,+)^{[0040]}_{2,0}\equiv (B,+)$, since, apart from the stress tensor multiplet, only one multiplet of each type appears.
\begin{table}
\centering
\begin{tabular}{|c|c|r|c|c|}
\hline
Type & $(\Delta,\ell)$ & $\mathfrak{so}(8)_R$ irrep &spin $\ell$ & Name \\
\hline
$(B,+)$ & $(2,0)$ & ${\bf 294}_c = [0040]$& $0$ & $(B, +)$ \\
$(B,2)$ & $(2,0)$ & ${\bf 300} = [0200]$& $0$ & $(B, 2)$ \\
$(B,+)$ & $(1,0)$ & ${\bf 35}_c = [0020]$ & $0$ & Stress \\
$(A,+)$ & $(\ell+2,\ell)$ & ${\bf 35}_c = [0020]$ &even & $(A,+)_\ell$ \\
$(A,2)$ & $(\ell+2,\ell)$ & ${\bf 28} = [0100]$ & odd & $(A,2)_\ell$ \\
$(A,0)$ & $\Delta\ge \ell+1$ & ${\bf 1} = [0000]$ & even & $(A,0)_{\Delta,\ell}$\\
$\text{Id}$ & $(0,0)$ & ${\bf 1} = [0000]$ & even & $\text{Id}$\\
\hline
\end{tabular}
\caption{The possible superconformal multiplets in the ${\mathcal O}_\text{Stress} \times {\mathcal O}_\text{Stress}$ OPE\@. The $\mathfrak{so}(3, 2) \oplus \mathfrak{so}(8)_R$ quantum numbers are those of the superconformal primary in each multiplet.}
\label{opemult}
\end{table}
We then compare \eqref{SBDecomp} to \eqref{Ybasis} and \eqref{blockExp} to see that the superblocks are finite linear combinations of conformal blocks
\es{GExpansion}{
\mathfrak{G}_{\mathcal{M}}=\sum^{q}_{n=0}\sum_{m=0}^{n}Y_{[0\,m\, 2n-2m\,0]}(\sigma, \tau) \sum_{{\mathcal O}\in\mathcal{M}} \frac{\lambda_{qq{\mathcal O}_{\Delta,\ell,[0\,m\, 2n-2m\,0]}} \lambda_{pp{\mathcal O}_{\Delta,\ell,[0\,m\, 2n-2m\,0]}} }{ \lambda_{qq\mathcal{M}} \lambda_{pp\mathcal{M}} }G_{\Delta,\ell}(U,V)\,,
}
where ${\mathcal O}_{\Delta,\ell,[0\,m\, 2n-2m\,0]}$ are conformal primaries that appear in $\mathcal{M}$, which can be derived using the Racah-Speiser algorithm in \cite{Cordova:2016emh}. For $\langle qqpp\rangle$, we fixed all of the pairs of OPE coefficients $\lambda_{qq{\mathcal O}_{\Delta,\ell,[0\,m\, 2n-2m\,0]}}\lambda_{pp{\mathcal O}_{\Delta,\ell,[0\,m\, 2n-2m\,0]}}$ in terms of the single pair of OPE coefficients $\lambda_{qq\mathcal{M}} \lambda_{pp\mathcal{M}} $ in \eqref{SBDecomp} by applying the Ward identities to the small $z,\bar z$ expansion of the superblocks, where we used the small $z,\bar z$ expansion of the conformal blocks as given in \cite{Dolan:2003hv} and reviewed in Appendix \ref{block3dApp}. We give the results for the $s$-channel of $\langle qqpp\rangle$ for $q=2,\dots,9$ in the attached \texttt{Mathematica} file.
\subsection{Generalized free field theory at $c_T\to\infty$}
\label{strong}
We now use the superblocks computed in the previous section to perform the superblock expansion for the GFFT that describes the $c_T\to\infty$ limit of $\langle qqpp\rangle$ and $\langle pppp\rangle$ for $k=1,2$ ABJ(M) theory. Recall from the Introduction that both $k=1$ and $k=2$ ABJ(M) have the same half-BPS correlators at this order, except that all correlators involving $S_p$ for odd $p$ vanish in the $k=2$ theory. In particular, both theories are described by a GFFT where the operators $S_p$ are treated as generalized free fields with two-point functions $\langle S_p(x_1,Y_1) S_q(x_2,Y_2)\rangle=\delta_{pq}\frac{(Y_1\cdot Y_2)^p}{|x_{12}|^{p}}$. We can then compute $\langle qqpp\rangle$ (for $q\leq p$) using Wick contractions to get
\es{Ninf}{
{\mathcal G}_{qp}^{(0)}=1+\delta_{qp}\left(U^{\frac p2}\sigma^p+\frac{U^{\frac p2}}{V^{\frac p2}}\tau^p\right)\,,
}
which can be expanded in the superblocks of the previous section to extract OPE coefficients. If several operators have the same quantum numbers at this order, then we can only compute the average of their OPE coefficients. Such a degeneracy occurs for the double-trace long multiplet $(A,0)^{[0000]}_{\Delta,\ell}$ operators $S_p\partial_{\mu_1}\dots\partial_{\mu_\ell}(\partial^2)^nS_p$ with spin $\ell$ and twist $ t\equiv \Delta-\ell=p+2n\geq2$. For a given $t\geq2$, there are $t-1$ such degenerate operators, corresponding to the different ways of choosing $p$ and $n$ with $p+2n=t$, which we label using the degeneracy label $I$. We denote the GFFT OPE coefficient of these operators in the $S_p\times S_p$ OPE by $\lambda^{(0)}_{p,t,\ell,I}$, and note that only twists $t=p,p+2,\dots$ appear in this OPE at leading order. By expanding in superblocks for $p=2,\dots, 9$ we found the general formula
\es{ppppLam}{
\langle (\lambda^{(0)}_{p,t,\ell})^2\rangle&=\frac{45 \sqrt{\pi } (2 \ell+1) 4^{p-2} (p-1) p \Gamma \left(\frac{t-1}{2}\right) \Gamma \left(\ell+\frac{t}{2}\right) \Gamma (\ell+t+3) \Gamma \left( \frac p2+\frac t2+2\right)
\Gamma \left( \ell+\frac p2+\frac t2+\frac52\right)}{\Gamma (p+2) \Gamma (p+4) \Gamma \left(\frac{t+6}{2}\right) \Gamma \left(\ell+t+\frac{5}{2}\right) \Gamma \left(
\ell+\frac t2+\frac72\right) \Gamma \left(-\frac p2+\frac t2+1\right) \Gamma \left( \ell-\frac p2+\frac t2+\frac 32\right)}\,,
}
which reproduces the $p=2,3$ values given in \cite{Chester:2014fya,Agmon:2019imm}. Note that the average $\langle \lambda^{(0)}_{q,t,\ell} \lambda^{(0)}_{p,t,\ell} \rangle$ trivially vanishes for $q\neq p$, because ${\mathcal G}^{(0)}_{qp}=1$ in the GFFT.
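As a cross-check of \eqref{ppppLam}: at $p=t=2$, $\ell=0$ it evaluates to $32/35$, while the $\Gamma\left(-\frac p2+\frac t2+1\right)$ in the denominator makes it vanish for twists $t<p$, consistent with only $t=p,p+2,\dots$ appearing at leading order. A numerical transcription (a sketch using \texttt{mpmath}; the function name is ours):

```python
from mpmath import mp, mpf, gamma, rgamma, sqrt, pi

mp.dps = 30

def gfft_ope_sq(p, t, ell):
    """Average GFFT OPE coefficient squared <(lambda^(0)_{p,t,ell})^2>,
    transcribed from eq. (ppppLam).  rgamma = 1/Gamma is entire, so the
    result vanishes automatically below the threshold t = p."""
    p, t, ell = mpf(p), mpf(t), mpf(ell)
    num = (45 * sqrt(pi) * (2 * ell + 1) * 4**(p - 2) * (p - 1) * p
           * gamma((t - 1) / 2) * gamma(ell + t / 2) * gamma(ell + t + 3)
           * gamma(p / 2 + t / 2 + 2) * gamma(ell + p / 2 + t / 2 + mpf(5) / 2))
    den = (gamma(p + 2) * gamma(p + 4) * gamma((t + 6) / 2)
           * gamma(ell + t + mpf(5) / 2) * gamma(ell + t / 2 + mpf(7) / 2))
    return (num / den * rgamma(-p / 2 + t / 2 + 1)
            * rgamma(ell - p / 2 + t / 2 + mpf(3) / 2))

assert abs(gfft_ope_sq(2, 2, 0) - mpf(32) / 35) < mpf(10)**-20
assert abs(gfft_ope_sq(4, 2, 0)) < mpf(10)**-20   # t < p: below threshold
assert gfft_ope_sq(2, 4, 2) > 0                   # generic allowed twist/spin
```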
\subsection{Tree level $\langle 22pp\rangle$}
\label{22pptree}
We now consider the $1/c_T$ and $1/c_T^{5/3}$ terms in $\langle22pp\rangle$, which correspond to tree level supergravity $R$ and $R^4$ in the bulk description, respectively, and whose CFT data is needed to compute loops with these vertices in the following section. We expand ${\mathcal G}_{2p}$ (which we will denote as ${\mathcal G}_{p}$) in \eqref{4point}, as well as the long multiplet CFT data, to this order as
\es{Hlarge}{
{\mathcal G}_{p}(U,V;\sigma,\tau)&={\mathcal G}^{(0)}_{p}+c_T^{-1}{\mathcal G}^R_{p}+c_T^{-\frac53}{\mathcal G}^{R^4}_{p}+\dots\,,\\
\Delta_{t,\ell,I}&=t+\ell+c_T^{-1}\gamma^{R}_{t,\ell,I}+c_T^{-\frac53}\gamma^{R^4}_{t,\ell,I}+\dots\,,\\
(\lambda_{p,t,\ell,I})^2&=(\lambda^{(0)}_{p,t,\ell,I})^2+c_T^{-1}(\lambda^{R}_{p,t,\ell,I})^2+c_T^{-\frac53}(\lambda^{R^4}_{p,t,\ell,I})^2+\dots\,.
}
A similar expansion exists for the OPE coefficients of the protected operators, although of course their scaling dimensions are fixed. Using these expansions, we can write the superblock expansion for ${\mathcal G}_{p}$ in \eqref{SBDecomp} at large $c_T$ as
\es{SGexp}{
&{\mathcal G}_{p}^R(U,V)={128p} \mathfrak{G}_\text{Stress}(U,V;\sigma,\tau) +\hspace{-.2in}\sum_{\mathcal{M}_{\Delta,\ell}\in\{(B,+),(B,2),(A,2)_\ell,(A,+)_\ell\}}\hspace{-.2in}\lambda^R_{22\mathcal{M}} \lambda^R_{pp\mathcal{M}} \mathfrak{G}_\mathcal{M}(U,V;\sigma,\tau) \\
&\quad+\sum_{ t,\ell,I} \left[\lambda^{R}_{2,t,\ell,I} \lambda^{R}_{p,t,\ell,I}+\lambda^{(0)}_{2,t,\ell,I} \lambda^{(0)}_{p,t,\ell,I}\gamma_{t,\ell,I}^R(\partial_t^\text{no-log}+\frac12\log U)\right] \mathfrak{G}_{t+\ell,\ell}(U,V;\sigma,\tau) \,.
}
Here, the first line includes the protected multiplets, and the OPE coefficient of the stress tensor multiplet was written explicitly using \eqref{cTolam}. In the second line we denote the singlet long multiplet superblock by $\mathfrak{G}_{\Delta,\ell}$, and $\partial_t^\text{no-log} \mathfrak{G}_{t+\ell,\ell}(U,V;\sigma,\tau)$ denotes the part of the $t$-derivative that does not include a $\log U$, since the $\log U$ term has already been written separately. The expansion of ${\mathcal G}_{p}^{R^4}$ takes the same form with $R\to R^4$, except that the stress tensor block does not appear.
While the superblock expansion is best expressed in position space, in the large $c_T$ expansion it is also useful to consider the Mellin transform $M_p(s,t;\sigma,\tau)$ of the connected correlator $\mathcal{G}^\text{con}_{p}(U,V;\sigma,\tau)\equiv\mathcal{G}_{p}(U,V;\sigma,\tau)-\mathcal{G}^{(0)}_{p}(U,V;\sigma,\tau)$, which is defined as \cite{Zhou:2017zaw}:
\es{mellinH}{
\mathcal{G}^\text{con}_{p}(U,V;\sigma,\tau)&=\int\frac{ds\, dt}{(4\pi i)^2} U^{\frac s2}V^{\frac t2-\frac{p}{4}-\frac12}M_p(s,t;\sigma,\tau) \\
&\qquad\qquad\times\Gamma\left[\frac p2-\frac s2\right]\Gamma\left[1-\frac s2\right]\Gamma^2\left[\frac p4+\frac12-\frac t2\right]\Gamma^2\left[\frac p4+\frac12-\frac {{u}}{2}\right],\\
}
where $u = p+2 - s - t$ and the integration contours are defined to include all poles of the Gamma functions on one side of the contour. The Mellin amplitude is defined such that a bulk contact Witten diagram coming from a vertex with $2m$ derivatives gives rise to a polynomial in $s,t$ of degree $m$, and similarly an exchange Witten diagram corresponds to a Mellin amplitude with poles at the twists of each exchanged operator. The Mellin amplitude must also obey the crossing relations
\es{crossM}{
M_p(s,t;\sigma,\tau) = M_p(s,u;\tau, \sigma) \,, \qquad M_2(s,t;\sigma,\tau) = \tau^2M_2(t,s;\sigma/\tau,1/\tau)\,,
}
which follow from interchanging the first and second operators, and, for $p = 2$, the first and third. Lastly, $M_p(s,t;\sigma,\tau)$ must satisfy the Ward identities \eqref{ward}, which can be implemented in Mellin space as shown in \cite{Zhou:2017zaw}. Using all these constraints, $M_p(s, t)$ can be expanded similarly to the position space expression \eqref{Hlarge} to get
\es{Mplarge}{
M_{p}(s,t)=c_T^{-1}M^R_{p}+c_T^{-\frac53}B^{R^4}(p)M^{R^4}_{p}+\dots\,,
}
where $M^{R^4}_{p}$ is a complicated degree 4 polynomial in $s,t$ whose explicit form we give in the attached \texttt{Mathematica} notebook, while the tree level supergravity amplitude $M^R_{p}$ was written in \cite{Alday:2020dtb} as an infinite sum over the exchanged supergravity multiplet and its descendants:
\es{Ms}{
M^R_p=& \sum_{m=0}^\infty\Bigg[ \frac{2^{2 m+5} p ((p-2 t+2) (p+2 (s+2) \sigma -2 (s+t-1))-2 (s+2) \tau (p-2 (s+t-1)))}{\pi ^2 (2 m+3) (2 m-s+1) \Gamma
\left(\frac{1}{2}-m\right) \Gamma (2 m+2) \Gamma \left(\frac{1}{2} (-2 m+p-1)\right)} \\
&+\frac{16 p \tau \Gamma \left(\frac{p}{2}+1\right) ((p+2 t+2) (p (2 \sigma -1)+2 (-\sigma s+s+t-1))+2 \tau (p-s) (p-2
(s+t-1)))}{\pi \Gamma \left(\frac{1}{2}-m\right)^2 \Gamma (m+1) \Gamma \left(\frac{p-1}{2}\right) (4 m+p-2 t) \Gamma
\left(m+\frac{p+3}{2}\right)}\\
&+\frac{16 p \sigma \Gamma \left(\frac{p}{2}+1\right) ((p+2 u+2) (p (2 \tau -1)+2 (-\tau s+s+u-1))+2 \sigma (p-s) (p-2
(s+u-1)))}{\pi \Gamma \left(\frac{1}{2}-m\right)^2 \Gamma (m+1) \Gamma \left(\frac{p-1}{2}\right) (4 m+p-2 u) \Gamma
\left(m+\frac{p+3}{2}\right)}\Bigg]\,,
}
where the overall coefficient was fixed by extracting the stress tensor OPE coefficient and comparing to \eqref{cTolam}. Note that this expression is independent of $k$. For even $p$, the sum can be performed to get an expression in terms of Gamma functions, which can then be written as a finite sum of $\bar{D}_{r_1,r_2,r_3,r_4}(U,V)$ functions using the Mellin space definition in Appendix \ref{block3dApp}, as was pointed out in a similar case in \cite{Binder:2021cif}. For instance, for $p=2$ we first resum to get
\es{M2R}{
M^R_2=&\frac{32 \tau \left(\frac{4 \Gamma \left(\frac{1}{2}-\frac{t}{2}\right)}{\Gamma \left(1-\frac{t}{2}\right)}-\sqrt{\pi } (t+4)\right) ((t+2) (-\sigma s+s+2 \sigma +t-2)+(s-2) \tau (s+t-2))}{\pi
^{5/2} t (t+2)}\\
&+\frac{32 \sigma \left(\sqrt{\pi } (s+t-8)+\frac{4 \Gamma \left(\frac{1}{2} (s+t-3)\right)}{\Gamma \left(\frac{1}{2} (s+t-2)\right)}\right) ((t-2) (-\sigma s+s+2 \sigma +t-6)+(s-2) \tau
(s+t-6))}{\pi ^{5/2} (s+t-6) (s+t-4)}\\
& +\frac{32 \left(\frac{4 \Gamma \left(\frac{1}{2}-\frac{s}{2}\right)}{\Gamma \left(1-\frac{s}{2}\right)}-\sqrt{\pi } (s+4)\right) ((t-2) (-(s+2) \sigma +s+t-2)+(s+2) \tau (s+t-2))}{\pi ^{5/2} s (s+2)}
\,.
}
Note that the Gamma functions that appear in the denominator exactly cancel one of the Gamma functions in the Mellin transform \eqref{mellinH}, while the poles just shift the arguments of the Gamma functions. We can thus use \eqref{dbarM} to write this expression in terms of a finite number of $\bar{D}(U,V)$ functions. For instance, the contribution to the $[0040]$ irrep in \eqref{Ybasis} is
\es{p2A22R}{
A^R_{[0040]}(U,V)&=-\frac{32 U^3 }{3 \pi ^{5/2} V^2} \Big[ 2 \sqrt{\pi } V^2
\bar{D}_{1,3,-1,1}(U,V)-2 V^2 \bar{D}_{1,\frac{5}{2},-1,\frac{1}{2}}(U,V)\\
&+\sqrt{\pi } V^2 \bar{D}_{1,3,0,2}(U,V)
+\sqrt{\pi } V^2
\bar{D}_{2,3,-1,2}(U,V)-2 V^{\frac32} \bar{D}_{1,\frac{5}{2},\frac{1}{2},-1}(U,V)\\
&+3
\sqrt{\pi } \bar{D}_{3,1,-1,1}(U,V)-\sqrt{\pi } \bar{D}_{4,1,-1,2}(U,V)\Big]\,.
}
While the appearance of half-integer and negative arguments might seem nonstandard, these $\bar D$ functions in fact have standard expansions to all orders in $U$ and $V$, including a $\log U$ term, as shown in \cite{Dolan:2000ut} and reviewed in Appendix \ref{block3dApp}. We can then easily expand this and the other channels in superblocks to get the average anomalous dimensions $\langle \lambda^{(0)}_{2,t,\ell} \lambda^{(0)}_{p,t,\ell}\gamma_{t,\ell}\rangle\equiv\sum_{ I} \lambda^{(0)}_{2,t,\ell,I} \lambda^{(0)}_{p,t,\ell,I}\gamma_{t,\ell,I}$ weighted by OPE coefficients, which for $p=2$ gives:
\es{averageSG}{
\langle \lambda^{(0)}_{2,t,\ell} \lambda^{(0)}_{2,t,\ell}\gamma^R_{t,\ell}\rangle=&-\frac{64 }{\pi ^2} \frac{\sqrt{\pi } (2 \ell+1) \Gamma \left(\frac{t-1}{2}\right) \Gamma \left(\ell+\frac{t}{2}\right) \Gamma (\ell+t+3)}{8 \Gamma \left(\frac{t}{2}\right) \Gamma
\left(\ell+\frac{t}{2}+\frac{1}{2}\right) \Gamma \left(\ell+t+\frac{5}{2}\right)}\\
&\times\left(\left(2 \ell^2+2 \ell t+6 \ell+t^2+5 t+4\right) \left(\psi({\ell+t+3})-\psi({\ell+1})\right)+(-t-2) (2 \ell+t+3)\right)\,,
}
which matches the values originally computed in \cite{Chester:2018lbz}. We can similarly compute $\langle \lambda^{(0)}_{2,t,\ell} \lambda^{(0)}_{p,t,\ell}\gamma^R_{t,\ell}\rangle$ for higher even $p$, and we show the results for $p\leq36$ in the attached \texttt{Mathematica} notebook. Since $M_p^R$ does not depend on $k$, these average anomalous dimensions are the same for $k=1,2$.
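As a spot check of \eqref{averageSG}: at the lowest twist $t=2$, $\ell=0$, where there is no degeneracy, the formula gives $\langle \lambda^{(0)}_{2,2,0}\lambda^{(0)}_{2,2,0}\gamma^R_{2,0}\rangle=-1024/\pi^2$, which combined with the GFFT value $(\lambda^{(0)}_{2,2,0})^2=32/35$ from \eqref{ppppLam} corresponds to $\gamma^R_{2,0}=-1120/\pi^2$. A numerical transcription (a sketch in \texttt{mpmath}; the function name is ours):

```python
from mpmath import mp, mpf, gamma, digamma, sqrt, pi

mp.dps = 30

def avg_gamma_R(t, ell):
    """<(lambda^(0)_{2,t,ell})^2 gamma^R_{t,ell}> transcribed from
    eq. (averageSG) for the <2222> correlator."""
    t, ell = mpf(t), mpf(ell)
    pref = (-mpf(64) / pi**2 * sqrt(pi) * (2 * ell + 1)
            * gamma((t - 1) / 2) * gamma(ell + t / 2) * gamma(ell + t + 3)
            / (8 * gamma(t / 2) * gamma(ell + t / 2 + mpf(1) / 2)
               * gamma(ell + t + mpf(5) / 2)))
    brack = ((2 * ell**2 + 2 * ell * t + 6 * ell + t**2 + 5 * t + 4)
             * (digamma(ell + t + 3) - digamma(ell + 1))
             + (-t - 2) * (2 * ell + t + 3))
    return pref * brack

# Lowest twist, spin 0: single operator, so gamma^R = <lam^2 gamma>/lam^2.
val = avg_gamma_R(2, 0)
assert abs(val + 1024 / pi**2) < mpf(10)**-20
assert abs(val / (mpf(32) / 35) + 1120 / pi**2) < mpf(10)**-20
assert avg_gamma_R(4, 2) < 0   # supergravity anomalous dimensions are negative
```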
For the $R^4$ amplitude, we need to fix the overall coefficient $B^{R^4}(p)$, which will depend nontrivially on $k$ unlike $M_p^R$. For $p=2$ this was fixed using localization in \cite{Chester:2018aca}, and this derivation could in principle be extended to $p>2$ for $k=1$ using the localization results from \cite{Gaiotto:2020vqj}. For $k=2$ and $p>2$ however, which is our primary interest, we can only fix the coefficient by comparing to the known 11d M-theory S-matrix term in the flat space limit, as was done for the 6d $(2,0)$ theory in \cite{Alday:2020tgi}. In particular, the 11d M-theory S-matrix ${\mathcal A}$ can be expanded at small Planck length $\ell_{11}$ as
\es{A}{
\mathcal{A}(s,t)=\ell_{11}^{9}\mathcal{A}_{R}+\ell^{15}_{11}\mathcal{A}_{R^4}+\ell^{18}_{11}\mathcal{A}_{R|R}+\ell^{21}_{11}\mathcal{A}_{D^6R^4}+\ell^{23}_{11}\mathcal{A}_{D^8R^4}+\ell_{11}^{24}\mathcal{A}_{R|R^4}+\dots\,,
}
where $s,t,u$ are 11d Mandelstam variables. The lowest few terms ${\mathcal A}_R$, ${\mathcal A}_{R^4}$, and ${\mathcal A}_{D^6R^4}$ are protected, and so can be computed from Type IIA string theory by compactifying M-theory on a circle \cite{Green:1997as, Russo:1997mk, Green:2005ba}\footnote{${\mathcal A}_{D^4R^4}$ can also be computed in this way, but it vanishes and so we did not write it. Also, the 1-loop supergravity term ${\mathcal A}_{R|R}$ was computed in \cite{Russo:1997mk,Green:1997as}, while ${\mathcal A}_{R|R^4}$ and ${\mathcal A}_{R^4|R^4}$ were computed in \cite{Alday:2020tgi}.} to get
\es{SGtoR4}{
\frac{ \mathcal{A}_{R^4}}{\mathcal{A}_{R}}=\frac{stu}{3\cdot 2^7}\,,\qquad \frac{ \mathcal{A}_{D^6R^4}}{\mathcal{A}_{R}}=\frac{(stu)^2}{15\cdot 2^{15}}\,.
}
The small $\ell_{11}$ expansion in 11d maps to the large $N$ expansion in the CFT according to the dictionary
\cite{Aharony:2008ug,Aharony:2008gk}:
\es{cPlanck}{
\frac{L^6}{\ell_{11}^6}=\left(\frac{3\pi c_T k}{2^{11}}\right)^{\frac23}+O(c_T^0) \,.
}
The flat space limit formula \cite{Penedones:2010ue,Chester:2018aca} then relates a Mellin amplitude $M^a_p(s,t)$ of large $s,t$ degree $a$ to the 11d amplitude defined in \eqref{A} as
\es{flat}{
c_T^{\frac{2(1-a)}{9}}\frac{\pi ^{5/2} 2^{-a-p-3} \Gamma (p-1)}{\Gamma \left(\frac{1}{2} (2 a+p-1)\right)}\lim_{s,t\to\infty} \frac{s t (s+t)M^a_p(s,t)}{(t (-\sigma s+s+t)+s \tau (s+t))^2}= \ell_{11}^{2a-2}\frac{{\mathcal{A}_{2a+7}}}{{\mathcal A}_R}\,,
}
where $\mathcal{A}_{2a+7}$ is a term in the amplitude with length dimension $(2a+7)$, and $\ell_{11}$ is the 11d Planck length. For instance, $\mathcal{A}_{15}\equiv\mathcal{A}_{R^4}$ has length dimension 15 in \eqref{A} and corresponds to $M_p^4\equiv M_p^{R^4}$. The 11d amplitude of course is the same for all $p$, and the ratio of $\mathcal{A}_{R^4}/\mathcal{A}_R$ was given in \eqref{SGtoR4}. Using this 11d amplitude and the flat space limit we find
\es{BR4}{
B^{R^4}(p)=\frac{32\cdot{2^{\frac13}} (p-1) (p+1) (p+3) (p+5)}{3\ 3^{2/3} \pi ^{8/3} k^{2/3} \Gamma \left(\frac{p}{2}\right)}\,,
}
which matches the $p=2$ result computed from localization in \cite{Chester:2018aca}. Note that the simple $k$ dependence comes from the AdS/CFT dictionary \eqref{cPlanck}. Since $M_p^{R^4}$ is just a polynomial Mellin amplitude for every $p$, we can convert it to a finite sum of $\bar{D}(U,V)$ functions to get
\es{p2A22R4}{
A^{R^4}_{[0040]}(U,V)&=\
\frac{64 \cdot{2^{\frac13}} \left(p^2-1\right) U^{p/2}} { 3^{8/3} \pi ^{8/3} k^{2/3} \Gamma \left(\frac{p}{2}\right)}
\Big[6 (p-2) p \bar{D}_{\frac{p}{2},\frac{p}{2},1,1}(U,V)+4 (p-3) (p-2) (p+2)
\bar{D}_{\frac{p}{2},\frac{p}{2},2,2}(U,V)\\
&+192 \bar{D}_{\frac{p}{2},\frac{p}{2},3,3}(U,V)-312
\bar{D}_{\frac{p}{2},\frac{p}{2},4,4}(U,V)+
p (p-1) ((p-1) p-26)
\bar{D}_{\frac{p}{2},\frac{p}{2},3,3}(U,V)\\
&+4 p(p (p+2)-29) \bar{D}_{\frac{p}{2},\frac{p}{2},4,4}(U,V)+4p (p+8)
\bar{D}_{\frac{p}{2},\frac{p}{2},5,5}(U,V)+60 \bar{D}_{\frac{p}{2},\frac{p}{2},5,5}(U,V)
\Big]\,,
}
and similarly for the other channels. We can then expand in superblocks to get the even $p$ average anomalous dimensions
\es{averageR4}{
\langle \lambda^{(0)}_{2,t,\ell} \lambda^{(0)}_{p,t,\ell}\gamma^{R^4}_{t,\ell}\rangle=&
\delta_{\ell,0}(-1)^{\frac p2} \frac{ (t+1) 2^{p+t+\frac{25}{3}} \Gamma \left(\frac{t}{2}+2\right) \Gamma \left(\frac{t}{2}+4\right) \Gamma \left(\frac{1}{2} (p+t+5)\right)}{ 3^{5/3} \pi ^{8/3}
k^{2/3} \Gamma (p-1) \Gamma \left(t+\frac{5}{2}\right) \Gamma \left(\frac{1}{2} (-p+t+2)\right)}
\,,
}
which are only nonzero for zero spin, and for $p=2$ match the results in \cite{Chester:2018aca}.
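To illustrate \eqref{averageR4}: the $\delta_{\ell,0}$ projects onto spin zero, the $\Gamma\left(\frac{-p+t+2}{2}\right)$ in the denominator kills twists below the double-trace threshold, and the overall sign alternates as $(-1)^{p/2}$. A numerical transcription (a sketch in \texttt{mpmath}; the function name is ours, and only even $p$ appear for $k=2$):

```python
from mpmath import mp, mpf, gamma, rgamma, pi

mp.dps = 30

def avg_gamma_R4(p, t, ell, k=2):
    """<lambda^(0)_{2,t,ell} lambda^(0)_{p,t,ell} gamma^{R^4}_{t,ell}>,
    transcribed from eq. (averageR4); nonzero only for spin ell = 0."""
    if ell != 0:
        return mpf(0)
    sign = (-1)**(p // 2)        # (-1)^{p/2}; p is even for k = 2 ABJ(M)
    p, t, k = mpf(p), mpf(t), mpf(k)
    num = (sign * (t + 1) * 2**(p + t + mpf(25) / 3) * gamma(t / 2 + 2)
           * gamma(t / 2 + 4) * gamma((p + t + 5) / 2))
    den = (3**(mpf(5) / 3) * pi**(mpf(8) / 3) * k**(mpf(2) / 3)
           * gamma(p - 1) * gamma(t + mpf(5) / 2))
    # rgamma = 1/Gamma is entire, so twists t <= p - 2 vanish automatically
    return num / den * rgamma((-p + t + 2) / 2)

assert avg_gamma_R4(2, 2, 0) < 0                  # p = 2: negative correction
assert avg_gamma_R4(2, 2, 2) == 0                 # only spin 0 is shifted
assert abs(avg_gamma_R4(4, 2, 0)) < mpf(10)**-20  # t < p vanishes
assert avg_gamma_R4(4, 4, 0) > 0                  # sign alternates as (-1)^{p/2}
```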
\subsection{Large $c_T$ expansion of $\langle2222\rangle$}
\label{2222largec}
Finally, we restrict to the stress tensor correlator $\langle2222\rangle$, which is our primary interest. For simplicity, we will drop the $p=2$ subscript from all further expressions. The Mellin amplitude $M(s,t)$ is fixed by the analytic structure, growth at infinity, and crossing symmetry to take the form \eqref{M2222} given in the Introduction, where the coefficient of each $c_T^{-b}$ must include all allowed Mellin amplitudes of large $s,t$ degree $\frac92 b-\frac72$ or less. These can include the polynomial Mellin amplitudes $M^a$ given in \cite{Chester:2018aca}, which we also include in the attached \texttt{Mathematica} file. For the amplitudes we consider above, there is one allowed polynomial Mellin amplitude at each degree $a$, corresponding to a contact Witten diagram with $2a$ derivatives. These contact diagrams only contribute to a finite number of spins that grows with the degree \cite{Heemskerk:2009pn}. For the multiplets in $\langle2222\rangle$, the contribution from each $M^a$ in \eqref{M2222} is summarized in Table 4 of \cite{Chester:2018aca}, which we repeat here in Table \ref{resultList}. The other Mellin amplitudes shown in \eqref{M2222} include the tree level supergravity term $M^R$ discussed in the previous section, which includes poles for the single trace supergravity multiplet and its descendants, as well as the 1-loop Mellin amplitudes $M^{R|R}$ and $M^{{R^4}|R}$ of degrees $5.5$ and $8.5$, respectively, that we will discuss further in the following section.
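The degree counting here can be summarized compactly: a degree-$a$ Mellin amplitude corresponds via the flat space limit to an 11d term of length dimension $2a+7$ and enters at order $c_T^{-b}$ with $b=(2a+7)/9$, equivalently a maximal large $s,t$ degree of $\frac92 b-\frac72$ at each order. A quick consistency sketch of this bookkeeping against the powers of $\ell_{11}$ in \eqref{A} (exact rationals via \texttt{fractions}; helper names are ours):

```python
from fractions import Fraction as F

def degree_from_order(b):
    """Max large s,t Mellin degree a = (9b - 7)/2 at order c_T^{-b}."""
    return (F(9) * b - 7) / 2

def ell11_power_from_order(b):
    """Power of ell_11 in the 11d amplitude (eq. A): 9b, since the
    degree-a term has length dimension 2a + 7 = 9b."""
    return 9 * b

# (term, c_T power b, ell_11 power in eq. (A))
terms = [("R",      F(1),      9),
         ("R^4",    F(5, 3),  15),
         ("R|R",    F(2),     18),
         ("D^6R^4", F(7, 3),  21),
         ("D^8R^4", F(23, 9), 23),
         ("R|R^4",  F(8, 3),  24)]

for name, b, power in terms:
    assert ell11_power_from_order(b) == power

assert degree_from_order(F(5, 3)) == 4          # R^4 contact term
assert degree_from_order(F(2)) == F(11, 2)      # M^{R|R} has degree 5.5
assert degree_from_order(F(8, 3)) == F(17, 2)   # M^{R^4|R} has degree 8.5
```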
\begin{table}
\centering
\begin {tabular} {| c || c | c | c | c | }
\hline
{CFT data:}&$M^{4}$ &$M^{6}$ & $M^{7}$ &$M^{8}$ \\
\hline
\TBstrut $\lambda^2_{(B,+)}$& $ \frac{256}{35} $ & $-\frac{59392}{693}$ & $-\frac{477184}{429}$ & $\frac{4022091776}{4448925}$ \\
\hline
\TBstrut $\lambda^2_{(B,2)}$& $ \frac{256}{7} $& $-\frac{296960}{693}$ & $-\frac{2385920}{429}$ & $\frac{4022091776}{889785}$ \\
\hline
\TBstrut $\lambda^2_{(A,+)_0}$& 0& $\frac{16384}{1485}$ & $\frac{950272}{6435}$ & $-\frac{131396796416}{467137125}$ \\
\hline
\TBstrut $\lambda^2_{(A,+)_2}$& 0& $0$ & $0$ & $\frac{67108864}{557375}$ \\
\hline
\TBstrut $\lambda^2_{(A,2)_1}$& 0& $\frac{131072}{1155}$ & $\frac{21889024}{15015}$ & $-\frac{3848847491072}{1089986625}$ \\
\hline
\TBstrut $\lambda^2_{(A,2)_3}$& 0& $0$ & $0$ & $\frac{268435456}{121275}$ \\
\hline
\TBstrut $\gamma_{2,0}$& -192& $\frac{15360}{11}$ & $\frac{192000}{11}$ & $-\frac{18059264}{1521}$ \\
\hline
\TBstrut $\gamma_{2,2}$& 0& $-1536$ & $-18432$ & $\frac{509591552}{12675}$ \\
\hline
\TBstrut $\gamma_{2,4}$& 0& $0$& $0$ & $-32768$ \\
\hline
\end{tabular}
\caption{Contributions from large $s,t$ degree $a$ contact Mellin amplitudes $M^{a}(s,t)$ to the OPE coefficients squared $\lambda^2_{22{\mathcal M}}$ of some protected multiplets $(B,+)$, $(B,2)$, $(A,+)_\ell$ for even $\ell$, and $(A,2)_\ell$ for odd $\ell$, as well as to the anomalous dimensions $\gamma_{t,\ell}$ for even $\ell$ of the lowest twist $t=2$ unprotected multiplet $(A,0)_{t+\ell,\ell}$. Adapted from \cite{Chester:2018aca} with typos fixed.
}\label{resultList}
\end{table}
The coefficient $B^{R^4}_4\equiv B^{R^4}(2)$ was fixed in the previous section. The three $B^{D^6R^4}$ coefficients were fixed in \cite{Binder:2018yvd} from the two localization constraints and the flat space limit \eqref{flat} to get
\es{D6R4answer}{
B^{D^6R^4}_4=-\frac{1352960\ 6^{2/3}}{9 \pi ^{10/3}k^{\frac43}}\,,\qquad B^{D^6R^4}_6=-\frac{220528\ 6^{2/3}}{\pi ^{10/3}k^{\frac43}}\,,\qquad B^{D^6R^4}_7=\frac{16016\ 6^{2/3}}{\pi ^{10/3}k^{\frac43}}\,.
}
There are not enough constraints to fully fix the other tree amplitudes, and the 1-loop amplitudes will be considered in the next section, so we will only extract CFT data up to order $c_T^{-\frac73}$ and leave the $R|R$ term undetermined for now. The short multiplet OPE coefficients $\lambda^2_{(B,2)}$ and $\lambda^2_{(B,+)}$ were in fact computed to all orders in $1/c_T$ using localization in \cite{Agmon:2017xes}, and take the form for $k=2$:
\es{B2Bp}{
\lambda^2_{(B,2)}&=\frac{32}{3}-\frac{1024 \left(4 \pi ^2-15\right)}{9 \pi ^2
c_T}+\frac{20480 \left(\frac{2}{3}\right)^{\frac23}}{\pi ^{8/3} c_T^{5/3}}+\frac{16384
\left(2 \pi ^2-25\right)}{9 \pi ^4 c_T^2}-\frac{327680 ({\frac{2}{3}})^{\frac13}}{3
\pi ^{\frac{10}{3}} c_T^{7/3}}+\frac{7536640 \left(\frac{2}{3}\right)^{\frac23}}{9 \pi
^{\frac{14}{3}} c_T^{8/3}}+O(c_T^{-3})\,,\\
\lambda^2_{(B,+)}&=\frac{16}{3}+\frac{1024 \left( \pi ^2+3\right)}{9 \pi ^2
c_T}+\frac{4096 \left(\frac{2}{3}\right)^{2/3}}{\pi ^{8/3} c_T^{5/3}}+\frac{16384
\left(2 \pi ^2-25\right)}{45 \pi ^4 c_T^2}-\frac{65536
(\frac{2}{3})^{\frac13}}{3 \pi ^{\frac{10}{3}} c_T^{7/3}}+\frac{1507328
\left(\frac{2}{3}\right)^{2/3}}{9 \pi ^{14/3} c_T^{8/3}}+O(c_T^{-3})\,,\\
}
Note that these OPE coefficients are related by crossing in the 1d topological sector as \cite{Chester:2014mea}
\es{1drel}{
\frac{1024}{c_T}-5\lambda^2_{(B,+)}+\lambda^2_{(B,2)}+16=0\,,
}
so in fact only one is independent. For the other multiplets in $S_2\times S_2$ we get for $k=1,2$
\es{2222data}{
\lambda^2_{(A,+)_\ell}&=\frac{\pi \Gamma (\ell+3)^2}{\Gamma \left(\ell+\frac{5}{2}\right)^2}-\frac{64 \Gamma (\ell+3)^2 (-2 \ell+2 (\ell (\ell+5)+5) \psi ^{(1)}(\ell+3)-5)}{\pi c_T\Gamma \left(\ell+\frac{5}{2}\right)^2}\\
&+c_T^{-2}(\lambda^{R|R}_{(A,+)_\ell})^2-\delta_{\ell,0}\frac{1835008\ 6^{2/3}}{27 \pi ^{10/3}c_T^{7/3} k^{4/3}}+O(c_T^{-\frac{23}{9}})\,,\\
\lambda^2_{(A,2)_\ell}&=\frac{\pi \Gamma (\ell+2) \Gamma (\ell+4)}{\Gamma \left(\ell+\frac{3}{2}\right) \Gamma \left(\ell+\frac{7}{2}\right)}+\frac{64 \Gamma (\ell+4)^2 \left((2 \ell+3) (\ell (\ell+7)+11)-2 (\ell+2)^2 (\ell+3)^2 \psi ^{(1)}(\ell+2)\right)}{\pi c_T(\ell+2)^2 (\ell+3)^2 \Gamma \left(\ell+\frac{3}{2}\right) \Gamma
\left(\ell+\frac{7}{2}\right)}\\
&+c_T^{-2}(\lambda^{R|R}_{(A,2)_\ell})^2-\delta_{\ell,1}\frac{8388608\ 6^{2/3}}{5 \pi ^{10/3}c_T^{7/3} k^{4/3}}+O(c_T^{-\frac{23}{9}})\,,\\
\Delta_{2,\ell}&=2+\ell-\frac{256 (2 \ell+3) (2 \ell+5) (2 \ell+7)}{\pi ^2c_T (\ell+1) (\ell+2) (\ell+3) (\ell+4)}-\frac{71680\cdot{6}^{\frac13} \delta _{0,\ell}}{\pi ^{8/3}c_T^{5/3} k^{2/3}}\\
&+c_T^{-2}\gamma_{2,\ell}^{R|R}+c_T^{-\frac73}\left(\delta_{\ell,0}\frac{1433600\ 6^{2/3}}{3 \pi ^{10/3} k^{4/3}}+\delta_{\ell,2}\frac{43524096\ 6^{2/3}}{\pi ^{10/3} k^{4/3}}\right)+O(c_T^{-\frac{23}{9}})\,,\\
}
where $\ell$ is even for $\Delta_{2,\ell}$ and $\lambda^2_{(A,+)_\ell}$ and odd for $\lambda^2_{(A,2)_\ell}$, and for $\Delta_{2,\ell}$ we wrote only the lowest twist result, since higher twists are degenerate and so require unmixing beyond leading order. In the following section, we will determine the 1-loop corrections to some of this non-trivial CFT data.
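As a consistency check, the all-orders localization results \eqref{B2Bp} must satisfy the 1d crossing relation \eqref{1drel} order by order in $1/c_T$; note this fixes the sign of the $1/c_T$ term in $\lambda^2_{(B,+)}$ to be positive. A minimal \texttt{sympy} sketch of this check (our own illustration, with coefficients transcribed from \eqref{B2Bp}):

```python
import sympy as sp

cT = sp.symbols('c_T', positive=True)
pi, R = sp.pi, sp.Rational

# Large-c_T expansions of the short multiplet OPE coefficients for k=2,
# transcribed from the localization results quoted in the text
lam_B2 = (R(32, 3) - 1024*(4*pi**2 - 15)/(9*pi**2*cT)
          + 20480*R(2, 3)**R(2, 3)/(pi**R(8, 3)*cT**R(5, 3))
          + 16384*(2*pi**2 - 25)/(9*pi**4*cT**2)
          - 327680*R(2, 3)**R(1, 3)/(3*pi**R(10, 3)*cT**R(7, 3))
          + 7536640*R(2, 3)**R(2, 3)/(9*pi**R(14, 3)*cT**R(8, 3)))
lam_Bp = (R(16, 3) + 1024*(pi**2 + 3)/(9*pi**2*cT)  # sign fixed by crossing
          + 4096*R(2, 3)**R(2, 3)/(pi**R(8, 3)*cT**R(5, 3))
          + 16384*(2*pi**2 - 25)/(45*pi**4*cT**2)
          - 65536*R(2, 3)**R(1, 3)/(3*pi**R(10, 3)*cT**R(7, 3))
          + 1507328*R(2, 3)**R(2, 3)/(9*pi**R(14, 3)*cT**R(8, 3)))

# 1d topological crossing: 1024/c_T - 5 lam_Bp + lam_B2 + 16 = 0
assert sp.expand(1024/cT - 5*lam_Bp + lam_B2 + 16) == 0
```

The cancellation happens separately at each power of $c_T$, including the fractional powers coming from the $R^4$ and $D^6R^4$ terms.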
\section{$\langle2222\rangle$ at 1-loop}
\label{1loop}
We now discuss the 1-loop terms $R|R$ at $c_T^{-2}$, $R|R^4$ at $c_T^{-\frac83}$, and $R^4|R^4$ at $c_T^{-\frac{10}{3}}$ for $k=2$ ABJ(M) theory. For each term we compute the double-discontinuity (DD) from the tree and GFFT data derived in the previous sections, and then use it as well as crossing symmetry and the superconformal Ward identity to write the entire correlator in Mellin space up to contact term ambiguities. We then take the flat space limit and match these correlators to the relevant 1-loop corrections to the 11d S-matrix. Finally, we extract low-lying CFT data using two methods: the Lorentzian inversion integral applied to the DD \cite{Alday:2016njk,Caron-Huot:2017vep} and a projection method applied to the entire Mellin amplitude \cite{Heemskerk:2009pn,Chester:2018lbz}. The inversion method does not converge for low spins that are affected by contact term ambiguities, while the projection method can be used to compute that CFT data in terms of those ambiguities. In the next section, we will discuss how to use localization and a conjectured analytic continuation of the inversion method to fix all the contact term ambiguities for $R|R$ and $R|R^4$.
\subsection{One-loop from tree level}
\label{1loopfrom}
We begin by expanding the correlator ${\mathcal G}$ for $\langle2222\rangle$ to 1-loop order at large $c_T$ using the block expansion described in section \ref{qqpp}. For $R|R$ at order $c_T^{-2}$, this takes the form
\es{RR}{
{\mathcal G}^{R|R}=&\sum_{t=2,4,\dots}\sum_{\ell\in\text{Even}}\Big[\frac18\langle(\lambda^{(0)}_{t,\ell})^2(\gamma^R_{t,\ell})^2\rangle(\log^2U+4\log U\partial_t^\text{no-log}+4(\partial_t^\text{no-log})^2) \\
&+\frac12\langle(\lambda^{R})^2_{t,\ell}\gamma^{R}_{t,\ell}\rangle(\log U+2\partial_t^\text{no-log})\\
&+\frac12\langle(\lambda^{(0)}_{t,\ell})^2\gamma^{{R}|{R}}_{t,\ell}\rangle(\log U+2\partial_t^\text{no-log})+\langle(\lambda^{{R}|{R}}_{t,\ell})^2\rangle\Big] \mathfrak{G}_{t+\ell,\ell}(U,V;\sigma,\tau)\\
&+\sum_{\mathcal{M}_{\Delta,\ell}\in\{(B,+),(B,2),(A,2)_\ell,(A,+)_\ell\}}(\lambda^{R|R}_{22\mathcal{M}} )^2 \mathfrak{G}_\mathcal{M}(U,V;\sigma,\tau)\,,
}
where $\partial_t^\text{no-log} \mathfrak{G}_{t+\ell,\ell}(U,V;\sigma,\tau)$ was defined in \eqref{SGexp}. The first three lines describe the double trace singlet long multiplets $(A,0)^{[0000]}_{t+\ell,\ell}$, where $\langle\rangle$ denotes the average over the $ (t-1)$-fold degenerate operators. The fourth line includes all the protected multiplets in $\langle 2222\rangle$ except the stress tensor multiplet, which is $1/c_T$ exact. The expression for ${\mathcal G}^{R^4|R^4}$ at order $c_T^{-\frac{10}{3}}$ is identical except we replace $R\to R^4$ and the sum for the long multiplets is now restricted to $\ell=0$, while for ${\mathcal G}^{R|R^4}$ at order $c_T^{-\frac83}$ we furthermore replace the $\frac18$ in the first line by $\frac14$, since the vertices are different.
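The structure of the first line, and in particular the $\frac18$ multiplying the $\log^2U$ term, follows from expanding the twist factor $U^{(t+\gamma)/2}$ of a long block to second order in the anomalous dimension $\gamma\sim c_T^{-1}$. A short \texttt{sympy} check of this coefficient (our own illustration):

```python
import sympy as sp

U, t, g = sp.symbols('U t gamma', positive=True)

# Twist factor of a long block with anomalous dimension gamma
twist_factor = U**((t + g)/2)
expanded = sp.series(twist_factor, g, 0, 3).removeO()

# At order gamma^2 the coefficient of U^{t/2} is log^2(U)/8,
# matching the 1/8 in front of the log^2 U term of the 1-loop expansion
coeff = expanded.coeff(g, 2)
assert sp.simplify(coeff - U**(t/2)*sp.log(U)**2/8) == 0
```

The $O(\gamma)$ term similarly produces the $\frac12\log U$ structure of the second and third lines.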
As shown in \cite{Aharony:2016dwx}, the entire 1-loop term up to the contact term ambiguities described in Section \ref{2222largec} can in fact be constructed from the $\log^2 U$ terms shown above, which are written in terms of GFFT and tree data, since under $1\leftrightarrow3$ crossing
\es{crossing}{
{\mathcal G}(U,V;\sigma,\tau)=\frac{U}{V}\tau^2 {\mathcal G}(V,U;\sigma/\tau,1/\tau)\,,
}
the $\log^2U$ terms are related to $\log^2V$ terms that are the only contributions at this order to the DD, which fixes the entire correlator according to the Lorentzian inversion formula \cite{Alday:2016njk,Caron-Huot:2017vep}. Note that the average $\langle(\lambda^{(0)}_{t,\ell})^2\gamma^A_{t,\ell}\gamma^{B}_{t,\ell}\rangle$ for 1-loop vertices $A,B$ is what appears in the $\log^2U$ term, whereas the different averages $\langle(\lambda^{(0)}_{t,\ell})^2\gamma^A_{t,\ell}\rangle$ and $\langle(\lambda^{(0)}_{t,\ell})^2\gamma^B_{t,\ell}\rangle$ are what appear at tree level. As shown in \cite{Alday:2017xua,Aprile:2017bgs,Aprile:2017xsp,Alday:2018pdi} for $\mathcal{N}=4$ SYM and \cite{Alday:2020tgi} for 6d $(2,0)$, one can compute $\langle(\lambda^{(0)}_{t,\ell})^2\gamma^A_{t,\ell}\gamma^{B}_{t,\ell}\rangle$ from GFFT $\langle ppqq\rangle$ and tree level $\langle22pp\rangle$ data as
\es{appA}{
\langle(\lambda^{(0)}_{t,\ell})^2\gamma^A_{t,\ell}\gamma^{B}_{t,\ell}\rangle=\sum_{p=2,4,\dots}^{t}\frac{\langle\lambda^{(0)}_{2,t,\ell}\lambda^{(0)}_{p,t,\ell}\gamma^A_{t,\ell}\rangle \langle\lambda^{(0)}_{2,t,\ell}\lambda^{(0)}_{p,t,\ell}\gamma^B_{t,\ell}\rangle}{ {\langle(\lambda^{(0)}_{p,t,\ell})^2\rangle} }\,,
}
where we summed over each $p$ for which a given twist $t$ long multiplet appears. Unlike the 4d and 6d cases, in 3d the sum only runs over even $p$ regardless of the orbifold.\footnote{As discussed in the introduction, for $k=1$ ABJM the DD would receive additional contributions from the OPE coefficients of odd twist long multiplets that appear for odd $p$.} We computed ${\langle(\lambda^{(0)}_{p,t,\ell})^2\rangle} $, $\langle\lambda^{(0)}_{2,t,\ell}\lambda^{(0)}_{p,t,\ell}\gamma^R_{t,\ell}\rangle$, and $\langle\lambda^{(0)}_{2,t,\ell}\lambda^{(0)}_{p,t,\ell}\gamma^{R^4}_{t,\ell}\rangle$ in \eqref{ppppLam}, \eqref{averageSG}, and \eqref{averageR4}, respectively, which is sufficient to compute $R|R$, $R|R^4$, and $R^4|R^4$ for the $k=2$ ABJ(M) theory. The $p,t,\ell$ sums for the $\log^2U$ term in $R|R$ can be done by expanding at small $U$ in each $R$-symmetry channel to get:
\es{slices}{
\frac18\sum_{t=2,4,\dots}& \sum_{\ell\in\text{Even}}\sum_{p=2,4,\dots}^{t} \frac{\langle\lambda^{(0)}_{2,t,\ell} \lambda^{(0)}_{p,t,\ell}\gamma^R_{t,\ell}\rangle^2 }{ {\langle(\lambda^{(0)}_{p,t,\ell})^2\rangle} } \mathfrak{G}_{t+\ell,\ell}(U,V;\sigma,\tau)=\\
& Y_{[0000]}(\sigma,\tau) \big[U h_{R|R}^{(1),[0000]}(V) + \cdots \big]+Y_{[0100]}(\sigma,\tau) \big[U h_{R|R}^{(1),[0100]}(V) + \cdots \big]\\
&+Y_{[0020]}(\sigma,\tau) \big[U h_{R|R}^{(1),[0020]}(V) + \cdots \big]+Y_{[0120]}(\sigma,\tau) \big[U^2 h_{R|R}^{(2),[0120]}(V) + \cdots \big]\\
&+Y_{[0200]}(\sigma,\tau) \big[U^2 h_{R|R}^{(2),[0200]}(V) + \cdots \big]+Y_{[0040]}(\sigma,\tau) \big[U^3 h_{R|R}^{(3),[0040]}(V) + \cdots \big]\,,
}
where the different powers of $U$ in each channel correspond to the lowest twists that appear in each channel for the $(A,0)_{t+\ell,\ell}^{[0000]}$ superblock as given in Table 8 of \cite{Chester:2014fya}. The $U$-slices in each channel take the form
\es{slices2}{
h^{(n),[0ab0]}_{R|R}(V) ={P^{[0ab0]}_{1,R|R}(V)}\log ^2 V + {P^{[0ab0]}_{2,R|R}(V)} \text{Li}_2(1-V)+ {P^{[0ab0]}_{3,R|R}(V)} \log V + {P^{[0ab0]}_{4,R|R}(V)} \,,
}
where $P^{[0ab0]}_{i,R|R}(V)$ are polynomials in $V$ divided by powers of $(1-V)$, whose precise degrees vary in each channel. The expressions for $R|R^4$ and $R^4|R^4$ are also given by \eqref{appA}, except the $\ell$ sum is trivially $\ell=0$ in those cases, and $R|R^4$ has an extra factor of 2. The $U$-slices for $R|R^4$ take the simpler form
\es{slices3}{
h^{(n),[0ab0]}_{R|R^4}(V) = {P^{[0ab0]}_{1,R|R^4}(V)}\log V + {P^{[0ab0]}_{2,R|R^4}(V)}\,,
}
for similarly defined polynomials divided by powers of $(1-V)$, and a similar expression holds for $R^4|R^4$. We give the explicit expressions for many $n$ in the attached \texttt{Mathematica} file.
\subsection{Mellin amplitude and comparison to 11d}
\label{reducedMellinAmplitude}
We now show how to complete the position space DD to the entire correlator using crossing symmetry and the superconformal Ward identity in Mellin space. For $R|R^4$ and $R^4|R^4$ we will find closed form expressions up to the expected contact term ambiguities, while for $R|R$ we are able to compute a closed form up to a certain polynomial in $s,t$ that in principle can be fixed from the Ward identity, but is difficult to fix in practice. The expressions we find in all cases are sufficient to check the flat space limit comparison to 11d, as well as to extract all CFT data except for some low spins.
We can compute the Mellin amplitudes from the resummed DDs following a similar but more complicated version of the calculation in the 4d \cite{Alday:2018kkw} and 6d \cite{Alday:2020tgi} cases. In the previous section, we computed the coefficient of $\log^2U$ in the $s$-channel, which gave the DD in the $t$-channel as an expansion in small $U$. From the definition of the Mellin transform in \eqref{mellinH}, we can then convert $U^n\log^2U h^{(n)}(V)$ to an $s$-pole in $M(s,t)$ as
\es{res}{
U^{n} \log^2 U h^{(n)}(V) \leftrightarrow \frac{\text{res}_{n-1}(t)}{s-2(n-1)}\,,
}
where the residues $\text{res}_{n-1}(t)$ follow from the $t$-integral in \eqref{mellinH}. For $R|R^4$ and $R^4|R^4$, $\text{res}_{n-1}(t)$ is analytic in $t$ because the slices \eqref{slices3} do not contain any $\log^2V$ terms. In these cases, we can then use crossing symmetry \eqref{crossM} to fix the other parts of the Mellin amplitude that are analytic in $s$ to get
\es{RR4c}{
M^{R|R^4}(s,t;\sigma,\tau) &= \sum_{m=1}^\infty\Bigg[ \frac{1}{(s-2m)}\left(\frac{\hat{c}(m,s,t;\sigma,\tau)}{m}+\frac{\hat{d}(m,s,t;\sigma,\tau)\Gamma(m)}{\Gamma(m+\frac12)}\right) \\
&\qquad\qquad\qquad\qquad\qquad\qquad+\text{crossed} \Bigg]+\sum_{i=1}^{50} \hat k_i{\bf P}^{(8),i}(s,t;\sigma,\tau)\,,\\
M^{R^4|R^4}(s,t;\sigma,\tau) &= \sum_{m=1}^\infty\left[ \frac{\hat{\hat{d}}(m,s,t;\sigma,\tau)\Gamma(m)}{(s-2m)\Gamma(m+\frac12)} +\text{crossed} \right]+\sum_{i=1}^{84}\hat{\hat k}_i{\bf P}^{(11),i}(s,t;\sigma,\tau)\,,\\
}
where $\hat{c}(m,s,t;\sigma,\tau)$, $\hat {d}(m,s,t;\sigma,\tau)$, and $\hat{\hat {d}}(m,s,t;\sigma,\tau)$ are quadratic in $\sigma,\tau$ and polynomials in $m,s,t$, while ${\bf P}^{(8),i}(s,t;\sigma,\tau)$ and ${\bf P}^{(11),i}(s,t;\sigma,\tau)$ parameterize all crossing symmetric degree 8 and 11 polynomials in $s,t$, respectively, whose coefficients $\hat k_i$ and $\hat{\hat k}_i$ should be fixed by the superconformal Ward identity in terms of the physical contact term ambiguities in \eqref{M2222}. Note that one can swap $s$ for $2m$ in these expressions to get the same residues at the poles, which only changes the $\hat k_i$ and $\hat{\hat k}_i$. The only rule in performing this swap is that the degree of $M^{R|R^4}(s,t) $ and $M^{R^4|R^4}(s,t) $ at large $s,t$ does not exceed $8.5$ and $11.5$, respectively. In practice we can simply set $s=2m$ and similarly for the crossed terms, which allows us to resum to get
\es{RR4c2}{
M^{R|R^4} &=\Bigg[\frac{\psi \left(1-\frac{s}{2}\right)}{s} {\bf p}_1(s,t;\sigma,\tau)+\frac{\, _3F_2\left(1,1,1-\frac{s}{2};\frac{3}{2},2-\frac{s}{2};1\right)}{2-s} {\bf p}_2(s,t;\sigma,\tau)\\
&\qquad+\text{crossed}\Bigg]+\sum_{i=1}^{50} \hat k_i{\bf P}^{(8),i}(s,t;\sigma,\tau)\,,\\
M^{R^4|R^4}&=\Bigg[\frac{\, _3F_2\left(1,1,1-\frac{s}{2};\frac{3}{2},2-\frac{s}{2};1\right)}{2-s} {\bf p}_3(s,t;\sigma,\tau)+\text{crossed}\Bigg]+\sum_{i=1}^{84} \hat{\hat k}_i{\bf P}^{(11),i}(s,t;\sigma,\tau)\,,\\
}
where ${\bf p}_i(s,t;\sigma,\tau)$ are various polynomials in $s,t,\sigma,\tau$ that are given in the attached \texttt{Mathematica} file, along with the explicit $\hat k_i$ and $\hat{\hat k}_i$ that we fix using the Mellin space Ward identity. At large $s,t$ these amplitudes take the form
\es{flatRR4}{
\lim_{s,t\to\infty}M^{R|R^4} (s,t;\sigma,\tau)&=\frac{655360\cdot {2}^{\frac16} (-s)^{9/2}(t (-\sigma s+s+t)+s \tau (s+t))^2}{3^{5/3} \pi ^{19/6}}+\text{crossed}\,,\\
\lim_{s,t\to\infty}M^{R^4|R^4} (s,t;\sigma,\tau)&=\frac{63078400 \cdot (-s)^{15/2} (t (-\sigma s+s+t)+s \tau (s+t))^2}{2^{\frac16}{3}^{\frac43} \pi ^{23/6}}+\text{crossed}\,,
}
where we assumed $s,t<0$. We can then use the flat space limit formula \eqref{flat} for $p=2$ and $a=8.5$ and $a=11.5$ for $M^{R|R^4}$ and $M^{R^4|R^4}$, respectively, as well as the relation between $\ell_{11}$ and $c_T$ in \eqref{cPlanck} for $k=2$, to get precisely the expected 11d amplitudes as given in equations 4.28 and 4.29 of \cite{Alday:2020tgi}.
For $R|R$, the residues $\text{res}_{n-1}(t)$ in \eqref{res} now contain poles in $t$ because the slices \eqref{slices2} contain $\log^2V$ terms. The resulting Mellin amplitude thus contains both single and double pole terms
\es{MellinRR}{
M^{R|R}(s,t;\sigma,\tau) &= \Bigg[ \sum_{m,n=1}^\infty \frac{c(m,n,s,t;\sigma,\tau)}{(s-2m)(t-2n)}\frac{\Gamma(m)\Gamma(n)\Gamma(m+n-\frac{11}{2})}{\Gamma(m+n-1)\Gamma(n-\frac12)\Gamma(m-\frac12)} \\
& + \sum_{m=1}^\infty \frac{1}{s-2m}\Bigg(\frac{d(m,s,t;\sigma,\tau)\Gamma(m)}{\Gamma(m+\frac12)(m-4)(m-3)(m-2)(m-1)}\\
&+\frac{e(m,s,t;\sigma,\tau)}{(m-4)(m-3)(m-2)(m-1)(2m-3)(2m-5)(2m-7)(2m-9)}\Bigg)\\
& +\text{crossed}\Bigg]+\hat{\bf P}(s,t;\sigma,\tau)+\sum_{i=1}^{24} k_i{\bf P}^{(5),i}(s,t;\sigma,\tau)\,.
}
Here, ${c}(m,n,s,t;\sigma,\tau)$, $ {d}(m,s,t;\sigma,\tau)$, and ${ {e}}(m,s,t;\sigma,\tau)$ are quadratic in $\sigma,\tau$ and polynomials in $m,s,t$, while ${\bf P}^{(5),i}(s,t;\sigma,\tau)$ parameterize all crossing symmetric degree 5 polynomials in $s,t$, whose 24 coefficients $k_i$ should in principle be fixed by the superconformal Ward identity in terms of just one of them. For the double pole residues we can swap $s$ for $2m$ and $t$ for $2n$ to get the same residue at the poles, which however changes the single pole residues and the $k_i$. When swapping we must be careful that the resulting sums are all finite, and that the large $s,t$ growth does not exceed $5.5$. In fact, for all choices of swaps the large $s,t$ degree exceeds $5.5$, which is why we must also include the polynomial $\hat{\bf P}(s,t;\sigma,\tau)$ that generically will have degree greater than $5.5$, and is fixed to cancel the corresponding large $s,t$ terms from the single and double sum terms.
It is difficult to check the large $s,t$ limit of \eqref{MellinRR} with the constraints just discussed, because we must compute the double sum term to subleading order in large $s,t$ to take into account the cancellations of the leading terms by $\hat{\bf P}(s,t;\sigma,\tau)$. Instead, we can more easily compute the large $s,t$ limit by considering the ansatz \eqref{MellinRR} but with coefficients with unphysical poles at $s,t,u=0$:
\es{goodflat}{
c_\text{flat}(m,n,s,t;\sigma,\tau)&\equiv \frac{4mn}{st}c(s/2,t/2,s,t;\sigma,\tau)\,,\\
d_\text{flat}(m,s,t;\sigma,\tau)&\equiv \frac{1}{t}d(s/2,s,t;\sigma,\tau)\,,\qquad e_\text{flat}(m,s,t;\sigma,\tau)\equiv \frac{1}{t}e(s/2,s,t;\sigma,\tau)\,,\\
}
as well as replacing the ${\bf P}^{(5),i}(s,t;\sigma,\tau)$ by a higher degree polynomial multiplied by $\frac{1}{st}$, and without the now unnecessary $\hat{\bf P}(s,t;\sigma,\tau)$. The resulting explicit Mellin amplitude is given in the attached \texttt{Mathematica} file. In principle, we can completely fix the $k_i$ using the superconformal Ward identity, which should cancel all the unphysical poles at $s,t,u=0$. In practice, we checked that all the residues of both the double and single poles in $s,t$ satisfy the Ward identity, but did not carefully fix the $k_i$, since the terms they multiply are by definition subleading in the large $s,t$ limit and we only use this formulation to check the flat space limit. In particular, to get the leading large $s,t$ term that appears in the flat space limit formula \eqref{flat}, we should look at the regime where $m,n,s,t$ all scale equally large, in which case we can replace the sums over $m,n$ by integrals. For the double pole term we get
\es{dubPole}{
&\lim_{s,t\to\infty}\sum_{m,n=1}^\infty \frac{c_\text{flat}(m,n,s,t;\sigma,\tau)}{(s-2m)(t-2n)}\frac{\Gamma(m)\Gamma(n)\Gamma(m+n-\frac{11}{2})}{\Gamma(m+n-1)\Gamma(n-\frac12)\Gamma(m-\frac12)}+\text{crossed}\\
&=\int_0^\infty dm\, dn\frac{5120 m^{3/2} n^{3/2} s t (s+t) (t (-\sigma s+s+t)+s \tau (s+t))^2}{3 \pi ^{7/2} (m+n)^{9/2} (s-2 m) (t-2 n)}+\text{crossed}\\
&=-\frac{1024 \sqrt{2} s t(t (-\sigma s+s+t)+s \tau (s+t))^2}{63 \pi ^{5/2} (s+t)^4 \sqrt{s t}} \Bigg[6
(-t)^{9/2}-41 s^2 (-t)^{5/2}+88 s^3 (-t)^{3/2}\\
&+88 (-s)^{3/2} t^3-8 \sqrt{-s} t^4-45 (-s)^{7/2} t-45 s (-t)^{7/2}+6 (-s)^{9/2}-8 s^4 \sqrt{-t}-41 (-s)^{5/2} t^2\\
&+105 s^2 t^2 \sqrt{-s-t} \log \Big[\frac{\left(\sqrt{-s-t}+\sqrt{-s}\right) \left(\sqrt{-s-t}+\sqrt{-t}\right)}{\sqrt{s
t}}\Big] \Bigg]+\text{crossed}\,,\\
}
where we assumed that $s,t<0$. For the single pole terms the $d_\text{flat}$ term is leading at large $s,t$ and gives
\es{singPole}{
&\lim_{s,t\to\infty} \sum_{m=1}^\infty \frac{1}{s-2m}\frac{d_\text{flat}(m,s,t;\sigma,\tau)\Gamma(m)}{\Gamma(m+\frac12)(m-4)(m-3)(m-2)(m-1)}+\text{crossed}\\
&\qquad=-\int_0^\infty dm \frac{18}{7} \sqrt{\pi } m^{3/2} \left(4 m^2 \tau +2 m t (-\sigma +\tau +1)+t^2\right)^2+\text{crossed}\\
&\qquad=\frac{4096 \sqrt{2}}{7 \pi ^{5/2} } (t (-\sigma s+s+t)+s \tau (s+t))^2 \left(i t \sqrt{-s-t}+i s \sqrt{-s-t}+(-s)^{3/2}+(-t)^{3/2}\right)\,.\\
}
We then plug these large $s,t$ expressions into the flat space limit formula \eqref{flat} for $p=2$ and $a=5.5$ and use the relation between $\ell_{11}$ and $c_T$ in \eqref{cPlanck} for $k=2$ to precisely get the expected 11d amplitude as given in equation 4.19 of \cite{Alday:2020tgi}.
So far, we have a choice of coefficients \eqref{goodflat} in \eqref{MellinRR} that gives a putative 1-loop amplitude $M^{R|R}_\text{flat}$ that we know has the correct flat space limit, but which has unphysical poles at $s,t,u=0$; these could in principle be cancelled by subtracting a polynomial divided by $stu$, but in practice they are hard to fix using the superconformal Ward identity because of the double sums. We can avoid these unphysical poles by choosing ${c}(m,n,s,t;\sigma,\tau)$, $ {d}(m,s,t;\sigma,\tau)$, and ${ {e}}(m,s,t;\sigma,\tau)$ in \eqref{MellinRR} that are in fact polynomials in $m,n,s,t$ as originally defined, and then demanding that the resulting expression for $M^{R|R}$ matches $M^{R|R}_\text{flat}$ up to the degree 5 polynomial ambiguities ${\bf P}^{(5),i}(s,t;\sigma,\tau)$. The resulting expressions for $c,d,e$ are given in the attached \texttt{Mathematica} file, and are now guaranteed to have both the correct flat space limit and only physical poles. Finally, the coefficients $k_i$ can in principle be fixed using the superconformal Ward identity in terms of just a single coefficient, which corresponds to the single physical degree 4 contact term ambiguity in \eqref{M2222}. In practice this is difficult due to the double sums, so instead we fix most of these coefficients in the next section by demanding a consistent superblock expansion, which is equivalent to imposing the superconformal Ward identity but easier in practice.
\subsection{Extracting CFT data}
\label{CFTData}
We now extract all low-lying CFT data from the $R|R$, i.e. $c_T^{-2}$, and $R|R^4$, i.e. $c_T^{-\frac83}$, correlators using two independent methods. Firstly, we derive an inversion integral formula for each DD in position space, which lets us extract all CFT data above a certain spin in terms of a single integral, as expected from the Lorentzian inversion formula \cite{Caron-Huot:2017vep}. Secondly, we expand each entire correlator as written in Mellin space in superblocks to extract all CFT data for all spins up to the physical contact term ambiguities that appear in \eqref{M2222}, as well as some unphysical ambiguities for $R|R$ that in principle can be fixed by the superconformal Ward identity. We find that both methods agree in their respective regimes of applicability. We do not extract CFT data from the $R^4|R^4$, i.e. $c_T^{-\frac{10}{3}}$, correlator, since we do not in any case know the $R|D^6R^4$ term that would contribute at the same order, but it would be simple to extract the $R^4|R^4$ data as well from the formulae provided here.
To extract CFT data, we will look at the superblock expansion in the lightcone limit of small $U\sim z$, where conformal blocks are expanded as
\es{lightBlocksExp}{
G_{\Delta,\ell}(U,V)=\sum_{n=0}^\infty U^{\frac{\Delta-\ell}{2}+n}g_{\Delta,\ell}^{[n]}(1-V)\,.
}
Here, the lowest so-called lightcone block in our normalization is
\es{lightconeBlock}{
g_{\Delta,\ell}^{[0]}(1-V)&=\frac{\Gamma(\ell+1/2)}{4^\Delta\sqrt{\pi}\ell!}(1-V)^\ell \,{}_2F_1\left(\frac{\Delta+\ell}{2},\frac{\Delta+\ell}{2},\Delta+\ell,1-V\right)\,,\\
}
and we see that the expansion is naturally organized in terms of twist $t\equiv \Delta-\ell$. Applying this expansion to the superblocks, we observe that blocks in different supermultiplets with the same twist can appear in the same $R$-symmetry channel, so it is convenient to look at channels with the least mixing between different supermultiplets. For instance, the lowest twist long multiplet $(A,0)_{\ell+2,\ell}$ contributes only at twist greater than two in the $[0200]$, $[0120]$, and $[0040]$ channels, while all the protected multiplets contribute at lower twists in these channels, so these channels are the simplest for extracting OPE coefficients of protected operators. In particular, if we focus on the lowest twist 2 conformal blocks at $O(U)$, then from \cite{Chester:2014fya} we see that the short superblocks contain the blocks
\es{shortblock}{
&\text{twist = 2:}\qquad\mathfrak{G}_{(B,+)}^{[0040]}=G_{2,0}\,,\qquad \mathfrak{G}_{(B,+)}^{[0120]}=-\frac43G_{3,1}\,,\qquad \mathfrak{G}_{(B,+)}^{[0200]}=0\,,\\
&\qquad\qquad\qquad\;\,\mathfrak{G}_{(B,2)}^{[0040]}=0\,,\qquad\quad\;\, \mathfrak{G}_{(B,2)}^{[0120]}=-\frac83G_{3,1}\,,\qquad \mathfrak{G}_{(B,2)}^{[0200]}=G_{2,0}+\frac{64}{45}G_{4,2}\,,\\
}
while the semishort superblocks contain
\es{Apblock}{
&\text{twist = 2:}\quad\mathfrak{G}_{(A,+)_\ell}^{[0040]}=\frac{16}{3}G_{\ell+4,\ell+2}\,,\qquad \mathfrak{G}_{(A,+)_\ell}^{[0120]}=-4G_{3+\ell,1+\ell}-\frac{64 (\ell+3)^4G_{5+\ell,3+\ell}}{(2 \ell+5)^2 (2 \ell+7)^2}\,,\\
&\quad\qquad\qquad\;\, \mathfrak{G}_{(A,+)_\ell}^{[0200]}=\frac{32 (\ell+2) (\ell+3)}{3 (2 \ell+3) (2 \ell+7)}G_{4+\ell,2+\ell}\,,\\
&\quad\qquad\qquad\;\,\mathfrak{G}_{(A,2)_\ell}^{[0040]}=0\,,\qquad\qquad\qquad\;\; \mathfrak{G}_{(A,2)_\ell}^{[0120]}=-\frac{32 (\ell+2)^2}{(2 \ell+3) (2 \ell+5)}G_{\ell+4,\ell+2}\,,\\
&\quad\qquad\qquad\;\,\mathfrak{G}_{(A,2)_\ell}^{[0200]}=4G_{\ell+3,\ell+1}+\frac{64 (\ell+2)^2 (\ell+3)^2}{(2 \ell+3) (2 \ell+5)^2 (2 \ell+7)}G_{\ell+5,\ell+3}\,.\\
}
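As a cross-check of these decompositions, the short superblock content in \eqref{shortblock} arises from formal spin limits of the semishort content, which is how the $(B,+)$ and $(B,2)$ OPE coefficients are extracted below. A minimal \texttt{sympy} verification (our own illustration, treating the conformal blocks as formal symbols):

```python
import sympy as sp

l = sp.symbols('ell')
G = sp.Function('G')  # conformal block G_{Delta, ell}, kept symbolic

# Semishort [0040] and [0200] content from the twist 2 decomposition
Ap_0040 = sp.Rational(16, 3)*G(l + 4, l + 2)
A2_0200 = 4*G(l + 3, l + 1) \
    + 64*(l + 2)**2*(l + 3)**2/((2*l + 3)*(2*l + 5)**2*(2*l + 7))*G(l + 5, l + 3)

# (B,+) and (B,2) short content recovered as formal spin limits
Bp = sp.Rational(3, 16)*Ap_0040.subs(l, -2)   # ell -> -2 limit
B2 = sp.Rational(1, 4)*A2_0200.subs(l, -1)    # ell -> -1 limit

assert sp.simplify(Bp - G(2, 0)) == 0
assert sp.simplify(B2 - (G(2, 0) + sp.Rational(64, 45)*G(4, 2))) == 0
```

Both limits reproduce exactly the $(B,+)$ and $(B,2)$ block content listed in \eqref{shortblock}.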
For the long superblock, we will only extract its lowest twist 2 anomalous dimension, for which it is convenient to consider the $[0040]$ channel where only a single block at twist 6 appears:
\es{longblock}{
\mathfrak{G}_{\ell+2,\ell}^{[0040]} = \frac{16 (2 \ell+2) (2 \ell+4)}{(2 \ell+3) (2 \ell+5)}G_{\ell+6,\ell}\,.
}
We can now use this explicit block decomposition to extract CFT data, first using the Lorentzian inversion formula. In Appendix \ref{Lorentz}, following the similar 4d case in \cite{Alday:2017vkk}, we review how to use the inversion formula to extract CFT data from the DD of a 3d CFT by comparing to its conformal block expansion in the large $c_T$ limit. We can apply this to the twist 2 block expansion in the $[0040]$ channel \eqref{Apblock} for general $\ell$ where only $(A,+)_\ell$ appears, so that we get
\es{ApInversion}{
\lambda^2_{(A,+),\ell} =\frac{12 (2 \ell+5) \Gamma (\ell+3)^4}{\Gamma \left(\ell+\frac{5}{2}\right)^2
\Gamma \left(\ell+\frac{7}{2}\right)^2} \int_0^1 \frac{d \bar z}{\bar z} g_{\ell+4,\ell+2}(\bar z) \text{dDisc}[ {\cal G}^{[0040]}(z\bar z,1-\bar z)\vert_z ] \,,
}
where ${\cal G}^{[0040]}(z\bar z,1-\bar z)\vert_z$ denotes the leading $z$ term in the basis \eqref{Ybasis}, we dropped the superscript from the leading lightcone block in \eqref{lightconeBlock}, and the overall normalization was fixed using the known GFFT term in \eqref{2222data} as discussed in Appendix \ref{Lorentz}. The $(B,+)$ supermultiplet also appears at leading twist in this channel, and comparing \eqref{shortblock} to \eqref{Apblock} we see that it corresponds to the limit\footnote{In other channels negative spin blocks appear in this limit as discussed in \cite{Chester:2014fya}, so the comparison is more subtle.}
\es{BpfromAp}{
\mathfrak{G}_{(B,+)}^{[0040]}=\frac{3}{16}\lim_{\ell\to-2}\mathfrak{G}_{(A,+)_\ell}^{[0040]}\,,
}
and so can be considered a special case of \eqref{ApInversion}. We can similarly analyze the $[0200]$ and $[0120]$ channels to get
\es{0200}{
&\frac{12 (\ell+1)^2 (\ell+2)^2}{(2 \ell+1) (2 \ell+3)^2 (2 \ell+5)} \lambda^2_{(A,2)_{\ell-1}}+
\frac{2 (\ell+2) (\ell+3)}{(2 \ell+3) (2 \ell+7)}\lambda^2_{(A,+)_\ell}+\frac{3}{4}\lambda^2_{(A,2)_{\ell+1}} = \\
&\frac{12 (2 \ell+5) \Gamma (\ell+3)^4}{\Gamma \left(\ell+\frac{5}{2}\right)^2
\Gamma \left(\ell+\frac{7}{2}\right)^2} \int_0^1 \frac{d \bar z}{\bar z} g_{\ell+4,\ell+2}(\bar z) \text{dDisc}[ {\cal G}^{[0200]}(z\bar z,1-\bar z)\vert_z ] \,,
}
and
\es{0120}{
&\frac{12 (\ell+2)^4}{(2 \ell+3)^2 (2 \ell+5)^2}\lambda^2_{(A,+)_{\ell-1}}+
\frac{6 (\ell+2)^2}{(2 \ell+3) (2 \ell+5)}\lambda^2_{(A,2)_\ell}+\frac{3}{4}\lambda^2_{(A,+)_{\ell+1}} = \\
&-\frac{12 (2 \ell+5) \Gamma (\ell+3)^4}{\Gamma \left(\ell+\frac{5}{2}\right)^2
\Gamma \left(\ell+\frac{7}{2}\right)^2} \int_0^1 \frac{d \bar z}{\bar z} g_{\ell+4,\ell+2}(\bar z) \text{dDisc}[ {\cal G}^{[0120]}(z\bar z,1-\bar z)\vert_z ] \,,
}
where the minus sign is expected because the spin is odd. These two equations give an overconstrained system for $\lambda^2_{(A,2)_\ell}$ in terms of $\lambda^2_{(A,+)_{\ell-1}}$ and $\lambda^2_{(A,+)_{\ell+1}}$, which can be extracted separately from \eqref{ApInversion}. From comparing \eqref{shortblock} to \eqref{Apblock}, we see that $(B,2)$ corresponds to the limit
\es{B2fromA2}{
\mathfrak{G}_{(B,2)}^{[0200]}=\frac{1}{4}\lim_{\ell\to-1}\mathfrak{G}_{(A,2)_\ell}^{[0200]}\,,
}
and similarly for the $[0040]$ and $[0120]$ channels, so its OPE coefficient can be extracted from $4\lambda^2_{(A,2)_{-1}}$. Finally, we can apply the inversion analysis in Appendix \ref{Lorentz} for the $R|R$ correction to the anomalous dimension to the long superblock in the $[0040]$ channel \eqref{longblock} to get
\es{anom1}{
\gamma^{R|R}_{2,\ell} =& \frac{1}{(\lambda^{(0)}_{2,\ell})^2 }\Big(4 R^{[0040]}_{1,R|R}( \ell) + \frac12 \partial_{\ell} \big[(\lambda^{(0)}_{2,\ell})^2 ( \gamma^{R}_{2,\ell} )^2\big] - (\lambda^{R}_{2,\ell})^2 \gamma^{R}_{2,\ell} \Big)\,,
}
where we have the inversion integral
\es{anom2}{
R^{[0040]}_{1,R|R}(\ell) &=\frac{512 (\ell+1) (\ell+2) (2 \ell+3) \Gamma (\ell+1)^4}{\Gamma
\left(\ell+\frac{1}{2}\right)^2 \Gamma \left(\ell+\frac{5}{2}\right)^2}\int_0^1 \frac{d \bar z}{\bar z} g_{\ell+6,\ell}(\bar z) \text{dDisc}[\left. {\cal G}_{R|R}^{[0040]}(z\bar z,1-\bar z) \right|_{z^3 \log z}] \,.
}
For $R|R^4$ we have a similar expression, except without the tree level terms in \eqref{anom1} since for $R^4$ they only have support for $\ell=0$.
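The double discontinuities these inversion integrals act on reduce, for the terms of interest, to the simple identity $\text{dDisc}[f\log^2(1-\bar z)]=4\pi^2 f$ for $f$ analytic at $\bar z=1$, obtained by subtracting from the Euclidean value the average of the two analytic continuations of $(1-\bar z)$ around $0$. A short \texttt{sympy} check of this statement (our own illustration):

```python
import sympy as sp

w, f = sp.symbols('w f')  # w = 1 - zbar; f stands for an analytic prefactor
L = sp.log(w)
g = f*L**2                # f * log^2(1 - zbar)

# dDisc = Euclidean value minus the average of the two continuations
# of (1 - zbar) around 0, under which log w -> log w +- 2 pi i
g_up = g.subs(L, L + 2*sp.pi*sp.I)
g_dn = g.subs(L, L - 2*sp.pi*sp.I)
dDisc = sp.expand(g - (g_up + g_dn)/2)

assert sp.simplify(dDisc - 4*sp.pi**2*f) == 0
```

The same computation shows that terms with no $\log^2(1-\bar z)$, i.e. analytic or single-log terms, have vanishing DD, which is why only the $\log^2V$ pieces of the crossed slices contribute.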
To apply these inversion integrals, we need to compute the leading $z$ term of the DD in various channels for $R|R$ and $R|R^4$, which is given by the coefficient of $\log^2 (1-\bar z)$ according to
\es{DD}{
{\rm dDisc}\,[ f(z,\bar z) \log^2(1-\bar z) ] = 4\pi^2 f(z,\bar z)\,,
}
for arbitrary $f(z,\bar z)$ analytic at $\bar z=1$. We compute the $\log^2 (1-\bar z)$ term by taking the $U$-slices $h^{(n),[0ab0]}(V)$ in Section \ref{1loopfrom} that multiply $\log^2U$, applying the $1\leftrightarrow3$ crossing \eqref{crossing} to get expressions that multiply $\log^2V\sim\log^2 (1-\bar z)$, resumming the slices, and reexpanding in the $Y_{[0ab0]}(\sigma,\tau)$ to get the DD in each irrep. For $R|R^4$, we were able to find closed form expressions, while for $R|R$ the result is written in terms of an integral over an auxiliary variable; see the attached \texttt{Mathematica} notebook for the explicit expressions. For $R|R$ we find that the inversion integrals with these explicit DDs converge for $\ell>-\frac12$ for \eqref{ApInversion}, for $\ell>\frac12$ for \eqref{0120}, and for $\ell>\frac32$ for \eqref{anom2}, which allows us to compute the CFT data:
\es{RRfinal}{
(\lambda^{R|R}_{(A,+)_0})^2&=285.32043668331375685394087\,,\\
(\lambda^{R|R}_{(A,+)_2})^2&=77.098186992813023177613926\,,\\
(\lambda^{R|R}_{(A,+)_4})^2&=48.536178605208049991881361\,,\\
(\lambda^{R|R}_{(A,2)_1})^2&=2239.9009500059848334084088\,,\\
(\lambda^{R|R}_{(A,2)_3})^2&=540.71435539680002180491475\,,\\
(\lambda^{R|R}_{(A,2)_5})^2&=328.90127928121821108070743\,,\\
\gamma_{2,2}^{R|R}&=\frac{1645242368}{1125 \pi ^4}-\frac{207785984}{663 \pi ^2}\,,\\
\gamma_{2,4}^{R|R}&=\frac{80811812224}{25725 \pi ^4}-\frac{2170015744}{6783 \pi ^2}\,,\\
\gamma_{2,6}^{R|R}&=\frac{14459024425792}{3678675 \pi
^4}-\frac{45500125184}{115115 \pi ^2}\,,\\
}
where we could also compute higher spin data if desired.\footnote{From computing some higher spin values, we observed that the anomalous dimensions are not monotonic in spin until $\ell=6$, unlike the cases of 4d $\mathcal{N}=4$ SYM \cite{Alday:2018pdi} and 6d $(2,0)$ \cite{Alday:2020tgi}, which were monotonic in spin in general. Of course, monotonicity in spin is only required at sufficiently high spin \cite{Komargodski:2012ek}.} Note that the CFT data that we cannot compute, namely $\gamma_{2,0}^{R|R}$, $(\lambda^{R|R}_{(B,+)})^2$, and $(\lambda^{R|R}_{(B,2)})^2$, is precisely the data affected by the degree 4 contact term $B_4^{R|R}M^{4}$ in \eqref{M2222}, as shown in Table \ref{resultList}, which is analogous to the 4d \cite{Alday:2017xua} and 6d \cite{Alday:2020tgi} cases. For $R|R^4$ we find that the inversion integrals with the explicit $R|R^4$ DDs converge for $\ell>\frac72$ for \eqref{ApInversion}, for $\ell>\frac92$ for \eqref{0120}, and for $\ell>\frac{11}{2}$ for \eqref{anom2}, which allows us to compute the CFT data:
\es{RR4final}{
(\lambda^{R|R^4}_{(A,+)_4})^2&=\frac{22291954008064 \left(\frac{2}{3}\right)^{2/3}}{4357815 \pi ^{14/3}}+\frac{1561306511441920
\left(\frac{2}{3}\right)^{2/3}}{6298655363 \pi ^{8/3}}\,,\\
(\lambda^{R|R^4}_{(A,2)_5})^2&=\frac{254814018760343552 \left(\frac{2}{3}\right)^{2/3}}{3277699425 \pi ^{14/3}}+\frac{281474976710656
\left(\frac{2}{3}\right)^{2/3}}{72177105 \pi ^{8/3}}\,,\\
\gamma_{2,6}^{R|R^4}&=-\frac{512640462848 \left(\frac{2}{3}\right)^{2/3}}{693 \pi ^{14/3}}-\frac{110655386419200\left( \frac23\right)^{2/3} }{2956811 \pi ^{8/3}}\,,\\
}
where we could also compute higher spin data if desired. The CFT data we cannot compute is what is affected by the degree 8 and smaller contact terms in \eqref{M2222}, as shown in Table \ref{resultList}, which is analogous to the 4d \cite{Alday:2018pdi} and 6d \cite{Alday:2020tgi} cases.
Finally, we can also extract CFT data from the entire correlator as written in Mellin space in terms of the contact term ambiguities described before. We extract this data in the lightcone expansion following \cite{Chester:2018lbz}, by taking the relevant $s$-pole, doing the $t$-integral, projecting against a block of the corresponding spin using the projectors introduced in \cite{Heemskerk:2009pn}, and comparing against the lightcone expansion of the superblocks in \eqref{shortblock}, \eqref{Apblock}, and \eqref{longblock}. For $R|R$, we first demand that $\log U$ terms, which correspond to anomalous dimensions, should only show up in the $[0200]$, $[0120]$, and $[0040]$ channels starting with the appropriate twists, which fixes 12 of the 24 coefficients $k_i$ of the polynomial ambiguity in the Mellin amplitude \eqref{MellinRR}. After fixing these, we find that $\gamma^{R|R}_{2,\ell}$ for $\ell\geq4$, $(\lambda_{(A,+)_\ell})^2$ for $\ell\geq 2$, and $(\lambda_{(A,2)_\ell})^2$ for $\ell\geq3$ are unaffected by the remaining $k_i$, so we could extract this data by computing the double sums numerically and confirm the inversion results in \eqref{RRfinal}. For $R|R^4$, we already completely fixed the Mellin amplitude up to the physical contact term ambiguities in \eqref{M2222}, so we can compute all CFT data, which confirms the inversion results in \eqref{RR4final}, and also gives the complete result at order $c_T^{-\frac83}$ for the low spin data:
\es{RR4final2}{
(\lambda^{R|R^4}_{(B,+)})^2&=-\frac{59609927581958144 \left(\frac{2}{3}\right)^{2/3}}{14189175 \pi ^{14/3}}+\frac{256}{35}B_4^{R|R^4}\,,\\
(\lambda^{R|R^4}_{(B,2)})^2&=-\frac{59609927581958144 \left(\frac{2}{3}\right)^{2/3}}{2837835 \pi ^{14/3}}+\frac{256}{7}B_4^{R|R^4}\,,\\
(\lambda^{R|R^4}_{(A,+)_0})^2&=-\frac{7798563930112 \left(\frac{2}{3}\right)^{2/3}}{1216215 \pi ^{14/3}}-\frac{134217728 \left(\frac{2}{3}\right)^{2/3}}{429 \pi ^{8/3}}\\
&\quad+\frac{16384}{1485}B_6^{R|R^4}+\frac{950272}{6435}B_7^{R|R^4}-\frac{131396796416}{467137125}B_8^{R|R^4}\,,\\
(\lambda^{R|R^4}_{(A,+)_2})^2&=-\frac{148820650360832 \left(\frac{2}{3}\right)^{2/3}}{2786875 \pi ^{14/3}}-\frac{229076375699456 \left(\frac{2}{3}\right)^{2/3}}{56581525 \pi ^{8/3}}+\frac{67108864}{557375}B_8^{R|R^4}\,,\\
(\lambda^{R|R^4}_{(A,2)_1})^2&=\frac{3402914332672 \left(\frac{2}{3}\right)^{2/3}}{218295 \pi ^{14/3}}-\frac{794568949760 \left(\frac{2}{3}\right)^{2/3}}{29393 \pi ^{8/3}}\\
&\quad+\frac{131072}{1155}B_6^{R|R^4}+\frac{21889024}{15015}B_7^{R|R^4}-\frac{3848847491072}{1089986625}B_8^{R|R^4}\,,\\
(\lambda^{R|R^4}_{(A,2)_3})^2&=-\frac{3672876448219136 \left(\frac{2}{3}\right)^{2/3}}{2546775 \pi ^{14/3}}-\frac{614077244112896 \left(\frac{2}{3}\right)^{2/3}}{6292363 \pi ^{8/3}}+\frac{268435456}{121275}B_8^{R|R^4}\,,\\
\gamma_{2,0}^{R|R^4}&=\frac{5112797289066496 \left(\frac{2}{3}\right)^{2/3}}{45045 \pi ^{14/3}}+\frac{359760199680 \left(\frac{2}{3}\right)^{2/3} }{46189 \pi ^{8/3}}\\
&\quad-192B_4^{R|R^4}+\frac{15360}{11}B_6^{R|R^4}+\frac{192000}{11}B_7^{R|R^4}-\frac{18059264}{1521}B_8^{R|R^4}\,,\\
\gamma_{2,2}^{R|R^4}&=\frac{12875118112768 \left(\frac{2}{3}\right)^{2/3}}{1365 \pi ^{14/3}}+\frac{162643071467520 \left(\frac{2}{3}\right)^{2/3} }{96577 \pi ^{8/3}}\\
&\quad-1536B_6^{R|R^4}-18432B_7^{R|R^4}+\frac{509591552}{12675}B_8^{R|R^4}\,,\\
\gamma_{2,4}^{R|R^4}&=\frac{1806876913664 \left(\frac{2}{3}\right)^{2/3}}{63 \pi ^{14/3}}+\frac{709927895040\left(\frac{2}{3}\right)^{2/3} }{391 \pi ^{8/3}}-32768B_8^{R|R^4}\,,\\
}
where note that $(\lambda^{R|R^4}_{(B,2)})^2=5(\lambda^{R|R^4}_{(B,+)})^2$ as expected from \eqref{1drel}.
\section{Fixing 1-loop contact terms}
\label{1loopContact}
So far we computed the 1-loop Mellin amplitudes $M^{R|R}(s,t)$, $M^{R|R^4}(s,t)$, and $M^{R^4|R^4}(s,t)$, but we did not fix the polynomial-in-$s,t$ contact term ambiguities that appear in \eqref{M2222} at orders $c_T^{-2}$, $c_T^{-\frac83}$, and $c_T^{-\frac{10}{3}}$, respectively. It is necessary to fix these contact terms if we want to extract the low spin CFT data at these orders that is affected by these ambiguities, as summarized by Table \ref{resultList}. We will fix the contact term ambiguities for $M^{R|R}(s,t)$ and $M^{R|R^4}(s,t)$ using two methods. First, we will propose a unique analytic continuation of Lorentzian inversion below the spins where it was shown to converge in the previous section, which will allow us to fix all CFT data at order $c_T^{-2}$, and all but the CFT data affected by the $B^{R|R^4}_4M^4$ contact term at order $c_T^{-\frac83}$. We will then use the two localization constraints from \cite{Chester:2018aca,Binder:2018yvd} to confirm these results at order $c_T^{-2}$, as well as fix $B^{R|R^4}_4$ and give an additional nontrivial consistency check. Note that at order $c_T^{-2}$, we will be able to extract all CFT data even though we will not write down an explicit Mellin amplitude, because we have not yet fixed all the coefficients $k_i$ in \eqref{MellinRR} that are in principle fixed by the superconformal Ward identity. Also, we cannot similarly analyze the order $c_T^{-\frac{10}{3}}$ term yet, because it receives contributions not only from $M^{R^4|R^4}(s,t)$ but also from $M^{R|D^6R^4}(s,t)$, which has not yet been computed.
\subsection{Analytic continuation of Lorentzian inversion}
\label{analyticCon}
Let us begin by discussing the inversion integral \eqref{ApInversion} for the $[0040]$ channel, which we use to compute $\lambda^2_{(A,+)_\ell}$ for even $\ell$. For $R|R$, we can see from the explicit expression for the DD in the attached \texttt{Mathematica} file that it has the small $\bar z$ expansion
\es{smallzb}{
\text{dDisc}[ {\cal G}_{R|R}^{[0040]}(z\bar z,1-\bar z)\vert_z ]=\frac{3072}{\pi {\bar z}^{3/2}}-\frac{61952}{27 \pi \sqrt{\bar z}}+\dots\,.
}
Since the measure in \eqref{ApInversion} scales as ${\bar z}^{\ell+1}$, we see that this integral converges for $\ell>-\frac12$, which allowed us to compute all the $(\lambda^{R|R}_{(A,+)_\ell})^2$ for even $\ell\geq0$ in the previous section, but did not allow us to compute $(\lambda^{R|R}_{(B,+)})^2=\frac{16}{3}(\lambda^{R|R}_{(A,+)_{-2}})^2$ as given by \eqref{BpfromAp}. We can uniquely analytically continue \eqref{ApInversion} to $\ell=-2$ by writing it as
\es{ApInversion2}{
\lambda^2_{(A,+)_\ell} &=\frac{12 (2 \ell+5) \Gamma (\ell+3)^4}{\Gamma \left(\ell+\frac{5}{2}\right)^2
\Gamma \left(\ell+\frac{7}{2}\right)^2} \Bigg[\int_0^1 \frac{d \bar z}{\bar z} g_{\ell+4,\ell+2}(\bar z)\Big[ \text{dDisc}[ {\cal G}^{[0040]}(z\bar z,1-\bar z)\vert_z ]\\
&\quad-\frac{3072}{\pi} \left(\frac{1-\bar z}{\bar z}\right)^{3/2}-\frac{62464}{27 \pi } \sqrt{\frac{1-\bar z}{\bar z}} \Big]+\frac{3072}{\pi}{\bf f}(\ell,3/2)+\frac{62464}{27 \pi } {\bf f}(\ell,1/2)\Bigg]\,,
}
where we define the analytically continued integral
\es{analInt}{
\int_0^1 \frac{d \bar z}{\bar z} g_{\ell+4,\ell+2}(\bar z)\left(\frac{1-\bar z}{\bar z}\right)^{p}=\frac{\Gamma (2 (\ell+3)) \Gamma (p+1)^2 \Gamma (\ell-p+2)}{\Gamma (\ell+3)^2 \Gamma (\ell+p+4)}\,,
}
which can be computed using the integral expression for the hypergeometric function in the lightcone block \eqref{lightconeBlock}. The explicit $\bar z$ integral in \eqref{ApInversion2}, as well as ${\bf f}(\ell,3/2)$ and ${\bf f}(\ell,1/2)$, is now convergent for $\ell<-\frac12$, which we can compute at high precision for $\ell=-2$ to get
\es{BpRRfinal}{
(\lambda^{R|R}_{(B,+)})^2=\frac{16}{3}(\lambda^{R|R}_{(A,+)_{-2}})^2=\frac{32768}{45 \pi ^2}-\frac{81920}{9 \pi ^4}\,.
}
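As a quick sanity check on the pole structure of the analytically continued integral, one can evaluate the Gamma-function expression on the right-hand side of \eqref{analInt} numerically (a minimal sketch in Python; the function \texttt{F} below is just that expression, and is not notation used elsewhere in this paper). It is finite at $\ell=-2$ for the half-integer powers $p=3/2,\,1/2$ appearing above, while for integer $p=1$ the factor $\Gamma(\ell-p+2)$ develops a pole at $\ell=-2$:

```python
# Sketch: evaluate the closed-form expression from the analytically
# continued integral,
#   F(l, p) = Gamma(2(l+3)) Gamma(p+1)^2 Gamma(l-p+2)
#             / (Gamma(l+3)^2 Gamma(l+p+4)),
# to check that it is finite at l = -2 for p = 3/2, 1/2 but singular
# (via Gamma(l+1)) at l = -2 for p = 1.
import math

def F(l, p):
    return (math.gamma(2 * (l + 3)) * math.gamma(p + 1) ** 2
            * math.gamma(l - p + 2)
            / (math.gamma(l + 3) ** 2 * math.gamma(l + p + 4)))

print(F(-2, 1.5), F(-2, 0.5))    # finite: 2*pi/5 and -2*pi/3
for eps in (1e-2, 1e-4, 1e-6):   # approaching l = -2 with p = 1
    print(F(-2 + eps, 1))        # blows up like -1/(2*eps)
```

This pole at integer $p$ reflects the divergence of the corresponding $\bar z$ integral, which is the origin of the obstruction encountered for the $R|R^4$ continuation below.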
We can similarly analytically continue the inversion integrals in the $[0120]$ \eqref{0120} and $[0200]$ \eqref{0200} channels to compute $(\lambda^{R|R}_{(B,2)})^2=4(\lambda^{R|R}_{(A,2)_{-1}})^2$, which is related to $(\lambda^{R|R}_{(B,+)})^2$ by a factor of 5, as expected from \eqref{1drel}; this is evidence that our analytic continuation respects superconformal symmetry. The analytic continuation of the inversion integral \eqref{anom2} for the anomalous dimension then gives
\es{A0RRfinal}{
\gamma_{2,0}^{R|R}=\frac{46224640}{9 \pi ^4}-\frac{117698560}{429 \pi ^2}\,,
}
which is the only other CFT data that was affected by the $B_4^{R|R}M^4(s,t)$ contact term, and so could not be computed in the previous section.
We can then analytically continue the inversion integrals in the same way for $R|R^4$. In this case we find that the small $\bar z$ expansion of the DD includes a term ${\bar z}^{-1}$, which gives a ${\bf f}(\ell,1)$ term after analytic continuation. From \eqref{analInt}, we see that this ${\bf f}(\ell,1)$ has a logarithmic divergence at $\ell=-2$, so we can only analytically continue for $\ell>-2$, which allows us to compute all $(\lambda^{R|R^4}_{(A,+)_\ell})^2$ for even $\ell\geq0$, but not $(\lambda^{R|R^4}_{(B,+)})^2=\frac{16}{3}(\lambda^{R|R^4}_{(A,+)_{-2}})^2$. We see a similar pattern in the other inversion integrals, where we can compute all CFT data except $(\lambda^{R|R^4}_{(B,2)})^2$ and $\gamma^{R|R^4}_{2,0}$, which would be affected by the $B_4^{R|R^4}M^4(s,t)$ contact term. The results for the other CFT data are\footnote{Curiously, the results for the anomalous dimensions for $R|R^4$ become monotonic in spin at $\ell=6$, which was the same value as for $R|R$ discussed above.}
\es{RR4final3}{
(\lambda^{R|R^4}_{(A,+)_0})^2&=-\frac{269877248 \left(\frac{2}{3}\right)^{2/3}}{45 \pi ^{14/3}}-\frac{134217728 \left(\frac{2}{3}\right)^{2/3}}{429 \pi ^{8/3}}\,,\\
(\lambda^{R|R^4}_{(A,+)_2})^2&=-\frac{7322684358656 \left(\frac{2}{3}\right)^{2/3}}{91875 \pi ^{14/3}}-\frac{229076375699456 \left(\frac{2}{3}\right)^{2/3}}{56581525
\pi ^{8/3}}\,,\\
(\lambda^{R|R^4}_{(A,2)_1})^2&=-\frac{167914766336 \left(\frac{2}{3}\right)^{2/3}}{315 \pi ^{14/3}}-\frac{794568949760 \left(\frac{2}{3}\right)^{2/3}}{29393 \pi
^{8/3}}\,,\\
(\lambda^{R|R^4}_{(A,2)_3})^2&=-\frac{1634776490442752 \left(\frac{2}{3}\right)^{2/3}}{848925 \pi ^{14/3}}-\frac{614077244112896
\left(\frac{2}{3}\right)^{2/3}}{6292363 \pi ^{8/3}}\,,\\
\gamma_{2,2}^{R|R^4}&=\frac{166203518976 \left(\frac{2}{3}\right)^{2/3}}{5 \pi ^{14/3}}+\frac{162643071467520 \left(\frac{2}{3}\right)^{2/3}}{96577 \pi ^{8/3}}\,,\\
\gamma_{2,4}^{R|R^4}&=\frac{2257848479744 \left(\frac{2}{3}\right)^{2/3}}{63 \pi ^{14/3}}+\frac{709927895040 \left(\frac{2}{3}\right)^{2/3}}{391 \pi ^{8/3}}\,.\\
}
We can compare this against the CFT data in \eqref{RR4final2} that was extracted from the explicit Mellin amplitude, which fixes the coefficients
\es{fixB}{
B_6^{R|R^4}=-\frac{128720195584 \left(\frac{2}{3}\right)^{2/3}}{819 \pi ^{14/3}}\,,\qquad B_7^{R|R^4}=\frac{775420813312 \left(\frac{2}{3}\right)^{2/3}}{68445 \pi ^{14/3}}\,,\qquad B_8^{R|R^4}=-\frac{655360 \left(\frac{2}{3}\right)^{2/3}}{3 \pi ^{14/3}}\,.
}
We cannot yet fix the $B_4^{R|R^4}$ coefficient, because the analytic continuation of inversion did not converge for the low spin data affected by $M^4(s,t)$.
\subsection{Supersymmetric localization}
\label{loc}
We can also fix the Mellin amplitudes using the two localization constraints in \cite{Chester:2018aca,Binder:2018yvd}. The first constraint is simply the values of the short OPE coefficients $\lambda^2_{(B,+)}$ and $\lambda^2_{(B,2)}$, which are given to all orders in $1/c_T$ in \eqref{B2Bp} and impose just one independent constraint on the 4-point function due to the relation \eqref{1drel}. For $R|R$, the localization values exactly match the prediction in \eqref{BpRRfinal} from the analytically continued Lorentzian inversion formula, which independently fixes the CFT data affected by $M^4(s,t)$ without needing to assume the conjectured analytic continuation, and so gives a nontrivial check on that conjecture. For $R|R^4$ we use this constraint to fix the last contact term ambiguity
\es{lastB}{
B_4^{R|R^4} = \frac{65229926487808 \left(\frac{2}{3}\right)^{2/3}}{135135 \pi ^{14/3}}\,,
}
which we can then use to compute the last remaining unfixed CFT datum
\es{lastAnom}{
\gamma^{R|R^4}_{2,0}=\frac{25509449728 \left(\frac{2}{3}\right)^{2/3}}{15 \pi ^{14/3}}+\frac{359760199680 \left(\frac{2}{3}\right)^{2/3}}{46189 \pi ^{8/3}}\,.
}
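As a consistency check (a sketch using exact rational arithmetic), substituting the fixed contact-term coefficients $B_4^{R|R^4}$, $B_6^{R|R^4}$, $B_7^{R|R^4}$, and $B_8^{R|R^4}$ into the expression for the spin-0 anomalous dimension in \eqref{RR4final2} reproduces the closed-form value in \eqref{lastAnom} exactly; we track only the rational coefficients multiplying $(2/3)^{2/3}\pi^{-14/3}$, since the $\pi^{-8/3}$ pieces agree by inspection:

```python
# Exact check: plug the fixed B_4, B_6, B_7, B_8 into the Mellin-space
# expression for the spin-0 anomalous dimension and compare with the
# closed-form value. Only the rational coefficients of
# (2/3)^(2/3) / pi^(14/3) are tracked.
from fractions import Fraction as F

B4 = F(65229926487808, 135135)   # fixed by localization
B6 = F(-128720195584, 819)       # fixed by matching inversion results
B7 = F(775420813312, 68445)
B8 = F(-655360, 3)

gamma20 = (F(5112797289066496, 45045)
           - 192 * B4
           + F(15360, 11) * B6
           + F(192000, 11) * B7
           - F(18059264, 1521) * B8)
print(gamma20)   # 25509449728/15, matching the closed-form coefficient
```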
The second localization constraint involves a nontrivial integral \cite{Binder:2019mpb}:
\es{DerSimpTwoMassesFinal}{
&\frac{\partial \log Z}{\partial m_+^2 \partial m_-^2} \Big\vert_{m_\pm=0}= \frac{\pi^2 c_T^2}{2^{11}}I_{+-}[{\mathcal S}^i]\,,\\
&\qquad\quad\;\;\, I_{+-}[{\cal S}^i] \equiv \int \frac{ds\ dt}{(4\pi i)^2} \frac{2\sqrt{\pi}}{(2-t)(s+t-2)}\mathfrak{M}_1(s,t) \\
&\qquad\qquad\quad\times \Gamma \left[1-\frac{s}{2}\right] \Gamma \left[\frac{s+1}{2}\right] \Gamma \left[1-\frac{t}{2}\right] \Gamma \left[\frac{t-1}{2}\right] \Gamma \left[\frac {s+t-2}{2}\right] \Gamma \left[\frac{3-s-t}2\right]\,,
}
where $\mathfrak{M}_1(s,t)$ is the first element of the Mellin amplitude basis
\es{Mbasis}{
M(s,t;\sigma,\tau) = \mathfrak{M}_1+\sigma^2 \mathfrak{M}_2+\tau^2 \mathfrak{M}_3+\sigma\tau \mathfrak{M}_4+\tau \mathfrak{M}_5+\sigma \mathfrak{M}_6\,,
}
and this mixed mass derivative of the partition function was computed to all orders in $1/c_T$ in \cite{Binder:2018yvd} for $k=2$ ABJ(M):
\es{Z}{
\frac{\partial \log Z}{\partial m_+^2 \partial m_-^2} \Big\vert_{m_\pm=0}=-\frac{\pi ^2}{64
c_T}+\frac{5 \pi ^{4/3} \left(\frac23\right)^{\frac23}}{16c_T^{5/3}}-\frac{5}{12
c_T^2}-\frac{4 \left(\frac{2\pi^2}{3}\right)^{\frac13}}{3c_T^{7/3}}+\frac{91
\left(\frac{2}{3 \pi }\right)^{2/3}}{9c_T^{8/3}}+O(c_T^{-3})\,.
}
For the Mellin amplitudes that appear at order $c_T^{-\frac83}$ in \eqref{M2222} we compute the integrals in \eqref{DerSimpTwoMassesFinal} to get
\es{ints}{
I_{+-}[M^4] &= \frac{8 \pi ^2}{7}\,,\qquad I_{+-}[M^6] =-\frac{448 \pi ^2}{33}\,,\qquad I_{+-}[M^7] = -\frac{529472 \pi ^2}{3003}\,,\\
I_{+-}[M^8] &= \frac{49716397568 \pi ^2}{342567225}\,,\qquad I_{+-}[M^{R|R^4}]= -\frac{1861955980828672 \left(\frac{2}{3}\right)^{2/3}}{2837835 \pi ^{8/3}}\,,
}
where for the polynomial Mellin amplitudes we used the fact that they are all proportional to $(2-t)(s+t-2)$, as well as Barnes' first lemma
\es{barnes}{
\int_{-i\infty}^{i\infty}\frac{ds}{2\pi i}\Gamma(a+s)\Gamma(b+s)\Gamma(c-s)\Gamma(d-s) = \frac{\Gamma(a+c)\Gamma(b+d)\Gamma(b+c)\Gamma(a+d)}{\Gamma(a+b+c+d)} \,,
}
while for $M^{R|R^4}(s,t)$ we instead numerically computed the integral of the closed form expression in the \texttt{Mathematica} file to high precision. Plugging \eqref{ints} and \eqref{Z} into \eqref{DerSimpTwoMassesFinal}, we find that $M^{R|R^4}$ with the values of $B_i^{R|R^4}$ fixed in \eqref{lastB} and \eqref{fixB} precisely satisfies the constraint, which is a nontrivial check that the analytically continued Lorentzian inversion formula gives the correct CFT data and thus fixes the contact term ambiguities at 1-loop.
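Barnes' first lemma \eqref{barnes} is easy to spot-check numerically. For instance, setting $a=b=c=d=1$ and taking the contour along the imaginary axis, $s=it$, the integrand reduces to $|\Gamma(1+it)|^4=(\pi t/\sinh \pi t)^2$, and the lemma predicts that the integral equals $\Gamma(2)^4/\Gamma(4)=\frac16$ (a minimal sketch in pure Python):

```python
# Spot check of Barnes' first lemma at a = b = c = d = 1: along s = i*t the
# integrand is |Gamma(1+it)|^4 = (pi t / sinh(pi t))^2, and the lemma gives
#   (1/2pi) * Integral_{-inf}^{inf} (pi t / sinh(pi t))^2 dt = 1/6.
import math

def integrand(t):
    if t == 0.0:
        return 1.0                       # limiting value at t = 0
    x = math.pi * t
    return (x / math.sinh(x)) ** 2

# Simple trapezoidal rule; the integrand decays like exp(-2 pi |t|), so a
# modest cutoff T already gives many digits.
N, T = 200_000, 20.0
h = 2 * T / N
total = sum(integrand(-T + k * h) for k in range(N + 1))
total -= 0.5 * (integrand(-T) + integrand(T))
lhs = h * total / (2 * math.pi)
rhs = math.gamma(2) ** 4 / math.gamma(4)
print(lhs, rhs)   # both approximately 1/6
```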
Finally, we can use both localization constraints and the explicit integrals in \eqref{ints} to fix two of the four coefficients $B_i^{D^8R^4}$ in the tree level $D^8R^4$ term in \eqref{M2222} to get
\es{D8R4answer}{
B^{D^8R^4}_4=-\frac{3200B^{D^8R^4}_7 }{429}-\frac{238578176B^{D^8R^4}_8}{1957527}\,,\quad B^{D^8R^4}_6=-\frac{177B^{D^8R^4}_7}{13}+\frac{10356296B^{D^8R^4}_8 }{494325}\,,
}
which will be useful in future attempts to fix this amplitude by independently computing its CFT data.
\section{Numerical bootstrap}
\label{numerics}
In the previous sections we studied $\langle2222\rangle$ for the $k=2$ ABJ(M) theory in the large $c_T\sim N^{\frac32}$ limit to several orders. In this section, we will study this correlator non-perturbatively using the numerical conformal bootstrap, and compute bounds on CFT data as a function of $c_T$ as was done in previous work \cite{Chester:2014fya,Chester:2014mea,Agmon:2017xes,Agmon:2019imm}, but now to much higher numerical accuracy using subsequent technical improvements to the bootstrap software \cite{Landry:2019qug}. Previously, the tree level supergravity correction \cite{Zhou:2017zaw,Chester:2018lbz} was found to saturate the lower bounds \cite{Chester:2014mea,Agmon:2017xes} for all CFT data. These lower bounds were conjectured to correspond to the $k=2$ ABJ(M) theory in \cite{Agmon:2017xes}, because they were found to be approximately saturated by the values of the short $(B,2)$ and $(B,+)$ OPE coefficients as computed to all orders in $1/N$ in \cite{Agmon:2017xes}. We now find that the $R|R$ correction continues to saturate these lower bounds for those semishort $(A,2)_\ell$ and $(A,+)_\ell$ OPE coefficients where the asymptotic large $c_T$ expansion is well converged.
\subsection{Setup}
\label{setupNum}
We start by briefly reviewing how the numerical bootstrap can be applied to the stress-tensor multiplet four-point function in $\mathcal{N}=8$ theories, for further details see \cite{Chester:2014fya}. Invariance of the four-point function \eqref{4point} as expanded in superblocks \eqref{SBDecomp} under $1\leftrightarrow3$ crossing \eqref{crossing} implies crossing equations of the form
\es{crossingEq}{
\sum_{{\cal M}\, \in\,\{\text{Id},\,\text{Stress},\,(B,+),\,(B,2),\,(A,+)_\ell,\,(A,2)_\ell,\,(A,0)_{\Delta,\ell}\} } \lambda_{\cal M}^2\, \vec{V}_{{\cal M}} = 0 \,,
}
where ${\cal M}$ ranges over all the superconformal multiplets listed in Table~\ref{opemult}, $\vec{V}_{{\cal M}} $ are functions of superconformal blocks, and $\lambda_{\cal M}^2$ are squares of OPE coefficients that must be positive by unitarity. As in \cite{Chester:2014fya}, we normalize the OPE coefficient of the identity multiplet to $\lambda_{\text{Id}} = 1$, and parameterize our theories by the value of $\lambda_\text{Stress}$, which is related to $c_T$ through \eqref{cTolam} for $p=2$.
To find upper/lower bounds on a given OPE coefficient of a protected multiplet ${\cal M}'$ that appears in the ${\cal O}_{\text{Stress}} \times {\cal O}_{\text{Stress}}$ OPE, we consider linear functionals $\alpha$ satisfying
\es{CondOPE}{
&\alpha(\vec{V}_{\cal M'}) = s \,, \;\quad\qquad \text{$s=1$ for upper bounds, $s=-1$ for lower bounds} \,, \\
&\alpha(\vec{V}_{\cal M}) \geq 0 \,, \;\;\quad\qquad \text{for all short and semi-short ${\cal M} \notin \{ \text{Id}, \text{Stress}, {\cal M}' \}$} \,, \\
&\alpha(\vec{V}_{(A,0)_{\Delta,\ell}}) \geq 0 \,, \qquad \text{for all $\ell$ with $\Delta\geq \ell+1$} \,.\\
}
If such a functional $\alpha$ exists, then this $\alpha$ applied to \eqref{crossingEq} along with the positivity of all $\lambda_{\cal M}^2$ except, possibly, for that of $\lambda_{{\cal M}'}^2$ implies that
\es{UpperOPE}{
&\text{if $s=1$, then}\qquad \lambda_{{\cal M}'}^2 \leq - \alpha (\vec{V}_\text{Id}) -\frac{256}{c_T}\alpha( \vec{V}_\text{Stress} ) \,,\\
&\text{if $s=-1$, then}\qquad \lambda_{{\cal M}'}^2 \geq \alpha (\vec{V}_\text{Id}) + \frac{256}{c_T} \alpha( \vec{V}_\text{Stress} ) \,.\\
}
Note that we can get both upper/lower bounds because the protected multiplets are isolated from the continuum of operators, unlike the long multiplets $(A,0)_{\Delta,\ell}$, for which we can only compute upper bounds on the OPE coefficients. To obtain the most stringent upper/lower bound on $\lambda_{{\cal M}'}^2$, one should then minimize/maximize the RHS of \eqref{UpperOPE} under the constraints \eqref{CondOPE}. In the above algorithm, we fixed the SCFT by inputting the value of $c_T$, which was computed to all orders in $1/N$ for ABJ(M) in \cite{Agmon:2017xes}. We can further fix the theory by also putting in the values of the short OPE coefficients $\lambda^2_{(B,2)}$ and $\lambda^2_{(B,+)}$, which were also computed to all orders in $1/N$. We should then remove these operators from the second line of \eqref{CondOPE} and put them on the RHS of \eqref{UpperOPE} with their explicit OPE coefficients, just like the stress tensor multiplet.
The numerical implementation of the minimization/maximization problem described depends on five parameters: the number of derivatives parameter $\Lambda$ used to construct $\alpha$, the range of spins of multiplets up to $\ell_\text{max}$ that we consider, the order $r_\text{max}$ to which we expand blocks, the parameter $\kappa$ that parametrizes how many poles we keep when approximating blocks, and the precision of the solver {\tt SDPB} \cite{Simmons-Duffin:2015qma}. We used $\Lambda=83$,\footnote{This is equivalent to $n_\text{max}=42$, which must be some kind of record for 3d bootstrap!} $\ell_\text{max}=90$, $r_\text{max}=140$, $\kappa=70$, and $1116$ binary digits of precision. The most important parameter is $\Lambda$, which should be compared to the previously highest value $\Lambda=43$ used in \cite{Agmon:2019imm}.
\subsection{Bootstrap bound saturation}
\label{saturation}
\begin{table}[]
\begin{center}
\begin{tabular}{c||c|c|c}
& $\frac{\lambda_\text{Stress}^2}{16}=\frac{16}{c_T}$& $\lambda_{(B,+)}^2$ & $\lambda_{(B,2)}^2$\\
\hline \hline
large $N$ & $0.20952$ & $ 7.36854$ & $7.43343$ \\
exact & $0.20944$ &$7.37115$ & $7.45176$ \\
\hline
\end{tabular}
\caption{Comparison of the large $N$ formulae to the exact values from \cite{Agmon:2019imm} for the OPE coefficients of short operators that appear in $\langle2222\rangle$ for $U(3)_2\times U(3)_{-2}$ ABJM.\label{compTab}}
\end{center}
\end{table}
The large $c_T$ expansion of CFT data is asymptotic, which means that after a few orders the expansion will actually get worse, unless we look at very large values of $c_T$. The larger the value of $c_T$, the more precise our numerics must be to make a meaningful comparison, so we should focus on CFT data for which the asymptotic expansion is still accurate through the lowest few orders. In general, we observe that the convergence of the asymptotic expansion is better for more protected operators. For instance, the short operators $(B,2)$ and $(B,+)$ are the most protected, and their expansion at large $c_T$, computed to all orders in \cite{Agmon:2017xes}, takes the form
\es{B2num}{
\lambda^2_{(B,2)}=10.6667 - 17.6369 \frac{16}{c_T} + 7.26668 \Big[\frac{16}{c_T}\Big]^{\frac53} - 0.384051 \Big[\frac{16}{c_T}\Big]^2 -
3.25726 \Big[\frac{16}{c_T}\Big]^{\frac73} + 1.88158 \Big[\frac{16}{c_T}\Big]^{\frac{8}{3}}+\dots\,,
}
and similarly for $\lambda^2_{(B,+)}$. We expand in $16/c_T$ because the free $\mathcal{N}=8$ theory has $c_T=16$, which makes this a natural quantity. Note that the coefficient of each subsequent order is in general getting smaller, which implies that this asymptotic expansion is expected to remain fairly accurate even at many orders. This expectation is supported by the fact that the all orders in $1/N$ expansion matches the finite $N$ values to the sub-percent level for the $U(3)_2\times U(3)_{-2}$ ABJM theory, as computed in \cite{Agmon:2019imm}\footnote{Actually, in \cite{Agmon:2019imm} the values were computed directly for the interacting sector of the $U(4)_{1}\times U(4)_{-1}$ ABJM theory, but this theory is dual to the $U(3)_2\times U(3)_{-2}$ theory, since in the UV they are both described by $SU(4)\cong SO(6)$ SYM \cite{Gang:2011xp,Agmon:2017lga}.} and reviewed in Table \ref{compTab}. Another piece of evidence is that the all orders expression is close to saturating the lower bound from the numerical bootstrap, as first observed in \cite{Agmon:2017xes} with $\Lambda=43$ accuracy, and now further confirmed with $\Lambda=83$ accuracy in Figure \ref{B2fig}. Note that there is still a discrepancy between the all orders expression in solid red and the lower bound in solid gray, even though the numerics seem well converged, as can be seen from comparing to the old $\Lambda=43$ value in dashed gray. This suggests either that the lower bound is not actually saturated by the $k=2$ ABJ(M) theory, even though it is very close to the curve for a large range of $c_T$ as observed in \cite{Agmon:2017xes}, or that non-perturbative in $c_T$ corrections to the all orders expression for $\lambda^2_{(B,2)}$ account for this small discrepancy.
Since we cannot rule out either possibility at this stage, for subsequent plots we will show bounds where we imposed the values of $\lambda^2_{(B,2)}$ and $\lambda^2_{(B,+)}$, as well as bounds with no assumptions, and as expected these bounds differ by a small amount.
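To illustrate the quality of the truncation concretely (a rough sketch; we evaluate at the large-$N$ value $16/c_T\simeq0.20952$ for the $U(3)_2\times U(3)_{-2}$ theory from Table \ref{compTab}), one can track the partial sums of \eqref{B2num}:

```python
# Partial sums of the truncated large-c_T series for lambda^2_{(B,2)},
# evaluated at the large-N value of 16/c_T for U(3)_2 x U(3)_{-2} ABJM.
x = 0.20952                      # 16/c_T (large-N value from the table)
coeffs = [(10.6667, 0.0), (-17.6369, 1.0), (7.26668, 5.0 / 3.0),
          (-0.384051, 2.0), (-3.25726, 7.0 / 3.0), (1.88158, 8.0 / 3.0)]
partial = 0.0
for c, power in coeffs:
    partial += c * x ** power
    print(power, partial)
# The final partial sum lands within a percent of the exact finite-N value
# 7.45176, and close to the all-orders large-N value 7.43343.
```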
\begin{figure}[]
\begin{center}
\includegraphics[width=0.85\textwidth]{B2plot.pdf}
\caption{Upper and lower bounds on the $\lambda_{(B,2)}^2$ OPE coefficient with $\Lambda=83$ (solid gray) and $\Lambda=43$ (dashed gray) in terms of the stress-tensor coefficient $c_T$ in the large $c_T$ regime. The red dotted line denotes the large $c_T$ expansion to order tree level supergravity $O(c_T^{-1})$, which is independent of $k$, while the red dot-dashed line also includes the tree level $R^4$ correction at order $O(c_T^{-5/3})$ and the red dashed line furthermore includes the 1-loop $R|R$ correction at order $O(c_T^{-2})$, both of which depend on the value $k=2$. The solid red line includes the all orders in $1/c_T$ expression, which only misses non-perturbative in $c_T$ corrections. The blue and brown vertical lines denote the values of $c_T$ for various known $k=2$ ABJM and ABJ theories, respectively, which are summarized in Table \ref{cTValues}. Since there is still a small discrepancy in this very zoomed in plot (relative to \cite{Agmon:2017xes}, which looked at $0\leq16/c_T\leq1$) between the all orders result and the numerical bounds even at very high $\Lambda$, we have imposed the value of $\lambda^2_{(B,2)}$ in subsequent plots, which should further constrain the numerical bounds to match $k=2$ ABJM.}
\label{B2fig}
\end{center}
\end{figure}
The next most protected operators are $(A,+)_\ell$ for even $\ell$ and $(A,2)_\ell$ for odd $\ell$. In the previous sections we computed the large $c_T$ expansion of their OPE coefficients to many orders, which we summarize here with explicit numerical values for each term:
\es{ApNum}{
\lambda^2_{(A,+)_{0}}&=7.11111 + 3.02803 \frac{16}{c_T} + 1.11453 \Big[\frac{16}{c_T}\Big]^2 - 3.04011 \Big[\frac{16}{c_T}\Big]^{\frac73} \\
&+\Big[\frac{9199616}{6435}B_7^{D^8R^4}-\frac{120501022588928}{5138508375}B_8^{D^8R^4}\Big]{c_T^{-\frac{23}{9}}}- 20.4134 \Big[\frac{16}{c_T}\Big]^{\frac83}+\dots\,,\\
\lambda^2_{(A,+)_{2}}&=13.3747 + 3.19665 \frac{16}{c_T} + 0.301165 \Big[\frac{16}{c_T}\Big]^2+\frac{67108864}{557375}B_8^{D^8R^4}{c_T^{-\frac{23}{9}}} - 268.868 \Big[\frac{16}{c_T}\Big]^{\frac83}+\dots\,,\\
\lambda^2_{(A,+)_{4}}&=19.6506 + 3.25967 \frac{16}{c_T} + 0.189594 \Big[\frac{16}{c_T}\Big]^2 + 16.9909 \Big[\frac{16}{c_T}\Big]^{\frac83}+\dots\,,\\
}
and
\es{A2Num}{
\lambda^2_{(A,2)_{1}}&=9.75238 - 6.17281 \frac{16}{c_T} + 8.74961 \Big[\frac{16}{c_T}\Big]^2 - 75.0472 \Big[\frac{16}{c_T}\Big]^{\frac73} \\
&-\Big[ \frac{262144}{3003}B_7^{D^8R^4}+\frac{2766298677248}{2397970575}B_8^{D^8R^4} \Big]{c_T^{-\frac{23}{9}}}- 1797.23 \Big[\frac{16}{c_T}\Big]^{\frac83}+\dots\,,\\
\lambda^2_{(A,2)_{3}}&=16.2118 - 6.43488 \frac{16}{c_T} + 2.11217 \Big[\frac{16}{c_T}\Big]^2 +\frac{268435456}{121275}B_8^{D^8R^4}{c_T^{-\frac{23}{9}}} - 6491.08 \Big[\frac{16}{c_T}\Big]^{\frac83}+\dots\,,\\
\lambda^2_{(A,2)_{5}}&=22.573 - 6.54041 \frac{16}{c_T} + 1.28477 \Big[\frac{16}{c_T}\Big]^2 + 261.161 \Big[\frac{16}{c_T}\Big]^{\frac83}+\dots\,,\\
}
where we included the $D^8R^4$ term at order $c_T^{-\frac{23}{9}}$, which we only know up to two as-yet-unfixed coefficients. As the spin increases, we observe that the $R|R$ term at order $c_T^{-2}$ becomes progressively smaller compared to the tree level supergravity term, which means that we can trust it over a larger range of $c_T$. On the other hand, the $R|R^4$ term at order $c_T^{-\frac83}$ is much bigger than the previous terms, which means that we can only trust it at very large $c_T$. There is also a $D^6R^4$ term at order $c_T^{-\frac73}$ that only affects the lowest spin for each multiplet, and is roughly the same size as the supergravity term.
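The statement about the relative size of the 1-loop term can be made quantitative (a small sketch using the numerical coefficients quoted above): the ratio of the $c_T^{-2}$ coefficient to the $c_T^{-1}$ coefficient decreases steadily with spin in both channels:

```python
# Ratio of the 1-loop R|R coefficient (order c_T^{-2}) to the tree-level
# supergravity coefficient (order c_T^{-1}), from the series quoted above,
# keyed by spin.
ap = {0: (3.02803, 1.11453), 2: (3.19665, 0.301165), 4: (3.25967, 0.189594)}
a2 = {1: (6.17281, 8.74961), 3: (6.43488, 2.11217), 5: (6.54041, 1.28477)}
for name, data in (("(A,+)", ap), ("(A,2)", a2)):
    for spin in sorted(data):
        sugra, one_loop = data[spin]
        print(name, spin, round(one_loop / sugra, 3))
```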
Lastly, the least protected multiplet is the long multiplet $(A,0)_{\Delta,\ell}$. In the previous sections we computed the large $c_T$ expansion of the scaling dimension of the lowest twist operator for the lowest few spins, which we summarize here with explicit numerical values for each term:
\es{A0Num}{
\Delta_{2,0}&=2 - 7.09248 \frac{16}{c_T} - 38.1501 \Big[\frac{16}{c_T}\Big]^{\frac53} + 97.378 \Big[\frac{16}{c_T}\Big]^2 + 21.3758 \Big[\frac{16}{c_T}\Big]^{\frac73}\\
& +\Big[-\frac{222720}{143}B_7^{D^8R^4}+\frac{18902167552}{1087515}B_8^{D^8R^4}\Big]{c_T^{-\frac{23}{9}}}+
3993.9 \Big[\frac{16}{c_T}\Big]^{\frac83}+\dots\,,\\
\Delta_{4,2}&=4 - 3.12069 \frac{16}{c_T} - 65.3944 \Big[\frac{16}{c_T}\Big]^2\\
& +\Big[\frac{32256}{13}B_7^{D^8R^4}+\frac{1322266624}{164775}B_8^{D^8R^4}\Big]{c_T^{-\frac{23}{9}}} + 112035. \Big[\frac{16}{c_T}\Big]^{\frac83}+\dots\,,\\
\Delta_{6,4}&=6 - 2.06985 \frac{16}{c_T} - 0.645987 \Big[\frac{16}{c_T}\Big]^2 -32768B_8^{D^8R^4}{c_T^{-\frac{23}{9}}} + 120791. \Big[\frac{16}{c_T}\Big]^{\frac83}+\dots\,.\\
}
The asymptotic expansion for this quantity seems very poor, as the coefficient of each subsequent term is growing rapidly.
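This rapid growth is easy to see numerically (a rough sketch; we evaluate the magnitudes of the known terms of $\Delta_{2,0}$ at the representative value $16/c_T=0.1$, i.e.\ $c_T=160$, in the range of the theories in Table \ref{cTValues}, dropping the unknown $D^8R^4$ piece):

```python
# Magnitudes of successive known terms in the large-c_T expansion of
# Delta_{2,0} at 16/c_T = 0.1, dropping the unknown D^8R^4 contribution.
x = 0.1
terms = [(-7.09248, 1.0), (-38.1501, 5.0 / 3.0), (97.378, 2.0),
         (21.3758, 7.0 / 3.0), (3993.9, 8.0 / 3.0)]
for c, power in terms:
    print(power, abs(c * x ** power))
# The last known term dominates all earlier ones, so the truncated series
# cannot be trusted at this value of c_T.
```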
\begin{table}%
\begin{center}
\begin{tabular}{|l|c|c|}
\hline
\multicolumn{1}{|c|}{${\cal N} = 8$ SCFT} & $c_T$ & $\frac{\lambda_\text{Stress}^2}{16}=\frac{16}{c_T}$ \\
\hline
$\;\; U(4)_2 \times U(4)_{-2}$ \; ABJM & $126.492$ & $0.12649$\\
$\;\; U(5)_2 \times U(5)_{-2}$ \; ABJM & $172.058$ & $0.0929919$\\
$\;\; U(6)_2 \times U(6)_{-2}$ \; ABJM & $221.97$ & $0.0720818$\\
$\;\; U(7)_2 \times U(7)_{-2}$ \; ABJM & $275.879$ & $0.0579965$\\
$\;\; U(4)_2 \times U(5)_{-2}$ \; ABJ & $115.831$ & $0.138133$\\
$\;\; U(5)_2 \times U(6)_{-2}$ \; ABJ & $160.243$ & $0.0998481$\\
$\;\; U(6)_2 \times U(7)_{-2}$ \; ABJ & $209.105$ & $0.0765165$\\
$\;\; U(7)_2 \times U(8)_{-2}$ \; ABJ & $262.043$ & $0.0610587$\\
\hline
\end{tabular}
\end{center}
\caption{Several values of $c_T$ and $\lambda_\text{Stress}^2/16$ for $k=2$ ABJM or ABJ theories, as computed from the all orders in $1/N$ formulae in \cite{Agmon:2017xes}.}\label{cTValues}
\end{table}%
These observations motivate us to focus on comparing to the numerical bootstrap for $\lambda^2_{(A,2)_\ell}$ with $\ell>1$ and $\lambda^2_{(A,+)_\ell}$ with $\ell>0$ up to $O(c_T^{-2})$, where we can be reasonably confident that the asymptotic expansion is well converged for a moderately large range of $c_T$. In Figures \ref{Apfig} and \ref{A2fig} we compare the large $c_T$ expansion of this CFT data to non-perturbative lower bounds from the numerical bootstrap in the large $c_T$ regime, which includes many physical examples of $k=2$ ABJ(M) theories, as summarized in Table \ref{cTValues}. As discussed above, we show bounds where we input the values of $\lambda^2_{(B,2)}$ and $\lambda^2_{(B,+)}$ using their all orders in $1/c_T$ expressions, as well as bounds with no assumptions. The discrepancy between the two types of bound is small: both are well approximated by the $O(c_T^{-2})$ analytic expressions for the entire range of $c_T$ that we looked at, and the discrepancy between the two kinds of bounds is smaller than the improvement of the $c_T^{-2}$ term relative to the $O(c_T^{-1})$ approximation. For the more protected $\frac14$-BPS multiplet $(A,+)_\ell$, the correction to tree level supergravity is quite small, so the 1-loop correction is somewhat harder to see, but for the less protected $\frac18$-BPS multiplet $(A,2)_\ell$ we can very clearly see the improvement of the 1-loop correction over tree level supergravity. We also show upper bounds, which after imposing $\lambda^2_{(B,2)}$ and $\lambda^2_{(B,+)}$ become very close to the lower bounds, suggesting that the theory is almost completely fixed.
\begin{figure}[]
\begin{center}
\includegraphics[width=0.85\textwidth]{Ap2.pdf}
\includegraphics[width=0.85\textwidth]{Ap4.pdf}
\caption{Upper and lower bounds on the $\lambda_{(A,+)_\ell}^2$ OPE coefficient for $\ell=2$ (top) and $\ell=4$ (bottom) in terms of the stress-tensor coefficient $c_T$ in the large $c_T$ regime. The black solid lines are with the all orders in $1/c_T$ values of $\lambda_{(B,+)}^2$ and $\lambda_{(B,2)}^2$ for $k=2$ ABJ(M) input into the bootstrap, while the gray solid lines are without any assumptions; note that the allowed region for the black bounds is much smaller than for the gray bounds. The red dotted line denotes the large $c_T$ expansion to tree level supergravity order $O(c_T^{-1})$, which is independent of $k$. The red dashed line also includes the 1-loop $R|R$ correction at order $O(c_T^{-2})$, which depends on the value $k=2$ and improves the saturation of the lower bound relative to $O(c_T^{-1})$. The blue and brown vertical lines denote the values of $c_T$ for various known $k=2$ ABJM and ABJ theories, respectively, which are summarized in Table \ref{cTValues}. These plots were made with $\Lambda=83$.}
\label{Apfig}
\end{center}
\end{figure}
\begin{figure}[]
\begin{center}
\includegraphics[width=0.85\textwidth]{A23.pdf}
\includegraphics[width=0.85\textwidth]{A25.pdf}
\caption{Upper and lower bounds on the $\lambda_{(A,2)_\ell}^2$ OPE coefficient for $\ell=3$ (top) and $\ell=5$ (bottom) in terms of the stress-tensor coefficient $c_T$ in the large $c_T$ regime. The black solid lines are with the all orders in $1/c_T$ values of $\lambda_{(B,+)}^2$ and $\lambda_{(B,2)}^2$ for $k=2$ ABJ(M) input into the bootstrap, while the gray solid lines are without any assumptions; note that the allowed region for the black bounds is much smaller than for the gray bounds. The red dotted line denotes the large $c_T$ expansion to tree level supergravity order $O(c_T^{-1})$, which is independent of $k$. The red dashed line also includes the 1-loop $R|R$ correction at order $O(c_T^{-2})$, which depends on the value $k=2$ and improves the saturation of the lower bound relative to $O(c_T^{-1})$. The blue and brown vertical lines denote the values of $c_T$ for various known $k=2$ ABJM and ABJ theories, respectively, which are summarized in Table \ref{cTValues}. These plots were made with $\Lambda=83$.}
\label{A2fig}
\end{center}
\end{figure}
\section{Conclusion}
\label{conc}
There are three main results of this work. Firstly, we computed the 1-loop terms $R|R$, $R|R^4$, and $R^4|R^4$ for $k=2$ ABJ(M) theory up to contact term ambiguities, and checked that they match the relevant terms in the 11d M-theory S-matrix in the flat space limit. Secondly, we fixed the contact terms for $R|R$ and $R|R^4$ by combining two constraints from supersymmetric localization with a conjectured analytic continuation of the Lorentzian inversion formula, where localization confirms the inversion results for $R|R$ and provides a nontrivial check for $R|R^4$. Finally, we found that the $R|R$, i.e. $c_T^{-2}$, correction to semishort CFT data saturates the numerical bootstrap bounds for $k=2$ ABJ(M) in the large $c_T$ regime.
One could try to perform the same analytic continuation of the Lorentzian inversion formula for the other maximally supersymmetric holographic CFTs that have been studied at 1-loop: 4d $SU(N)$ $\mathcal{N}=4$ SYM dual to Type IIB string theory on $AdS_5\times S^5$, and the 6d $(2,0)$ theory dual to M-theory on $AdS_7\times S^4$ for the $A_{N-1}$ theories and $AdS_7\times S^4/\mathbb{Z}_2$ for the $D_{N}$ theories. In the 4d case, general spin formulae were found for the $R|R$ correction in \cite{Aprile:2017bgs} and the $R|R^4$ correction in \cite{Alday:2018pdi}. The $R|R$ formula can be analytically continued to spin zero, which is the spin affected by the 4d analogue of the $B_4^{R|R}M^4$ contact term in \eqref{M2222}, but it was shown using supersymmetric localization in \cite{Chester:2019pvm} that $B_4$ is nonzero, unlike what we found here in 3d. The $R|R^4$ formula has explicit poles for spins $0,2,4$, which are the spins affected by the 4d analogue of the $O(c_T^{-\frac83})$ contact terms in \eqref{M2222}, so we cannot analytically continue as we did here in 3d. In 6d, one can check that the Lorentzian inversion formula results in \cite{Alday:2018pdi} can be analytically continued to all CFT data for both $R|R$ and $R|R^4$, which is even better than what we observed in 3d, where $R|R^4$ could not be continued to CFT data affected by the $B_4^{R|R}M^4$ contact term. Unlike in 3d, however, in 6d we have no localization results to check whether this conjectured analytic continuation is correct.
The similarity of ABJ(M) and the 6d $(2,0)$ theory, in contrast to $\mathcal{N}=4$ SYM, suggests that the analytically continued Lorentzian inversion formula can be applied to holographic CFTs dual to 11d M-theory, but not to those dual to 10d string theory. One could try to justify this by observing that contact terms must always correspond to even powers of the Planck length, so they can affect 1-loop terms for 10d duals that are also even in the Planck length, but they cannot affect 1-loop terms for 11d duals that are odd in the Planck length. On the other hand, recall that we did not compute $M^{R|R}$ and $M^{R|R^4}$ using explicit Witten diagrams, so the difference between a 1-loop diagram and a contact diagram at the same order in $c_T$ is not entirely clear from our approach. One could tentatively define the 1-loop diagram $M^{R|R}$ as whatever gives a formula for CFT data that is analytic in spin for all spins, as we did in practice, and then define $B_4^{R|R}M^4$ as a putative contact term, whose coefficient we found to be zero. Unfortunately, this description would not make sense for $M^{R|R^4}$, where there is no general spin formula that converges for the spins that contribute to $B_4^{R|R^4}M^4$, so it is hard to distinguish between 1-loop diagrams and contact diagrams at the same order $c_T^{-\frac83}$.
It would be nice to check whether the analytically continued inversion formula at order $c_T^{-\frac{10}{3}}$ can be used to fix all the contact term ambiguities, perhaps when combined with supersymmetric localization, and whether the conjectured results could then be checked against localization. In this work we already computed the $R^4|R^4$ term that contributes at order $c_T^{-\frac{10}{3}}$, but we did not yet compute the $R|D^6R^4$ term that also contributes at this order. To compute this term we would need to know $\langle 22pp\rangle$ for all even $p$ at tree level $D^6R^4$, i.e. order $c_T^{-\frac73}$. Currently we only know this for $p=2$, where we could fix the three coefficients in \eqref{M2222} using the two localization constraints from \cite{Chester:2018aca,Agmon:2017xes} and \cite{Binder:2018yvd,Binder:2019mpb}, as well as comparison to the known term \eqref{SGtoR4} in the M-theory S-matrix in the flat space limit. For $p>2$, we still have the flat space limit constraint, and it is possible that we could generalize to $p>2$ the localization constraint in \cite{Chester:2018aca,Agmon:2017xes}, which is just the value of short operator OPE coefficients that could in principle be computed from a $k=2$ generalization of the 1d theory \cite{Dedushenko:2016jxl}, which is currently only known for $k=1$ ABJM. On the other hand, the second localization constraint \cite{Binder:2018yvd,Binder:2019mpb} is specific to $p=2$, and the $p>2$ correlator also has more coefficients that need fixing than $p=2$, because crossing constraints are weaker for mixed correlators. Perhaps the $p>2$ term could be computed from the known $p=2$ case, as both in principle arise from dimensional reduction of the same $D^6R^4$ term in the flat space 11d effective action of M-theory.
It would also be very interesting if more terms in the analytic large $c_T$ expansion of CFT data could be found to saturate the numerical bootstrap bounds, beyond the $c_T^{-2}$ terms for semishort OPE coefficients that we considered in this work. In particular, if we could successfully match the $c_T^{-\frac83}$ terms computed in this work, then this would allow us to read off the $D^8R^4$ term at the lower order $c_T^{-\frac{23}{9}}$, which could then be used to fix the corresponding unknown term in the 11d M-theory S-matrix, as outlined in \cite{Chester:2018aca}. The coefficients in the asymptotic large $c_T$ expansion start to grow drastically at order $c_T^{-\frac83}$, even for the best behaved case of semishort OPE coefficients, so to accurately compare them to the numerical bootstrap we must look at very large $c_T$, which requires a very accurate numerical bootstrap. In this work we pushed the current numerical bootstrap to very high accuracy, as parameterized by $\Lambda=83$, which is twice the value used in the previous study \cite{Agmon:2019imm}. We found that the current numerical bootstrap has already started to converge, so pushing to higher $\Lambda$ will likely not improve matters. On the other hand, we observed in this work that inputting the short OPE coefficients noticeably improved the bounds, such that the lower bounds actually become very close to the upper bounds for the regime of $c_T$ that we studied, so it is likely that imposing other exact quantities, such as the integrated constraint in \cite{Binder:2018yvd,Binder:2019mpb}, will further improve the accuracy of the bounds, and maybe even fix the theory by having the upper and lower bounds approximately coincide. It is also likely that a third localization constraint can be computed by considering derivatives of the squashed sphere free energy, which was computed to all orders in $1/c_T$ in \cite{Chester:2021gdw} using the localization results in \cite{Hama:2011ea,Imamura:2011wg}.
This additional constraint could both improve the numerical bootstrap and allow us to analytically fix another of the coefficients in \eqref{M2222} for $D^8R^4$, so that only a single remaining coefficient would need to be fixed from numerics.
In this paper we focused on $k=2$ ABJ(M) theory, because it is technically more difficult to compute the $\langle 22pp\rangle$ data for odd $p$ that is needed to study $k=1$ ABJM at 1-loop. In particular, while the $\langle 22pp\rangle$ tree level supergravity correlators for even $p$ could be written in terms of a finite number of $\bar{D}_{r_1,r_2,r_3,r_4}(U,V)$ functions, for odd $p$ we require an infinite number. If we could compute $k=1$ ABJM at 1-loop, then we could perform another nontrivial check of our conjectured analytic continuation of the Lorentzian inversion formula, and also try to compare to numerical bootstrap bounds. The numerical bootstrap for $k=1$ could be more accurate than in the $k=2$ case studied in this work, because for $k=1$ we could use the mixed correlator setup of \cite{Agmon:2019imm}, which only applies to the $k=1$ theory and includes inputs from more short OPE coefficients that can be computed to all orders in $1/c_T$ using the localization results of \cite{Gaiotto:2020vqj}. We look forward to reporting on the $k=1$ case in the future.
Finally, it would be interesting to generalize our 1-loop derivation to $\mathcal{N}=6$ ABJ(M) with $k>2$, which at small $k$ is dual to M-theory on $AdS_4\times S^7/\mathbb{Z}_k$ and at large $k$ is dual to Type IIA string theory on $AdS_4\times\mathbb{CP}^3$. The single trace operators in this case can be either $\frac12$-BPS or $\frac13$-BPS \cite{Dolan:2008vc}, so the unmixing problem would be more complicated. In particular, one would need to derive the superblock expansion of correlators of the various single trace operators to extract the GFFT and tree level data needed to compute the 1-loop terms. So far, the superblock expansion has only been computed for the stress tensor multiplet correlator \cite{Binder:2020ckj}, and it seems difficult to generalize that derivation to arbitrary correlators of other $\frac13$-BPS operators. Instead, it might be easier to compute the relevant CFT data by directly imposing the superconformal Ward identity on 3-point functions, rather than expanding 4-point correlators in superblocks. If one could compute the 1-loop term for general $k$, then in the flat space limit one could interpolate between an 11d box diagram at small $k$ and a 10d box diagram at large $k$, just as the tree level $R^4$ term computed in \cite{Binder:2019mpb} was found to interpolate between M-theory and string theory.
\newpage
\section*{Acknowledgments}
We thank Ofer Aharony, Silviu Pufu, Walter Landry, and Xinan Zhou for useful conversations, Silviu Pufu and Xinan Zhou for collaboration at an early stage of this project, and Ofer Aharony for reading through the manuscript. The work of LFA is supported by the European Research Council (ERC) under the European Union's Horizon 2020 research and innovation programme (grant agreement No 787185). SMC is supported by the Zuckerman STEM Leadership Fellowship. HR acknowledges the support from the PBC postdoctoral fellowship program, as well as the Israel Science Foundation center for excellence grant (grant number 1989/14) and the Minerva foundation with funding from the Federal German Ministry for Education and Research. The authors would like to acknowledge the use of the University of Oxford Advanced Research Computing (ARC) facility in carrying out this work (http://dx.doi.org/10.5281/zenodo.22558).
\section{Introduction}
There is a variety of groups that can act on a Riemann surface/algebraic curve over $\mathbb{C}$: the automorphism group, the mapping class group (here we might allow punctures) and, if the curve is defined over $\bar{\mathbb{Q}}$, the absolute Galois group $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$, which also acts on the curve. Understanding the above groups is a difficult problem and these actions provide information on both the curve and the group itself. For all the groups mentioned above the action can often be understood in terms of linear representations, by allowing the group to act on vector spaces and modules related to the curve itself, such as the (co)homology groups and the global sections of holomorphic differentials.
For a compact Riemann surface $X$ the automorphism group $\mathrm{Aut}(X)$ consists of all invertible maps $X\rightarrow X$ in the category of Riemann surfaces.
A compact Riemann surface minus a finite number of punctures can be also seen as a connected, orientable topological surface and the mapping class group $\mathrm{Mod}(X)$ can be considered acting on $X$. The mapping class group is the quotient
\[
\mathrm{Mod}(X)=\mathrm{Homeo}^+(X)/\mathrm{Homeo}^0(X),
\]
where
$\mathrm{Homeo}^+(X)$ is the group of orientation preserving homeomorphisms of $X$ and $\mathrm{Homeo}^0(X)$
is the connected component of the identity in the compact-open topology.
These actions of the above mentioned three types of groups seem totally unrelated and come from different branches of Mathematics. Recent progress in the branch of ``Arithmetic topology'' provides us with a completely different picture.
First the group $\mathrm{Aut}(X)$ can be seen as a subgroup of $\mathrm{Mod}(X)$ consisting of ``rigid'' automorphisms.
Y. Ihara in \cite{Ihara1985-it}, \cite{IharaCruz}, proposed a method to treat elements in $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ as elements in the automorphism group of the profinite free group. This construction is similar to the realization of braids as automorphisms of the free group. This viewpoint of elements in $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ as ``profinite braids'' allows us to give a series of Galois representations similar to classical
braid representations.
In this article we will focus on curves which are cyclic ramified covers of the projective line.
These curves are among the few examples of Riemann surfaces where explicit computations can be made.
A ramified cover of the projective line reduces to a topological cover when the branch points are removed. By covering space theory these covers correspond to certain subgroups of the fundamental group of the projective line with the branch points removed, which is a free group.
The computation of homology groups can be done by abelianization of the fundamental group, which in turn can be computed using the Schreier lemma. This method of computation provides us with a unified way to treat all the actions on curves, by seeing an element in these aforementioned groups as an automorphism of the corresponding fundamental group.
The authors find it very interesting that this approach
provides us with a totally new method for studying actions in the dual case, that is, actions on global sections of holomorphic differentials $H^0(X,\Omega_X)$. When $G$ is the automorphism group, the determination of the $G$-module structure of $H^0(X,\Omega_X)$ is a classical problem first posed by Hecke \cite{MR3069500}, which was solved by Chevalley and Weil \cite{Chevalley1934-eb} using character theory, when the characteristic of the field is zero.
For the $\mathrm{Mod}(X)$ case, in \cite{McMullenBraidHodge} C. McMullen considered unitary representations of the braid group acting on global sections of differentials of cyclic covers of the projective line. His result can be recovered from our homological computations by dualizing; this approach was also mentioned in that article \cite[p. 914 after th. 5.5.]{McMullenBraidHodge}.
We believe that the details of this computation are worth studying and are by no means trivial.
Finally the homology approach allows us to study the pro-$\ell$ analogue according to Ihara's point of view, and several classical notions like the homology intersection pairing can be generalized to the Weil pairing for the Tate module. This fits well with the ``arithmetic topology'' viewpoint, where notions from knot theory have an arithmetic counterpart, \cite{Morishita2011-yw}, \cite{MorishitaATIT}.
Let us now describe the results and the structure of the article.
Section \ref{AIreps} is devoted to the construction of Artin's and Ihara's representations.
In section \ref{sec:FundGroupCcover} we compute the generators of the fundamental group of the open curves involved in this article. All information is collected in table \ref{Tab:hom} on page \pageref{Tab:hom}.
We will make computations in several group algebras for multiplicative groups. In order to avoid confusion we will denote by $\mathbf{Z}=\{t^a: a\in \mathbb{Z}\}$ and by $\mathbf{Z}_\ell=\{t^a: a\in \mathbb{Z}_\ell\}$, where $t$ is a formal parameter. These groups are isomorphic to the groups $\mathbb{Z}$ and $\mathbb{Z}_\ell$. The group $\mathbb{Z}/n\mathbb{Z}=\langle \sigma \rangle$ is considered to be generated by the order $n$ element $\sigma$.
Select a set $\Sigma$ consisting of $s$ points of $\mathbb{P}^1$.
Let $C_s$ be a topological cover of $X_s=\mathbb{P}^1\backslash \Sigma$ with Galois group $\mathrm{Gal}(C_s/X_s)=\mathbf{Z}$, see definition \ref{defCs}. Let also $Y_n$ be a topological cover of $X_s$, covered by $C_s$, so that $\mathrm{Gal}(Y_n/X_s)=\mathbb{Z}/n\mathbb{Z}$.
We will denote by $\bar{Y}_n$ the complete algebraic curve corresponding to $Y_n$.
In section \ref{sec:Uniform-ram} we investigate the decomposition of the homology groups as Galois modules and prove the following
\begin{theorem}
The homology groups for the cyclic covers $C_s$ (resp. $Y_n$) can be seen as Galois modules for the group $\mathbf{Z}$ (resp. $\mathbb{Z}/n\mathbb{Z}$) as follows:
\begin{align}
H_1(C_s,\mathbb{Z}) =R_0/R_0' &=\mathbb{Z}[\mathbf{Z}]^{s-2} =\mathbb{Z}[t,t^{-1}]^{s-2}
\label{homology-decomposition}
\\
H_1(Y_n,\mathbb{Z}) =R_n/R_n' &=\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]^{s-2} \bigoplus \mathbb{Z}.
\nonumber
\end{align}
\end{theorem}
Cyclic covers with infinite Galois group lead to the Burau representation, which is discussed in section \ref{sec:BurauDiscrete}.
Similar to the discrete case, we have that $H_1(C_s,\mathbb{Z}_\ell)= \mathbb{Z}_\ell[\mathbf{Z}]^{s-2}$ but in order to have an action of the absolute Galois group, a larger space is required, namely the completed group algebra
$\mathbb{Z}_\ell[[\mathbf{Z}_\ell]]^{s-2}$.
In section \ref{sec:Burau-prof} we give a pro-$\ell$ analogue of the Burau representation
\[
\rho_{\mathrm{Burau}}: \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q}) \rightarrow \mathrm{GL}_{s-2}(\mathbb{Z}_\ell[[\mathbf{Z}_\ell]])
\]
and in theorem \ref{MatBurau} we give a matrix expression of this representation.
In section \ref{sec:applications-cyc-cov} for the complete curve $\bar{Y}_n$ we prove the following
\begin{theorem}
Let $\sigma$ be a generator of the cyclic group $\mathbb{Z}/n\mathbb{Z}$.
The complete curve $\bar{Y}_n$ has homology
\[
H_1(\bar{Y}_n,\mathbb{Z})=J_{\mathbb{Z}/n\mathbb{Z}}^{s-2},
\]
where
$J_{\mathbb{Z}/n\mathbb{Z}}=\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]/\langle \sum_{i=0}^{n-1} \sigma^i\rangle$ is the co-augmentation module of $\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]$.
\end{theorem}
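As a consistency check, the $\mathbb{Z}$-rank of $J_{\mathbb{Z}/n\mathbb{Z}}^{s-2}$, namely $(n-1)(s-2)$, must equal $2g$ for the complete curve $\bar{Y}_n$. Assuming the degree-$n$ cover is totally ramified over each of the $s$ removed points, the Riemann-Hurwitz formula gives $2g-2=-2n+s(n-1)$, and the two expressions agree identically; a minimal sketch:

```python
# Consistency check: rank of J_{Z/nZ}^{s-2} versus 2g from
# Riemann-Hurwitz, assuming the degree-n cover Y_n -> P^1 is
# totally ramified over each of the s removed points:
#     2g - 2 = -2n + s(n - 1).
def rank_homology(n, s):
    # J_{Z/nZ} has Z-rank n - 1, so J^{s-2} has rank (n-1)(s-2)
    return (n - 1) * (s - 2)

def two_genus(n, s):
    return 2 - 2 * n + s * (n - 1)

for n in range(2, 8):
    for s in range(3, 10):
        assert rank_homology(n, s) == two_genus(n, s), (n, s)
print("rank of J^{s-2} equals 2g for all tested (n, s)")
```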
The later space when tensored with $\mathbb{C}$ gives a decomposition
\[
H_1(\bar{Y}_n,\mathbb{Z})\otimes_\mathbb{Z} \mathbb{C}= \bigoplus_{\nu=1}^{n-1} V_\nu,
\]
where each $V_\nu$ is the $(s-2)$-dimensional eigenspace corresponding to the eigenvalue $e^{\frac{2\pi i \nu}{n}}$ of the action of a generator $\sigma$ of the group $\mathbb{Z}/n\mathbb{Z}$, where $\sigma$ is seen as a linear operator acting on $H_1(\bar{Y}_n,\mathbb{Z})\otimes_\mathbb{Z} \mathbb{C}$.
Each space $V_\nu$ gives rise to a representation of the braid group $B_s$, which is the reduction of the Burau representation at $t\mapsto e^{\frac{2\pi i \nu}{n}}$.
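This specialization can be illustrated with the standard unreduced Burau matrices, in which $\sigma_i$ acts by the identity except for a $2\times 2$ block $\left(\begin{smallmatrix}1-t & t\\ 1 & 0\end{smallmatrix}\right)$ in rows $i,i+1$ (a common convention in the literature; the precise normalization used in this paper may differ, and the sketch uses the unreduced, $(s-1)$-dimensional version rather than the $(s-2)$-dimensional reduced one). Specializing $t$ to $e^{2\pi i\nu/n}$, the braid relations can be checked numerically:

```python
import cmath

def burau_gen(i, s, t):
    """Unreduced Burau matrix of sigma_i on C^{s-1}: identity except
    for the 2x2 block [[1-t, t], [1, 0]] at rows i, i+1 (0-indexed)."""
    m = [[1.0 if a == b else 0.0 for b in range(s - 1)] for a in range(s - 1)]
    m[i][i], m[i][i + 1] = 1 - t, t
    m[i + 1][i], m[i + 1][i + 1] = 1.0, 0.0
    return m

def mat_mul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def close(a, b, eps=1e-9):
    return all(abs(a[i][j] - b[i][j]) < eps
               for i in range(len(a)) for j in range(len(a)))

s, n, nu = 5, 4, 1
t = cmath.exp(2j * cmath.pi * nu / n)   # specialize t to e^{2 pi i nu / n}
g = [burau_gen(i, s, t) for i in range(s - 2)]

lhs = mat_mul(mat_mul(g[0], g[1]), g[0])
rhs = mat_mul(mat_mul(g[1], g[0]), g[1])
assert close(lhs, rhs)                                   # braid relation
assert close(mat_mul(g[0], g[2]), mat_mul(g[2], g[0]))   # far commutation
print("braid relations hold at t = exp(2*pi*i*nu/n)")
```

The same check passes for any $\nu$, reflecting that each specialization at a root of unity is again a representation of the braid group.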
If $n=\ell^k$ then a similar reduction process can be applied to the pro-$\ell$ Burau representation. We consider the $\ell^k-1$ non-trivial $\ell^k$-roots of unity, $\zeta_1,\ldots,\zeta_{\ell^k-1}$ in the algebraically closed field $\bar{\mathbb{Q}}_\ell$.
We have
\[
\mathbb{Z}_{\ell}[[\mathbf{Z}_\ell]]^{s-2}
\otimes_{\mathbb{Z}_\ell} \bar{\mathbb{Q}}_\ell = \bigoplus_{\nu=1}^{\ell^k-1} V_\nu,
\]
which after reducing $\mathbb{Z}_{\ell}[[\mathbf{Z}_\ell]] \rightarrow \mathbb{Z}_{\ell} [\mathbb{Z}_{\ell}/\ell^k \mathbb{Z}_\ell]=\mathbb{Z}_{\ell} [\mathbb{Z}/\ell^k\mathbb{Z}]$ sending $t\mapsto \zeta_\nu$ gives rise to the representation in $V_\nu$. The modules $V_\nu$ in the above decomposition are only $\mathbb{Z}_\ell[[\mathbf{Z}_\ell]]$-modules and $\mathrm{ker}N$-modules, where $N:\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\rightarrow \mathbb{Z}_\ell^*$ is the pro-$\ell$ cyclotomic character.
We would like to point out that the space $\mathbb{Z}_\ell[[\mathbf{Z}_\ell]]^{s-2}$ contains the information of all covers $\bar{Y}_{\ell^k}$ for all $k\in \mathbb{N}$, and equals the \'etale homology of a curve $\tilde{Y}$, which appears as a $\mathbb{Z}_\ell$-cover of the projective line with the same set of points removed. Going back from arithmetic to topology, we can say that the classical discrete Burau representation can be recovered from the representations of all the finite cyclic covers $\bar{Y}_n$, since we can define the inverse limit of all mod $n$ representations, obtaining the $B_s$-module $\mathbb{Z}[[\hat{\mathbb{Z}}]]^{s-2}$. This $B_s$-module in turn contains $\mathbb{Z}[\mathbf{Z}]^{s-2}$ as a dense subset.
Finally in section \ref{arithInter} we see how the analogue of the homology intersection pairing can be interpreted as an intersection pairing using the Galois action on the Weil pairing for the Tate module.
For a free $\mathbb{Z}$ (resp. $\mathbb{Z}_\ell$)-module of rank $2g$, endowed with a symplectic pairing $\langle\cdot,\cdot\rangle$ the symplectic group is defined as
\[
\mathrm{Sp}(2g,\mathbb{Z})=\{M\in \mathrm{GL}(2g,\mathbb{Z}):
\langle M v_1,M v_2\rangle=\langle v_1,v_2 \rangle
\}
\]
and the generalized symplectic group is defined as
\[
\mathrm{GSp}(2g,\mathbb{Z}_\ell)=\{M\in \mathrm{GL}(2g,\mathbb{Z}_\ell):
\langle M v_1,M v_2\rangle=m\langle v_1,v_2 \rangle, \text{ for some } m\in \mathbb{Z}_\ell^*
\}.
\]
In the topological setting the pairing is the intersection pairing and we have the following representation
\[
\rho: B_{s-1} \rightarrow \mathrm{Sp}(2g,\mathbb{Z}).
\]
We employ properties of the Weil pairing in order to show that we have a representation
\[
\rho': \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q}) \rightarrow
\mathrm{GSp}(2g,\mathbb{Z}_\ell)
\]
as an arithmetic analogue of the braid representation $\rho$.
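For illustration, take the standard form $\langle u,v\rangle=u^T J v$ with $J=\left(\begin{smallmatrix}0 & I_g\\ -I_g & 0\end{smallmatrix}\right)$ (a choice of convention not fixed by the text, over $\mathbb{Z}$ rather than $\mathbb{Z}_\ell$); membership in $\mathrm{Sp}$ or $\mathrm{GSp}$ then amounts to computing the multiplier $m$ in $M^T J M = m J$:

```python
# Membership test for Sp / GSp with the standard symplectic form
# <u, v> = u^T J v, J = [[0, I_g], [-I_g, 0]] (illustrative convention).
def J(g):
    m = [[0] * (2 * g) for _ in range(2 * g)]
    for i in range(g):
        m[i][g + i] = 1
        m[g + i][i] = -1
    return m

def mat_mul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def transpose(a):
    return [list(r) for r in zip(*a)]

def gsp_multiplier(M, g):
    """Return m with M^T J M = m J if such a nonzero m exists, else None;
    M is in Sp iff m = 1, and in GSp iff m is a unit."""
    lhs, j = mat_mul(mat_mul(transpose(M), J(g)), M), J(g)
    m = lhs[0][g]          # J(g)[0][g] = 1, so this is the candidate m
    ok = all(lhs[a][b] == m * j[a][b]
             for a in range(2 * g) for b in range(2 * g))
    return m if ok and m != 0 else None

# An elementary matrix in Sp(2, Z) has multiplier 1 ...
assert gsp_multiplier([[1, 1], [0, 1]], 1) == 1
# ... while a scaling matrix lies only in GSp, with multiplier 2.
assert gsp_multiplier([[2, 0], [0, 1]], 1) == 2
print("Sp / GSp membership checks pass")
```

Note that for $g=1$ every invertible matrix satisfies $M^T J M=\det(M)\,J$, so the multiplier is just the determinant; for $g>1$ the condition is a genuine restriction.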
{\bf Acknowledgement: }
The authors would like to thank Professor Nondas Kecha\-gias and the anonymous referee for their valuable comments and corrections.
\section{On Artin and Ihara representations}
\label{AIreps}
\subsection{Artin representation}
It is known that the braid group can be seen as a group of automorphisms of the free group $F_{s-1}$, via the Artin representation. More precisely, the group $B_{s-1}$ can be defined as the subgroup of $\mathrm{Aut}(F_{s-1})$ generated by the elements $\sigma_i$ for $1\leq i \leq s-2$, given by
\[
\sigma_i(x_k)=
\begin{cases}
x_k & \mbox{ if } k\neq i,i+1,\\
x_i x_{i+1} x_i^{-1} & \mbox{ if } k =i, \\
x_i & \mbox{ if } k=i+1.
\end{cases}
\]
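The braid relations $\sigma_i\sigma_{i+1}\sigma_i=\sigma_{i+1}\sigma_i\sigma_{i+1}$ and $\sigma_i\sigma_j=\sigma_j\sigma_i$ for $|i-j|\geq 2$ can be checked mechanically from this definition. A minimal sketch, encoding free-group words as lists of (generator, $\pm1$) pairs:

```python
# Artin action of sigma_i on the free group, with words encoded as
# lists of (generator index, exponent +/-1) pairs.
def reduce_word(w):
    """Free reduction via a stack; handles cascading cancellations."""
    out = []
    for g, e in w:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

def inv(w):
    return [(g, -e) for g, e in reversed(w)]

def apply_sigma(i, w):
    """sigma_i: x_i -> x_i x_{i+1} x_i^{-1}, x_{i+1} -> x_i, rest fixed."""
    out = []
    for g, e in w:
        if g == i:
            img = [(i, 1), (i + 1, 1), (i, -1)]
        elif g == i + 1:
            img = [(i, 1)]
        else:
            img = [(g, 1)]
        out.extend(img if e == 1 else inv(img))
    return reduce_word(out)

def compose(seq, w):
    # seq = [a, b, c] means apply sigma_a(sigma_b(sigma_c(w)))
    for i in reversed(seq):
        w = apply_sigma(i, w)
    return w

gens = [[(k, 1)] for k in range(1, 6)]   # x_1, ..., x_5
for x in gens:
    assert compose([1, 2, 1], x) == compose([2, 1, 2], x)   # braid relation
    assert compose([1, 3], x) == compose([3, 1], x)         # far commutation
print("Artin braid relations verified on generators")
```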
The open disk with $s-1$ points removed is homeomorphic to the projective line with infinity and $s-1$ points removed. In particular, these spaces have isomorphic fundamental groups.
Indeed,
the free group $F_{s-1}$ is the fundamental group of $X_s$ defined as
\begin{equation} \label{Xs-def}
X_s=\mathbb{P}^1-\{P_1,\ldots,P_{s-1},\infty\}.
\end{equation}
In this setting the group $F_{s-1}$ is given as:
\begin{equation} \label{free-quodis}
F_{s-1}=\langle x_1,\ldots,x_{s} | x_1x_2\cdots x_{s}=1\rangle,
\end{equation}
the elements $x_i$ correspond to homotopy classes of loops, each circling once clockwise around the corresponding removed point $P_i$.
\begin{remark} \label{larger-action}
Notice that not only does $B_{s-1}$ act on $F_{s-1}$, but $B_{s}$ acts on $F_{s-1}$ as well. Indeed, for the extra generator $\sigma_{s-1} \in B_s-B_{s-1}$
we define
\begin{equation}
\label{ss1}
\sigma_{s-1}(x_i)=x_i \qquad \text{ for } 1\leq i \leq s-2
\end{equation}
\begin{equation}
\label{ss2}
\sigma_{s-1}(x_{s-1})= x_{s-1} x_s x_{s-1}^{-1}
=
x_{s-2}^{-1}x_{s-3}^{-1} \cdots x_{1}^{-1} x_{s-1}^{-1}
\end{equation}
and using eq. (\ref{ss1}), (\ref{ss2}) we compute
\[
\sigma_{s-1}(x_s)=\sigma_{s-1}
(x_{s-1}^{-1}\cdots x_1^{-1})=
\sigma_{s-1}(x_{s-1})^{-1}
\big(
x_{s-2}^{-1}\cdots x_1^{-1}
\big)=
x_{s-1}.
\]
\end{remark}
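The computation in the remark can be verified mechanically; the following sketch checks $\sigma_{s-1}(x_s)=x_{s-1}$ for $s=4$, i.e. $\sigma_3(x_4)=x_3$ with $x_4=(x_1x_2x_3)^{-1}$:

```python
# Verify sigma_3(x_4) = x_3 in F_3 with x_4 = (x_1 x_2 x_3)^(-1),
# where sigma_3(x_3) = x_3 x_4 x_3^(-1) and sigma_3 fixes x_1, x_2.
# Words are lists of (generator index, exponent +/-1) pairs.
def reduce_word(w):
    out = []
    for g, e in w:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return out

def inv(w):
    return [(g, -e) for g, e in reversed(w)]

x4 = inv([(1, 1), (2, 1), (3, 1)])        # x_4 = (x_1 x_2 x_3)^(-1)
img_x3 = [(3, 1)] + x4 + [(3, -1)]        # sigma_3(x_3) = x_3 x_4 x_3^(-1)

def sigma3(w):
    out = []
    for g, e in w:
        img = img_x3 if g == 3 else [(g, 1)]
        out.extend(img if e == 1 else inv(img))
    return reduce_word(out)

assert sigma3(x4) == [(3, 1)]             # sigma_3(x_4) = x_3
print("sigma_{s-1}(x_s) = x_{s-1} verified for s = 4")
```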
\subsection{Ihara representation}
We will follow the notation of \cite{MorishitaATIT}.
Y. Ihara, by considering the \'etale (pro-$\ell$) fundamental group of the space $\mathbb{P}^1_{\bar{\mathbb{Q}}}-\{P_1,\ldots,P_{s-1},\infty\}$, with $P_i\in \mathbb{Q}$, introduced the monodromy representation
\[
\mathrm{Ih}_S: \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q}) \rightarrow \mathrm{Aut}(\mathfrak{F}_{s-1}),
\]
where $\mathfrak{F}_{s-1}$ is the pro-$\ell$ completion of the free group $F_{s-1}$.
Here the group $\mathfrak{F}_{s-1}$ admits a presentation, similar to eq. (\ref{free-quodis}),
\begin{equation}
\label{Frpres}
\mathfrak{F}_{s-1}=
\left\langle
x_1,\ldots,x_s | x_1x_2\cdots x_s=1
\right\rangle,
\end{equation}
where here $\mathfrak{F}_{s-1}$ is considered as a quotient of the free pro-$\ell$ group $\mathfrak{F}_s$ in the pro-$\ell$ category.
The image of the Ihara representation is inside the group \[
\tilde{P}(\mathfrak{F}_{s-1}):=
\left\{
\sigma\in \mathrm{Aut}(\mathfrak{F}_{s-1})| \sigma(x_i)\sim x_i^{N(\sigma)}
(1\leq i \leq s) \text{ for some } N(\sigma) \in \mathbb{Z}_\ell^*
\right\},
\]
where $\sim$ denotes the conjugation equivalence.
This group is the arithmetic analogue of the image of the Artin representation of the ordinary (pure) braid groups inside $\mathrm{Aut}(F_{s-1})$.
Notice that the exponent
$N(\sigma)$
depends only on $\sigma$ and not on $x_i$. Moreover the map
\[
N: \tilde{P}(\mathfrak{F}_{s-1})
\rightarrow \mathbb{Z}_\ell^*
\]
is a group homomorphism and
$N\circ\mathrm{Ih}_S:
\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\rightarrow \mathbb{Z}_\ell^*$ coincides with the cyclotomic character $\chi_\ell$.
\begin{remark}
As in remark \ref{larger-action}
the relation $x_1\cdots x_{s-1} x_s=1$ implies that $\tilde{P}(\mathfrak{F}_{s-1})$ also acts on the free group $\mathfrak{F}_s$ since $x_s=(x_1 \cdots x_{s-1})^{-1}$.
\end{remark}
In this setting an element $\sigma\in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ can be seen acting on the topological generators $x_1,\ldots,x_{s-1}$ of the free group by
\begin{equation}
\label{actGeneratos}
\sigma(x_i)=w_i(\sigma) x_i^{N(\sigma)} w_i(\sigma)^{-1}.
\end{equation}
Moreover, by normalizing by an inner automorphism we may assume that $w_1(\sigma)=1$. We will use this normalization from now on.
\begin{remark}
We have considered in Ihara's representation the points $P_1,\ldots,P_{s-1}$ to be in $\mathbb{Q}$.
If we allow $P_1,\ldots,P_{s-1}$ to be in $\bar{\mathbb{Q}}$, then there is a minimal algebraic number field $K$ which contains them all. We can consider in exactly the same way the absolute Galois group $\mathrm{Gal}(\bar{\mathbb{Q}}/K)=\mathrm{Gal}(\bar{K}/K)$, and then all arguments of this article work in exactly the same way for $\mathrm{Gal}(\bar{K}/K)$.
If now we want to consider representations of $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ but the field $K$ defined by the set of points $\bar{P}:=\{P_1,\ldots,P_{s-1}\}$ is strictly bigger than $\mathbb{Q}$, then in order to obtain a reasonable action of $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ on the set of branch points, we have to assume that the polynomial
$f_{\bar{P}}(x):=\prod_{j=1}^{s-1} (x-P_j)$ lies in $\mathbb{Q}[x]$.
In this case the absolute Galois group induces a permutation action on the points $P_j$ and defines a subgroup of the symmetric group $S_{s-1}$.
The braid group is equipped with a surjection $\phi:B_{s-1} \rightarrow S_{s-1}$ whose kernel is the group of pure braids.
We have argued that the braid group $B_s$ is a discrete analogue of the absolute Galois group $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$. Every selection of points $\bar{P}$, which gives rise to a polynomial $f_{\bar{P}}(x)\in \mathbb{Q}[x]$, provides us with a map
$\phi_{\bar{P}}:\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\rightarrow S_{s-1}$. We would like to see these maps $\phi_{\bar{P}}$ as analogues of the map $\phi$. The group of ``pure braids''
with respect to such a map $\phi_{\bar{P}}$ is the
absolute Galois group $\mathrm{Gal}(\bar{\mathbb{Q}}/K)$ of the field $K$
generated by the set of points $\bar{P}$, while the
image
$\phi_{\bar{P}}(\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})) \subset S_{s-1}$
is not the whole of $S_{s-1}$, unless the points $P_1,\ldots,P_{s-1}$
satisfy no polynomial algebraic relations defined over $\mathbb{Q}$. As a matter of fact, it is conjectured (this is the inverse Galois problem) that any finite group can appear as the image of such a map $\phi_{\bar{P}}$, allowing $\bar{P}$ and $s$ to vary.
In this way we obtain a short exact sequence
\[
1 \rightarrow
\mathrm{Gal}(\bar{\mathbb{Q}}/K)
\rightarrow \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})
\stackrel{\phi_{\bar{P}}}{\longrightarrow}
\mathrm{Gal}(K/\mathbb{Q}) \rightarrow 1.
\]
In the general case, even if $\bar{P}$ is not a subset of $\mathbb{Q}$, there is a representation
\[
\rho: \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q}) \rightarrow
\mathrm{Aut}(\mathfrak{F}_{s-1})
\]
where for $\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ we have $\rho(\sigma)(x_i)=w_i(\sigma) x_{\phi_{\bar{P}}(\sigma)(i)} w_i(\sigma)^{-1}$. If $\sigma$ is a ``pure braid'', then
the above action simplifies, since the generator $x_i$ of $\mathfrak{F}_{s-1}$ is not moved to another generator.
For this article the interesting part is the study of $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ and not the problem of finding the Galois group of a polynomial in $\mathbb{Q}[x]$.
If we start by selecting all points in $\bar{P}$ in $\mathbb{Q}$, as Ihara did, then the whole group $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ can be considered as an analogue of pure braids.
\end{remark}
\subsection{Similarities}
For understanding representations of the absolute Galois group $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$, the theory of coverings of $\mathbb{P}^1_{\mathbb{Q}}-\{0,1,\infty\}$ is enough, by Belyi's theorem, \cite{Belyi1}.
On the other hand the study of topological covers
of $\mathbb{P}^1_{\mathbb{Q}}-\{0,1,\infty\}$ is less rich: the only braid groups
that can act on covers of $\mathbb{P}^1_{\mathbb{Q}}-\{0,1,\infty\}$ are $B_2$ and $B_3$, which are rather simple.
In order to seek out similarities between the Artin and Ihara representations, we will study covers with more than three points removed.
Notice that when the number $s$ of points we remove is $s>3$, then we expect that their configuration might also affect our study.
Moreover, elements of the braid group act like elements of the mapping class group of the punctured disk, i.e. of the projective line minus $s$ points.
The braid group acts like the symmetric group on the set of removed points $\Sigma$ and acts like a complicated homeomorphism on the complement $D_{s-1}$ of the $s-1$ points.
Let $\Sigma=\bar{P} \cup \{\infty\}$ and let $K$ be the field generated by the points in $\bar{P}$. The group $\mathrm{Gal}(\bar{\mathbb{Q}}/K)$ keeps invariant the
set $\Sigma$ and corresponds to the notion of pure braids. Since $\mathrm{Gal}(\bar{\mathbb{Q}}/K)$ also acts on $\mathbb{P}^1_{\bar{\mathbb{Q}}}$ it acts on the difference
$\mathbb{P}^1_{\bar{\mathbb{Q}}}\backslash \Sigma$. This mysterious action should be seen as the arithmetic analogue of the action of the (pure) braid group on the punctured disc.
Knot theorists study braid group representations in order to provide invariants of knots (after Markov equivalence, see \cite[III.6 p.54]{Prasolov97}), and number theorists study Galois representations in order to understand the absolute Galois group $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$. Both kinds of representations are important and bring knot theory and number theory together within the theory of arithmetic topology.
\section{On the fundamental group of cyclic covers}
\label{sec:FundGroupCcover}
Let $\pi:Y\rightarrow \mathbb{P}^1$ be a
Galois cover of the projective line, ramified above the set $\Sigma=\{P_1,\ldots,P_s\} \subset \mathbb{P}^1$.
The open curve $Y_0=Y-\pi^{-1}(\Sigma)$ is then a topological cover of $X_s=\mathbb{P}^1-\Sigma$ and can be seen as a quotient of the universal covering space $\tilde{X}_s$ by the free subgroup $R_0=\pi_1(Y_0,y_0)$ of the free group $\pi_1(X_s,x_0)=F_{s-1}$ (resp. pro-$\ell$ free group $\mathfrak{F}_{s-1}$), where $s=\#\Sigma$.
We will employ the Reidemeister--Schreier algorithm \cite[chap. 2 sec. 8]{bogoGrp}, \cite[sec. 2.3 th. 2.7]{MagKarSol} in order to compute the group $R_0$.
\subsection{Schreier's Lemma}
Let $F_{s-1}=\langle x_1, \cdots, x_{s-1} \rangle$ be the free group with basis $X=\{ x_1, \cdots, x_{s-1}\}$ and let $H$ be a
subgroup of $F_{s-1}$.
A (right) {\bf Schreier Transversal} for $H$ in $F_{s-1}$ is a set $T=\{t_1=1, \cdots, t_n \}$ of reduced words, such that each right coset of $H$ in $F_{s-1}$ contains a unique word of $T$ (called a representative of this class) and all
initial segments of these words also lie in $T$.
In particular, $1$ lies in $T$ (and represents the class $H$) and $H t_i \neq H t_j$, $\forall i \neq j$. For any $g \in F_{s-1}$ denote by $\overline{g}$ the element of $T$ with the property $Hg=H\overline{g}$.
If $t_i \in T$ has the decomposition as a reduced word
$t_i=x_{i_1}^{e_1} \cdots x_{i_k}^{e_k}$ (with $i_j=1, \ldots, s-1$, $e_j= \pm 1$ and $e_j=e_{j+1}$ if $x_{i_j}=x_{i_{j+1}})$, then for every word $t_i$ in $T$ we have that
\begin{equation} \label{Xs-def1}
t_i=x_{i_1}^{e_1} \cdots x_{i_k}^{e_k} \in T \Rightarrow 1, x_{i_1}^{e_1}, x_{i_1}^{e_1}x_{i_2}^{e_2},\ldots, x_{i_1}^{e_1}x_{i_2}^{e_2} \cdots x_{i_k}^{e_k} \in T.
\end{equation}
\begin{lemma}[Schreier's lemma]
\label{lemma:schreier}
Let $T$ be a right Schreier Transversal for $H$ in $F_{s-1}$ and set $\gamma(t,x):= tx \overline{tx}^{-1}$, $t \in T$, $x \in X$ and $tx \notin T$. Then $H$ is freely generated by the set
\begin{equation} \label{free-quods}
\{ \gamma(t,x) \mid \gamma(t,x) \neq 1
\}.
\end{equation}
\end{lemma}
\subsection{Automorphisms of Free groups acting on subgroups}
If $R_0=\pi_1(Y_0,y_0)$ is a characteristic subgroup of $F_{s-1}=\pi_1(\mathbb{P}^1-\Sigma)$ (resp. of $\mathfrak{F}_{s-1}$ in the pro-$\ell$ case) then it is immediate that the Artin (resp. Ihara) representation gives rise to an action on $R_0$.
Observe that since the cover $\pi:Y \rightarrow \mathbb{P}^1$ is Galois we have that $R_0 \lhd F_{s-1}$ and the Artin representation gives rise to a well defined action of the braid group on $R_0$.
The same argument applies for the kernel of the norm map in the Ihara case, that is since the pro-$\ell$ completion of $R_0$ is a normal subgroup of $\mathfrak{F}_{s-1}$,
every element $\sigma$ in $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ with $\chi_\ell(\sigma)=1$ acts on the pro-$\ell$ completion of $\pi_1(Y_0,y_0)$.
This is in accordance with a result of J. Birman and H. Hilden \cite[th. 5]{Birman1972-pg}, which, in the case of cyclic coverings $\pi:C \rightarrow (\mathbb{P}^1-\Sigma)$, relates
the subgroup $\mathrm{Mod}_{\pi}(C)$ of the mapping class group of $C$ consisting of the fiber-preserving automorphisms, the Galois group $\mathrm{Gal}(C/\mathbb{P}^1)$ and the mapping class group $\mathrm{Mod}(\mathbb{P}^1-\Sigma)$ of $\mathbb{P}^1-\Sigma$ in terms of the quotient
\[
\mathrm{Mod}_\pi(C)/\mathrm{Gal}(C/\mathbb{P}^1)=\mathrm{Mod}(\mathbb{P}^1-\Sigma).
\]
For example, when $Y$ is the covering corresponding to the commutator group $F_{s-1}'$, then $\mathrm{Gal}(Y/X_s)\cong F_{s-1}/F_{s-1}'=H_1(X,\mathbb{Z})$. Therefore the latter space is acted on by the group of automorphisms and by the braid group $B_s$.
\subsection{Automorphisms of curves}
For the case of automorphisms of curves, where the Galois cover $\pi:Y \rightarrow \mathbb{P}^1$ has Galois group $H$, we consider
the short exact sequence
\[
1 \rightarrow R_0 \rightarrow F_{s-1} \rightarrow H \rightarrow 1.
\]
We see that there is an action of $H$ on $R_0$ modulo inner automorphisms
of $R_0$ and in particular a well defined action of $H$ on $R_0/R_0'=H_1(Y_0,\mathbb{Z})$.
Therefore the space $H_1(Y_0,\mathbb{Z})$ can be seen as a direct sum of indecomposable $\mathbb{Z}[H]$-modules.
\begin{remark}
A cyclic cover $X$ given in eq. (\ref{cyccov}) might have a bigger automorphism group than the cyclic group of order $n$, if the roots $\{b_i, 1\leq i \leq s\}$ form a special configuration. Notice also that if the number $s$ of branched points satisfies $s > 2n$ then the automorphism group $G$ fits in a short exact sequence
\begin{equation} \label{extGrp}
1 \rightarrow \mathbb{Z}/n\mathbb{Z} \rightarrow G \rightarrow H \rightarrow 1,
\end{equation}
where $H$ is a subgroup of $\mathrm{PGL}(2,\mathbb{C})$ \cite[prop. 1]{Ko:99}. The first author in \cite{Ko:99} classified all such extensions.
Observe that the action of the mapping class group on homology is of topological nature and hence independent of the special configuration of the roots $b_i$. If these roots have a special configuration, then certain elements of the mapping class group become automorphisms of the curve. This phenomenon is briefly explained on page 895 of \cite{McMullenBraidHodge}.
Similarly, suppose that the set $b_1,\ldots,b_s$ is fixed pointwise by the absolute Galois group, that is $b_1,\ldots,b_s \in \mathbb{P}^1(\mathbb{Q})$. The action of elements of $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ on homology is the same for all such selections of $\{b_1,\ldots,b_s\} \subset \mathbb{P}^1(\mathbb{Q})$.
However, if these roots $b_i$ have a special configuration, then certain elements of $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ become automorphisms of the curve.
If the branch locus $\{b_i:1\leq i \leq s\}$ is invariant under the group $H$ then $H_1(X,\mathbb{Z})$ is a $\mathbb{Z}[G]$-module, where $G$ is an extension of $H$ with kernel $\mathbb{Z}/n\mathbb{Z}$ given by eq. (\ref{extGrp}).
\end{remark}
\subsection{Adding the missing punctures}
Let us now relate the group $R=\pi_1(Y,y_0)$ corresponding to the complete curve $Y$ with the group
$R_0$ corresponding to the open curve $Y_0=Y-\pi^{-1}(\Sigma)$.
We know that the group $R_0$ admits a presentation
\[
R_0=\langle a_1,b_1,\ldots, a_g,b_g,\gamma_1,\ldots,\gamma_s
| \gamma_1 \gamma_2 \cdots \gamma_s \cdot [a_1,b_1][a_2,b_2] \cdots
[a_g,b_g]=1
\rangle,
\]
where $g$ is the genus of $Y$.
\begin{convention}
Given group elements $\gamma_1,\ldots,\gamma_s$ we will denote by $\langle \gamma_1,\ldots,\gamma_s \rangle$ the closed normal subgroup generated by these elements. In the case of usual groups the extra ``closed'' condition is automatically satisfied, since these groups have the discrete topology. So the ``closed group'' condition has a non-trivial meaning only in the pro-$\ell$ case.
\end{convention}
The completed curve $Y$ has a fundamental group which admits a presentation of the form
\begin{align*}
R & = \langle a_1,b_1,a_2,b_2,\ldots, a_g,b_g | [a_1,b_1][a_2,b_2] \cdots
[a_g,b_g]=1
\rangle
\\
&=\frac{R_0}{\langle \gamma_1,\ldots,\gamma_s\rangle}.
\end{align*}
There is the following short exact sequence relating the two homology groups:
\begin{equation} \label{relate-gamma}
\xymatrix@R-15pt@C=10pt{
0 \ar[r] &
\langle \gamma_1, \ldots,\gamma_s \rangle \ar[r] &
H_1(Y_0,\mathbb{Z}) \ar[r] \ar[d]^{\cong} &
H_1(Y,\mathbb{Z}) \ar[r] \ar[d]^{\cong}
&
0\\
& & R_0/R_0' \ar[r] & R/R' =R_0/R_0' \langle \gamma_1,\ldots,\gamma_s \rangle & &
}
\end{equation}
Note that if a group acts on $R_0$,
then this action can be extended to an action on $R=R_0/\langle \gamma_1,\ldots,\gamma_s \rangle$ if and only if the group keeps $\langle \gamma_1,\ldots,\gamma_s \rangle$ invariant.
\section{Examples- Curves with punctures}
\label{sec:Uniform-ram}
\begin{definition} \label{defCs}
Recall that
$X_s=\mathbb{P}^1\backslash \Sigma$, where $\Sigma$ is a subset of $\mathbb{P}^1$ consisting of $s$ points.
Consider the short exact sequence
\[
0 \rightarrow I \rightarrow H_1(X_s,\mathbb{Z})
\stackrel{\alpha}{\longrightarrow}
\mathbb{Z}
\rightarrow
0
\]
and let $C_s$ be the curve given as the quotient $Y/I$, so that $\mathrm{Gal}(C_s/X_s)=\mathbf{Z}$. The map $\alpha$ is the winding number map, which can be defined both on the fundamental group and on its abelianization by ($1 \leq i_1,\ldots,i_t \leq s-1$, $\ell_{i_1},\ldots,\ell_{i_t} \in \mathbb{Z}$):
\begin{equation}
\label{w-map}
\alpha:\pi_1(X_s,x_0) \longrightarrow \mathbb{Z} \qquad
x_{i_1}^{\ell_{i_1}} x_{i_2}^{\ell_{i_2}} \cdots x_{i_t}^{\ell_{i_t}} \mapsto
\sum\limits_{\mu=1}^t \ell_{i_\mu}.
\end{equation}
\end{definition}
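In concrete terms the winding number of eq. (\ref{w-map}) is simply the total exponent sum of a word. The following minimal Python sketch (the word encoding and names are ours, not from the text) makes this explicit:

```python
# The winding-number map alpha of eq. (w-map) is the total exponent sum of a
# word; here a word x_{i_1}^{l_1} ... x_{i_t}^{l_t} is encoded as a list of
# (index, exponent) pairs.

def alpha(word):
    """Exponent sum: the image of the word in H_1(X_s, Z)/I = Z."""
    return sum(l for _, l in word)
```

In particular, every commutator has winding number zero, reflecting the fact that $\alpha$ factors through the abelianization.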
The following map is a pro-$\ell$ version of the $w$-map defined in eq. (\ref{w-map}). Let $\mathfrak{F}_{s-1}$ be the free pro-$\ell$ group in generators $ x_1,\ldots,x_{s-1}$.
Consider the map
\begin{equation}
\label{a-map}
\alpha: \mathfrak{F}_{s-1}\rightarrow
\mathfrak{F}_{s-1}/\langle x_1x_j^{-1}, j=2,\ldots,s-1 \rangle \cong
\mathfrak{F}_1\cong \mathbb{Z}_\ell.
\end{equation}
The map $\alpha$ is continuous, so if $v_n$ is a sequence of words in $F_{s-1}$ converging to $v\in \mathfrak{F}_{s-1}$, then
\[
\lim_n \alpha(v_n)= \alpha(v) \in \mathbb{Z}_\ell.
\]
\subsection{On certain examples of cyclic covers of $\mathbb{P}^1$}
Consider the commutative diagram below on the left:
\begin{minipage}{0.3\textwidth}
$
\xymatrix{
\tilde{X}_s \ar[ddd]|-{F_{s-1}} \ar[drr]|-{F_{s-1}'} \ar[ddr]^{R_0} & & \\
& & Y \ar@/^2pc/[ddll]^{H_1(X_s,\mathbb{Z})} \ar[dl]_{I} \\
& C_s \ar[dl]_{\mathbf{Z}} & \\
X_s &
}
$
\end{minipage}
\begin{minipage}{0.65\textwidth}
Then $H_1(C_s,\mathbb{Z})=R_0/R_0'$, where $R_0=\pi_1(C_s)$ is the free subgroup of $F_{s-1}$ corresponding to $C_s$. Moreover $H_1(C_s,\mathbb{Z})$ is a free $\mathbb{Z}[\mathbf{Z}]$-module of rank $s-2$, acted on also by $B_{s-1}$, giving rise to the so-called Burau representation:
\[
\rho: B_{s-1} \rightarrow \mathrm{GL}(s-2,\mathbb{Z}[t,t^{-1}]).
\]
Keep in mind that $\mathbb{Z}[\mathbf{Z}]\cong \mathbb{Z}[t,t^{-1}]$. In what follows we will give a proof of these facts using Schreier's lemma.
\end{minipage}
\begin{lemma}
The group $R_0$ is free of infinite rank and is freely generated by the set
\begin{equation}
\label{R-free-gen}
\{x_1^i x_j x_1^{-i-1}: i\in \mathbb{Z},\ j\in \{2,\ldots, s-1\} \}.
\end{equation}
\end{lemma}
\begin{proof}
Consider the epimorphisms
\[
\xymatrix{
F_{s-1} \ar[r]^-{p'} \ar@/_1.3pc/[rr]_{\alpha}& F_{s-1}/F_{s-1}' \ar[r]^-{p''} & \mathbb{Z}=H_1(X_s,\mathbb{Z})/I.
}
\]
Set $\alpha=p''\circ p'$.
Let $y$ be an element in $\alpha^{-1}(1_\mathbb{Z})$. By the properties of the winding number we can take $y=x_1$. Moreover $\alpha(x_j)=\alpha(y)$ for all $1\leq j \leq s-1$: the
automorphism $x_i \leftrightarrow x_j$ is compatible with $I$ and therefore induces an automorphism of $\mathbb{Z}$, so $\alpha(x_j)=\alpha(y)^{\pm 1}$, and we rename the generators $x_j$ to $x_j^{-1}$ if necessary.
Let $T:=\{y^i:i\in \mathbb{Z}\}\subset F_{s-1}$ be a set of representatives of classes in $F_{s-1}/R_0\cong \mathbf{Z}$. The set $T$ is a Schreier transversal, so Schreier's lemma (lemma \ref{lemma:schreier}) can be applied. For every $x\in F_{s-1}$ we denote by $\bar{x}$ its representative in $T$. Moreover for all $i\in \mathbb{Z}$ and $1\leq j \leq s-1$ we have $\overline{y^i x_j}=y^{i+1}$, and by Schreier's lemma we see that
\[
y^i x_j \left(\overline{y^i x_j}\right)^{-1}=y^i x_j y^{-i-1}=
x_1^i x_j x_1^{-i-1}
\qquad i\in \mathbb{Z},\ j\in \{2,\ldots, s-1\}.
\]
\end{proof}
\begin{remark}
The action of $\mathbb{Z}[\mathbf{Z}]$ on $R_0/R'_0$ is given by conjugation.
This means that for $n \in \mathbb{Z}$ we have
\begin{align}
\label{Zaction}
\mathbb{Z}[\mathbf{Z}] \times R_0 & \longrightarrow R_0 \\
(t^n, r) & \longmapsto x_1^n r x_1^{-n} \nonumber
\end{align}
A generating set for $H_1(C_s,\mathbb{Z})$ as a free $\mathbb{Z}[\mathbf{Z}]$-module is given by the $s-2$ elements $\beta_j:=x_jx_1^{-1}$, $2\leq j \leq s-1$. Moreover the $\mathbf{Z}$-action is given by
\[
\left( x_ix_1^{-1}\right)^{t^n}=x_1^{n}x_ix_1^{-n-1},
\]
where $t$ is a generator of the infinite cyclic group $\mathbf{Z}$. This means that
$H_1(C_s,\mathbb{Z})$ is a free $\mathbb{Z}[\mathbf{Z}]$-module of rank $s-2$.
Observe that in $R_0/R_0'$ we have
\begin{align*}
x_j(x_i x_1^{-1}) x_j^{-1} &=
(x_j x_1^{-1}) x_1\beta_i x_1^{-1} (x_j x_1^{-1})^{-1}\\
&= \beta_j x_1 \beta_i x_1^{-1} \beta_j^{-1} =\beta_i^{t},
\end{align*}
i.e. the conjugation by any generator $x_j$
has the same effect as the conjugation by $x_1$.
\end{remark}
\noindent
\begin{minipage}{0.57\textwidth}
Let us now consider a finite cyclic cover $Y_n$ of $X_s$ which is covered by $C_s$, i.e. we have the diagram on the right below:
\begin{lemma}
\label{lemma:funRn}
The group $R_n=\pi_1(Y_n)\supset R_0$ is the kernel of the map $\alpha_n$
\[
\xymatrix{
\pi_1(X_s) \ar[r]^-\alpha \ar@/^1pc/[rr]^{\alpha_n}& \mathbf{Z} \ar[r] & \mathbb{Z}/n\mathbb{Z}.
}
\]
\end{lemma}
\begin{proof}
This is clear from the explicit description of the group $R_0$ given in eq. (\ref{R-free-gen}).
\end{proof}
\end{minipage}
\begin{minipage}{0.4\textwidth}
$
\xymatrix@R=16pt{
\tilde{X}_s \ar[dddd] \ar[drr] \ar[ddr]^{R_0} & & \\
& & Y \ar@/^4pc/[dddll]^{H_1(X_s,\mathbb{Z})} \ar[dl]_{I} \\
& C_s \ar[ddl]_{\mathbf{Z}} \ar[d] & \\
& Y_n \ar[dl]^{\mathbb{Z}/n\mathbb{Z}} \\
X_s &
}
$
\end{minipage}
\begin{lemma} \label{Rngenerators}
The group $R_n$ is freely generated by the set
\[
\{
x_1^i x_j x_1^{-i-1}: 0 \leq i \leq n-2,\ 2\leq j\leq s-1 \}
\cup
\{x_1^{n-1}x_j: 1\leq j \leq s-1\},
\]
so it is a free group on $r=(s-2)n+1$ generators.
\end{lemma}
\begin{proof}
In this case the transversal set equals $T=\{y^i: 0 \leq i \leq n-1\}$.
Moreover
\[
\overline{y^i x_j}=
\begin{cases}
y^{i+1} & \text{ if } i< n-1 \\
1 & \text { if } i=n-1.
\end{cases}
\]
For all $i$, $0 \leq i \leq n-1$ and for all generators $x_j$, $1\leq j \leq s-1$ we compute
\[
y^i x_j (\overline{y^i x_j})^{-1}=
\begin{cases}
y^i x_j y^{-i-1}=x_1^ix_jx_1^{-i-1} & \text{ if } 0\leq i \leq n-2 \\
y^{n-1} x_j=x_1^{n-1} x_j & \text{ if } i=n-1
\end{cases}
\]
Keep in mind that if $j=1$ then $x_1^{i}x_j x_1^{-i-1}=1$ and this value does not give us a generator. On the other hand the expression $x_1^{n-1}x_j$ survives even if $j=1$, giving the generator $x_1^n$.
The desired result follows.
\end{proof}
\begin{proposition}
\label{prop:GalModRn}
As a $\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]$-module, $R_n/R_n'$ is isomorphic to
\[
R_n/R_n' = \mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]^{s-2} \oplus \mathbb{Z}.
\]
\end{proposition}
\begin{proof}
Set $\beta_j=x_j x_1^{-1}$ for $2\leq j \leq s-1$. Then the action of $\mathbb{Z}/n\mathbb{Z}=\langle \sigma \rangle$ on elements $\beta_j$ is
given by
\[
\beta_j^{\sigma^\ell}=x_1^{\ell}
\left(x_j x_1^{-1}
\right)
x_1^{-\ell}=x_1^{\ell} x_j x_1^{-\ell-1}
\text{ for } 0 \leq \ell \leq n-1.
\]
It is clear that for each fixed $j$, $2\leq j \leq s-1$, the elements $\beta_j^{\sigma^\ell}$ generate a copy of the group algebra $\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]$. By the explicit form of the basis generators given in lemma \ref{Rngenerators} we have the alternative basis given by
\label{sec:Rn}
\begin{equation}
\label{secBaseRn}
\{ x_1^{i}x_jx_1^{-i-1}: 2 \leq j \leq s-1, 0\leq i \leq n-1\} \cup \{x_1^n\}.
\end{equation}
The result follows.
\end{proof}
\begin{remark}
The above computation is compatible with the Schreier index formula \cite[cor. 8.5 p.66]{bogoGrp} which asserts that
\begin{equation}
\label{ASRn}
r-1=n(s-2).
\end{equation}
\end{remark}
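The generating set of lemma \ref{Rngenerators} and the count of eq. (\ref{ASRn}) can be checked mechanically. The following Python sketch (all names and the word encoding are ours, not from the text) computes the nontrivial Schreier generators $\gamma(t,x)=tx\overline{tx}^{-1}$ of $R_n=\ker(\alpha \bmod n)$ for the transversal $T=\{x_1^i : 0\leq i<n\}$:

```python
# Schreier generators of R_n = ker(alpha mod n) in the free group on
# x_1, ..., x_{s-1}; words are lists of (generator index, ±1) letters.

def reduce_word(word):
    """Freely reduce a word given as a list of (generator, ±1) letters."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return tuple(out)

def inverse(word):
    return [(g, -e) for g, e in reversed(word)]

def schreier_generators(s, n):
    """Nontrivial gamma(t, x) = t x (bar(t x))^{-1}, as in Schreier's lemma."""
    def rep(word):
        # transversal representative: x_1^(winding number mod n)
        return [(1, 1)] * (sum(e for _, e in word) % n)
    gens = set()
    for i in range(n):                 # t = x_1^i runs over the transversal T
        t = [(1, 1)] * i
        for j in range(1, s):          # x = x_j runs over the free generators
            tx = reduce_word(t + [(j, 1)])
            gamma = reduce_word(list(tx) + inverse(rep(tx)))
            if gamma:
                gens.add(gamma)
    return gens
```

For example, `len(schreier_generators(s, n))` agrees with $r=(s-2)n+1$, as predicted by the Schreier index formula.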
\begin{remark}
Observe that there is no natural reduction modulo $n$ map from $H_1(C_s,\mathbb{Z})$ to $H_1(Y_n,\mathbb{Z})$ corresponding to the group reduction $\mathbf{Z} \rightarrow \mathbb{Z}/n\mathbb{Z}$.
\end{remark}
We collect in Table \ref{Tab:hom} the generators of the fundamental groups of the open curves involved in this article.
The curves in the third column correspond to the quotients of the universal covering space of $X_s$ by the groups in the first column.
\begin{table}[h]
\begin{tabular}{|lllll|}
\rowcolor{LightCyan}
\hline
\textup{Group} & \textup{Generators} & \textup{Curve} & \textup{Galois group}
& \textup{Homology} \\
\hline
\rowcolor{Gray1}
$ F_{s-1}$
&
$x_1,\ldots,x_{s-1}$
&
$X_s$
&
$\{1\}$ & $F_{s-1}/F_{s-1}'$
\\
\rowcolor{Gray}
$\{1\}$ & $\emptyset$ & $\tilde{X}_s$ & $F_{s-1}$ &
$\{1\}$
\\
\rowcolor{Gray1}
$F_{s-1}'$ & $[x_i,x_j], i\neq j$ & $Y$ & $ F_{s-1}/F_{s-1}'$ & $F_{s-1}'/F_{s-1}''$
\\
\rowcolor{Gray}
$R_0$ & $ x_1^i x_j x_1^{-i-1},
\substack{
i\in \mathbb{Z}
\\
2\leq j \leq s-1}
$ & $C_s$ & $\mathbf{Z}$ & $R_0/R_0'$
\\
\rowcolor{Gray1}
$R_n$ &
$\begin{array}{l}
x_1^i x_j x_1^{-i-1}, \substack{0\leq i \leq n-2 \\ 2 \leq j \leq s-1}
\\
x_1^{n-1}x_j, 1\leq j \leq s-1
\end{array}
$
& $Y_n$ & $\mathbb{Z}/n\mathbb{Z}$ & $R_n/R_n'$
\\
\hline
\end{tabular}
\caption{Generators and homology \label{Tab:hom}}
\end{table}
\subsection{The Burau Representation}
\label{sec:BurauDiscrete}
Consider the action of a generator $\sigma_i$ of $B_s$, seen as an automorphism of the free group, given for $1\leq i \leq s-2$ and $1\leq j \leq s-1$ as
\[
\sigma_i(x_j)=
\begin{cases}
x_j & \text{ if } j\neq i, i+1 \\
x_i & \text{ if } j=i+1\\
x_i x_{i+1} x_i^{-1} & \text{ if } j=i
\end{cases}
\]
Therefore the conjugation action on the generators $\beta_j=x_j x_1^{-1}$ of $R_0$, seen as a $\mathbb{Z}[\mathbf{Z}]$-module, is given for $j\geq 2$ by:
\[
\sigma_j(\beta_{j+1})=\sigma_j(x_{j+1}x_1^{-1})=x_jx_1^{-1}=\beta_j,
\]
\begin{align*}
\sigma_j(\beta_j) &= \sigma_j(x_jx_1^{-1})
= x_j \cdot x_{j+1} \cdot x_{j}^{-1} \cdot x_1^{-1} = x_j x_1^{-1} \cdot x_1 x_{j+1} x_1^{-2} x_1^2 x_{j}^{-1} \cdot x_1^{-1} \\
&= \beta_j x_1\beta_{j+1}x_1^{-1} x_1\beta_j^{-1} x_1^{-1}
= \beta_j \beta_{j+1}^t \beta_j^{-t}=\beta_j^{1-t} \beta_{j+1}^t.
\end{align*}
The notation for $t$ above is in accordance with the group algebra notation $\mathbb{Z}[\mathbf{Z}]=\mathbb{Z}[t,t^{-1}]$.
Also in the special case where $j=1$ we compute:
\[
\sigma_1(\beta_{2})=\sigma_1(x_{2}x_1^{-1})=x_1 \cdot x_1 x_2^{-1} x_1^{-1}=\beta_2^{-t},
\]
and if $i > 2$
\begin{align*}
\sigma_1(\beta_i) &= \sigma_1(x_ix_1^{-1})
= x_i \cdot x_{1} x_{2}^{-1} x_1^{-1}
= x_i x_1^{-1} \cdot x_1 x_{1} x_2^{-1} x_{1}^{-1}
= \beta_i \beta_{2}^{-t}.
\end{align*}
We now compute the action on the $\mathbb{Z}[\mathbf{Z}]$-module $R_0/R_0'$, in which the classes of the $\beta_i$ commute, and we arrive at the matrix of the action with respect to the basis $\{\beta_2,\ldots,\beta_{s-1}\}$:
\[
\sigma_j \mapsto
\begin{pmatrix}
\mathrm{Id} & & & \\
& 1-t & 1 & \\
& t & 0 & \\
& & & \mathrm{Id}
\end{pmatrix} \quad \text{if } j\neq 1,
\quad \text{and} \quad
\sigma_1 \mapsto
\begin{pmatrix}
-t & -t & & -t \\
0 & 1 & & 0 \\
\vdots & \ddots & \ddots & \\
0 & \cdots & 0 & 1
\end{pmatrix}.
\]
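As a sanity check (ours, not from the text), the matrices above satisfy the braid relations. The sketch below encodes Laurent polynomials in $t$ as Python dictionaries mapping exponents to coefficients and builds the matrices of the $\sigma_j$ with respect to the basis $\beta_2,\ldots,\beta_{s-1}$:

```python
# Reduced Burau matrices over Z[t, t^{-1}]; a Laurent polynomial is a dict
# {exponent: coefficient}, a matrix is a list of rows of such dicts.

def padd(p, q):
    r = dict(p)
    for e, c in q.items():
        r[e] = r.get(e, 0) + c
        if r[e] == 0:
            del r[e]
    return r

def pmul(p, q):
    r = {}
    for e1, c1 in p.items():
        for e2, c2 in q.items():
            r[e1 + e2] = r.get(e1 + e2, 0) + c1 * c2
    return {e: c for e, c in r.items() if c != 0}

def mmul(a, b):
    n = len(a)
    out = [[{} for _ in range(n)] for _ in range(n)]
    for i in range(n):
        for j in range(n):
            acc = {}
            for k in range(n):
                acc = padd(acc, pmul(a[i][k], b[k][j]))
            out[i][j] = acc
    return out

ONE, T = {0: 1}, {1: 1}

def burau(j, s):
    """Matrix of sigma_j on beta_2, ..., beta_{s-1} (size s-2), as displayed above."""
    m = [[dict(ONE) if i == k else {} for k in range(s - 2)] for i in range(s - 2)]
    if j == 1:
        m[0] = [{1: -1} for _ in range(s - 2)]            # first row (-t, ..., -t)
    else:
        k = j - 2                                          # beta_j sits at index j-2
        m[k][k], m[k][k + 1] = {0: 1, 1: -1}, dict(ONE)    # 1-t and 1
        m[k + 1][k], m[k + 1][k + 1] = dict(T), {}         # t and 0
    return m
```

For $s=5$ one checks on the level of matrices that $\sigma_1\sigma_2\sigma_1=\sigma_2\sigma_1\sigma_2$ and that $\sigma_1$ commutes with $\sigma_3$.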
\begin{lemma} \label{CommActions}
The action of $t$ on $R_0^{\mathrm{ab}}$ commutes with the action of the braid group.
\end{lemma}
\begin{proof}
It is obvious that for $\sigma_{j}$, $j\geq 2$, and $a\in R_0$ we have
\[
\sigma_j(a^t)=\sigma_{j}(x_1 a x_1^{-1})=x_1 \sigma_j(a) x_1^{-1}
=(\sigma_j(a))^t.
\]
For $\sigma_1$ we observe that
\begin{align*}
\sigma_1(a^t)&= \sigma_1(x_1 a x_1^{-1})=x_1 x_2 x_1^{-1} \sigma_1(a) x_1 x_2^{-1} x_1^{-1}=
x_1 \beta_2 \sigma_1(a) \beta_2^{-1} x_1^{-1} \\
&=x_1 \sigma_1(a) x_1^{-1}=(\sigma_1(a))^t,
\end{align*}
since $\sigma_1(a)$ is expressed as product of $\beta_\nu$ and the elements $\beta_i$ commute modulo $R_0'$.
\end{proof}
\subsection{The profinite Burau representation}
\label{sec:Burau-prof}
Since the action of elements $\sigma\in
\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ on elements $x_i$ involves $N(\sigma) \in \mathbb{Z}_\ell$, we cannot
define
an action of the absolute Galois group on $H_1(C_s,\mathbb{Z}_\ell)=H_1(C_s,\mathbb{Z}) \otimes_\mathbb{Z} \mathbb{Z}_\ell=\mathbb{Z}_\ell[\mathbf{Z}]^{s-2}$,
in the same way we defined the action of the braid group on $H_1(C_s,\mathbb{Z})$.
Recall that we denote by $\mathbf{Z}_\ell$ the group $\mathbb{Z}_\ell$ written multiplicatively, i.e.
$\mathbf{Z}_\ell \cong \langle t^\alpha, \alpha\in \mathbb{Z}_\ell \rangle$.
It turns out that instead of the ordinary group algebra $\mathbb{Z}_\ell[\mathbf{Z}]$ we need the completed group algebra $\mathbb{Z}_\ell[[\mathbf{Z}_\ell]]$.
In this way we see the profinite Burau representation as a linear representation:
\[
\rho_{\mathrm{Burau}}: \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q}) \rightarrow \mathrm{GL}_{s-2}(\mathbb{Z}_\ell[[\mathbf{Z}_\ell]]).
\]
\begin{remark}
The $\mathbb{Z}_\ell$-algebra $\mathbb{Z}_\ell[[\mathbf{Z}_\ell]]$ is a ring
defined as the inverse limit
\[
\mathbb{Z}_\ell[[\mathbf{Z}_\ell]]=
\lim_{\substack{\leftarrow \\ n}}
\mathbb{Z}_\ell[\mathbb{Z}/\ell^n \mathbb{Z}]
\]
of the ordinary group algebra, see \cite[p.171]{RibesZalesskii}.
It contains the $\mathbb{Z}$-algebra $\mathbb{Z}[\mathbf{Z}]\cong \mathbb{Z}[t,t^{-1}]$ which appears in the discrete topological Burau representation as a dense subalgebra.
\end{remark}
\begin{lemma}
\label{existLimit}
Let $\alpha=\sum_{\nu=0}^\infty a_\nu \ell^\nu \in \mathbb{Z}_\ell$, $0\leq a_\nu < \ell$ for all $0\leq \nu$.
Set
\[
A_n=\left(
1+t+t^2+\ldots+ t^{(\sum_{\nu=0}^n a_\nu \ell^\nu)-1}
\right).
\]
Then the sequence $(A_n)$ converges and we will denote its limit by $(t^\alpha-1)/(t-1)$, that is
\[
\lim_{n\rightarrow \infty}
\left(
1+t+t^2+\ldots+ t^{(\sum_{\nu=0}^n a_\nu \ell^\nu)-1}
\right)
=
\frac{t^{\alpha}-1}{t-1}.
\]
\end{lemma}
\begin{proof}
The algebra $\mathbb{Z}_\ell[\mathbb{Z}/\ell^n \mathbb{Z}]$ is identified with the set of all expressions $\sum_{\nu=0}^{\ell^n-1} b_\nu t_n^\nu$, where $t_n$ is a generator of the cyclic group $\mathbb{Z}/\ell^n \mathbb{Z}$ and $b_\nu \in \mathbb{Z}_\ell$. In the inverse limit defining $\mathbb{Z}_\ell$ the generator $t_{n+1}$ of
$\mathbb{Z}/\ell^{n+1}\mathbb{Z}$ is sent to the generator $t_n$ of $\mathbb{Z}/\ell^n\mathbb{Z}$. The corresponding map in the group algebras (by identifying $t_n=t_{n+1}=t$) is given by sending
\[
\mathbb{Z}_\ell[\mathbb{Z}/\ell^{n+1}\mathbb{Z}] \ni
\sum_{\nu=0}^{\ell^{n+1}-1} b_\nu t^\nu
\longmapsto
\sum_{\nu=0}^{\ell^{n}-1} b_\nu t^\nu
\in
\mathbb{Z}_\ell[\mathbb{Z}/\ell^{n}\mathbb{Z}].
\]
We compute now for $m<n$
\begin{align*}
A_n-A_m &=
\sum_{\nu=a_0+a_1\ell+\cdots+ a_m \ell^m}^
{a_0+a_1\ell+\cdots+ a_n \ell^n} t^\nu
= t^{a_0+a_1\ell+\cdots+ a_m \ell^m}
\sum_{\nu=0}^{a_{m+1} \ell^{m+1} + \cdots +
a_n \ell^n
} t^\nu
\end{align*}
Therefore, the sequence is Cauchy and converges in the complete group algebra $\mathbb{Z}_\ell[[\mathbf{Z}_\ell]]$.
\end{proof}
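The Cauchy property proved above can also be illustrated numerically: projecting the partial sums $A_n$ to the finite quotient $(\mathbb{Z}/\ell^k\mathbb{Z})[\mathbb{Z}/\ell^m\mathbb{Z}]$, the images stabilize for large $n$. A small Python sketch (the parameters, the chosen digits and all names are ours):

```python
# Images of the partial sums A_n = 1 + t + ... + t^(c_n - 1) in the finite
# quotient (Z/l^k Z)[Z/l^m Z]: they stabilize, illustrating that (A_n) is
# Cauchy in the completed group algebra Z_l[[Z_l]].

def project(c, l, m, k):
    """Image of 1 + t + ... + t^(c-1), with t of order l^m, coefficients mod l^k."""
    coeffs = [0] * l**m
    for v in range(c):
        coeffs[v % l**m] = (coeffs[v % l**m] + 1) % l**k
    return coeffs

l = 3
digits = [2, 1, 0, 2, 1, 1]     # an element alpha = 2 + 1*3 + 0*9 + ... of Z_3
# c_n = a_0 + a_1 l + ... + a_n l^n, the exponent sums appearing in A_n
partial = [sum(d * l**i for i, d in enumerate(digits[:n + 1]))
           for n in range(len(digits))]
```

Indeed, $A_n-A_m$ is a sum of $\ell^{m+1}$-many consecutive powers of $t$ (up to a unit), so its image has coefficients divisible by $\ell^{m+1-j}$ in the level-$\ell^j$ quotient, exactly as in the proof.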
\begin{lemma} \label{writeInv}
We have for $\alpha\in \mathbb{N}$, $\beta_k=x_k x_1^{-1}$.
\begin{equation}
\label{n-rel}
x_k^\alpha x_1^{-\alpha}= \beta_k \cdot \beta_k^t \cdot \beta_k^{t^2} \cdots \beta_k^{t^{\alpha-1}}.
\end{equation}
For $\alpha\in \mathbb{Z}_\ell$ we have
\begin{equation}
\label{n-rel1}
x_k^\alpha x_1^{-\alpha}=
\beta_k^{\frac{t^\alpha-1}{t-1}}.
\end{equation}
\end{lemma}
\begin{proof}
We will prove first the result for $\alpha=n\in \mathbb{N}$.
Indeed, for $\alpha=1$ the result is trivial, while by induction
\[
x_k^n x_1^{-n} = x_k \beta_k \cdots \beta_k^{t^{n-2}} x_1^{-1}
= x_k x_1^{-1} x_1 \beta_k \cdots \beta_k^{t^{n-2}} x_1^{-1}
= \beta_k \cdot \beta_k^t \cdot \beta_k^{t^2} \cdots \beta_k^{t^{n-1}}.
\]
\]
Now for $\alpha=\sum_{\nu=0}^{\infty} a_\nu \ell^{\nu} \in \mathbb{Z}_\ell$ we consider the
sequence $c_n=\sum_{\nu=0}^{n} a_\nu \ell^{\nu}\rightarrow \alpha$.
We have
\[
x_k^\alpha x_1^{-\alpha} =\lim_n x_k^{c_n} x_1^{-c_n}=
\lim_n \beta_k^{\frac{t^{c_n}-1}{t-1}} =
\beta_k^{\frac{t^{\alpha}-1}{t-1}}.
\]
\end{proof}
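For $\alpha=n\in\mathbb{N}$, eq. (\ref{n-rel}) is an identity of freely reduced words, so it can be verified mechanically for any concrete $n$; a Python sketch (the word encoding and names are ours):

```python
# Check of eq. (n-rel): x_k^n x_1^{-n} = beta_k beta_k^t ... beta_k^{t^{n-1}}
# as freely reduced words, where beta_k = x_k x_1^{-1} and the t^i-action is
# conjugation by x_1^i.

def reduce_word(word):
    """Freely reduce a word given as a list of (generator, ±1) letters."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return tuple(out)

def power(g, n):
    """The word g^n, for n of either sign."""
    return [(g, 1)] * n if n >= 0 else [(g, -1)] * (-n)

def conj_t(word, i):
    """beta^(t^i) = x_1^i beta x_1^{-i}."""
    return power(1, i) + list(word) + power(1, -i)

def check_n_rel(k, n):
    beta = [(k, 1), (1, -1)]
    lhs = reduce_word(power(k, n) + power(1, -n))
    rhs = reduce_word([letter for i in range(n) for letter in conj_t(beta, i)])
    return lhs == rhs
```

The product on the right telescopes, $x_1^{-i}$ cancelling against $x_1^{i}$, which is exactly the induction step of the proof above.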
\begin{lemma}
\label{pass-over}
For every $i\neq 1$, and $N\in \mathbb{Z}_\ell$ we have
\[
x_i^{-1}x_1^{-N} =x_1^{-N} x_i^{-1} \cdot \beta_i^{1-t^N}.
\]
More generally for $a\in \mathbb{Z}_\ell^*$
\[
x_i^{-a} x_1^{-N}= x_1^{-N} x_i^{-a} \cdot \beta_i^{\frac{t^a-1}{t-1}(1-t^N)}.
\]
\end{lemma}
\begin{proof}
We compute
\begin{align*}
x_i^{-1}x_1^{-N} &=x_1^{-N} x_i^{-1} \cdot
x_i x_1^{N} x_i^{-1} x_1^{-N}
\\
& =
x_1^{-N} x_i^{-1} \cdot
x_i x_1^{-1} x_1^{N} (x_i x_1^{-1})^{-1} x_1^{-N} \\
&= x_1^{-N} x_i^{-1} \cdot\beta_i \beta_i^{-t^N} \\
& =x_1^{-N} x_i^{-1} \cdot\beta_i^{1-t^N}.
\end{align*}
The second equality is proved the same way
\begin{align*}
x_i^{-a}x_1^{-N} &=x_1^{-N} x_i^{-a} \cdot
x_i^{a} x_1^{N} x_i^{-a} x_1^{-N}
\\
& =
x_1^{-N} x_i^{-a} \cdot
x_i^a x_1^{-a} x_1^{N} (x_i^a x_1^{-a})^{-1} x_1^{-N} \\
&= x_1^{-N} x_i^{-a} \cdot\beta_i^{\frac{t^a-1}{t-1}(1-t^N)}.
\end{align*}
\end{proof}
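For $N\in\mathbb{Z}$ the first identity of lemma \ref{pass-over} holds exactly in the free group, in the form $x_i^{-1}x_1^{-N}=x_1^{-N}x_i^{-1}\,\beta_i\,\beta_i^{-t^N}$ used in its proof (the exponent $1-t^N$ is notation for this product). A Python check (encoding and names ours):

```python
# Check of lemma pass-over in its exact form
# x_i^{-1} x_1^{-N} = x_1^{-N} x_i^{-1} * beta_i * beta_i^{-t^N},
# where beta_i = x_i x_1^{-1} and beta^(t^N) = x_1^N beta x_1^{-N}.

def reduce_word(word):
    """Freely reduce a word given as a list of (generator, ±1) letters."""
    out = []
    for g, e in word:
        if out and out[-1] == (g, -e):
            out.pop()
        else:
            out.append((g, e))
    return tuple(out)

def power(g, n):
    """The word g^n, for n of either sign."""
    return [(g, 1)] * n if n >= 0 else [(g, -1)] * (-n)

def check_pass_over(i, N):
    beta = [(i, 1), (1, -1)]                                      # beta_i
    inv_beta_tN = power(1, N) + [(1, 1), (i, -1)] + power(1, -N)  # beta_i^{-t^N}
    lhs = reduce_word(power(i, -1) + power(1, -N))
    rhs = reduce_word(power(1, -N) + power(i, -1) + beta + inv_beta_tN)
    return lhs == rhs
```

The check works for $N$ of either sign, in accordance with the computation in the proof.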
\begin{lemma}
For a given word $x_{s-1}^{-a_{s-1}}\cdots x_{1}^{-a_{1}}$ we have
\[
\left(
x_{s-1}^{-a_{s-1}}\cdots x_{1}^{-a_{1}}
\right) x_1^{-N}=
x_1^{-N}
\left(
x_{s-1}^{-a_{s-1}}
\beta_{s-1}^{
\frac{
t^{a_{s-1}}-1
}
{t-1}
(1-t^N)
}
\cdots
x_{2}^{-a_{2}}
\beta_{2}^
{
\frac{
t^{a_{2}}-1}
{t-1}
(1-t^N)
}
x_1^{-a_1}
\right).
\]
\end{lemma}
\begin{proof}
We use lemma \ref{pass-over} inductively to have
\begin{align*}
x_{s-1}^{-a_{s-1}}\cdots x_{1}^{-a_{1}} x_1^{-N}
&=
x_{s-1}^{-a_{s-1}}\cdots x_{3}^{-a_{3}} x_1^{-N}
x_{2}^{-a_{2}}
\beta_{2}^{
\frac{
t^{a_{2}}-1
}
{t-1}
(1-t^N)
}
x_1^{-a_1}
\\
&=
x_{s-1}^{-a_{s-1}}\cdots x_{4}^{-a_{4}}
x_1^{-N}
x_{3}^{-a_{3}}
\beta_{3}^{
\frac{
t^{a_{3}}-1
}
{t-1}
(1-t^N)
}
x_{2}^{-a_{2}}
\beta_{2}^{
\frac{
t^{a_{2}}-1
}
{t-1}
(1-t^N)
}
x_1^{-a_1}
\\
&= \cdots
\\
&=
x_1^{-N}
x_{s-1}^{-a_{s-1}}
\beta_{s-1}^{
\frac{
t^{a_{s-1}}-1
}
{t-1}
(1-t^N)
}
\cdots
x_{2}^{-a_{2}}
\beta_{2}^{
\frac{
t^{a_{2}}-1
}
{t-1}
(1-t^N)
}
x_1^{-a_1}.
\end{align*}
\end{proof}
For simplicity denote $N(\sigma)$ by $N$ and $w_i(\sigma)$ by $w$.
We will consider $w x_i^N w^{-1} x_1^{-N}$, where
$
w^{-1}=x_{s-1}^{-a_{s-1}}\cdots x_{1}^{-a_{1}}
$.
Working modulo $R_0'$, we have
\begin{align*}
wx_i^N w^{-1} x_1^{-N} &=
\beta_i^{t^{\sum_{\nu=1}^{s-1} a_{\nu}}\frac{t^N-1}{t-1}}
\beta_{s-1}^
{
t^{\sum_{\nu=1}^{s-2} a_{\nu}}\frac{t^{a_{s-1}}-1}{t-1}
(1-t^N)
}
\cdots
\beta_{2}^
{
t^{a_1} \frac{t^{a_2}-1}{t-1}
(1-t^N)
}.
\end{align*}
An arbitrary element $w \in \mathfrak{F}_{s-1}$ can be written in a unique way as
\[
w=B\cdot x_1^{a_1}\cdots x_{s-1}^{a_{s-1}}, \qquad a_i \in \mathbb{Z}_\ell
\]
where $B$ is an element in the group $R_0$ generated by the elements $\beta_i$, $i=2,\ldots,s-1$. Observe now that
for every $\beta_i$, and $N\in \mathbb{Z}_{\ell}$ we have
\[
\beta_i x_1^{-N} = x_1^{-N} x_1^{N} \beta_i x_1^{-N}=
x_1^{-N} \beta_i^{t^N}.
\]
By considering a sequence of words in $\beta_i$ tending to $B$ we see that
\[
B x_1^{-N} = x_1^{-N} B^{t^N},
\]
for every element $B$ in the pro-$\ell$ completion of $R_0$.
This means that
\[
w x_i^{N} w^{-1} x_1^{-N}
= B (x_1^{a_1}\cdots x_{s-1}^{a_{s-1}}) x_i^N
(x_{s-1}^{-a_{s-1}} \cdots x_1^{-a_1}) B^{-1} x_1^{-N}
\]
\[
{\scriptstyle
=B
\left(
x_1^{a_1}\cdots x_{s-1}^{a_{s-1}}
\right) x_i^N x_1^{-N}
\left(
x_{s-1}^{-a_{s-1}}
\beta_{s-1}^{(1-t^N)
\frac{t^{
a_{s-1}}-1
}
{t-1}
}
\cdots
x_{2}^{-a_2}
\beta_{2}^{(1-t^N)
\frac{
t^{a_{2}}-1
}
{t-1}
}
x_1^{-a_1}
\right)
B^{-t^N}
}
\]
\[=
B
\beta_i^{t^{a_1+\cdots +a_{s-1}}
\frac{t^{
N}-1
}
{t-1}
}
\beta_{s-1}^{(1-t^N)t^{a_1+\cdots +a_{s-2}}
\frac{t^{
a_{s-1}}-1
}
{t-1}
}
\cdots
\beta_2^{(1-t^N) t^{a_1}
\frac{t^{
a_2}-1
}
{t-1}
}
B^{-t^N}.
\]
In $R_0/R_0'$ the above expression evaluates to
\begin{equation}
{\scriptstyle
\label{congAbsQ}
w x_i^N w^{-1} x_1^{-N}=
\beta_i^{t^{a_1+\cdots+ a_{s-1}}
\frac{t^{
N}-1
}
{t-1}
}
\beta_{s-1}^{(1-t^N)t^{a_1+\cdots +a_{s-2}}
\frac{t^{
a_{s-1}}-1
}
{t-1}
}
\!\!\!\!\!\!\cdots
\beta_2^{(1-t^N)t^{a_1}
\frac{t^{
a_2}-1
}
{t-1}
}
B^{-t^N+1}.
}
\end{equation}
\begin{theorem}
\label{MatBurau}
For $\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ and $1\leq i \leq s-1$ we have that $\sigma(x_i)=w_i(\sigma) x_i^{N(\sigma)} w_i(\sigma)^{-1}$, where $N:\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})\rightarrow \mathbb{Z}_\ell^*$ is the cyclotomic character.
Consider the multiplicative group $\mathbf{Z}_\ell$ which is isomorphic to $\mathbb{Z}_\ell$ and has
topological generator $t$ given by
$
\mathbf{Z}_\ell\cong
\langle
t^\alpha, \alpha\in \mathbb{Z}_\ell
\rangle.
$
Let us write
\[
w_i(\sigma)= B_i(\sigma)x_1^{a_{1,i}(\sigma)}\cdots x_{s-1}^{a_{s-1,i}(\sigma)}, \qquad a_{\nu,i}(\sigma)\in \mathbb{Z}_\ell,
\]
where $B_i(\sigma) \in R_0/R_0'$ is expressed as
\[
B_i(\sigma)=
\beta_2^{b_{2,i}(\sigma)}
\cdots
\beta_{s-1}^{b_{s-1,i}(\sigma)}C,
\]
with $b_{\nu,i}(\sigma) \in \mathbb{Z}_\ell$ and $C\in R_0'$.
The matrix representation of $\rho_{\mathrm{Burau}}$ with respect to the basis $\beta_j=x_jx_1^{-1}$, $j=2,\ldots,s-1$ has the following form:
\[
\rho_{\mathrm{Burau}}(\sigma)=
\frac{t^{N(\sigma)}- 1}{t-1}
L(\sigma)
+
\big(
1-t^{N(\sigma)}
\big) M(\sigma)+
\big(
1-t^{N(\sigma)}
\big) K(\sigma),
\]
where
$L,M,K$ are $(s-2)\times(s-2)$ matrices given by
\[
L(\sigma)=
\mathrm{diag}
\left(
t^{\sum_{\nu=1}^{s-1} a_{\nu,2}(\sigma)},
\ldots
,
t^{\sum_{\nu=1}^{s-1} a_{\nu,s-1}(\sigma)}
\right)
\]
\[
M(\sigma)\!\!= \!\!{\scriptscriptstyle
\begin{pmatrix}
\Gamma(a_{2,2}) \cdot
t^{a_{1,2}(\sigma)}
& \cdots &
\Gamma(a_{2,s-1}) \cdot
t^{a_{1,s-1}(\sigma)} \\
\Gamma(a_{3,2}) \cdot
t^{a_{1,2}(\sigma)+a_{2,2}(\sigma)}
& \cdots &
\Gamma(a_{3,s-1}) \cdot
t^{a_{1,s-1}(\sigma)+a_{2,s-1}(\sigma)} \\
\vdots
& & \vdots \\
\Gamma(a_{s-1,2}) \cdot
t^{a_{1,2}(\sigma)
+\cdots+a_{s-2,2}(\sigma)}
& \cdots &
\Gamma(a_{s-1,s-1}) \cdot
t^{a_{1,s-1}(\sigma)
+\cdots+a_{s-2,s-1}(\sigma)}
\end{pmatrix}
}
\]
\[
K(\sigma)=
\begin{pmatrix}
b_{2,2}(\sigma) & b_{2,3}(\sigma) & \cdots & b_{2,s-1}(\sigma) \\
b_{3,2}(\sigma) & b_{3,3}(\sigma) & \cdots & b_{3,s-1}(\sigma) \\
\vdots & \vdots & & \vdots \\
b_{s-1,2}(\sigma) & b_{s-1,3}(\sigma) & \cdots & b_{s-1,s-1}(\sigma)
\end{pmatrix}.
\]
In the above theorem the term
\[
\Gamma(a):=(t^a-1)/(t-1), \qquad a\in \mathbb{Z}_\ell,
\]
is the element defined in lemma \ref{existLimit}.
\end{theorem}
\begin{proof}
We will find the matrix $\rho$ corresponding to the action given by
$\sigma(x_i)=w_i(\sigma) x_i^{N(\sigma)} w_i(\sigma)^{-1}$.
Let us write each $w_i(\sigma)$
as
\[
w_i(\sigma)= B_i(\sigma)x_1^{a_{1,i}(\sigma)}\cdots x_{s-1}^{a_{s-1,i}(\sigma)},
\]
where $B_i(\sigma) \in R_0/R_0'$ is expressed as
\[
B_i(\sigma)=
\beta_2^{b_{2,i}(\sigma)}
\cdots
\beta_{s-1}^{b_{s-1,i}(\sigma)}C,
\]
with $b_{\nu,i}(\sigma) \in \mathbb{Z}_\ell[[\mathbf{Z}_\ell]]$ and $C\in R_0'$.
Let us now consider the action of $\sigma$ on $\beta_i$ for $i=2,\ldots,s-1$ and recall that just after eq. (\ref{actGeneratos}) we normalized by an inner automorphism so that $w_1(\sigma)=1$,
hence $\sigma(x_1)=x_1^{N(\sigma)}$.
Therefore
\[
\sigma(\beta_i)=\sigma(x_i x_1^{-1})=
w_i(\sigma) x_i^{N(\sigma)} w_i(\sigma)^{-1} x_1^{-N(\sigma)}.
\]
The matrix form of $\rho_{\mathrm{Burau}}$ as given in theorem \ref{MatBurau} follows by eq. (\ref{congAbsQ}).
More precisely, the
matrix $L(\sigma)$ comes from the coefficients of the factor $\beta_i^{t^{a_1+\cdots+a_{s-1}}\frac{t^N-1}{t-1}}$, the matrix $M(\sigma)$ comes from the next factor
\[\beta_{s-1}^{(1-t^N)t^{a_1+\cdots+a_{s-2}}
(t^{a_{s-1}}-1)/(t-1)
}\cdots
\beta_2^{(1-t^N)t^{a_1}
(t^{a_{2}}-1)/(t-1)
} \]
and the matrix $K(\sigma)$ comes from the final factor $B^{-t^N+1}$.
\end{proof}
\section{Examples - Complete curves}
\label{sec:applications-cyc-cov}
\subsection{The compactification of cyclic covers}
Every topological cover of the Riemann surface $\mathbb{P}^1\backslash \{P_1,\ldots,P_{s}\}$ gives rise to a Riemann surface $X^0$, which can be compactified to a compact Riemann surface $X$, see \cite[prop. 19.9]{MR1343250}. Moreover, if the topological cover is Galois with Galois group $G$, then the corresponding function field extension $\mathbb{C}(X)/\mathbb{C}(x)$ is Galois with the same Galois group. We know that every Kummer extension of the rational function field, totally ramified above $s$ points,
corresponds to the cyclic cover of the projective line given by:
\begin{equation} \label{cyccov}
y^n=\prod_{i=1}^{s} (x-b_i)^{d_i}, \qquad (d_i,n)=1.
\end{equation}
For different choices of exponents $d_1,\ldots,d_s$ the curves are in general not isomorphic, see \cite{Kallel-Sjerve}.
Without loss of generality we can assume that the point at infinity of this model is not ramified, which is equivalent to the condition
$\sum\limits_{i=1}^s d_i\equiv 0 \mod n$, see \cite[p. 667]{Ko:99}. This means the ramified points $\{P_1,\ldots,P_{s-1},P_s=\infty\}$ in our original setting are now mapped to the points $\{b_1,\ldots,b_s\}$.
Conversely, the cover given in eq. (\ref{cyccov})
determines equivalently a cyclic Kummer extension of the rational function field $\mathbb{C}(x)$ and since the exponents $d_i$ are prime to $n$ we have that the points $P_1,\ldots,P_{s}$ are all fully ramified see \cite{Ko:99}. Therefore, the open curve obtained by removing the $s$ points $Q_1,\ldots,Q_s$ which map onto $P_1,\ldots,P_s$ is a topological cyclic cover, which can be considered with the tools developed so far.
However, we will show that the assumptions made so far in this article lead to the selection $d_i=1$ for all $1\leq i \leq s-1$.
Let $Q_i$ be the unique point of $X$ above $b_i$ and let $t_i$ be a local uniformizer at $Q_i$.
We can select $t_i$ so that
$x-b_i=t_i^n$. Indeed, the valuation of $x-b_i$ in the local ring at $Q_i$ is $n$ and by Hensel's lemma any unit is an $n$-th power, which can be absorbed by reselecting the uniformizer $t_i$ if necessary.
We can replace the factor $(x-b_i)^{d_i}$ in the original defining equation (\ref{cyccov}) of the curve in order to arrive at the following equation
\begin{equation}
\label{eq1d}
y^n= t_i^{nd_i} U, \qquad
U=
\prod_{\substack{\nu=1 \\ \nu \neq i}}^{s} (x-b_\nu)^{d_\nu}
\in k[x], v_{Q_i}(U)=0.
\end{equation}
The element $U$ is invariant under the action of $\langle \sigma \rangle$ and so is its $n$-th root $u\in k[[t_i]]$. Indeed, since $\sigma(u^n)=\sigma(U)=u^n$ we have that $\sigma(u)=\zeta^a u$, for some $a$, $0\leq a <n$. But $u$ is a unit, therefore $u \equiv a_0 \mod t_i k[[t_i]]$, for some element $a_0 \in k$, $a_0\neq 0$. Also $\sigma(a_0)=a_0$, so by considering $\sigma(u)=\zeta^a u$ modulo $t_i k[[t_i]]$ we obtain $a_0= \zeta^a a_0 $. This implies that $a=0$ and $u$ is a $\sigma$-invariant element.
Since $x-b_i=t_i^n$
the generator $\sigma$ of $\mathrm{Gal}(X/\mathbb{P}^1)$ acts on $t_i$ by $\sigma(t_i)=\zeta^{e} t_i$ for some $e \in \mathbb{N}$.
This $e$ equals $d_i^*$ for some $0< d_i^* < n$, where
$d_i d_i^*\equiv 1 \mod n$.
Indeed,
by taking the $n$-th root in eq. (\ref{eq1d}), we have
$
y=t_i^{d_i} u
$
for some $\sigma$-invariant unit $u$ in $k[[t_i]]$. Then the action of $\sigma$ gives us that
$\zeta=\zeta^{e d_i}$, so $e d_i \equiv 1 \mod n$.
So in the short exact sequence
\[
1 \rightarrow
R_n
\rightarrow
F_{s-1}
\rightarrow
\mathbb{Z}/n\mathbb{Z}
\rightarrow
1
\]
the elements $x_i$, which correspond to loops winding once around each branch point, map to the element $\sigma^{d_i^*} \in \mathbb{Z}/n\mathbb{Z}$. This is not compatible
with the selection of the winding number function $\alpha$ given in equation (\ref{w-map}) unless all $d_i$ are equal. Without loss of generality we can assume that $d_i=1$ for all $1\leq i \leq s-1$.
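The exponent $d_i^*$ used above is simply the multiplicative inverse of $d_i$ modulo $n$. As a quick illustration (ours, not part of the argument; the helper name below is hypothetical), it can be computed with Python's three-argument `pow`:

```python
# Illustration only: d_i^* is the multiplicative inverse of d_i modulo n.
# Python 3.8+ computes modular inverses via the three-argument pow.
def d_star(d, n):
    return pow(d, -1, n)  # raises ValueError unless gcd(d, n) = 1

assert d_star(1, 7) == 1               # d_i = 1 forces d_i^* = 1
assert (3 * d_star(3, 10)) % 10 == 1   # 3 * 7 = 21 = 1 mod 10
assert (5 * d_star(5, 12)) % 12 == 1   # 5 * 5 = 25 = 1 mod 12
```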
The Riemann--Hurwitz theorem implies that
\begin{equation} \label{genusCyclic}
g=\frac{(n-1)(s-2)}{2},
\end{equation}
which is compatible with the computation of $r=2g+s-1$ given in eq. (\ref{ASRn}).
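As a sanity check (ours, not part of the text), the closed form for $g$ agrees with a direct Riemann--Hurwitz count in which each of the $s$ totally ramified points contributes $n-1$ to the different:

```python
# Numeric sketch (illustrative): for a degree-n cyclic cover of P^1 totally
# ramified over s points, Riemann-Hurwitz gives 2g - 2 = n(-2) + s(n - 1),
# which should reproduce the closed form g = (n - 1)(s - 2)/2 of the text.

def genus_closed_form(n, s):
    num = (n - 1) * (s - 2)
    assert num % 2 == 0, "g must be an integer"
    return num // 2

def genus_riemann_hurwitz(n, s):
    # each totally ramified point contributes n - 1 to the ramification term
    two_g = 2 - 2 * n + s * (n - 1)
    assert two_g % 2 == 0
    return two_g // 2

for n, s in [(2, 4), (3, 3), (5, 10), (7, 7)]:
    g = genus_closed_form(n, s)
    assert g == genus_riemann_hurwitz(n, s)
    # rank r = 2g + s - 1 of the homology of the open curve, as in the text
    assert 2 * g + s - 1 == (n - 1) * (s - 2) + s - 1
```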
This curve can be uniformized as a quotient $\mathbb{H}/\Gamma$ of the hyperbolic plane modulo a discrete surface group $\Gamma$ of genus $g$, which admits a presentation
\[
\Gamma=\langle a_1,b_1,a_2,b_2,\ldots,a_g,b_g | [a_1,b_1][a_2,b_2]\cdots [a_g,b_g]=1 \rangle.
\]
On the other hand, when we remove the $s$ branch points we obtain a topological cover of the space $X_s$ defined in the previous section.
This topological cover corresponds to the free subgroup $R_n<F_{s-1}$ given by
\[
R_n=\langle a_1,b_1,a_2,b_2,\ldots,a_g,b_g,\gamma_1,\ldots,\gamma_s | \gamma_1 \gamma_2 \cdots \gamma_s \cdot[a_1,b_1]\cdots [a_g,b_g]=1\rangle.
\]
The group $\mathrm{Gal}(X/\mathbb{P}^1)\cong \mathbb{Z}/n\mathbb{Z}=\langle \sigma \rangle$ is a subgroup of the automorphism group $\mathrm{Aut}(X)\subset \mathrm{Mod}(X)$. Therefore the generator $\sigma$ acts on $R_n$.
Since by lemma \ref{lemma:funRn} the group $R_n$ is the fundamental group of $Y_n$ the space $R_n/R_n'$ is the first homology group of the open curve $Y_n$. By proposition \ref{prop:GalModRn} its structure is given by $H_1(Y_n,\mathbb{Z})=\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]^{s-1} \bigoplus \mathbb{Z}$.
Let $\widehat{R}_n,\widehat{R_n'}$ be the pro-$\ell$ completions of $R_n$ and $R_n'$ respectively. Since the quotient $R_n/R_n'$ is torsion free, the completion functor is exact, see \cite[p. 35 exer. 21,22]{DDMS} and \cite[p. 81-85]{RibesZalesskii}.
This allows us to see that
\[
\widehat{R}_n /\widehat{R_n'}=\widehat{H_1(Y_n,\mathbb{Z})}=H_1(Y_n,\mathbb{Z}_\ell).
\]
\begin{lemma}
\label{invariantEll}
With notation as above, the $\mathrm{Gal}(X/\mathbb{P}^1)$-invariant elements of $H_1(Y_n,\mathbb{Z})$ (resp. $H_1(Y_n,\mathbb{Z}_\ell)$) form the group generated by the elements
\[
\{x_i^n : 1\leq i \leq s-1\}.
\]
\end{lemma}
\begin{proof}
We will use the decomposition of proposition \ref{prop:GalModRn} for $H_1(Y_n,\mathbb{Z})$ and the corresponding decomposition of $H_1(Y_n,\mathbb{Z}_\ell)=H_1(Y_n,\mathbb{Z})\otimes_\mathbb{Z} \mathbb{Z}_\ell$.
Observe that an element in the group algebra $\mathbb{Z}[\langle \sigma \rangle]$ is $\sigma$-invariant if and only if it is of the form
$
\sum_{i=0}^{n-1}a\sigma^i$ for some $a\in \mathbb{Z}$.
Hence the invariant elements are the multiples (powers, in multiplicative notation) of
\[
\beta_j \beta_j^\sigma \beta_j^{\sigma^2} \cdots \beta_j^{\sigma^{n-1}}=x_j^nx_1^{-n}.
\]
The action of
$\langle \sigma\rangle=\mathrm{Gal}(X/\mathbb{P}^1)$ is
given by conjugation with $x_1$, therefore $x_1^n$ is invariant under this conjugation action and the result follows.
\end{proof}
The elements $\gamma_i$ are lifts of the loops $x_i$ around each hole in the projective line. Thus $\gamma_i$ are $\mathbb{Z}/n\mathbb{Z}$-invariant.
Set $\gamma_i=x_i^{n}$.
The quotient
$\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]/ \langle \sum_{i=0}^{n-1} \sigma ^i\rangle$ is the co-augmentation module, see \cite[sec. 1]{NeukirchBonn}.
\begin{lemma}
We have
\[
x_k^n x_i x_k^{-n} x_1^{-1}= \beta_k \cdot \beta_k^\sigma \cdot \beta_k^{\sigma^2} \cdots \beta_k^{\sigma ^{n-1}}
\cdot
\beta_i^{\sigma ^n}
\cdot
\beta_k^{-\sigma ^n}
\cdot
\beta_k^{-\sigma ^{n-1}}
\cdots
\beta_k^{-\sigma ^2}
\cdot
\beta_k^{-\sigma }
\]
Moreover in the abelian group $R_n/R_n'$ we have
\[
x_k^n x_i x_k^{-n} x_1^{-1}=\beta_i^{\sigma ^n} \beta_k^{1-\sigma ^n}.
\]
\end{lemma}
\begin{proof}
Write
\begin{eqnarray*}
x_k^n x_i x_k^{-n} x_1^{-1} & = &
x_k^n x_1^{-n} \cdot x_1^{n}
x_i
x_1^{-1}
x_1^{-n}
x_1^{n+1}
x_k^{-n}
x_1^{-1} \\
& = &
\beta_k \cdot \beta_k^\sigma \cdot \beta_k^{\sigma ^2} \cdots \beta_k^{\sigma ^{n-1}}
\cdot
x_1^{n} \beta_i x_1^{-n}
x_1
\left(
\beta_k \cdot \beta_k^\sigma \cdot \beta_k^{\sigma ^2} \cdots \beta_k^{\sigma ^{n-1}}
\right)^{-1} x_1^{-1}
\\
& =&
\beta_k \cdot \beta_k^\sigma \cdot \beta_k^{\sigma ^2} \cdots \beta_k^{\sigma ^{n-1}}
\cdot
\beta_i^{\sigma ^n}
\cdot
\beta_k^{-\sigma ^n}
\cdot
\beta_k^{-\sigma ^{n-1}}
\cdots
\beta_k^{-\sigma ^2}
\cdot
\beta_k^{-\sigma }
\end{eqnarray*}
\end{proof}
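Since the identity of the lemma is a finite free-group computation, it can also be verified mechanically. The following Python sketch (our illustration, not from the paper; generators are labelled by integers and a word is a list of (generator, exponent) pairs, with $\beta_j^{\sigma^m}=x_1^m x_j x_1^{-m-1}$) checks it for small $n$:

```python
# Sketch: verify x_k^n x_i x_k^{-n} x_1^{-1}
#   = beta_k beta_k^sigma ... beta_k^{sigma^{n-1}} . beta_i^{sigma^n}
#     . beta_k^{-sigma^n} ... beta_k^{-sigma}
# in the free group, via stack-based free reduction of words.

def reduce_word(w):
    out = []
    for g, e in w:
        if e == 0:
            continue
        if out and out[-1][0] == g:
            g0, e0 = out.pop()
            if e0 + e:
                out.append((g0, e0 + e))
        else:
            out.append((g, e))
    return out

def conc(*words):
    w = []
    for x in words:
        w.extend(x)
    return reduce_word(w)

def inv(w):
    return [(g, -e) for g, e in reversed(w)]

def beta(j, m):
    # beta_j^{sigma^m} = x_1^m x_j x_1^{-m-1}
    return conc([(1, m)], [(j, 1)], [(1, -m - 1)])

def check(n, i, k):
    lhs = conc([(k, n)], [(i, 1)], [(k, -n)], [(1, -1)])
    rhs = conc(*[beta(k, m) for m in range(n)],
               beta(i, n),
               *[inv(beta(k, m)) for m in range(n, 0, -1)])
    return lhs == rhs

assert all(check(n, 2, 3) for n in range(1, 6))
assert check(4, 4, 2)
```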
\begin{lemma}
The subgroup of $H_1(Y_n,\mathbb{Z})=R_n/R_n'$ generated by either of the following two sets of $\mathbb{Z}/n\mathbb{Z}$-invariant elements
\[\{x_1^n, x_j^n x_1^{-n}: 2 \leq j \leq s-1\},
\qquad
\{
x_j^n : 1 \leq j \leq s-1
\}
\]
is invariant under the action of the braid group.
The subgroup of $H_1(Y_n,\mathbb{Z}_\ell)$ generated by the same elements is invariant under the braid group
and under the action of the group $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$.
\end{lemma}
\begin{proof}
We consider first the braid action. The proof is the same in the discrete and in the pro-$\ell$ setting.
By lemma \ref{writeInv} we have
\begin{align*}
\sigma_1 (x_1^n) & = (x_1 x_2 x_1^{-1})^{n}
= x_1 \cdot x_2^{n} \cdot x_1^{-1}
= x_1 \cdot x_2^{n} x_1^{-n}\cdot x_1^{n-1}
\\
&= x_1 \cdot \beta_2 \cdot \beta_2^\sigma \cdot \beta_2^{\sigma^2} \cdots \beta_2^{\sigma^{n-1}} \cdot x_1^{-1} \cdot x_1^{n}
= \beta_2^\sigma \cdot \beta_2^{\sigma^{2}} \cdots \beta_2^{\sigma^{n}} \cdot x_1^{n}
\\
&= \beta_2 \cdot \beta_2^\sigma \cdots \beta_2^{\sigma^{n-1}} \cdot x_1^{n}
= x_2^{n} x_1^{-n} \cdot x_1^{n}=x_2^n \\
\sigma_1 (x_2^n) &= x_1^{n}, \sigma_1 (x_i^n) = x_i^{n} \ (i>2).
\end{align*}
\begin{eqnarray*}
\text{For $j\geq 2$: }
\sigma_j (x_j^n x_1^{-n}) & = & (x_j x_{j+1} x_j^{-1})^{n} x_1^{-n}
= x_j \cdot x_{j+1}^{n} \cdot x_j^{-1} \cdot x_1^{-n}\\
& = & x_jx_1^{-1} \cdot x_1 (x_{j+1}^{n}x_1^{-n}) x_1^{-1} \cdot x_1^{n} \cdot x_1x_j^{-1}\cdot x_1^{-n} \\
&= & x_{j+1}^n x_1^{-n} \\
\sigma_j(x_j^n)&= & \sigma_j(x_j^{n} x_1^{-n}) \sigma_j(x_1^n)
=x_{j+1}^n.
\end{eqnarray*}
We will now consider the action of $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$, which makes sense only in the pro-$\ell$ setting. Each element $\tau\in
\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ acts on $x_i$ by
\[
\tau(x_i)=w_i(\tau) x_i^{N(\tau)} w_i(\tau)^{-1}.
\]
Therefore, for $i=2,\ldots,s-1$ we have
\begin{align*}
\tau(x_i^n x_1^{-n}) &=
\tau(\beta_i \beta_i^\sigma \cdots \beta_i^{\sigma^{n-1}}) \\
&=
\left(
\tau(\beta_i)\right)^{
1+\sigma+\cdots+\sigma^{n-1}
}\end{align*}
which is an element invariant under the action of $\mathbb{Z}/n\mathbb{Z}=\langle \sigma \rangle$, therefore it belongs to the desired group by lemma \ref{invariantEll}. Recall that we have normalized $\tau$ by an inner automorphism so that $w_1(\tau)=1$, hence $\tau(x_1^n)=x_1^{N(\tau)n}$.
\end{proof}
Consider now the space
\[
H_1(\bar{Y}_n,\mathbb{Z})=\frac{R_n}{R_n' \cdot \langle \gamma_1,\ldots,\gamma_s \rangle} =
\frac{R_n}{R_n' \cdot \langle x_1^n,\ldots,x_s^n \rangle}.
\]
Observe that $R_n/R_n' \cdot \langle x_1 \rangle=\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]^{s-2}.$
Since $\langle \gamma_1,\ldots,\gamma_s\rangle$ is both $\mathbb{Z}/n\mathbb{Z}$- and $B_s$-stable, we have a naturally defined action of $B_s$ on the quotient.
We now compute the action of the braid group on $\beta_j^{\sigma^i}=x_1^{i} x_j x_1^{-i-1}$. We can pick as a basis of the $\mathbb{Z}$-module $H_1(\bar{Y}_n,\mathbb{Z})$ the elements
\[
\{
\beta_j^{\sigma^i}=x_1^i x_j x_1^{-1-i}: 2\leq j \leq s-1, 0\leq i \leq n-2
\}
\]
and equation (\ref{n-rel}) written additively implies that
$\beta_j^{\sigma^{n-1}}=-\sum_{\nu=0}^{n-2} \beta_j^{\sigma^\nu}$, recall that all powers $x_i^n$ are considered to be zero.
Let $J_{\mathbb{Z}/n\mathbb{Z}}$ be the co-augmentation module.
Observe that
$\beta_j^{t^{\nu}-1}=[x_1^{\nu},x_j]$.
It is well known (see, \cite[Prop. 1.2]{NeukirchBonn}) that
$
\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]=J_{\mathbb{Z}/n\mathbb{Z}} \oplus \mathbb{Z}.
$
We have
\begin{equation}
\label{DSeq}
H_1(\bar{Y}_n,\mathbb{Z})=J_{\mathbb{Z}/n\mathbb{Z}}^{s-2}.
\end{equation}
Notice that the above $\mathbb{Z}$-module has the correct rank $2g=(n-1)(s-2)$.
The direct sum in eq. (\ref{DSeq}) is in the category of $\mathbb{Z}$-modules, not in the category of $B_s$-modules. Also, on the co-augmentation module $J_{\mathbb{Z}/n\mathbb{Z}}$ the generator of $\mathbb{Z}/n\mathbb{Z}$ is represented by the matrix:
\begin{equation} \label{augmentationMat}
A:=
\begin{pmatrix}
0 & \cdots & 0 & -1 \\
1 & \ddots & \vdots & \vdots \\
0 & \ddots & 0 & -1 \\
0 & 0 & 1 & -1
\end{pmatrix}
\end{equation}
which is the companion matrix of the polynomial $x^{n-1}+\cdots+ x+1$.
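As an illustrative check (ours, not part of the text), one can verify numerically that $A$ satisfies $1+A+\cdots+A^{n-1}=0$, i.e. that the norm element of the group algebra acts trivially on the co-augmentation module:

```python
# Sketch: A is the companion matrix of x^{n-1} + ... + x + 1, acting on the
# co-augmentation module with basis beta, beta^sigma, ..., beta^{sigma^{n-2}}.
# We check that the group-algebra norm 1 + A + ... + A^{n-1} vanishes.

def companion(n):
    # (n-1) x (n-1): ones on the subdiagonal, last column all -1
    m = n - 1
    A = [[0] * m for _ in range(m)]
    for r in range(1, m):
        A[r][r - 1] = 1
    for r in range(m):
        A[r][m - 1] = -1
    return A

def matmul(X, Y):
    m = len(X)
    return [[sum(X[r][k] * Y[k][c] for k in range(m)) for c in range(m)]
            for r in range(m)]

def norm_sum(n):
    # 1 + A + ... + A^{n-1}
    m = n - 1
    S = [[int(r == c) for c in range(m)] for r in range(m)]  # identity
    P = [row[:] for row in S]
    A = companion(n)
    for _ in range(n - 1):
        P = matmul(P, A)
        S = [[S[r][c] + P[r][c] for c in range(m)] for r in range(m)]
    return S

for n in (2, 3, 5, 6):
    assert all(v == 0 for row in norm_sum(n) for v in row)
```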
Notice that for $n=p$ prime we can represent
$J_{\mathbb{Z}/n\mathbb{Z}}$ in terms of the $\mathbb{Z}$-module $\mathbb{Z}[\zeta]$, where $\zeta$ is a primitive $p$-th root of unity, i.e.
\[
\mathbb{Z}[\zeta]=\bigoplus_{\nu=0}^{p-2} \zeta^{\nu} \mathbb{Z},
\]
and the $\mathbb{Z}[\mathbb{Z}/n\mathbb{Z}]$-module structure is given by multiplication by $\zeta$.
Since the $\mathbb{Z}/n\mathbb{Z}$-action and the braid action commute, we have a decomposition (notice that $1$ does not appear in the eigenspace decomposition below)
\[
H_1(\bar{Y}_n,\mathbb{Z})\otimes_{\mathbb{Z}}\mathbb{C} =\bigoplus_{\nu=1}^{n-1} V_\nu
\]
where $V_\nu$ is the eigenspace of the $\zeta^\nu$-eigenvalue. Each $V_\nu$
is a $B_s$-module of dimension $s-2$.
In order to compute the spaces $V_\nu$ we have to diagonalize the matrix given in eq. (\ref{augmentationMat}). Consider the Vandermonde matrix given by:
\[
P=
\begin{pmatrix}
1 & \zeta_1 & \zeta_1^2 & \cdots & \zeta_1^{n-2} \\
1 & \zeta_2 & \zeta_2^2 & \cdots & \zeta_2^{n-2} \\
\vdots & \vdots & & \vdots \\
1 & \zeta_{n-1} & \zeta_{n-1}^2 & \cdots & \zeta_{n-1}^{n-2}
\end{pmatrix},
\]
where $\{\zeta_1,\ldots,\zeta_{n-1}\}$ are all the $n$-th roots of unity different from $1$. Observe that
\[
P\cdot A = \mathrm{diag}(\zeta_1,\zeta_2,\ldots,\zeta_{n-1}) \cdot P.
\]
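The displayed identity $P\cdot A=\mathrm{diag}(\zeta_1,\ldots,\zeta_{n-1})\cdot P$ can be confirmed numerically; the following sketch (ours, not from the paper) checks it entrywise for several values of $n$:

```python
# Numeric sketch: verify P.A = diag(zeta_1,...,zeta_{n-1}).P for the
# companion matrix A of x^{n-1}+...+x+1 and the Vandermonde matrix P built
# from the n-th roots of unity different from 1.
import cmath

def check_diagonalization(n, tol=1e-9):
    zetas = [cmath.exp(2j * cmath.pi * k / n) for k in range(1, n)]
    m = n - 1
    P = [[z ** c for c in range(m)] for z in zetas]
    A = [[0] * m for _ in range(m)]
    for r in range(1, m):
        A[r][r - 1] = 1
    for r in range(m):
        A[r][m - 1] = -1
    # compare (P.A) with (diag.P) entrywise
    for r in range(m):
        for c in range(m):
            lhs = sum(P[r][k] * A[k][c] for k in range(m))
            rhs = zetas[r] * P[r][c]
            if abs(lhs - rhs) > tol:
                return False
    return True

assert all(check_diagonalization(n) for n in (2, 3, 5, 8))
```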
Thus the action of the braid group on the eigenspace $V_\nu$ of the eigenvalue $\zeta^\nu$ can be computed by a base change as follows:
consider the initial basis
$\beta_j, \beta_j^{\sigma},\ldots,\beta_j^{\sigma^{n-2}}$ for $2 \leq j \leq s-1$. The eigenspace of the $\zeta^\nu$ eigenvalue has as basis the $\nu$-th entries of the $1\times (n-1)$ matrices
\[
\left( \beta_j, \beta_j^{\sigma},\ldots,\beta_j^{\sigma^{n-2}} \right) \cdot P^{-1}
\]
for all $j$ such that
$2\leq j \leq s-1$. These elements are $\mathbb{C}$-linear combinations of the elements $\beta_j^{\sigma^i}$ and the action of the braid generators on them can be easily computed.
Since the action of $\mathrm{Gal}(\bar{Y}_n/\mathbb{P}^1)=\langle \sigma \rangle$ commutes with the action of $B_{s}$ (resp. $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$) each eigenspace is a $B_{s}$ (resp. $\mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$) module. The action of the operator $t$ on each $V_\nu$ is essentially the action of $\sigma$, which, by definition of the eigenspace, acts by multiplication by $\zeta_\nu$.
Therefore, the matrix representation corresponding to each eigenspace $V_\nu$ is the matrix of the Burau (resp. pro-$\ell$ Burau) representation evaluated at $t=\zeta_\nu$.
Similarly in the pro-$\ell$ case we have
\begin{equation}
\label{module-decomp1}
\mathbb{Z}_{\ell}[[\mathbf{Z}_\ell]]^{s-2}
\otimes_{\mathbb{Z}_\ell} \bar{\mathbb{Q}}_\ell = \bigoplus_{\nu=1}^{\ell^k-1} V_\nu,
\end{equation}
which after reducing $\mathbb{Z}_{\ell}[[\mathbf{Z}_\ell]] \rightarrow \mathbb{Z}_{\ell} [\mathbb{Z}_{\ell}/\ell^k \mathbb{Z}_\ell]=\mathbb{Z}_{\ell} [\mathbb{Z}/\ell^k\mathbb{Z}]$ sending $t\mapsto \zeta_\nu$ gives rise to the representation in $V_\nu$.
The decomposition in eq. (\ref{module-decomp1}) is a decomposition of $\mathbb{Z}_\ell$-modules. The Galois module structure and the $\mathbb{Z}_\ell$ action do not commute in this case. Indeed, equation (\ref{actGeneratos}) implies that
$\sigma \in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ acts on the pro-$\ell$ generator by
\[
\sigma t =t^{N(\sigma)} \sigma.
\]
Therefore, the modules $V_\nu$ defined above are $\mathrm{ker}N$-modules.
\subsection{Relation to actions on holomorphic differentials}
\label{sec:GalInvarInterforms}
Let $S$ be a compact Riemann surface of genus $g$. Consider the first homology group $H_1(S,\mathbb{Z})$, which is a free $\mathbb{Z}$-module of rank $2g$. Let $H^0(S,\Omega_S)$ be the space of holomorphic differentials, which is a $\mathbb{C}$-vector space of dimension $g$. The function
\begin{eqnarray*}
H_1(S,\mathbb{Z}) \times H^0(S,\Omega_S) & \rightarrow & \mathbb{R} \\
( \gamma , \omega) & \mapsto & \langle \gamma, \omega \rangle =\mathrm{Re}\int_\gamma \omega
\end{eqnarray*}
induces a duality between $H_1(S,\mathbb{Z})\otimes \mathbb{R}$ and $H^0(S,\Omega_S)^*$, see \cite[th. 5.6]{LangIntroAlgAbFun}, \cite[sec. 2.2 p. 224]{Griffiths-Harris:95}. Therefore an action of a group element on $H_1(S,\mathbb{Z})$ gives rise to the contragredient action on holomorphic differentials, see also \cite[p. 271]{Farkas-Kra}.
C. McMullen in \cite[sec. 3]{McMullenBraidHodge} considered the Hodge decomposition of the de Rham cohomology as
\[
H^1(X)=\mathrm{Hom}_{\mathbb{C}}(H_1(X,\mathbb{Z}),\mathbb{C})=
H^{1,0}(X) \oplus H^{0,1}(X)\cong \Omega(X) \oplus \bar{\Omega}(X).
\]
Of course this decomposition takes place in the dual space of holomorphic differentials, and is based on the intersection form
\begin{equation} \label{dua-intform}
\langle \alpha,\beta \rangle =i/2 \int_X \alpha \wedge \bar{\beta}, \qquad i^2=-1.
\end{equation}
In this article we use the group-theoretic approach and focus on the homology group $H_1(X,\mathbb{Z})$. The homology group is equipped with an intersection form and a canonical symplectic basis $a_1,\ldots,a_g,b_1,\ldots,b_g$ such that
\[
\langle a_i,b_j \rangle =\delta_{ij}, \qquad \langle a_i,a_j \rangle= \langle b_i,b_j \rangle =0.
\]
Every two homology classes $\gamma,\gamma'$ can be written as $\mathbb{Z}$-linear combinations of the canonical basis
\[
\gamma = \sum_{i=1}^g (\lambda_i a_i + \mu_i b_i) \qquad
\gamma' = \sum_{i=1}^g (\lambda_i' a_i + \mu_i' b_i)
\]
and the intersection is given by
\[
\langle \gamma,\gamma' \rangle=
(\lambda_1,\ldots,\lambda_g, \mu_1,\ldots,\mu_g)
\begin{pmatrix}
0 & \mathbb{I}_g \\
- \mathbb{I}_g & 0
\end{pmatrix}
(\lambda_1',\ldots,\lambda_g', \mu_1',\ldots,\mu_g')^t.
\]
This gives rise to a representation
\begin{equation}
\label{sympRep}
\rho: B_{s-1} \rightarrow \mathrm{Sp}(2g,\mathbb{Z})
\end{equation}
since $\langle \sigma(\gamma),\sigma(\gamma')\rangle=\langle \gamma,\gamma'\rangle$.
Indeed, it is known \cite[sec. 3.2.1]{MR2435235} that the action of the braid group preserves the intersection multiplicity of two curves.
The relation to the unitary representation on holomorphic differentials (and the signature computations) is given by using the diagonalization of
\[
\begin{pmatrix}
0 & \mathbb{I}_g \\
- \mathbb{I}_g & 0
\end{pmatrix}
=P \cdot \mathrm{diag}(\underbrace{i,\ldots,i}_g,\underbrace{-i,\ldots,-i}_g) \cdot P^{-1},
\]
and the extra ``$i$'' appearing in front of eq. (\ref{dua-intform}).
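Concretely, the standard symplectic matrix has eigenvalues $\pm i$, each of multiplicity $g$, with eigenvectors $(e_k,\pm i e_k)$; a small numeric sketch (ours, not from the paper) exhibits this:

```python
# Numeric sketch: J = [[0, I], [-I, 0]] satisfies J(x, y) = (y, -x), so the
# vectors (e_k, +/- i e_k) are eigenvectors with eigenvalues +/- i; this is
# the diagonalization used in the text.
def check_eigs(g, tol=1e-12):
    dim = 2 * g
    J = [[0.0] * dim for _ in range(dim)]
    for k in range(g):
        J[k][g + k] = 1.0
        J[g + k][k] = -1.0
    for k in range(g):
        for lam, sign in ((1j, 1), (-1j, -1)):
            v = [0j] * dim
            v[k] = 1
            v[g + k] = sign * 1j
            Jv = [sum(J[r][c] * v[c] for c in range(dim)) for r in range(dim)]
            if any(abs(Jv[r] - lam * v[r]) > tol for r in range(dim)):
                return False
    return True

assert check_eigs(1) and check_eigs(3)
```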
\subsubsection{Arithmetic intersection}
\label{arithInter}
In order to define an analogous result in the case of the absolute Galois group we first have to define an intersection form on $H_1(X,\mathbb{Z}_\ell)$, which can be obtained as the limit of the intersection forms on $H_1(X,\mathbb{Z}/\ell^n \mathbb{Z})$.
For every $\sigma\in \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q})$ and $\gamma,\gamma' \in H_1(X,\mathbb{Z}_\ell)$ we
have
\[
\langle \sigma(\gamma),\sigma(\gamma')\rangle=
\chi_\ell(\sigma) \langle \gamma,\gamma' \rangle,
\]
where $\chi_\ell(\sigma)$ is the $\ell$-cyclotomic character.
Indeed, consider the Jacobian variety $J(X)$ for the curve $X$. By construction of the Jacobian variety as a quotient of its tangent space at the identity element it is clear that $H_1(J(X),\mathbb{Z})=H_1(X,\mathbb{Z})$ and after tensoring with $\mathbb{Z}_\ell$ the same equality holds for the pro-$\ell$ homology groups. Consider the following diagram
\[
\xymatrix{
H_1(X,\mathbb{Z}) \times H_1(X,\mathbb{Z})
\ar[r]^{\langle \cdot,\cdot\rangle} \ar[d] &
\mathbb{Z} \ar[d]
\\
T_\ell(J(X)) \times T_\ell(J(X))
\ar[r]^{e^{\lambda}} &
\mathbb{Z}_\ell(1)=
\displaystyle \lim_{\leftarrow} \mu_{\ell^n}\subset \bar{\mathbb{Q}},
}
\]
where the bottom horizontal arrow is given by the Weil pairing $e^{\lambda}$ with respect to the canonical polarization $\lambda$, and the top horizontal arrow is the homology intersection form.
The vertical arrows on the left are the obvious ones, while the vertical arrow $\mathbb{Z} \rightarrow \displaystyle \lim_{\leftarrow} \mu_{\ell^n}$
is given by
$\mathbb{Z} \ni m \mapsto (\ldots,e^{ \frac{2 \pi i m}{\ell^n} },\ldots )$.
The above diagram
is known to commute up to a negative sign, see \cite[p. 237]{MumfordAbelian}, \cite[ex. 13.3 p. 58]{milneAV}, that is
\[
e^\lambda(a,a')=
(\ldots, e^{ -\frac{2 \pi i \langle a,a' \rangle}{\ell^n} } ,\ldots ).
\]
By selecting a primitive $\ell^n$-th root of unity for every $n$, say $e^{2\pi i/\ell^n}$, we can write $\mathbb{Z}_\ell(1)$ as an additive module, that is, we can send
\[
\mathbb{Z}_\ell(1) \ni \alpha =(\ldots, e^{2 \pi i a_n/ \ell^n},\ldots) \mapsto (\ldots, a_n,\ldots) \in \mathbb{Z}_\ell.
\]
It is known that the Weil pairing induces a symplectic pairing in $T_\ell(J(X))\cong H_1(X,\mathbb{Z}_\ell)$, \cite[prop. 16.6]{MR861974},\cite{MR3549183}, \cite{Duarte} so that
\[
\langle \sigma a ,\sigma a' \rangle=
\chi_\ell(\sigma) \langle a, a'\rangle.
\]
In this way we obtain a representation
\[
\rho: \mathrm{Gal}(\bar{\mathbb{Q}}/\mathbb{Q}) \rightarrow
\mathrm{GSp}(2g,\mathbb{Z}_\ell)
\]
which is the arithmetic analogue of the representation given in eq. (\ref{sympRep}).
A superlattice structure in two-dimensional (2D) materials has opened a new way to engineer electronic bands, starting with the investigation on a honeycomb superlattice structure in monolayer graphene \cite{PhysRevLett.71.4389}. Recently, Moir\'e pattern in a twisted-bilayer van der Waals heterostructure has been immensely successful in generating a variety of band structures, including Hofstadter butterfly \cite{PhysRevB.84.035440,spanton2018observation} and flat bands \cite{bistritzer2011moire,PhysRevB.86.155449,PhysRevLett.122.106405,cao2018unconventional,cao2018correlated}. These bands can induce intriguing strongly correlated phases such as fractional Chern insulator \cite{spanton2018observation}, anomalous Hall phase \cite{sharpe2019emergent,PhysRevX.9.031021}, Mott insulating phase \cite{cao2018correlated,choi2019electronic,chen2019evidence,PhysRevX.8.031089}, nontrivial magnetic phases \cite{PhysRevLett.119.107201,PhysRevB.98.075109,sharpe2019emergent,PhysRevLett.120.266402}, and superconductivity \cite{cao2018unconventional,yankowitz2019tuning,PhysRevX.8.031089,PhysRevLett.121.257001,PhysRevLett.122.257002}. Yet, this passive way of creating a superlattice has been largely limited by the microscopic structure of the 2D materials since different samples should be prepared for different superlattice structures. Therefore, it is interesting to find alternative ways to synthesize a spatiotemporal structure in 2D materials.
At the same time, the recent progress in the beam-shaping technique has enabled the generation of arbitrary beam patterns with high resolution comparable to the optical wavelengths \cite{zupancic2016ultra,barredo2016atom,endres2016atom,barredo2018synthetic,schine2019electromagnetic,fazal2011optical}, which has already found remarkable successes in ultracold-atom systems \cite{tai2017microscopy,lukin2019probing,chiu2019string,PhysRevLett.122.173201,PhysRevX.9.041052}. This wide tunability of light can be naturally applied to 2D electronic systems to imprint arbitrary superlattices, regardless of the underlying microscopic lattice structure. This is particularly interesting in the context of the ``Floquet topological insulator,'' where the illumination of circularly polarized (CP) light can turn a trivial system into a topological insulator \cite{PhysRevB.79.081406,kitagawa2011transport,lindner2011floquet,wang2013observation, mciver2019light,PhysRevB.90.115423,PhysRevResearch.1.023031,li2019floquet,katz2019optically,PhysRevLett.107.276601}.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig1}\subfigure{\label{F1a}}\subfigure{\label{F1b}}
\caption{A 2D material irradiated by a spatially periodic CP light with frequency $\omega$. Here, we use the example of a monolayer graphene. The superposition of multiple CP Gaussian beams generates a periodic amplitude pattern $A_0(\mathbf{r})$ with translation vectors $\mathbf{L}_1$ and $\mathbf{L}_2$. $|\mathbf{L}_1|=l$. Upper inset: We denote the interatomic distance of the graphene as $a$ and the tight-binding energy between the nearest neighbors as $t$. Lower inset: Each Gaussian beam has a peak amplitude $\mathcal{A}_0$ and a half waist $w$ (black lines). The overall beam amplitude (red line) results from the superposition of the Gaussian beams.}\label{F1}
\end{figure}
In this paper, we propose a method to create superlattice structures in a 2D material by shining spatially periodic laser beams, as schematically shown in \Cref{F1}. We illustrate the idea with an example of monolayer graphene irradiated by a circularly polarized beam with a superlattice structure, where the beam amplitude is spatially periodic. To demonstrate the tunability of this superlattice structure and unique physics originating from the superlattice, we first study the case of a square superlattice and explore the topological phase transition induced by varying the superlattice size. Then, we investigate the topological phase transitions when the square superlattice is sheared to a stretched hexagonal one. In particular, we examine the relationship between this topological phase transition and the role of lattice geometry in creating complex tunneling phases. Further, we demonstrate the possibility of creating more exotic lattices by superposing multiple lattices, with an example of tuning between a hexagonal and a kagome lattice where the flat bands can be obtained. These flat bands, in particular, can harbor strongly correlated phenomena in Floquet systems.
\section{Graphene with spatially patterned light}
Let us consider a monolayer graphene with the inter-atomic distance $a$ and the tight-binding energy $t$ between the nearest neighbors. The low-energy description for this monolayer graphene under the electromagnetic field $\mathbf{A}(\mathbf{r},t)$ is given by
\begin{eqnarray}\label{H_coupling}
H = v\left[ \mathbf{p}+e\mathbf{A}(\mathbf{r},t)\right] \cdot \left( \tau_z\sigma_x\mathbf{\hat{x}}+\sigma_y\mathbf{\hat{y}} \right),
\end{eqnarray}
where $\sigma_x,\sigma_y,\sigma_z$ are Pauli matrices acting on sublattice degrees of freedom, $v=(3/2)ta$ is the Fermi velocity at Dirac points, and $\tau_z=\pm 1$ is the valley index \cite{RevModPhys.81.109}. In particular, if we shine the CP beam with spatial amplitude pattern $\mathbf{A}(\mathbf{r},t)=A_0(\mathbf{r})e^{i\omega t}(\mathbf{\hat{x}}+i\mathbf{\hat{y}})+\text{c.c.}$ (\Cref{F1}), the effective Floquet Hamiltonian to the first order in $\omega^{-1}$ becomes \cite{PhysRevB.79.081406,PhysRevLett.110.200403,PhysRevX.4.031027,HDehghani2014Dissipative,[{In a Floquet system, the quantum Hall physics is dominated by the non-equilibrium distribution that depends on whether the system is isolated or coupled to a reservoir where photoexcitation competes with bath-induced cooling, as elaborated in }]HDehghani2015Out,*PhysRevB.99.014307,eckardt2015high,PhysRevB.93.144307}
\begin{eqnarray}\label{eqn_Heff}
H_\text{eff} = v(\tau_z p_x\sigma_x + p_y\sigma_y) + \tau_z\frac{4e^2 v^2}{\omega}\left|A_0(\mathbf{r})\right|^2 \sigma_z.
\end{eqnarray}
We denote the peak amplitude of $A_0(\mathbf{r})$ as $\mathcal{A}_0$. Then, \eqnref{eqn_Heff} becomes a valid description when frequency $\omega$ is high enough ($\omega \gg ev\mathcal{A}_0$) and the amplitude varies in length scale larger than $a$ ($\mathcal{A}_0/\text{max}\left\lbrace |\nabla A_0(\mathbf{r})| \right\rbrace \gg a$). For brevity, we set $\hbar=1$ from here on.
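As a quick numerical illustration (ours, with arbitrary units): at fixed momentum the $2\times 2$ Bloch Hamiltonian in one valley has spectrum $E=\pm\sqrt{v^2|\mathbf{p}|^2+M^2}$, with mass $M=4e^2v^2|A_0|^2/\omega$, so a uniform amplitude opens a gap $2M$ at the Dirac point:

```python
# Sketch: spectrum of the 2x2 Dirac Hamiltonian v(p_x s_x + p_y s_y) + M s_z
# (one valley, tau_z = +1) is E = +/- sqrt(v^2 |p|^2 + M^2); the gap at the
# Dirac point p = 0 is 2M = 8 e^2 v^2 |A0|^2 / omega.
import math

def dirac_energies(px, py, v, M):
    H = [[M, v * (px - 1j * py)],
         [v * (px + 1j * py), -M]]
    # eigenvalues of a traceless Hermitian 2x2 matrix: +/- sqrt(-det)
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    e = math.sqrt(-det.real)
    return (-e, e)

v, M = 1.0, 0.25
lo, hi = dirac_energies(0.0, 0.0, v, M)
assert abs(hi - M) < 1e-12 and abs(lo + M) < 1e-12   # gap 2M at p = 0
lo, hi = dirac_energies(0.3, -0.4, v, M)
assert abs(hi - math.sqrt(v**2 * 0.25 + M**2)) < 1e-12
```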
We specifically study the superlattice structure created by a spatially periodic amplitude $|A_0(\mathbf{r})|=|A_0(\mathbf{r}+\mathbf{L}_1)|=|A_0(\mathbf{r}+\mathbf{L}_2)|$. While 2D materials with spatially modulated beams have been studied in different contexts \cite{PhysRevLett.110.016802,PhysRevB.88.224106,morina2018optical}, here we investigate the generation of a superlattice with spatially periodic beams. In particular, to make the beam experimentally relevant, we consider the superposition of CP Gaussian beams positioned on the superlattice,
\begin{eqnarray}
A_0(\mathbf{r}) = \sum_{n_1,n_2} \mathcal{A}_0 \exp\left( -\frac{|\mathbf{r}-n_1\mathbf{L}_1-n_2\mathbf{L}_2|^2}{2 w^2} \right),
\end{eqnarray}
where $w$ is the radius of each Gaussian beam. This beam configuration is achievable with recent progress in beam-shaping technologies \cite{zupancic2016ultra,barredo2016atom,endres2016atom,barredo2018synthetic,schine2019electromagnetic,fazal2011optical}. For the case $|\mathbf{L}_1|,|\mathbf{L}_2|=l\gg a$, Brillouin-zone folding occurs on a momentum scale $1/l$. Furthermore, the hybridization of Floquet sidebands is suppressed for $v/l\ll\omega$, so the low-energy description is captured by \eqnref{eqn_Heff} (see Appendix A). We obtain Bloch eigenstates $\ket{\psi_{m,\mathbf{k}}}$ and eigenenergies $ E_{m,\mathbf{k}}$, where $m$ is the band index and $\mathbf{k}$ is the crystal momentum within the Brillouin zone set by the reciprocal lattice vectors of $\mathbf{L}_1$ and $\mathbf{L}_2$. Note that \eqnref{eqn_Heff} preserves particle-hole symmetry ($\sigma_x H_{\text{eff}}^* \sigma_x = -H_{\text{eff}}$), and therefore the energy spectrum is symmetric about zero energy. Also, $\sigma_y H_{\text{eff}} \sigma_y = \left. H_{\text{eff}}\right|_{\tau_z\to -\tau_z}$, so the two valleys have the same spectrum and eigenstates up to a unitary operation, $\sigma_y$. This also ensures that both valleys have the same Chern number. For brevity, we only consider the $\tau_z=1$ valley from now on.
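The Bloch problem above can be solved in a plane-wave basis. The script below is a minimal dimensionless sketch (not the production code behind the figures), with $v=l=1$ and an assumed peak mass-term strength and beam radius; it also checks numerically the particle-hole and valley symmetries stated above.

```python
import numpy as np

# Plane-wave diagonalization sketch in units v = l = hbar = 1.
# `delta0` stands in for the peak of (4 e^2 v^2/omega)|A0(r)|^2; it and the
# beam radius `w` are illustrative assumptions, not the parameters of Fig. 2.
Ng, Nr = 5, 64            # plane-wave cutoff |m|,|n| <= Ng; real-space grid
delta0, w = 3.0, 0.3

x = np.arange(Nr) / Nr
X, Y = np.meshgrid(x, x, indexing="ij")
prof = np.zeros_like(X)
for mx in (-1, 0, 1):                      # periodic images of one Gaussian
    for my in (-1, 0, 1):
        prof += np.exp(-((X - mx) ** 2 + (Y - my) ** 2) / (2 * w ** 2))
delta = delta0 * prof ** 2                 # light-induced sigma_z mass term
dG = np.fft.fft2(delta) / Nr ** 2          # Fourier coefficients Delta_G

sx = np.array([[0, 1], [1, 0]], complex)
sy = np.array([[0, -1j], [1j, 0]], complex)
sz = np.array([[1, 0], [0, -1]], complex)
Gs = [(m, n) for m in range(-Ng, Ng + 1) for n in range(-Ng, Ng + 1)]

def bloch_h(kx, ky, tauz=+1):
    """Effective Hamiltonian at crystal momentum k in the basis {k + G}."""
    N = len(Gs)
    H = np.zeros((2 * N, 2 * N), complex)
    for i, (m, n) in enumerate(Gs):
        qx, qy = kx + 2 * np.pi * m, ky + 2 * np.pi * n
        H[2*i:2*i+2, 2*i:2*i+2] += tauz * qx * sx + qy * sy  # kinetic term
        for j, (mp, np_) in enumerate(Gs):
            H[2*i:2*i+2, 2*j:2*j+2] += \
                tauz * dG[(m - mp) % Nr, (n - np_) % Nr] * sz
    return H

E0 = np.linalg.eigvalsh(bloch_h(0.0, 0.0))   # spectrum at Gamma
gap = 2 * np.min(np.abs(E0))                 # gap around zero energy
# particle-hole symmetry makes the spectrum symmetric about E = 0:
sym_err = np.max(np.abs(np.sort(E0) + np.sort(E0)[::-1]))
```

The same function can be scanned over the Brillouin zone to reproduce band structures of the type shown in \Cref{F2a}.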
\section{Illumination of square superlattice}
We first consider the simplest case of a square superlattice, $\mathbf{L}_1=l\mathbf{\hat{x}}$ and $\mathbf{L}_2=l\mathbf{\hat{y}}$. Before directly diagonalizing \eqnref{eqn_Heff}, we can anticipate the qualitative behavior. First of all, the contribution from the spatial average of $|A_0(\mathbf{r})|$ opens up a gap around zero energy ($\Delta_b$), as in graphene under uniform CP light, where the Chern number, $\mathcal{C}_1$, of the first band above $E=0$ is nonzero \cite{PhysRevB.79.081406,kitagawa2011transport,PhysRevLett.110.016802,lindner2011floquet,PhysRevX.3.031005}. $\mathcal{C}_1$ remains nonzero for small $l$, as long as the maximum kinetic energy within the Brillouin zone, which is of the order of $v/l$, is much larger than the spatial Fourier components of the $\sigma_z$ term in \eqnref{eqn_Heff}, which are of the order of $e^2v^2\mathcal{A}_0^2/\omega$. On the other hand, as $l\to\infty$, the contribution of the kinetic term becomes negligible and therefore the bands become flat. Moreover, the Bloch wavefunctions become nearly independent of $\mathbf{k}$, so the bands become topologically trivial. Therefore, there must be a topological phase transition at which $\mathcal{C}_1$ changes from a nonzero value to zero as we increase $l$. This topological transition should occur at a superlattice size that makes the two energy scales $e^2v^2\mathcal{A}_0^2/\omega$ and $v/l$ comparable to each other. For a succinct description of this phase transition, we use the rescaled superlattice size
\begin{eqnarray}
\chi = (v e^2 \mathcal{A}_0^2 /\omega)l
\end{eqnarray}
so that the critical superlattice size $\chi_c$ is $O(1)$. Here, $\chi$ represents the ratio of the effective superlattice potential to the kinetic energy.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig2}\subfigure{\label{F2a}}\subfigure{\label{F2b}}\subfigure{\label{F2c}}
\caption{(a) Energy spectrum for square superlattices with different superlattice size $\chi$. We set $\mathcal{A}_0=0.006(ea)^{-1}$, $\omega=0.06t$, and $w/l=0.3$. Only the positive-energy spectrum is shown for simplicity. The Chern numbers of low-lying bands, $\mathcal{C}$, are presented as colors. The topological phase transition occurs at $\chi_c=0.965$. Upper inset: Direct gap at $\mathbf{k}=\text{M}$ between the first- and the second-lowest positive band ($\delta_\text{M}$) is plotted in the vicinity of $\chi_c$. Lower insets: The particle density $n(\mathbf{r})$ and current density $\mathbf{j}(\mathbf{r})$ of the Bloch wavefunction of the lowest positive band at $\mathbf{k}=\text{M}$ are shown for $\chi=0.8<\chi_c$ and $\chi=4>\chi_c$. In the density plots, the centers of the Gaussian beams are located at the corners of the plotted region. The particle density is shown in units of $l^{-2}$. The amplitude of the current density is presented with the color in units of $ev/l^2$ and the direction of $\mathbf{j}(\mathbf{r})$ is represented by arrows. (b) Orbital magnetization $M_\text{orb}$ for the lowest positive band for different superlattice sizes. (c) For the lowest positive band, we plot the energy gap below the band ($\Delta_b$), the energy gap above the band ($\Delta_t$), the direct band gap at $\mathbf{k}=\text{M}$ ($\delta_M$), and the bandwidth ($\delta E$) with respect to the superlattice size $\chi$. $\alpha_0$ is the minimum value of $(4e^2 v^2/\omega)|A_0(\mathbf{r})|^2$. The black dashed lines are asymptotic lines showing that $El/v$ is constant, indicating $E\propto\chi^{-1}$. }\label{F2}
\end{figure}
To study the details of this topological phase transition, we numerically diagonalize \eqnref{eqn_Heff}, as shown in \Cref{F2a}. Along with the energy spectrum, we present the Chern number $\mathcal{C}$ of each band, calculated following Ref.~\cite{fukui2005chern}. In \Cref{F2}, we set $\mathcal{A}_0=0.006(ea)^{-1}$, $\omega=0.06t$, and $w/l=0.3$. With these parameters, we find that the topological phase transition occurs at $\chi_c=0.965$, which is close to 1. This topological transition is accompanied by a direct gap closing at $\mathbf{k}=\text{M}$ and a band inversion between the first- and second-lowest positive-energy bands. To see this, we compare the particle and current densities of the lowest positive-energy band's wavefunction at the gap-closing momentum. Here, for the Bloch wavefunction of the $m$th band, $\psi(\mathbf{r})=\braket{\mathbf{r}|\psi_{m,\mathbf{k}}}$, the particle and current densities are given by
\begin{eqnarray}
n(\mathbf{r}) &=& \psi^\dag(\mathbf{r}) \psi(\mathbf{r}), \\
\mathbf{j}(\mathbf{r}) &=& -e\psi^\dag(\mathbf{r}) \frac{\partial H_\text{eff}}{\partial \mathbf{p}} \psi(\mathbf{r})
= -ev\psi^\dag(\mathbf{r})\left( \sigma_x\mathbf{\hat{x}} + \sigma_y\mathbf{\hat{y}} \right) \psi(\mathbf{r}). \nonumber
\end{eqnarray}
The comparison of $n(\mathbf{r})$ and $\mathbf{j}(\mathbf{r})$ before ($\chi=0.8$) and after ($\chi=4$) the transition point shows a drastic change in the wavefunction, which signifies that a band inversion has occurred at the phase transition. In the current-density plot, one can also see that the circulation direction of the electron current flips as the band inversion occurs. This phenomenon is also captured in the calculation of the $m$th band's contribution to the orbital magnetization \cite{PhysRevLett.95.137205,PhysRevLett.95.137204,PhysRevLett.99.197202},
\begin{eqnarray}\label{eqn_Morb}
M_\text{orb}= \text{Im}\int \frac{d^2\mathbf{k}}{(2\pi)^2}
e\frac{\partial\bra{u_{m,\mathbf{k}}}}{\partial k_x}
\left( H_{\mathbf{k}} + E_{m,\mathbf{k}} \right)
\frac{\partial\ket{u_{m,\mathbf{k}}}}{\partial k_y}, \quad
\end{eqnarray}
where $\ket{u_{m,\mathbf{k}}}=e^{-i\mathbf{k}\cdot\mathbf{r}}\ket{\psi_{m,\mathbf{k}}}$ and $H_{\mathbf{k}}=e^{-i\mathbf{k}\cdot\mathbf{r}} H_{\text{eff}} e^{i\mathbf{k}\cdot\mathbf{r}}$. In \Cref{F2b}, one can see that $M_\text{orb}$ of the lowest positive band shows the sign flip at the phase transition point, agreeing with the observation in the current density plots.
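The band Chern numbers quoted throughout are evaluated with the lattice field-strength method of Ref.~\cite{fukui2005chern}. A minimal sketch of that algorithm is given below; for a self-contained test it is applied to a generic two-band model (the Qi-Wu-Zhang Hamiltonian, used here purely as a stand-in) rather than to the superlattice Hamiltonian, and the grid size is an illustrative choice.

```python
import numpy as np

def chern_number(hk, nk=24, band=0):
    """Lattice Chern number of `band` via U(1) link variables and the
    field-strength method of Fukui, Hatsugai, and Suzuki."""
    ks = np.linspace(0, 2 * np.pi, nk, endpoint=False)
    u = np.empty((nk, nk, 2), complex)      # band eigenvector on the k-grid
    for i, kx in enumerate(ks):
        for j, ky in enumerate(ks):
            _, vecs = np.linalg.eigh(hk(kx, ky))
            u[i, j] = vecs[:, band]
    c = 0.0
    for i in range(nk):                     # gauge-invariant plaquette flux
        for j in range(nk):
            ip, jp = (i + 1) % nk, (j + 1) % nk
            loop = (np.vdot(u[i, j], u[ip, j]) * np.vdot(u[ip, j], u[ip, jp])
                    * np.vdot(u[ip, jp], u[i, jp]) * np.vdot(u[i, jp], u[i, j]))
            c += np.angle(loop)             # principal branch of the log
    return c / (2 * np.pi)

def qwz(m):
    # two-band Qi-Wu-Zhang model: topological for 0 < |m| < 2, trivial beyond
    sx = np.array([[0, 1], [1, 0]], complex)
    sy = np.array([[0, -1j], [1j, 0]], complex)
    sz = np.array([[1, 0], [0, -1]], complex)
    return lambda kx, ky: (np.sin(kx) * sx + np.sin(ky) * sy
                           + (m + np.cos(kx) + np.cos(ky)) * sz)

c_topo = chern_number(qwz(1.0))   # topological phase, |C| = 1
c_triv = chern_number(qwz(3.0))   # trivial phase, C = 0
```

Since the plaquette fluxes are gauge invariant, the result is an integer to numerical precision on any sufficiently fine grid.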
We also remark that although this topological phase transition exists in theory for any Gaussian beam size, it is desirable to keep $w$ comparable to $l$ in experimental realizations, since a fainter superlattice implies a smaller direct band gap.
This topological phase transition could be experimentally detected in several ways. The change in $\mathcal{C}_1$ alters the Hall current carried by the chiral edge state, and such a difference can be revealed by transport measurements, similar to Ref.~\cite{mciver2019light}. As for the bulk, one can measure the orbital magnetization, where a sudden jump would be observed at the phase transition, as shown in \Cref{F2b}.
As the superlattice size $\chi$ increases, the electrons become localized at the local minima of $|A_0(\mathbf{r})|$. This explains the exponential suppression of the bandwidth of the lowest positive-energy band ($\delta E$) in $\chi$ [\Cref{F2c}]. For well-localized electrons, the dynamics can effectively be described by a tight-binding model, whose tunneling energy is approximately given by WKB integrals. These integrals decay exponentially with the distance between the superlattice sites, so the bandwidth decreases exponentially as well. The band gaps ($\Delta_b$, $\Delta_t$, $\delta_M$) decay as $O(\chi^{-1})$; the details of this band-gap scaling are explained in Appendix B.
\section{Superlattice shearing}
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{fig3}\subfigure{\label{F3a}}\subfigure{\label{F3b}}\subfigure{\label{F3c}}\subfigure{\label{F3d}}
\caption{(a) We shear a square lattice by angle $\theta$. Tunneling between two sites can be understood as the flow of chiral edge currents around each Gaussian CP beam. If the system has reflection symmetry around the line connecting the two sites, this tunneling should be real. Otherwise, the tunneling can have a complex phase. As examples, the next-nearest-neighbor tunnelings for the $\theta=0,\pi/4$ case and the $\theta=\tan^{-1}(1/2)$ case are presented. (b) The Chern number of the lowest positive energy band $\mathcal{C}_1$ is shown as a phase diagram between the shearing angle $\theta$ and the superlattice size $\chi$. (c) Energy spectra for $\chi=2.4$ at selected angles are shown where the colors of low-lying bands represent the Chern numbers. The particle density in units of $l^{-2}$ is plotted for angles before and after the phase transition.}\label{F3}
\end{figure}
To further investigate the role of the superlattice geometry, let us shear the square superlattice by an angle $\theta$ so that $\mathbf{L}_1=l\mathbf{\hat{x}}$ and $\mathbf{L}_2=l(\tan\theta\mathbf{\hat{x}}+\mathbf{\hat{y}})$. From the perspective of the Floquet Chern insulator created by uniform CP light, in the large-superlattice limit where the tight-binding description is valid, we may interpret electron tunneling between superlattice sites in terms of the chiral currents flowing around the strongly irradiated regions. That is, the paths along which these chiral currents flow give the major contribution to the path integral from one superlattice site to another. In this viewpoint, two superlattice sites can have a \textit{complex} tunneling phase between them if the system has no reflection symmetry along the line connecting the two sites [\Cref{F3a}], which is analogous to Ref.~\cite{hafezi2011robust}. Then we can see that the tunneling terms of the tight-binding model for the square lattice ($\theta=0$ and $\theta=\pi/4$) are real. At angles close to $\theta=\tan^{-1}(1/2)$, the localized electrons form a hexagonal superlattice under a uniform strain and can have complex tunneling phases between next-nearest neighbors. We can then construct a tight-binding model for the lowest positive band similar to the Haldane model \cite{PhysRevLett.61.2015}, as explained in Appendix C. As in the Haldane model, a complex phase in the next-nearest-neighbor tunneling makes $\mathcal{C}_1$ nonzero at this angle. With these considerations, we can predict successive topological phase transitions as we increase $\theta$ from 0 to $\pi/4$.
We obtain the phase diagram numerically in \Cref{F3b} by calculating the Chern number of the lowest positive-energy band for each value of $\chi$ and $\theta$. As predicted, we observe successive topological phase transitions for $\chi$ larger than a certain value, which corresponds to the phase transition point described in \Cref{F2}. Another salient feature is that the $\mathcal{C}_1=1$ regime extends sharply toward the angle $\theta=\tan^{-1}(1/2)$, at which the $\chi$ range for $\mathcal{C}_1=1$ diverges. This can be explained by combining the fact that the tunneling strengths decrease exponentially with the distance between the superlattice points with the fact that the Dirac cones can disappear and a topologically trivial gap can open under extreme strain (see Appendix C). We can also see that the topological phase transition is accompanied by a gap closing and a band inversion, as shown in the particle density plots [\Cref{F3c}].
\section{Hexagonal lattice to kagome lattice}
To engineer favorable features such as flatter bands, we can create an even more complicated superlattice by superposing different kinds of lattices. For instance, we consider the superposition of the triangular lattice beam $A_\text{tri}(\mathbf{r})$ and the hexagonal lattice beam $rA_\text{hex}(\mathbf{r})$, where $r$ is the amplitude ratio of the two lattices (\Cref{F4}). When the contribution from the hexagonal lattice beam is negligible, the localized electrons form a hexagonal superlattice and the lowest part of the positive-energy spectrum can be explained by a two-band model. As $r$ increases, electrons are confined to a kagome superlattice \cite{[{In cold-atom systems, optical kagome lattice has been implemented for linearly-polarized lasers as in }]PhysRevLett.108.045305} and the lowest part of the positive-energy spectrum can be explained by a three-band model including a flat band. Note that small gaps are observed in both the two-band model for the hexagonal superlattice and the three-band model for the kagome superlattice. The gap in the two-band model can be explained by the Haldane model with complex phases in the next-nearest-neighbor tunneling, as shown in \Cref{F3a}. The gap in the kagome lattice comes from the complex phase in the nearest-neighbor tunneling \cite{PhysRevA.82.043811,PhysRevB.93.144307}. At $r=0$, we can see that the third band is nearly flat while being well gapped from the other bands. This flat band can potentially be used to stabilize strongly correlated phases.
\begin{figure}[h]
\centering
\includegraphics[width=\linewidth]{fig4}\subfigure{\label{F4a}}\subfigure{\label{F4b}}
\caption{Superposition of the triangular lattice beam, $A_\text{tri}(\mathbf{r})$, and the hexagonal lattice beam $rA_\text{hex}(\mathbf{r})$. As we increase the ratio $r$, we effectively change the electron superlattice from the hexagonal lattice to the kagome lattice. Energy spectra for $\chi=5.4$ at selected values of $r$ are shown, where the colors of low-lying bands represent the Chern numbers. By zooming in on the spectrum, we can see the gaps in the two-band model and the three-band model in the lowest part of the spectrum.}\label{F4}
\end{figure}
\section{Experimental feasibility}
For the numerical calculations, we set $\mathcal{A}_0=0.006(ea)^{-1}$, $\omega=0.06t$, and $w/l=0.3$ in \Cref{F2} and \Cref{F3}. With the typical values of $t=3\text{ eV}$ and $a=0.142\text{ nm}$ for monolayer graphene, these parameters of the laser field correspond to a field amplitude of $7.6\times 10^{6}\text{ V/m}$, a beam frequency of $43.5\text{ THz}$, and a beam spot size of $0.1\,\mu\text{m}$ (FWHM). This is similar to the beam frequency in a recent experiment \cite{mciver2019light}, while the peak intensity is about $4\%$ of that used in the same experiment. With these parameters, the typical size of the gap ($\Delta_b$ in \Cref{F2}) is $4\text{ meV}$. \Cref{F4} uses $\mathcal{A}_0=0.0015(ea)^{-1}$ and $\omega=0.06t$, while $w/l=0.3$ and $w/l=0.15$ for $A_\text{tri}(\mathbf{r})$ and $A_\text{hex}(\mathbf{r})$, respectively. Finally, we remark that due to the injection of photons into the system, heating effects could eventually destroy the nontrivial topological behavior that is initially formed. Therefore, we only consider the prethermal regime where electron-electron and electron-phonon scatterings can be ignored \cite{HDehghani2015Out}. In the past few years, the existence of this transient regime has been convincingly demonstrated in several pump-probe experiments \cite{wang2013observation, mahmood2016selective, mciver2019light}.
\section{Outlook}
By considering Coulomb interaction in our nearly flat and topologically nontrivial bands, one could potentially induce strongly correlated phases such as fractional Chern insulators \cite{PhysRevLett.103.196803,PhysRevB.83.195139,maciejko2015fractionalized,spanton2018observation}, superconductors \cite{cao2018unconventional,yankowitz2019tuning,PhysRevX.8.031089,PhysRevLett.121.257001,PhysRevLett.122.257002,PhysRevB.94.214501,*flatsc,martin2019moire}, or magnetic phases \cite{PhysRevLett.119.107201,PhysRevB.98.075109,sharpe2019emergent,PhysRevLett.120.266402}. Moreover, by irradiating with frequencies comparable to the bare tunneling strength, instead of the high-frequency regime considered here, higher-order terms become relevant \cite{PhysRevB.93.144307}, and therefore, one can induce a wider class of structures. While we focus on the Dirac semimetal system in this paper, our scheme can also be applied to other 2D materials such as semiconductors \cite{panna2019ultrafast}. Our approach can be combined with other methods, such as surface acoustic waves in a solid-state platform \cite{PhysRevX.7.041019}, for trapping, cooling, and controlling charged particles, and for simulation of quantum many-body systems. Finally, these ideas could be used to engineer a new class of dielectric materials for potential applications in optical devices \cite{Min:08}.
\section*{Acknowledgments}
I.M. was supported by the Materials Sciences and Engineering Division, Basic Energy Sciences, Office of Science, U.S. Department of Energy. H.A. acknowledges support from JSPS KAKENHI Grant No. 17H06138, and CREST (Core Research for Evolutionary Science and Technology) "Topology" project from JST. H.K., H.D., and M.H. were supported by Grants No. AFOSR FA9550-16-1-0323 and No. FA9550-19-1-0399, Grant No. ARO W911NF2010232, and the NSF Physics Frontier Center at the Joint Quantum Institute. I.M., H.D., and M.H. are thankful for the hospitality of the Kavli Institute for Theoretical Physics, supported by Grant No. NSF PHY-1748958. The authors thank Zhi-Cheng Yang for the insightful discussion.
\section{Introduction}
\label{s:Introduction}
In ultra-relativistic heavy-ion collisions a new state of matter known as the Quark-Gluon Plasma (QGP) is produced. A key observable in the study of the QGP is the azimuthal anisotropy in particle production with respect to the collision symmetry planes, $\Psi_n$ \cite{Ollitrault:1992bk, Voloshin:2008dg}. The anisotropies are described by coefficients, $v_n$, in a Fourier decomposition of the azimuthal yields with respect to the corresponding $\Psi_n$. Anisotropic flow harmonics are calculated as an average over all particles, $v_n = \left \langle \cos \left [ n (\varphi - \Psi_n) \right ] \right \rangle$, where $\varphi$ is the azimuthal angle of a particle.
For many years only the first and second symmetry planes were considered to be of importance. Recent developments have shown that higher harmonics are present and provide important information about the QGP \cite{Alver:2010gr}. Nowadays the flow harmonics $v_1-v_6$ are all being reported \cite{ALICE:2011ab,ATLAS:2012at}, and they can help set tighter limits on the shear viscosity, $\eta/s$, of the QGP.
The elliptic flow coefficient, $v_2$, has previously been measured over a wide pseudorapidity-range, $\eta$, in Au-Au collisions over about an order of magnitude in collision energy ($\sqrt{s_{\rm NN}} = 19.6 - 200$ GeV) \cite{Back:2004zg}. $v_2$ was found to be independent of collision energy when observed in the rest frame of one of the colliding nuclei, an effect known as longitudinal scaling.
Here we report on elliptic flow ($v_2$) and triangular flow ($v_3$) as measured over a wide pseudorapidity range, $-3.75 < \eta < 5$, in Pb-Pb collisions at $\sqrt{s_{\rm NN}} = 2.76$ TeV and compare to models and previous measurements at RHIC. Elliptic flow fluctuations have recently been found to be independent of $p_{\rm T}$ up to very high transverse momenta \cite{Abelev:2012di}, and here we report on the elliptic flow fluctuations vs. rapidity.
\section{Analysis details}
\label{s:Analysis}
The ALICE \cite{Aamodt:2008zz} minimum bias trigger was used for the event selection. Only events with a valid centrality estimate and primary vertex $|v_z| < 10$ cm were accepted. In total about $10.6$ million events from the 2010 data taking period were analyzed. The centrality is estimated using the VZERO detector, a pair of scintillator arrays covering $-3.7 < \eta < -1.7$ and $2.8 < \eta < 5.1$ in pseudorapidity.
The flow analysis is done on charged-particle hits in the Forward Multiplicity Detector, a silicon strip detector which covers $-3.4 < \eta < -1.7$ and $1.7 < \eta < 5$. At mid-rapidity, clusters from the innermost layer of the Silicon Pixel Detector, the inner part of the ALICE Inner Tracking System, are used. Using hits and clusters for the analysis makes it possible to measure flow down to almost zero $p_{\rm T}$.
The analysis is based on analytical calculations for two- and four-particle cumulants \cite{Bilandzic:2010jr}, written as $v_n\{2\}$ and $v_n\{4\}$ respectively. The two-particle cumulant is enhanced by flow fluctuations, $\sigma_{v_n}$, and non-flow, $\delta_n$, such that $v^2_n\{2\} = \langle v_n \rangle^2 + \sigma^2_{v_n} + \delta_n$ and the four-particle cumulant has a negative contribution from flow fluctuations and is unaffected by non-flow, $v^2_n\{4\} = \langle v_n \rangle^2 - \sigma^2_{v_n}$. For this analysis the contribution by non-flow is subtracted using azimuthal correlations, $v_n\{2\}^{pp}$, from pp collisions at $\sqrt{s}=2.76$ TeV: $\delta_n^{cent}=v^2_n\{2\}^{pp}\cdot\frac{M^{pp}}{M^{cent}}$, where $M$ is the multiplicity. The different sensitivity to flow fluctuations can be used to estimate the flow fluctuations as \cite{Voloshin:2008dg}:
\begin{equation}
\frac{\sigma_{v_n}}{\langle v_n \rangle} \approx \sqrt{\frac{v^2_n\{2\}-v^2_n\{4\}}{v^2_n\{2\}+v^2_n\{4\}}}
\end{equation}
\begin{figure}[ht]
\hspace{-1.3cm}
\begin{minipage}[b]{0.55\linewidth}
\centering
\includegraphics[width=1.1\textwidth]{vn_eta.pdf}
\caption{(color online) $v_2\{2\}$ and $v_3\{2\}$ vs. $\eta$ for very central events $(0-5\%)$ and more peripheral $(50-60\%)$.}
\label{f.flow1a}
\end{minipage}
\hspace{0.40cm}
\begin{minipage}[b]{0.55\linewidth}
\centering
\includegraphics[width=1.1\textwidth]{v2_eta_rhicComp.pdf}
\caption{(color online) $v_2$ comparison with PHOBOS\cite{Back:2004mh} and CMS\cite{Chatrchyan:2012ta} for $25-50\%$ central events.}
\label{f.flow1b}
\end{minipage}
\end{figure}
\section{Results}
\label{s:Results}
Results from two-particle cumulant calculations of elliptic and triangular flow for very central and more peripheral events (Fig.~\ref{f.flow1a}) clearly show that $v_2$ has a strong centrality dependence for all rapidities, while $v_3$ has a weak centrality dependence.
It has previously been shown that there is a $20-30\%$ increase in $v_2$ going from the top RHIC energy to the LHC at mid-rapidity \cite{Aamodt:2010pa}; a comparison over a wide rapidity range (Fig.~\ref{f.flow1b}) shows that at forward rapidities the increase can be larger than $30\%$.
$v_2$ plotted as a function of pseudorapidity measured from the beam rapidity (Fig.~\ref{f.flow2a}) exhibits the longitudinal scaling previously observed at RHIC \cite{Back:2004zg} and for directed flow at the LHC \cite{Selyuzhenkov:2011zj}.
The AMPT model \cite{Lin:2004en} (Fig.~\ref{f.flow2b}) with parameters tuned to mid-rapidity LHC results for semi-central events \cite{Xu:2011fi} gives a good description of $v_2\{2\}$, $v_2\{4\}$ and $v_3\{2\}$ for all rapidities, with a slight underestimate of $v_2$ at mid-rapidity for the most peripheral events and a slight overestimate for all rapidities for the most central events.
\begin{figure}[ht]
\begin{minipage}[b]{0.55\linewidth}
\hspace{-1.6cm}
\centering
\includegraphics[width=1.1\textwidth]{v2_eta_longScal.pdf}
\caption{(color online) Longitudinal scaling of $v_2$ over two orders of magnitude in collision energy, with data from PHOBOS\cite{Back:2004zg}.}
\label{f.flow2a}
\end{minipage}
\hspace{0.30cm}
\begin{minipage}[b]{0.55\linewidth}
\hspace{-1.6cm}
\centering
\includegraphics[width=1.1\textwidth]{vn_cent.pdf}
\caption{(color online) $v_2$ and $v_3$ vs. centrality compared to AMPT with parameters tuned to LHC mid-rapidity results.}
\label{f.flow2b}
\end{minipage}
\end{figure}
A comparison of $v_2\{2\}$ and $v_2\{4\}$ is shown in Fig.~\ref{f.fluca}; the shift caused by fluctuations is clearly seen. In the bottom panel the flow fluctuations are estimated; they are found to have a strong centrality dependence, but within the errors no rapidity dependence is observed.
The centrality dependence of the flow fluctuations (Fig.~\ref{f.flucb}) is observed to be strongest for the most central events. This is consistent with earlier results at mid-rapidity \cite{Collaboration:2011yba}; here we show the same for all $|\eta| < 5$.
\begin{figure}[ht]
\hspace{-1.3cm}
\begin{minipage}[b]{0.55\linewidth}
\centering
\includegraphics[width=1.1\textwidth]{v2_eta_fluc.pdf}
\caption{(color online) $v_2\{2\}$ and $v_2\{4\}$ vs. $\eta$ and elliptic flow fluctuations, lines show the statistical uncertainty, bands the systematic uncertainty.}
\label{f.fluca}
\end{minipage}
\hspace{0.50cm}
\begin{minipage}[b]{0.55\linewidth}
\centering
\includegraphics[width=1.1\textwidth]{v2s_cent.pdf}
\caption{(color online) Elliptic flow fluctuations vs. centrality at mid-rapidity and forward-rapidity.}
\label{f.flucb}
\vspace{1cm}
\end{minipage}
\end{figure}
\section{Summary}
We have reported results on the rapidity dependence of elliptic and triangular flow. Elliptic flow was found to have a strong centrality dependence at all rapidities, while triangular flow was found to depend only weakly on centrality. Comparing with RHIC measurements, elliptic flow was found to exhibit longitudinal scaling up to LHC energies. The longitudinal scaling is now observed over two orders of magnitude in collision energy.
The AMPT model with parameters tuned to mid-rapidity was found to give a reasonable description at all rapidities, except in the most central and most peripheral events, where it overestimates and underestimates the elliptic flow, respectively.
The elliptic flow fluctuations were also studied, and were found to be independent of rapidity within the uncertainties. The centrality dependence for all rapidities was found to be consistent with results previously reported for mid-rapidity, with the central events having the largest flow fluctuations.
\section{The space of functions of bounded variation}\label{sec:int}
A \emph{countable vector space} $A$ over a countable field $K$ consists of a set $\lvert A \rvert\subseteq \Nat$ and mappings $+ \colon \lvert A \rvert \times \lvert A \rvert \longto \lvert A \rvert$, $\cdot \colon K \times \lvert A \rvert \longto \lvert A \rvert$, and a distinguished element $0\in \lvert A \rvert$, such that $A,+,\cdot,0$ satisfies the usual vector space axioms.
A (code for a) \emph{separable Banach space} $B$ consists of a countable vector space $A$ over $\Rat$ together with a function $\lVert \cdot \rVert \colon A \longto \Real$ satisfying $\lVert q \cdot a \rVert = \lvert q \rvert \cdot \lVert a \rVert$ and $\lVert a + b \rVert \le \lVert a \rVert + \lVert b \rVert$ for all $q\in \Rat$, $a,b\in A$. A point in $B$ is defined to be a sequence of elements $(a_k)_k$ in $A$ such that $\lVert a_k - a_{k+1}\rVert \le 2^{-k}$.
Addition and multiplication on $B$ are defined to be the continuous extensions of $+$, $\cdot$ from $A$ to $B$.
The space $L_1 := L_1([0,1])$ will be represented by the $\Rat$-vector space of rational polynomials $\Rat[x]$ together with the norm ${\lVert p \rVert}_{1} := \int_0^1 \lvert p(x) \rvert\, dx$. Since the rational polynomials are dense in the usual space $L_1$, this defines (a space isomorphic to) the space usually used (provably in a suitable higher-order system in which the textbook definition of $L_1$ can be formalized). See Example~II.10.4, Exercise~IV.2.15 and Chapter~X.1 in \cite{sS09}.
\subsection {Bounded variation}
The \emph{variation} of a function $f\colon [0,1]\longto \Real$ is defined to be
\begin{equation}\label{eq:var}
V(f) := \sup_{0\le t_1 < \dots < t_n \le 1} \sum_{i=1}^{n-1} \abs{f(t_i) - f(t_{i+1})}
.\end{equation}
For an $L_1$-equivalence class of functions $f\in L_1$, the variation is defined to be the infimum over all elements, i.e.,
\begin{equation}\label{eq:varl}
V_{L_1}(f) := \inf \left\{\, V(g) \sizeMid g\colon [0,1]\to \Real \text{ and $g=f$ almost everywhere} \,\right\}
.\end{equation}
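As an informal numerical aside (independent of the formal systems considered in this paper), the supremum in the definition of $V(f)$ can be approximated by partition sums over a uniform grid. For a piecewise monotone $f$ whose extrema lie on the grid, the sum telescopes to the exact value $V(f)=\int_0^1 \lvert f'(x)\rvert\, dx$.

```python
import math

def partition_sum(f, n):
    # partition sum over the uniform grid t_i = i/n; a lower bound for V(f)
    ts = [i / n for i in range(n + 1)]
    return sum(abs(f(ts[i + 1]) - f(ts[i])) for i in range(n))

f = lambda x: math.sin(4 * math.pi * x)
# f is piecewise monotone with all extrema at multiples of 1/8, so for n
# divisible by 8 the partition sum telescopes to V(f) = int_0^1 |f'| dx = 8
v = partition_sum(f, 8000)
```

For functions of unbounded variation (e.g. $x\mapsto x\sin(1/x)$ near $0$) the same partition sums diverge as the grid is refined, which is the numerical shadow of $V(f)=\infty$.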
The $L_1$\nobreakdash-functions of bounded variation form a subspace of $L_1$ with the following norm
\begin{equation}\notag
{\lVert f \rVert}_{BV} := {\lVert f \rVert}_{1} + V_{L_1}(f)
.\end{equation}
However, it is not possible to code this space as a separable Banach space, as we did for $L_1$, since the variation $V$ is difficult to compute (see \prettyref{pro:bccduleq1eq} below) and since this space is not separable in this norm. (To see this take for instance the characteristic functions $\chi_{[0,u]}(x)$ of the intervals $[0,u]$. It is clear that these functions belong to $BV$. For $u,w\in [0,1]$ with $u \neq w$ the function $\chi_{[0,u]} - \chi_{[0,w]}$ contains a bump of height $1$, therefore ${\lVert \chi_{[0,u]} - \chi_{[0,w]}\rVert}_{BV} \ge 2$. Thus, these functions form a set of the size of the continuum which cannot be approximated by countably many functions.)
We will define the space $BV$ to be a subspace of $L_1$.
\begin{definition}[$BV$, \ls{RCA_0}]\label{def:bv}
The space $BV:=BV([0,1])$ is defined like the space $L_1([0,1])$ with the following exception.
A point in $BV$ is a sequence $(p_k)_k\subseteq \Rat[x]$ together with a rational number $v\in \Rat$, such that
\begin{itemize}
\item ${\lVert p_k - p_{k+1} \rVert}_{1} \le 2^{-k}$, and
\item $\int_0^1 \lvert p_k'(x) \rvert \, dx \le v$.
\end{itemize}
The vector space operations are defined pointwise for $p_k$ and $v$. (For scalar multiplication one chooses a suitable rational upper bound for the new $v$.)
The parameter $v$ will be called the \emph{bound on the variation of $f$}.
\end{definition}
This definition is justified by Propositions \ref{pro:jus1} and \ref{pro:jus2} below.
For later use we will collect the following lemma.
\begin{lemma}[\ls{RCA_0}]\label{lem:l1bv}
Let $(f_n)_n\subseteq BV$ be a sequence converging in $L_1$ at a fixed rate to a function $f\in L_1$, i.e., ${\lVert f_n-f \rVert}_1 \le 2^{-n}$.
If the bounds of variations $v_n$ for $f_n$ are uniformly bounded by a $v$, then $f\in BV$.
\end{lemma}
\begin{proof}
Let $(p_{n,k})_k$ be the rational polynomials coding $f_n$. One has ${\lVert p_{k+1,k+1} - f\rVert}_1 \le {\lVert p_{k+1,k+1} - f_{k+1}\rVert}_1 + {\lVert f_{k+1} - f\rVert}_1 \le 2^{-k}$. Thus, $(p_{k+1,k+1})_k, v$ is a code for $f$ in the sense of \prettyref{def:bv}.
\end{proof}
For working with functions of $BV$ it will be handy to use mollifiers as defined below, since one can use them to smoothly approximate characteristic functions without increasing the variation.
\begin{definition}[Mollifier, \ls{RCA_0}]
Let
\[\eta(x) :=
\begin{cases}
c \cdot \exp\left(\frac{1}{x^2-1}\right) & \text{if }\lvert x \rvert < 1\text{,} \\
0 & \text{otherwise,}
\end{cases}
\quad
\text{where }c:= \left( \int_{-1}^1 \exp\left(\frac{1}{x^2-1}\right) \, dx\right)^{-1}.
\]
The function $\eta$ is called a \emph{mollifier}.
It is easy to see that $\eta$ is infinitely often differentiable provably in $\ls{RCA_0}$. By definition $\int_{-1}^1 \eta \, dx = 1$.
Define $\eta_\epsilon(x) := \frac{1}{\epsilon} \eta\left(\frac{x}{\epsilon}\right)$.
We have that the support of $\eta_\epsilon$ is contained in $B(0,\epsilon)=\{ x\in \Real \mid \abs{x} < \epsilon \}$ and that $\int_{-1}^1 \eta_\epsilon \, dx = 1$.
\end{definition}
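The defining properties of the mollifier can also be checked numerically. The following Python sketch is purely illustrative and not part of the formal development; the midpoint quadrature and the step counts are ad hoc choices. It computes the normalizing constant $c$ and verifies that $\eta_\epsilon$ has total mass $1$ at several scales.

```python
import math

def bump(x):
    # exp(1/(x^2 - 1)) on (-1, 1), zero elsewhere
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

def integrate(f, a, b, n=20000):
    # midpoint rule; accurate enough for these integrands
    h = (b - a) / n
    return h * sum(f(a + (i + 0.5) * h) for i in range(n))

C = 1.0 / integrate(bump, -1.0, 1.0)   # the normalizing constant c

def eta_eps(x, eps):
    # eta_eps(x) = (1/eps) * eta(x/eps); support contained in (-eps, eps)
    return C * bump(x / eps) / eps

# each line: eps and the total mass of eta_eps (close to 1.0)
for eps in (1.0, 0.5, 0.1):
    print(eps, round(integrate(lambda x: eta_eps(x, eps), -eps, eps), 6))
```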
The integral of this mollifier can be used to smoothly approximate characteristic functions of intervals. For instance
\begin{equation}\label{eq:chiappr}
x\longmapsto \int_{-1}^x \eta_\epsilon\left(y-\tfrac{1}{4}\right) - \eta_\epsilon\left(y-\tfrac{3}{4}\right)\, dy
\end{equation}
approximates $\chi_{\left[\frac{1}{4},\frac{3}{4}\right]}$ in $L_1$, see \prettyref{fig:chiappr}. Since the approximating function does not oscillate, its variation is not bigger than the variation of the approximated function.
\begin{figure}
\centering
\begin{tikzpicture}[xscale=5, yscale=3]
\draw[color=blue] plot[id=mol1,smooth] file{bv.mol1.table};
\draw[color=blue] (0.82,0.25) node[right] {\eqref{eq:chiappr} with $\epsilon=0.2$};
\draw[color=green!50!black] plot[id=mol2,smooth] file{bv.mol2.table};
\draw[color=green!50!black] (0.75,0.7) node[right] {\eqref{eq:chiappr} with $\epsilon=0.1$};
\draw (0.25,1)--(0.75,1);
\draw[dashed] (0.25,1)--(0.25,0) node[below] {$\tfrac{1}{4}$};
\draw[dashed] (0.75,1)--(0.75,0) node[below] {$\tfrac{3}{4}$};
\draw[->] (0,0)--(1.0,0) node[right] {$x$};
\draw[->] (0,0)--(0,1.1);
\end{tikzpicture}
\caption{Approximation of $\chi_{[\frac{1}{4},\frac{3}{4}]}$.}\label{fig:chiappr}
\end{figure}
The integral of such a mollifier $x\longmapsto \int_0^x \eta_\epsilon(y-z)\, dy$ is contained in $BV$. To see this let $(q_k)_k \subseteq \Rat[x]$ be a sequence approximating $\eta_\epsilon(x-z)$ in $L_1$, i.e.
\[
{\lVert q_k - \eta_\epsilon(x-z) \rVert}_{1} \le 2^{-k}
.\]
Since ${\lVert \eta_\epsilon(x-z) \rVert}_{1} \le 1$ we have that ${\lVert q_k \rVert}_1 \le 2$.
Integrating $q_k$ we obtain again a sequence of rational polynomials $p_k(x) = \int_0^x q_k(y) \, dy$. By definition ${\left\lVert p_k- \int_0^x \eta_\epsilon(y-z)\, dy\right\rVert}_{1} \le 2^{-k}$. Thus $(p_k)_k$, $v=2$ is a code for the integral of the mollifier.
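The approximation \eqref{eq:chiappr} can also be checked numerically. The following Python sketch is illustrative only (the grid sizes and the quadrature are ad hoc choices): it tabulates the running integral of $\eta_\epsilon(y-\frac{1}{4})-\eta_\epsilon(y-\frac{3}{4})$ for $\epsilon = 0.1$ and confirms a small $L_1$-distance to $\chi_{[\frac{1}{4},\frac{3}{4}]}$, while the variation over the grid stays at about $2$, the variation of $\chi_{[\frac{1}{4},\frac{3}{4}]}$ itself.

```python
import math

def bump(x):
    return math.exp(1.0 / (x * x - 1.0)) if abs(x) < 1 else 0.0

# normalizing constant of the mollifier, by midpoint quadrature
C = 1.0 / (2.0 / 4000 * sum(bump(-1.0 + (i + 0.5) * 2.0 / 4000)
                            for i in range(4000)))

def eta_eps(x, eps):
    return C * bump(x / eps) / eps

# tabulate the integrand of the approximation and its running integral
N, h = 4000, 1.0 / 4000
dens = [eta_eps((i + 0.5) * h - 0.25, 0.1) - eta_eps((i + 0.5) * h - 0.75, 0.1)
        for i in range(N)]
vals, s = [], 0.0
for d in dens:
    s += d * h
    vals.append(s)      # the smooth approximation of chi_[1/4,3/4]

chi = lambda x: 1.0 if 0.25 <= x <= 0.75 else 0.0
err = sum(abs(vals[i] - chi((i + 0.5) * h)) for i in range(N)) * h
var = sum(abs(vals[i + 1] - vals[i]) for i in range(N - 1))
print(round(err, 3), round(var, 3))   # small L1-error; variation close to 2
```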
\begin{proposition}[{\ls{WWKL_0}}]\label{pro:just1con}
Let $f\colon [0,1]\longto \Real$ be a continuous function. If the variation of $f$ is bounded, i.e., there exists a $v\in \Rat$ such that all sums in \eqref{eq:var} are bounded by $v$, then (the $L_1$-equivalence class of) $f$ belongs to $BV$.
\end{proposition}
For the proof of this proposition we will need the following notation and theorem from \cite{SY12}.
A partition of $[0,1]$ is a finite set $\Delta=\big\{\, 0=x_0 \le \xi_1 \le x_1 \le \dots \le \xi_n \le x_n = 1 \,\big\}$. The mesh of $\Delta$ is $\lvert \Delta\rvert:=\max\{x_k -x_{k-1} \mid 1\le k\le n\}$.
The Riemann sum for $\Delta$ is $S_\Delta(f):= \sum_{k=1}^n f(\xi_k)(x_k-x_{k-1})$.
The limit $\lim_{\lvert \Delta \rvert\to 0} S_\Delta(f)=\int_0^1 f(x)\,dx$ is the Riemann integral.
\begin{definition}
A function $f$ is effectively integrable if there exists an $h\colon \Nat \longto \Nat$ such that for any partitions $\Delta_1,\Delta_2$ and $n\in\Nat$,
\[
\lvert \Delta_1 \rvert < 2^{-h(n)} \AND \lvert \Delta_2 \rvert < 2^{-h(n)} \IMPL \lvert S_{\Delta_1}(f) - S_{\Delta_2}(f) \rvert < 2^{-n+1}
.\]
The function $h$ is called a modulus of integrability for $f$.
\end{definition}
\begin{theorem}[\ls{RCA_0}, \cite{SY12}]\label{thm:uniint}
The following are equivalent:
\begin{enumerate}
\item \lp{WWKL_0},
\item Every bounded, continuous function on $[0,1]$ is effectively integrable.
\end{enumerate}
\end{theorem}
\begin{proof}[Proof of \prettyref{pro:just1con}]
Since the variation of $f$ is bounded, $f$ is bounded. Therefore by \prettyref{thm:uniint} the function $f$ is effectively integrable. In particular, there exists a modulus of integrability $h$.
Let $f_n$ be the following sequence of step functions approximating $f$.
\[
f_n(x) := \sum\nolimits_k \chi_{[k \cdot 2^{-h(n)},(k+1) \cdot 2^{-h(n)})} \cdot f(k \cdot 2^{-h(n)}) \quad \text{where $k$ is such that } x\in \left[ \tfrac{k}{2^{h(n)}}, \tfrac{k+1}{2^{h(n)}}\right)
\]
Since $f_n$ is a finite sum of characteristic functions of intervals, it belongs to $BV$. The variation of $f_n$ is obviously bounded by $v$.
By definition $\lVert f_n - f_{n+1}\rVert_1 < 2^{-n+1}$, thus $(f_n)_n$ converges in $L_1$-norm to an $f\in L_1$.
By \prettyref{lem:l1bv}, $f\in BV$.
\end{proof}
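The step-function approximation $f_n$ used in the proof above can be illustrated numerically. The following Python sketch is informal; the sample function and the plain dyadic meshes (standing in for the meshes $2^{-h(n)}$ given by a modulus of integrability) are ad hoc choices.

```python
import math

f = lambda x: math.sqrt(x) * math.sin(3 * x)   # a sample continuous function

def step_approx(f, h):
    # The step function from the proof: constant with value f(k * 2^-h) on
    # each dyadic interval [k * 2^-h, (k+1) * 2^-h).
    def g(x):
        k = min(int(x * 2 ** h), 2 ** h - 1)
        return f(k / 2 ** h)
    return g

def l1_dist(g1, g2, n=20000):
    # midpoint approximation of the L1-distance on [0,1]
    return sum(abs(g1((i + 0.5) / n) - g2((i + 0.5) / n)) for i in range(n)) / n

dists = [l1_dist(step_approx(f, h), f) for h in (1, 2, 4, 8)]
print([round(d, 4) for d in dists])   # decreasing towards 0
```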
In the following we will use right continuous functions. Such a function $f\colon [0,1]\longto \Real$ will be coded by a sequence of real numbers $(x_q)_{q\in\Rat}$ indexed by rational numbers such that the limit from the right
\[
\lim_{q\searrow x,q\in \Rat} x_q =: f(x)
\]
exists. This definition makes sense in \ls{ACA_0}.
\begin{proposition}[\ls{ACA_0}]\label{pro:jus1}
Let $f\colon [0,1]\longto \Real$ be a right continuous function.
If the variation of $f$ is bounded as in \prettyref{pro:just1con} then (the $L_1$-equivalence class of) $f$ belongs to $BV$.
\end{proposition}
\begin{proof}
We approximate $f$ using the functions $f_n$ given by
\[
f_n(x) := \sum\nolimits_k \chi_{[k \cdot 2^{-n},(k+1) \cdot 2^{-n})} \cdot f(k \cdot 2^{-n}) \quad \text{where $k$ is such that } x\in \left[ \tfrac{k}{2^n}, \tfrac{k+1}{2^n}\right)
\]
As in the proof of \prettyref{pro:just1con} the variation of $f_n$ is bounded by the variation $v$ of $f$. The values of $f_n(x)$ are contained in $[f(0)-v,f(0)+v]$.
The functions $f_n$ converge to $f$ at all points of continuity of $f$. We claim that the points of discontinuity of $f$ form a set of measure $0$.
Indeed, consider the measurable set (in the sense of \cite[Definition~X.1.12]{sS09})
\[
A:= \bigcup_{n\in\Nat} \underbrace{\bigcap_{k\in \Nat} \left\{\, x \sizeMid \max \left(\abs{f(x-2^{-k})-f(x)}, \abs{f(x+2^{-k})-f(x)}\right) > 2^{-n} \,\right\}}_{=:A_n}
.\]
This formula describes the set of points of discontinuity of $f$.
If for some $n$ the set $A_n$ had positive measure, then $A_n$ would be infinite and one could pick more than $2^n \cdot v$ many points of $A_n$, at each of which $f$ oscillates by more than $2^{-n}$; this would contradict the boundedness of the variation by $v$. Thus each $A_n$ has measure $0$, and with this so does $A$.
Therefore, we can apply the dominated convergence theorem (see \cite[Theorem~4.3]{ADR12}) and obtain that $(f_n)_n$ converges in $L_1$ to (the $L_1$-equivalence class of) $f$; by \prettyref{lem:l1bv}, $f\in BV$.
\end{proof}
\begin{lemma}[\ls{RCA_0}]
For a continuous function $f\colon [0,1]\longto \Real$, such that $\abs{f'(x)}$ is effectively integrable, the variation $V(f)$ is bounded by $\int_0^1 \abs{f'(x)} \, dx$.
\end{lemma}
\proof
For two points $t_1,t_2\in [0,1]$ we can estimate
\[
\abs{f(t_1)-f(t_2)} = \abs{\int_{t_1}^{t_2} f'(x)\, dx} \le \int_{t_1}^{t_2} \abs{f'(x)}\, dx
.\]
Therefore,
\begin{align*}
V(f) & = \sup_{0\le t_1 < \dots < t_n \le 1} \sum_{i=1}^{n-1} \abs{ f(t_i) - f(t_{i+1})} \\
& \le \sup_{0\le t_1 < \dots < t_n \le 1} \sum_{i=1}^{n-1} \int_{t_i}^{t_{i+1}} \abs{f'(x)}\, dx
\le \int_0^1 \abs{f'(x)} \, dx\rlap{\hbox to 84 pt{\hfill\qEd}}
.\end{align*}
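Numerically the bound of this lemma looks as follows. The sketch below is illustrative only; the sample function $f(x)=\sin(4\pi x)$, the uniform partitions and the quadrature are ad hoc choices. Every partition sum stays below $\int_0^1 \abs{f'(x)}\,dx = 8$ and approaches it under refinement.

```python
import math

f = lambda x: math.sin(4 * math.pi * x)
abs_df = lambda x: abs(4 * math.pi * math.cos(4 * math.pi * x))

def partition_sum(f, n):
    # One sum of the form appearing in the definition of the variation,
    # over the uniform partition t_i = i/n.
    return sum(abs(f((i + 1) / n) - f(i / n)) for i in range(n))

def integrate(g, n=100000):
    # midpoint rule on [0,1]
    return sum(g((i + 0.5) / n) for i in range(n)) / n

upper = integrate(abs_df)          # equals 8 for this f
for n in (5, 50, 500, 5000):
    s = partition_sum(f, n)
    assert s <= upper + 1e-9       # every partition sum respects the bound
    print(n, round(s, 4))
print(round(upper, 4))
```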
\begin{proposition}[\ls{ACA_0}]\label{pro:jus2}
For each $f\in BV$ there exists a right-continuous function $g$ which is almost everywhere equal to $f$ and with $V(g)<\infty$, or in other words the infimum in \eqref{eq:varl} is bounded.
\end{proposition}
\begin{proof}
Let $(p_k)_k$, $v$ be a code for $f$. By the previous lemma $V(p_k) \le v$.
By \cite[Remark~X.1.11]{sS09} the polynomials $(p_k)_k$ converge to a function $g$ almost everywhere. To be precise there exists an ascending sequence of closed sets ${(C^f_n)}_n$ with measure $1-2^{-n}$ such that $(p_k(x))_k$ converges uniformly on $C^f_n$ for each $n$. Let $M:=\bigcup_n C^f_n$.
It is clear that $(p_k)_k$ converges to $g$ also in $L_1$-norm.
The variation of $g$, with the points $t_i$ in \eqref{eq:var} restricted to $M$, is also bounded by $v$, since $g$ is the pointwise limit of the $p_k$ on $M$.
To obtain the proposition the only thing left to show is how to extend $g$ to a proper function on the full unit interval.
We claim that there exists a subsequence ${(p_{k_j})}_j$ of $(p_k)_k$ such that ${(p_{k_j}(x))}_j$ converges for all $x\in \Rat \cap [0,1]$.
To obtain this subsequence note that $\abs{p_k(x)} \le {\lVert f \rVert}_1 + v=:v'$. Let $q_i$ be an enumeration of $\Rat \cap [0,1]$ and consider for each $k$ the point $(p_k(q_i))_i \in {[-v',v']}^\Nat$.
Now ${[-v',v']}^\Nat$ is compact and $((p_k(q_i))_i)_k$ contains, by the Bolzano-Weierstra{\ss} principle, a convergent subsequence, which also satisfies the claim. See Lemma~III.2.5 and Theorem~III.2.7 of \cite{sS09}.
Thus, we may assume that $\Rat \cap [0,1]\subseteq M$ by passing to a subsequence of $(p_k)$.
Then let $g_+$ be the right continuous extension of $g$, i.e.
\[
g_+(x) :=
\begin{cases}
g(x) & \text{if } x\in M , \\
\lim_{y\searrow x, \, y\in \Rat} g(y) & \text{otherwise.}
\end{cases}
\]
The limit in the second case exists by the boundedness of the variation of $g$: if it did not exist, there would be an $\epsilon$ and an infinite sequence in $M$ oscillating by at least $\epsilon$ at each step, and with this the variation of $g$ would be infinite.
The almost everywhere converging subsequence of $(p_k)_k$ follows from \lp{WWKL} by Remark~X.1.11 of \cite{sS09}. The set $M$ is arithmetic and thus exists provably in \ls{ACA_0}. Also the extension $g_+$ of $g$ can be built using a routine application of the Bolzano-Weierstra{\ss} principle, again provable in \ls{ACA_0}.
\end{proof}
\begin{corollary}[Jordan decomposition, \ls{ACA_0}]\label{cor:jordan}
For each function $f\in BV$ coded by $(p_k)_k,v$ there exists a
measurable set $C$ such that $f$ restricted to $C$ is
non-decreasing, that is, $\liminf p'_k(x) \ge 0$ for almost all
$x\in C$, and $f$ restricted to the complement of $C$ is
non-increasing, that is, $\limsup p'_k(x) \le 0$ for almost all $x$ in the complement of $C$.
\end{corollary}
\proof
Let $g$ be the right-continuous function as in \prettyref{pro:jus2} and let $C$ be the following measurable set
\begin{align*}
C &:= \bigcap_{m\in\Nat} \bigcup_{n\in\Nat} \bigcap_{k>n} \{ x \mid g(x) < g(x+2^{-k}) + 2^{-m} \}
.
\intertext{Since $g$ has bounded variation the complement of $C$ is almost everywhere equal to }
[0,1]\setminus C &= \bigcap_{m\in\Nat} \bigcup_{n\in\Nat} \bigcap_{k>n} \{ x \mid g(x) > g(x+2^{-k}) - 2^{-m} \}
.
\end{align*}
The result follows.\qed
Independently, the Jordan decomposition was investigated by Nies, Yokoyama, et al.\ in \cite{LogicBlog2013}.
\section{Comparison to other spaces}\label{sec:comp}
\subsection{Sobolev space $W^{1,1}$}
Our motivation for representing the space $BV$ in the way we did in \prettyref{def:bv} is that in this way $BV$ lies between $L_1$ and the Sobolev space $W^{1,1}$. We believe that this is the right way to represent this space since $BV$ is in practice almost always used as an intermediate space between $L_1$ and $W^{1,1}$.
Recall that the Sobolev space $W^{1,1}:= W^{1,1}([0,1])$ is the coded separable Banach space over the rational polynomials $\Rat[x]$ together with the following norm
\[
{\lVert p \rVert}_{W^{1,1}} := {\lVert p \rVert}_{1} + {\lVert p' \rVert}_{1}
.\]
From this definition it is obvious that $W^{1,1}$ is a subspace of $BV$.
\begin{proposition}[\ls{RCA_0}]
$
W^{1,1} \subseteq BV \subseteq L_1
$
and all of these inclusions are strict.
\end{proposition}
\begin{proof}
The inclusions are clear. We show only the strictness.
The function
\[
f(x) :=
\begin{cases}
x \cdot \sin\left(\tfrac{2\pi}{x}\right) & \text{if $x>0$,} \\
0 & \text{otherwise,}
\end{cases}
\]
is continuous on $[0,1]$ and therefore contained in $L_1$. However, it has unbounded variation and therefore $f\notin BV$.
A characteristic function of a nontrivial interval, say $\chi_{[\frac{1}{2},1]}$, is contained in $BV$. It is not contained in $W^{1,1}$: its derivative would have to be $0$ almost everywhere and yet account for the jump at $\frac{1}{2}$, which is impossible for an $L_1$-function.
\end{proof}
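The unboundedness of the variation of $f$ above can be made concrete. The following numerical sketch is informal (evaluating at the particular points $x_k=\frac{4}{2k+1}$, where $\sin(2\pi/x_k)=\pm 1$, is our choice): the resulting sums of the form \eqref{eq:var} grow like the harmonic series.

```python
import math

f = lambda x: x * math.sin(2 * math.pi / x) if x > 0 else 0.0

def variation_lower_bound(K):
    # Evaluate f at the points x_k = 4/(2k+1) in (0,1], where sin(2*pi/x_k)
    # is +1 or -1, and form the corresponding partition sum: this is one
    # particular sum from the definition of the variation, hence a lower
    # bound on V(f).
    xs = sorted(4.0 / (2 * k + 1) for k in range(2, K))
    return sum(abs(f(xs[i + 1]) - f(xs[i])) for i in range(len(xs) - 1))

# The lower bounds grow without bound (roughly like the harmonic series):
for K in (10, 100, 1000, 10000):
    print(K, round(variation_lower_bound(K), 2))
```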
\subsection{$BV$ as dual space}
It is well known that the space $BV$ is isomorphic to the dual space of $C([0,1])$, i.e.\ the space of bounded linear functionals defined on the continuous functions on $[0,1]$ with the ${\lVert \cdot \rVert}_\infty$-norm.
Before we can show this we will need some more properties of mollifiers.
\begin{definition}[Mollification of a function, \ls{RCA_0}]
Let $f\colon [0,1]\longto \Real$ be a continuous, effectively integrable function.
We extend $f$ to $[-1,2]$ by setting $f(x) = f(2-x)$ for $x > 1$ and $f(x) = f(-x)$ for $x < 0$.
We define the \emph{mollification of $f$} to be
\begin{equation}\label{eq:defmollification}
f^\epsilon(x) := (f \ast \eta_\epsilon)(x) := \int_{x-\epsilon}^{x+\epsilon} \eta_\epsilon(x-y) f(y) \, dy = \int_{-\epsilon}^{\epsilon} \eta_\epsilon(y)f(x-y) \, dy
\end{equation}
for $x\in [0,1]$ and $0< \epsilon \le 1$.
For a function $f\in L_1$ the mollification is defined in the same way. (The extension of $f$ can be defined pointwise for each $(p_k)_k$ coding $f$.)
\end{definition}
\begin{proposition}[\ls{RCA_0}]
Let $f$ be as above.
\begin{enumerate}[label=(\roman*)]
\item\label{enum:molprop:1} $f^\epsilon$ is infinitely often differentiable.
\item\label{enum:molprop:2} If $f$ is uniformly continuous, then $f^\epsilon \xrightarrow{\epsilon \to 0} f$ uniformly. If $f$ has additionally a modulus of uniform continuity then there exists a modulus of convergence for $f^\epsilon \xrightarrow{\epsilon\to 0} f$.
\end{enumerate}
\end{proposition}
\begin{proof}
\ref{enum:molprop:1}: We show only that $f^\epsilon$ is differentiable.
\begin{equation*}
\frac{f^\epsilon(x+h) - f^\epsilon(x)}{h} = \frac{1}{\epsilon} \int_0^1 \frac{1}{h} \left( \eta\left(\frac{x+h-y}{\epsilon}\right) - \eta\left(\frac{x-y}{\epsilon}\right)\right) f(y) \, dy
\end{equation*}
Now for $h\to 0$ we have that $\frac{1}{h}\left(\eta\big(\frac{x+h-y}{\epsilon}\big) - \eta\big(\frac{x-y}{\epsilon}\big)\right)$ converges uniformly in $y$ to $\frac{d}{dx} \eta\left(\frac{x-y}{\epsilon}\right)$. Therefore one can exchange integration and taking the limit of $h$ and obtains that
\begin{equation}\label{eq:moldif2}
\frac{d}{dx} f^\epsilon(x) = \frac{1}{\epsilon} \int_0^1 \frac{d}{dx}\left(\eta\left(\frac{x-y}{\epsilon}\right)\right) \cdot f(y)\, dy
\end{equation}
exists.
\ref{enum:molprop:2}:
\begin{align*}
\abs{f^\epsilon(x) - f(x)} &= \abs{\int_{x-\epsilon}^{x+\epsilon} \eta_\epsilon(x-y) (f(y)-f(x)) \, dy} \\
& \le \sup_{y\in [x-\epsilon,x+\epsilon]} \abs{f(y)-f(x)}\ \xrightarrow{\epsilon\to 0} 0 \qquad \text{by uniform continuity}.
\end{align*}
Thus from a modulus of uniform continuity one can define a uniform modulus of convergence of $f^\epsilon(x) \xrightarrow{\epsilon \to 0} f(x)$.
\end{proof}
For a code $(p_k)_k, v$ for an $f\in BV$ let $T$ be the following linear functional defined on all $h\in C([0,1])$.
\begin{align}
T(h) & := \lim_{k\to \infty} \int_0^1 h \cdot p_k' \, dx \label{eq:cdinf}
\shortintertext{
Note that $T$ will depend not only on the $L_1$-class of $f$ but also on the specific sequence of rational polynomials. See \prettyref{pro:intpart} below.
We can estimate}
& \abs{T(h)} \le {\lVert h \rVert}_\infty \cdot v .\notag
\end{align}
Thus $T$ is continuous and therefore in the dual $C^*([0,1])$. It is clear that this is provable in \ls{ACA_0}. (For a formal definition of bounded functionals and the dual space, see Definitions~II.10.5 and X.2.3 in \cite{sS09}.)
For the other direction let $T\colon C([0,1]) \longto \Real$ be a linear, continuous functional with $\lVert T\rVert \le v$ for some $v\in \Real$.
We can continuously extend $T$ to functions of the form $\chi_{(y,1]}$ (and linear combinations thereof) by approximating this function using the mollifier, cf.~\eqref{eq:chiappr}.
We claim that the function
\[
m(y) := T(\chi_{(y,1]})
\]
has bounded variation. Indeed for $0\le t_1 < \dots < t_n \le 1$ we have
\begin{align*}
\sum_{i=1}^{n-1} \abs{m(t_{i+1}) - m(t_i)} &= \sum_{i=1}^{n-1} e_i\, {\left(m(t_{i+1}) - m(t_i)\right)} \hphantom{\le v} \text{for suitable }e_i\in \{-1,1\} \\
& = T\left(\sum_{i=1}^{n-1} e_i \, \chi_{(t_i,t_{i+1}]}\right) \\
& \le v \hphantom{= \sum_{i=1}^{n-1} e_i\, {\left(m(t_{i+1}) - m(t_i)\right)}} \text{since the sum is bounded by $1$.}
\end{align*}
It is clear that $m$ is right continuous. Thus, by \prettyref{pro:jus1} we have $m\in BV$. Now let $h\in C([0,1])$ be a uniformly continuous function. The function $h$ can be approximated in ${\lVert \cdot \rVert}_\infty$ by functions of the form
\[
h_n(x) := h\left(\frac{i}{2^n}\right) \quad\text{if } x\in\left[\frac{i}{2^n},\frac{i+1}{2^n}\right)
.\]
(A modulus of convergence can be defined from a modulus of uniform continuity of $h$.)
Then
\begin{align*}
T(h) & = T\left(\lim_{n\to \infty} h_n\right) = \lim_{n\to\infty} T(h_n) \\
& = \lim_{n\to \infty} \sum_i \left[h\!\left(\frac{i}{2^n}\right) \cdot \left(m\!\left(\frac{i+1}{2^n}\right)-m\!\left(\frac{i}{2^n}\right)\right)\right]
\shortintertext{(for a suitable choice of $(p_k)_k$ converging pointwise at all $q\in[0,1]\cap\Rat$, see proof of \prettyref{pro:jus2})}
& = \lim_{n\to\infty} \lim_{k\to\infty}\sum_i \left[h\!\left(\frac{i}{2^n}\right) \cdot \left(p_k\!\left(\frac{i+1}{2^n}\right)-p_k\!\left(\frac{i}{2^n}\right)\right)\right]
\shortintertext{(by uniform convergence in $n$)}
& = \lim_{k\to\infty} \lim_{n\to\infty}\sum_i \left[h\!\left(\frac{i}{2^n}\right) \cdot \left(p_k\!\left(\frac{i+1}{2^n}\right)-p_k\!\left(\frac{i}{2^n}\right)\right)\right] \\
& = \lim_{k\to \infty} \int_0^1 h \cdot p_k' \, dx .
\end{align*}
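The Riemann--Stieltjes sums in the computation above can be tested on a hypothetical concrete instance. In the sketch below the choices $m(y)=y^2$, i.e.\ $dm=2y\,dy$, and $h=\cos$ are ours, made only for illustration; the sums $\sum_i h(i/2^n)\big(m((i+1)/2^n)-m(i/2^n)\big)$ then converge to $\int_0^1 2x\cos x\, dx$.

```python
import math

h = math.cos
m = lambda y: y * y        # hypothetical choice; dm = 2y dy

def stieltjes_sum(h, m, n):
    # sum_i h(i/2^n) * (m((i+1)/2^n) - m(i/2^n)), as in the computation above
    N = 2 ** n
    return sum(h(i / N) * (m((i + 1) / N) - m(i / N)) for i in range(N))

# exact value of the limit: int_0^1 2x cos x dx = 2(cos 1 + sin 1 - 1)
exact = 2 * (math.cos(1) + math.sin(1) - 1)
for n in (2, 6, 10, 14):
    print(n, round(stieltjes_sum(h, m, n), 6))
print(round(exact, 6))
```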
These observations give rise to the following propositions.
\begin{proposition}[\ls{ACA_0}]\label{pro:bccduleq1}
Each (code of an) $f\in BV$ induces a bounded linear functional $T\in C^*([0,1])$ given by~\eqref{eq:cdinf}.
\end{proposition}
\begin{proposition}[\ls{ACA_0}]
Each $T\in C^*([0,1])$ is of the form \eqref{eq:cdinf} for a suitable (code of an) $f\in BV$.
\end{proposition}
We just note that since $h$ can be approximated by infinitely often differentiable functions we may assume that it is differentiable. Then one can use integration by parts on \eqref{eq:cdinf} and obtain that
\[
T(h) = \lim_{k \to \infty} \left( h(1)p_k(1) - h(0)p_k(0) - \int_0^1 h' \cdot p_k \, dx \right)
.\]
Under the assumption that $h(0)=h(1)=0$---this is given for instance if $h\in C_0((0,1))$, that is the space of all uniformly continuous functions with compact support included in $(0,1)$---we get
\[
T(h) = - \lim_{k\to \infty} \int_0^1 h' \cdot p_k \, dx
.\]
This value can be computed from ${\lVert h' \rVert}_\infty$ since ${\lVert p_k - p_{k+1} \rVert}_1 \le 2^{-k}$.
Thus one obtains the following.
\begin{proposition}[\ls{RCA_0}]\label{pro:intpart}
The functional $T(h)$ as in \eqref{eq:cdinf} restricted to $h\in C_0((0,1))$ exists and depends only on the $L_1$-equivalence class of $f$ (and not on its code).
\end{proposition}
Or in other words, in this restricted case one does not need \ls{ACA_0} to get \prettyref{pro:bccduleq1}. The proposition below shows that \ls{ACA_0} is in general necessary.
\begin{proposition}[\ls{RCA_0}]\label{pro:bccduleq1eq}
The statement of \prettyref{pro:bccduleq1} is equivalent to \ls{ACA_0}.
In fact, it suffices to know for each $f\in BV$ the value $\lVert T \rVert$ or $V_{L_1}(f)$ for $T$ as in \eqref{eq:cdinf} to obtain \ls{ACA_0}.
\end{proposition}
\begin{proof}
The right-to-left direction is \prettyref{pro:bccduleq1}. For the other direction consider the $\Pi^0_1$-statement (indexed by $n$)
\[
\Forall{i} \phi(n,i)
.\]
We show that we can build a set $X$ with $n\in X \IFF \Forall{i} \phi(n,i)$.
Let
\[
f_{n,k}(x) :=
\begin{cases}
1- 2 \int_0^x \eta_{2^{-i'-1}}(y) \, dy & \text{if } \Exists{i\le k} \NOT\phi(n,i) \text{ and $i'$ is minimal with $\NOT\phi(n,i')$}, \\
0 & \text{otherwise.}
\end{cases}
\]
Since ${\left\lVert 1- 2\int_0^x \eta_{2^{-i'-1}}(y) \, dy\right\rVert}_1 < 2^{-i'-1}$ the sequence $(f_{n,k})_k$ forms a Cauchy sequence with rate $2^{-k}$ for each $n$, and the variation is bounded by $1$. By \prettyref{lem:l1bv} the limit $f_n$ of $(f_{n,k})_k$ is contained in $BV$.
Let $T_n$ be the functional corresponding to $f_n$ as in \eqref{eq:cdinf}. Since the function $f_n$ is the constant $0$ function if $\Forall{i} \phi(n,i)$ is true and otherwise $\lambda x . 1- 2 \int_0^x \eta_{2^{-i'-1}}(y) \, dy$ for an $i'$ we get that
\begin{align*}
T_n(\lambda x. 1) = 0 &\IFF \Forall{i} \phi(n,i) \\
T_n(\lambda x. 1) = -1 &\IFF \NOT \Forall{i} \phi(n,i)
\end{align*}
Thus, one can read off from the real number $T_n(\lambda x .1)$ whether $\Forall{i} \phi(n,i)$ is true. To obtain the second statement of the proposition for this particular $n$, note that since $f_n$ is non-increasing we have $\lVert T_n \rVert = V_{L_1}(f_n) = -T_n(\lambda x. 1)$.
To obtain the full result we use a standard Cantor middle-third set construction to embed the Cantor space into the unit interval. See for instance the proof of Theorem~IV.1.2 in \cite{sS09}.
Thus, let $f(x):= \sum_{n=0}^\infty \frac{2 f_n(x)}{3^{n+1}}$ and let $T$ be the corresponding functional.
Then $\Forall{i} \phi(0,i)$ is true if $-T(\lambda x. 1) \in [0,1/3]$ and false if it is in $[2/3,1]$. The statement for $n=1$ is true if $-T(\lambda x. 1) \in [0,1/9] \cup [2/3,7/9]$ and false if it is in $[2/9,1/3]\cup[8/9,1]$, and so on. From this one can easily construct the set $X$.
\end{proof}
\begin{remark}[Weak$^*$ topology]
The space $C^*([0,1])$ is a dual space and, with this, one can define the weak$^*$ topology on it in the usual way. We say a sequence $(T_n)_n\subseteq C^*([0,1])$ converges to $T$ in the \emph{weak$^*$ topology} if{f}
\[
T_n(h) \xrightarrow{n\to \infty} T(h) \quad\text{for all $h\in C([0,1])$}
.
\]
Since $BV$ is isomorphic to $C^*([0,1])$ this induces a topology on $BV$. However, in most cases the following combination with the $L_1$\nobreakdash-topology is used.
We say that a sequence of functions $(f_n)\subseteq BV$ converges in the \emph{weak$^*$ topology} to $f$ if{f} $f_n\xrightarrow{n\to\infty} f$ in $L_1$ and the functionals corresponding to $f_n$ converge in weak$^*$ topology of $C^*([0,1])$. See Definition~3.11 of \cite{AFP00}.
One can show for a sequence $(f_n)_n$ and $f$ in $BV$ that if
\begin{itemize}[label=$-$]
\item $f_n \xrightarrow{n\to\infty} f$ in $L_1$ and
\item the variation of $(f_n)_n$ is uniformly bounded
\end{itemize}
then there exists a subsequence $(f_{g(n)})_n$ converging in the weak$^*$ topology to $f$.
See Proposition~3.13 in \cite{AFP00}.\footnote{Note that the theorem there is stated in a misleading way. The statement should actually read ``\textbf{Proposition 3.13} Let $(u_h) \subset [BV(\omega)]^m$. Then there exists a subsequence $(u_{k(h)})$ converging to $u$ in $[BV(\omega)]^m$ if $(u_h)$ is bounded in $[BV(\omega)]^m$ and $u_h$ converges to $u$ in $[L_1(\omega)]^m$. If $(u_{h})$ converges to $u$ in $[BV(\omega)]^m$ then $u_h$ converges to $u$ in $[L_1(\omega)]^m$ and is bounded in $[BV(\omega)]^m$.''}
This leads to the following.
The representation of $BV$ as given in \prettyref{def:bv} is \emph{consistent} with the weak$^*$ topology in the sense that if a sequence of representations $(r_i)_i\subseteq \Nat^\Nat$ converges in the Baire space then the sequence of represented elements $f_{r_i}$ contains a weak$^*$-\hspace{0pt}converging subsequence. See also \prettyref{lem:l1bv}.
\end{remark}
\subsection{Other representations}
In \cite{vB05} Brattka proposes two different ways to represent elements of non-separable spaces. The first representation essentially codes an element $f$ of a space $X$ as a sequence of countable objects plus the norm ${\lVert f \rVert}_X$. Whereas the second representation just consists of the countable objects plus an upper bound $v$ on the norm. See also \cite{BS05}.
In the case of \prettyref{def:bv} the countable objects are rational polynomials.
The representation we defined in \prettyref{def:bv} is intermediate between those two representations proposed by Brattka because we have an upper bound on the norm of an element $f\in BV$, i.e.~${\Vert f \rVert}_{BV} \le v$, and thus the second representation is reducible to our representation. However, we have $f$ as a full $L_1$ object including its norm; thus our representation is stronger.
Alternatively, we could have added the value of the variation instead of merely an upper bound to the representation of an element of $BV$. Since by \prettyref{pro:bccduleq1eq} going from an upper bound to the right value of $V_{L_1}$ requires \ls{ACA_0}, this representation is too strong in general.
Other ways to represent functions of bounded variation are to take computable functions with a computable variation, see \cite{RZB02}, or as a computable function defined on a countable, dense subset of $[0,1]$, see \cite{HW07,JPW13}.
The first approach is too restrictive since very few functions of bounded variation are computable. The second approach is orthogonal to ours since it defines the function pointwise, whereas we define the function in the $L_1$\nobreakdash-sense. This representation has been successfully used in algorithmic randomness, see \cite{BMNta,jRta}. However, we believe that our approach is more natural since it fits nicely into the Sobolev spaces and easily generalizes to functions defined on $\Real^n$, which is not the case for the pointwise definition.
\section{Helly's selection theorem}\label{sec:hst}
\begin{theorem}[Helly's selection theorem, \lp{HST}, \ls{ACA_0}]\label{thm:helly}
Let $(f_n)_n\subseteq BV$ be a sequence of functions with bounds $v_n$ on the variation.
If
\begin{enumerate}[label=(\roman*)]
\item\label{enum:hst1} ${\left\lVert f_n \right\rVert}_1 \le u$ for a $u\in \Rat$,
\item\label{enum:hst2} $v_n \le v$ for a $v\in \Rat$,
\end{enumerate}
then there exists an $f\in BV$ and a subsequence $f_{g(n)}$ such that $f_{g(n)} \xrightarrow{n\to\infty} f$ in $L_1$ and the variation of $f$ is bounded by $v$.
\end{theorem}
The statement of this theorem will be abbreviated by \lp{HST}.
Originally Helly's selection theorem was formulated for ordinary functions and not $L_1$\nobreakdash-\hspace{0pt}functions. There, \ref{enum:hst1} is usually replaced by the statement that $\abs{f_n(x)} \le u'$ for some $x\in[0,1]$ and a bound $u'$. Note that this implies \ref{enum:hst1}, since \ref{enum:hst2} together with the bound $u'$ yields ${\lVert f_n \rVert}_\infty \le u'+v$ and with this also ${\lVert f_n \rVert}_1 \le u'+v =: u$.
For the proof of \lp{HST} we will need the following lemma.
\begin{lemma}[\ls{RCA_0}]
Let $f\in BV$ and let $v$ be the bound on the variation of $f$. The system \ls{RCA_0} proves for each $\epsilon > 0$ that
\begin{enumerate}[label=(\roman*)]
\item\label{enum:moll2:1} $f^\epsilon \in L_1 $ exists, and that
\item\label{enum:moll2:2} ${\lVert f^\epsilon - f \rVert}_1 \le 2 \epsilon v$.
\end{enumerate}
\end{lemma}
\proof
Let $(p_k)_k$ be the sequence of rational polynomials coding $f$. We have
\begin{align*}
{\left\lVert f^\epsilon - {(p_k)}^\epsilon \right\rVert}_1 &
\le \int_0^1 \int_{-\epsilon}^{\epsilon} \eta_\epsilon(y) \abs{f(x-y) - p_k(x-y)} \, dy \, dx \\
& = \int_{-\epsilon}^{\epsilon} \eta_\epsilon(y) \int_0^1 \abs{f(x-y) - p_k(x-y)} \, dx \, dy \qquad\text{by Fubini}\\
& \le 2 {\lVert f - p_k \rVert}_1 \int _{-\epsilon}^{\epsilon} \eta_\epsilon(y)\, dy = 2 {\lVert f - p_k \rVert}_1 .
\end{align*}
(The $2$ in the above inequality comes from the possible reflection of $f$ in the mollification as we defined it.)
It follows that (a $2^{-k+1}$-good approximation with rational polynomials of) $(p_{k+2})^\epsilon$ is a code for $f^\epsilon\in L_1$.
For \ref{enum:moll2:2} we have for any $k$
\begin{alignat*}{3}
{\lVert f^\epsilon - f \rVert}_1 & \le {\lVert (p_{k})^\epsilon - p_k \rVert}_1 + 2^{-k+2}
\shortintertext{since ${\lVert f-p_k\rVert}_1 < 2^{-k}$ and ${\lVert f^\epsilon-p_k^\epsilon\rVert}_1 < 2^{-k+1}$ by the above estimate. Further, }
{\lVert (p_{k})^\epsilon - p_k \rVert}_1
& \le \int_0^1 \int_{-\epsilon}^{\epsilon} \eta_\epsilon(y) \cdot \abs{p_k(x-y) - p_k(x)} \, dy \, dx & \text{since $\int \eta_\epsilon = 1$}\\
& = \int_{-1}^{1} \eta(y) \int_0^1 \abs{p_k(x-\epsilon y) - p_k(x)} \, dx \, dy & \text{substituting $y\mapsto \epsilon y$, Fubini}
\shortintertext{since $\int_0^1 \abs{p_k(x-\epsilon y) - p_k(x)}\, dx = \int_0^1 \abs{\epsilon \int_{0}^{y} p_k'(x-\epsilon t) \,dt}\, dx \le 2 \epsilon {\lVert p_k' \rVert}_1$ for $y\in [-1,1]$}
& \le \int_{-1}^{1} \eta(y) \cdot 2 \epsilon {\lVert p_k'
\rVert}_1 \, dy = 2 \epsilon {\lVert p_k' \rVert}_1
\;\smash[b]{\le 2 \epsilon v.}\rlap{\hbox to 119 pt{\hfill\qEd}}\medskip
\end{alignat*}
\begin{proof}[Proof of \prettyref{thm:helly}]
For the mollifications $f_n^\epsilon$ of $f_n$ we have by definition \eqref{eq:defmollification} that
\begin{alignat*}{4}
{\left\lVert f_n^\epsilon \right\rVert}_\infty & \le {\lVert f_n \rVert}_1\, {\lVert \eta_\epsilon \rVert}_\infty &&\le \frac{u}{\epsilon} ,
\shortintertext{and by \eqref{eq:moldif2} that}
{\left\lVert \left(f_n^\epsilon\right)' \right\rVert}_\infty & \le {\lVert f_n \rVert}_1\, {\lVert \eta_\epsilon' \rVert}_\infty && \le u \, {\lVert \eta_\epsilon' \rVert}_\infty .
\end{alignat*}
Thus, for each fixed $\epsilon$ the sequence ${(f_n^\epsilon)}_n$ is uniformly bounded and---by the uniform bound on the derivative---equicontinuous.
We instantiate $\epsilon$ with $2^{-i}$ and obtain a sequence of sequences of bounded, equicontinuous functions ${(f_n^{(2^{-i})})}_{n,i}$. By the previous lemma this sequence is contained in $L_1$ and converges as $i\to \infty$ to $f_n$.
By \prettyref{pro:aadiag} below, a variant of the Arzelà-Ascoli theorem, there exists a subsequence $g(n)$, such that for each $k$
\[
\Forall{j\le k} \Forall{n,n'\ge k} {\left\lVert f_{g(n)}^{(2^{-j})} - f_{g(n')}^{(2^{-j})}\right\rVert}_\infty \le 2^{-k} .
\]
Now for $n,n'\ge k$
\begin{align*}
{\left\lVert f_{g(n)} - f_{g(n')} \right\rVert}_1 & \le {\left\lVert f_{g(n)}^{(2^{-k})} - f_{g(n')}^{(2^{-k})} \right\rVert}_1 + {\left\lVert f_{g(n)} - f_{g(n)}^{(2^{-k})} \right\rVert}_1 + {\left\lVert f_{g(n')} - f_{g(n')}^{(2^{-k})} \right\rVert}_1 \\
& \le 2^{-k} + 2\cdot 2 \cdot 2^{-k} v
.
\end{align*}
Thus, $f_{g(n)}$ forms an $L_1$\nobreakdash-converging sequence with rate of convergence $2^{-k} + 2^{-k+2} v$. Hence $\lim f_{g(n)} = f \in L_1$, and by \prettyref{lem:l1bv} we have that $f\in BV$.
\end{proof}
The previous proof was inspired by \cite[Theorem~3.23]{AFP00}.
\begin{proposition}[Diagonalized Arzelà-Ascoli, \ls{ACA_0}]\label{pro:aadiag}
Let $f_{n,j}\colon [0,1] \longto \Real$ be a sequence of sequences of functions. If for each $j$
\begin{enumerate}
\item the sequence $(f_{n,j})_n$ is bounded by $u_j\in \Rat$, and
\item $(f_{n,j})_n$ is uniformly equicontinuous, i.e., there exists a modulus of uniform equicontinuity $\phi_j(l)$, such that
$\Forall{l}\Forall{n}\Forall{x,y\in[0,1]} \big(\abs{x-y} < 2^{-\phi_j(l)}\linebreak[1] \IMPL \abs{f_{n,j}(x)-f_{n,j}(y)} < 2^{-l}\big)$,
\end{enumerate}
then there exists a subsequence $g(n)$ such that for all $j$ the sequence $f_{g(n),j}$ converges uniformly in the sense that
\begin{equation}\label{eq:conv}
\Forall{k} \Forall{j\le k} \Forall{n,n'\ge k} {\left\lVert f_{g(n),j} - f_{g(n'),j} \right\rVert}_\infty < 2^{-k}
.\end{equation}
\end{proposition}
\begin{proof}
By replacing $f_{n,j}$ with $\frac{f_{n,j}}{2 u_j} + \frac{1}{2}$ we may assume that the image of $f_{n,j}$ is contained in the unit interval $[0,1]$.
In \cite[Lemma~3, Corollary~4]{aK14a} we showed that an equicontinuous sequence of functions $h_n\colon [0,1]\longto[0,1]$ converges uniformly if{f} $h_n$ converges pointwise on $\Rat \cap [0,1]$, i.e., for an enumeration $q$ of $\Rat \cap [0,1]$ the sequence ${\big({(h_n(q(i)))}_i\big)}_n \subseteq [0,1]^\Nat$ converges in $[0,1]^\Nat$ with the product norm $d((x_i),(y_i)) = \sum_i 2^{-i} d(x_i,y_i)$.
Moreover, from a rate of convergence and the modulus of uniform equicontinuity one can calculate a rate of convergence of $h_n$ in ${\lVert \cdot \rVert}_\infty$.
With this the Arzelà-Ascoli theorem follows directly from an application of the Bolzano-Weierstra{\ss} principle for the space $[0,1]^\Nat$. For details see \cite{aK14a}.
We can parallelize this process for $f_{n,j}$ by applying the Bolzano-Weierstra{\ss} principle to the sequence
${\big(\big({f_{n,j}(q(i))}\big)_{\langle i,j \rangle}\big)}_n \subseteq [0,1]^\Nat$. With this we obtain a subsequence $g(n)$ such that for each $j$ the sequence $\big({f_{g(n),j}(q(i))}\big)_i\in{[0,1]}^\Nat$ converges at a given rate as $n\to \infty$. By the above considerations we get that $f_{g(n),j}\in C([0,1])$ converges uniformly at a given rate (depending on $\phi_j$). By thinning out the sequence $g(n)$ we get \eqref{eq:conv}.
This proposition is provable in \ls{ACA_0} since the Bolzano-Weierstra{\ss} principle for the space $[0,1]^\Nat$ is instance-wise equivalent to the Bolzano-Weierstra{\ss} principle for $[0,1]$, which is provable in \ls{ACA_0}, see e.g.~\cite{aK14a,sS09}.
\end{proof}
We now come to the reversal.
\begin{theorem}\label{thm:hstaca}
Over \ls{RCA_0}, \lp{HST} is equivalent to \lp{ACA_0}.
\end{theorem}
\begin{proof}
The right to left direction is simply \prettyref{thm:helly}. For the left-to-right direction we will show that \lp{HST} implies the Bolzano-Weierstra{\ss} principle (for $[0,1]$) which is by \cite[Theorem~III.2.2]{sS09} equivalent to \lp{ACA_0}. Let $(x_n)_n\subseteq [0,1]$ be any sequence in the unit interval. Let $f_n(x) := x_n$ be the sequence of corresponding constant functions. It is clear that $f_n\in BV$ and that ${\lVert f_n \rVert}_1 = x_n$. One easily verifies that for any limit $f$ as given by \lp{HST} the value ${\lVert f \rVert}_1$ is a limit point of $x_n$ and thus a solution to \lp{BW}.
\end{proof}
The proofs of \prettyref{thm:helly} and \prettyref{thm:hstaca} actually give more information on the strength of \lp{HST}. They show that for each instance of \lp{HST}, that is, for each sequence of functions $(f_n)_n\subseteq BV$ with a uniform bound on the variation, one can uniformly compute a sequence $(x_n)_n\subseteq [0,1]$ such that from any limit point of this sequence one can compute a solution to \lp{HST} for $f_n$. By the proof of \prettyref{thm:hstaca} the converse direction also holds.
This is summarized in the following corollary.
\begin{corollary}
The principles \lp{HST} and \lp{BW} are instance-wise equivalent, i.e., writing \lpp{HST}{(f_n)} for \lp{HST} restricted to $(f_n)$ and \lpp{BW}{(x_n)} for \lp{BW} restricted to $(x_n)$, then we have the following.
There are codes for Turing machines $e_1,e_2$, such that
\begin{align*}
\ls{RCA_0} &\vdash \Forall{X} \left(\lpp{BW}{\{e_1\}^X} \IMPL \lpp{HST}{X}\right), \\
\ls{RCA_0} &\vdash \Forall{X} \left(\lpp{HST}{\{e_2\}^X} \IMPL \lpp{BW}{X}\right).
\end{align*}
\end{corollary}
This corollary should be compared with Theorem~3.1 of \cite{aK11}, where it is shown that \lp{BW} is instance-wise equivalent to \lp{WKL} for $\Sigma^0_1$\nobreakdash-trees, and Theorem~9 of \cite{aK14a}, where it is shown that \lp{BW} is instance-wise equivalent to the Arzelà-Ascoli theorem.
\begin{remark}[\lp{HST_{weak}}]
In \cite{aK11} we also analyzed the following weaker variant \lp{BW_{weak}} of the Bolzano-Weierstra{\ss} principle, which states that for each sequence $(x_n)\subseteq [0,1]$ there is a subsequence that converges, but possibly without any computable rate of convergence. Since points are coded as sequences converging at the rate $2^{-k}$, the existence of the limit point of the sequence might not be provable. This principle is considerably weaker than \lp{BW}. For instance, \lp{BW_{weak}} implies neither \lp{ACA_0} nor \lp{WKL_0}.
Replacing \lp{BW} in the above proof immediately yields that \lp{BW_{weak}} is instance-wise equivalent to the variant of \lp{HST} which only states the existence of a converging subsequence.
\end{remark}
\subsection{\lp{HST} in the Weihrauch lattice}
Helly's selection theorem can be formulated in the Weihrauch lattice. The above proof yields also a classification in these terms.
We refer the reader to \cite{BG11,BGM12} for an introduction to the Weihrauch lattice.
The functions of the space $L_1$ can be represented by the rational polynomials completed under the $\lVert \cdot \rVert_1$\nobreakdash-norm. We will call this representation $\delta_{L_1}$. With this, Helly's selection theorem is a partial multifunction of the following type.
\[
\mathsf{HST} :\subseteq \left(L_1([0,1]),\delta_{L_1}\right)^\Nat \rightrightarrows \left(L_1([0,1]),\delta_{L_1}\right)
\]
where $\textrm{dom}(\mathsf{HST}) = \left\{\, (f_n) \sizeMid \int_0^1 \abs{f_n'}\, dx \text{ is uniformly bounded}\, \right\}$. The derivative $f_n'$ here is taken in the sense of distributions. We chose this representation since it is customary in the Weihrauch lattice not to add any additional information---like the uniform bound on the variation---to the representation. However, we can easily recover this uniform bound by searching for it. This can be done using the limit $\mathrm{lim}$. Since $v$ is not needed to build the sequence of equicontinuous functions ($f_n^{(2^{-i})}$ in the proof of \prettyref{thm:helly}), the bound on the variation can be computed in parallel to the application of the diagonalized Arzelà-Ascoli theorem (which follows from $\mathsf{BWT_{[0,1]^\Nat}}$). Thus, we get
\[
\mathsf{HST} \le_{\mathrm{{W}}} \mathrm{lim} \times \mathsf{BWT_{[0,1]^\Nat}} \le_{\mathrm{W}} \mathsf{BWT_{[0,1]^\Nat}} \equiv_{\mathrm{W}} \mathsf{BWT}_{\Real}
.\]
Using the reversal we obtain in total $\mathsf{HST} \equiv_{\mathrm{W}} \mathsf{BWT}_{\Real}$.
\section*{Acknowledgments}
I thank Jason Rute for useful discussions, and the two anonymous referees for remarks that led to an improved presentation, to \prettyref{pro:just1con} (in particular its proof in \lp{WWKL_0}), and to \prettyref{pro:bccduleq1eq}.
\bibliographystyle{acm}
\section{Introduction}
In this article, a linear map from differential two-forms to symmetric two-tensors
in two-dimensional Hermitian manifolds
introduced in \cite{gi-u1-prl} is studied.
The map reveals another aspect of the Seiberg-Witten map.
The original Seiberg-Witten map is a map
from noncommutative gauge fields to commutative gauge fields with a background $B$-field \cite{sw-ncft}.
On the other hand, it has been interpreted in \cite{gi-u1-prl,gi-u1-plb,gi-u1-epl}
as a map from a noncommutative gauge field to a K\"ahler metric.\\
The first purpose of this article is to clarify the map in \cite{gi-u1-prl,gi-u1-plb,gi-u1-epl}
which locally maps (anti-)self-dual two-forms on ${\mathbb C}^2$
to Hermitian-Einstein metrics of two-dimensional K\"ahler manifolds.
It might be worth noting that it is enough for these two-forms to be defined
as symplectic structures on a commutative manifold,
although this map was developed in the context of the Seiberg-Witten map in noncommutative gauge theory.
But this correspondence between the self-dual two-form and
Hermitian-Einstein metric can be lifted to noncommutative spaces
after (canonical or deformation) quantization \cite{ly-jkps2018}.\\
The second purpose of this article is to construct explicit examples of Hermitian-Einstein metrics from
noncommutative $U(1)$ instantons.
$U(1)$ instantons on noncommutative ${\mathbb C}^2$
were found by Nekrasov and Schwarz \cite{Nekrasov:1998ss}.
We will construct the two-form from a multi-instanton solution given in \cite{Ishikawa:2001ye}
where the noncommutative $U(1)$ instanton solutions are written in an operator form acting on a Fock space.
The Fock space is defined by Heisenberg algebra generated by noncommutative complex coordinates.
There is a dictionary between the linear operators acting on the Fock space
and usual functions \cite{Sako:2016gqb}.
The dictionary
is applicable to an arbitrary noncommutative K\"ahler manifold
obtained by deformation quantization
with separation of variables \cite{Karabegov1996}.
Concrete Hermitian-Einstein metrics are obtained by translating the noncommutative instantons, given as linear operators, into ordinary functions by means of
the dictionary in \cite{Sako:2016gqb}.\\
The third purpose is to clarify the K\"ahler condition for the metrics derived
from noncommutative $U(1)$ instantons.
Since a K\"ahler manifold is a symplectic manifold too although the reverse
is not necessarily true, one can quantize the K\"ahler manifold by quantizing
a Poisson algebra derived from the underlying symplectic structure of the K\"ahler geometry,
as recently clarified in \cite{ly-jkps2018}.
We will show that the metric derived from noncommutative $U(1)$ instantons
becomes a K\"ahler metric if the underlying Poisson algebra of $U(1)$ instantons or its quantization is an associative algebra. \\
Here we mention some studies related to the subjects of this article.
It has been conjectured in \cite{inov,ideal-sheaf} that
NC $U(1)$ gauge theory is the fundamental description
of K\"ahler gravity at all scales including the Planck scale
and provides a quantum gravity description such as quantum gravitational foams.
Recently it was shown in \cite{hsy-jhep09,review4,hsy-jpcs12} that
the electromagnetism in noncommutative spacetime can be realized
as a theory of gravity and the symplectization of spacetime geometry is the origin of gravity.
Such picture is called emergent gravity and it proposes a candidate of the origin of spacetime.
See also related works in Refs.~\cite{rivelles,review1,review2,review3,yasi-prd10,lee-yang,review6,review7,kawai-ks2016}.
As a bottom-up approach of the emergent gravity formulated in \cite{our-jhep12},
the Eguchi-Hanson metric \cite{eh-plb,eh-ap} in four-dimensional Euclidean gravity
is used to construct anti-self-dual symplectic $U(1)$ gauge fields,
and $U(1)$ gauge fields corresponding to the Nekrasov-Schwarz instanton \cite{Nekrasov:1998ss}
are reproduced by the reverse process \cite{Lee:2012rb}.
As a top-down approach of emergent gravity, the $U(1)$ instanton
found by Braden and Nekrasov \cite{bn-inst} yields a corresponding gravitational metric.\\
The organization of this paper is as follows.
In section \ref{sect2},
some linear algebraic formulas for self-duality are prepared.
In section \ref{Einstein and ASD},
the correspondence between the self-dual two-forms and Hermitian-Einstein metrics is studied.
In section \ref{sect4},
Hermitian-Einstein metrics are explicitly constructed from noncommutative $U(1)$ instantons.
In section \ref{sect5}, the gauge theory realization of the K\"ahler condition is studied.
In section \ref{sect6},
we discuss an outlook of this subject.
Some technical details are left for the appendices.
\section{Self-duality}\label{sect2}
\begin{df}[Hodge star operator]
An automorphism $\star$ on the set of $4\times 4$ antisymmetric matrices is defined as
\[\star\left[\left(
\begin{array}{cccc}
0 &\omega_{12} &\omega_{13} & \omega_{14} \\
-\omega_{12} &0 &\omega_{23} & \omega_{24} \\
-\omega_{13} &-\omega_{23} &0 & \omega_{34} \\
-\omega_{14} & -\omega_{24}&-\omega_{34} & 0
\end{array}
\right) \right]:=\left(
\begin{array}{cccc}
0 &\omega_{34} &-\omega_{24} & \omega_{23} \\
-\omega_{34} &0 &\omega_{14} & -\omega_{13} \\
\omega_{24} &-\omega_{14} &0 & \omega_{12} \\
-\omega_{23} & \omega_{13}&-\omega_{12} & 0
\end{array}
\right),\]
\[\left(i.e., ~ \omega_{12}\leftrightarrow \omega_{34}, ~ \omega_{13}\leftrightarrow \omega_{42}, ~ \omega_{14}\leftrightarrow \omega_{23}\right). \]
In other words, $\star\omega_{kl}$ is defined as
\[\star\omega_{kl}
=\frac{1}{2} \sum_{m,n=1}^4 \varepsilon_{klmn} \omega_{mn},\] where $\varepsilon_{klmn}$ is the Levi-Civita symbol.
The operator $\star$ is called the Hodge star operation in Euclidean $\mathbb{R}^4$.
\end{df}
\begin{df}[(Anti-)self-dual matrix]
A $4\times 4$ antisymmetric matrix $\omega^{\pm}$ is called an (anti-)self-dual matrix if
\begin{align}
\star\omega^{\pm}=\pm \omega^{\pm}. \label{asd}
\end{align}
\end{df}
An (anti-)self-dual matrix $\theta^{\pm}$ is defined as
\begin{align}
\theta^{\pm }:=\left(
\begin{array}{cccc}
0 &-\theta & 0& 0 \\
\theta &0 &0 & 0 \\
0 &0 &0 & \mp \theta \\
0 &0 & \pm \theta & 0
\end{array}
\right) & \label{theta}
\end{align}
where $\theta$ is a real number.
Note that $\omega^{\pm}$ and $\theta^{\mp}$ commute with each other:
\begin{align}
\omega^{\pm}\theta^{\mp}=\theta^{\mp}\omega^{\pm}. \label{omegatheta}
\end{align}
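As an illustrative check of these definitions, the following Python/NumPy sketch (added here for convenience; the values $\theta=0.7$ and $a,b,c$ are arbitrary sample choices, not taken from the text) implements the Hodge star, confirms $\star\theta^{\pm}=\pm\theta^{\pm}$, and verifies the commutation relation (\ref{omegatheta}) for a sample self-dual $\omega^{+}$.

```python
import numpy as np
from itertools import permutations

def hodge_star(w):
    """(star w)_{kl} = (1/2) * sum_{m,n} eps_{klmn} w_{mn} on 4x4 matrices."""
    eps = np.zeros((4, 4, 4, 4))
    for p in permutations(range(4)):
        sign, q = 1, list(p)
        for i in range(4):
            for j in range(i + 1, 4):
                if q[i] > q[j]:
                    sign = -sign
        eps[p] = sign
    return 0.5 * np.einsum('klmn,mn->kl', eps, w)

def theta_matrix(theta, s):
    """theta^{+} for s = +1, theta^{-} for s = -1, as in the display above."""
    return np.array([[0.0, -theta, 0.0, 0.0],
                     [theta, 0.0, 0.0, 0.0],
                     [0.0, 0.0, 0.0, -s * theta],
                     [0.0, 0.0, s * theta, 0.0]])

theta_p, theta_m = theta_matrix(0.7, +1), theta_matrix(0.7, -1)

# generic self-dual matrix: w12 = w34 = a, w13 = -w24 = b, w14 = w23 = c
a, b, c = 1.0, 2.0, 3.0
omega_p = np.array([[0, a, b, c],
                    [-a, 0, c, -b],
                    [-b, -c, 0, a],
                    [-c, b, -a, 0]])

sd_theta = np.allclose(hodge_star(theta_p), theta_p)     # star theta^+ = +theta^+
asd_theta = np.allclose(hodge_star(theta_m), -theta_m)   # star theta^- = -theta^-
sd_omega = np.allclose(hodge_star(omega_p), omega_p)     # omega^+ is self-dual
commute_ok = np.allclose(omega_p @ theta_m, theta_m @ omega_p)
```

The same helper can be reused to test any antisymmetric matrix for (anti-)self-duality.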
\begin{df}[Matrix $g^{\pm}$]Let $E_4$ be the $4\times4$ unit matrix and $\omega^{\pm}$
be a $4\times 4$ (anti-)self-dual matrix.
Assume that $\det\left[E_4-\omega^{\pm}\theta^{\mp} \right]\neq 0$;
then the $4\times 4$ matrix $g^{\pm}$ is defined as
\[g^{\pm}:=2\left(E_4-\omega^{\pm}\theta^{\mp} \right)^{-1}-E_4. \label{g-metric}
\]
\end{df}
\begin{rem}\label{gsym}$g^{\pm}$ is a symmetric matrix because of (\ref{omegatheta}),
and the defining relation can be inverted to
\[\omega^{\pm}
=\left(g^{\pm} - E_4 \right) \left(g^{\pm} + E_4 \right)^{-1}
\left(\theta^\mp \right)^{-1}.\]
\end{rem}
Remark \ref{gsym} allows us to regard $g^{\pm}$
as a metric tensor since it is symmetric and nondegenerate.
\begin{lem}\label{lem2.1} For any $4\times 4$ (anti-)self-dual matrix $\omega^\pm$,
\begin{align}\label{det=1}
\star\omega^\pm =\pm \omega^\pm \Longrightarrow \det\left[ g^\pm \right]=1.
\end{align}
\end{lem}
This lemma is proved by a direct calculation.
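The direct calculation can be reproduced in exact arithmetic. The following Python/SymPy sketch (added for illustration; the rational sample entries are arbitrary) evaluates $g^{+}=2\left(E_4-\omega^{+}\theta^{-}\right)^{-1}-E_4$ and confirms $\det\left[g^{+}\right]=1$, together with the symmetry of $g^{+}$ noted in Remark \ref{gsym}.

```python
import sympy as sp

t = sp.Rational(1, 5)                      # theta
a, b, c = sp.Integer(1), sp.Integer(2), sp.Integer(3)

# self-dual omega^+ : w12 = w34 = a, w13 = -w24 = b, w14 = w23 = c
omega_p = sp.Matrix([[0,  a,  b,  c],
                     [-a, 0,  c, -b],
                     [-b, -c, 0,  a],
                     [-c, b, -a,  0]])
# anti-self-dual theta^- from the display above
theta_m = sp.Matrix([[0, -t, 0, 0],
                     [t,  0, 0, 0],
                     [0,  0, 0, t],
                     [0,  0, -t, 0]])

E4 = sp.eye(4)
g_p = 2 * (E4 - omega_p * theta_m).inv() - E4

det_g = g_p.det()                                      # the lemma asserts this is 1
is_symmetric = all(sp.simplify(e) == 0 for e in (g_p - g_p.T))
```

Since all entries are exact rationals, the determinant comes out as the integer $1$ with no floating-point tolerance involved.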
\begin{df}\label{iskew}The map $\iota_{skew}:\left\{\omega_{\mathbb{C}}\in M_2[\mathbb{C}]
\: |~ \omega_{\mathbb{C}}^\dagger=-\omega_{\mathbb{C}} \right\}\longrightarrow M_4[\mathbb{R}]$
is defined as \[\iota_{skew}\left[\left(
\begin{array}{cc}
\omega_{\mathbb{C}1\bar{1}} & \omega_{\mathbb{C}1\bar{2}} \\
\omega_{\mathbb{C}2\bar{1}} & \omega_{\mathbb{C}2\bar{2}}
\end{array}
\right) \right]=\left(
\begin{array}{cccc}
0 &2\mathrm{i}\omega_{\mathbb{C}1\bar{1}} & \omega_{\mathbb{C}1\bar{2}}-\omega_{\mathbb{C}2\bar{1}}
& \mathrm{i}\left(\omega_{\mathbb{C}1\bar{2}}+\omega_{\mathbb{C}2\bar{1}} \right) \\
-2\mathrm{i}\omega_{\mathbb{C}1\bar{1}} &0 &-\mathrm{i}\left(\omega_{\mathbb{C}1\bar{2}}+\omega_{\mathbb{C}2\bar{1}} \right)
& \omega_{\mathbb{C}1\bar{2}}-\omega_{\mathbb{C}2\bar{1}} \\
-\omega_{\mathbb{C}1\bar{2}}+\omega_{\mathbb{C}2\bar{1}} &\mathrm{i}\left(\omega_{\mathbb{C}1\bar{2}}+\omega_{\mathbb{C}2\bar{1}} \right) &0
& 2\mathrm{i}\omega_{\mathbb{C}2\bar{2}} \\
-\mathrm{i}\left(\omega_{\mathbb{C}1\bar{2}}+\omega_{\mathbb{C}2\bar{1}} \right)
&-\omega_{\mathbb{C}1\bar{2}}+\omega_{\mathbb{C}2\bar{1}} &-2\mathrm{i}\omega_{\mathbb{C}2\bar{2}} & 0
\end{array}
\right).\]
Note that $\omega_{\mathbb{C}1\bar{1}}$ and $\omega_{\mathbb{C}2\bar{2}}$ are pure imaginary.
\end{df}
If the coordinate transformation on the coordinate neighborhood is $z_1:=x^2+\mathrm{i}x^1,z_2:=x^4+\mathrm{i}x^3$, then
$\iota_{skew}$ gives the pull-back of a two-form. This means
\[ \sum _{k,l=1}^2\omega_{\mathbb{C}k\bar{l}}\mathrm{d}z_k \wedge \mathrm{d}\bar{z}_l
=\frac{1}{2}\sum _{k,l=1}^4\omega_{kl}\mathrm{d}x^k \wedge \mathrm{d}x^l=
\frac{1}{2}\sum _{k,l=1}^4\left(\iota_{skew}\left[ \omega_\mathbb{C}\right] \right)_{kl}\mathrm{d}x^k \wedge \mathrm{d}x^l.\]
The map $\iota_{skew}$ above is defined so as to satisfy this relation.
\begin{rem}$\iota_{skew}$ satisfies the following relation
\[\det\left[\iota_{skew}\left[\omega_{\mathbb{C}}\right] \right]=16\left(\det\left[\omega_{\mathbb{C}} \right]\right)^2.\]
\end{rem}
Using this result, the following lemma can be deduced.
\begin{lem}\label{lem2.2}
Suppose that the anti-Hermitian matrix $\omega_{\mathbb{C}}$ satisfies $\omega_{\mathbb{C}2\bar{2}}=-\omega_{\mathbb{C}1\bar{1}}$,
i.e.\ $\mathrm{tr}\,\omega_{\mathbb{C}} = 0$.
Then the two-form $\iota_{skew} [\omega_{\mathbb{C}}]$ is anti-self-dual, i.e.,
\[\star\left\{ \iota_{skew}\left[\left(
\begin{array}{cc}
\omega_{\mathbb{C}1\bar{1}} & \omega_{\mathbb{C}1\bar{2}} \\
\omega_{\mathbb{C}2\bar{1}} & \omega_{\mathbb{C}2\bar{2}}
\end{array}
\right)\right]\right\}=-\iota_{skew}\left[\left(
\begin{array}{cc}
\omega_{\mathbb{C}1\bar{1}} & \omega_{\mathbb{C}1\bar{2}} \\
\omega_{\mathbb{C}2\bar{1}} & \omega_{\mathbb{C}2\bar{2}}
\end{array}
\right)\right].\]
\end{lem}
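This lemma can also be checked symbolically. The following Python/SymPy sketch (added for illustration) builds $\iota_{skew}\left[\omega_{\mathbb{C}}\right]$ for a general traceless anti-Hermitian $\omega_{\mathbb{C}}$, with $\omega_{\mathbb{C}1\bar{1}}=\mathrm{i}p$ and $\omega_{\mathbb{C}1\bar{2}}=b_r+\mathrm{i}b_i$, and verifies that the resulting real matrix is anti-self-dual.

```python
import sympy as sp
from itertools import permutations

p, br, bi = sp.symbols('p b_r b_i', real=True)

# traceless anti-Hermitian 2x2: w11 = i p, w22 = -w11, w21 = -conjugate(w12)
w11, w12 = sp.I * p, br + sp.I * bi
w21, w22 = -br + sp.I * bi, -sp.I * p

I = sp.I
# iota_skew as in the definition above; all entries simplify to real expressions
W = sp.simplify(sp.Matrix([
    [0,              2*I*w11,        w12 - w21,      I*(w12 + w21)],
    [-2*I*w11,       0,              -I*(w12 + w21), w12 - w21],
    [-(w12 - w21),   I*(w12 + w21),  0,              2*I*w22],
    [-I*(w12 + w21), -(w12 - w21),   -2*I*w22,       0]]))

def hodge_star(w):
    # (star w)_{kl} = (1/2) eps_{klmn} w_{mn}
    out = sp.zeros(4, 4)
    for perm in permutations(range(4)):
        k, l, m, n = perm
        # sign of the permutation = determinant of its permutation matrix
        sign = sp.Matrix(4, 4, lambda i, j: 1 if j == perm[i] else 0).det()
        out[k, l] += sp.Rational(1, 2) * sign * w[m, n]
    return out

# anti-self-duality: star(W) + W vanishes identically
anti_self_dual = all(sp.simplify(e) == 0 for e in (hodge_star(W) + W))
```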
\section{Hermitian-Einstein metrics and (anti-)self-dual two-forms}\label{Einstein and ASD}
In this section, we discuss how to construct a Hermitian-Einstein metric from an anti-self-dual two-form.
Let us define a $u\left(1 \right)$-valued two-form on $\mathbb{R}^4$ by
\[
\sum _{k,l=1}^4\omega_{kl}\mathrm{d}x^k \wedge \mathrm{d}x^l,\]
where $\omega$ is an antisymmetric matrix $\left(\omega \right)_{kl}:=\omega_{kl}$.
If $\omega$ is an anti-self-dual matrix, then the two-form is called an anti-self-dual two-form.
\subsection{Ricci flat metrics and Hermitian-Einstein metrics}
Let $M$ be a Hermitian manifold and $h$ be its metric.
It is well known that the Ricci curvature $R_{\bar{j}k}$ of a Hermitian manifold $\left(M,h,\nabla \right)$
with the Levi-Civita connection $\nabla$ takes the simple form
\begin{align}\label{ricci}
R_{\bar{j}k}=\partial _{\bar{j}}\partial _{k}\log \left(\det\left[ h\right] \right).
\end{align}
See, for example, \cite{Kobayashi_Nomizu,besse}.
Let $\lambda$ be a cosmological constant. When $h$ satisfies the Einstein equation
\[R_{\bar{k}l}=\lambda h_{\bar{k}l},\]
$M$ is called an Einstein manifold. In this paper we will focus on Ricci-flat manifolds (i.e.\
$R_{\bar{k}l} = 0$ or $\lambda = 0$).
We consider $M$ as a real manifold with local coordinates $x^\mu\left(\mu=1,2,3,4 \right)$.
\begin{df}\label{isym}The map $\iota_{sym}:\left\{h\in M_2[\mathbb{C}]~|~h^\dagger=h \right\}\longrightarrow M_4[\mathbb{R}]$ is defined as
\[\iota_{sym}\left[\left(
\begin{array}{cc}
h_{1\bar{1}} & h_{1\bar{2}} \\
h_{2\bar{1}} & h_{2\bar{2}}
\end{array}
\right) \right]
=\left(
\begin{array}{cccc}
h_{1\bar{1}} &0 &\frac{1}{2}\left(h_{1\bar{2}}+h_{2\bar{1}} \right) &\frac{1}{2\mathrm{i}}\left(h_{2\bar{1}}-h_{1\bar{2}} \right) \\
0 &h_{1\bar{1}} &-\frac{1}{2\mathrm{i}}\left(h_{2\bar{1}}-h_{1\bar{2}} \right) & \frac{1}{2}\left(h_{1\bar{2}}+h_{2\bar{1}} \right) \\
\frac{1}{2}\left(h_{1\bar{2}}+h_{2\bar{1}} \right) &-\frac{1}{2\mathrm{i}}\left(h_{2\bar{1}}-h_{1\bar{2}} \right) &h_{2\bar{2}} &0 \\
\frac{1}{2\mathrm{i}}\left(h_{2\bar{1}}-h_{1\bar{2}} \right) &\frac{1}{2}\left(h_{1\bar{2}}+h_{2\bar{1}} \right) &0 & h_{2\bar{2}}
\end{array}
\right).\]
where $h$ is a matrix and $\left( h\right)_{k\bar{l}}:=h_{k\bar{l}}$.
\end{df}
\begin{rem}Assume that $h$ is a Hermitian metric.
If the coordinate transformation on a coordinate neighborhood is $z^1:=x^2+\mathrm{i}x^1,z^2:=x^4+\mathrm{i}x^3$,
then $\iota_{sym}$ gives the pull-back of the Hermitian metric:
\[ \sum _{k,l=1}^2h_{k\bar{l}}\mathrm{d}z_k \mathrm{d}\bar{z}_l=
\sum _{k,l=1}^4\left(\iota_{sym}\left[ h\right] \right)_{kl}\mathrm{d}x^k \mathrm{d}x^l.\]
Moreover, $\iota_{sym}$ squares the determinant:
\[\det\left[\iota_{sym}\left(h \right) \right]=(\det \left[h \right])^2.\]
\end{rem}
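The determinant identity of the remark can be confirmed symbolically; the following Python/SymPy sketch (added for illustration) checks $\det\left[\iota_{sym}\left(h \right) \right]=(\det \left[h \right])^2$ for a general Hermitian $h$ with $h_{1\bar{2}}=a_r+\mathrm{i}a_i$.

```python
import sympy as sp

h11, h22, ar, ai = sp.symbols('h11 h22 a_r a_i', real=True)
h12 = ar + sp.I * ai
h21 = ar - sp.I * ai          # = conjugate(h12), so h is Hermitian

h = sp.Matrix([[h11, h12], [h21, h22]])

# iota_sym as in the definition above
iota_sym = sp.Matrix([
    [h11, 0, (h12 + h21)/2, (h21 - h12)/(2*sp.I)],
    [0, h11, -(h21 - h12)/(2*sp.I), (h12 + h21)/2],
    [(h12 + h21)/2, -(h21 - h12)/(2*sp.I), h22, 0],
    [(h21 - h12)/(2*sp.I), (h12 + h21)/2, 0, h22]])

# det[iota_sym(h)] - (det[h])^2 vanishes identically
squared = sp.simplify(iota_sym.det() - h.det()**2) == 0
```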
A Hermitian metric constructed with $\iota_{sym}^{-1}$ will be used below.\\
\begin{df}If $\tilde{h}\in C^\infty \left(U,M_2[\mathbb{C}] \right)$ and $\tilde{h}^\dagger =\tilde{h}$, then
$$\tilde{h}>0~in~U ~~ \Longleftrightarrow ~~ \forall u\in U, ~ \tilde{h}\left(u \right)>0 $$
where $\tilde{h}\left(u \right)>0$ means that $\tilde{h}$ is positive definite as a Hermitian matrix.
\end{df}
\begin{lem} \label{lem3.1}
If $U$ is connected and $h \in C^\infty \left(U,M_2[\mathbb{C}] \right)$ is a Hermitian matrix with $\det\left[h \right]=1$
which is positive (negative) definite at some point $p\in U$, then $h$ is positive (negative) definite on all of $U$.
\end{lem}
\begin{pf}This follows from
\begin{align*}
\lefteqn{\left\{h\in M_2[\mathbb{C}]~\big|~h=h^\dagger,~\det\left[h \right]=1 \right\}} \\
&=\left\{\left(
\begin{array}{cc}
a & b \\
\bar{b} & d
\end{array}
\right)
\in M_2[\mathbb{C}]~\big|~a,d\in\mathbb{R},~a>0,~d>0,~a d\geq 1,~\left|b \right|=\sqrt{a d- 1} \right\}\\
&\coprod \left\{\left(
\begin{array}{cc}
a & b \\
\bar{b} & d
\end{array}
\right)
\in M_2[\mathbb{C}]~\big|~a,d\in\mathbb{R},~a<0,~d<0,~a d\geq 1,~\left|b \right|=\sqrt{a d- 1} \right\}
\end{align*}
which shows that this set is the disjoint union of two components, consisting of the positive definite and the negative definite matrices, respectively.
\qed\end{pf}
From the above discussions, the following theorem is obtained.
\begin{thm} \label{thm-kal-ma}
Let $\omega^\pm$ be an (anti-)self-dual two-form on an open neighborhood $U$, i.e.
$\star \omega^\pm =\pm\omega^\pm$,
\label{masspro}
and
\begin{align} \label{sdmetric}
h^\pm:=\iota_{sym}^{-1} \left[2 \left(E_4-\omega^\pm \theta^\mp \right)^{-1}-E_4 \right].
\end{align}
Then $h^\pm$ gives a Ricci-flat Hermitian metric on $U$.
So $(U, h^\pm)$ is a local realization of an Einstein manifold.
\end{thm}
\begin{pf}Because of Lemma \ref{lem2.1},
if $\star \omega^\pm =\pm\omega^\pm$, then
\begin{align}\label{sd}
\det\left[h^\pm \right]=1.
\end{align}
Because of Lemma \ref{lem3.1} and Remark \ref{gsym}, $h^\pm$ is a metric tensor.
From equations (\ref{ricci}) and (\ref{sd}),
$R_{\bar{j}k}=\partial _{\bar{j}}\partial _{k}\log\left(\det\left[ h^\pm\right] \right)=0$.
\qed\end{pf}
Local complex coordinates can be arranged in such a way that the Jacobians of the
transition functions on overlapping charts are one on all the overlaps.
In that case, $\det [h^\pm]$ is a globally defined function and
the Ricci-flat condition reduces to the Monge-Amp\`ere equation \cite{ma-eq}
\begin{equation}\label{ma-eq}
\det [h^\pm] = \kappa,
\end{equation}
where the constant $\kappa$ is related to the volume of a K\"ahler manifold that depends only on the K\"ahler
class. Therefore Theorem \ref{thm-kal-ma} implies that the self-duality for the two-form $\omega^\pm$ is equivalent to the Ricci-flat condition \eq{ma-eq} of K\"ahler manifolds defined
by the metric $h^\pm$ \cite{u1-cym}.
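The whole construction can be illustrated numerically. The following Python/NumPy sketch (added here; the constant anti-self-dual $\omega^{-}$ and the value of $\theta$ are arbitrary sample choices) forms $g^{-}=2\left(E_4-\omega^{-}\theta^{+}\right)^{-1}-E_4$, reads off the $2\times 2$ Hermitian matrix $h^{-}=\iota_{sym}^{-1}\left[g^{-}\right]$ from the pattern of Definition \ref{isym}, and confirms the Monge-Amp\`ere relation $\det\left[h^{-}\right]=1$, i.e.\ \eq{ma-eq} with $\kappa=1$.

```python
import numpy as np

t = 0.2
a, b, c = 1.0, 2.0, 3.0

# anti-self-dual: w12 = -w34 = a, w13 = w24 = b, w14 = -w23 = c
omega_m = np.array([[0,  a,  b,  c],
                    [-a, 0, -c,  b],
                    [-b, c,  0, -a],
                    [-c, -b, a,  0]])
# self-dual theta^+
theta_p = np.array([[0, -t, 0, 0],
                    [t,  0, 0, 0],
                    [0,  0, 0, -t],
                    [0,  0, t,  0]])

E4 = np.eye(4)
g = 2 * np.linalg.inv(E4 - omega_m @ theta_p) - E4

# invert iota_sym: h11 = g[0,0], h22 = g[2,2], h12 = g[0,2] + i*g[1,2]
h = np.array([[g[0, 0], g[0, 2] + 1j * g[1, 2]],
              [g[0, 2] - 1j * g[1, 2], g[2, 2]]])

det_g = np.linalg.det(g)   # equals 1 by the self-duality lemma
det_h = np.linalg.det(h)   # equals 1: the Monge-Ampere equation with kappa = 1
```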
\section{Hermitian-Einstein metric from noncommutative instanton on $\mathbb{C}^2$}\label{sect4}
In the previous section we found a way to construct a Hermitian-Einstein metric from an (anti-)self-dual two-form.
To construct the Hermitian-Einstein metric, we will employ the instanton curvature
on noncommutative $\mathbb{C}^2$ as the (anti-)self-dual two-form.
There are many ways to obtain noncommutative $\mathbb{C}^2$
(see \cite{nc-review} for a review and references therein).
We use the Fock representation of noncommutative $\mathbb{C}^2$ given in \cite{Sako:2016gqb},
which is based on the Karabegov's deformation quantization \cite{Karabegov1996}.
There is a simple dictionary between the Fock representation and ordinary functions.
Using the dictionary, the Hermitian-Einstein metric is expressed in terms of usual functions.
\subsection{Noncommutative $\mathbb{C}^2$}
Consider the noncommutative algebra $\left(C^\infty \left(\mathbb{C}^2 \right)\left[\left[\hbar \right] \right],* \right)$ defined by the star product (\ref{f*g})
in Appendix \ref{dq}.
The star product induces a Heisenberg algebra
\begin{align}
\label{ncp}
\left[z^k,\bar{z}^l\right]_*=-\zeta_k\delta_{kl},
\qquad \left[z^k,z^l\right]_*=0,\qquad \left[\bar{z}^k,\bar{z}^l\right]_*=0 ,
\end{align}
where $\left[x,y\right]_*:=x*y-y*x $. We represent it by creation and annihilation operators given by
\[a_k:=\frac{\bar{z}^k}{\sqrt{\zeta_k}},\qquad a_k^\dagger:=\frac{z^k}{\sqrt{\zeta_k}},\]
then
\[\left[a_k,a_l^\dagger \right]_*=\delta_{kl},\qquad \left[a_k^\dagger ,a_l^\dagger \right]_*=0,\qquad \left[a_k,a_l\right]_*=0.\]
In the following $\zeta_1=\zeta_2=\zeta>0$ is assumed.\\
Note that the choice of a noncommutative parameter has the freedom associated
with a choice of a background two-form \cite{sw-ncft}.
Here the $\zeta$ in (\ref{ncp}) is regarded as the only noncommutative parameter.
However, in Section \ref{sect5}, we will implicitly assume the identification $\zeta := 2 \theta$
since we will work in the background-independent prescription, i.e. $\theta = B^{-1}$.\\
The algebra $\mathcal{F}$ over $\mathbb{C}$ is defined as follows.
The Fock space $\mathcal{H}$ is a linear space
spanned by the bases generated by acting $a_l^\dagger$'s on $\Ket{0,0}$ :
\begin{align}
\frac{1}{\sqrt{m_1!m_2!}}\left( a_1^\dagger\right)^{m_1}_**\left( a_2^\dagger\right)^{m_2}_*\Ket{0,0}= \Ket{m_1,m_2} ,\label{fsp}
\end{align}
where $m_1$ and $m_2$ are nonnegative integers and $\left( a\right)^m_*$ stands for $\overbrace{a * \cdots * a}^m$.
The ground state $\Ket{0,0}$ satisfies $a_l\Ket{0,0}=0, ~ \forall ~ l$.
Here, we define the basis of a dual vector space by acting $a_l$'s on $\Bra{0,0}$ as
$$\frac{1}{\sqrt{n_1!n_2!}}\Bra{0,0}\left(a_1\right)^{n_1}_**\left(a_2\right)^{n_2}_*=\Bra{n_1,n_2} , $$
where $\Bra{0,0}$ satisfies $\Bra{0,0}a_l^\dagger=0, ~ \forall ~ l$. Then we define a set of linear operators as
\begin{align}
\mathcal{F}:=span_{\mathbb{C}}\left(\Ket{m_1,m_2}\Bra{n_1,n_2};m_1,m_2,n_1,n_2=0,1,2,\cdots \right) \label{ketbra}
\end{align}
where $\left( \Ket{m_1,m_2}\Bra{n_1,n_2}\right)\Ket{k_1,k_2}=\delta_{k_1n_1}\delta_{k_2n_2}\Ket{m_1,m_2}$ and
$\Bra{k_1,k_2}\left( \Ket{m_1,m_2}\Bra{n_1,n_2}\right)=\delta_{k_1m_1}\delta_{k_2m_2}\Bra{n_1,n_2}$.
The product on $\mathcal{F}$ is defined as
$$\left(\Ket{j_1,j_2}\Bra{k_1,k_2} \right)\circ\left( \Ket{m_1,m_2}\Bra{n_1,n_2}\right):=
\delta_{k_1m_1}\delta_{k_2m_2}\Ket{j_1,j_2}\Bra{n_1,n_2},$$
so, $\mathcal{F}$ is an algebra.
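The product rule above is the familiar composition rule of rank-one operators; as an illustration (a Python/NumPy sketch added here, with the Fock space truncated to finitely many levels), $\Ket{m_1,m_2}\Bra{n_1,n_2}$ can be realized as an outer product and the relation checked directly.

```python
import numpy as np

N = 4  # keep Fock levels 0..N-1 in each of the two modes

def ket(m1, m2):
    """|m1, m2> in the truncated two-mode Fock space."""
    v = np.zeros(N * N)
    v[m1 * N + m2] = 1.0
    return v

def op(m1, m2, n1, n2):
    """|m1, m2><n1, n2| as a rank-one matrix."""
    return np.outer(ket(m1, m2), ket(n1, n2))

# (|1,2><0,1|) o (|0,1><2,0|) = |1,2><2,0| : inner labels match
match_ok = np.allclose(op(1, 2, 0, 1) @ op(0, 1, 2, 0), op(1, 2, 2, 0))
# (|1,2><0,1|) o (|1,0><2,0|) = 0 : inner labels do not match
zero_ok = np.allclose(op(1, 2, 0, 1) @ op(1, 0, 2, 0), 0.0)
```

Via the dictionary below, the same composition rule reappears as the identity (\ref{e*e}) for the functions $e_{(m_1,m_2,n_1,n_2)}$.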
There is a one-to-one correspondence between $\mathcal{F}$ and a subalgebra of $C^\infty \left(\mathbb{C}^2 \right)$.
For an arbitrary noncommutative K\"ahler manifold
obtained by deformation quantization
with separation of variables \cite{Karabegov1996},
a similar correspondence can be found \cite{Sako:2016gqb}.
The following is the simplest example of the correspondence.
\begin{df}[Twisted Fock representation]\label{iota} The linear map
$\iota:\mathcal{F}\longrightarrow C^\infty \left(\mathbb{C}^2 \right)$
is defined as
\begin{align}
\iota\left(\Ket{m_1,m_2}\Bra{n_1,n_2} \right)=e_{\left( m_1,m_2,n_1,n_2\right)}:
=\frac{z_1^{m_1}z_2^{m_2}\mathrm{e}^{-\frac{z^1\bar{z}^1+z^2\bar{z}^2}{\zeta}}\bar{z}_1^{n_1}
\bar{z}_2^{n_2}}{\sqrt{m_1!m_2!n_1!n_2!}\left( \sqrt{\zeta}\right)^{m_1+m_2+n_1+n_2}},\label{Fockrep}
\end{align}
especially $\iota\left( \Ket{0,0}\Bra{0,0}\right)=e_{\left( 0,0,0,0\right)}=\mathrm{e}^{-\frac{z^1\bar{z}^1+z^2\bar{z}^2}{\zeta}}.$
\end{df}
\begin{prop}Let $\iota \left(\mathcal{F}\right)$ be defined by
\begin{align}
\iota \left(\mathcal{F}\right):=span_{\mathbb{C}}\left(e_{\left( m_1,m_2,n_1,n_2\right)};m_1,m_2,n_1,n_2=0,1,2,\cdots \right).
\end{align}
Then $\left\{\iota \left(\mathcal{F}\right),* \right\}$ is an algebra, where $*$ is the star product (\ref{f*g}).
\end{prop}
\begin{pf}
After a little algebra, one can deduce the following identity
\begin{align}
e_{\left( k_1,k_2,l_1,l_2\right)}*e_{\left( m_1,m_2,n_1,n_2\right)}=\delta_{l_1m_1}\delta_{l_2m_2}e_{\left( k_1,k_2,n_1,n_2\right)}. \label{e*e}
\end{align}
Details are given in \cite{Sako:2016gqb}.
\qed\end{pf}
The identity (\ref{e*e}) implies the following fact.
\begin{prop}\label{dictionary}The algebras $\left(\mathcal{F},\circ\right)$ and $\left\{ \iota \left(\mathcal{F}\right),*\right\}$
are isomorphic.
\end{prop}
This isomorphism $\iota$ is a ``Fock space -- function space'' dictionary.
Thanks to this isomorphism, we do not distinguish these two algebras, and we only use $*$ to represent products in the following.\\
\bigskip
Here we consider a $U(1)$ gauge theory on noncommutative $\mathbb{C}^2$.
The $U(1)$ gauge connection in the noncommutative space is defined as follows (see for example \cite{Nekrasov:2000ih}).
\begin{df}Rescaled coordinates of $\mathbb{C}^2$ are defined as
\[\hat{\partial }_{z_l}:=\frac{\bar{z}_l}{\zeta_l}.\]
They act on $\mathcal{H}$ as linear operators.
\end{df}
Using $\hat{\partial }_{z_l},\hat{\partial }_{\bar{z}_m}$,
let us introduce covariant derivatives and the gauge curvature as follows.
\begin{df}\label{cov}Covariant derivatives for a scalar field $\phi\in \mathcal{F}$ in the fundamental representation on noncommutative $\mathbb{C}^2$ are defined as
\[\hat{\nabla }_{z_l}\hat{\phi}:=\left[\hat{\partial }_{z_l},\hat{\phi }\right]_*+\hat{A}_{z_l}*\hat{\phi}
=-\hat{\phi}*\hat{\partial }_{z_l}+\hat{D}_{z_l}*\hat{\phi}\]
where we define a local gauge field $\hat{A}_{z_l}\in \mathcal{F}$ and
\[\hat{D}_{z_l}:=\hat{\partial }_{z_l}+\hat{A}_{z_l}.\]
The gauge curvature is defined as
\begin{align}
\hat{F}_{z_l\bar{z}_m}:&=\mathrm{i}\left[\hat{\nabla }_{z_l},\hat{\nabla }_{\bar{z}_m} \right]_*
=-\frac{\mathrm{i}\delta_{lm}}{\zeta_l}+\mathrm{i}\left[\hat{D}_{z_l},\hat{D}_{\bar{z}_m} \right]_*, \\
\hat{F}_{z_lz_m}:&=\mathrm{i}\left[\hat{\nabla }_{z_l},\hat{\nabla }_{z_m} \right]_*
=\mathrm{i}\left[\hat{D}_{z_l},\hat{D}_{z_m} \right]_*, \nonumber \\
\hat{F}_{\bar{z}_l\bar{z}_m}:&=\mathrm{i}\left[\hat{\nabla }_{\bar{z}_l},\hat{\nabla }_{\bar{z}_m} \right]_*
=\mathrm{i}\left[\hat{D}_{\bar{z}_l},\hat{D}_{\bar{z}_m} \right]_*. \nonumber
\end{align}
\end{df}
\subsection{Ricci-flat metrics from noncommutative $k$-instantons}
In this section, we construct Ricci-flat metrics on a local neighborhood from noncommutative instantons on ${\mathbb C}^2$.
As we saw in Section \ref{Einstein and ASD}, (anti-)self-dual two-forms satisfying (\ref{asd}) give rise to Ricci-flat metrics.
Nekrasov and Schwarz found in \cite{Nekrasov:1998ss} how to construct noncommutative instantons on ${\mathbb C}^2$
by using the ADHM method and the general solutions for the $U(1)$ gauge theory are given in \cite{Nekrasov:2000ih}.
We introduce the commutation relations of complex coordinates as in (\ref{ncp}).
As the (anti-)self-dual two-forms of Section \ref{Einstein and ASD},
we employ the noncommutative instantons given in \cite{Ishikawa:2001ye}.\\
The general instanton solutions (see \cite{Ishikawa:2001ye}) satisfy the (anti-)self-dual relation.
An instanton curvature tensor is described by
\[\hat{F}^-_{\mathbb{C}}\left[k \right]:=\left(
\begin{array}{cc}
\hat{F}^-_{z_1\bar{z}_1}\left[k \right] & \hat{F}^-_{z_1\bar{z}_2}\left[k \right] \\
\hat{F}^-_{z_2\bar{z}_1}\left[k \right] & -\hat{F}^-_{z_1\bar{z}_1}\left[k \right]
\end{array}
\right) , \]
and satisfies (\ref{asd}):
\begin{align}
\star\left(\iota_{skew}\left(\hat{F}^-_{\mathbb{C}}\left[k \right] \right) \right)=-\iota_{skew}\left(\hat{F}^-_{\mathbb{C}}\left[k \right] \right).
\end{align}
See Lemma \ref{lem2.2} in Section \ref{sect2}.
This fact leads to the following result.
\begin{prop}
If $\hat{F}^-_{\mathbb{C}}$ is a $k$-instanton curvature tensor of $U(1)$ gauge theory
on noncommutative ${\mathbb C}^2$,
and
\begin{align}
h\left[k \right]:= & \iota^{-1}_{sym}\left\{
2\left(E_4-\iota_{skew}\left(\hat{F}^-_{\mathbb{C}}\left[k \right] \right) \theta^+ \right)^{-1}-E_4 \right\} \nonumber \\
=&\frac{1}{4\left|\hat{F}_\mathbb{C}^-\left[k \right] \right|\theta^2-1}\left(
\begin{array}{cc}
-4\mathrm{i}\hat{F}^-_{z_1\bar{z}_1}\left[k \right]\theta-2 & -4\mathrm{i}\hat{F}^-_{z_1\bar{z}_2}\left[k \right]\theta \\
-4\mathrm{i}\hat{F}^-_{z_2\bar{z}_1}\left[k \right]\theta & 4\mathrm{i}\hat{F}^-_{z_1\bar{z}_1}\left[k \right]\theta-2
\end{array}
\right)-\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right),
\end{align}
then $h\left[k \right]$ is an Einstein (Ricci-flat) metric.
\end{prop}
A concrete example of the $k$-instanton curvature tensor is given in \cite{Ishikawa:2001ye},
where the curvature is written in terms of linear operators on a Fock space.
From (\ref{Fockrep}) and Proposition \ref{dictionary}, it is known
how to translate the operators into functions. (See also Appendix \ref{fock} and \cite{Sako:2016gqb}.)
The $k$-instanton curvature tensor is then expressed in terms of concrete elementary functions as follows:
\begin{align*}
\lefteqn{\hat{F}^-_{z_1\bar{z}_1}\left[k \right]=\frac{\mathrm{i}}{\zeta}-\frac{\mathrm{i}}{\zeta}\sum _{n_2=0}^\infty
\frac{z_2^{n_2}\mathrm{e}^{-\frac{z^1\bar{z}^1+z^2\bar{z}^2}{\zeta}}\bar{z}_2^{n_2}}{n_2!\zeta^{n_2}} \left(d_1\left(0,n_2;k \right) \right)^2} \\
&-\frac{\mathrm{i}}{\zeta} \sum _{n_1=1}^\infty \sum _{n_2=0}^\infty
\frac{z_1^{n_1}z_2^{n_2}\mathrm{e}^{-\frac{z^1\bar{z}^1+z^2\bar{z}^2}{\zeta}}\bar{z}_1^{n_1}\bar{z}_2^{n_2}}{n_1!n_2!\zeta^{n_1+n_2}}
\left\{\left(d_1\left(n_1,n_2;k \right) \right)^2-\left(d_1\left(n_1-1,n_2;k \right) \right)^2 \right\},
\end{align*}
\begin{align*}
\lefteqn{\hat{F}^-_{z_1\bar{z}_2}\left[k \right]=-\frac{\mathrm{i}}{\zeta}\frac{z_1^{k-1}z_2\mathrm{e}^{-\frac{z^1\bar{z}^1+z^2\bar{z}^2}{\zeta}}}
{\sqrt{\left(k-1 \right)!}\left( \sqrt{\zeta}\right)^{k}}d_1\left(k-1,1;k \right)d_2\left(0,0;k \right)} \\
&-\frac{\mathrm{i}}{\zeta}\sum _{n_1=1}^{k-1} \frac{z_1^{n_1+k-1}z_2\mathrm{e}^{-\frac{z^1\bar{z}^1+z^2\bar{z}^2}{\zeta}}\bar{z}_1^{n_1}}
{\sqrt{\left(n_1+k-1 \right)!n_1!}\left( \sqrt{\zeta}\right)^{2n_1+k}}
\left\{d_1\left(n_1+k-1,1;k \right)d_2\left(n_1,0;k \right)-d_1\left(n_1-1,0;k \right)d_2\left(n_1-1,0;k \right) \right\} \\
&-\frac{\mathrm{i}}{\zeta} \sum _{n_1=1}^\infty \sum _{n_2=1}^\infty
\frac{z_1^{n_1-1}z_2^{n_2+1}\mathrm{e}^{-\frac{z^1\bar{z}^1+z^2\bar{z}^2}{\zeta}}\bar{z}_1^{n_1}\bar{z}_2^{n_2}}
{\sqrt{\left(n_1-1 \right)!\left(n_2+1 \right)!n_1!n_2!}\left( \sqrt{\zeta}\right)^{2n_1+2n_2}} \\
&\times \left\{d_1\left(n_1-1,n_2+1;k \right)d_2\left(n_1,n_2;k \right)-d_1\left(n_1-1,n_2;k \right)d_2\left(n_1-1,n_2;k \right) \right\},
\end{align*}
\[\hat{F}^-_{z_1\bar{z}_2}\left[k \right] =- \hat{F}^-_{z_2\bar{z}_1}\left[k \right]^\dagger,\]
where $n_2\neq 0$ and
\begin{align}
\lefteqn{d_1\left(n_1,0;k \right)=\sqrt{n_1+k+1}\sqrt{\frac{\Lambda\left(n_1+k+1,0 \right)}{\Lambda\left(n_1+k,0 \right)}},}\nonumber \\
&d_1\left(n_1,n_2;k \right)=\sqrt{n_1+1}\sqrt{\frac{\Lambda\left(n_1+1,n_2 \right)}{\Lambda\left(n_1,n_2 \right)}},\label{d1} \\
&d_2\left(n_1,0;k \right)=\sqrt{\frac{\Lambda\left(n_1+k,1 \right)}{\Lambda\left(n_1+k,0 \right)}},\nonumber \\
&d_2\left(n_1,n_2;k \right)=\sqrt{n_2+1}\sqrt{\frac{\Lambda\left(n_1,n_2+1 \right)}{\Lambda\left(n_1,n_2 \right)}} \label{d2} .
\end{align}
Here
$$\Lambda\left[k \right]\left(n_1,n_2 \right)=\frac{w_k\left[k \right]\left(n_1,n_2 \right)}
{w_k\left[k \right]\left(n_1,n_2 \right)-2kw_{k-1}\left[k \right]\left(n_1,n_2 \right)},$$
and
$$w_n\left[k \right]\left(n_1,n_2 \right)=\sum _{l=0}^n \left\{ \frac{n!}{l!}\frac{\left(n_1-n_2+k+l \right)!}{\left(n_1-n_2-k \right)!}
\frac{2^{\left(n-l \right)}}{\left(n-l \right)!}\frac{\left(n_2+\left(n-l \right) \right)!}{n_2!}\right\}.$$
Note that some notations are slightly changed from \cite{Ishikawa:2001ye}, and additional factors of the imaginary unit appear here.
See also Appendix \ref{imaginaryunit}.\\
Using these instanton curvatures, Hermitian-Einstein metrics
can be constructed in terms of concrete elementary functions according to Theorem \ref{masspro}.
\subsection{Einstein metric from finite $N$ }\label{finiten}
The full noncommutative $U\left(1 \right)$ instanton
solution is very complicated.
For simplicity, let us consider the $\zeta$-expansion. \\
In the previous subsection, $\hat{F}^-$ is represented by an infinite series
\begin{align}
\hat{F}^-=\sum _{n=1}^\infty\left(\frac{1}{\zeta} \right)^{\frac{n}{2}} \hat{F}^-_{\left({\frac{n}{2}} \right)}. \label{infiniteseries}
\end{align}
The anti-self-dual condition $\star \hat{F}^-=-\hat{F}^-$ implies
\begin{align}
\star \hat{F}^-_{\left({\frac{n}{2}} \right)}=-\hat{F}^-_{\left({\frac{n}{2}} \right)}
\end{align}
for each $n/2$.
Therefore it is possible to employ an arbitrary partial sum of (\ref{infiniteseries}) determined by a subset
$\displaystyle S\subset {\frac{1}{2}}\mathbb{Z}_{>0}$
\begin{align}
\hat{F}^-_S=\sum _{{\frac{n}{2}} \in S}\left(\frac{1}{\zeta} \right)^{\frac{n}{2}} \hat{F}^-_{\left({\frac{n}{2}} \right)}\label{Fseries}
\end{align}
for the anti-self-dual two-form to construct a Hermitian-Einstein metric $h$
without losing rigor.\footnote{One may impose an even looser condition
than (\ref{Fseries}): a different subset $S$ may be chosen for each of
$\hat{F}^-_{z_1\bar{z}_1},\hat{F}^-_{z_1\bar{z}_2}$ to obtain a Hermitian-Einstein metric.}
In the following we consider
\begin{align}
\hat{F}^-_{\left\{{\frac{N}{2}} \right\}}:=\sum _{n=1}^{N}
\left(\frac{1}{\zeta} \right)^{\frac{n}{2}} \hat{F}^-_{\left({\frac{n}{2}} \right)}.
\end{align}
\begin{ex}
First let us make the Ricci-flat metric $h\left[k \right]_{\left\{1 \right\}}$ from ${\hat{F}^-_\mathbb{C}\left[k \right]}_{\left\{1 \right\}}$.
The curvature tensor in this case is ${\hat{F}^-_\mathbb{C}\left[k \right]}_{\left\{1 \right\}}=\left(
\begin{array}{cc}
\frac{\mathrm{i}}{\zeta} & 0 \\
0 & -\frac{\mathrm{i}}{\zeta}
\end{array}
\right)$, and its determinant is
$\det\left[{\hat{F}^-_\mathbb{C}\left[k \right]}_{\left\{1 \right\}} \right]=\frac{1}{\zeta^2}.$
So the metric $h\left[k \right]_{\left\{1 \right\}}$ is given by
\begin{align}
h\left[k \right]_{\left\{1 \right\}} &:=\frac{1}{4~\det\left[{\hat{F}^-_\mathbb{C}\left[k \right]}_{\left\{1 \right\}} \right]\theta^2-1}\left(
\begin{array}{cc}
-4\mathrm{i}{\hat{F}^-_{z_1\bar{z}_1}\left[k \right]}_{\left\{1 \right\}}\theta-2
& -4\mathrm{i}{\hat{F}^-_{z_1\bar{z}_2}\left[k \right]}_{\left\{1 \right\}}\theta \\
-4\mathrm{i}{\hat{F}^-_{z_2\bar{z}_1}\left[k \right]}_{\left\{1 \right\}}\theta
& 4\mathrm{i}{\hat{F}^-_{z_1\bar{z}_1}\left[k \right]}_{\left\{1 \right\}}\theta-2
\end{array}
\right)-\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right) \nonumber \\
&=\frac{1}{1-4\zeta^{-2}\theta^2}\left(
\begin{array}{cc}
1-4\zeta^{-1}\theta+4\zeta^{-2}\theta^2 & 0 \\
0 & 1+4\zeta^{-1}\theta+4\zeta^{-2}\theta^2
\end{array}
\right)=\left(
\begin{array}{cc}
\frac{1-2\zeta^{-1}\theta}{1+2\zeta^{-1}\theta} & 0 \\
0 & \frac{1+2\zeta^{-1}\theta}{1-2\zeta^{-1}\theta}
\end{array}
\right). \nonumber
\end{align}
This is essentially the Euclidean metric.
\end{ex}
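As a sanity check (not part of the original derivation), the matrix algebra of this example can be verified symbolically. The SymPy sketch below assumes the truncated curvature $\mathrm{diag}(\mathrm{i}/\zeta,-\mathrm{i}/\zeta)$ and the displayed metric formula:

```python
import sympy as sp

theta, zeta = sp.symbols('theta zeta', positive=True)

F11 = sp.I/zeta               # truncated curvature: F = diag(i/zeta, -i/zeta)
F12 = F21 = 0
detF = F11*(-F11) - F12*F21   # determinant of [[F11, F12], [F21, -F11]]

# metric formula h = (4 det[F] theta^2 - 1)^{-1} M - E_4
M = sp.Matrix([[-4*sp.I*F11*theta - 2, -4*sp.I*F12*theta],
               [-4*sp.I*F21*theta,      4*sp.I*F11*theta - 2]])
h = M/(4*detF*theta**2 - 1) - sp.eye(2)

u = 2*theta/zeta
expected = sp.diag((1 - u)/(1 + u), (1 + u)/(1 - u))
assert (h - expected).applyfunc(sp.simplify) == sp.zeros(2, 2)
```

The check confirms that the general formula collapses to the stated diagonal matrix.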
\begin{ex}
Let us make a Ricci-flat metric $h\left[k \right]_{\left\{2 \right\}}$ from ${\hat{F}^-_\mathbb{C}\left[k \right]}_{\left\{2 \right\}}$.
From (\ref{f11k}),(\ref{f12k}),
\begin{align}
{\hat{F}^-_\mathbb{C}\left[k \right]}_{\left\{2 \right\}}& =\frac{\mathrm{i}}{\zeta} \left[
1-\frac{z_2\bar{z}_2}{\zeta} \left(d_1\left(0,1;k \right) \right)^2
-\frac{z_1\bar{z}_1}{\zeta}\left\{\left(d_1\left(1,0;k \right) \right)^2-\left(d_1\left(0,0;k \right) \right)^2 \right\} \right] \left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right) \nonumber \\
&-\frac{\mathrm{i}d_1\left(k-1,1;k \right)d_2\left(0,0;k \right)}{\zeta^{1+k/2}\sqrt{\left(k-1 \right)!}}\left(
\begin{array}{cc}
0 & z_1^{k-1}z_2 \nonumber\\
\bar{z}_1^{k-1}\bar{z}_2 & 0
\end{array}
\right). \nonumber
\end{align}
Then its determinant is
\begin{align}
\det\left[{\hat{F}^-_\mathbb{C}\left[k \right]}_{\left\{2 \right\}} \right] &
=\frac{1}{\zeta^2} \left[
1-\frac{z_2\bar{z}_2}{\zeta} \left(d_1\left(0,1;k \right) \right)^2
-\frac{z_1\bar{z}_1}{\zeta}\left\{\left(d_1\left(1,0;k \right) \right)^2-\left(d_1\left(0,0;k \right) \right)^2 \right\} \right]^2 \nonumber \\
&+ \frac{\left\{d_1\left(k-1,1;k \right)\right\}^2\left\{d_2\left(0,0;k \right) \right\}^2z_1^{k-1}z_2\bar{z}_1^{k-1}\bar{z}_2 }
{\zeta^{2+k}\left(k-1 \right)!}.\nonumber
\end{align}
So the metric $h\left[k \right]_{\left\{2 \right\}}$ is given by
\begin{align}
h\left[k \right]_{\left\{2 \right\}} &:=\frac{1}{4\det\left[{\hat{F}^-_\mathbb{C}\left[k \right]}_{\left\{2 \right\}} \right]\theta^2-1}\left(
\begin{array}{cc}
-4\mathrm{i}{\hat{F}^-_{z_1\bar{z}_1}\left[k \right]}_{\left\{2 \right\}}\theta-2
& -4\mathrm{i}{\hat{F}^-_{z_1\bar{z}_2}\left[k \right]}_{\left\{2 \right\}}\theta \\
-4\mathrm{i}{\hat{F}^-_{z_2\bar{z}_1}\left[k \right]}_{\left\{2 \right\}}\theta
& 4\mathrm{i}{\hat{F}^-_{z_1\bar{z}_1}\left[k \right]}_{\left\{2 \right\}}\theta-2
\end{array}
\right)-\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right), \nonumber
\end{align}
which can be calculated concretely, though its expression becomes complicated.
To simplify it we assume $k>3$; then
\begin{align}
h\left[k \right]_{\left\{2 \right\}}&
=\left\{ \frac{2}{1-4\det\left[{\hat{F}_\mathbb{C}^-\left[k \right]}_{\left\{2 \right\}} \right] \theta^2}-1\right\}\left(
\begin{array}{cc}
1 & 0 \\
0& 1
\end{array}
\right)
+\frac{4\mathrm{i}{\hat{F}^-_{z_1\bar{z}_1}\left[k \right]}_{\left\{2 \right\}}\theta}
{1-4\det\left[{\hat{F}_\mathbb{C}^-\left[k \right]}_{\left\{2 \right\}} \right]\theta^2}\left(
\begin{array}{cc}
1 & 0 \\
0& -1
\end{array}
\right) \nonumber \\
&=\left\{ \frac{2}{1-4\theta^2\zeta^{-2}\left[
1-\frac{z_2\bar{z}_2}{\zeta} \left(d_1\left(0,1;k \right) \right)^2
-\frac{z_1\bar{z}_1}{\zeta}\left\{\left(d_1\left(1,0;k \right) \right)^2-\left(d_1\left(0,0;k \right) \right)^2 \right\} \right]^2}-1\right\}\left(
\begin{array}{cc}
1 & 0 \\
0& 1
\end{array}
\right) \nonumber \\
&-\frac{\frac{4\theta}{\zeta} \left[
1-\frac{z_2\bar{z}_2}{\zeta} \left(d_1\left(0,1;k \right) \right)^2
-\frac{z_1\bar{z}_1}{\zeta}\left\{\left(d_1\left(1,0;k \right) \right)^2-\left(d_1\left(0,0;k \right) \right)^2 \right\} \right] }
{1-4\theta^2\zeta^{-2}\left[
1-\frac{z_2\bar{z}_2}{\zeta} \left(d_1\left(0,1;k \right) \right)^2
-\frac{z_1\bar{z}_1}{\zeta}\left\{\left(d_1\left(1,0;k \right) \right)^2-\left(d_1\left(0,0;k \right) \right)^2 \right\} \right]^2}\left(
\begin{array}{cc}
1 & 0 \\
0& -1
\end{array}
\right). \nonumber
\end{align}
\end{ex}
In the next subsection, we discuss a Hermitian-Einstein metric obtained from
the $1$-instanton solution.
\subsection{Hermitian-Einstein metric from a 1-instanton}\label{1inst}
For the simplest example of the Hermitian-Einstein metric given in the previous discussion,
we describe a Hermitian-Einstein metric obtained from a single noncommutative $U(1)$ instanton.
Here we focus on the low-order terms.\\
For $k=1$, $\hat{F}^-_{\mathbb{C}}\left[1 \right]$ is
\begin{align*}
\lefteqn{\hat{F}^-_{z_1\bar{z}_1}\left[1 \right]=\frac{\mathrm{i}}{\zeta}-\frac{\mathrm{i}z_2\bar{z}_2}{\zeta^2} \left(d_1\left(0,1;1 \right) \right)^2
-\frac{\mathrm{i}z_1\bar{z}_1}{\zeta^2}\left\{\left(d_1\left(1,0;1 \right) \right)^2-\left(d_1\left(0,0;1 \right) \right)^2 \right\}
+\mathcal{O}\left(\zeta^{-3} \right) } \\
&=\frac{\mathrm{i}}{\zeta}- \frac{2\mathrm{i}}{3}\frac{z_2\bar{z}_2}{\zeta^2}-\frac{\mathrm{i}z_1\bar{z}_1}{\zeta^2}
\left\{\frac{5}{2}-\frac{4}{3} \right\}+\mathcal{O}\left(\zeta^{-3} \right)
=\frac{\mathrm{i}}{\zeta}- \frac{\mathrm{i}}{6\zeta^2}\left(4z_2\bar{z}_2+7z_1\bar{z}_1\right)+\mathcal{O}\left(\zeta^{-3} \right) \\
&\hat{F}^-_{z_1\bar{z}_2}\left[1 \right]=-\frac{\mathrm{i}z_2}{\zeta^{3/2}}\left(1-\frac{z_1\bar{z}_1}{\zeta} -\frac{z_2\bar{z}_2}{\zeta} \right)
d_1\left(0,1;1 \right)d_2\left(0,0;1 \right)+\mathcal{O}\left(\zeta^{-3} \right) \\
&=-\frac{2\mathrm{i}z_2}{3\zeta^{3/2}}\left(1-\frac{z_1\bar{z}_1}{\zeta} -\frac{z_2\bar{z}_2}{\zeta} \right)+\mathcal{O}\left(\zeta^{-3} \right) \\
&\hat{F}^-_{z_2\bar{z}_1}\left[1 \right]=-\frac{\mathrm{i}\bar{z}_2}{\zeta^{3/2}}\left(1-\frac{z_1\bar{z}_1}{\zeta} -\frac{z_2\bar{z}_2}{\zeta} \right)
d_1\left(0,1;1 \right)d_2\left(0,0;1 \right)+\mathcal{O}\left(\zeta^{-3} \right) \\
&=-\frac{2\mathrm{i}\bar{z}_2}{3\zeta^{3/2}}\left(1-\frac{z_1\bar{z}_1}{\zeta} -\frac{z_2\bar{z}_2}{\zeta} \right)+\mathcal{O}\left(\zeta^{-3} \right)
\end{align*}
from (\ref{f11k}),(\ref{f121}). Then
\begin{align}
\det\left[{\hat{F}_\mathbb{C}^-\left[1 \right]}_{\left\{2 \right\}}\right]
&=\frac{4z_2\bar{z}_2}{9\zeta^5}\left(\zeta-z_1\bar{z}_1-z_2\bar{z}_2\right)^2
+\frac{1}{36\zeta^4}\left( 6\zeta- 7z_1\bar{z}_1-4z_2\bar{z}_2\right)^2.
\end{align}
From this $1$-instanton curvature, the Hermitian-Einstein metric is given as
\begin{align*}
&h \left[1 \right]_{\left\{2 \right\}}:=\frac{1}{4~\det\left[{\hat{F}_\mathbb{C}^-\left[1 \right]}_{\left\{2 \right\}}\right]\theta^2-1}\left(
\begin{array}{cc}
-4\mathrm{i}\hat{F}^-_{z_1\bar{z}_1}\left[1 \right]_{\left\{2 \right\}}\theta-2 & -4\mathrm{i}\hat{F}^-_{z_1\bar{z}_2}\left[1 \right]_{\left\{2 \right\}}\theta \\
-4\mathrm{i}\hat{F}^-_{z_2\bar{z}_1}\left[1 \right]_{\left\{2 \right\}}\theta & 4\mathrm{i}\hat{F}^-_{z_1\bar{z}_1}\left[1 \right]_{\left\{2 \right\}}\theta-2
\end{array}
\right)-\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right) \\
&=\frac{4}{1-4\left\{\frac{4z_2\bar{z}_2}{9\zeta^5}\left(\zeta-z_1\bar{z}_1-z_2\bar{z}_2\right)^2
+\frac{1}{36\zeta^4}\left( 6\zeta- 7z_1\bar{z}_1-4z_2\bar{z}_2\right)^2 \right\}\theta^2} \\
&\times \left\{\frac{1}{2}\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right)-
\frac{\theta}{\zeta}\left(
\begin{array}{cc}
1 & 0 \\
0 & -1
\end{array}
\right)+\frac{2\theta}{3\zeta^{3/2}}\left(
\begin{array}{cc}
0 & z_2 \\
\bar{z}_2 & 0
\end{array}
\right)+\frac{\theta}{6\zeta^2}\left(
\begin{array}{cc}
4z_2\bar{z}_2+7z_1\bar{z}_1 & 0 \\
0 & -4z_2\bar{z}_2-7z_1\bar{z}_1
\end{array}
\right)
\right. \\ &\left.
+\frac{2\theta}{3\zeta^{5/2}}\left(
\begin{array}{cc}
0 & -z_2\left( z_1\bar{z}_1+z_2\bar{z}_2 \right) \\
-\bar{z}_2\left(z_1\bar{z}_1+z_2\bar{z}_2 \right) & 0
\end{array}
\right) \right\}-\left(
\begin{array}{cc}
1 & 0 \\
0 & 1
\end{array}
\right).
\end{align*}
\section{K\"ahler structure and Bianchi identity}\label{sect5}
In this section we discuss the K\"ahler condition on the metric derived from (anti-)self-dual two-forms
of noncommutative $U(1)$ instantons. We will clarify this issue by illuminating
the duality between the K\"ahler geometry and $U(1)$ gauge theory claimed in \cite{inov}.
\subsection{K\"ahler geometry and $U(1)$ gauge theory}
Let $M$ be a two-dimensional complex manifold with a K\"ahler metric
\begin{equation}\label{c-metric}
ds^2 = h_{i\bar{j}} (z, \overline{z}) dz^i d \overline{z}^{{j}},
\end{equation}
where local complex coordinates are given by $z^i = {x}^{2i} + \mathrm{i}~ x^{2i-1}, \; (i= 1, 2)$.
A K\"ahler manifold is described by a single function $K(z, \overline{z})$,
so-called K\"ahler potential, defined by
\begin{equation}\label{k-metric}
h_{i\bar{j}} (z, \overline{z}) = \frac{\partial^2 K(z, \overline{z})}{\partial z^i \partial \overline{z}^{j}}.
\end{equation}
The K\"ahler potential is not unique but admits a K\"ahler transformation given by
\begin{equation}\label{kah-gauge}
K(z, \overline{z}) \to K(z, \overline{z}) + f(z) + \overline{f}(\overline{z})
\end{equation}
where $f(z)$ and $\overline{f}(\overline{z})$ are arbitrary holomorphic and anti-holomorphic functions.
Two K\"ahler potentials related by the K\"ahler gauge transformation \eq{kah-gauge} give rise
to the same K\"ahler metric \eq{k-metric}.
\begin{df}[K\"ahler form \cite{besse}]
Given a K\"ahler metric (\ref{c-metric}), the K\"ahler form is a fundamental closed two-form defined by
\begin{align}
\Omega = \mathrm{i}~ h_{i\bar{j}} (z, \overline{z}) dz^i \wedge d \overline{z}^{j}. \label{ftwo-form}
\end{align}
\end{df}
Note that the K\"ahler form \eq{ftwo-form} can be written as
\begin{equation}\label{k-curvature}
\Omega = d \mathcal{A} \qquad \mathrm{and} \qquad
\mathcal{A} = \frac{\mathrm{i}}{2} (\overline{\partial} - \partial) K (z, \overline{z})
\end{equation}
where the exterior differential operator is given by $d = \partial + \overline{\partial}$ with
$\partial = dz^i \frac{\partial}{\partial z^i}$ and $ \overline{\partial}
= d \overline{z}^{i} \frac{\partial}{\partial \overline{z}^{i}}$.
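The relation $\Omega = d\mathcal{A}$ can be verified symbolically; the SymPy sketch below (an illustration, not part of the original text) works in one complex dimension for brevity, treating $z$ and $\bar{z}$ as independent variables:

```python
import sympy as sp

z, zb = sp.symbols('z zbar')
K = sp.Function('K')(z, zb)       # generic Kaehler potential K(z, zbar)

A_z  = -sp.I/2 * sp.diff(K, z)    # dz-component of (i/2)(dbar - d)K
A_zb =  sp.I/2 * sp.diff(K, zb)   # dzbar-component

# (dA)_{z zbar} should reproduce i K_{z zbar}, i.e. Omega = i h dz ^ dzbar
F_z_zb = sp.diff(A_zb, z) - sp.diff(A_z, zb)
assert sp.simplify(F_z_zb - sp.I*sp.diff(K, z, zb)) == 0
```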
Then the above K\"ahler transformation \eq{kah-gauge} corresponds to a gauge transformation
for the one-form $\mathcal{A}$ given by
\begin{equation}\label{k-gauge}
\mathcal{A} \to \mathcal{A} + d \lambda
\end{equation}
where $\lambda = \frac{\mathrm{i}}{2} \big(\overline{f}(\overline{z}) - f(z) \big)$.
This implies that the one-form $\mathcal{A}$ corresponds to $U(1)$ gauge fields
or a connection of holomorphic line bundle.
Note that the K\"ahler form $\Omega$ on a K\"ahler manifold $M$ is a nondegenerate, closed two-form.
Therefore the K\"ahler form $\Omega$ is a symplectic two-form.
This fact leads to the following proposition:
\begin{prop}\label{kahler-symplectic}
A K\"ahler manifold $(M, \Omega)$ is a symplectic manifold too
although the converse is not necessarily true.
\end{prop}
The K\"ahler condition enforces a specific analytic characterization of K\"ahler metrics:
\begin{lem}\label{normal-kgh}
$ds^2$ is K\"ahler if and only if it osculates to order 2 to the Euclidean metric everywhere.
\end{lem}
\noindent
The proof of this lemma can be found in \cite{griffiths-harris} (Griffiths-Harris, p.~107).
It means that the existence of normal holomorphic coordinates around each
point of $M$ is equivalent to that of K\"ahler metrics.
Let us consider an atlas $\{(U_\alpha, \varphi_\alpha)| \alpha \in I \}$ on
the K\"ahler manifold $M$ and denote the K\"ahler form $\Omega$ restricted
on a chart $(U_\alpha, \varphi_\alpha)$ as $\omega_\alpha \equiv \Omega|_{U_\alpha}$.
According to Lemma \ref{normal-kgh}, it is possible to write the local K\"ahler form as
\begin{equation}\label{kahler-gauge}
\omega_\alpha = B + F_\alpha,
\end{equation}
where $B$ is the K\"ahler form of $\mathbb{C}^2$.
Since the two-form $F_\alpha$ must be closed due to the K\"ahler condition,
it can be represented by $F_\alpha = d A_\alpha$.
Using Eq. \eq{k-curvature} and $F_\alpha = \omega_\alpha - B$,
the one-form $A_\alpha$ on $U_\alpha$ can be written as the form
\begin{equation}\label{local-one-form}
A_\alpha = \frac{\mathrm{i}}{2} (\overline{\partial} - \partial) \phi_\alpha (z, \overline{z})
\end{equation}
where $\phi_\alpha (z, \overline{z}) = K_\alpha (z, \overline{z}) - K_0 (z, \overline{z})$ and
$K_\alpha (z, \overline{z})$ is the K\"ahler potential on a local chart $U_\alpha$ and $K_0 (z, \overline{z})
= z^i \overline{z}^{\bar{i}}$ is the K\"ahler potential of $\mathbb{C}^2$.
On an overlap $U_\alpha \bigcap U_\beta$, two one-forms $A_\alpha$ and $A_\beta$ can be glued
using the freedom \eq{k-gauge} such that
\begin{equation}\label{g-transf}
A_\beta = A_\alpha + d \lambda_{\alpha\beta}
\end{equation}
where $\lambda_{\alpha\beta}(z, \overline{z})$ is a smooth function on the overlap $U_\alpha \bigcap U_\beta$.
The gluing \eq{g-transf} on $U_\alpha \bigcap U_\beta$ is equal to the K\"ahler transformation
\begin{equation}\label{holo-gauge}
K_\beta (z, \overline{z}) = K_\alpha (z, \overline{z})
+ f_{\alpha\beta} (z) + \overline{f}_{\alpha\beta} (\overline{z})
\end{equation}
if $\lambda_{\alpha\beta}(z, \overline{z}) = \frac{\mathrm{i}}{2} \big(\overline{f}_{\alpha\beta}(\overline{z})
- f_{\alpha\beta}(z) \big)$.
\begin{rem}\label{kahler-section}The K\"ahler transformation (\ref{holo-gauge}) implies the relation
\[e^{K_\beta} = | e^{f_{\alpha\beta}}|^2 e^{K_\alpha}.\]
So $e^{K(z, \overline{z})}$ is a section of a nontrivial line bundle over $M$.
\end{rem}
According to Proposition \ref{kahler-symplectic},
the K\"ahler manifold $(M, h)$ is also a symplectic manifold $(M, \Omega)$.
Therefore one can find a coordinate transformation $\varphi_\alpha \in \mathrm{Diff}(U_\alpha)$
on a local coordinate patch $U_\alpha$ such that $\varphi_\alpha^* (B + F) = B$
according to the famous Darboux theorem or Moser lemma in symplectic geometry \cite{symp-book}.
In other words, the electromagnetic fields in the local K\"ahler form \eq{kahler-gauge} can always be
eliminated by a local coordinate transformation.
To be specific, the Darboux theorem ensures the existence of the local coordinate transformation
$\varphi_\alpha: y^\mu \mapsto x^a = x^a (y), \; \mu, a = 1, \cdots, 4$, obeying \cite{hliu,sw-darboux}
\begin{equation}\label{darboux-tr}
\Big (B_{ab} + F_{ab} (x) \Big) \frac{\partial x^a}{\partial y^\mu}
\frac{\partial x^b}{\partial y^\nu}
= B_{\mu\nu}.
\end{equation}
Note that $B_{ab}$ and $B_{\mu\nu}$ are constant since they are coming from the K\"ahler form
of $\mathbb{C}^2 \cong \mathbb{R}^4$ according to (\ref{kahler-gauge}) (see also Lemma \ref{normal-kgh}).
\begin{rem}\label{gravity-darboux}So far the coordinates $x^\mu$ have been commonly used
for both gravity and field theory descriptions since it does not cause any confusion.
However it is convenient to distinguish two kinds of coordinates $(x^a, y^\mu)$
appearing in the Darboux transformation (\ref{darboux-tr}). The so-called Darboux coordinates $y^\mu$
will be used for field theory description while the so-called covariant coordinates $x^a$
will be used for gravity description.
\end{rem}
\begin{df}[Poisson bracket \cite{symp-book}]
Let $\theta := B^{-1} = \frac{1}{2} \theta^{\mu\nu}
\frac{\partial}{\partial {y}^\mu} \bigwedge \frac{\partial}{\partial y^\nu}
\in \Gamma(\Lambda^2 T\mathbb{R}^4)$ be a Poisson bivector.
Then the Poisson bracket $\{~\cdot~, ~\cdot~\}: C^\infty (\mathbb{R}^4) \times C^\infty (\mathbb{R}^4)
\to C^\infty (\mathbb{R}^4)$ is defined by $\{f, g\} = \theta(df, dg)$
for any smooth functions $f, g \in C^\infty (\mathbb{R}^4)$.
\end{df}
Since both sides of Eq. \eq{darboux-tr} are invertible, one can take its inverse
and derive the following relation
\begin{equation}\label{darboux-map1}
\Theta^{ab} (x) := \left(\frac{1}{B + F (x) } \right)^{ab}
= \theta^{\mu\nu} \frac{\partial x^a}{\partial y^\mu}
\frac{\partial x^b}{\partial y^\nu}
= \{ x^a (y), x^b (y)\}
\end{equation}
or
\begin{equation}\label{darboux-map2}
- \left(\frac{1}{1 + F\theta} B \right)_{ab} (x)
= \{\phi_a (y), \phi_b (y)\}
\end{equation}
where $\phi_a (y) := B_{ab} x^b(y)$.
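The relations \eq{darboux-map1} and \eq{darboux-map2} can be checked symbolically for a toy coordinate transformation. The two-dimensional choice $x^1=y^1$, $x^2=y^2+y^1y^2$ in the SymPy sketch below is purely illustrative and not an instanton configuration:

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
ys = (y1, y2)
B = sp.Matrix([[0, 1], [-1, 0]])      # constant symplectic form
theta = B.inv()                       # Poisson tensor theta = B^{-1}

def pb(f, g):
    # Poisson bracket {f, g} = theta^{mu nu} d_mu f d_nu g
    return sum(theta[m, n]*sp.diff(f, ys[m])*sp.diff(g, ys[n])
               for m in range(2) for n in range(2))

# a purely illustrative coordinate transformation x^a(y)
x1, x2 = y1, y2 + y1*y2

# (darboux-map1): Theta^{12}(x) = {x^1, x^2}
Theta12 = sp.simplify(pb(x1, x2))           # = -(1 + y1), and x^1 = y^1 here
Theta = sp.Matrix([[0, Theta12], [-Theta12, 0]])
F = Theta.inv() - B                         # since Theta = (B + F)^{-1}

# (darboux-map2): -((1 + F theta)^{-1} B)_{12} = {phi_1, phi_2}, phi_a = B_{ab} x^b
phi1, phi2 = B[0, 1]*x2, B[1, 0]*x1
lhs = sp.simplify((-(sp.eye(2) + F*theta).inv()*B)[0, 1])
rhs = sp.simplify(pb(phi1, phi2))
assert sp.simplify(lhs - rhs) == 0
```

Both sides evaluate to $-(1+y^1)$, confirming the two forms of the Darboux map for this example.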
Recall that we have started with a K\"ahler manifold with the metric \eq{c-metric} and
applied the Darboux transformation to the local K\"ahler form \eq{kahler-gauge}.
Now, in the description \eq{darboux-map1} or \eq{darboux-map2}, the curving of the K\"ahler manifold
is described by local fluctuations of $U(1)$ gauge fields on the line bundle $L \to \mathbb{R}^4$.
This becomes more manifest by taking the coordinate transformation in Eq. \eq{darboux-tr} as the form
\begin{equation}\label{cov-phi}
\phi_\mu (y) = p_\mu + a_\mu (y)
\end{equation}
and by calculating the Poisson bracket
\begin{equation}\label{poisson-br}
\{\phi_\mu (y), \phi_\nu (y) \}
= -B_{\mu\nu} + \partial_\mu a_\nu(y) - \partial_\nu a_\mu (y)
+ \{a_\mu (y), a_\nu (y) \} \equiv -B_{\mu\nu}
+ f_{\mu\nu} (y).
\end{equation}
The functions $a_\mu (y)$ in the Darboux transformation \eq{cov-phi} will be regarded
as gauge fields whose field strength is given by $f_{\mu\nu} (y) = \partial_\mu a_\nu(y)
- \partial_\nu a_\mu (y) + \{a_\mu (y), a_\nu (y) \}$.\footnote{Here $a_\mu$ is a gauge field of
a new $U(1)$ gauge symmetry with the Poisson structure rather than the original $U(1)$ gauge symmetry.
From the original $U(1)$ gauge theory point of view,
they are local sections of the line bundle $L \to \mathbb{R}^4$.}
Since they carry a non-Abelian structure inherited from the underlying Poisson structure,
they differ from the ordinary $U(1)$ gauge fields $A_\mu (x)$ in \eq{local-one-form};
hence they will be called ``symplectic" $U(1)$ gauge fields.
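The computation \eq{poisson-br} can be verified symbolically. The sketch below works in two dimensions for brevity and assumes $p_\mu = B_{\mu\nu}y^\nu$ (so that $\{p_\mu, p_\nu\} = -B_{\mu\nu}$), with an arbitrary polynomial choice for $a_\mu$:

```python
import sympy as sp

y1, y2 = sp.symbols('y1 y2')
ys = (y1, y2)
B = sp.Matrix([[0, 1], [-1, 0]])
theta = B.inv()

def pb(f, g):
    # Poisson bracket {f, g} = theta^{mu nu} d_mu f d_nu g
    return sum(theta[m, n]*sp.diff(f, ys[m])*sp.diff(g, ys[n])
               for m in range(2) for n in range(2))

# sample (hypothetical) symplectic gauge fields a_mu(y)
a = [y1**2*y2, y1*y2**2]
# p_mu chosen as B_{mu nu} y^nu, so that {p_mu, p_nu} = -B_{mu nu}
p = [sum(B[m, n]*ys[n] for n in range(2)) for m in range(2)]
phi = [p[m] + a[m] for m in range(2)]

# f_{12} = d_1 a_2 - d_2 a_1 + {a_1, a_2}
f12 = sp.diff(a[1], y1) - sp.diff(a[0], y2) + pb(a[0], a[1])
assert sp.simplify(pb(phi[0], phi[1]) - (-B[0, 1] + f12)) == 0
```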
Then Eq. \eq{darboux-map2} leads to the exact Seiberg-Witten map between
commutative $U(1)$ gauge fields and symplectic $U(1)$ gauge fields \cite{sw-ncft,hliu,sw-darboux}:
\begin{equation}\label{esw-map}
f_{\mu\nu} (y) = \left(\frac{1}{1 + F \theta} F \right)_{\mu\nu} (x)
\quad \mathrm{or} \quad F_{\mu\nu} (x) = \left(\frac{1}{1 - f \theta} f \right)_{\mu\nu} (y).
\end{equation}
Thus we obtain the following lemma \cite{hsy-jhep09,hsy-ijmpa09,hsy-ijmpa15}:
\begin{lem}\label{durboux-swmap}
The Darboux transformation $\varphi_\alpha \in \mathrm{Diff}(U_\alpha)$ on a local coordinate patch $U_\alpha$
obeying $\varphi_\alpha^* (B + F) = B$ is equivalent to the Seiberg-Witten map between
commutative $U(1)$ gauge fields and symplectic $U(1)$ gauge fields.
\end{lem}
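The two expressions in \eq{esw-map} are mutual inverses, as a numerical sketch makes explicit. The random antisymmetric matrices below are an illustration only, not gauge-field configurations from the text:

```python
import numpy as np

rng = np.random.default_rng(0)

# random antisymmetric F (field strength) and invertible antisymmetric theta
A = rng.normal(size=(4, 4)); F = A - A.T
Bm = rng.normal(size=(4, 4)); theta = (Bm - Bm.T) * 0.1
I4 = np.eye(4)

# forward map: f = (1 + F theta)^{-1} F
f = np.linalg.solve(I4 + F @ theta, F)
# inverse map: F = (1 - f theta)^{-1} f
F_back = np.linalg.solve(I4 - f @ theta, f)

assert np.allclose(F_back, F)   # the two maps invert each other
assert np.allclose(f, -f.T)     # f stays antisymmetric
```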
The gauge theory description of K\"ahler gravity is realized by viewing a K\"ahler manifold
as a phase space and its K\"ahler form as the symplectic two-form on the phase space \cite{inov}.
This viewpoint naturally leads to a Poisson algebra $\mathfrak{P}=\{C^\infty(\mathbb{R}^4), \theta \}$
associated with the K\"ahler geometry we have started with.
The underlying Poisson structure is inherited from the symplectic structure, i.e.
$\theta = B^{-1} \in \Gamma(\Lambda^2 T\mathbb{R}^4)$,
which is a bivector field called the Poisson tensor.
\subsection{K\"ahler metric and Bianchi identity}
Recall that the Seiberg-Witten map \eq{esw-map} has been derived from the
local K\"ahler form \eq{kahler-gauge}.
With the identification $\omega^\pm = f^\pm$ and using the map \eq{esw-map},
the metric $g^\pm$ in the Definition \ref{g-metric} can be written as
\begin{equation}\label{sw-metg}
g^\pm = 2 F^\pm \theta^\mp + E_4
\end{equation}
which can be inverted to yield
\begin{equation}\label{sw-comf}
F^\pm = \frac{1}{2} (g^\pm - E_4) (\theta^\mp)^{-1}.
\end{equation}
Now we will prove the following proposition \cite{Lee:2012rb}.
\begin{prop}\label{kal-bianchi}
Let $F$ be a two-form in (\ref{sw-comf}).
Then the K\"ahler condition for the metric $g$ in (\ref{sw-metg}) is equivalent to
the Bianchi identity for the $U(1)$ curvature $f$.
\end{prop}
\begin{pf}
First note that the K\"ahler condition for the metric $g$ in (\ref{sw-metg}) is the closedness
of the fundamental two-form $\omega = B + F$, which is equivalent to $d F = 0$.
Consider the Jacobi identity
\begin{equation}\label{x-jacobi}
\{ x^a, \{ x^b, x^c \} \}
+ \{ x^b, \{ x^c, x^a \} \}
+ \{ x^c, \{ x^a, x^b \} \} = 0
\end{equation}
that is equivalent to the Bianchi identity of symplectic $U(1)$ gauge fields
\begin{equation}\label{g-bianchi}
D_a f_{bc} + D_b f_{ca} + D_c f_{ab} = 0,
\end{equation}
where $D_a f_{bc} =\partial_a f_{bc}+ \left\{a_a,f_{bc} \right\}$.
Using Eq. \eq{darboux-map1}, let us rewrite the Jacobi identity \eq{x-jacobi} as
\begin{eqnarray}\label{x-bianchi}
0 &=& \{ x^a, \Theta^{bc} (x) \}_\theta
+ \{ x^b, \Theta^{ca} (x) \}_\theta
+ \{ x^c, \Theta^{ab} (x) \}_\theta \nonumber\\
&=& \theta^{\mu\nu}
\left( \frac{\partial x^a}{\partial y^\mu} \frac{\partial \Theta^{bc} (x)}
{\partial y^\nu}
+ \frac{\partial x^b}{\partial y^\mu}
\frac{\partial \Theta^{ca} (x)}{\partial y^\nu}
+ \frac{\partial x^c}{\partial y^\mu}
\frac{\partial \Theta^{ab} (x)}{\partial y^\nu} \right) \nonumber\\
&=& \theta^{\mu\nu}
\left( \frac{\partial x^a}{\partial y^\mu} \frac{\partial x^d}{\partial y^\nu}
\frac{\partial \Theta^{bc} (x)}{\partial x^d}
+ \frac{\partial x^b}{\partial y^\mu} \frac{\partial x^d}{\partial y^\nu}
\frac{\partial \Theta^{ca} (x)}{\partial x^d}
+ \frac{\partial x^c}{\partial y^\mu} \frac{\partial x^d}{\partial y^\nu}
\frac{\partial \Theta^{ab} (x)}{\partial x^d} \right) \nonumber\\
&=& \{ x^a, x^d \}_\theta \frac{\partial \Theta^{bc} (x)}{\partial x^d}
+ \{ x^b, x^d \}_\theta \frac{\partial \Theta^{ca} (x)}{\partial x^d}
+ \{ x^c, x^d \}_\theta \frac{\partial \Theta^{ab} (x)}{\partial x^d} \nonumber\\
&=& \Theta^{ad} (x) \frac{\partial \Theta^{bc} (x)}{\partial x^d}
+ \Theta^{bd} (x) \frac{\partial \Theta^{ca} (x)}{\partial x^d}
+ \Theta^{cd} (x) \frac{\partial \Theta^{ab} (x)}{\partial x^d} \nonumber \\
&=& - \Theta^{ad} \Theta^{be} \Theta^{fc} \left( \frac{\partial F_{ef} (x)}
{\partial x^d}
+ \frac{\partial F_{fd} (x)}{\partial x^e}
+ \frac{\partial F_{de} (x)}{\partial x^f} \right).
\end{eqnarray}
Since $\Theta^{ab}$ is invertible, we get from \eq{x-bianchi} the Bianchi identity
for the $U(1)$ curvature $F$, i.e.,
\begin{equation}\label{u1-bianchi}
\frac{\partial F_{bc} (x)}{\partial x^a}
+ \frac{\partial F_{ca} (x)}{\partial x^b}
+ \frac{\partial F_{ab} (x)}{\partial x^c} = 0
\qquad \Longleftrightarrow \qquad dF=0.
\end{equation}
The same argument shows that the converse is also true: if $dF=0$,
then the Bianchi identity (\ref{g-bianchi}) follows. This completes the proof.
\qed\end{pf}
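The equivalence in the proof can be illustrated numerically: for a toy closed two-form $F=dA$ the Jacobiator of $\Theta=(B+F)^{-1}$ vanishes, while for a non-closed $F$ it does not. The sketch below (with hypothetical field configurations and finite-difference derivatives) is an illustration, not a substitute for the proof:

```python
import numpy as np

# constant symplectic form B on R^4
B = np.array([[0., 1, 0, 0],
              [-1, 0, 0, 0],
              [0, 0, 0, 1],
              [0, 0, -1, 0]])

def F_closed(x):
    # F = dA for A = (0, x^0 x^2, 0, 0), hence dF = 0 by construction
    F = np.zeros((4, 4))
    F[0, 1] = x[2]
    F[2, 1] = x[0]
    return F - F.T

def F_not_closed(x):
    # only F_{01} = x^2: the {0,1,2}-component of dF equals 1, so dF != 0
    F = np.zeros((4, 4))
    F[0, 1] = x[2]
    return F - F.T

def jacobiator(Ffun, x, a, b, c, h=1e-6):
    # cyclic sum Theta^{ad} d_d Theta^{bc} + ... with Theta = (B + F)^{-1},
    # derivatives taken by central finite differences
    T = np.linalg.inv(B + Ffun(x))
    def dT(d):
        xp, xm = x.copy(), x.copy()
        xp[d] += h
        xm[d] -= h
        return (np.linalg.inv(B + Ffun(xp)) - np.linalg.inv(B + Ffun(xm))) / (2*h)
    return sum(T[a, d]*dT(d)[b, c] + T[b, d]*dT(d)[c, a] + T[c, d]*dT(d)[a, b]
               for d in range(4))

x0 = np.array([0.3, 0.7, -0.2, 0.5])
assert abs(jacobiator(F_closed, x0, 0, 1, 3)) < 1e-6      # dF = 0  =>  Jacobi holds
assert abs(jacobiator(F_not_closed, x0, 0, 1, 3)) > 1e-3  # dF != 0 =>  Jacobi fails
```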
If one introduces a new bivector $\Theta = \frac{1}{2} \Theta^{ab} (x) \frac{\partial}{\partial x^a}
\bigwedge \frac{\partial}{\partial x^b} \in \Gamma(\Lambda^2 T\mathbb{R}^4)$ using the Poisson tensor
in \eq{darboux-map1},
Eq. \eq{x-bianchi} shows that the Schouten-Nijenhuis bracket of
the bivector $\Theta \in \Gamma(\Lambda^2 T\mathbb{R}^4)$ identically vanishes, i.e.,
$[\Theta, \Theta]_{SN} = 0$ \cite{vaisman}.
This means that the bivector $\Theta$ defines a new Poisson structure on $\mathbb{R}^4 \cong \mathbb{C}^2$.
We thus see that the Bianchi identity for symplectic $U(1)$ gauge fields leads to the Bianchi identity of commutative
$U(1)$ gauge fields and vice versa. Since the Bianchi
identity \eq{u1-bianchi} can be understood as the K\"ahler condition
for the local K\"ahler form \eq{kahler-gauge}, the Hermitian-Einstein metrics defined
by $g = \omega \cdot J$ must be K\"ahler.
Let us quantize the Poisson algebra $\mathfrak{P}$ to get a noncommutative algebra and
a corresponding noncommutative $U(1)$ gauge theory. We apply the deformation quantization $\mathcal{Q}$
in Appendix A and define the quantization map for symplectic $U(1)$ gauge fields \cite{ly-jkps2018}:
\begin{eqnarray}\label{q-map}
&& \mathcal{Q} (\phi_\mu) := \widehat{\phi}_\mu (y) = p_\mu + \widehat{A}_\mu (y), \nonumber \\
&& \mathcal{Q} (\{\phi_\mu, \phi_\nu \}) := - i [\widehat{\phi}_\mu (y), \widehat{\phi}_\nu (y)]
= -i \big(-B_{\mu\nu} + \widehat{F}_{\mu\nu} (y) \big),
\end{eqnarray}
where $\mathcal{Q} (f_{\mu\nu}) :=
\widehat{F}_{\mu\nu} (y) = \partial_\mu \widehat{A}_\nu (y)
- \partial_\nu \widehat{A}_\mu (y) - i [\widehat{A}_\mu (y), \widehat{A}_\nu (y)]$ is the field strength of
noncommutative $U(1)$ gauge fields $\widehat{A}_\mu (y) := \mathcal{Q} (a_\mu)$.
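At leading orders the quantization map replaces the Poisson bracket by the Moyal commutator. A small SymPy sketch (truncating the star product at second order in $\theta$, with illustrative polynomial functions) checks that $-\mathrm{i}[f,g]_\star$ reproduces $\{f,g\}$ through this order, since the even-order terms cancel in the commutator:

```python
import itertools
import sympy as sp

y1, y2, t = sp.symbols('y1 y2 t')
theta = sp.Matrix([[0, -t], [t, 0]])   # noncommutative parameter theta^{mu nu}
ys = (y1, y2)

def star(f, g, order=2):
    # Moyal product: sum_n (i/2)^n/n! theta^{m1 n1}...theta^{mn nn}
    #                d_{m1..mn} f  d_{n1..nn} g, truncated at `order`
    total = sp.expand(f*g)
    for n in range(1, order + 1):
        for idx in itertools.product(range(2), repeat=2*n):
            mus, nus = idx[:n], idx[n:]
            coeff = sp.Mul(*[theta[mus[k], nus[k]] for k in range(n)])
            total += ((sp.I/2)**n / sp.factorial(n) * coeff
                      * sp.diff(f, *[ys[m] for m in mus])
                      * sp.diff(g, *[ys[m] for m in nus]))
    return sp.expand(total)

f, g = y1**2*y2, y1*y2**2
poisson = sum(theta[m, n]*sp.diff(f, ys[m])*sp.diff(g, ys[n])
              for m in range(2) for n in range(2))
comm = sp.expand(-sp.I*(star(f, g) - star(g, f)))
assert sp.simplify(comm - poisson) == 0
```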
After quantization, the symplectic $U(1)$ gauge fields map to noncommutative $U(1)$ gauge fields
which contain infinitely many derivative corrections controlled
by the noncommutative parameter $\theta^{\mu\nu}$. For example,
the Seiberg-Witten map (\ref{esw-map}) receives noncommutative corrections
and takes a non-local form whose exact form was conjectured in \cite{hliu}:
\begin{equation}\label{esw-liu}
F_{\mu\nu} (k) = \int d^4 y L_* \left[ \sqrt{1- \theta \widehat{F}}
\left( \frac{1}{1-\widehat{F}\theta} \widehat{F} \right)_{\mu\nu} (y) W(y, C) \right] e^{ik \cdot y},
\end{equation}
where $W(y, C)$ is a straight open Wilson line, the determinant and rational function
of $\widehat{F}$ should be understood as a power series expansion,
and $L_*$ denotes the integrations together with the path ordering procedure.
The conjectured form (\ref{esw-liu}) was immediately proved in \cite{Okawa:2001mv,nc-openW}.
In a commutative limit where the derivatives of the field strength can be ignored, the map (\ref{esw-liu})
is reduced to the second form in (\ref{esw-map}).
An immediate question arises about the status of Proposition \ref{kal-bianchi}
after (deformation) quantization. Let us state the result as the following proposition.
\begin{prop}\label{cnc-bianchi}
Let $F$ be a two-form in (\ref{esw-liu}).
Then the closedness condition for the commutative $U(1)$ curvature $F$, $dF=0$,
is equivalent to the Bianchi identity for the noncommutative $U(1)$ curvature $\widehat{F}$.
\end{prop}
This proposition was proved in \cite{Okawa:2001mv}, which established the conjecture of H. Liu.
Theorem \ref{masspro} implies that the Hermitian metric $h^\pm$ in (\ref{masspro}) constructed
by the identification $\omega^\pm = \widehat{F}^\pm$ still generates a Ricci-flat metric.
Therefore Proposition \ref{kal-bianchi} may be lifted to noncommutative spaces although
we do not have a rigorous proof yet.
\section{Discussion}\label{sect6}
We have shown that the K\"ahler geometry can be described by a $U(1)$ gauge theory on a symplectic manifold
leading to a natural Poisson algebra associated with the K\"ahler geometry we have started with.
Since the Poisson algebra $\mathfrak{P}$ defined by the Poisson bracket
$\{f,g\} = \theta(df, dg)$ is mathematically
the same as the one in the Hamiltonian dynamics of particles, one can quantize the Poisson algebra
in exactly the same way as in quantum mechanics. Hence we have applied the deformation quantization
to the Poisson algebra $\mathfrak{P} = (C^\infty (\mathbb{R}^4), \{-, -\})$.
The quantization of the underlying Poisson algebra leads to a noncommutative
$U(1)$ gauge theory which arguably describes a quantized K\"ahler geometry,
as claimed in \cite{inov} and illuminated in \cite{ly-jkps2018}.
Then we get a remarkable duality between K\"ahler gravity and noncommutative $U(1)$ gauge theory
depicted by the following flow chart \cite{ly-jkps2018}:
\begin{equation} \label{q-diag}
\begin{array}[c]{ccc}
\mathrm{K\ddot{a}hler~gravity}&\stackrel{\mathfrak{I}^{-1}_\epsilon}{\longrightarrow}&
\mathrm{Symplectic~{\it U(1)}~gauge~theory }\\
{\mathcal{Q}}\downarrow\scriptstyle&&\downarrow{\mathcal{Q}}\scriptstyle\\
\mathrm{Quantized~K\ddot{a}hler~gravity} &\stackrel{\mathfrak{I}_\theta}{\longleftarrow}&
\mathrm{Noncommutative~{\it U(1)}~gauge~theory }
\end{array}
\end{equation}
Here $\mathcal{Q}: C^\infty (\mathbb{R}^4) \to \mathcal{A}_\theta$ means the quantization
and $\mathfrak{I}$ means an isomorphism between two theories.
In some sense $\mathfrak{I}$ corresponds to the gauge-gravity duality. It turns out \cite{ly-jkps2018} that
it can be interpreted as the large $N$ duality too. Since symplectic $U(1)$ gauge theory is
a commutative limit of noncommutative $U(1)$ gauge theory, we understand the classical isomorphism in \eq{q-diag}
as $\mathfrak{I}_\epsilon = \mathfrak{I}_\theta|_{\epsilon = |\theta| \to 0}$.
The duality in \eq{q-diag} implies that a quantized K\"ahler gravity is isomorphically described by
a noncommutative $U(1)$ gauge theory. Actually this relation was already observed in \cite{inov}
in the context of topological strings probing K\"ahler manifolds where
several nontrivial pieces of evidence were analyzed to support this picture.
In particular, the authors in \cite{inov} argue that
noncommutative $U(1)$ gauge theory is the fundamental description of K\"ahler gravity at all scales including
the Planck scale and provides a quantum gravity description such as quantum gravitational foams.
The duality in \cite{inov} has been further clarified in \cite{neova-kap} by showing that it follows
from the S-duality of the type IIB superstring.
This duality, if it holds, offers an important clue about how to quantize K\"ahler gravity.
Surprisingly, the correct variables for quantization are not metric fields but dynamical coordinates
$x^a(y)$ and their quantization is defined in terms of $\alpha'$
rather than $\hbar$. So far, there is no well-established way to quantize metric fields directly
in terms of $\hbar$, despite the impressive developments in loop quantum gravity.
However, the picture in \eq{q-diag} suggests a completely new quantization scheme
where quantum gravity is defined by quantizing spacetime itself in terms of $\alpha'$,
leading to a dynamical noncommutative spacetime described
by a noncommutative $U(1)$ gauge theory \cite{ly-jkps2018}.
The duality relation in \eq{q-diag} may be more accessible with the corresponding relation for solutions
of the self-duality equation, i.e., $U(1)$ instantons. Indeed it was shown in \cite{gi-u1-prl,gi-u1-plb,gi-u1-epl} that the commutative limit of noncommutative $U(1)$ instantons
is equivalent to Calabi-Yau manifolds.
{\bf Acknowledgments} \\
A.S. was supported in part by JSPS
KAKENHI Grant Number 16K05138.
The work of H.S.Y. was supported by the National Research Foundation of Korea (NRF) grant funded
by the Korea government (MOE) (No. NRF-2015R1D1A1A01059710) and (No. NRF-2018R1D1A1B07050113).
\section{Introduction}
This bundle provides two classfiles, namely \verb+cas-sc.cls+ and
\verb+cas-dc.cls+ and corresponding template files for typesetting
journal articles supposed to go through Elsevier's updated workflow.
\verb+cas-sc.cls+ is meant for one-column, the other
\verb+cas-dc.cls+ for two-column layout. These are now accepted for
submitting articles both in Elsevier's electronic submission system and
elsewhere.
\subsection{Usage}
\begin{enumerate}
\item \verb+cas-sc.cls+ for single column journals.
\begin{vquote}
\documentclass[<options>]{cas-sc}
\end{vquote}
\item \verb+cas-dc.cls+ for double column journals.
\begin{vquote}
\documentclass[<options>]{cas-dc}
\end{vquote}
\end{enumerate}
Both classes have an option \verb+longmktitle+ to handle long front matter.
\section{Front matter}
\begin{vquote}
\title [mode = title]{This is a specimen $a_b$ title}
\tnotemark[1,2]
\tnotetext[1]{This document is the results of the research
project funded by the National Science Foundation.}
\tnotetext[2]{The second title footnote which is a longer text
matter to fill through the whole text width and overflow
into another line in the footnotes area of the first page.}
\end{vquote}
\begin{vquote}
\author[1,3]{J.K. Krishnan}[type=editor,
auid=000,bioid=1,
prefix=Sir,
role=Researcher,
orcid=0000-0001-0000-0000]
\cormark[1]
\fnmark[1]
\ead{jkk@example.in}
\ead[url]{www.jkkrishnan.in}
\credit{Conceptualization of this study,
Methodology, Software}
\affiliation[1]{organization={Department of Physics,
J.K. Institute of Science},
addressline={Jawahar Nagar},
city={Trivandrum},
postcode={695013},
state={Kerala},
country={India}}
\author[2,4]{Han Thane}[style=chinese]
\author[2,3]{William {J. Hansen}}[%
role=Co-ordinator,
suffix=Jr,
]
\fnmark[2]
\ead{wjh@example.org}
\ead[URL]{https://www.university.org}
\credit{Data curation, Writing - Original draft preparation}
\end{vquote}
\begin{vquote}
\affiliation[2]{organization={World Scientific University},
addressline={Street 29},
postcode={1011 NX},
postcodesep={},
city={Amsterdam},
country={The Netherlands}}
\author[1,3]{T. Rafeeq}
\cormark[2]
\fnmark[1,3]
\ead{t.rafeeq@example.in}
\ead[URL]{www.campus.in}
\affiliation[3]{organization={University of Intelligent
Studies},
addressline={Street 15},
city={Jabaldesh},
postcode={825001},
state={Orissa},
country={India}}
\cortext[cor1]{Corresponding author}
\cortext[cor2]{Principal corresponding author}
\fntext[fn1]{This is the first author footnote, but is common
to third author as well.}
\fntext[fn2]{Another author footnote, this is a very long
footnote and it should be a really long footnote. But this
footnote is not yet sufficiently long enough to make two
lines of footnote text.}
\nonumnote{This note has no numbers. In this work we
demonstrate $a_b$ the formation Y\_1 of a new type of
polariton on the interface between a cuprous oxide slab
and a polystyrene micro-sphere placed on the slab.
}
\end{vquote}
\begin{vquote}
\begin{abstract}[S U M M A R Y]
This template helps you to create a properly formatted
\LaTeX\ manuscript.
\begin{abstract} ... \end{abstract} and \begin{keyword}
... \end{keyword} which contain the abstract and keywords
respectively. Each keyword shall be separated by
a \sep command.
\end{abstract}
\begin{keywords}
quadrupole exciton \sep polariton \sep WGM \sep BEC
\end{keywords}
\maketitle
\end{vquote}
\subsection{Title}
The \verb+\title+ command has the following options:
\begin{enumerate}
\item \verb+title:+ Document title
\item \verb+alt:+ Alternate title
\item \verb+sub:+ Sub title
\item \verb+trans:+ Translated title
\item \verb+transsub:+ Translated sub title
\end{enumerate}
\begin{vquote}
\title[mode=title]{This is a title}
\title[mode=alt]{This is a alternate title}
\title[mode=sub]{This is a sub title}
\title[mode=trans]{This is a translated title}
\title[mode=transsub]{This is a translated sub title}
\end{vquote}
\subsection{Author}
The \verb+\author+ command has the following options:
\begin{enumerate}
\item \verb+auid:+ Author id
\item \verb+bioid:+ Biography id
\item \verb+alt:+ Alternate author
\item \verb+style:+ Style of author name, eg.\ chinese
\item \verb+prefix:+ Prefix, eg.\ Sir
\item \verb+suffix:+ Suffix
\item \verb+degree:+ Degree
\item \verb+role:+ Role
\item \verb+orcid:+ ORCID
\item \verb+collab:+ Collaboration
\item \verb+anon:+ Anonymous author
\item \verb+deceased:+ Deceased author
\item \verb+twitter:+ Twitter account
\item \verb+facebook:+ Facebook account
\item \verb+linkedin:+ LinkedIn account
\item \verb+plus:+ Google plus account
\item \verb+gplus:+ Google plus account
\end{enumerate}
\begin{vquote}
\author[1,3]{Author Name}[type=editor,
auid=000,bioid=1,
prefix=Sir,
role=Researcher,
orcid=0000-0001-0000-0000,
facebook=<facebook id>,
twitter=<twitter id>,
linkedin=<linkedin id>,
gplus=<gplus id>]
\end{vquote}
\begin{figure}
\includegraphics[width=\textwidth]{sc-sample.pdf}
\caption{Single column output (classfile: cas-sc.cls).}
\end{figure}
\begin{figure}
\includegraphics[width=\textwidth]{dc-sample.pdf}
\caption{Double column output (classfile: cas-dc.cls).}
\end{figure}
\subsection{Various Marks in the Front Matter}
The front matter becomes complicated due to various kinds
of notes and marks to the title and author names. Marks in
the title will be denoted by a star ($\star$) mark;
footnotes are denoted by superscripted Arabic numerals, and the
corresponding author by an asterisk (*) mark.
\subsubsection{Title marks}
Title mark can be entered by the command, \verb+\tnotemark[<num>]+
and the corresponding text can be entered with the command
\verb+\tnotetext[<num>]+ \verb+{<text>}+. An example will be:
\begin{vquote}
\title[mode=title]{Leveraging social media news to predict
stock index movement using RNN-boost}
\tnotemark[1,2]
\tnotetext[1]{This document is the results of the research
project funded by the National Science Foundation.}
\tnotetext[2]{The second title footnote which is a longer
text matter to fill through the whole text width and
overflow into another line in the footnotes area of
the first page.}
\end{vquote}
\verb+\tnotemark+ and \verb+\tnotetext+ can be anywhere in
the front matter, but should be before \verb+\maketitle+ command.
\subsubsection{Author marks}
Author names can have many kinds of marks and notes:
\begin{vquote}
footnote mark : \fnmark[<num>]
footnote text : \fntext[<num>]{<text>}
affiliation mark : \author[<num>]
email : \ead{<emailid>}
url : \ead[url]{<url>}
corresponding author mark : \cormark[<num>]
corresponding author text : \cortext[<num>]{<text>}
\end{vquote}
\subsubsection{Other marks}
At times, authors want footnotes which leave no marks in
the author names. The note text shall be listed as part of
the front matter notes. The class files provide
\verb+\nonumnote+ for this purpose. The usage is:
\begin{vquote}
\nonumnote{<text>}
\end{vquote}
\noindent and should be entered anywhere before the \verb+\maketitle+
command for this to take effect.
\subsection{Abstract and Keywords}
Abstract shall be entered in an environment that starts
with\break \verb+\begin{abstract}+ and ends with
\verb+\end{abstract}+. Longer abstracts spanning more than
one page are also possible in the class file, even in double
column mode. We need to invoke the \verb+longmktitle+ option in the
class loading line for this to happen smoothly.
The key words are enclosed in a \verb+{keyword}+
environment.
\begin{vquote}
\begin{abstract}
This is an abstract. \lipsum[3]
\end{abstract}
\begin{keywords}
First keyword \sep Second keyword \sep Third
keyword \sep Fourth keyword
\end{keywords}
\end{vquote}
\section{Main Matter}
Main matter contains sections, paragraphs, equations and floats like
tables, figures, textboxes etc.
\subsection{Tables}
\subsubsection{Normal tables}
\begin{vquote}
\begin{table}
\caption{This is a test caption.}
\begin{tabular*}{\tblwidth}{@{} LLLL@{} }
\toprule
Col 1 & Col 2\\
\midrule
12345 & 12345\\
12345 & 12345\\
12345 & 12345\\
12345 & 12345\\
12345 & 12345\\
12345 & 12345\\
\bottomrule
\end{tabular*}
\end{table}
\end{vquote}
\subsubsection{Span tables}
\begin{vquote}
\begin{table*}[width=.9\textwidth,cols=4,pos=h]
\caption{This is a test caption.}
\begin{tabular*}{\tblwidth}{@{} LLLLLL@{} }
\toprule
Col 1 & Col 2 & Col 3 & Col4 & Col5 & Col6 & Col7\\
\midrule
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
12345 & 12345 & 123 & 12345 & 123 & 12345 & 123 \\
\bottomrule
\end{tabular*}
\end{table*}
\end{vquote}
\subsection{Figures}
\subsubsection{Normal figures}
\begin{vquote}
\begin{figure}
\centering
\includegraphics[scale=.75]{Fig1.pdf}
\caption{The evanescent light - $1S$ quadrupole coupling.
See also Fig. \protect\ref{FIG:2}).}
\label{FIG:1}
\end{figure}
\end{vquote}
\subsubsection{Span figures}
\begin{vquote}
\begin{figure*}
\centering
\includegraphics[width=\textwidth,height=2in]{Fig2.pdf}
\caption{Schematic of formation of the evanescent polariton on
linear chain of \PMS.}
\label{FIG:2}
\end{figure*}\end{vquote}
\subsection{Theorem and theorem like environments}
CAS class file provides a few hooks to format theorems and
theorem like environments with ease. All the options that
are used with the \verb+\newtheorem+ command will work in
exactly the same manner. The class file provides three
commands to format theorem or theorem like environments:
\begin{enumerate}
\item \verb+\newtheorem+ command formats a theorem in
\LaTeX's default style with italicized font for theorem
statement, bold weight for theorem heading and theorem
number typeset at the right of theorem heading. It also
optionally accepts an argument which will be printed as an
extra heading in parentheses. Here is an example coding and
output:
\begin{vquote}
\newtheorem{theorem}{Theorem}
\begin{theorem}\label{thm}
The \WGM evanescent field penetration depth into the
cuprous oxide adjacent crystal is much larger than the
\QE radius:
\begin{equation*}
\lambda_{1S}/2 \pi \left({\epsilon_{Cu2O}-1}
\right)^{1/2} = 414 \mbox{ \AA} \gg a_B = 4.6
\mbox{ \AA}
\end{equation*}
\end{theorem}
\end{vquote}
\item \verb+\newdefinition+ command does exactly the same
thing as \verb+\newtheorem+ except that the body font is
upright instead of italic. See the example below:
\begin{vquote}
\newdefinition{definition}{Definition}
\begin{definition}
The bulk and evanescent polaritons in cuprous oxide
are formed through the quadrupole part of the light-matter
interaction:
\begin{equation*}
H_{int} = \frac{i e }{m \omega_{1S}} {\bf E}_{i,s}
\cdot {\bf p}
\end{equation*}
\end{definition}
\end{vquote}
\item \verb+\newproof+ command helps to define proof and
custom proof environments without counters as provided in
the example code. Given below is an example of proof of
theorem kind.
\begin{vquote}
\newproof{pot}{Proof of Theorem \ref{thm}}
\begin{pot}
The photon part of the polariton trapped inside the \PMS
moves as it would move in a micro-cavity of the effective
modal volume $V \ll 4 \pi r_{0}^{3} /3$. Consequently, it
can escape through the evanescent field. This evanescent
field essentially has a quantum origin and is due to
tunneling through the potential caused by dielectric
mismatch on the \PMS surface. Therefore, we define the
\emph{evanescent} polariton (\EP) as an evanescent light -
\QE coherent superposition.
\end{pot}
\end{vquote}
\end{enumerate}
\subsection{Enumerated and Itemized Lists}
CAS class files provide extended list processing macros
which make the usage a bit more user friendly than the
default \LaTeX\ list macros. With an optional argument to the
\verb+\begin{enumerate}+ command, you can change the list
counter type and its attributes. You can see the coding and
typeset copy.
\begin{vquote}
\begin{enumerate}[1.]
\item The enumerate environment starts with an optional
argument `1.' so that the item counter will be suffixed
by a period as in the optional argument.
\item If you provide a closing parenthesis to the number in the
optional argument, the output will have closing
parenthesis for all the item counters.
\item You can use `(a)' for alphabetical counter and `(i)' for
roman counter.
\begin{enumerate}[a)]
\item Another level of list with alphabetical counter.
\item One more item before we start another.
\begin{enumerate}[(i)]
\item This item has roman numeral counter.
\end{vquote}
\begin{vquote}
\item Another one before we close the third level.
\end{enumerate}
\item Third item in second level.
\end{enumerate}
\item All list items conclude with this step.
\end{enumerate}
\section{Biography}
The \verb+\bio+ command has the following options:
\begin{enumerate}
\item \verb+width:+ Width of the author photo (default is 1in).
\item \verb+pos:+ Position of author photo.
\end{enumerate}
\begin{vquote}
\bio[width=10mm,pos=l]{tuglogo.jpg}
\textbf{Another Biography:}
Recent experimental \cite{HARA:2005} and theoretical
\cite{DEYCH:2006} studies have shown that the \WGM can travel
along the chain as "heavy photons". Therefore the \WGM
acquires the spatial dispersion, and the evanescent
quadrupole polariton has the form (See Fig.\ref{FIG:3}):
\endbio
\end{vquote}
\section[CRediT...]{CRediT authorship contribution statement}
Give the authorship contribution after each author as
\begin{vquote}
\credit{Conceptualization of this study, Methodology,
Software}
\end{vquote}
To print the details use \verb+\printcredits+
\begin{vquote}
\author[1,3]{J.K. Krishnan}[type=editor,
auid=000,bioid=1,
prefix=Sir,
role=Researcher,
orcid=0000-0001-0000-0000]
\end{vquote}
\begin{vquote}
\cormark[1]
\fnmark[1]
\ead{jkk@example.in}
\ead[url]{www.jkkrishnan.in}
\credit{Conceptualization of this study, Methodology, Software}
\affiliation[1]{organization={Department of Physics,
J.K. Institute of Science},
addressline={Jawahar Nagar},
city={Trivandrum},
postcode={695013},
state={Kerala},
country={India}}
\author[2,4]{Han Thane}[style=chinese]
\author[2,3]{William {J. Hansen}}[%
role=Co-ordinator,
suffix=Jr,
]
\fnmark[2]
\ead{wjh@example.org}
\ead[URL]{https://www.university.org}
\credit{Data curation, Writing - Original draft preparation}
. . .
. . .
. . .
\printcredits
\end{vquote}
\section{Bibliography}
For CAS categories, two reference models are recommended.
They are \file{model1-num-names.bst} and \file{cas-model2-names.bst}.
The former will format the reference list and its citations according to
a numbered scheme, whereas the latter will format them according to a
name-date or author-year style. Authors are requested to choose one of
these according to the journal style. The above bst files are available
in the following location for you to download:
\url{https://support.stmdocs.in/wiki/index.php?title=Model-wise_bibliographic_style_files}
\hfill $\Box$
\end{document}
\section{Introduction}\label{intro}
Biometrics technology uses various physiological characteristics, such as faces, fingerprints, DNA, and iris, to identify or recognize a person. However, most of them require his or her cooperation, e.g. taking a facial picture in high resolution or fingerprints by a fingerprinting technician. Gait, a person's pattern of walking, is one of the biometric modalities that can be collected easily even using a low-resolution camera over a long distance. Also, a person's gait pattern is hard to fake. Therefore, gait has been one of the most important biometrics technologies widely used in video surveillance systems.
While gait can be captured by different devices, such as video cameras or motion sensors, we focus on video-based gait recognition in this work. The inputs of most video-based gait recognition algorithms are human silhouette sequences (silhouette-based gait recognition) or human skeleton sequences (skeleton-based gait recognition) which are detected from people walking videos. The performance of gait recognition models can be sensitive to two factors: original gait diversity from the scenes where gait videos are captured, and the human body silhouette segmentation (or skeleton detection) methods. For the first one, people may be walking with coats or carrying items, the video cameras could be in different views, there could also be clutter in the scenes, etc. The second factor comes from the data preprocessing stage of gait recognition models, whose effects can be reduced by the recent developments in human body silhouette segmentation (and human body skeleton detection) research. All these complex factors make gait recognition more challenging.
In the past two decades, lots of research studies have been conducted to solve challenges in gait recognition\cite{wan2018survey,kusakunniran2020review,rida2019robust,sepas2021deep}. Several gait datasets were collected, including the well-known CASIA-B\cite{CASIA} and OU-MVLP\cite{MVLP}. Some challenging factors for gait recognition, such as carrying, dressing, and different views, are considered in these gait datasets. Also, plenty of gait recognition models were developed, ranging from non-deep methods to the recent deep learning-based networks. Recently, the two most popular classes of gait recognition models are the appearance-based (silhouette-based) models and the model-based models, which use human silhouettes and human poses as input, respectively.
The silhouette-based models have been studied extensively and achieved state-of-the-art results on most gait datasets through the introduction of several significant methods. In 2016, K.Shiraga et al. proposed a gait recognition model named GEINet using a convolutional neural network, which yielded accuracy two times better than past models. GEINet \cite{shiraga2016geinet} was one of the first silhouette-based models using deep learning-based networks. Since then, the performance of silhouette-based models has increased sharply. Most new models focused on extracting both the spatial information and temporal information of a gait sequence. GaitSet\cite{chao2021gaitset,chao2019gaitset} is the first silhouette-based model which regards gait as a set to extract temporal information. Then B.Lin et al. used a multiple-temporal-scale 3D CNN to combine spatial-temporal features at both small and large temporal scales\cite{lin2020gait}. Recently, T. Chai et al. developed the state-of-the-art silhouette-based model Vi-GaitGL \cite{chai2021silhouette}, which uses the multi-task learning method with GaitGL as the backbone.
Compared with silhouette-based models, skeleton-based models have several advantages. Firstly, human skeletons can be extracted from images or videos more easily. Secondly, human skeletons consist of several key points, that are convenient for data storage and transformation. Thirdly, human skeletons are free from redundant features such as hairstyle, which makes the skeleton-based model more robust. Great improvement in skeleton-based models has been observed in recent years. In 2019, R.Liao et al. proposed the PoseGait\cite{liao2020model} which uses estimated human 3D poses as inputs, while a simple CNN was applied to get Spatio-temporal features. In 2021, T.Teepe et al. proposed the GaitGraph\cite{teepe2021gaitgraph} which uses ResGCN\cite{song2020ResGCN} as basic blocks. The ResGCN is composed of a graph convolutional network followed by a temporal convolutional network. In the same year, the state-of-the-art skeleton-based model Gait‐D\cite{gao2022gait} was proposed which applies a similar network as the gait feature extractor.
However, the performance of most existing skeleton-based models is worse than that of silhouette-based models. To get better spatial-temporal features from skeleton gait sequence, in this work, we propose a new skeleton-based gait recognition model, which applies the spatial transformer network\cite{plizzari2021spatial} as the spatial feature extractor, and the temporal convolutional network as the temporal feature extractor.
The main contributions of this work can be summarized as follows:
\begin{itemize}
\item
We propose a new skeleton‐based gait recognition model called Gait-TR, which for the first time applies the spatial transformer framework for skeleton‐based gait recognition.
\item
Gait-TR achieves state-of-the-art results on the CASIA-B dataset, compared to existing skeleton-based gait recognition models. Especially in walking with coat cases, Gait-TR is better than both existing skeleton-based and silhouette-based gait recognition models.
\item
Our experiment on CASIA‐B shows that the spatial transformer can extract gait features from the human skeleton better than the graph convolutional network.
\item
The proposed model can be made faster, with fewer parameters, by reducing the model layers or the gait sequence length, while the accuracy decreases by only a few percent (4-6\%). The faster inference speed, higher accuracy, and better robustness of our model bring gait recognition a step closer to applications in the wild.
\end{itemize}
\section{Related Work}\label{work}
In this section, we provide a brief overview of the two important groups of gait recognition methods: appearance-based methods and model-based methods.
As the human skeleton is the input of our proposed model, we briefly introduce human pose estimation at the end of this section.
\subsection{Gait Recognition}\label{review}
\textbf{Appearance-based methods.} The appearance-based gait recognition methods identify different subjects by features extracted from the appearance of individuals. The raw inputs of appearance-based methods are human silhouettes. Therefore, a data preprocessing step is required to segment human silhouettes from videos or image sequences. One of the popular gait features is the gait energy image (GEI), which is the average of sequential silhouettes over one gait period. GEI-based methods (such as GEI+PCA) achieved good accuracy and were easy to calculate, thus GEI-based methods were well studied in the early stage of appearance-based gait recognition research. However, the temporal average operator in GEI leads to the loss of some temporal information. Also, large performance variations from view and orientation changes were observed.
In recent years, appearance-based gait recognition research mainly focused on the application of deep neural network architectures and used the whole sequence of human silhouettes as input. These deep appearance-based methods achieved much better performance than the old methods. Various neural network frameworks have been used, including convolutional neural networks (CNNs)\cite{shiraga2016geinet,wu2016comprehensive}, Recurrent Neural Networks (RNNs)\cite{jun2020feature,hasan2020multi}, and Generative Adversarial Networks (GANs)\cite{hu2018gan,wang2019gan}. Moreover, recently several deep learning strategies were applied to improve the performance of gait recognition models, including self-supervised learning and multi-task learning. In ref.\cite{chao2019gaitset}, H.Chao et al. regarded a gait sequence as a set consisting of independent gait frames, which could drop unnecessary sequential constraints. Their proposed model, GaitSet, achieves 96.1\% rank-1 recognition accuracy on the CASIA-B gait dataset under normal walking conditions (The gait recognition accuracy is calculated with identical-view excluded in this work unless otherwise stated). Moreover, GaitSet even got 85.0\% accuracy using only 7 frames.
On the other hand, MT3D applies a multiple-temporal-scale 3D Convolutional Neural Network to extract both small and large temporal scales gait information. MT3D achieves state-of-the-art results with accuracy of 96.7\% and 93.0\%, under normal walking and walking with a bag condition, respectively. The state-of-the-art appearance-based gait recognition model is Vi-GaitGL proposed by T.Chai et al in Ref.\cite{chai2021silhouette} with an average accuracy of 92.2\%. Vi-GaitGL adopts multi-task Learning to view-specific gait recognition model by fitting view angle along with gait recognition. And GaitGL, which consists of global and local convolutional neural network blocks, is used as the backbone. Under the walking with coats condition, Vi-GaitGL achieves an accuracy of 87.2\%.
\textbf{Model-based methods.}
A model-based gait recognition method is defined as an approach which uses an underlying mathematical construct modeling the body structures or local body movements to discriminate different gait styles. Compared with appearance-based methods, model-based methods are free from several noisy variations of human silhouettes under conditions such as clothing and
carrying, making model-based methods focus on the gait dynamics. Therefore, model-based methods were thought to be more robust. However, the accuracy of model-based methods in most of the existing research is lower than that of appearance-based methods, which made model-based methods less popular. Ref.\cite{nixon1996earlest} is one of the earliest works on model-based methods. In Ref.\cite{nixon1996earlest}, M. S. Nixon et al. obtained gait features by applying a simple Fourier transform to the motion of legs. Then the k-nearest neighbors algorithm was used to classify ten gait subjects. After that, many feature extraction methods were proposed by analyzing patterns in gait databases, which was very tedious.
Developments of the deep neural network and human pose estimation methods led to a new stage of skeleton-based gait recognition research. In Ref.\cite{liao2020model}, R.Liao et al. proposed the PoseGait which is based on
human 3D poses extracted by the pose estimation model OpenPose\cite{cao2017openpose}. Specially designed Spatio-temporal features, such as joint angle, limb length, and joint motion are used as input of a deep feature extractor composed of CNN layers. PoseGait achieved good performance in identical-view cases, while the
accuracy in cross-view cases is still less than that of appearance-based methods.
More recently, with the Graph Convolutional Network\cite{zhang2019graph,kipf2016semi} applied as a better skeleton feature extractor, model-based methods got breakthroughs with better accuracy and robustness, such as GaitGraph and Gait-D. The GaitGraph, proposed by T.Teepe, is composed of multiple ResGCN blocks. And a better 2D human pose estimator, HRNet, is applied. Gait-D is the state-of-the-art model-based gait recognition method proposed in Ref.\cite{gao2022gait}. The network structure of Gait-D is similar to GaitGraph. While in Gait-D, the canonical polyadic decomposition algorithm is used to decompose features extracted from ST‐GCN\cite{yan2018spatial} blocks. The accuracy of Gait-D is close to the best result of appearance-based methods in the CASIA-B dataset.
\subsection{Human Pose Estimation}\label{review}
Human pose estimation is one of the most popular fundamental tasks in computer vision. Human pose estimation aims to localize human body parts and human body keypoints from images or videos. Information about the human body (parts, key points, or skeleton) extracted by human pose estimation could be used in a lot of applications such as human-computer interaction, virtual reality, and augmented reality. Therefore, a lot of research about human pose estimation has been conducted in academia, for comprehensive reviews about human pose estimation see Ref.\cite{khan2018review,liu20182,zheng2020deep,gamra2021review}. The human pose estimation methods are categorized into single-person and multi-person settings, or 3D based and 2D based. OpenPose\cite{cao2017openpose} and HRNet\cite{sun2019deep} are the two most popular human pose estimation methods. In this work, we use the SimDR$^\ast$-HRNet proposed in Ref.\cite{li20212d} for 2D human pose estimation.
\section{Method}\label{method}
\begin{figure}
\centering
\includegraphics[width=0.99\textwidth]{pip.pdf}
\caption{Pipeline of our framework}\label{fig1}
\end{figure}
In this part, we will illustrate our proposed framework for the skeleton-based gait recognition method. Fig.\ref{fig1} shows the pipeline of our framework. Firstly, we use a human pose estimator to extract skeleton sequences from the raw video. Secondly, we normalize the skeleton sequences and prepare differently designed skeleton features (such as joints, bones, and velocities) as input channels. Finally, Gait-TR processes the prepared input channels and outputs a 128-dimensional embedding vector. In the inference phase, the Euclidean distances between the embedding vectors of two input videos are used to distinguish different subjects.
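The inference step just described can be sketched in a few lines of NumPy; this is a toy illustration of the Euclidean-distance matching, with random stand-in embeddings rather than actual Gait-TR outputs:

```python
import numpy as np

def rank_gallery(probe, gallery):
    """Rank gallery entries by Euclidean distance to a probe embedding.

    probe: (128,) embedding of the query gait video;
    gallery: (N, 128) embeddings of the enrolled gait videos.
    Returns gallery indices sorted from best match to worst.
    """
    dists = np.linalg.norm(gallery - probe, axis=1)
    return np.argsort(dists)

# Toy check: a probe close to gallery entry 2 should rank it first.
rng = np.random.default_rng(0)
gallery = rng.normal(size=(5, 128))
probe = gallery[2] + 0.01 * rng.normal(size=128)
order = rank_gallery(probe, gallery)
```

In a real system, the gallery would hold one or more embeddings per enrolled identity.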
Before going into detail, we introduce the most important part of our framework, namely, the spatial transformer.
\subsection{Spatial Transformer}\label{ST}
The transformer is the most popular neural network architecture of the past five years, proposed by A.Vaswani et al. in the paper "Attention is all you need"\cite{vaswani2017attention}. At first, the transformer was designed to replace the RNN models widely used in natural language processing (NLP), and it achieved state-of-the-art results in most NLP tasks\cite{wolf2020transformers,kalyan2021ammus,wolf2019huggingface,qiu2020pre}. This success made the transformer architecture famous, and it is now applied in nearly all AI tasks, including computer vision\cite{han2020survey,ruan2022survey,liu2021swin}, biometrics\cite{sandouka2021transformers,zhong2021face}, music generation\cite{huang2018improved,huang2018music}, etc.
The kernel of the transformer architecture is the multi-head self-attention mechanism, which is described as follows. Given an input embedding $\textbf{x} \in \mathbb{R}^n$, firstly, compute a query vector $\textbf{q}_h \in \mathbb{R}^{d_q}$, a key vector $\textbf{k}_h \in \mathbb{R}^{d_k}$, and a value vector $\textbf{v}_h \in \mathbb{R}^{d_v}$ by multiplying $\textbf{x}$ with the parameter matrices $\textbf{W}^q_h \in \mathbb{R}^{n \times d_q}$, $\textbf{W}^k_h \in \mathbb{R}^{n \times d_k}$ and $\textbf{W}^v_h \in \mathbb{R}^{n \times d_v}$, respectively, for each head $h$ of the total $H$ heads.
Then a scaled dot-product attention function is applied to each query, key, and value:
\begin{eqnarray*}
{\rm head}_h={\rm Attention}\left(\textbf{q}_h,\textbf{k}_h,\textbf{v}_h \right)={\rm softmax} \left( \frac{\textbf{q}_h \textbf{k}_h^{\rm T}}{\sqrt{d_k}} \right) \textbf{v}_h
\end{eqnarray*}
Finally, the embedding vectors from the $H$ heads are concatenated and linearly projected to the final embedding $\textbf{z}\in \mathbb{R}^{d_{model}}$:
\begin{eqnarray*}
\textbf{z}={\rm Concat}({\rm head}_1,{\rm head}_2, \cdots,{\rm head}_H)W^o
\end{eqnarray*}
where $W^o \in \mathbb{R}^{Hd_v \times d_{model}}$ is the projection matrix.
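A single attention head from the formulas above can be sketched as follows; this is a minimal NumPy illustration of the equations, not the paper's implementation:

```python
import numpy as np

def softmax(s, axis=-1):
    # numerically stable softmax along the given axis
    e = np.exp(s - s.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention_head(x, Wq, Wk, Wv):
    """Scaled dot-product attention for one head.

    x: (L, n) sequence of L input embeddings of size n.
    Wq, Wk: (n, d_k) and Wv: (n, d_v) projection matrices.
    """
    q, k, v = x @ Wq, x @ Wk, x @ Wv
    d_k = k.shape[-1]
    weights = softmax(q @ k.T / np.sqrt(d_k))  # (L, L) attention weights
    return weights @ v                         # (L, d_v)

rng = np.random.default_rng(0)
x = rng.standard_normal((5, 16))
out = attention_head(x,
                     rng.standard_normal((16, 8)),
                     rng.standard_normal((16, 8)),
                     rng.standard_normal((16, 8)))
```

In the multi-head case, $H$ such outputs would be concatenated and multiplied by $W^o$ as in the equation above.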
In this work, our inputs are human skeleton sequences $X_{v}^t \in \mathbb{R}^{C\times T \times V}$ with $T$ frames, $V$ joints, and $C$ channels. Therefore, the spatial self-attention module of the spatial transformer proposed in Ref.\cite{plizzari2021spatial} is applied here. In the spatial self-attention module, the attention function captures correlations between different nodes, that is:
\begin{eqnarray*}
{\rm head}_{h,i}^t={\rm Attention}\left(\textbf{q}_{h,i}^t,\textbf{k}_h^t,\textbf{v}_h^t \right)=\sum_{j}{\rm softmax}_j \left( \frac{\textbf{q}_{h,i}^t (\textbf{k}_{h,j}^t)^{\rm T}}{\sqrt{d_k}} \right) \textbf{v}_{h,j}^t
\end{eqnarray*}
All parameters in the spatial self-attention are shared among different frames. In this work, we employ $H=8$ heads. For the dimensions of the query, key, and value vectors, $d_q=d_k=f_k\times d_{model}$ and $d_v=f_v \times d_{model}$, where $d_{model}$ is the number of output channels of the spatial self-attention block, and $f_k$ and $f_v$ are fixed factors.
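A minimal sketch of how such a spatial attention layer acts on a skeleton tensor may help; shapes follow the paper's $(C, T, V)$ convention, the projection parameters are shared by all frames, and a single head is used for brevity (our illustration, not the paper's code):

```python
import numpy as np

def spatial_attention(X, Wq, Wk, Wv):
    """Apply joint-to-joint attention independently to every frame.

    X: (C, T, V) skeleton sequence; Wq, Wk, Wv are shared across frames.
    Returns a (d_v, T, V) tensor.
    """
    C, T, V = X.shape
    out = np.empty((Wv.shape[1], T, V))
    for t in range(T):
        x_t = X[:, t, :].T                      # (V, C): one embedding per joint
        q, k, v = x_t @ Wq, x_t @ Wk, x_t @ Wv
        s = q @ k.T / np.sqrt(k.shape[-1])      # (V, V) joint-to-joint scores
        e = np.exp(s - s.max(axis=-1, keepdims=True))
        a = e / e.sum(axis=-1, keepdims=True)   # softmax over joints j
        out[:, t, :] = (a @ v).T
    return out

rng = np.random.default_rng(1)
X = rng.standard_normal((10, 60, 17))           # C=10 channels, T=60 frames, V=17 joints
Y = spatial_attention(X,
                      rng.standard_normal((10, 8)),
                      rng.standard_normal((10, 8)),
                      rng.standard_normal((10, 8)))
```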
\subsection{Data Preprocessing}\label{preprocess}
We use SimDR$^\ast$-HRNet as the 2D human pose estimator. The outputs of SimDR$^\ast$-HRNet are the coordinates of 17 human body joints (the nose, left ear, right ear, etc.). In the training phase, we randomly select continuous skeleton subsequences from the total skeleton sequence of a gait video, while in the testing phase, the total skeleton sequences are used.
As multiple inputs (simple features, e.g., bones and velocities) have been shown to be useful in some human skeleton-based tasks\cite{song2020ResGCN,shi2019two}, here we employ multiple inputs to obtain better performance. Given raw human skeleton joints $X$, the joint features include the joint coordinates $X[:,:,i]$ and the joint coordinates relative to the nose, $X[:,:,i]-X[:,:,i_{nose}]$. For the velocity features, we use the first- and second-order frame differences, $X[:,t+1,:]-X[:,t,:]$ and $X[:,t+2,:]-X[:,t,:]$. The bone feature is defined as $X[:,:,i]-X[:,:,i_{adj}]$, where $i_{adj}$ denotes the adjacent joint of the $i$-th joint.
Finally, we concatenate these features as input of Gait-TR.
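The feature construction above can be sketched as follows; the indices follow the paper's notation, while the nose index, the adjacency table, and the channel layout are our assumptions for illustration:

```python
import numpy as np

NOSE = 0  # index of the nose joint in the 17-joint layout (assumption)

def build_inputs(X, adj):
    """Concatenate joint, velocity, and bone features along the channel axis.

    X: (C, T, V) raw joint coordinates; adj[i] is the adjacent joint of joint i.
    """
    rel = X - X[:, :, NOSE:NOSE + 1]                            # joints relative to the nose
    v1 = np.zeros_like(X); v1[:, :-1] = X[:, 1:] - X[:, :-1]    # first-order difference
    v2 = np.zeros_like(X); v2[:, :-2] = X[:, 2:] - X[:, :-2]    # second-order difference
    bone = X - X[:, :, adj]                                     # bone vectors to adjacent joints
    return np.concatenate([X, rel, v1, v2, bone], axis=0)

X = np.random.default_rng(2).standard_normal((2, 60, 17))       # 2D coordinates
adj = np.zeros(17, dtype=int)                                   # toy adjacency: all joints attach to the nose
feats = build_inputs(X, adj)
```

With 2D coordinates and these five feature groups, the input has 10 channels, matching the input shape $\left(10\times 60\times 17\right)$ used in the configuration table below.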
\subsection{Gait-TR}\label{network}
\begin{figure}
\centering
\includegraphics[width=0.80\textwidth]{TR.pdf}
\caption{Structure of Gait-TR. TCN is the temporal convolutional network module, and ST is the spatial transformer module. FC denotes the fully connected layer. Batch-norm is BatchNorm2D for input $ X_{v}^t \in \mathbb{R}^{C\times T \times V} $, while Batch-norm* denotes BatchNorm1D for input $ X_{v}^t \in \mathbb{R}^{CV\times T } $. }\label{fig2}
\end{figure}
Our proposed network, Gait TRansformer (Gait-TR), is constructed by stacking basic blocks composed of a temporal convolutional network (TCN) module and a spatial transformer (ST) module, as shown in Fig.\ref{fig2}. The temporal convolutional network module is a plain convolutional network with kernel size $L$ along the temporal dimension, followed by the Mish activation function and batch normalization. The Mish activation function, proposed in Ref.\cite{misra2019mish}, is defined as ${\rm Mish}(x)=x\tanh({\rm softplus}(x))$. The Mish activation function and batch normalization are also used in the spatial transformer (ST) module. At the end of Gait-TR, an average pooling layer over the temporal and spatial dimensions is used, and a fully connected layer transforms the features to the desired dimension.
A dense residual connection is used inside each TCN+ST block. The residual function is defined as:
\begin{equation*}
H_{res}(x)=\left\{\begin{array}{ll}
F(x)+x&{\rm size}(x)={\rm size}(F(x)),\\
F(x)+ {\rm Batchnorm}\left({\rm Mish}\left(Wx\right)\right) & {\rm else}
\end{array}\right.
\end{equation*}
where the second term on each right-hand side is the residual (shortcut) term.
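The residual rule above amounts to an identity shortcut when $F(x)$ keeps the shape of $x$, and a learned projection otherwise. A minimal sketch (ours; batch normalization is omitted and a plain matrix product stands in for the projection):

```python
import numpy as np

def mish(x):
    # Mish(x) = x * tanh(softplus(x))
    return x * np.tanh(np.log1p(np.exp(x)))

def residual(x, Fx, W=None):
    """Dense residual connection of a TCN+ST block.

    If F(x) has the same shape as x, add x directly; otherwise project x
    with W followed by Mish (batch normalization omitted in this sketch).
    """
    if Fx.shape == x.shape:
        return Fx + x
    return Fx + mish(x @ W)

x = np.ones((4, 8))
same = residual(x, 2 * x)                              # matching shapes: identity shortcut
proj = residual(x, np.ones((4, 16)), W=np.ones((8, 16)))  # channel change: projected shortcut
```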
\section{Experimental Results}\label{experiments}
In this section, we evaluate the performance of the proposed Gait-TR on the gait dataset CASIA-B. First, we introduce the details of the experiment, including the dataset, network structure, and training setup. Then we compare our results with both skeleton-based and silhouette-based gait recognition methods. Finally, we study Gait-TR under different setups.
\subsection{CASIA-B}
The CASIA-B dataset is a well-known large-scale multi-view human gait dataset widely used in gait recognition research. CASIA-B consists of 13,640 gait sequences from 124 subjects. The view angle of CASIA-B ranges from 0${}^{\circ}$ to 180${}^{\circ}$ in 18${}^{\circ}$ increments. There are 10 gait sequences per view for each subject, under three different walking conditions: 6 sequences of normal walking (NM), 2 sequences of carrying a bag (BG), and 2 sequences of walking in a coat (CL). Following the settings of most research, we use the first 24, 62, and 74 subjects as the training set, denoted as small-sample (ST), medium-sample (MT), and large-sample (LT), respectively. In the inference phase, the first four sequences in the NM condition are used as the gallery set; the last two sequences in the NM condition (NM \#5-6), the two sequences in the BG condition (BG \#1-2), and the two sequences in the CL condition (CL \#1-2) make up three probe subsets.
\subsection{Implementation Details}
As described in the previous sections, Gait-TR is composed of TCN+ST blocks.
The configuration of Gait-TR is shown in Tab.\ref{table1}, together with output dimensions and numbers of parameters. Block0-Block3 are four stacked TCN+ST blocks with different channel widths.
\begin{table}[htbp]
\caption{Overview configuration of Gait-TR. The shape of input data is chosen to be $\left(10\times 60 \times 17\right)$.}\label{table1}
\begin{tabular}{c|c|c|c}
\thickhline
Block&Module & Output dimension & Parameters\\
\hline
Multi-input & input & $10\times 60 \times 17$ &- \\
\hline
Data Norm & Batch Norm & $10\times 60 \times 17$ & - \\
\hline
Block0 & \multirow{4}{*}{TCN+ST} & $64\times 60 \times 17$ & 8,278 \\
Block1 & & $64\times 60 \times 17$ & 49,760 \\
Block2 & & $128\times 60 \times 17$ & 85,632 \\
Block3 & & $256\times 60 \times 17$ & 335,104 \\
\hline
Avg-pooling & pooling & $256\times 1 \times 1$ & - \\
\hline
FC & Full connect & $128\times 1 \times 1$ & 32,768 \\
\hline
\end{tabular}
\end{table}
\textbf{Loss.} For the loss function, we employ the batch-hard triplet loss with online mining. For a sample triplet $(a,p,n)$, where $a$ denotes an anchor, $p$ a positive sample of the same class as the anchor, and $n$ a negative sample, the triplet loss is defined as:
\begin{eqnarray*}
\mathcal{L}_{\rm triplet}={\rm max}(d(f_a, f_p) - d(f_a, f_n) + {\rm margin}, 0)
\end{eqnarray*}
where $f_a$ denotes the feature vector of the anchor, and $d(f_a, f_p)$ is the Euclidean distance between the feature vectors of $a$ and $p$. In this work, the margin of the triplet loss is set to 0.3. Batch-hard means that for each anchor $a$, we select the positive with the largest distance $d(a, p)$ and the negative with the smallest distance $d(a, n)$ within the batch.
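The batch-hard selection just described can be sketched in a few lines of NumPy; this is our illustration of the mining rule, not the training code:

```python
import numpy as np

def batch_hard_triplet_loss(feats, labels, margin=0.3):
    """For each anchor, pick its hardest positive and hardest negative.

    feats: (B, D) embedding vectors; labels: (B,) identity labels.
    """
    # (B, B) matrix of pairwise Euclidean distances
    d = np.linalg.norm(feats[:, None, :] - feats[None, :, :], axis=-1)
    same = labels[:, None] == labels[None, :]
    hardest_pos = np.where(same, d, -np.inf).max(axis=1)   # largest d(a, p)
    hardest_neg = np.where(same, np.inf, d).min(axis=1)    # smallest d(a, n)
    return np.maximum(hardest_pos - hardest_neg + margin, 0.0).mean()

# Toy batch: two well-separated identities, so the loss is zero at margin 0.3.
feats = np.array([[0.0], [0.1], [1.0], [1.1]])
labels = np.array([0, 0, 1, 1])
loss = batch_hard_triplet_loss(feats, labels)
```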
\textbf{Augmentation.} We apply several human gait data augmentation methods in the training phase. First, we mirror the human skeleton by swapping the coordinates of its left and right parts, e.g., ${\rm Swap}(x_{\rm Lnose},x_{\rm Rnose})$. Then Gaussian noise is added to each joint independently, and the same Gaussian noise is also added to all joints in a gait sequence. Finally, we randomly select a continuous joint sequence of length 60.
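These augmentations can be sketched as follows; this is our illustration, where the left/right joint index pairs, the flip probability, and the noise scale are placeholders rather than the paper's values:

```python
import numpy as np

LEFT = [1, 3, 5]; RIGHT = [2, 4, 6]   # placeholder left/right joint index pairs

def augment(X, rng, crop_len=60, sigma=0.01):
    """Random mirror, per-joint and sequence-level Gaussian noise, temporal crop."""
    X = X.copy()
    if rng.random() < 0.5:                           # swap left and right body parts
        X[:, :, LEFT + RIGHT] = X[:, :, RIGHT + LEFT]
    X += rng.normal(0, sigma, X.shape)               # independent noise on each joint
    X += rng.normal(0, sigma, (X.shape[0], 1, 1))    # same noise on all joints of the sequence
    t0 = rng.integers(0, X.shape[1] - crop_len + 1)  # continuous 60-frame selection
    return X[:, t0:t0 + crop_len, :]

rng = np.random.default_rng(3)
X = rng.standard_normal((2, 100, 17))
Xa = augment(X, rng)
```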
\textbf{Training.} The Adam optimizer is used with a weight decay of 2e-5. Training batches are sampled with batch size $(4,64)$, i.e., 4 subjects with 64 gait sequences each. We apply the three-phase 1-cycle learning rate schedule, where the initial, maximum, and final learning rates are set to 1e-5, 1e-3, and 1e-8, respectively. Finally, we train our model for 10K-30K iterations.
\subsection{Results and analysis}
\begin{table}[htbp]
\caption{Averaged rank-1 accuracies on CASIA-B dataset for skeleton-based methods, excluding identical-view cases. Results of PoseGait, GaitGraph, Gait-D are also shown for comparison.}\label{table2}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\thickhline
\multicolumn{3}{c|}{Gallery NM\#1-4}&\multicolumn{11}{|c|}{0${}^{\circ}$-180${}^{\circ}$} & mean\\
\hline
\multicolumn{3}{c|}{Probe}& 0${}^{\circ}$&18${}^{\circ}$&36${}^{\circ}$ &54${}^{\circ}$ &72${}^{\circ}$ &90${}^{\circ}$ &108 ${}^{\circ}$ &126${}^{\circ}$ &144${}^{\circ}$ &162${}^{\circ}$ &180${}^{\circ}$ & mean\\
\hline
\multirow{3}{*}{ST}& NM\#5-6&Gait-TR &72.2&77.4&77.5&79.6&76.7&76.7&76.8&78.2&76.0&71.8&64.0&75.2\\ \cline{2-15}
& BG\#1-2&Gait-TR &60.7&65.9&65.5&70.0&61.5&64.3&65.2&66.5&66.3&63.7&53.7&63.9\\ \cline{2-15}
& CL\#1-2&Gait-TR &56.9&61.2&61.8&63.7&62.7&61.5&62.6&63.8&59.2&59.8&48.3&60.1\\ \cline{2-15}
\thickhline
\multirow{9}{*}{MT}&\multirow{3}{*} {NM\#5-6}&PoseGait &55.3& 69.6 &73.9& 75.0& 68.0& 68.2& 71.1& 72.9& 76.1& 70.4& 55.4& 68.7\\
&&Gait-D &87.7&92.5&93.6&\textbf{95.7}&93.3&92.4&92.8&93.4&90.6&88.6&87.3&91.6\\
&&Gait-TR &\textbf{93.2}&\textbf{94.6}&\textbf{93.7}&93.1&\textbf{95.6}&\textbf{93.2} &\textbf{93.1}&\textbf{94.7}&\textbf{95.1}&\textbf{94.0}&\textbf{87.7}&\textbf{93.5}\\
\cline{2-15}
& \multirow{3}{*} {BG\#1-2}&PoseGait &35.3&47.2&52.4&46.9&45.5&43.9&46.1&48.1&49.4&43.6&31.1&44.5\\
&&Gait-D &78.2&80.1&79.3&80.2&78.4&77.6&80.4&78.6&79.1&80.2&\textbf{76.5}&79.0\\
&&Gait-TR &\textbf{87.1}&\textbf{88.7}&\textbf{89.4}&\textbf{91.1} &\textbf{87.1}&\textbf{88.6}&\textbf{89.3}&\textbf{90.8}&\textbf{92.9}&\textbf{88.5}&74.0&\textbf{88.0}\\
\cline{2-15}
& \multirow{3}{*} {CL\#1-2}&PoseGait &24.3&29.7&41.3&38.8&38.2&38.5&41.6&44.9&42.2&33.4&22.5&36.0\\
&&Gait-D &73.2&71.7&75.4&73.2&74.6&72.3&74.1&70.5&69.4&71.2&66.7&72.0\\
&&Gait-TR &\textbf{78.7}&\textbf{81.7}&\textbf{84.0}&\textbf{87.0} &\textbf{86.5}&\textbf{85.7}&\textbf{88.3}&\textbf{85.0}&\textbf{85.7}&\textbf{84.0}&\textbf{78.3}&\textbf{84.0}\\
\thickhline
\multirow{6}{*}{LT}&\multirow{2}{*} {NM\#5-6}&GaitGraph &85.3&88.5&91.0&92.5&87.2&86.5&88.4&89.2&87.9&85.9&81.9&87.7\\
&&Gait-TR &\textbf{95.7}&\textbf{96.4}&\textbf{97.9}&\textbf{97.0}&\textbf{96.9} &\textbf{95.5}&\textbf{95.1}&\textbf{96.1}&\textbf{96.6}&\textbf{96.0}&\textbf{92.4}&\textbf{96.0}\\
\cline{2-15}
& \multirow{2}{*} {BG\#1-2}&GaitGraph &75.8&76.7&75.9&76.1&71.4&73.9&78.0&74.7&75.4&75.4&69.2&74.8\\
&&Gait-TR &\textbf{90.9}&\textbf{92.4}&\textbf{91.4}&\textbf{93.2} &\textbf{91.9}&\textbf{90.2}&\textbf{91.4}&\textbf{93.9}&\textbf{93.9}&\textbf{92.7}&\textbf{82.9}&\textbf{91.3}\\
\cline{2-15}
& \multirow{2}{*} {CL\#1-2}&GaitGraph &69.6&66.1&68.8&67.2&64.5&62.0&69.5&65.6&65.7&66.1&64.3&66.3\\
&&Gait-TR &\textbf{86.7}&\textbf{88.2}&\textbf{88.4}&\textbf{89.7} &\textbf{91.1}&\textbf{90.7}&\textbf{93.2}&\textbf{93.8}&\textbf{93.2}&\textbf{91.2}&\textbf{83.6}&\textbf{90.0}\\
\hline
\end{tabular}
\end{table}
\begin{table}[htbp]
\caption{Averaged rank-1 accuracies on CASIA-B dataset, compared with silhouette-based methods, including GaitSet, MT3D, Vi-GaitGL.}\label{table3}
\begin{tabular}{c|c|c|c|c|c|c|c|c|c|c|c|c|c|c}
\thickhline
\multicolumn{3}{c|}{Gallery NM\#1-4}&\multicolumn{11}{|c|}{0${}^{\circ}$-180${}^{\circ}$} & mean\\
\hline
\multicolumn{3}{c|}{Probe}& 0${}^{\circ}$&18${}^{\circ}$&36${}^{\circ}$ &54${}^{\circ}$ &72${}^{\circ}$ &90${}^{\circ}$ &108 ${}^{\circ}$ &126${}^{\circ}$ &144${}^{\circ}$ &162${}^{\circ}$ &180${}^{\circ}$ & mean\\
\hline
\multirow{12}{*}{ST}&\multirow{4}{*} {NM\#5-6}&GaitSet &71.6&\textbf{87.7}&\textbf{92.6}&89.1&\textbf{82.4}&\textbf{80.3}&\textbf{84.4}&\textbf{89.0}&89.8&82.9&66.6&\textbf{83.3}\\
&&MT3D &71.9&83.9&90.9&\textbf{90.1}&81.1&75.6&82.1&89.0&\textbf{91.1}&\textbf{86.3}&\textbf{69.2}&82.8\\
&&Vi-GaitGL &70.7&83.6&89.0&89.1&78.5&71.8&79.6&86.1&88.8&84.7&66.5&80.7\\
&&Gait-TR&\textbf{72.2}&77.4&77.5&79.6&76.7&76.7&76.8&78.2&76.0&71.8&64.0&75.2\\
\cline{2-15}
& \multirow{4}{*} {BG\#1-2}&GaitSet &64.1&76.4&81.4&82.4&\textbf{77.2}& \textbf{71.8}&\textbf{75.4}&80.8&81.2&75.7&59.4&\textbf{75.1}\\
&&MT3D &\textbf{64.5}&\textbf{76.7}&\textbf{82.8}&\textbf{82.8}&73.2&66.9&74.0&\textbf{81.9} &\textbf{84.8}&\textbf{80.2}&\textbf{63.0}&74.0\\
&&Vi-GaitGL &64.2&75.0&82.6&81.5&70.2&63.9&70.4&77.8&81.0&77.6&58.3&72.9\\
&&Gait-TR &60.7&65.9&65.5&70.0&61.5&64.3&65.2&66.5&66.3&63.7&53.7&63.9\\
\cline{2-15}
& \multirow{4}{*} {CL\#1-2}&GaitSet &36.4&49.7&54.6&49.7&48.7&45.2&45.5&48.2&47.2&41.4&30.6&45.2\\
&&MT3D &46.6&61.6&66.5&63.3&57.4&52.1&58.1&58.9&58.5&57.4&41.9&56.6\\
&&Vi-GaitGL &50.8&64.3&\textbf{68.6}&\textbf{67.1}&60.4&54.2&59.6&\textbf{63.9} &\textbf{62.9}&\textbf{59.9}&41.5&59.4\\
&&Gait-TR &\textbf{56.9}&\textbf{61.2}&61.8&63.7&\textbf{62.7}&\textbf{61.5} &\textbf{62.6}&63.8&59.2&59.8&\textbf{48.3}&\textbf{60.1}\\
\thickhline
\multirow{12}{*}{MT}&\multirow{4}{*} {NM\#5-6}&GaitSet &89.7&\textbf{97.9}&98.3&\textbf{97.4}&92.5&90.4&93.4&97.0&\textbf{98.9}&\textbf{95.9}&86.6&94.3\\
&&MT3D &91.9&96.4&\textbf{98.5}&95.7&93.8&90.8&93.9&97.3&97.9&95.0&86.8&\textbf{94.4}\\
&&Vi-GaitGL &90.8&95.9&97.7&95.9&93.3&91.5&\textbf{94.4}&\textbf{97.3}&97.3&95.4&86.9&94.2\\
&&Gait-TR &\textbf{93.2}&94.6&93.7&93.1&\textbf{95.6}&\textbf{93.2}&93.1&94.7&95.1&94.0&\textbf{87.7}&93.5\\
\cline{2-15}
& \multirow{4}{*} {BG\#1-2}&GaitSet &79.9&89.8&91.2&86.7&81.6&76.7&81.0&88.2&90.3&88.5&73.0&84.3\\
&&MT3D &86.7&92.9&\textbf{94.9}&92.8&88.5&82.5&87.5&92.5&95.3&92.9&81.2&89.8\\
&&Vi-GaitGL &83.6&\textbf{92.9}&94.7&\textbf{93.1}&\textbf{89.4}&83.6&88.6& \textbf{93.6}&\textbf{96.1}&\textbf{93.3}&\textbf{81.5}&\textbf{90.0}\\
&&Gait-TR &\textbf{87.1}&88.7&89.4&91.1&87.1&\textbf{88.6}&\textbf{89.3}&90.8&92.9&88.5&74.0&88.0\\
\cline{2-15}
& \multirow{4}{*} {CL\#1-2}&GaitSet &52.0&66.0&72.8&69.3&63.1&61.2&63.5&66.5&67.5&60.0&45.9&62.5\\
&&MT3D &67.5&81.0&85.0&80.6&75.9&69.8&76.8&81.0&80.8&73.8&59.0&75.6\\
&&Vi-GaitGL &71.2&\textbf{86.5}&\textbf{90.9}&89.0&83.9&77.2&84.8&\textbf{89.1}&\textbf{88.6} &81.0&63.7&82.3\\
&&Gait-TR &\textbf{78.7}&81.7&84.0&87.0&\textbf{86.5} &\textbf{85.7}&\textbf{88.3} &85.0&85.7&\textbf{84.0}&\textbf{78.3}&\textbf{84.0}\\
\thickhline
\multirow{12}{*}{LT}&\multirow{4}{*} {NM\#5-6}&GaitSet &91.1&\textbf{99.0}&\textbf{99.9}&\textbf{97.8}&95.1&94.5&\textbf{96.1} &98.3&99.2&98.1&88.0&96.1\\
&&MT3D &95.7&98.2&99.0&97.5&95.1&93.9&96.1&\textbf{98.6}&\textbf{99.2}&\textbf{98.2}&92.0&\textbf{96.7}\\
&&Vi-GaitGL &93.7&96.9&98.6&97.4&95.5&93.9&97.3&98.6&98.6&97.7&89.7&96.2\\
&&Gait-TR &\textbf{95.7}&96.4&97.9&97.0&\textbf{96.9}&\textbf{95.5}&95.1&96.1&96.6&96.0&\textbf{92.4}&96.0\\
\cline{2-15}
& \multirow{4}{*} {BG\#1-2}&GaitSet &86.7&94.2&95.7&93.4&88.9&85.5&89.0&91.7&94.5&95.9&83.3&90.8\\
&&MT3D &\textbf{91.0}&\textbf{95.4}&\textbf{97.5}&94.2&92.3&86.9&91.2&95.6&97.3&96.4&\textbf{86.6}&\textbf{93.0}\\
&&Vi-GaitGL &89.6&94.5&95.6&\textbf{95.2}&\textbf{93.2}&87.3&\textbf{91.7}&\textbf{95.9} &\textbf{97.8}&\textbf{96.1}&85.5&92.9\\
&&Gait-TR &90.9&92.4&91.4&93.2&91.9&\textbf{90.2}&91.4&93.9&93.9&92.7&82.9&91.3\\
\cline{2-15}
& \multirow{4}{*} {CL\#1-2}&GaitSet &59.5&75.0&78.3&74.6&71.4&71.3&70.8&74.1&74.6&69.4&54.1&70.3\\
&&MT3D &76.0&87.6&89.8&85.0&81.2&75.7&81.0&84.5&85.4&82.2&68.1&81.5\\
&&Vi-GaitGL &81.2&\textbf{92.4}&\textbf{94.9}&\textbf{93.3}&87.8&82.1&87.4&89.8&90.2&87.9&72.5&87.2\\
&&Gait-TR &\textbf{86.7}&88.2&88.4&89.7&\textbf{91.1}&\textbf{90.7}&\textbf{93.2}&\textbf{93.8}&\textbf{93.2}&\textbf{91.2}&\textbf{83.6}&\textbf{90.0}\\
\hline
\end{tabular}
\end{table}
\textbf{Comparison with skeleton-based methods.} In Tab.\ref{table2}, we show the average rank-1 accuracies of our Gait-TR on the CASIA-B dataset under different conditions, alongside existing skeleton-based gait recognition methods, including PoseGait, Gait-D, and GaitGraph. Tab.\ref{table2} clearly shows that our model Gait-TR achieves state-of-the-art performance under most cross-view and probe conditions. First, in the LT cases, the largest improvement occurs in the CL situation, where the rank-1 accuracy of Gait-TR is 90.0\%, which is 23.7\% higher than that of GaitGraph. In the NM and BG situations, our average rank-1 accuracies are 96.0\% and 91.3\%, improvements of 8.3\% and 16.5\% over GaitGraph. Then, in the MT cases, large increases in average accuracy are achieved in the BG and CL situations, 9\% and 12\% over Gait-D, while a small improvement of about 2\% is obtained in the NM situation.
Finally, for the first time, we report rank-1 accuracies under the ST sample setting; the mean rank-1 accuracies are 75.2\%, 63.9\%, and 60.1\% for the NM, BG, and CL probe situations, respectively.
The accuracies of Gait-TR vary less across probe situations than those of Gait-D and GaitGraph, which means that our model is more robust against probe condition changes such as bagging and clothing.
In addition, it can also be observed from Tab.\ref{table2} that accuracy drops considerably, by 4\% to 14\%, across all conditions. A similar accuracy drop occurs for the other models, though with a smaller gap.
\textbf{Comparison with silhouette-based methods.} We compare the results of Gait-TR with those of state-of-the-art silhouette-based gait models, including GaitSet, MT3D, and Vi-GaitGL, as shown in Tab.\ref{table3}. First, in the LT cases, the rank-1 accuracy of Gait-TR in the CL situation exceeds the best silhouette-based result by 3\%. Meanwhile, the accuracies in NM and BG are very close to those of the best silhouette-based methods, only 0.7\% and 1.7\% lower. The performance in the MT cases is similar to that in the LT cases. However, in the ST cases, the accuracy of Gait-TR drops more than that of the silhouette-based methods, which means that Gait-TR needs more gait data to reach good performance. Even so, in the ST cases the performance on the CL\#1-2 probe is still better than that of the silhouette-based methods.
\begin{table}[htbp]
\caption{ Mean Rank-1 accuracy, number of parameters and FLOPs of Gait-TR-s, along with other models including Gait-TR, GaitSet and GaitGraph. The FLOPs are calculated using gait sequences of 60 frames. }\label{table-small}
\begin{tabular}{c|c|c|c|c|c}
\thickhline
Model& {NM\#5-6} & {BG\#1-2} & {CL\#1-2}&Parameter& FLOPs \\
\hline
GaitGraph &87.7&74.8&66.3&0.32M&0.28G\\
GaitSet &96.1&90.8&70.3&2.59M&13.02G\\
Gait-TR-s &92.2&86.2&85.3&0.16M&0.29G\\
Gait-TR &96.0&91.3&90.0&0.51M&0.98G\\
\hline
\end{tabular}
\end{table}
\textbf{Smaller model.}
To achieve faster inference, we propose a model with fewer parameters, named Gait-TR-s, whose structure is the same as Gait-TR with the last TCN+ST block removed. The performance (rank-1 accuracy, number of parameters, and FLOPs) of Gait-TR-s is shown in Tab.\ref{table-small}, compared with other models. The mean rank-1 accuracy of Gait-TR-s is lower than that of Gait-TR by 4\%-5\%. The parameters and FLOPs of Gait-TR-s are 0.16M and 0.29G, respectively, about 2/3 less than those of Gait-TR. Silhouette-based methods (e.g., GaitSet) need more parameters and FLOPs than skeleton-based methods. The faster inference speed and fewer parameters of skeleton-based methods provide further evidence that skeleton-based methods are more suitable for practical gait recognition.
\textbf{Limited inference frame.}
In practical applications of gait recognition, the total number of frames in which a target is walking can be limited. Therefore, we test our model Gait-TR on gait sequences with a limited number of frames. The sequences used for inference are continuous gait sequences of length $T$. Fig.\ref{fig_frame} shows the mean rank-1 accuracy versus sequence length for different probe conditions under the LT sample set. The accuracies decrease sharply as the sequence length decreases below 50 frames, which is twice a common gait cycle of 25 frames. This indicates that our Gait-TR depends on long-range temporal features of a gait sequence. To achieve an accuracy larger than 80\% under the CL condition, the gait sequences need to be longer than 40 frames.
\begin{figure}
\centering
\includegraphics[width=0.5\textwidth]{lim_frame.pdf}
\caption{ Mean Rank-1 accuracy with limited inference frames. }\label{fig_frame}
\end{figure}
\textbf{Spatial Transformer vs Graph Convolutional Network.}
The Graph Convolutional Network (GCN) is a widely used spatial feature extractor for human skeleton sequences. Here we compare the spatial feature extractor of our Gait-TR, the Spatial Transformer (ST), with the GCN. We replace the ST module in Gait-TR with a GCN and name the resulting model Gait-GCN. Tab.\ref{table-gcn} shows the performance of Gait-TR and Gait-GCN. The accuracy of Gait-TR is higher than that of Gait-GCN by 2\% to 3\% at a similar inference speed. This result implies that the ST can be a better spatial feature extractor than the GCN for skeleton-based gait recognition.
\begin{table}[htbp]
\caption{Comparison between Gait-TR and Gait-GCN under LT sample condition.}\label{table-gcn}
\begin{tabular}{c|c|c|c|c|c}
\thickhline
Model& {NM\#5-6} & {BG\#1-2} & {CL\#1-2}&Parameters& FLOPs \\
\hline
Gait-TR &96.0&91.3&90.0&0.513M&0.976G\\
Gait-GCN &94.5&88.8&87.1&0.482M&0.937G\\
\hline
\end{tabular}
\end{table}
\section{Conclusion}\label{conclusion}
In this work, we investigated, for the first time, the spatial transformer framework for skeleton-based gait recognition. Our proposed model Gait-TR achieves state-of-the-art results on the CASIA-B dataset compared to current skeleton-based models. Especially in the walking-with-coats cases, the proposed model even outperforms the existing silhouette-based models. Our experiments on CASIA-B also show that the spatial transformer can extract gait features from the human skeleton better than the graph convolutional network.
In real-world scenarios, most silhouette extraction methods are more complex and slower than skeleton detection methods. Compared to silhouette-based models, which require silhouette extraction in the data preprocessing step, skeleton-based models are better suited to practical applications. However, in past works, the performance of skeleton-based models was worse than that of silhouette-based models. Therefore, the better performance of skeleton-based over silhouette-based models in our work, although only in the walking-with-coats cases, shows the potential of skeleton-based models for higher accuracy and better robustness. Our proposed state-of-the-art skeleton-based gait recognition model brings gait recognition a step closer to applications in the wild.
As Gait-TR is a skeleton-based model, better skeleton sequences from a better human pose estimator would be beneficial.
Also, Gait-TR requires long gait sequences, about twice a gait cycle, to achieve good performance. A temporal feature extractor better than the simple temporal convolutional network could be valuable for better performance and for practical applications requiring faster inference.
\printcredits
\bibliographystyle{unsrt}
\section{Introduction}
The starting point for this work is the classical generating series
\[ \exp\frac{1}{2}(\Lambda X - \Lambda/X) = \sum_{i\in{\bf Z}} J_i(\Lambda)X^i \]
for the Bessel functions $\{J_i(\Lambda)\}_{i\in{\bf Z}}$ due to Schl\"omilch\cite{Sc} that was the foundation for his treatment of Bessel functions (see \cite[page~14]{W}). Suitably normalized, it also played a fundamental role in Dwork's construction\cite{D2} of $p$-adic cohomology for~$J_0(\Lambda)$. Our realization that the series itself (suitably normalized) could be viewed as a distinguished element in Dwork's relative dual complex led us to the present generalization.
Let $A\subseteq{\bf Z}^n$ be a finite subset that spans ${\bf R}^n$ as
real vector space and set
\[ f_\Lambda(X) = \sum_{a\in A} \Lambda_aX^a\in{\bf Z}[\{\Lambda_a\}_{a\in
A}][X_1^{\pm 1},\dots,X_n^{\pm 1}], \]
where the $\Lambda_a$ and the $X_i$ are indeterminates and where $X^a =
X_1^{a_1}\cdots X_n^{a_n}$ for $a = (a_1,\dots,a_n)$. Let ${\bf F}_q$ be the finite field of $q=p^\epsilon$ elements, $p$ a prime, and let $\bar{\bf F}_q$ be its algebraic closure. For each $\bar{\lambda} =
(\bar{\lambda}_a)_{a\in A}\in(\bar{\bf F}_q)^{|A|}$, let
\[ f_{\bar\lambda}(X) = \sum_{a\in A} \bar{\lambda}_a X^a\in{\bf
F}_q(\bar{\lambda})[X_1^{\pm 1},\dots,X_n^{\pm 1}], \]
a regular function on the $n$-torus ${\bf T}^n$ over ${\bf F}_q(\bar{\lambda})$.
Fix a nontrivial additive character $\Theta:{\bf F}_q\to {\bf Q}_p(\zeta_p)$ and
let $\Theta_{\bar{\lambda}}$ be the additive character $\Theta_{\bar{\lambda}}=
\Theta\circ{\rm Tr}_{{\bf F}_q(\bar{\lambda})/{\bf F}_q}$ of the field ${\bf
F}_q(\bar{\lambda})$. For each positive integer $l$, let ${\bf
F}_q(\bar{\lambda},l)$ denote the extension of degree~$l$ of ${\bf
F}_q(\bar{\lambda})$ and define an exponential sum
\[ S_l = S_l(f_{\bar{\lambda}},\Theta_{\bar{\lambda}},{\bf T}^n) = \sum_{x\in {\bf
T}^n({\bf F}_q(\bar{\lambda},l))} \Theta_{\bar{\lambda}}\circ{\rm Tr}_{{\bf
F}_q(\bar{\lambda},l)/{\bf F}_q(\bar{\lambda})}(f_{\bar{\lambda}}(x)). \]
The associated $L$-function is
\[ L(f_{\bar{\lambda}};T) = L(f_{\bar{\lambda}},\Theta_{\bar{\lambda}},{\bf T}^n;T)=
\exp\biggl(\sum_{l=1}^\infty S_l\frac{T^l}{l}\biggr). \]
It is well-known that $L(f_{\bar{\lambda}};T)\in{\bf Q}(\zeta_p)(T)$ and that its reciprocal zeros and poles are algebraic integers. We note that among these
reciprocal zeros and poles there must be at least one $p$-adic unit: if ${\bf F}_q(\bar{\lambda})$ has cardinality $q^\kappa$, then $S_l$ is the sum
of $(q^{\kappa l}-1)^n$ $p$-th roots of unity, so $S_l$ itself is a $p$-adic
unit for every $l$. On the other hand, a simple consequence of the
Dwork trace formula will imply (see Section~3) that there is at most a
single unit root, and it must occur amongst the reciprocal zeros (as
opposed to the reciprocal poles) of
$L(f_{\bar{\lambda}};T)^{(-1)^{n+1}}$. We denote this unit root by
$u(\bar{\lambda})$. It is the goal of this work to exhibit an explicit
$p$-adic analytic formula for $u(\bar{\lambda})$ in terms of certain
$A$-hypergeometric functions.
Consider the series
\begin{align}
\exp f_\Lambda(X) &= \prod_{a\in A} \exp(\Lambda_aX^a) \\
& = \sum_{i\in {\bf Z}^n} F_i(\Lambda) X^i \nonumber
\end{align}
where the $F_i(\Lambda)$ lie in ${\bf Q}[[\Lambda]]$. Explicitly, one has
\begin{equation}
F_i(\Lambda) = \sum_{\substack{u = (u_a)_{a\in A} \\ \sum_{a\in A} u_a a = i}}
\frac{\Lambda^u}{\prod_{a\in A} (u_a!)}.
\end{equation}
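To make (1.2) concrete, take $n=1$ and $A=\{1,-1\}$, so that $f_\Lambda(X)=\Lambda_1X+\Lambda_{-1}X^{-1}$. The specialization $\Lambda_1=\Lambda/2$, $\Lambda_{-1}=-\Lambda/2$ then recovers Schl\"omilch's generating series recalled at the start of the introduction: for $i\geq 0$, writing $u_1=m+i$ and $u_{-1}=m$,
\[ F_i(\Lambda_1,\Lambda_{-1}) = \sum_{u_1-u_{-1}=i}\frac{\Lambda_1^{u_1}\Lambda_{-1}^{u_{-1}}}{u_1!\,u_{-1}!} \;\longmapsto\; \sum_{m=0}^\infty \frac{(-1)^m}{m!\,(m+i)!}\Bigl(\frac{\Lambda}{2}\Bigr)^{2m+i} = J_i(\Lambda). \]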
The $A$-hypergeometric system with parameter $\alpha =
(\alpha_1,\dots,\alpha_n)\in{\bf C}^n$ (where ${\bf C}$ denotes the complex numbers) is the system of partial differential
equations consisting of the operators
\[ \Box_\ell = \prod_{\ell_a>0}\biggl(\frac{\partial}{\partial
\Lambda_a}\biggr)^{\ell_a} - \prod_{\ell_a<0} \biggl(\frac{\partial}{\partial
\Lambda_a}\biggr)^{-\ell_a} \]
for all $\ell = (\ell_a)_{a\in A}\in{\bf Z}^{|A|}$ satisfying $\sum_{a\in A}\ell_a
a = 0$
and the operators
\[ Z_j = \sum_{a\in A} a_j\Lambda_a\frac{\partial}{\partial \Lambda_a} - \alpha_j
\]
for $j=1,\dots,n$, where $a=(a_1,\dots,a_n)$. Using Equations (1.1) and (1.2), it is straightforward to check that for $i\in{\bf Z}^n$, $F_i(\Lambda)$ satisfies the
$A$-hypergeometric system with parameter $i$.
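Indeed, since $\Lambda_a\frac{\partial}{\partial\Lambda_a}\Lambda^u = u_a\Lambda^u$, applying $Z_j$ (with $\alpha=i$) to (1.2) gives
\[ Z_jF_i(\Lambda) = \sum_{\substack{u = (u_a)_{a\in A} \\ \sum_{a\in A} u_a a = i}} \biggl(\sum_{a\in A}a_ju_a - i_j\biggr)\frac{\Lambda^u}{\prod_{a\in A} (u_a!)} = 0, \]
because the condition $\sum_{a\in A}u_aa=i$ forces $\sum_{a\in A}a_ju_a=i_j$. For $\Box_\ell$, each product of partial derivatives merely reindexes the sum (1.2):
\[ \prod_{\ell_a>0}\biggl(\frac{\partial}{\partial\Lambda_a}\biggr)^{\ell_a}F_i(\Lambda) = \sum_{\substack{u \\ \sum_{a\in A}u_aa\,=\,i-\sum_{\ell_a>0}\ell_aa}}\frac{\Lambda^u}{\prod_{a\in A}(u_a!)}, \]
while the second product yields the analogous sum over $\sum_{a\in A}u_aa = i+\sum_{\ell_a<0}\ell_aa$. Since $\sum_{a\in A}\ell_aa=0$, these two sums coincide, whence $\Box_\ell F_i=0$.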
Fix $\pi$ satisfying $\pi^{p-1} = -p$ and $\Theta(1) \equiv \pi\pmod{\pi^2}$. It
follows from Equation~(1.2) that the $F_i(\pi\Lambda)$ converge $p$-adically for all
$\Lambda$ satisfying $|\Lambda_a|<1$ for all $a\in A$. Let ${\mathcal F}(\Lambda)=F_0(\pi\Lambda)/F_0(\pi\Lambda^p)$. The main result of this paper is the following statement. Note that we make no restriction (such as nondegeneracy) on the choice of $\bar{\lambda}\in(\bar{\bf F}_q)^{|A|}$.
\begin{theorem}
The series ${\mathcal F}(\Lambda)$ converges $p$-adically for $|\Lambda_a|\leq 1$ for all $a\in A$ and the unit root of $L(f_{\bar{\lambda}};T)$ is given by
\[ u(\bar{\lambda}) = {\mathcal F}({\lambda}){\mathcal F}({\lambda}^p){\mathcal F}(\lambda^{p^2})\cdots
{\mathcal F}({\lambda}^{p^{\epsilon d(\bar{\lambda})-1}}), \]
where ${\lambda}$ denotes the Teichm\"{u}ller lifting of $\bar{\lambda}$ and $d(\bar{\lambda}) = [{\bf F}_q(\bar{\lambda}):{\bf F}_q]$.
\end{theorem}
\section{Analytic continuation}
We begin by proving the analytic continuation of the function ${\mathcal F}$ defined in the introduction.
Let $C\subseteq{\bf R}^n$ be the real cone generated by the elements of $A$ and let $\Delta\subseteq{\bf R}^n$ be the convex hull of the set $A\cup\{(0,\dots,0)\}$.
Put $M=C\cap{\bf Z}^n$. For $\nu\in M$, define the {\em weight}\/ of $\nu$,
$w(\nu)$, to be the least nonnegative real (hence rational) number such that
$\nu\in w(\nu)\Delta$. There exists $D\in{\bf Z}_{>0}$ such that $w(\nu)\in{\bf
Q}_{\geq 0}\cap{\bf Z}[1/D]$. The weight function $w$ is easily seen to have the
following properties:
\begin{align*}
\text{(i) }& w(\nu) \geq 0 \text{ and } w(\nu) = 0 \text{ if and only
if } \nu = 0, \\
\text{(ii) }& w(c\nu) = cw(\nu) \text{ for } c \in {\bf Z}_{\geq 0}, \\
\text{(iii) }& w(\nu + \mu) \leq w(\nu) + w(\mu) \text{ with equality holding
if and only if } \nu \text{ and } \mu \text{ are} \\
&\text{ cofacial, that is, } \nu \text{ and } \mu \text{ lie in a cone
over the same closed face of } \Delta. \\
\text{(iv) }& \text{If $\dim\Delta=n$, let $\{\ell_i\}_{i=1}^N$ be linear forms such that the codimension-one faces}\\
&\text{of $\Delta$ not containing the origin lie in the hyperplanes $\{\ell_i=1\}_{i=1}^N$. Then}\\
& w(\nu) = \max\{\ell_i(\nu)\}_{i=1}^N.
\end{align*}
Let $\Omega$ be a finite extension of ${\bf Q}_p$ containing $\pi$ and an element $\tilde{\pi}$ satisfying ${\rm ord}\:\tilde{\pi} = (p-1)/p^2$ (we always normalize the valuation so that ${\rm ord}\:p = 1$). Put
\[ R = \bigg\{ \xi(\Lambda) = \sum_{\nu\in({\bf Z}_{\geq 0})^{|A|}} c_\nu\Lambda^\nu\mid \text{$c_\nu\in\Omega$ and $\{|c_\nu|\}_\nu$ is bounded}\bigg\}, \]
\[ R' = \bigg\{ \xi(\Lambda) = \sum_{\nu\in({\bf Z}_{\geq 0})^{|A|}} c_\nu\Lambda^\nu\mid \text{$c_\nu\in\Omega$ and $c_\nu\to 0$ as $\nu\to\infty$}\bigg\}. \]
Equivalently, $R$ (resp.\ $R'$) is the ring of formal power series in $\{\Lambda_a\}_{a\in A}$ that converge on the open unit polydisk in $\Omega^{|A|}$ (resp.\ the closed unit polydisk in $\Omega^{|A|}$). Define a norm on $R$ by setting $|\xi(\Lambda)| = \sup_\nu\{|c_\nu|\}$. Both $R$ and $R'$ are complete in this norm.
Note that $(1.2)$ implies that the coefficients $F_i(\pi\Lambda)$ of $\exp\pi f_{\Lambda}(X)$ belong to~$R$.
Let $S$ be the set
\begin{multline*}
S = \\
\bigg\{\xi(\Lambda,X) = \sum_{\mu\in M} \xi_\mu(\Lambda) \tilde{\pi}^{-w(\mu)}X^{-\mu} \mid \text{$\xi_\mu(\Lambda)\in R$ and $\{|\xi_\mu(\Lambda)|\}_\mu$ is bounded}\bigg\}.
\end{multline*}
Let $S'$ be defined analogously with the conditions ``$\xi_\mu(\Lambda)\in R$'' replaced by ``$\xi_\mu(\Lambda)\in R'$''. Define a norm on $S$ by setting
\[ |\xi(\Lambda,X)| = \sup_\mu\{|\xi_\mu(\Lambda)|\}. \]
Both $S$ and $S'$ are complete under this norm.
Define $\theta(t) = \exp(\pi(t-t^p)) = \sum_{i=0}^\infty b_it^i$. One has (Dwork\cite[Section 4a)]{D})
\begin{equation}
{\rm ord}\: b_i\geq \frac{i(p-1)}{p^2}.
\end{equation}
Let
\[ F(\Lambda,X) = \prod_{a\in A}\theta(\Lambda_aX^a) = \sum_{\mu\in M} B_\mu(\Lambda)X^\mu. \]
\begin{lemma}
One has $B_\mu(\Lambda)\in R'$ and $|B_\mu(\Lambda)|\leq |\tilde{\pi}|^{w(\mu)}$.
\end{lemma}
\begin{proof}
From the definition,
\[ B_\mu(\Lambda) = \sum_{\nu\in({\bf Z}_{\geq 0})^{|A|}} B^{(\mu)}_\nu\Lambda^\nu, \]
where
\[ B^{(\mu)}_\nu = \begin{cases} \prod_{a\in A}b_{\nu_a} & \text{if $\sum_{a\in A}\nu_a a = \mu$,} \\ 0 & \text{if $\sum_{a\in A} \nu_a a\neq\mu$.} \end{cases} \]
It follows from (2.1) that $B^{(\mu)}_\nu\to 0$ as $\nu\to\infty$, which shows that $B_\mu(\Lambda)\in R'$. We have
\[ {\rm ord}\: B^{(\mu)}_\nu\geq\sum_{a\in A}{\rm ord}\: b_{\nu_a}\geq \sum_{a\in A}\frac{\nu_a(p-1)}{p^2}\geq w(\mu)\frac{p-1}{p^2}, \]
which implies $|B_\mu(\Lambda)|\leq |\tilde{\pi}|^{w(\mu)}$.
\end{proof}
By the proof of Lemma 2.2, we may write $B^{(\mu)}_\nu = \tilde{\pi}^{w(\mu)}\tilde{B}^{(\mu)}_\nu$ with $|\tilde{B}^{(\mu)}_\nu|\leq 1$. We may then write $B_\mu(\Lambda) = \tilde{\pi}^{w(\mu)}\tilde{B}_\mu(\Lambda)$ with $\tilde{B}_\mu(\Lambda) = \sum_\nu \tilde{B}^{(\mu)}_\nu \Lambda^\nu$ and $|\tilde{B}_\mu(\Lambda)|\leq 1$. Let
\[ \xi(\Lambda,X) = \sum_{\nu\in M} \xi_\nu(\Lambda)\tilde{\pi}^{-w(\nu)}X^{-\nu}\in S. \]
We claim that the product $F(\Lambda,X)\xi(\Lambda^p,X^p)$ is well-defined. Formally we have
\[ F(\Lambda,X)\xi(\Lambda^p,X^p) = \sum_{\rho\in{\bf Z}^n} \zeta_\rho(\Lambda)X^{-\rho}, \]
where
\begin{equation}
\zeta_\rho(\Lambda) = \sum_{\substack{\mu,\nu\in M \\ \mu-p\nu = -\rho}} \tilde{\pi}^{w(\mu)-w(\nu)}\tilde{B}_\mu(\Lambda)\xi_\nu(\Lambda^p).
\end{equation}
To prove convergence of this series, we need to show that $w(\mu)-w(\nu)\to\infty$ as $\nu\to\infty$. By property~(iv) of the weight function, for a given $\nu\in M$ we may choose a linear form $\ell$ (depending on $\nu$) for which $w(\nu) = \ell(\nu)$ while $w(\mu)\geq\ell(\mu)$. Since $\mu = p\nu-\rho$, we get
\begin{equation}
w(\mu)-w(\nu) \geq \ell(\mu-\nu) = \ell((p-1)\nu)-\ell(\rho) = (p-1)w(\nu)-\ell(\rho).
\end{equation}
As $\nu\to\infty$, $(p-1)w(\nu)\to\infty$ while $\ell(\rho)$ takes values in a finite set of rational numbers (there are only finitely many possibilities for $\ell$). This gives the desired result.
For a formal series $\sum_{\rho\in{\bf Z}^n} \zeta_\rho(\Lambda)X^{-\rho}$ with $\zeta_\rho(\Lambda)\in\Omega[[\Lambda]]$, define
\[ \gamma'\bigg(\sum_{\rho\in{\bf Z}^n} \zeta_\rho(\Lambda) X^{-\rho}\bigg) = \sum_{\rho\in M}\zeta_\rho(\Lambda) X^{-\rho} \]
and define for $\xi(\Lambda,X)\in S$
\begin{align*}
\alpha^*(\xi(\Lambda,X)) &= \gamma'(F(\Lambda,X)\xi(\Lambda^p,X^p)) \\
&= \sum_{\rho\in M}\zeta_\rho(\Lambda)X^{-\rho}.
\end{align*}
For $\rho\in M$ put $\eta_\rho(\Lambda) = \tilde{\pi}^{w(\rho)}\zeta_\rho(\Lambda)$, so that
\begin{equation}
\alpha^*(\xi(\Lambda,X)) = \sum_{\rho\in M} \eta_\rho(\Lambda)\tilde{\pi}^{-w(\rho)} X^{-\rho}
\end{equation}
with
\begin{equation}
\eta_\rho(\Lambda) = \sum_{\substack{\mu,\nu\in M\\ \mu-p\nu = -\rho}} \tilde{\pi}^{w(\rho)+w(\mu)-w(\nu)}\tilde{B}_\mu(\Lambda)\xi_\nu(\Lambda^p).
\end{equation}
Since $w(\rho)\geq\ell(\rho)$ for $\rho\in M$, Equation (2.4) implies that
\begin{equation}
w(\rho)+w(\mu)-w(\nu)\geq (p-1)w(\nu),
\end{equation}
so by Equation (2.6), $|\eta_\rho(\Lambda)|\leq |\xi(\Lambda,X)|$ for all $\rho\in M$. This shows $\alpha^*(\xi(\Lambda,X))\in S$ and
\[ |\alpha^*(\xi(\Lambda,X))|\leq |\xi(\Lambda,X)|. \]
Furthermore, this argument also shows that $\alpha^*(S')\subseteq S'$.
\begin{lemma}
If $|\xi_0(\Lambda)|\leq |\tilde{\pi}|^{(p-1)/D}$, then $|\alpha^*(\xi(\Lambda,X))|\leq |\tilde{\pi}|^{(p-1)/D}|\xi(\Lambda,X)|$.
\end{lemma}
\begin{proof}
This follows immediately from Equations (2.6) and (2.7) since $w(\nu)\geq 1/D$ for $\nu\neq 0$.
\end{proof}
From Equation (2.6), we have
\begin{equation}
\eta_0(\Lambda) = \sum_{\nu\in M} \tilde{B}_{p\nu}(\Lambda)\xi_\nu(\Lambda^p)\tilde{\pi}^{(p-1)w(\nu)}.
\end{equation}
Note that $\tilde{B}_0(\Lambda) = B_0(\Lambda)\equiv 1\pmod{\tilde{\pi}}$ since ${\rm ord}\:b_i>0$ for all $i>0$ implies ${\rm ord}\:B^{(0)}_\nu>0$ for all $\nu\neq 0$. Thus $B_0(\Lambda)$ is an invertible element of $R'$. The following lemma is then immediate from Equation (2.9).
\begin{lemma}
If $\xi_0(\Lambda)$ is an invertible element of $R$ (resp.\ $R'$), then so is $\eta_0(\Lambda)$.
\end{lemma}
Put
\[ T = \{\xi(\Lambda,X)\in S\mid \text{$|\xi(\Lambda,X)|\leq 1$ and $\xi_0(\Lambda) = 1$}\} \]
and put $T' = T\cap S'$. Using the notation of Equation~(2.5), define $\beta:T\to T$ by
\[ \beta(\xi(\Lambda,X)) = \frac{\alpha^*(\xi(\Lambda,X))}{\eta_0(\Lambda)}. \]
Note that $\beta(T')\subseteq T'$.
\begin{proposition}
The operator $\beta$ is a contraction mapping on the complete metric space $T$. More precisely, if $\xi^{(1)}(\Lambda,X),\xi^{(2)}(\Lambda,X)\in T$, then
\[ |\beta(\xi^{(1)}(\Lambda,X))-\beta(\xi^{(2)}(\Lambda,X))|\leq |\tilde{\pi}|^{(p-1)/D}|\xi^{(1)}(\Lambda,X)-\xi^{(2)}(\Lambda,X)|. \]
\end{proposition}
\begin{proof}
We have (in the obvious notation)
\begin{equation*}
\begin{split}
\beta(\xi^{(1)}(\Lambda,X))-\beta(\xi^{(2)}(\Lambda,X)) &= \frac{\alpha^*(\xi^{(1)}(\Lambda,X))}{\eta^{(1)}_0(\Lambda)} -
\frac{\alpha^*(\xi^{(2)}(\Lambda,X))}{\eta^{(2)}_0(\Lambda)} \\
&= \frac{\alpha^*(\xi^{(1)}(\Lambda,X)-\xi^{(2)}(\Lambda,X))}{\eta^{(1)}_0(\Lambda)} \\
& \qquad - \alpha^*(\xi^{(2)}(\Lambda,X))\frac{\eta^{(1)}_0(\Lambda) - \eta^{(2)}_0(\Lambda)}{\eta^{(1)}_0(\Lambda)\eta^{(2)}_0(\Lambda)}.
\end{split}
\end{equation*}
Since $\eta^{(1)}_0(\Lambda)-\eta^{(2)}_0(\Lambda)$ is the coefficient of $X^0$ in $\alpha^*(\xi^{(1)}(\Lambda,X)-\xi^{(2)}(\Lambda,X))$, we have
\[ |\eta^{(1)}_0(\Lambda)-\eta^{(2)}_0(\Lambda)|\leq
|\alpha^*(\xi^{(1)}(\Lambda,X)-\xi^{(2)}(\Lambda,X))|. \]
And since the coefficient of $X^0$ in $\xi^{(1)}(\Lambda,X)-\xi^{(2)}(\Lambda,X)$ equals $0$, the proposition follows from Lemma~2.8.
\end{proof}
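Since $|\tilde{\pi}|^{(p-1)/D}<1$ and $T$ is complete, the Banach fixed point theorem applies and the iterates of $\beta$ converge geometrically. As a toy illustration of this nonarchimedean contraction principle (a scalar stand-in, not the operator $\beta$ itself; the map and constants below are illustrative only), consider $\varphi(x) = 1 + 3x^2$ on ${\bf Z}_3$: one has $|\varphi(x)-\varphi(y)|_3 = |3|_3|x+y|_3|x-y|_3 \leq \tfrac{1}{3}|x-y|_3$, so iteration gains at least one $3$-adic digit per step.

```python
# Toy nonarchimedean contraction: phi(x) = 1 + 3*x**2 maps Z_3 to itself
# with contraction ratio |3|_3 = 1/3, so iterates converge geometrically
# to the unique fixed point (Banach fixed point theorem).
p, k = 3, 25
mod = p ** k          # Z/3^25 is an exact truncation of Z_3 for this map
x = 0
for _ in range(k + 1):
    x = (1 + p * x * x) % mod

# x now agrees with the unique fixed point of phi modulo 3^25
assert x == (1 + p * x * x) % mod
assert x % p == 1     # the fixed point reduces to 1 mod 3, since phi(x) = 1 mod 3
```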
{\it Remark}: Proposition 2.11 implies that $\beta$ has a unique fixed point in $T$. And since $\beta$ is stable on $T'$, that fixed point must lie in $T'$. Let $\xi(\Lambda,X)\in T'$ be the unique fixed point of $\beta$. The equation $\beta(\xi(\Lambda,X)) = \xi(\Lambda,X)$ is equivalent to the equation
\[ \alpha^*(\xi(\Lambda,X)) = \eta_0(\Lambda)\xi(\Lambda,X). \]
Since $\alpha^*$ is stable on $S'$, it follows that
\begin{equation}
\eta_0(\Lambda)\xi_\mu(\Lambda)\in R'\quad\text{for all $\mu\in M$}.
\end{equation}
In particular, since $\xi_0(\Lambda) = 1$, we have $\eta_0(\Lambda)\in R'$.
Put $C_0=C\cap(-C)$, the largest subspace of ${\bf R}^n$ contained in $C$, and put $M_0 = {\bf Z}^n\cap C_0$, a subgroup of $M$.
For a formal series $\sum_{\mu\in{\bf Z}^n} c_\mu(\Lambda)X^\mu$ with $c_\mu(\Lambda)\in\Omega[[\Lambda]]$ we define
\[ \gamma\bigg(\sum_{\mu\in{\bf Z}^n} c_\mu(\Lambda)X^\mu\bigg) = \sum_{\mu\in M_0} c_\mu(\Lambda)X^\mu \]
and set
\[ \zeta(\Lambda,X) = \gamma(\exp(\pi f_\Lambda(X))). \]
Of course, when the origin is an interior point of $\Delta$, then $M_0={\bf Z}^n$ and $\zeta(\Lambda,X) = \exp(\pi f_\Lambda(X))$. In any case, the coefficients of $\zeta(\Lambda,X)$ belong to $R$.
Since $\exp(\pi f_\Lambda(X)) = \prod_{a\in A}\exp(\pi\Lambda_aX^a)$, we can expand this product to get
\begin{align*}
\zeta(\Lambda,X) &= \gamma\bigg(\prod_{a\in A} \sum_{\nu_a=0}^\infty \frac{(\pi\Lambda_aX^a)^{\nu_a}}{\nu_a!}\bigg) \\
&= \sum_{\mu\in M_0}G_\mu(\Lambda)\tilde{\pi}^{-w(\mu)}X^{-\mu},
\end{align*}
where $G_\mu(\Lambda) = \sum_{\nu\in({\bf Z}_{\geq 0})^{|A|}} G^{(\mu)}_\nu\Lambda^\nu$ with
\[ G^{(\mu)}_\nu = \begin{cases} \tilde{\pi}^{w(\mu)}\prod_{a\in A}\frac{\pi^{\nu_a}}{\nu_a!} & \text{if $\sum_{a\in A}\nu_a a = -\mu$,} \\
0 & \text{if $\sum_{a\in A}\nu_a a\neq -\mu$.}
\end{cases} \]
Since ${\rm ord}\:\pi^i/i!>0$ for all $i>0$, it follows that $G_\mu(\Lambda)\in R$, $|G_\mu(\Lambda)|\leq |\tilde{\pi}|^{w(\mu)}$, and $G_0(\Lambda)$ is invertible in $R$. This implies that $\zeta(\Lambda,X)/G_0(\Lambda)\in T$.
Note also that since $F(\Lambda,X) = \exp(\pi f_{\Lambda}(X))/\exp(\pi f_{\Lambda^p}(X^p))$, it is straightforward to check that
\[ \gamma'(F(\Lambda,X)) = \gamma(F(\Lambda,X)) = \gamma\bigg( \frac{\exp \pi f_{\Lambda}(X)}{\exp\pi f_{\Lambda^p}(X^p)}\bigg) = \frac{\zeta(\Lambda,X)}{\zeta(\Lambda^p,X^p)}. \]
It follows that if $\xi(\Lambda,X)$ is a series satisfying $\gamma(\xi(\Lambda,X))\in S$, then
\begin{align}
\alpha^*(\gamma(\xi(\Lambda,X))) &= \gamma'(F(\Lambda,X)\gamma(\xi(\Lambda^p,X^p))) = \gamma(F(\Lambda,X))\gamma(\xi(\Lambda^p,X^p)) \\ &=\frac{\zeta(\Lambda,X)\gamma(\xi(\Lambda^p,X^p))}{\zeta(\Lambda^p,X^p)}. \nonumber
\end{align}
{\it Remark}: In terms of the $A$-hypergeometric functions $\{F_i(\Lambda)\}_{i\in M}$ defined in Equation~(1.1), we have $\exp(\pi f_\Lambda(X)) = \sum_{i\in M}F_i(\pi\Lambda)X^i$, so for $i\in M_0$ we have the relation
\begin{equation}
F_i(\pi\Lambda) = \tilde{\pi}^{-w(-i)}G_{-i}(\Lambda).
\end{equation}
\begin{proposition}
The unique fixed point of $\beta$ is $\zeta(\Lambda,X)/G_0(\Lambda)$.
\end{proposition}
\begin{proof}
By Equation~(2.13), we have
\begin{equation}
\alpha^*\bigg(\frac{\zeta(\Lambda,X)}{G_0(\Lambda)}\bigg) = \frac{G_0(\Lambda)}{G_0(\Lambda^p)}\frac{\zeta(\Lambda,X)}{G_0(\Lambda)},
\end{equation}
which is equivalent to the assertion of the proposition.
\end{proof}
By the Remark following Proposition 2.11, $\zeta(\Lambda,X)/G_0(\Lambda)\in T'$. This gives the following result.
\begin{corollary}
For all $\mu\in M_0$, $G_\mu(\Lambda)/G_0(\Lambda)\in R'$.
\end{corollary}
In the notation of the Remark following Proposition 2.11, one has $\xi(\Lambda,X) = \zeta(\Lambda,X)/G_0(\Lambda)$ and $\eta_0(\Lambda) = G_0(\Lambda)/G_0(\Lambda^p)$, so Equation~(2.12) implies the following result.
\begin{corollary}
For all $\mu\in M_0$, $G_\mu(\Lambda)/G_0(\Lambda^p)\in R'$.
\end{corollary}
In view of Equation~(2.14), Corollary~2.18 implies that the function ${\mathcal F}(\Lambda) = F_0(\pi\Lambda)/F_0(\pi\Lambda^p)$ converges on the closed unit polydisk, which was the first assertion of Theorem~1.3.
\section{$p$-adic theory}
Fix $\bar{\lambda} = (\bar{\lambda}_a)_{a\in A}\in(\bar{\bf F}_q)^{|A|}$ and let $\lambda = (\lambda_a)_{a\in A}\in(\bar{\bf Q}_p)^{|A|}$, where $\lambda_a$ is the Teichm\"uller lifting of $\bar{\lambda}_a$. We recall Dwork's description of $L(f_{\bar{\lambda}};T)$. Let $\Omega_0 = {\bf Q}_p(\lambda,\zeta_p,\tilde{\pi})$ ($ = {\bf Q}_p(\lambda,\pi,\tilde{\pi})$) and let ${\mathcal O}_0$ be the ring of integers of $\Omega_0$.
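The Teichm\"uller lifting can be computed to any finite $p$-adic precision by iterating the Frobenius map $x\mapsto x^p$, which gains one $p$-adic digit per step because $x\equiv y \pmod{p^m}$ implies $x^p\equiv y^p\pmod{p^{m+1}}$. The following sketch (illustrative, and restricted to residue classes in the prime field ${\bf F}_p$ rather than ${\bf F}_q$) verifies the defining properties $\lambda\equiv\bar{\lambda}\pmod{p}$ and $\lambda^{p-1}=1$:

```python
def teichmuller(a, p, k):
    """Teichmuller lift of a nonzero residue a in F_p, computed modulo p**k.

    Iterating x -> x**p stabilizes: after k steps the result is correct
    modulo p**(k+1), hence exact in Z/p**k.
    """
    x, mod = a % p, p ** k
    for _ in range(k):
        x = pow(x, p, mod)
    return x

p, k = 7, 10
for a in range(1, p):
    t = teichmuller(a, p, k)
    assert t % p == a                    # t lifts a
    assert pow(t, p - 1, p ** k) == 1    # t is a (p-1)-st root of unity mod p^k
```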
We consider certain spaces of functions with support in $M$.
We will assume that $\Omega_0$ has been extended by a finite totally ramified extension so that it contains an element $\tilde{\pi}_0$ satisfying $\tilde{\pi}_0^D = \tilde{\pi}$. For $\nu\in M$ we shall write $\tilde{\pi}^{w(\nu)}$ to mean $\tilde{\pi}_0^{Dw(\nu)}$. Using this convention to simplify notation, we define
\begin{equation}
B=\bigg\{\sum_{\nu \in M}A_\nu\tilde{\pi}^{w(\nu)}X^{\nu} \mid A_\nu \in
\Omega_0,\;A_\nu\rightarrow 0 \text{ as } \nu \rightarrow \infty \bigg\}.
\end{equation}
Then $B$ is an $\Omega_0$-algebra which is complete under the norm
$$ \bigg|\sum_{\nu \in M}A_\nu\tilde{\pi}^{w(\nu)}X^{\nu}\bigg| = \sup_{\nu \in M}|A_\nu|. $$
We construct a Frobenius map with arithmetic import in the usual way. Let
$$ F(\lambda,X) = \prod_{a\in A}\theta(\lambda_aX^a)
= \sum_{\mu \in M} B_\mu(\lambda)X^{\mu}, $$
i.e., $F(\lambda,X)$ is the specialization of $F(\Lambda,X)$ at $\Lambda = \lambda$, which is permissible by Lemma~2.2.
Note also that Lemma~2.2 implies
\[ {\rm ord}\:B_\mu(\lambda)\geq \frac{w(\mu)(p-1)}{p^2}, \]
so we may write $B_\mu(\lambda) = \tilde{\pi}^{w(\mu)}\tilde{B}_\mu(\lambda)$ with $\tilde{B}_\mu(\lambda)$ $p$-integral.
Let
$$ \Psi(X^{\mu}) = \begin{cases} X^{\mu/p} & \text{if $p|\mu_i$
for all $i$,} \\ 0 & \text{otherwise}.
\end{cases} $$
We show that $\Psi \circ F(\lambda,X) $ acts on $B$. If
$\xi = \sum_{\nu \in M}A_\nu\tilde{\pi}^{w(\nu)}X^{\nu} \in B$, then
$$\Psi\bigg(\bigg(\sum_{\mu\in M} \tilde{\pi}^{w(\mu)} \tilde{B}_\mu(\lambda)X^{\mu}\bigg) \bigg(\sum_{\nu\in M}
A_\nu\tilde {\pi}^{w(\nu)} X^{\nu}\bigg)\bigg) = \sum_{\omega\in M} C_\omega(\lambda) \tilde{\pi}^{w(\omega)}X^{\omega}$$
where
$$ C_\omega(\lambda) = \sum_\nu \tilde {\pi}^{w(p\omega-\nu)+w(\nu) - w(\omega)}\tilde{B}_{p\omega -\nu}(\lambda)A_\nu $$
(a finite sum). We have
$$ pw(\omega) = w(p\omega) \leq w(p\omega - \nu) + w(\nu) $$
so that
\begin{equation}
{\rm ord}\:C_\omega(\lambda) \geq \inf_\nu \{{\rm ord}\:\tilde{\pi}^{(p-1)w(\omega)}A_\nu\} = \frac{(p-1)^2w(\omega)}{p^2} + \inf_\nu \{{\rm ord}\:A_\nu\}.
\end{equation}
This implies that $\Psi(F(\lambda,X)\xi) \in B$.
Let $d(\bar{\lambda}) = [{\bf F}_q(\bar{\lambda}):{\bf F}_q]$, so that $\lambda^{p^{\epsilon d(\bar{\lambda})}}=\lambda$. Put
\[ \alpha_\lambda = \Psi^{\epsilon d(\bar{\lambda})}\circ\bigg(\prod_{i=0}^{\epsilon d(\bar{\lambda}) - 1} F(\lambda^{p^i},X^{p^i})\bigg). \]
For any power series $P(T)$ in the variable $T$ with constant term $1$, define $P(T)^{\delta_{\bar{\lambda}}} = P(T)/P(p^{\epsilon d(\bar{\lambda})}T)$.
Then $\alpha_\lambda$ is a completely continuous operator on $B$ and the Dwork Trace Formula (see Dwork \cite{D}, Serre \cite{S}) gives
\begin{equation}
L(f_{\bar{\lambda}},\Theta_{\bar{\lambda}},{\bf T}^n;T)^{(-1)^{n+1}} = \det(I-T\alpha_\lambda | B)^{\delta_{\bar{\lambda}}^n}.
\end{equation}
By Equation~(3.2), the $(\omega,\nu)$-entry of the matrix of $\alpha_\lambda$ (\cite[Section 2]{S}) has ${\rm ord}>0$ unless $\omega=\nu=0$. The formula for $\det(I-T\alpha_\lambda)$ (\cite[Proposition~7a)]{S}) then shows that this Fredholm determinant can have at most a single unit root. Since $L(f_{\bar{\lambda}};T)$ has at least one unit root (Section~1), Equation~(3.3) proves that $L(f_{\bar{\lambda}};T)$ has exactly one unit root.
\section{Dual theory}
It will be important to consider the trace formula in the dual theory as well. The basis for this construction goes back to \cite{D+} and \cite{S}. We define
$$ B^{\ast} = \bigg\{\xi^*=\sum_{\mu \in M}A^{\ast}_\mu \tilde
{\pi}^{-w(\mu)}X^{-\mu}\mid \text{ $\{A^{\ast}_\mu\}_{\mu\in M}$ is a
bounded subset of $\Omega_0$}\bigg\}, $$
a $p$-adic Banach space with the norm
$|\xi^{\ast}| = \sup_{\mu \in M} \{|A^{\ast}_\mu|\}$.
We define a pairing $\langle\;,\;\rangle: B^*\times B\rightarrow{\Omega}_0$: if $\xi = \sum_{\mu \in M} A_\mu \tilde {\pi}^{w(\mu)}X^{\mu}$, $\xi^{\ast}= \sum_{\mu \in M} A^{\ast}_\mu \tilde{\pi}^{-w(\mu)}X^{-\mu} $, set
\[ \langle\xi^*, \xi\rangle = \sum_{\mu \in M}A_\mu A^{\ast}_\mu \in {\Omega}_0. \]
The series on the right converges since $A_\mu \rightarrow 0$ as $\mu \rightarrow\infty $ and $\{A^{\ast}_\mu\}_{\mu\in M}$ is bounded. This pairing identifies $B^*$ with the dual space of $B$, i.e., the space of continuous linear mappings from $B$ to $\Omega_0$ (see \cite[Proposition~3]{S}).
Let $\Phi$ be the endomorphism of the space of formal series defined by
\[ \Phi\bigg(\sum_{\mu\in{\bf Z}^n}c_\mu X^{-\mu}\bigg) = \sum_{\mu\in{\bf Z}^n} c_\mu X^{-p\mu}, \]
and let $\gamma'$ be the endomorphism
\[ \gamma'\bigg(\sum_{\mu\in{\bf Z}^n}c_\mu X^{-\mu}\bigg) = \sum_{\mu\in M} c_\mu X^{-\mu}. \]
Consider the formal composition $\alpha_\lambda^* = \gamma'\circ\bigg(\prod_{i=0}^{\epsilon d(\bar{\lambda})-1} F(\lambda^{p^i},X^{p^i})\bigg)\circ\Phi^{\epsilon d(\bar{\lambda})}$.
\begin{proposition}
The operator $\alpha_\lambda^*$ is an endomorphism of $B^*$ which is adjoint to $\alpha_\lambda:B\to B$.
\end{proposition}
\begin{proof}
As $\alpha_\lambda^*$ is the composition of the operators $\gamma'\circ F(\lambda^{p^i},X)\circ\Phi$ and $\alpha_\lambda$ is the composition of the operators $\Psi\circ F(\lambda^{p^i},X)$, $i=0,\dots,\epsilon d(\bar{\lambda})-1$,
it suffices to check that $\gamma'\circ F(\lambda,X)\circ\Phi$ is an endomorphism of $B^*$ adjoint to $\Psi\circ F(\lambda,X):B\to B$. Let $\xi^*(X)= \sum_{\mu \in M}A^{\ast}_\mu\tilde{\pi}^{-w(\mu)}X^{-\mu}\in B^*$. The proof that the product $F(\lambda,X)\xi^*(X^p)$ is well-defined is analogous to the proof of convergence of the series~(2.3). We have
\[ \gamma'(F(\lambda,X)\xi^*(X^p)) = \sum_{\omega\in M} C_\omega(\lambda)\tilde{\pi}^{-w(\omega)}X^{-\omega}, \]
where
\begin{equation}
C_\omega(\lambda) = \sum_{\mu-p\nu=-\omega}\tilde{B}_\mu(\lambda)A^*_\nu
\tilde{\pi}^{w(\omega)+w(\mu)-w(\nu)}.
\end{equation}
Note that
$$ pw(\nu) = w(p\nu) \leq w(\omega) + w(\mu) $$
since $p\nu = \omega + \mu $. Thus
$$ (p-1)w(\nu) \leq w(\omega) + w(\mu) -w(\nu), $$
which implies that the series on the right-hand side of (4.2) converges and that $|C_\omega(\lambda)|\leq |\xi^*|$ for all $\omega\in M$. It follows that $\gamma'(F(\lambda,X)\xi^*(X^p))\in B^*$. It is straightforward to check that
$\langle\Phi(X^{-\mu}),X^\nu\rangle =\langle X^{-\mu},\Psi(X^\nu)\rangle$
and that
\[ \langle \gamma'(F(\lambda,X)X^{-\mu}),X^\nu\rangle =
\langle X^{-\mu}, F(\lambda,X)X^\nu\rangle \]
for all $\mu,\nu\in M$, which implies the maps are adjoint.
\end{proof}
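A minimal finite toy (the $\tilde{\pi}$-weight normalization ignored, $n=2$, and the box size chosen arbitrarily) illustrating the adjointness of $\Phi$ and $\Psi$ on monomials, where the pairing reduces to $\langle X^{-\mu},X^\nu\rangle = \delta_{\mu\nu}$:

```python
# Check <Phi(X^{-mu}), X^nu> = <X^{-mu}, Psi(X^nu)> on a finite grid of
# exponents; both sides equal 1 exactly when nu = p*mu.
p, B = 3, 7   # prime and box size; monomials indexed by (mu1, mu2) in [0, B)^2

def pair(mu, nu):
    return 1 if mu == nu else 0

def psi(nu):          # Psi(X^nu) = X^(nu/p) if p divides every component, else 0
    return tuple(c // p for c in nu) if all(c % p == 0 for c in nu) else None

def phi(mu):          # Phi(X^{-mu}) = X^{-p mu}
    return tuple(p * c for c in mu)

grid = [(i, j) for i in range(B) for j in range(B)]
for mu in grid:
    for nu in grid:
        lhs = pair(phi(mu), nu)
        s = psi(nu)
        rhs = 0 if s is None else pair(mu, s)
        assert lhs == rhs
```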
By \cite[Proposition~15]{S} we have $\det(I-T\alpha^*_\lambda\mid B^*) = \det(I-T\alpha_\lambda\mid B)$, so Equation~(3.3) implies
\begin{equation}
L(f_{\bar{\lambda}},\Theta_{\bar{\lambda}},{\bf T}^n;T)^{(-1)^{n+1}} = \det(I-T\alpha^*_\lambda\mid B^*)^{\delta^n_{\bar{\lambda}}}.
\end{equation}
From Equations~(2.14) and~(2.16), we have
\[ \alpha^*\bigg(\frac{\zeta(\Lambda,X)}{G_0(\Lambda)}\bigg) = {\mathcal F}(\Lambda)\frac{\zeta(\Lambda,X)}{G_0(\Lambda)}. \]
It follows by iteration that for $m\geq 0$,
\begin{equation}
(\alpha^*)^m\bigg(\frac{\zeta(\Lambda,X)}{G_0(\Lambda)}\bigg) = \bigg(\prod_{i=0}^{m-1}{\mathcal F}(\Lambda^{p^i})\bigg)\frac{\zeta(\Lambda,X)}{G_0(\Lambda)}.
\end{equation}
We have
\[ \frac{\zeta(\Lambda,X)}{G_0(\Lambda)} = \sum_{\mu\in M_0} \frac{G_\mu(\Lambda)}{G_0(\Lambda)}\tilde{\pi}^{-w(\mu)}X^{-\mu}, \]
so by Corollary~2.17 we may evaluate at $\Lambda = \lambda$ to get an element of $B^*$:
\[ \frac{\zeta(\Lambda,X)}{G_0(\Lambda)}\bigg|_{\Lambda = \lambda} = \sum_{\mu\in M_0} \frac{G_\mu(\Lambda)}{G_0(\Lambda)}\bigg|_{\Lambda = \lambda}\tilde{\pi}^{-w(\mu)}X^{-\mu}\in B^*. \]
It is straightforward to check that the specialization of the left-hand side of Equation~(4.4) with $m = \epsilon d(\bar{\lambda})$ at $\Lambda = \lambda$ is exactly
$\alpha^*_\lambda((\zeta(\Lambda,X)/G_0(\Lambda))|_{\Lambda = \lambda})$,
so specializing Equation~(4.4) with $m=\epsilon d(\bar{\lambda})$ at $\Lambda = \lambda$ gives
\begin{equation}
\alpha^*_\lambda\bigg( \frac{\zeta(\Lambda,X)}{G_0(\Lambda)}\bigg|_{\Lambda = \lambda}\bigg) = \bigg(\prod_{i=0}^{\epsilon d(\bar{\lambda})-1}{\mathcal F}(\lambda^{p^i})\bigg)\frac{\zeta(\Lambda,X)}{G_0(\Lambda)}\bigg|_{\Lambda = \lambda}.
\end{equation}
Equation~(4.5) shows that $\prod_{i=0}^{\epsilon
d(\bar{\lambda})-1}{\mathcal F}(\lambda^{p^i})$ is a (unit)
eigenvalue of $\alpha^*_\lambda$, hence by Equation~(4.3) it is the
unique unit root of $L(f_{\bar{\lambda}};T)$.
\section{Introduction}
The task of time stretching or pitch shifting music signals is fundamental in computer music and has many applications within areas such as transcription, mixing, transposition, and auto-tuning \cite{Ishizaki2009,Risset20022}. Time stretching is the operation of changing the length of a signal, without affecting its spectral content, whereas pitch shifting is the operation of raising or lowering the original pitch of a sound without affecting its length. As pitch shifting can be performed by combining time stretching and sampling rate conversion, we shall only focus on time stretching in this paper.
Introduced by Flanagan and Golden in \cite{Flanagan1966}, the phase vocoder (PV) stretches a signal by modifying its short time Fourier transform (STFT) in such a way that a stretched version can be obtained by reconstructing with respect to a different hop size. Through the years many improvements have been made and the PV is today a well-established technique \cite{Portnoff1976,Griffen1984,Laroche1995,Laroche1999}. Unfortunately, it is known that the PV induces artifacts known as ``phasiness'' and ``transient smearing'' \cite{Laroche1999}. Phasiness is perceived as a characteristic colouration of the sound whereas transient smearing is heard as a lack of sharpness at the transients. Many modern techniques exist for dealing with these problems \cite{Laroche1999,Roebel2003,Dorran2004}, but with only a few exceptions \cite{Bonada2000,Derrien2007,Liuni2013}, they are all based on the traditional idea of modifying a time-frequency (TF) representation obtained through the STFT. The STFT applies a sampling grid corresponding to a uniform TF resolution over the whole TF plane. For music signals it is often more appropriate to use good time resolution for the onset of attack transients and good frequency resolution for the sinusoidal components. We will consider the task of time stretching in the framework of Gabor theory \cite{Christensen2016,Grochenig2001}. Applying nonstationary Gabor frames (NSGFs) \cite{NuHAGnsgf2011,Dorfler2014} we extend the theory of the PV to incorporate TF representations with the above-mentioned adaptive TF resolution.
In Section \ref{Sec:stateoftheart} of this article we describe some related work and explain the contributions of the proposed algorithm in relation to state of the art. In Section \ref{Sec:GaborTheory} we introduce the necessary tools from Gabor theory, including the painless condition for NSGFs. We use this framework to present the classical PV in Section \ref{Sec:PhaseVocoder} and the proposed algorithm in Section \ref{Sec:PVNSGF}. We include the derivation of the classical PV for two reasons: Firstly, because it makes the transition to the nonstationary case easier and secondly, because we have not found any other thorough derivation in the literature that uses the framework of Gabor theory. Finally, in Section \ref{Sec:Experiments} we provide the numerical experiments and in Section \ref{Sec:Conclusion} we give the conclusions.
\section{State of the art}\label{Sec:stateoftheart}
Traditionally, time-stretching algorithms are categorized into time-domain and frequency-domain techniques \cite{Laroche1995}. Time-domain techniques, such as \emph{synchronous overlap-add} (SOLA) \cite{Roucos1985} (and its extension PSOLA \cite{Charpentier1986}), are capable of producing good results for monophonic signals, at a low computational cost, but tend to perform poorly when applied to polyphonic signals such as music.
In contrast, frequency-domain methods, such as the PV \cite{Flanagan1966}, also work for polyphonic signals but with induced artifacts of their own, namely phasiness and transient smearing. As a first improvement to reduce phasiness, Puckette \cite{Puckette1995} suggested to use \emph{phase-locking} to keep phase coherence intact over neighbouring frequency channels. This method was further studied by Laroche and Dolson \cite{Laroche1999} who proposed to separate the frequency axis into \emph{regions of influences}, located around \emph{peak channels}, and to lock the phase values of channels in a given region according to the phase value of the corresponding peak channel.
To deal with the problem of transient smearing, Bonada \cite{Bonada2000} proposed to keep the stretch factor equal to one during attack transients and then \emph{reinitialize} all phase values for channels above a certain frequency cut, i.e., the phase values of these channels are set equal to the original phase values. In this way, the original timbre is kept intact without ruining the phase coherence for stationary partials at the lower frequencies. A more advanced approach for reducing transient smearing was presented by R{\"o}bel in \cite{Roebel2003}. Here, the transient detection algorithm works on the level of frequency channels and the reinitialization of a detected channel is performed for all time instants influenced by the transient. In this way, there is no need to set the stretch factor equal to one, which is a great advantage in regions with a dense set of transients.
More recent techniques have successfully reduced the PV artifacts by applying more sophisticated TF representations than the STFT. Bonada proposed the application of different FFTs for each time instant, which results in a TF representation with good frequency resolution at the lower frequencies and good time resolution at the higher frequencies. Derrien \cite{Derrien2007} suggested to construct an adaptive TF representation by choosing TF coefficients from a multi-scale Gabor dictionary under a matching constraint. A more recent algorithm, based on the theory of NSGFs, was proposed by Liuni et al. \cite{Liuni2013}. The idea behind their algorithm is to choose a fixed number of frequency bands and to apply, in each band, a NSGF with resolution varying in time. The window functions corresponding to the NSGFs are adapted to the signal by minimizing the \emph{R{\'e}nyi entropy}, which ensures a sparse TF representation. The techniques described in \cite{Roebel2003} and \cite{Liuni2013} are both implemented in the (commercialized) \emph{super phase vocoder} (SuperVP) from IRCAM\footnote{\url{http://anasynth.ircam.fr/home/english/software/supervp}}.
\subsection*{Contributions to state of the art}
In order to generalize the techniques from the classical PV to the case where the TF representation is obtained through a NSGF, it is necessary to use the same number of frequency channels for each time instant. This construction corresponds to a \emph{uniform} NSGF and, since the number of frequency channels must be at least equal to the length of the largest window function, necessarily leads to a high redundancy of the resulting transform.
In this paper we propose an algorithm, which fully exploits the potential of NSGFs to provide adaptivity while keeping a redundancy similar to the classical PV. This is achieved by letting the number of frequency channels for a given time instant equal the length of the window function selected for that particular time instant. This approach allows for using very long window functions, which is an advantage in regions with stationary partials. We summarize the contributions of this article as follows:
\begin{enumerate}
\item We explain the classical PV and the proposed algorithm in a unified framework using discrete Gabor theory.
\item We present a new time stretching algorithm, which uses an adaptive TF representation of lower redundancy than any other previously published algorithm.
\item While the proposed algorithm combines various familiar techniques from the literature, several new techniques are introduced in order to tackle the challenges arising from the application of non-uniform NSGFs. Hence, the proposed algorithm relies on techniques such as phase locking \cite{Laroche1999}, transient detection \cite{Dixon06}, and quadratic interpolation \cite{Beauregard2015} and integrates new methods for dealing with attack transients (cf. Section \ref{Sec:PVNSGFmod}), for determining the phase values from frequencies estimated by quadratic interpolation (cf. Section \ref{Sec:PVNSGFmod}), and for constructing the stretched signal from the modified (non-uniform) NSGF (cf. Section \ref{Sec:PVNSGFsyn}).
\item We provide a collection of sound files on-line (cf. Section \ref{Sec:Experiments}) and include all source code necessary for reproducing the results.
\end{enumerate}
\section{Discrete Gabor theory}\label{Sec:GaborTheory}
We write $f=(f[0],\ldots,f[L-1])^T$ for a vector $f\in \bb{C}^L$ and $\mathbb{Z}_L=\{0,\ldots,L-1\}$ for the cyclic group. Given $a,b\in \bb{Z}_L$, we define the \emph{translation} operator $\textbf{T}_a:\bb{C}^L\rightarrow \bb{C}^L$ and the \emph{modulation} operator $\textbf{M}_b:\bb{C}^L\rightarrow \bb{C}^L$ by
\begin{equation*}
\textbf{T}_af[l]:=f[l-a]\quad \text{and}\quad \textbf{M}_bf[l]:=f[l]e^{\frac{2\pi i bl}{L}},
\end{equation*}
for $l=0,\ldots,L-1$ and with translation performed modulo $L$. For $g\in \bb{C}^L$ and $a,b\in \bb{Z}_L$, we define the \emph{Gabor system} $\{g_{m,n}\}_{m\in \bb{Z}_M,n\in \bb{Z}_N}$ as
\begin{equation*}
g_{m,n}[l]:=\textbf{T}_{na}\textbf{M}_{mb}g[l]=g[l-na]e^{\frac{2\pi imb(l-na)}{L}},
\end{equation*}
with $Na=Mb=L$ for some $N,M\in \bb{N}$ \cite{Strohmer1998,Sondergaard2007}. If $\{g_{m,n}\}_{m,n}$ spans $\bb{C}^L$, then it is called a \emph{Gabor frame}. The associated \emph{frame operator} $\textbf{S}:\bb{C}^L\rightarrow \bb{C}^L$, defined by
\begin{equation*}
\textbf{S}f:=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\scalarp{f,g_{m,n}}g_{m,n},\quad \forall f\in \bb{C}^L,
\end{equation*}
is invertible if and only if $\{g_{m,n}\}_{m,n}$ is a Gabor frame \cite{Christensen2016}. If $\textbf{S}$ is invertible, then we have the expansions
\begin{equation}\label{eq:DGTexpansion}
f=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}\scalarp{f,g_{m,n}}\tilde{g}_{m,n},\quad \forall f\in \bb{C}^L,
\end{equation}
with $\tilde{g}_{m,n}:=\textbf{T}_{na}\textbf{M}_{mb}\textbf{S}^{-1}g$. We say that $\{\tilde{g}_{m,n}\}_{m,n}$ is the \emph{canonical dual frame} of $\{g_{m,n}\}_{m,n}$ and that $\textbf{S}^{-1}g$ is the \emph{canonical dual window} of $g$. The \emph{discrete Gabor transform} (DGT) of $f\in \bb{C}^L$ is the matrix $\textbf{c}\in \bb{C}^{M\times N}$ given by the coefficients $\{\langle f,g_{m,n}\rangle\}_{m,n}$ in the expansion \eqref{eq:DGTexpansion}. Finally, the ratio $MN/L$ is called the \emph{redundancy} of $\{g_{m,n}\}_{m,n}$.
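The definitions above can be exercised in a small, self-contained numerical sketch (pure Python; all parameters are illustrative). It builds a Gabor frame on $\bb{C}^{12}$ whose window support length equals $M$, so the frame operator is diagonal (the situation formalized for NSGFs as the painless case below), computes the DGT, and verifies the expansion \eqref{eq:DGTexpansion}:

```python
import cmath
import random

L, a, b = 12, 2, 3
N, M = L // a, L // b          # N = 6 time shifts, M = 4 frequency channels
# window of support length 4 <= M, so the frame operator is diagonal
g = [1.0 if l < 4 else 0.0 for l in range(L)]

def atom(m, n):
    """g_{m,n}[l] = g[l - n a] * exp(2 pi i m b (l - n a) / L), indices mod L."""
    return [g[(l - n * a) % L]
            * cmath.exp(2j * cmath.pi * m * b * ((l - n * a) % L) / L)
            for l in range(L)]

random.seed(0)
f = [complex(random.random(), random.random()) for _ in range(L)]

# DGT coefficients c[m][n] = <f, g_{m,n}>
c = [[sum(f[l] * atom(m, n)[l].conjugate() for l in range(L)) for n in range(N)]
     for m in range(M)]

# diagonal frame operator: S_{ll} = M * sum_n |g[l - n a]|^2
S = [M * sum(abs(g[(l - n * a) % L]) ** 2 for n in range(N)) for l in range(L)]
assert all(s > 0 for s in S)           # frame condition

# expansion f = sum_{m,n} <f, g_{m,n}> S^{-1} g_{m,n}
rec = [sum(c[m][n] * atom(m, n)[l] / S[l] for m in range(M) for n in range(N))
       for l in range(L)]
assert max(abs(rec[l] - f[l]) for l in range(L)) < 1e-10
```

Here the redundancy is $MN/L = 24/12 = 2$, of the same order as a typical PV analysis.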
\subsection*{Nonstationary Gabor frames}\label{Sec:NSGFTheory}
In this section we extend the classical Gabor theory to the nonstationary case \cite{NuHAGnsgf2011}. Just as for the stationary case, we denote the total number of sampling points in time by $N\in \bb{N}$; however, we do not assume these points to be uniformly distributed. Further, instead of using just one window function, we apply $N_w\leq N$ different window functions $\{g_n\}_{n\in \bb{Z}_{N_w}}$ to obtain a flexible resolution. The window function corresponding to time point $n\in \bb{Z}_N$ is denoted by $g_{j(n)}$ with $j:\bb{Z}_N\rightarrow \bb{Z}_{N_w}$ being a surjective mapping. The number of frequency channels corresponding to time point $n\in \bb{Z}_N$ is denoted by $M_n\in \bb{Z}_L$ and the resulting frequency hop size by $b_n:=L/M_n$. Finally, the window functions $\{g_n\}_{n\in \bb{Z}_{N_w}}$ are assumed to be symmetric around zero and we use translation parameters $\{a_n\}_{n\in \bb{Z}_N}\subset \bb{Z}_L$ to obtain the proper support. With this notation, the \emph{nonstationary Gabor system} (NSGS) $\{g_{m,n}\}_{m\in \bb{Z}_{M_n},n\in \bb{Z}_N}$ is defined as
\begin{equation*}
g_{m,n}[l]:=\textbf{T}_{a_n}\textbf{M}_{mb_n}g_{j(n)}[l]=g_{j(n)}[l-a_n]e^{\frac{2\pi i mb_n (l-a_n)}{L}}.
\end{equation*}
If $\{g_{m,n}\}_{m,n}$ spans $\bb{C}^L$, then it is called a NSGF. If $M_n:=M$, for all $n\in \bb{Z}_N$, then it is called a uniform NSGS (or uniform NSGF if it is also a frame). In Fig. \ref{Fig:NSGTIllustration} we see an example of a simple (non-uniform) NSGS with $N_w=2$ and $N=4$.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Ottos1.pdf}
\caption{Illustration of a NSGS with $N_w=2$ and $N=4$.}
\label{Fig:NSGTIllustration}
\end{figure}
Let us now show that the theory of NSGFs extends the theory of standard Gabor frames.
\begin{Exa}
Let $g\in \bb{C}^L$ and $a,b\in \bb{Z}_L$ satisfy $Na=Mb=L$ for some $N,M\in \bb{N}$. Then, with $g_{j(n)}:=g$, $a_n:=na$, and $b_n:=b$ for all $n\in \bb{Z}_N$, we obtain the NSGS
\begin{equation*}
g_{m,n}[l]=\textbf{T}_{na}\textbf{M}_{mb}g[l],\quad m\in \bb{Z}_{M},\quad n\in\bb{Z}_N,
\end{equation*}
which just corresponds to a standard Gabor system.
\end{Exa}
The total number of elements in a NSGS $\{g_{m,n}\}_{m,n}$ is given by $P=\sum_{n=0}^{N-1}M_n$ and the redundancy is therefore $P/L$. The associated frame operator $\textbf{S}:\bb{C}^L\rightarrow \bb{C}^L$, defined by
\begin{equation*}
\textbf{S}f:=\sum_{n=0}^{N-1}\sum_{m=0}^{M_n-1}\scalarp{f,g_{m,n}}g_{m,n},\quad \forall f\in \bb{C}^L,
\end{equation*}
is invertible if and only if $\{g_{m,n}\}_{m,n}$ constitutes a NSGF. If $\textbf{S}$ is invertible, then we have the expansions
\begin{equation}\label{eq:NSGTexpansion}
f=\sum_{n=0}^{N-1}\sum_{m=0}^{M_n-1}\scalarp{f,g_{m,n}}\tilde{g}_{m,n},\quad \forall f\in \bb{C}^L,
\end{equation}
with $\{\tilde{g}_{m,n}\}_{m,n}:=\{\textbf{S}^{-1}g_{m,n}\}_{m,n}$ being the canonical dual frame of $\{g_{m,n}\}_{m,n}$. The \emph{nonstationary Gabor transform} (NSGT) of $f\in \bb{C}^L$ is given by the coefficients $\{c\{n\}(m)\}_{m,n}:=\{\langle f,g_{m,n}\rangle\}_{m,n}$ in the expansion \eqref{eq:NSGTexpansion}. We note that these coefficients do not form a matrix in the general case. We now consider an important case for which the calculation of $\{\tilde{g}_{m,n}\}_{m,n}$ is particularly simple.
\paragraph*{Painless NSGFs} If $\supp(g_{j(n)})\subseteq [c_{j(n)},d_{j(n)}]$ and $d_{j(n)}-c_{j(n)}\leq M_n$ for all $n\in \bb{Z}_N$, then $\{g_{m,n}\}_{m,n}$ is called a \emph{painless} NSGS (or painless NSGF if it is also a frame). In this case we have the following result \cite{NuHAGnsgf2011}.
\begin{Pro}\label{Pro:PainlessNSGT}
If $\{g_{m,n}\}_{m,n}$ is a painless NSGS, then the frame operator $\textbf{S}$ is an $L\times L$ diagonal matrix with entries
\begin{equation*}
S_{ll}=\sum_{n=0}^{N-1}M_n\abs{g_{j(n)}[l-a_n]}^2,\quad \forall l\in \bb{Z}_L.
\end{equation*}
The system $\{g_{m,n}\}_{m,n}$ constitutes a frame for $\bb{C}^L$ if and only if $\sum_{n=0}^{N-1}M_n\abs{g_{j(n)}[l-a_n]}^2>0$ for all $l\in \bb{Z}_L$, and in this case the canonical dual frame $\{\tilde{g}_{m,n}\}_{m,n}$ is given by
\begin{equation*}
\tilde{g}_{m,n}[l]=\frac{g_{m,n}[l]}{\sum_{n'=0}^{N-1}M_{n'}\abs{g_{j(n')}[l-a_{n'}]}^2},
\end{equation*}
for all $n\in \bb{Z}_N$ and all $m\in \bb{Z}_{M_n}$.
\end{Pro}
We note that the canonical dual frame is also a painless NSGF, which is a property not shared by general NSGFs. An immediate consequence of Proposition \ref{Pro:PainlessNSGT} is the classical result for painless nonorthogonal expansions \cite{Daubechies1986}, which just corresponds to the painless case for standard Gabor frames.
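To make Proposition \ref{Pro:PainlessNSGT} concrete, the painless analysis/synthesis pair can be sketched in a few lines of NumPy. This is an illustrative sketch only (not the implementation accompanying this article): windows are stored as pre-translated length-$L$ arrays $\textbf{T}_{a_n}g_{j(n)}$, and each $M_n$ is assumed to divide $L$ so that $b_n=L/M_n$ is an integer.

```python
import numpy as np

def nsgt_painless(f, windows, a, M):
    """Painless NSGT analysis: c{n}(m) = <f, T_{a_n} M_{m b_n} g_{j(n)}>.
    Illustrative sketch: windows[n] is the pre-translated window
    T_{a_n} g_{j(n)} as a length-L array whose support fits in M[n]
    samples (the painless condition), and M[n] divides L."""
    L = len(f)
    coeffs = []
    for g, an, Mn in zip(windows, a, M):
        # h[k] = f[k + a_n] * conj(g_{j(n)}[k]); the roll realises the
        # phase convention e^{-2 pi i m b_n (l - a_n) / L}
        h = np.roll(f * np.conj(g), -an)
        folded = h.reshape(L // Mn, Mn).sum(axis=0)  # exact, since M_n | L
        coeffs.append(np.fft.fft(folded))
    return coeffs

def insgt_painless(coeffs, windows, a, M):
    """Reconstruction with the canonical dual frame of the proposition:
    overlap-add, then divide by the diagonal of the frame operator."""
    L = len(windows[0])
    d = sum(Mn * np.abs(g) ** 2 for g, Mn in zip(windows, M))
    l = np.arange(L)
    f = np.zeros(L, dtype=complex)
    for c, g, an, Mn in zip(coeffs, windows, a, M):
        chunk = Mn * np.fft.ifft(c)   # sum_m c{n}(m) e^{2 pi i m k / M_n}
        f += g * chunk[(l - an) % Mn]
    return f / d
```

The reshape-and-sum step folds the windowed signal down to length $M_n$, so each coefficient column costs only an $M_n$-point FFT; this is exactly the computational advantage of the painless condition.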
\section{The phase vocoder}\label{Sec:PhaseVocoder}
In this section we explain the classical PV \cite{Laroche1999} in the framework of Gabor theory. The PV stretches the length of a signal by means of modifying its discrete STFT. Since the discrete STFT corresponds to a DGT, this technique can be perfectly well explained using Gabor theory. The main idea is to construct a DGT of the signal with respect to an \emph{analysis} hop size $a$, modifying the DGT, and then reconstructing from the modified DGT using a different \emph{synthesis} hop size $a_*$. We only consider the case $a_*=ra$ for a constant modification rate $r>0$. The case $r>1$ corresponds to slowing down the signal by extending its length whereas $r<1$ corresponds to speeding it up by shortening its length. The PV is a classic analysis-modification-synthesis technique, and we will explain each of these three steps separately in the following sections.
\subsection{Analysis}\label{Sec:PhaseVocoderana}
Let $\{g_{m,n}\}_{m,n}$ be a painless Gabor frame for $\bb{C}^L$. Given a real valued signal $f\in \bb{R}^{L}$, we calculate the DGT $\textbf{c}\in \bb{C}^{M\times N}$ of $f$ with respect to $\{g_{m,n}\}_{m,n}$ as
\begin{equation}\label{eq:DGTcoefficients}
c_{m,n}=\scalarp{f,g_{m,n}}=\sum_{l=0}^{L-1}f[l]\overline{g[l-na]}e^{\frac{-2\pi imb(l-na)}{L}},
\end{equation}
for all $m\in \bb{Z}_{M}$ and $n\in \bb{Z}_N$. Let us explain the consequences of the phase convention used in \eqref{eq:DGTcoefficients}. Define $\Omega_m:=2\pi m/M$ as the center frequency of the $m$'th channel and assume that $g$ is real and symmetric around zero. Then, since $\{g_{m,n}\}_{m,n}$ is painless and $b/L=1/M$, we may write \eqref{eq:DGTcoefficients} as
\begin{equation}\label{eq:Gaborconvolution}
c_{m,n}=\sum_{l=0}^{L-1}f[l]g[na-l]e^{-i\Omega_m(l-na)}=e^{i\Omega_m na}\left(f_m\ast g\right)[na],
\end{equation}
with $f_m[l]:=f[l]e^{-i\Omega_ml}$. If $g$ and $\hat{g}$ are both well-localized around zero, the convolution in \eqref{eq:Gaborconvolution} extracts the \emph{baseband} spectrum of $f_m$ at time $na$. Recalling that $f_m$ is just a version of $f$ that has been modulated down by $\Omega_m$, this baseband spectrum corresponds to the spectrum of $f$ in a neighbourhood of the center frequency $\Omega_m$ at time $na$. Finally, modulating back by $\Omega_m$ we obtain the \emph{bandpass} spectrum of $f$ in a neighbourhood of $\Omega_m$ at time $na$. This phase convention is the traditional one used in the PV \cite{Laroche1995,Laroche1999,Robel2013}.
\subsection{Modification}\label{Sec:Mod}
To explain the modification step of the PV, we refer to a quasi-stationary sinusoidal model that $f$ is assumed to satisfy \cite{Laroche2002,McAulay1986}. This model is not used explicitly anywhere in the derivation of the PV, but it serves an important role for explaining the underlying ideas. We assume that $f$ can be written as a \emph{finite} sum of sinusoids
\begin{equation}\label{eq:sinusoidmodel}
f(t)=\sum_{k}A_k(t)e^{i\theta_k(t)},
\end{equation}
in which $A_k(t)$ is the \emph{amplitude}, $\theta_k(t)$ is the \emph{phase}, and $\theta'_k(t)$ is the \emph{frequency} of the $k$'th sinusoid at time $t$. Since the model is quasi-stationary, $A_k(t)$ and $\theta'_k(t)$ are assumed to be slowly varying functions. In particular, they are assumed to be almost constant over the duration of $g$. Based on \eqref{eq:sinusoidmodel}, the perfectly stretched signal $f_*$ at time $na_*=nra$ is given by
\begin{equation}\label{eq:timescaledversion}
f_*[na_*]=\sum_{k}A_k(na)e^{ir\theta_k(na)}.
\end{equation}
We note that the amplitudes and frequencies of the stretched signal $f_*$ at time $na_*$ equal the amplitudes and frequencies of the original signal $f$ at time $na$.
The idea behind the modification step is to construct a new DGT $\textbf{d}\in \bb{C}^{M\times N}$, based on $\textbf{c}\in \bb{C}^{M\times N}$, such that reconstruction from $\textbf{d}$, with respect to $a_*$, yields a time stretched version of $f$ in the sense of \eqref{eq:timescaledversion}. Since the amplitudes need to be preserved we set
\begin{equation*}
d_{m,n}=\abs{c_{m,n}}e^{i\angle d_{m,n}} ,\quad m\in \bb{Z}_M,\quad n\in \bb{Z}_N,
\end{equation*}
using polar coordinates. Estimating the phases $\{\angle d_{m,n}\}_{m,n}$ involves a task called \emph{phase unwrapping} \cite{Laroche1999}.
\paragraph*{Phase unwrapping}
Assume there is a sinusoid of frequency $\omega$ in the vicinity of channel $m$ at time $na$. Then, we make the estimate
\begin{equation}\label{eq:actualphase}
e^{i\angle d_{m,n}}=e^{i(\angle d_{m,n-1}+\omega a_*)},
\end{equation}
since the two DGT samples $d_{m,n-1}$ and $d_{m,n}$ are $a_*$ time samples apart. Using the same argument we may write $e^{i\angle c_{m,n}}=e^{i\left(\angle c_{m,n-1}+\omega a\right)}$. Setting $\omega=\Delta\omega+\Omega_m$, and isolating the deviation $\Delta\omega$, yields
\begin{equation*}
\text{princarg}\left\{\Delta\omega a\right\}=\text{princarg}\left\{\angle c_{m,n}-\angle c_{m,n-1}-\Omega_ma\right\},
\end{equation*}
with ``princarg'' denoting the principal argument in the interval $]-\pi,\pi]$. Assuming $\omega$ is close to the center frequency $\Omega_m$, such that $\Delta\omega\in ]-\pi/a,\pi/a]$, we arrive at
\begin{equation*}
\Delta\omega=\frac{\text{princarg}\left\{\angle c_{m,n}-\angle c_{m,n-1}-\Omega_m a\right\}}{a}.
\end{equation*}
We can now calculate $\omega$ as $\Delta\omega + \Omega_m$ and use \eqref{eq:actualphase} to determine $\{\angle d_{m,n}\}_{m,n}$ by initializing $d_{m,0}=c_{m,0}$ for all $m\in \bb{Z}_M$.
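The phase unwrapping recursion can be sketched as follows; `pv_phases` is a hypothetical helper name (not part of any cited implementation), `C` is the $M\times N$ DGT matrix of \eqref{eq:DGTcoefficients}, `a` the analysis hop size, and `r` the stretch factor, so that $a_*=ra$.

```python
import numpy as np

def princarg(x):
    """Principal argument, mapped to the interval (-pi, pi]."""
    return np.pi - np.mod(np.pi - x, 2 * np.pi)

def pv_phases(C, a, r, M):
    """Classical PV phase propagation (illustrative sketch)."""
    a_syn = r * a
    omega_c = 2 * np.pi * np.arange(M) / M    # center frequencies Omega_m
    D = np.empty_like(C)
    D[:, 0] = C[:, 0]                         # initialise d_{m,0} = c_{m,0}
    phase = np.angle(C[:, 0])
    for n in range(1, C.shape[1]):
        # heterodyned phase increment and frequency deviation Delta omega
        dphi = np.angle(C[:, n]) - np.angle(C[:, n - 1]) - omega_c * a
        omega = omega_c + princarg(dphi) / a  # instantaneous frequency
        phase = phase + omega * a_syn         # advance by a* time samples
        D[:, n] = np.abs(C[:, n]) * np.exp(1j * phase)
    return D
```

For $r=1$ the recursion reproduces the analysis phases exactly (modulo $2\pi$), so the original signal is recovered, in agreement with the remark at the end of Section \ref{Sec:PhaseVocodersyn}.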
\subsection{Synthesis}\label{Sec:PhaseVocodersyn}
The final step of the PV is to construct a time stretched version of $f$ in the sense of \eqref{eq:timescaledversion} from the modified DGT $\textbf{d}\in \bb{C}^{M\times N}$. This is done by reconstructing from $\textbf{d}$ with respect to the synthesis hop size $a_*$. According to \eqref{eq:DGTexpansion}, such a reconstruction yields
\begin{equation}\label{eq:synthesisformulagabormodified}
f_*[l]=\sum_{m=0}^{M-1}\sum_{n=0}^{N-1}d_{m,n}\textbf{T}_{na_*}\textbf{M}_{mb}\textbf{S}_*^{-1}g[l],
\end{equation}
with $\textbf{S}_*:\bb{C}^L\rightarrow \bb{C}^L$ being the modified frame operator
\begin{equation*}
\textbf{S}_*x[l]=\sum_{m=0}^{M-1}\sum_{n=0}^{N_*-1}\scalarp{x,\textbf{T}_{na_*}\textbf{M}_{mb}g}\textbf{T}_{na_*}\textbf{M}_{mb}g[l],
\end{equation*}
where $N_*:=L/a_*$. The length of the reconstructed signal $f_*$ is given by $L_*=Na_*=Lr$ and translation is performed modulo $L_*$ in \eqref{eq:synthesisformulagabormodified}. In practice, the reconstruction formula \eqref{eq:synthesisformulagabormodified} is realised by applying an inverse FFT and overlap-add.
Traditionally, a DGT with $75\%$ overlap is used in the analysis step, which allows for modification factors $r\leq 4$. We note that if no modifications are made ($r=1$), we recover the original signal. In the next section we consider some of the problems connected with the PV.
\subsection{Drawbacks}\label{Sec:Drawbacks}
The idea behind the PV is intuitive and easily implementable, which makes it attractive from a practical point of view. Unfortunately, the assumptions made in the modification part are not easily satisfied. This is true even for signals constructed explicitly from the sinusoidal model \eqref{eq:sinusoidmodel}. We now list three main problems to be considered.
\begin{enumerate}
\item \textbf{Vertical coherence:} The PV ensures \emph{horizontal coherence} \cite{Laroche1999} within each frequency channel but no attempt is made to ensure \emph{vertical coherence} \cite{Laroche1999} across the frequency channels. If a sinusoid moves from one channel to another, the corresponding phase estimate might change dramatically. This is undesirable since a small change in frequency should only imply a small change in phase.
\item \textbf{Resolution:} In practice, we cannot assume that the sinusoids constituting $f$ are well resolved in the DGT in the sense that at most one sinusoid is present within each frequency channel. The channels will only provide a ``blurred'' image of various neighbouring sinusoids. Furthermore, the amplitudes and frequencies of each sinusoid will often not be constant over the entire duration of $g$. As a consequence, the estimates made in the modification part will be subject to error.
\item \textbf{Transients:} The presence of attack transients is not well modelled within the PV as the phase values at such time instants cannot be predicted from previous estimates. Also, for music signals we often want the attack transients to stay intact after time stretching, which is not accounted for in the PV approach.
\end{enumerate}
In the next section we construct a new PV, which addresses the above-mentioned problems.
\section[A phase vocoder based on nonstationary Gabor frames]{A phase vocoder based on nonstationary \\Gabor frames}\label{Sec:PVNSGF}
As mentioned in the introduction, the DGT is not always preferable for representing music signals as it corresponds to a uniform resolution over the whole TF plane. A poor TF resolution conflicts with the fundamental idea of well resolved sinusoids and therefore causes problems for the PV. In this section we change the TF representation from the DGT to an adaptable NSGT, which better matches the sinusoidal model \eqref{eq:sinusoidmodel}. To be consistent with the description of the PV in Section \ref{Sec:PhaseVocoder} we separately explain the analysis, modification, and synthesis steps of the proposed algorithm.
\subsection{Analysis}\label{Sec:PVNSGFana}
First of all, an adaptation procedure must be chosen for the NSGT. We choose to work with the procedure described in \cite{NuHAGnsgf2011} since it is suitable for representing signals that consist mainly of transient and sinusoidal components. The adaptation procedure is based on the idea that window functions with small support should be used around the onsets of attack transients whereas window functions with longer support should be used between these onsets.
\begin{Rem} The construction presented here necessarily yields the problem of a coarse frequency resolution for the transient regions. However, as we propose to keep the stretch factor equal to one during attack transients (cf. Section \ref{Sec:PVNSGFmodtrans}), the impact of this problem is limited.
\end{Rem}
The onsets are calculated using a separate algorithm \cite{Dixon06} and the window functions are constructed as scaled versions of a single window prototype (a Hanning window or similar). The resulting system is referred to as a \emph{scale frame}. In the following paragraphs we explain the construction of scale frames in detail.
\paragraph*{Transient detection}
To perform the transient detection we use a spectral flux (SF) onset detection function as described in \cite{NuHAGnsgf2011,Dixon06}. This function is computed with a DGT of redundancy 16, and it measures the sum of (positive) change in magnitude for all frequency channels. A time instant corresponding to a local maximum of the SF function is determined as an onset if its SF value is larger than the SF mean value in a certain neighbourhood of time frames. Hence, for regions with a dense set of transients, only the most significant onsets are calculated. It is clear that such an approach must be taken to avoid an undesirably low frequency resolution in such regions. The redundant DGT used for the SF onset function is not used anywhere else in our algorithm and does not contribute significantly to the overall complexity.
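The core of the SF onset function can be sketched as follows (the helper name and the surrounding thresholding details are illustrative, not the exact implementation of \cite{Dixon06}):

```python
import numpy as np

def spectral_flux(S):
    """Sum of the positive change in magnitude over all frequency
    channels, for each pair of consecutive time frames of a DGT S."""
    dmag = np.diff(np.abs(S), axis=1)   # frame-to-frame magnitude change
    return np.maximum(dmag, 0).sum(axis=0)
```

A local maximum of this function is then accepted as an onset if it exceeds the mean SF value in a neighbourhood of time frames, as described above.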
\paragraph*{Constructing the window functions}\label{Sec:PVNSGFanawindow}
After a set of onsets has been extracted, the window functions are constructed following the rule that the space between two onsets is spanned in such a way that the window length first increases (as we get further away from the first onset) and then decreases (as we approach the next onset). The construction is performed in a smooth way such that the change from one step to the next corresponds to a window function that is either half as long, twice as long or of the same length. For details see \cite{NuHAGnsgf2011}. The overlap between the window functions is chosen such that at most one onset is present within each time frame; we shall elaborate further on this particular construction in Section \ref{Sec:PVNSGFsyn}.
\paragraph*{Constructing the NSGT} Once the window functions $\{g_n\}_{n\in \bb{Z}_{N_w}}$ have been constructed, we choose the numbers of frequency channels $\{M_n\}_{n\in \bb{Z}_N}$ such that the resulting system constitutes a painless NSGF. Additionally, we choose a lower bound on $\{M_n\}_{n\in \bb{Z}_N}$ to avoid an undesirably low number of channels around the onsets (explicit choices of parameters are described in Section \ref{Sec:Experiments}). Given a real valued signal $f\in \bb{R}^L$, we calculate the NSGT $\{c\{n\}(m)\}_{m\in \bb{Z}_{M_n},n\in \bb{Z}_N}$ of $f$ with respect to the scale frame $\{g_{m,n}\}_{m,n}$ as
\begin{equation*}
c\{n\}(m)=\scalarp{f,g_{m,n}}=\sum_{l=0}^{L-1}f[l]\overline{g_{j(n)}[l-a_n]}e^{\frac{-2\pi imb_n(l-a_n)}{L}},
\end{equation*}
for all $n\in \bb{Z}_N$ and all $m\in \bb{Z}_{M_n}$. We note that the phase convention is the same as used in the PV (cf. Section \ref{Sec:PhaseVocoderana}).
\subsection{Modification}\label{Sec:PVNSGFmod}
The idea behind the modification step is the same as for the PV. We assume $f$ satisfies \eqref{eq:sinusoidmodel}, and we construct a modified NSGT $\{d\{n\}(m)\}_{m,n}$, based on $\{c\{n\}(m)\}_{m,n}$, such that reconstruction from $\{d\{n\}(m)\}_{m,n}$, with respect to a set of synthesis translation parameters, yields a time stretched version of $f$ in the sense of \eqref{eq:timescaledversion}. Given a stretch factor $r>0$, the distance between synthesis time sample $n$ and $n+1$ is
\begin{equation}\label{eq:synthhopsize}
a^*_n:=r(a_{n+1}-a_{n}),\quad n\in \bb{Z}_N.
\end{equation}
Since we do not want the transients to be stretched, we let $r=1$ when $a_n$ corresponds to the onset of a transient and stretch with a correspondingly larger factor $r'>r$ in the remaining regions. Using polar coordinates we set
\begin{equation*}
d\{n\}(m)=\abs{c\{n\}(m)}e^{i\angle d\{n\}(m)} ,\quad n\in \bb{Z}_N,\quad m\in \bb{Z}_{M_n},
\end{equation*}
with $\angle d\{0\}(m)=\angle c\{0\}(m)$ for all $m\in \bb{Z}_{M_0}$. Hence, in complete analogy with the approach in the PV, the problem boils down to estimating the phase values $\{\angle d\{n\}(m)\}_{m,n}$.
Making the transition from stationary Gabor frames to NSGFs, we are facing a fundamental problem. The DGT corresponds to a uniform sampling grid over the TF plane, whereas the NSGF corresponds to a sampling grid which is irregular over time but regular over frequency for each fixed time position. This is illustrated in Fig. \ref{Fig:Grid}.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Ottos2.pdf}
\caption{Sampling grids corresponding to a DGT and a NSGT.}
\label{Fig:Grid}
\end{figure}
As a consequence, we cannot guarantee that each sampling point has a horizontal neighbour that can be used for estimating the frequency as in the PV (cf. Section \ref{Sec:PhaseVocoder}). We therefore generalize the approach from \cite{Beauregard2015} to the nonstationary case and calculate the frequencies using \emph{quadratic interpolation}.
\paragraph*{Calculating the frequencies}
For fixed $n\in \bb{Z}_N$, we define channel $m_p$ as a \emph{peak} if its magnitude $\abs{c\{n\}(m_p)}$ is larger than the magnitudes of its two vertical neighbours, i.e. $\abs{c\{n\}(m_p)}>\abs{c\{n\}(m_p\pm 1)}$. If there is a sinusoid of frequency $\omega$ in the vicinity of peak channel $m_p$, the ``true'' peak position will differ from $m_p$ unless $\omega$ is exactly equal to $2\pi m_p/M_n$. The idea is thus to interpolate the true peak position, using the neighbouring channels $m_p\pm 1$, and then to apply this value as an estimate for $\omega$. To describe the setup we set the position of the peak channel $m_p$ to $0$, and the positions of its two neighbours to $-1$ and $1$, respectively. Also, we denote the true peak position by $p$ and define
\begin{equation*}
\alpha:=\abs{c\{n\}(-1)}, \quad \beta:=\abs{c\{n\}(0)},\quad \text{and} \quad \gamma:=\abs{c\{n\}(1)}.
\end{equation*}
The situation is illustrated in Fig. \ref{Fig:QIFigure}, with $y$ denoting the parabola to be interpolated.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Ottos3.pdf}
\caption{Illustration of quadratic interpolation.}
\label{Fig:QIFigure}
\end{figure}
Writing $y(x)=a(x-p)^2+b$ and solving for $p$ yields
\begin{equation*}
p=\frac{1}{2}\cdot \frac{\alpha-\gamma}{\alpha-2\beta+\gamma}\in \left(-\frac{1}{2},\frac{1}{2}\right).
\end{equation*}
The value of $p$ gives the deviation from the peak channel to the true peak, measured in units of the channel width. After $p$ has been determined, we calculate the frequency as
\begin{equation}\label{eq:freqestimate}
\omega=\frac{2\pi(m_p+p)}{M_n}.
\end{equation}
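The peak picking and quadratic interpolation can be sketched as follows (the function name is illustrative; `mags` is one magnitude column $\abs{c\{n\}(\cdot)}$ of the NSGT):

```python
import numpy as np

def qint_peaks(mags, Mn):
    """Detect peak channels of a magnitude column and return pairs
    (m_p, omega) with omega = 2 pi (m_p + p) / M_n.  Sketch only."""
    peaks = []
    for mp in range(1, len(mags) - 1):
        if mags[mp] > mags[mp - 1] and mags[mp] > mags[mp + 1]:
            # parabolic fit through the three channels, on a dB scale;
            # the small offset guards against log of zero
            alpha, beta, gamma = 20 * np.log10(mags[mp - 1:mp + 2] + 1e-12)
            p = 0.5 * (alpha - gamma) / (alpha - 2 * beta + gamma)
            peaks.append((mp, 2 * np.pi * (mp + p) / Mn))
    return peaks
```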
In practice, the calculations are done on a dB scale for higher accuracy. Let us now explain how the frequency estimate \eqref{eq:freqestimate} is used to calculate the corresponding phase value $\angle d\{n\}(m_p)$.
\paragraph*{Calculating the phases}
Between each pair of peaks we define the (lowest) channel with smallest magnitude as a \emph{valley} and then use these valleys to separate the frequency axis into \emph{regions of influence}. As noted in \cite{Laroche1999}, if a peak switches from channel $m_{p'}$ at time $n-1$ to channel $m_p$ at time $n$, the corresponding phase estimate should take this behaviour into account. A simple way of determining the previous peak $m_{p'}$ is to choose the peak of the corresponding region of influence that channel $m_p$ would have belonged to in time frame $n-1$. This is illustrated in Fig. \ref{Fig:RegionOfInfluence}.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Ottos4.pdf}
\caption{Illustration of peak, valley and region of influence.}
\label{Fig:RegionOfInfluence}
\end{figure}
Based on this construction, with $a^*_{n-1}$ given in \eqref{eq:synthhopsize}, the phase estimate at peak channel $m_p$ is
\begin{equation}\label{eq:peakchannelphase}
d\{n\}(m_p)=\abs{c\{n\}(m_p)}e^{i(\angle d\{n-1\}(m_{p'})+\omega a^*_{n-1})}.
\end{equation}
For the neighbouring channels in the corresponding region of influence, the phase values will be locked to the phase of the peak. Following the approach in \cite{Laroche1999}, we let
\begin{equation*}
e^{i\angle d\{n\}(m)}=e^{i\left(\angle d\{n\}(m_p)+\angle c\{n\}(m)-\angle c\{n\}(m_p)\right)},
\end{equation*}
for all channels $m$ in the region of influence corresponding to peak channel $m_p$. Hence, the phase locking is such that the difference in synthesis phase is the same as the difference in analysis phase. It is important to note that the actual phase estimates are done only at peak channels, which allows for a fast implementation. As mentioned in Section \ref{Sec:Drawbacks}, the phase estimate \eqref{eq:peakchannelphase} is not well suited for modelling attack transients. In the next section we explain our approach for dealing with this problem.
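The bookkeeping described above can be sketched as follows (illustrative helper names; `mags` is one magnitude column, `c_col` one column of analysis coefficients, and `d_peak_phase` the synthesis phase $\angle d\{n\}(m_p)$ from \eqref{eq:peakchannelphase}):

```python
import numpy as np

def regions_of_influence(mags, peaks):
    """Split the channels into regions dominated by each peak, separated
    at the minimum-magnitude (valley) channel between consecutive peaks."""
    bounds = [0]
    for p0, p1 in zip(peaks[:-1], peaks[1:]):
        valley = p0 + 1 + int(np.argmin(mags[p0 + 1:p1]))
        bounds.append(valley + 1)   # region of the next peak starts here
    bounds.append(len(mags))
    return [range(b0, b1) for b0, b1 in zip(bounds[:-1], bounds[1:])]

def lock_region(c_col, d_peak_phase, mp, region):
    """Phase locking: the synthesis phase differences within a region
    equal the analysis phase differences relative to the peak mp."""
    out = np.empty(len(region), dtype=complex)
    for i, m in enumerate(region):
        phase = d_peak_phase + np.angle(c_col[m]) - np.angle(c_col[mp])
        out[i] = np.abs(c_col[m]) * np.exp(1j * phase)
    return out
```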
\paragraph*{Transient preservation}\label{Sec:PVNSGFmodtrans}
Since the phase values $\angle d\{n\}(m)$ at transient locations cannot be predicted from previous estimates, one might choose to simply reinitialize all phase values at such locations, setting $\angle d\{n\}(m)=\angle c\{n\}(m)$. However, for stationary partials passing through the transient, such a reinitialization completely destroys the horizontal phase coherence, thereby producing undesirable artifacts in the resulting sound. To deal with this problem, we propose the following rule for phase estimation at transient locations: Assume time-instant $n$ corresponds to the onset of an attack transient. Consider channel $m$, belonging to the region of influence dominated by a peak channel $m_p$, and let $m_{p'}$ denote the peak channel of the region of influence that channel $m_p$ would have belonged to in time frame $n-1$ (same notation as in \eqref{eq:peakchannelphase}, see also Fig. \ref{Fig:RegionOfInfluence}). Then, given a tolerance $\varepsilon>0$, we reinitialize $\angle d\{n\}(m)=\angle c\{n\}(m)$ if and only if
\begin{equation}\label{eq:transpre}
\abs{c\{n\}(m)}>\abs{c\{n-1\}(m_{p'})}+\varepsilon.
\end{equation}
For the implementation, the calculations are done on a dB scale with $\varepsilon = 2$ dB. We note that in contrast to previously proposed techniques for onset reinitialization \cite{Bonada2000,Derrien2007,Roebel2003}, our algorithm has the advantage that it tracks sinusoids \emph{across} frequency channels.
\subsection{Synthesis}\label{Sec:PVNSGFsyn}
Before we can provide the actual synthesis formula, we need to return to the problem of choosing the overlap between window functions (cf. Section \ref{Sec:PVNSGFanawindow}). Originally, scale frames were introduced with the intention of constructing adaptive TF representations with a very low redundancy. To ensure a low redundancy, and a stable reconstruction, the overlap between adjacent window functions is chosen as $1/3$ of the length for equal windows and $2/3$ of the length of the shorter window for different windows \cite{NuHAGnsgf2011}.
This construction makes sense in the general setting, since the resulting system constitutes a frame for $\bb{C}^L$ as long as the painless condition from Proposition \ref{Pro:PainlessNSGT} is satisfied. However, in the case of time-stretching with a factor $r>1$, this construction cannot guarantee that the dual windows (cf. Proposition \ref{Pro:PainlessNSGT}) overlap coherently when placed at the synthesis time instants. To tackle this problem, we have chosen the overlap between window functions in the following way:
\begin{enumerate}
\item First the onsets of attack transients are calculated (using the onset detection algorithm from Section \ref{Sec:PVNSGFana}).
\item Then these onsets are relocated such that the distance between the relocated onsets is $r$ times the distance between the original onsets.
\item The window functions are now calculated according to the relocated onsets, using the approach in \cite{NuHAGnsgf2011}, and afterwards centred at the original time instants.
\end{enumerate}
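Step 2 above amounts to scaling the inter-onset distances by $r$ while keeping the first onset fixed; a minimal sketch (helper name illustrative):

```python
def relocate_onsets(onsets, r):
    """Scale the distances between consecutive onsets by the stretch
    factor r, keeping the first onset fixed."""
    out = [onsets[0]]
    for o0, o1 in zip(onsets[:-1], onsets[1:]):
        out.append(out[-1] + r * (o1 - o0))
    return out
```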
While this approach might give the impression that we just stretch the window lengths by a factor of $r$, this is not the case. Calculating the windows with respect to the relocated onsets still produces a sequence of window functions of the same lengths as if the original onsets had been used. This is illustrated in Fig. \ref{Fig:WindowFunctions}.
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Ottos5.pdf}
\caption{Construction of the overlap between window functions.}
\label{Fig:WindowFunctions}
\end{figure}
With this choice of overlap, we can construct the stretched signal $f_*$ using the synthesis formula
\begin{equation}\label{eq:synthesisformula}
f_*=\sum_{n\in \bb{Z}_N}\sum_{m\in \bb{Z}_{M_n}}d\{n\}(m)\tilde{g}_{m,n},
\end{equation}
with $\{\tilde{g}_{m,n}\}_{m,n}$ being the canonical dual frame from Proposition \ref{Pro:PainlessNSGT} constructed using the synthesis time instants. In practice, the reconstruction formula \eqref{eq:synthesisformula} is realised by applying an inverse FFT and overlap-add as in the classical PV.
\subsection{Advances}
In this section we explain how the proposed algorithm improves the techniques of the PV. We do so by separately addressing the three drawbacks described in Section \ref{Sec:Drawbacks}.
\begin{enumerate}
\item \textbf{Vertical coherence:} If a sinusoid moves from channel $m_{p'}$ at time $n-1$ to channel $m_p$ at time $n$, then the corresponding peak channel also changes from $m_{p'}$ to $m_p$. The estimate given in \eqref{eq:peakchannelphase} therefore ensures that the corresponding phase increment takes this behaviour into account. In this way we get coherence \emph{across} the various frequency channels in contrast to the standard PV which only provides coherence \emph{within} each frequency channel.
\item \textbf{Resolution:} Changing the representation from that of a DGT to an adaptable NSGT automatically improves the TF resolution for signals, which are well represented by the sinusoidal model \eqref{eq:sinusoidmodel}. Furthermore, calculating the phase increment only at peak channels replaces the underlying assumption of well resolved sinusoids in each frequency channel with the weaker assumption of well resolved sinusoids in each region of influence.
\item \textbf{Transients:} To reduce transient smearing, we keep the stretch factor equal to one during attack transients and we reinitialize the phase values of relevant channels according to \eqref{eq:transpre}.
\end{enumerate}
While the PV serves as a good starting point for understanding the ideas behind the proposed algorithm, the main goal of this article is not merely to improve the resulting sound quality compared to this classical technique. The main advantage of the proposed algorithm is the ability to produce good results, when compared to state of the art, while keeping a low redundancy of the applied TF transform.
\paragraph*{Redundancy of the NSGT}
As mentioned in Section \ref{Sec:PhaseVocodersyn}, the classical PV applies an overlap of 75$\%$ corresponding to a redundancy of $4$ in the DGT. There is some mathematical justification for this choice \cite{Laroche1999}, but mainly the overlap is chosen to ensure a good TF resolution. It should be noted that the redundancy of the DGT is independent of the signal under consideration --- it only depends on the analysis hop size and the length of the window function (assuming the painless condition is satisfied).
For multi-resolution methods, the situation changes as the TF resolution adapts to the particular signal. A standard approach for multi-resolution methods is to choose non-uniform sampling points in time, with corresponding window functions, and a \emph{uniform} number of frequency channels corresponding to the length of the largest window function \cite{Liuni2013,Laroche1995}. This construction corresponds to applying a uniform NSGF (cf. Section \ref{Sec:NSGFTheory}). Such an approach is desirable from a practical point of view as the coefficients then form a matrix and the standard techniques from the PV (and its improvements) immediately apply. However, the choice of a uniform NSGF naturally implies a high redundancy of the transform as the sampling density is much higher than needed for the painless case (cf. Proposition \ref{Pro:PainlessNSGT}). For real world signals, such a high redundancy is undesirable as it implies a high computational cost for the time-stretching algorithm.
In contrast to previously suggested methods, our algorithm takes full advantage of the painless condition and produces good results with a redundancy of $\approx 3$ for a stretch factor of $r=1.5$. It is important to note that the redundancy of the proposed algorithm depends \emph{both} on the signal under consideration and the stretch factor (at least in the case where $r>1$). For different signals, the onset detection algorithm calculates different onsets, which results in different time sampling points and different numbers of frequency channels. As for the stretch factor, we recall the choice of overlap as described in Section \ref{Sec:PVNSGFsyn}. For a large stretch factor, we need a large overlap between the window functions to guarantee that the synthesis formula \eqref{eq:synthesisformula} makes sense. We do not consider the dependency between the redundancy and the stretch factor a problem, since the redundancy is still manageable even for large stretch factors. For a stretch factor of $r=3$, the redundancy is $\approx 5$ and for a stretch factor of $r=4$, the redundancy is $\approx 7$.
In the next section we present the numerical experiments and compare the proposed algorithm with state of the art algorithms for time stretching (cf. Section \ref{Sec:stateoftheart}).
\section{Experiments}\label{Sec:Experiments}
The proposed algorithm has been implemented in MATLAB R2017A and the corresponding source code is available at
\begin{center}
\url{http://homepage.univie.ac.at/monika.doerfler/NSPV.html}
\end{center}
The source code depends on the following two toolboxes: The LTFAT \cite{ltfatnote030} (version 2.1.2 or above) freely available from \url{http://ltfat.github.io/} and the NSGToolbox \cite{NuHAGnsgf2011} (version 0.1.0 or above) freely available from \url{http://nsg.sourceforge.net/}.
For the classical PV, we use an implementation by Ellis \cite{Ellis2002}, which includes some improvements to the procedure described in Section \ref{Sec:PhaseVocoder} (in particular, interpolation of magnitudes). As these improvements result in a significantly improved audio quality, we have chosen this implementation for comparison.
In Section \ref{Sec:Experimentssynth} we compare the proposed algorithm to the classical PV by stretching synthetic (music) signals and in Section \ref{Sec:Experimentsreal} we turn to the analysis of real world signals and compare the proposed algorithm with the algorithms from Derrien \cite{Derrien2007} and Liuni et al. \cite{Liuni2013}.
\subsection{Synthetic signals}\label{Sec:Experimentssynth}
Analysing synthetic signals has the advantage that the perfect stretched version is available and can be used as ground truth. For this experiment, we construct a large number of synthetic signals and compare the performance of the proposed algorithm with the classical PV for each of these signals. More precisely, the approach is as follows:
\begin{enumerate}
\item For each synthetic melody we choose a random number of notes between $4$ and $10$. Each note has a randomly chosen duration of either $0.5$ or $1$ second and the corresponding tone consists of a fundamental frequency and three harmonics of decreasing amplitudes. The fundamental frequencies are set to coincide with those of a piano and the melody is allowed to move either $1$ or $2$ semitones up or down (randomly chosen) per step. A randomly chosen envelope ensures that the tones have both an attack and a release. The sampling frequency of the resulting signal $s$ is $16000$ Hz.
\item A stretch factor $0.5\leq r\leq 3.75$ is chosen at random and another synthetic signal $s_{perf}$ is constructed, such that $s_{perf}$ corresponds to a perfectly time stretched version of $s$ in the sense of \eqref{eq:timescaledversion}. The classical PV and the proposed algorithm are applied to the original signal $s$, with respect to the stretch factor $r$, resulting in the time stretched versions $s_{pv}$ and $s_{nsgt}$.
\item Three DGTs $S_{perf}$, $S_{pv}$, and $S_{nsgt}$ are constructed from the time stretched versions $s_{perf}$, $s_{pv}$, and $s_{nsgt}$ using the same parameter settings for each signal. With $|S|$ denoting a vector consisting of the absolute values of a DGT $S$, we use the following error measure
\begin{equation}\label{eq:SpecMagComp}
E(S_{perf},S)=\frac{\norm{\abs{S_{perf}}-\abs{S}}_2}{\norm{\abs{S_{perf}}}_2},
\end{equation}
with $S$ being either $S_{pv}$ or $S_{nsgt}$.
\end{enumerate}
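The error measure in step 3) can be sketched directly from \eqref{eq:SpecMagComp}; the following illustrative Python fragment (not part of the MATLAB implementation) treats the two DGTs as flat sequences of complex coefficients:

```python
def spec_mag_error(S_perf, S):
    """Relative spectral magnitude error: the 2-norm of |S_perf| - |S|
    divided by the 2-norm of |S_perf|, with both DGTs flattened to
    plain sequences of complex coefficients."""
    diff = sum((abs(a) - abs(b)) ** 2 for a, b in zip(S_perf, S))
    ref = sum(abs(a) ** 2 for a in S_perf)
    return (diff ** 0.5) / (ref ** 0.5)
```

Note that the measure is phase-blind by design: two coefficient arrays with equal magnitudes but different phases yield an error of zero.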
Note that we cannot apply a sample-by-sample error measure in the time domain, since in this case a small change in phase for the stretched signals might cause a large error, which does not reflect the actual sound quality. We therefore choose to compare the stretched versions using the magnitude difference of their DGTs. Let us now define the parameters used for the TF representations in this experiment.
\paragraph*{Choice of parameters}
For the DGT used in the PV, we apply two different parameter settings. Using the notation $(\text{hopsize},\text{number of frequency channels})$ we use the parameters $(256,1024)$ and $(128,512)$. For the first parameter setting we use a Hanning window of length $1024$ and for the second parameter setting we use a Hanning window of length $512$. In this way we obtain painless DGTs of redundancy $4$.
For the NSGT used in the proposed algorithm, we use $5$ different Hanning windows with lengths varying from $96$ samples (at attack transients) to $96\cdot 2^4=1536$ samples. The lower bound on the number of frequency channels is set to $96\cdot 2^3=768$, corresponding to the length of the second largest window functions.
For the DGT used for computing $S_{perf}$, $S_{pv}$, and $S_{nsgt}$, we use parameters $(128,2048)$ and a Hanning window of $2048$ samples. This results in a painless DGT of redundancy 16.
\paragraph*{Results}
Repeating the experiment described above for $1000$ synthetic test signals we get the following average results for the redundancy and error of each algorithm:
\begin{table}[h!]
\caption{Average results for $1000$ synthetic test signals}
\label{tab:SpecMagTable}
\centering
\begin{tabular}{l c c c }
\hline
Algorithm: & PV$(256,1024)$ & PV$(128,512)$& Proposed\\
Average red.: & $3.954813$ & $3.977300$ & $3.637370$\\
Average error: & $0.439982$ & $0.415139$ & $0.095104$\\
\hline
\end{tabular}
\end{table}
We note that the proposed algorithm outperforms the classical PV, with respect to the error measure in \eqref{eq:SpecMagComp}, while keeping a comparable redundancy of the applied transform. For a visualization of the performance of the algorithms we have plotted, in Fig. \ref{Fig:Spectrograms}, the spectrograms corresponding to the three DGTs $S_{perf}$, $S_{pv}$ (with parameters $(128,512)$), and $S_{nsgt}$ for one particular synthetic test signal (with $r=1.5$).
\begin{figure}[h!]
\centering
\includegraphics[width=\columnwidth]{Ottos6.pdf}
\caption{Spectrograms for stretched versions of a synthetic signal with $r=1.5$.}
\label{Fig:Spectrograms}
\end{figure}
We can easily see how the proposed algorithm more accurately reproduces the onsets, and how it reduces the noisy components between the harmonics compared to the PV. However, we can also see how the frequencies corresponding to the harmonics are better reproduced with the PV than with the proposed algorithm. The proposed algorithm induces a certain amplitude modulation due to the peak detection and phase locking approach described in Section \ref{Sec:PVNSGFmod}.
We have provided sound files on-line for the particular test signal shown in Fig. \ref{Fig:Spectrograms} with respect to the stretch factors $r= [0.75, 1.25, 1.5, 2.25, 3.0, 3.75]$. The sound files are available for the perfect stretched version, the PV$(128,512)$, and the proposed algorithm. It is important to note that the error measure given in \eqref{eq:SpecMagComp} is not a direct reflection of the actual audio quality --- it is for instance not true that the proposed algorithm consistently performs four times as well as the classical PV. The results for the proposed algorithm are particularly convincing for stretch factors $r\leq 2$, where the timbre at attack transients is nicely preserved in contrast to the classical PV. However, for larger stretch factors $r\geq 2$, the impact of the amplitude modulation, and of the coarse frequency resolution around onsets, becomes audible. Eventually, this results in an overall sound quality comparable to the PV (or even below for very large stretch factors $r\geq 3$).
Since the authors do not have access to the source code of the more sophisticated algorithms as proposed in \cite{Bonada2000,Derrien2007,Liuni2013}, the comparison for synthetic signals could only be done for the PV and the proposed algorithm. However, as the authors from \cite{Derrien2007} and \cite{Liuni2013} kindly provided us with sound files for real world signals, we have included these algorithms for the comparison in the next section.
\subsection{Real world signals}\label{Sec:Experimentsreal}
For this experiment we consider three real world signals, each of length $\approx 4$ seconds and with a sampling frequency of 44100 Hz. The signals are chosen such that they challenge different aspects of the time stretching algorithms:
\begin{enumerate}
\item The first signal is a glockenspiel signal with few transients and many harmonics at the higher frequencies.
\item The second signal is a piece of piano music consisting of a dense set of transients with most of the energy concentrated at the lower frequencies.
\item The third signal is from a rock song played by a full band, thereby producing a complex polyphonic sound.
\end{enumerate}
We chose to work with the stretch factors 0.75, 1.25, 1.5 and 2.25 for the comparison. The algorithms we include are:
\begin{enumerate}
\item The PV as described in Section \ref{Sec:PhaseVocoder} and implemented in \cite{Ellis2002}. For the DGT used in the PV, we use parameters $(512,2048)$ and a Hanning window of length $2048$.
\item The proposed algorithm from Section \ref{Sec:PVNSGF}. We use $5$ Hanning windows with lengths varying from $384$ to $384\cdot 2^4=6144$ and with $384\cdot 2^2=1536$ being the lower bound on the number of frequency channels.
\item The matching pursuit algorithm by Derrien \cite{Derrien2007}.
\item The SuperVP from IRCAM based on the theory of R{\"o}bel \cite{Roebel2003} and Liuni et al. \cite{Liuni2013}. The algorithm uses only one frequency band and chooses between window lengths of 1024, 2048, 3072, and 4096 samples for the adaptive (uniform) NSGT. We refer the reader to \cite{Liuni2013} for details.
\end{enumerate}
Since all the stretched sounds are available on-line, we only give the main conclusions. The classical PV and the algorithm by Derrien are rather similar in performance --- they both produce a good overall sound quality but with significant transient smearing. The proposed algorithm, on the other hand, does a much better job of preserving the original timbre at attack transients, but induces a certain ``roughness'' to the sounds (mainly for $r=2.25$). Also, some of the weaker transients, which are not detected by the onset detection algorithm, suffer from transient smearing for the proposed algorithm (in particular, the ``tapping'' noises in the background of the piano music). The SuperVP does not have this problem as the transient detection algorithm works on the level of spectral bins. Overall, the SuperVP provides the best audio quality for the three signals, which is to be expected as it applies a TF representation of much higher redundancy than the other algorithms. Calculating the average redundancies for the proposed algorithm (over the $4$ stretch factors) for each signal we get $2.40$, $2.90$ and $2.65$. Finally, let us note that the third signal (the rock band signal) reveals a fundamental problem with the application of NSGFs. For $r=2.25$, neither the proposed algorithm nor the SuperVP is capable of maintaining a steady bass, which results from the changing window lengths. This particular issue is better resolved by the classical PV as well as the algorithm by Derrien.
\section{Conclusion and perspectives}\label{Sec:Conclusion}
Using discrete Gabor theory we have presented the classical PV and proposed a new time stretching algorithm in a unified framework. This approach has allowed us to address, and improve on, the shortcomings of the classical PV, while preserving the mathematical structure provided by Gabor theory. The proposed algorithm is the first attempt to use non-uniform NSGFs for time stretching, which allows for a low redundancy of the adaptive TF representation and leads to a fast implementation. The proposed algorithm has been compared to other multi-resolution methods, in a reproducible manner, and we have discussed its advantages and its shortcomings. As a future improvement it could be interesting to connect the techniques presented in this article with the ideas proposed by R{\"o}bel in \cite{Roebel2003}, possibly allowing for an algorithm that uses non-uniform NSGFs without the need for fixing the stretch factor during attack transients.
\section*{Acknowledgment}
We would like to thank the three anonymous reviewers for their suggestions, which clearly improved the overall presentation of this manuscript. A special thanks also to Olivier Derrien and Marco Liuni for providing us with the sound files used for comparison.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\bibliographystyle{IEEEtran}
\section{Introduction}
Let $\Sigma$ be a finite ordered non-empty set which we refer to as an \emph{alphabet}. The elements of~$\Sigma$ are called \emph{characters}. Characters are treated as integers in the range $[0, \sigma-1]$, where $\sigma = |\Sigma|$; and we assume that a pair of characters can be compared in $\mathcal{O}(1)$ time. A finite ordered sequence of characters (possibly empty) is called a \emph{string}.
Characters in a string are enumerated starting from~$1$, that is, a string $w$ of \emph{length} $n$ consists of characters $w[1], w[2], \ldots, w[n]$.
\paragraph{Wavelet trees.}
The wavelet tree, invented by Grossi, Gupta, and Vitter \cite{Grossi:2003:HET:644108.644250}, is an important data structure with a vast number of applications to stringology, computational geometry, and others (see \cite{DBLP:journals/jda/Navarro14} for an excellent survey). Despite this, the problem of wavelet tree construction has not received much attention in the literature. For a string $w$ of length $n$, one can derive a construction algorithm with $\mathcal{O}(n \log \sigma)$ running time directly from the definition. Apart from this, two recent works~\cite{WaveletTreeConstruction1,WaveletTreeConstruction2} present construction algorithms for the setting in which only limited extra space is allowed; the running time of these algorithms is higher than that of the naive algorithm without such space restrictions.
The first result of our paper is a novel deterministic algorithm for constructing a wavelet tree in $\mathcal{O}(n \log \sigma/ \sqrt{\log{n}})$ time.
Our algorithm is capable of building arbitrarily-shaped (binary) wavelet trees of $\mathcal{O}(\log \sigma)$ height in asymptotically the same time.
Hence, it can be used to construct wavelet trees which do not form a perfect binary tree, for example wavelet trees for Huffman encoding~\cite{DBLP:conf/soda/GrossiGV04}, or wavelet tries~\cite{DBLP:conf/pods/GrossiO12} that have the shape of a trie for a given set of strings.
Similar results on wavelet trees were obtained independently by Munro, Nekrich, and Vitter~\cite{DBLP:conf/spire/MunroNV14}.
Our contribution also lies in applying the wavelet tree construction algorithm and the underlying ideas to derive some improved bounds for range queries. Namely, given an array $A[1..n]$ of integers, one could ask to compute the number of integers in $A[i..j]$ that
are smaller than a given $A[k]$ (\emph{range rank} query);
or to find, for given $i$, $j$, and~$k$,
the $k$-th smallest integer in $A[i..j]$ (\emph{range select} query);
or to determine the smallest value in $A[i..j]$ which is at least as large as a given $A[k]$ (\emph{range successor} query).
These problems have been widely studied; see e.g.
Chan and P\u{a}tra\c{s}cu~\cite{DBLP:conf/soda/ChanP10} and
Brodal et al.~\cite{BrodalMedian} for range rank/select queries and \cite{SortedRangeReporting,DBLP:journals/corr/Zhou13b} for range successor queries.
By slightly tweaking our construction of wavelet trees, we can build in $\mathcal{O}(n \sqrt{\log n})$ deterministic time
an $\mathcal{O}(n)$ size data structure for answering range rank/select queries in $\mathcal{O}(\frac{\log n}{\log \log n})$
time.
Our approach yields
a $\sqrt{\log n}$-factor improvement to the construction time upon
Brodal et al. \cite{BrodalMedian} and
a $\log \log n$-factor improvement to the query time upon
Chan and P\u{a}tra\c{s}cu \cite{DBLP:conf/soda/ChanP10}.
To answer range successor queries online, we show a reduction to range rank and select queries, which yields the same bounds.
Additionally, we show how to answer $q$ range successor queries in an offline manner
in $\mathcal{O}((n+q)\sqrt{\log n})$ total time using a wavelet-tree-based algorithm.
In particular, this implies $\mathcal{O}(n\sqrt{\log n})$ time for $q=\Theta(n)$ queries, which improves by a factor of $\sqrt{\log n}$ over previously known solutions.
To the best of our knowledge, no prior algorithm could answer $q=\Theta(n)$ queries in $o(n\log n)$ time.
The previously existing solutions have better query time, but require $\Omega(n\log n)$-time
preprocessing. More precisely, $\mathcal{O}(\log\log n)$ query time can be achieved using $\mathcal{O}(n\log\log n)$ space~\cite{DBLP:journals/corr/Zhou13b}
and $\mathcal{O}(\log^{\varepsilon} n)$ query time (for any $\varepsilon>0$) with $\mathcal{O}(n)$ space~\cite{SortedRangeReporting}.
\paragraph{Wavelet suffix trees.}
Then we switch to stringological context and extend our approach to the so-called
\emph{internal} string problems (see~\cite{DBLP:journals/corr/KociumakaRRW13}).
This type of problems involves construction of a compact data structure for a given fixed string $w$
capable of answering certain queries for substrings of $w$. This line of development was originally inspired by suffix trees,
which can answer some basic internal string queries (e.g. equality testing and longest common extension
computation) in constant time and linear space.
Lately a number of studies emerged addressing compressibility \cite{CormodeMuthukrishnan05,Keller201442},
range longest common prefixes (range LCP) \cite{RangeLCP,FasterRangeLCP},
periodicity \cite{FactorPeriodicity2012,Crochemore:2010:EPP:1928328.1928362},
minimal/maximal suffixes \cite{Minmaxsuf,MinmaxsufRevisited}, substring hashing~\cite{PerfectHashing,WeightedAncestors}, and fragmented pattern matching~\cite{FragmentedPatternMatching,WeightedAncestors}.
Our work focuses on rank/select problems for suffixes of a given substring.
Given a fixed string~$w$ of length $n$, the goal is to preprocess it into a compact data structure
for answering the following two types of queries efficiently:
\begin{enumerate}[1)]
\item \emph{Substring suffix rank}: For substrings $x$ and $y$ of $w$
(given by their endpoints) count the number of suffixes of $x$ that
are lexicographically smaller than~$y$;
\item \emph{Substring suffix select}: For a substring $x$ of $w$ (given by its endpoints) and an integer $k$,
find the $k$-th lexicographically smallest suffix of $x$.
\end{enumerate}
Note that for $k = 1$ and $k = |x|$ substring suffix select queries reduce to computing
the lexicographically minimal and the lexicographically maximal suffixes of a given substring.
Study of this problem was started by Duval~\cite{Duval}. He showed that the maximal and the minimal suffixes of \emph{all prefixes} of a string can be found in linear time and constant additional space. Later this problem was addressed in~\cite{Minmaxsuf,MinmaxsufRevisited}. In~\cite{MinmaxsufRevisited} it was shown that the minimal suffix of any substring can be computed in $\mathcal{O}(\tau)$ time by a linear space data structure with $\mathcal{O}(n \log n / \tau)$ construction time for any parameter $\tau$, $1 \le \tau \le \log n$. As an application of this result it was shown how to construct the Lyndon decomposition of any substring in $\mathcal{O}(\tau s)$ time, where $s$ stands for the number of distinct factors in the decomposition.
For the maximal suffixes an optimal linear-space data structure with $\mathcal{O}(1)$ query and $\mathcal{O}(n)$ construction time was presented. We also note that~\cite{MakinenRankSelect} considered a problem with a similar name, namely substring rank and select. However, the goal there is to preprocess a string, so that given any pattern, we can count its occurrences in a specified prefix of the string, and select the $k$-th occurrence in the whole string. One can easily see that this problem is substantially different than the one we are interested in.
Here, we both generalize the problem to an arbitrary $k$ (thus enabling to answer general \emph{substring suffix select} queries) and also consider \emph{substring suffix rank} queries. We devise a linear-space data structure with $\mathcal{O}(n \sqrt{\log n})$ expected construction time supporting both types of the queries in $\mathcal{O}(\log |x|)$ time.
Our approach to substring suffix rank/select problems is based on wavelet trees and attracts a number of additional combinatorial and algorithmic ideas. Combining wavelet trees with suffix trees we introduce a novel concept of \emph{wavelet suffix trees}, which forms a foundation of our data structure. Like usual suffix trees, wavelet suffixes trees provide a compact encoding for all substrings of a given text; like wavelet trees they maintain logarithmic height.
Our hope is that these properties will make wavelet suffix trees an attractive tool for a large variety of stringological problems.
We conclude with an application of wavelet suffix trees to substring compression,
a class of problems introduced by Cormode and Muthukrishnan~\cite{CormodeMuthukrishnan05}.
Queries of this type ask for a compressed representation of a given substring.
The original paper, as well as a more recent work by Keller et al.~\cite{Keller201442},
considered Lempel-Ziv compression schemes. Another family of methods, based on Burrows-Wheeler transform~\cite{BWT}
was left open for further research. We show that wavelet suffix trees allow us to compute a run-length-encoded Burrows-Wheeler transform of an arbitrary substring $x$ of $w$ (again, given by its endpoints) in $\mathcal{O}(s \log |x|)$ time, where $s$ denotes the length of the resulting run-length encoding.
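As a point of reference, the run-length-encoded Burrows-Wheeler transform of a string can be computed naively as sketched below (an illustrative Python sketch that sorts all rotations, costing $\mathcal{O}(|x|^2 \log |x|)$ time, in contrast to the $\mathcal{O}(s \log |x|)$ bound achieved by the wavelet suffix tree):

```python
def rlbwt(x):
    """Naive run-length-encoded Burrows-Wheeler transform of x:
    append a unique smallest sentinel, sort all rotations, take the
    last column, and collapse it into (character, run-length) pairs."""
    x = x + '\0'                      # sentinel smaller than any character
    rotations = sorted(x[i:] + x[:i] for i in range(len(x)))
    bwt = [r[-1] for r in rotations]  # last column of the sorted rotations
    runs = []
    for c in bwt:
        if runs and runs[-1][0] == c:
            runs[-1] = (c, runs[-1][1] + 1)
        else:
            runs.append((c, 1))
    return runs
```

For instance, for $x=\texttt{banana}$ the transform is \texttt{annb\textdollar aa} (writing \textdollar{} for the sentinel), encoded in $s=5$ runs.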
\section{Wavelet Trees}\label{sec:wt}
Given a string $s$ of length $n$ over an alphabet $\Sigma = [0,\sigma-1]$, where $\sigma\leq n$,
we define the wavelet tree of $s$ as follows. First, we create the root node $r$ and construct its
bitmask $B_r$ of length $n$. To build the bitmask, we think of every
$s[i]$ as a binary number consisting of exactly $\log\sigma$ bits (to make the
description easier to follow, we assume that $\sigma$ is a power of $2$), and put
the most significant bit of $s[i]$ in $B_{r}[i]$. Then we partition $s$ into $s_{0}$ and $s_{1}$
by scanning through $s$ and appending $s[i]$ with the most significant bit removed to
either $s_{1}$ or $s_{0}$, depending on whether the removed bit
of $s[i]$ was set or not, respectively. Finally, we recursively define the wavelet trees
for $s_{0}$ and $s_{1}$, which are strings over the alphabet $[0,\sigma/2-1]$,
and attach these trees to the root. We stop when the alphabet is unary. The final result
is a perfect binary tree on $\sigma$ leaves with a bitmask attached at every non-leaf node.
Assuming that the edges are labelled by {\bf 0} or {\bf 1} depending
on whether they go to the left or to the right respectively, we can define a \emph{label} of a node
to be the concatenation of the labels of the edges on the path from the root to this node.
Then leaf labels are the binary representations of characters in $[0,\sigma-1]$; see Figure~\ref{fig:wavelet example1} for an example.
In virtually all applications,
each bitmask $B_{u}$ is augmented with a rank/select structure. For a bitmask $B[1..N]$ this structure implements operations $\mathrm{rank}_{b}(i)$, which counts
the occurrences of a bit $b \in \{0,1\}$ in $B[1..i]$, and $\mathrm{select}_{b}(i)$, which selects the
$i$-th occurrence of $b$ in the whole $B[1..N]$, for any $b\in\{0,1\}$,
both in $\mathcal{O}(1)$ time.
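The recursive definition above translates directly into the naive $\mathcal{O}(n\log\sigma)$-time construction, and a rank query ($\mathrm{rank}_c(i)$, the number of occurrences of character $c$ in the length-$i$ prefix) then follows a single root-to-leaf path. The following Python sketch illustrates both; for brevity it stores each bitmask as a plain list and counts bits by scanning, rather than using the constant-time rank/select structures discussed above:

```python
def build_wavelet_tree(s, lo, hi):
    """Naive construction from the recursive definition: the node for
    the alphabet range [lo, hi) stores one bit per character (left or
    right half?), then recurses on the two stable partitions."""
    if hi - lo == 1 or not s:
        return None                      # leaf (unary alphabet) or empty
    mid = (lo + hi) // 2
    return {
        'bits':  [0 if c < mid else 1 for c in s],
        'left':  build_wavelet_tree([c for c in s if c < mid], lo, mid),
        'right': build_wavelet_tree([c for c in s if c >= mid], mid, hi),
    }

def wt_rank(node, lo, hi, c, i):
    """Occurrences of character c in the length-i prefix, by mapping i
    through one bitmask rank per level of the tree."""
    if node is None:                     # at a leaf every character equals c
        return i
    mid = (lo + hi) // 2
    if c < mid:
        return wt_rank(node['left'], lo, mid, c, node['bits'][:i].count(0))
    return wt_rank(node['right'], mid, hi, c, node['bits'][:i].count(1))
```

For $s = 3\,1\,2\,3\,0\,1$ over $\Sigma=[0,3]$, the query \texttt{wt\_rank} correctly reports, e.g., two occurrences of the character $3$ in the whole string.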
\begin{figure}[tbp]
\include{example1}
\caption{Wavelet tree for a string $1100_2$ $0111_2$ $1011_2$ $1111_2$ $1001_2$ $0110_2$ $0100_2$ $0000_2$ $0001_2$ $0010_2$ $1010_2$ $0011_2$ $1101_2$ $0101_2$ $1000_2$ $1110_2$, the leaves are labelled with their corresponding characters.}
\label{fig:wavelet example1}
\end{figure}
The bitmasks and their corresponding rank/select structures are stored one after another, each
starting at a new machine word. The total space occupied by the bitmasks alone
is $\mathcal{O}(n \log \sigma)$ bits, because there are $\log\sigma$ levels, and the lengths of
the bitmasks for all nodes at one level sum up to $n$. A rank/select structure built for a bit
string $B[1..N]$ takes $o(N)$ bits~\cite{Clark,Jacobson}, so the space taken by all of them is
$o(n\log \sigma)$ bits. Additionally, we might lose one machine word per node
because of the word alignment, which sums up to $\mathcal{O}(\sigma)$.
For efficient navigation, we number the nodes in a heap-like fashion and, using $\mathcal{O}(\sigma)$ space,
store for every node the offset where its bitmasks and the corresponding rank/select
structure begin. Thus, the total size of a wavelet tree
is $\mathcal{O}(\sigma + n \log \sigma / \log n)$, which for $\sigma \le n$ is
$\mathcal{O}(n \log \sigma / \log n)$.
Directly from the recursive definition, we get a construction algorithm taking $\mathcal{O}(n\log\sigma)$ time,
but we can hope to achieve a better time complexity as the size of the output is just $\mathcal{O}(n \log \sigma / \log n)$
words. In Section~\ref{sec:WTconstruction} we show that one can construct all bitmasks augmented
with the rank/select structures in total time $\mathcal{O}(n \log \sigma/ \sqrt{\log{n}})$.
\paragraph{Arbitrarily-shaped wavelet trees.}
A generalized view on a wavelet tree is that apart from a string $s$ we are given
an arbitrary full binary tree $T$ (i.e., a rooted tree whose nodes have 0 or 2 children) on $\sigma$ leaves, together
with a bijective mapping between the leaves and the characters in $\Sigma$.
Then, while defining the bitmasks~$B_v$, we do not remove the most significant bit of each character,
and instead of partitioning the values based on this bit,
we make a decision based on whether the leaf corresponding to the character lies in the left or in the right subtree of $v$.
Our construction algorithm generalizes to such arbitrarily-shaped wavelet trees without increasing the time complexity
provided that the height of $T$ is $\mathcal{O}(\log\sigma)$.
\paragraph{Wavelet trees of larger degree.}
Binary wavelet trees can be generalized to higher degree trees in a natural way as follows.
We think of every $s[i]$ as a number in base $d$. The degree-$d$ wavelet
tree of $s$ is a tree on $\sigma$ leaves, where every inner node is of degree $d$, except for the
last level, where the nodes may have smaller degrees. First, we create
its root node $r$ and construct its string $D_{r}$ of length $n$, setting $D_r[i]$ to the most significant digit of $s[i]$. We partition
$s$ into $d$ strings $s_{0},s_{1},\ldots,s_{d-1}$ by scanning through $s$ and appending
$s[i]$ with the most significant digit removed to $s_{a}$, where $a$ is the removed
digit of $s[i]$. We recursively repeat the construction for every $s_{a}$ and attach
the resulting tree as the $a$-th child of the root. All strings $D_{u}$ take
$\mathcal{O}(n\log \sigma)$ bits in total, and every $D_{u}$ is augmented with a generalized rank/select
structure.
We consider $d=\log^{\varepsilon}n$, for a small constant $\varepsilon>0$. Remember that we assume
$\sigma$ to be a power of $2$, and for similar technical reasons we assume $d$
to be a power of two as well. Under such assumptions, our construction algorithm for
binary wavelet trees can be used to construct a higher degree wavelet tree. More precisely,
in Section~\ref{sec:higher degree construction} we show how to construct such
a higher degree tree in $\mathcal{O}(n\log\log n)$ time, assuming that we are already given the binary
wavelet tree, making the total construction time $\mathcal{O}(n\sqrt{\log n})$ if $\sigma\leq n$, and allowing
us to derive improved bounds on range selection, as explained in Section~\ref{sec:higher degree construction}.
\subsection{Wavelet Tree Construction}
\label{sec:WTconstruction}
Given a string $s$ of length $n$ over an alphabet $\Sigma$, we want to construct its wavelet tree, which
requires constructing the bitmasks and their rank/select structures, in $\mathcal{O}(n \log \sigma / \sqrt{\log n})$ time.
\paragraph{Input representation.}
The input string $s$ can be considered as a list of $\log \sigma$-bit integers. We assume that it is represented in a packed form, as described below.
A single machine word of length $\log n$ can accommodate $\frac{\log n}{b}$ $b$-bit integers.
Therefore a list of $N$ such integers can be stored in $\frac{Nb}{\log n}$ machine words. (For $s$, $b = \log \sigma$, but later we will also consider other values of $b$.)
We call such a representation a \emph{packed list}.
We assume that packed lists are implemented as resizable bitmasks (with each block of $b$ bits
representing a single entry) equipped with the size of the packed list.
This way a packed list of length $N$ can be appended to another packed list in $\mathcal{O}(1+\frac{Nb}{\log n})$ time,
since our model supports bit-shifts in $\mathcal{O}(1)$ time.
Similarly, splitting into lists of length at most $k$ takes
$\mathcal{O}(\frac{Nb}{\log n}+\frac{N}{k})$ time.
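To make the representation concrete, the following Python sketch models a packed list with a single unbounded integer standing in for the sequence of machine words; appending one packed list to another then reduces to one shift-and-or, mirroring the concatenation cost stated above (word-alignment details are ignored):

```python
class PackedList:
    """Toy packed list: n b-bit integers stored in one integer, with
    entry i occupying bit positions [i*b, (i+1)*b)."""
    def __init__(self, b):
        self.b, self.bits, self.n = b, 0, 0

    def append(self, x):
        self.bits |= x << (self.n * self.b)
        self.n += 1

    def get(self, i):
        return (self.bits >> (i * self.b)) & ((1 << self.b) - 1)

    def extend(self, other):
        # appending a whole packed list is a single shift-and-or,
        # which is what makes concatenation cost O(1 + Nb/log n)
        self.bits |= other.bits << (self.n * self.b)
        self.n += other.n
```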
\paragraph{Overview.}
Let $\tau$ be a parameter
to be chosen later to minimize the total running time.
We call a node $u$ \emph{big} if its depth is a multiple of $\tau$, and \emph{small} otherwise.
The construction conceptually proceeds in two phases.
First, for every big node $u$ we build a list $S_u$. Remember that the label $\ell_{u}$ of a node $u$ at distance $\alpha$
from the root consists of $\alpha$ bits, and the characters corresponding to the leaves below $u$ are exactly the
characters whose binary representation starts with $\ell_u$. $S_u$ is defined to
be a subsequence of $s$ consisting of these characters.
Secondly, we construct the bitmasks $B_{v}$ for every node $v$ using $S_u$ of the nearest big ancestor $u$ of $v$ (possibly $v$ itself).
\paragraph{First phase.} Assume that for a big node $u$ at distance $\alpha \tau$ from the root we are given the list~$S_u$.
We want to construct the lists $S_v$ for all big nodes $v$ whose deepest (proper) big ancestor is~$u$.
There are exactly $2^{\tau}$ such nodes $v$, call them $v_{0},\ldots,v_{2^{\tau}-1}$.
To construct their lists $S_{v_i}$, we scan through $S_u$ and append $S_u[j]$
to the list of $v_{t}$, where $t$ is a bit string of length $\tau$ occurring in
the binary representation of $S_u[j]$ at position $\alpha\tau+1$.
We can extract $t$ in constant time, so the total complexity is linear in the total
number of elements on all lists, i.e., $\mathcal{O}(n \log \sigma / \tau)$ if $\tau\le \log \sigma$.
Otherwise the root is the only big node, and the first phase is void.
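A single scattering step of the first phase can be sketched as follows (an illustrative Python fragment over plain lists; the parameter \texttt{sigma\_bits} denotes $\log\sigma$, and bit positions are counted from the most significant bit of a character):

```python
def distribute(S_u, alpha, tau, sigma_bits):
    """Scatter the list S_u of a big node at depth alpha*tau into the
    lists of its 2**tau big descendants v_0, ..., v_{2^tau - 1}, keyed
    by the tau bits of each character that follow the alpha*tau bits
    already consumed on the path from the root."""
    shift = sigma_bits - (alpha + 1) * tau
    mask = (1 << tau) - 1
    lists = [[] for _ in range(1 << tau)]
    for c in S_u:
        lists[(c >> shift) & mask].append(c)   # extract t in O(1)
    return lists
```

With $\log\sigma=4$ and $\tau=2$, the root's characters are scattered by their two most significant bits, and each resulting list is scattered again by the next two bits one big level below.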
\paragraph{Second phase.} Assume that we are given the list $S_u$ for a big node
$u$ at distance $\alpha \tau$ from the root. We would like to
construct the bitmask $B_{v}$ for every node $v$ such that $u$ is the nearest big
ancestor of $v$. First, we observe that to construct all these bitmasks we only need
to know $\tau$ bits of every $S_u[j]$ starting from the $(\alpha \tau+1)$-th one. Therefore,
we will operate on \emph{short lists} $L_v$ consisting of $\tau$-bit integers instead of
the lists $S_{v}$ of whole $\log\sigma$-bit integers. Each short list is stored as a packed list.
We start with extracting the appropriate part of each $S_u[j]$ and appending it to $L_u$.
The running time of this step is proportional to the length of $L_u$. (This step is void if $\tau>\log \sigma$; then $L_v=S_v$.)
Next, we process all nodes $v$ such that $u$ is the deepest big ancestor of $v$.
For every such $v$ we want to construct the short list $L_v$. Assume that we already have $L_v$ for a node $v$
at distance $\alpha \tau + \beta$ from the root, where $\beta\in [0,\tau)$;
we now construct
the short lists of its children and the bitmask $B_{v}$.
This can be (inefficiently) done by scanning $L_v$ and appending the next element to the short list of the right or the left child of $v$, depending on whether its $(\beta+1)$-th bit is set or not, respectively. The bitmask $B_{v}$ simply stores
all these $(\beta+1)$-th most significant bits. In order to compute
these three lists efficiently we apply the following claim.
\begin{claim}\label{clm:pct}
Assuming $\tilde{\mathcal{O}}(\sqrt{n})$ space and preprocessing shared by all instances of the structure, the following operation
can be performed in $\mathcal{O}(\frac{Nb}{\log n})$ time:
given a packed list $L$ of $N$ $b$-bit integers, where $b=o(\log n)$, and a position $t\in[0,b-1]$, compute
packed lists $L_0$ and $L_1$ consisting of the elements of $L$ whose $t$-th most significant bit is $0$ or $1$, respectively,
and a bitmask $B$ being a concatenation of the $t$-th most significant bits of the elements of $L$.
\end{claim}
\begin{proof}
As a preprocessing, we precompute all the answers for lists $L$ of length at most $\frac{1}{2}\frac{\log n}{b}$.
This takes $\tilde{\mathcal{O}}(\sqrt{n})$ time.
For a query we split $L$ into lists of length $\frac{1}{2}\frac{\log n}{b}$, apply the preprocessed mapping
and merge the results, separately for $L_0,L_1$ and $B$. This takes $\mathcal{O}(\frac{Nb}{\log n})$ time in total.
\end{proof}
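The precomputation underlying the claim can be illustrated as follows (a Python sketch in which chunks are tuples of small integers standing in for packed half-words, and the answers are indexed by entire chunks; bit position $t=0$ is the most significant):

```python
from itertools import product

def make_table(b, t, k):
    """Precompute, for every possible chunk of k b-bit integers, the
    split by bit t: (elements with bit 0, elements with bit 1, bits)."""
    table = {}
    for chunk in product(range(1 << b), repeat=k):
        L0, L1, B = [], [], []
        for x in chunk:
            bit = (x >> (b - 1 - t)) & 1
            B.append(bit)
            (L1 if bit else L0).append(x)
        table[chunk] = (L0, L1, B)
    return table

def split_list(L, b, t, k, table):
    """Split a list of b-bit integers by bit t using one table lookup
    per chunk of k entries, plus a naive pass over the leftover tail."""
    L0, L1, B = [], [], []
    full = len(L) - len(L) % k
    for i in range(0, full, k):
        a0, a1, bb = table[tuple(L[i:i + k])]
        L0 += a0; L1 += a1; B += bb
    for x in L[full:]:                 # tail shorter than one chunk
        bit = (x >> (b - 1 - t)) & 1
        B.append(bit)
        (L1 if bit else L0).append(x)
    return L0, L1, B
```

In a packed implementation each lookup processes $\frac{1}{2}\frac{\log n}{b}$ entries in constant time, which is exactly what yields the $\mathcal{O}(\frac{Nb}{\log n})$ bound of the claim.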
Consequently, we spend $\mathcal{O}(|L_v|\tau/\log n)$ time per node $v$.
The total number of elements on all short lists is $\mathcal{O}(n \log \sigma)$,
but we also need to take into account the fact that the lengths of some short lists might
not be divisible by $\log n / \tau$, which adds $\mathcal{O}(1)$ time per node of the
wavelet tree, making the total complexity $\mathcal{O}(\sigma + n \log \sigma \tau / \log n)=\mathcal{O}(n \log \sigma \tau / \log n)$.
\paragraph{Intermixing the phases.}
In order to make sure that the space usage of the construction algorithm does not exceed the size of the final structure,
the phases are intermixed in the following way. Assuming that we are given the lists $S_u$
of all big nodes $u$ at distance $\alpha \tau$ from the root, we construct the lists
of all big nodes at distance $(\alpha+1)\tau$ from the root, if any. Then we construct
the bitmasks $B_{u}$ such that the nearest big ancestor of $u$ is at distance
$\alpha\tau$ from the root and increase $\alpha$. To construct the bitmasks,
we compute the short lists for all nodes at distances $\alpha\tau,\ldots,(\alpha+1)\tau-1$ from the
root and keep the short lists only for the nodes at the current distance.
Then the peak space of the construction process is just
$\mathcal{O}(n \log \sigma / \log n)$ words.
The total construction time is $\mathcal{O}(n \log \sigma / \tau)$ for the first phase and $\mathcal{O}(n \log \sigma \tau / \log n)$ for the second phase.
The bitmasks, lists and short lists constructed by the algorithm are illustrated in Figure~\ref{fig:wavelet example2}.
Choosing $\tau=\sqrt{\log n}$ as to minimize the total time, we get the following theorem.
\begin{figure}
\begin{center}
\include{example2}
\vspace{-.7cm}
\end{center}
\caption{Elements of the wavelet tree construction algorithm for the string from Figure~\ref{fig:wavelet example1}. The first figure shows the lists $S_{u}$ of all big nodes $u$ when $\tau=2$, and the third shows the short lists $L_{u}$ of all nodes $u$, as defined in the proof of Theorem~\ref{th:wavelet_tree}.}
\label{fig:wavelet example2}
\end{figure}
\begin{theorem}\label{th:wavelet_tree}
Given a string $s$ of length $n$ over an alphabet $\Sigma$, we can construct all
bitmasks $B_{u}$ of its wavelet tree in $\mathcal{O}(n\log \sigma / \sqrt{\log n})$ time.
\end{theorem}
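For reference, a naive Python rendition of what the theorem computes (ours, without the bit-parallel speedup, so it runs in $\mathcal{O}(n\log\sigma)$ time): it produces the bitmask $B_v$ of every heap-numbered node $v$, routing a character at depth $k$ by its $(k+1)$-th most significant bit.

```python
def wavelet_bitmasks(s, sigma_bits):
    # masks[v] is the bitmask B_v of the heap-numbered node v (root = 1).
    masks = {}
    def rec(v, lst, depth):
        if depth == sigma_bits or not lst:
            return
        bit_pos = sigma_bits - 1 - depth  # next most significant bit
        masks[v] = [(c >> bit_pos) & 1 for c in lst]
        rec(2 * v, [c for c in lst if not (c >> bit_pos) & 1], depth + 1)
        rec(2 * v + 1, [c for c in lst if (c >> bit_pos) & 1], depth + 1)
    rec(1, list(s), 0)
    return masks
```

The algorithm of the theorem produces exactly these bitmasks, but packed into machine words and computed via the lists $S_u$ of big nodes and the short lists $L_v$.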
Additionally, we want to build a rank/select structure for every $B_{u}$. While it
is well-known that given a bit string of length $N$ one can construct
an additional structure of size $o(N)$ supporting both rank
and select queries in $\mathcal{O}(1)$ time~\cite{Clark,Jacobson}, we must verify that the construction
time is not too high. The following lemma is proved in Appendix~\ref{app:wt}.
\begin{lemma}\label{lem:rank_select}
Given a bit string $B[1..N]$ packed in $\frac{N}{\log n}$ machine
words, we can extend it in $\mathcal{O}(\frac{N}{\log n})$ time with a rank/select
structure occupying additional $o(\frac{N}{\log n})$ space, assuming
an $\tilde\mathcal{O}(\sqrt{n})$ time and space preprocessing shared by all instances
of the structure.
\end{lemma}
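A simplified Python model of such a rank/select structure (ours; a small block size stands in for the machine word, the within-block scan stands in for a constant-time popcount, and select is realized by binary search over rank rather than in constant time):

```python
class RankSelect:
    def __init__(self, bits, block=8):
        # Prefix counts of 1s are sampled at block boundaries.
        self.bits, self.block = bits, block
        self.pref = [0]
        for i in range(0, len(bits), block):
            self.pref.append(self.pref[-1] + sum(bits[i:i + block]))

    def rank1(self, i):
        # Number of 1s among bits[0:i]: sampled prefix + in-block scan.
        blk = i // self.block
        return self.pref[blk] + sum(self.bits[blk * self.block:i])

    def select1(self, j):
        # Position of the j-th 1 (1-indexed), via binary search over rank1.
        lo, hi = 0, len(self.bits)
        while lo < hi:
            mid = (lo + hi) // 2
            if self.rank1(mid + 1) >= j:
                hi = mid
            else:
                lo = mid + 1
        return lo
```

The point of Lemma~\ref{lem:rank_select} is that the genuine constant-time structure can be built in time proportional to the number of machine words, not to $N$.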
\subsection{Arbitrarily-Shaped Wavelet Trees}\label{ssec:aswt}
To generalize the algorithm to arbitrarily-shaped wavelet trees of degree $\mathcal{O}(\log\sigma)$,
instead of operating on the characters we work with the labels of the root-to-leaf paths, appended with
{\bf 0}s so that they all have the same length. The heap-like numbering of the nodes
is not enough in such a setting, as we cannot guarantee that all leaves have the same depth, so
the first step is to show how to efficiently return a node given the length of its label
and the label itself stored in a single machine word.
If $\log \sigma = o(\log n)$ we can afford storing nodes in a simple array, otherwise we use the
deterministic dictionary of Ru\v{z}i\'{c}~\cite{DBLP:conf/icalp/Ruzic08}, which can be constructed
in $\mathcal{O}(\sigma(\log\log \sigma)^{2})=\mathcal{O}(n(\log\log n)^{2})$ time. In either case, we can
return in $\mathcal{O}(1)$ time the corresponding pointer, which might be null if the node
does not exist. Then the construction algorithm works as described previously:
first we construct the list $S_{u}$ of every big node $u$, and then we construct
every $B_{v}$ using the $S_{u}$ of the nearest big ancestor of $v$. The only difference
is that when splitting the list $S_{u}$ into the lists $S_{v}$ of all big nodes $v$ such that
$u$ is the first proper big ancestor of $v$, it might happen that the retrieved pointer
to $v$ is null. In such case we simply continue without appending anything to the
non-existing $S_{v}$. The running time stays the same.
\begin{theorem}\label{thm:aswt}
Let $s$ be a string of length $n$ over~$\Sigma$ and
$T$ be a full binary tree of height $\mathcal{O}(\log \sigma)$ with $\sigma$ leaves, each assigned a distinct character in $\Sigma$.
Then the $T$-shaped wavelet tree of $s$ can be constructed in $\mathcal{O}(n\log\sigma/\sqrt{\log n})$ time.
\end{theorem}
\subsection{Wavelet Trees of Larger Degree}
\label{sec:higher degree construction}
We move to constructing a wavelet tree of degree $d=\log^{\varepsilon}n$, where $d$ is a power of two. Such a higher-degree tree can
also be defined in terms of the binary wavelet tree for $s$ as follows.
We remove all inner nodes whose depth is not a multiple of $\log d$. For each surviving node we set its nearest preserved ancestor as a parent. Then each inner node
has $d$ children (the lowest-level nodes may have fewer children), and we order them consistently with the left-to-right order in the original tree.
Recall that for each node $u$ of the binary wavelet tree we define the string $S_u$ as a subsequence of $s$
consisting of its characters whose binary representation starts with the label $\ell_u$ of $u$. Then we create the bitmask
storing, for each character of $S_u$, the bit following its label $\ell_u$. Instead of $B_u$ a node $u$ of the wavelet tree of
degree $d$ now stores a string $D_u$, which contains the next $\log d$ bits following $\ell_u$.
The following lemma allows us to use binary wavelet tree construction as a black box, and consequently gives an $\mathcal{O}(n\sqrt{\log n})$-time construction
algorithm for wavelet trees of degree~$d$.
\begin{lemma}
\label{lm:rebuilding}
Given all bitmasks $B_{u}$, we can construct all strings $D_{u}$ in $\mathcal{O}(n\log\log n)$
time.
\end{lemma}
\begin{proof}
Consider a node $u$ at depth $k$ of the wavelet tree of degree $d$. Its children
correspond to all descendants of $u$ in the original wavelet tree at depth $(k+1)\log d$.
For each descendant $v$ of $u$ at depth $(k+1)\log d-\delta$, where $\delta\in [0,\log d]$,
we construct a temporary string $D'_{v}$ over the alphabet $[0,2^{\delta}-1]$. The characters
of this temporary string correspond to the characters of $S_{v}$, in the same order, and each of them
describes the prefix of length $\delta$ of the path from $v$ towards the leaf of the corresponding character.
Clearly $D_{u}=D'_{u}$. Furthermore, if the children of $v$ are
$v_{1}$ and $v_{2}$, then $D'_{v}$ can be easily defined by looking at $D'_{v_{1}}$, $D'_{v_{2}}$,
and $B_{v}$ as follows. We prepend {\bf 0} to all characters in $D'_{v_{1}}$, and {\bf 1} to all characters in $D'_{v_{2}}$.
Then we construct $D'_{v}$ by appropriately interleaving $D'_{v_{1}}$ and $D'_{v_{2}}$ according
to the order defined by $B_{v}$. We consider the bits of $B_{v}$ one-by-one. If the $i$-th bit is
{\bf 0}, we append the next character of $D'_{v_{1}}$ to the current $D'_{v}$, and otherwise we append
the next character of $D'_{v_{2}}$. Now we only have to show how to implement this process efficiently.
We pack every $\frac{1}{4}\frac{\log n}{\log d}$ consecutive characters of $D'_{v}$ into a single machine word.
To simplify the implementation, we reserve $\log d$ bits for every character irrespective of the value of $\delta$.
This makes prepending {\bf 0}s or {\bf 1}s to all characters in any $D'_{v}$ trivial, because the result
can be preprocessed in $\tilde\mathcal{O}(d^{\frac{1}{4}\frac{\log n}{\log d}})=o(n)$ time and space.
Interleaving $D'_{v_{1}}$ and $D'_{v_{2}}$ is more complicated. Instead of accessing $D'_{v_{1}}$ and $D'_{v_{2}}$ directly,
we keep two buffers, each containing at most the next $\frac{1}{4}\frac{\log n}{\log d}$ characters from the
corresponding string. Similarly, instead of accessing $B_{v}$ directly, we keep
a buffer with at most the next $\frac{1}{4}\log n$ bits of it.
Using the buffers, we can keep processing bits from $B_{v}$ as long as there are enough characters
in the input buffers. The input buffers for $D'_{v_{1}}$ and $D'_{v_{2}}$ become
empty after generating $\frac{1}{4}\frac{\log n}{\log d}$ characters, and the input buffer for $B_{v}$ becomes empty after
generating $\frac{1}{4}\log n$ characters. Hence the total number of times we need to reload one of the
input buffers is $\mathcal{O}(|D'_{v}|/ \frac{\log n}{\log d})$.
We preprocess all possible scenarios between two reloads by simulating, for every possible initial
content of the input buffers, processing the bits until one
of the buffers becomes empty.
We store the generated data (which is at most $\frac{1}{2}\log n$ bits) and the final
content of the input buffers. The whole preprocessing takes $\tilde\mathcal{O}(2^{\frac{3}{4}\log n})=o(n)$ time and space,
and then the number of operations required to generate packed
$D'_{v}$ is proportional to the number of times we need to reload the buffers, so by summing over
all~$v$ the total complexity is $\mathcal{O}(n\log\log n)$.
\end{proof}
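The interleaving step of the proof, stripped of the word-packing and buffering, can be sketched in Python as follows (ours; characters are represented as bit strings, and the function name is illustrative):

```python
def merge_children(d1, d2, bmask):
    # Build D'_v from its children's temporary strings: prepend the routing
    # bit (0 for the left child, 1 for the right) and interleave the two
    # strings according to the bitmask B_v.
    it1, it2 = iter(d1), iter(d2)
    return ['0' + next(it1) if bit == 0 else '1' + next(it2) for bit in bmask]
```

In the actual algorithm this character-at-a-time loop is replaced by the precomputed buffer transitions, so the work is proportional to the number of buffer reloads rather than to $|D'_v|$.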
Then we extend every $D_{u}$ with a generalized rank/select data structure.
Such a structure for a string $D[1..N]$ implements operations $\mathrm{rank}_{c}(i)$, which counts
positions $k \in [1,i]$ such that $D[k] \leq c$, and $\mathrm{select}_{c}(i)$, which selects the
$i$-th occurrence of $c$ in the whole $D[1..N]$, for any $c\in\Sigma$, both in $\mathcal{O}(1)$ time.
Again, it is well-known that such a structure can be implemented using just $o(n\log \sigma)$
additional bits if $\sigma=\mathcal{O}(\polylog(n))$~\cite{Ferragina}, but its construction time
is not explicitly stated in the literature. Therefore, we prove the following lemma in Appendix~\ref{app:wt}.
\begin{lemma}
\label{lm:rank/select construction 2}
Let $d\le \log^{\varepsilon} n$ for $\varepsilon<\frac13$.
Given a string $D[1..N]$ over the alphabet $[0, d-1]$ packed in $\frac{N\log d}{\log n}$ machine
words, we can extend it in $\mathcal{O}(\frac{N\log d}{\log n})$ time with a generalized rank/select data
structure occupying additional $o(\frac{N}{\log n})$ space, assuming $\tilde\mathcal{O}(\sqrt{n})$ time and space preprocessing shared by all instances
of the structure.
\end{lemma}
\subsection{Range Selection}
\label{sec:range selection}
A classic application of wavelet trees is that, given an array $A[1..n]$ of integers,
we can construct a structure of
size $\mathcal{O}(n)$, which allows answering any range rank/select query in $\mathcal{O}(\log n)$
time. A range select query is to return the $k$-th smallest
element in $A[i..j]$, while a range rank query is to count
how many elements of $A[i..j]$ are smaller than a given $x = A[k]$.
Given the wavelet tree for $A$,
any range rank/select query can be answered by traversing a root-to-leaf path of the tree using the rank/select data structures for bitmasks $B_u$ at subsequent nodes.
With the $\mathcal{O}(n\sqrt{\log n})$-time construction algorithm this matches the bounds of Chan and
P\u{a}tra\c{s}cu~\cite{DBLP:conf/soda/ChanP10} for range select queries, but is slower by a factor of $\log\log n$
than their solution for range rank queries.
We will show that one can in fact answer
any range rank/select query in $\mathcal{O}(\frac{\log n}{\log\log n})$ time with an $\mathcal{O}(n)$
size structure, which can be constructed in $\mathcal{O}(n\sqrt{\log n})$ time.
For range rank queries this is not a new result, but we feel that our proof gives more
insight into the structure of the problem. For range select queries, Brodal et al.~\cite{BrodalMedian}
showed that one can answer a query in $\mathcal{O}(\frac{\log n}{\log\log n})$ time
with an $\mathcal{O}(n)$ size structure, but with $\mathcal{O}(n\log n)$ construction time.
Chan and P\u{a}tra\c{s}cu asked if methods of \cite{BWT} can be combined with the
efficient construction. We answer this question affirmatively.
A range rank query is easily implemented in $\mathcal{O}(\frac{\log n}{\log\log n})$ time using
wavelet tree of degree $\log^{\varepsilon}n$ described in the previous section.
To compute the rank of $x = A[k]$ in $A[i..j]$, we traverse the path from the root to the leaf
corresponding to $A[k]$. At every node
we use the generalized rank structure to update the answer and the current interval $[i..j]$ before
we descend.
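A naive Python sketch of this descent on a binary wavelet tree (ours, purely illustrative): instead of stored rank structures it recomputes each node's sequence, so a query is slow, but it follows exactly the described root-to-leaf traversal. Positions are 0-indexed and the range is inclusive.

```python
def range_rank(s, sigma_bits, i, j, x):
    # Number of positions k in [i, j] with s[k] < x, computed by walking
    # from the root towards the leaf of x and remapping the interval.
    lst, lo, hi = list(s), i, j + 1   # current node sequence, half-open range
    ans = 0
    for depth in range(sigma_bits):
        bit_pos = sigma_bits - 1 - depth
        bit = (x >> bit_pos) & 1
        mask = [(c >> bit_pos) & 1 for c in lst]
        if bit == 1:
            # everything routed to the left child is smaller than x
            ans += mask[lo:hi].count(0)
        # remap [lo, hi) into the chosen child via rank on the bitmask
        lo, hi = mask[:lo].count(bit), mask[:hi].count(bit)
        lst = [c for c in lst if ((c >> bit_pos) & 1) == bit]
    return ans
```

In the degree-$d$ tree the same walk has only $\mathcal{O}(\frac{\log n}{\log\log n})$ levels, and the generalized rank structure replaces the counting in $\mathcal{O}(1)$ time per level.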
Implementing the range select queries in $\mathcal{O}(\frac{\log n}{\log \log n})$ time is more complicated. Similarly to the rank queries, we start the traversal at the root and descend along a root-to-leaf path. At each node we select the child we will descend to, and update the current interval $[i..j]$ using the generalized rank data structure. To make this query algorithm fast, we preprocess each string $D_u$ and store the extracted information in matrix form. As shown by Brodal et al.~\cite{BrodalMedian}, constant-time access to such information is enough to implement range select queries in $\mathcal{O}(\log n/\log\log n)$ time.
The matrices for a string $D_u$ are defined as follows. We partition $D_u$ into superblocks of length $d\log^2 n$.\footnote{In the original paper, superblocks are of length $d\log n$, but this does not change the query algorithm.} For each superblock we store the
cumulative generalized rank of every character, i.e., for every character $c$ we store the number of positions
where characters $c'\le c$ occur in the prefix of the string up to the beginning of the superblock. We think of this as a $d\times \log n$
matrix $M$.
The matrix is stored in two different ways. In the first copy, every row is stored as a single word. In the second copy, we divide the matrix into sections of $\log n / d$ columns, and store every section in a single word. We make sure there is an overlap of four columns between the sections, meaning that the first section contains columns $1,\ldots,\log n / d$, the second section contains columns $\log n / d -3,\ldots, 2\log n / d-4$, and so on.
Each superblock is then partitioned into blocks of length $\log n/\log d$ each.
For every block, we store the cumulative generalized rank within the superblock
of every character, i.e., for every
character $c$ we store the number of positions where characters $c'\le c$ occur in the prefix
of the superblock up to the beginning of the block.
We represent this information in a \emph{small matrix}~$M'$,
which can be packed in a single word, as it requires only $\mathcal{O}(d \log (d\log^2 n))$ bits.
\begin{lemma}
\label{lm:Brodal construction}
Given a string $D[1..N]$ over the alphabet $[0, d-1]$ packed in $\frac{N\log d}{\log n}$
machine words, we can extend it in $\mathcal{O}(\frac{N\log d}{\log n})$ time and space with the following information:
\begin{enumerate}[1)]\compact
\item The first copy of the matrix $M$ for each superblock (multiple of $d\log^2 n$);
\item The second copy of the matrix $M$ for each superblock (multiple of $d\log^2 n$);
\item The small matrix $M'$ for each block (multiple of $\log n/\log d$);
\end{enumerate}
occupying additional $o(\frac{N\log d}{\log n})$ space, assuming
an $\tilde\mathcal{O}(\sqrt{n})$ time and space preprocessing shared by all instances
of the structure and $d =\log^{\varepsilon}n$.
\end{lemma}
\begin{proof}
To compute the small matrices $M'$, i.e., the cumulative generalized ranks for blocks,
and the first copies of the matrices $M$, i.e., the cumulative generalized ranks for superblocks,
we notice that the standard solution for generalized rank queries in $\mathcal{O}(1)$ time is to split
the string into superblocks and blocks. Then, for every superblock we store the cumulative generalized
rank of every character, and for every block we store the cumulative generalized rank within
the superblock for every character. As explained in the proof of Lemma~\ref{lm:rank/select construction 2}
presented in the appendix, such data can be computed in $\mathcal{O}(\frac{N\log d}{\log n})$ time
if the size of the blocks and the superblocks are chosen to be $\frac{1}{3}\frac{\log n}{\log d}$
and $d\log^2n$, respectively. Therefore, we obtain in the same complexity every
first copy of the matrix $M$, and a small matrix every $\frac{1}{3}\frac{\log n}{\log d}$ characters.
We can simply extract and save every third such small matrix, also in the same complexity.
The second copies of the matrices are constructed from the first copies in $\mathcal{O}(d^{2})$ time each;
we simply partition each row into (overlapping) sections and append each part to the appropriate machine words.
This takes $\mathcal{O}(\frac{N d^{2}}{d^{2}\log n})=\mathcal{O}(\frac{N}{\log n})$ in total.
\end{proof}
As follows from the lemma, all strings $D_u$ at a single level of the tree can be extended in $\mathcal{O}(n \log \log n / \log n)$ time,
which, together with constructing the tree itself, sums up to $\mathcal{O}(n\sqrt{\log n})$ time in total.
\subsection{Range Successor Queries}
\label{sec:orpq}
In this section we show how wavelet trees can be used to answer range successor queries.
In our setting, these queries can be interpreted as follows: given a range $R=[i,j]$ of positions of a string $s$ and a character $c$, compute $c'=\min\{s[k] : k\in R,\ s[k]\ge c\}$, the successor of $c$ in $R$.
\subsubsection{Online algorithm}
We first show how to answer successor queries online by a straightforward reduction to range rank and select queries.
\begin{theorem}\label{thm:rsq:on}
Given an array $A[1..n]$ of integers, in $\mathcal{O}(n\sqrt{\log n})$ time we can construct a data structure of size $\mathcal{O}(n)$,
which can answer range successor queries in $\mathcal{O}(\log n / \log \log n)$ time.
\end{theorem}
\begin{proof}
To compute the successor of $A[k]$ in $A[i..j]$, we determine the rank $r$ of $A[k]$ in $A[i..j]$ and use a range selection query
to find the $(r+1)$-th smallest element in $A[i..j]$. This element is the successor of $A[k]$.
\end{proof}
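The reduction can be captured by the following Python sketch (ours; both the rank and the selection are computed directly instead of being answered by the wavelet-tree data structures):

```python
def range_successor(A, i, j, c):
    # Rank of c in A[i..j] (number of strictly smaller elements), then the
    # (r+1)-th smallest element of A[i..j]; None when no element is >= c.
    window = sorted(A[i:j + 1])
    r = sum(1 for v in window if v < c)
    return window[r] if r < len(window) else None
```

With the structure of Theorem~\ref{thm:rsq:on}, the sort and the scan are replaced by one range rank and one range select query, each in $\mathcal{O}(\log n/\log\log n)$ time.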
\subsubsection{Offline Algorithm}
We begin with a simple (though inefficient) online algorithm which uses just the wavelet tree.
Later we will show how to speed it up by making it offline.
Recall that for each node $v$ of the wavelet tree we defined $S_v$ as the subsequence of $s$ containing the characters corresponding to the leaves in the subtree rooted at $v$.
For a fixed range $R$ let $R_v$ be the \emph{inherited} range of positions, i.e., a range of positions in $S_v$ such that the corresponding positions of $s$ belong to $R$.
Note that if $v$ is the root of the wavelet tree, then $R_v = R$.
Moreover, if $w$ is the parent of $v$, we can use $R_w$ and the rank structure on the bitmask $B_w$ to compute $R_v$ in constant time.
The problem of efficiently determining $R_v$ for a given $R$ and $v$, known as \emph{ball inheritance problem}~\cite{DBLP:journals/siamcomp/Chazelle88,DBLP:conf/compgeom/ChanLP11}, is the heart of state-of-the-art online algorithms for range successor queries~\cite{SortedRangeReporting,DBLP:journals/corr/Zhou13b}.
Our solutions do not rely on this tool as a black box, but computing and maintaining inherited ranges
is still an important ingredient of them.
We identify leaves of the wavelet trees with characters in $\Sigma$.
Note that $c'$ is the leftmost leaf to the right of $c$ with a non-empty inherited range $R_{c'}$.
While computing $c'$, as an intermediate step we determine the lowest common ancestor $w$ of $c$ and $c'$.
If $c'\ne c$, then $w$ is the deepest ancestor of $c$ whose left subtree contains $c$ and right child
$w_{r}$ satisfies $R_{w_r}\ne \emptyset$.
Moreover, $c'$ is then the leftmost leaf in the subtree of $w_r$ for which the inherited range is non-empty.
This leads to the following algorithm: we traverse the path from the root to $c$, maintaining the inherited ranges.
Whenever we visit a node $w$ such that $c$ is in its left subtree, we check whether
$R_{w_r}\ne\emptyset$, where $w_r$ is the right child of $w$. During the algorithm we remember the last node $w$ satisfying this property and the inherited range $R_w$.
If $R_c\ne \emptyset$, then $c'=c$ is the successor of $c$ in $R$. Otherwise, we find the leftmost leaf in the subtree of $w_r$ with a non-empty inherited range.
For this we descend the tree from $w_r$ going to the left child if its inherited range is non-empty and to the right child otherwise.
In both phases of the algorithm we follow a single path down the tree, so the running time is $\mathcal{O}(\log \sigma)$.
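Both phases can be prototyped in Python as follows (ours, illustrative only: node sequences are recomputed instead of being accessed through rank structures; positions are 0-indexed and the range is inclusive):

```python
def wt_successor(s, sigma_bits, i, j, c):
    # Phase 1: walk from the root towards the leaf of c, maintaining the
    # inherited range [lo, hi) and remembering the deepest left turn whose
    # right sibling has a non-empty inherited range.
    lst, lo, hi = list(s), i, j + 1
    cand = None
    for depth in range(sigma_bits):
        bit_pos = sigma_bits - 1 - depth
        bit = (c >> bit_pos) & 1
        mask = [(x >> bit_pos) & 1 for x in lst]
        if bit == 0:
            r_lo, r_hi = mask[:lo].count(1), mask[:hi].count(1)
            if r_lo < r_hi:
                right = [x for x in lst if (x >> bit_pos) & 1]
                cand = (right, r_lo, r_hi, depth + 1)
        lo, hi = mask[:lo].count(bit), mask[:hi].count(bit)
        lst = [x for x in lst if ((x >> bit_pos) & 1) == bit]
    if lo < hi:
        return c          # c itself occurs in s[i..j]
    if cand is None:
        return None       # no character >= c occurs in s[i..j]
    # Phase 2: from the right sibling w_r, descend to the left child whenever
    # its inherited range is non-empty, and to the right child otherwise.
    lst, lo, hi, start = cand
    for depth in range(start, sigma_bits):
        bit_pos = sigma_bits - 1 - depth
        mask = [(x >> bit_pos) & 1 for x in lst]
        l_lo, l_hi = mask[:lo].count(0), mask[:hi].count(0)
        if l_lo < l_hi:
            lo, hi, bit = l_lo, l_hi, 0
        else:
            lo, hi, bit = mask[:lo].count(1), mask[:hi].count(1), 1
        lst = [x for x in lst if ((x >> bit_pos) & 1) == bit]
    return lst[0]
```

The speedup described next replaces the tail of phase 2 below a big node by a single range minimum query, and the per-node test in phase 1 by a range maximum query.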
To speed up the algorithm, we reuse the concept of big nodes, introduced in Section~\ref{sec:WTconstruction}.
Recall that a node is big if its depth is a multiple of a parameter $\tau$ and small otherwise; the value of $\tau$ will be fixed later.
The construction algorithm explicitly generates subsequences $S_v$ for all big nodes $v$.
To answer range successor queries, we store these subsequences and augment them with data structures
for range minimum and range maximum queries~\cite{DBLP:journals/siamcomp/HarelT84,Bender:2000:LPR:646388.690192}.
Improving the running time of the second part of the algorithm is easy:
once we visit a big node $v$, instead of proceeding down to the leaf, we ask a range minimum query for the range $R_v$ of $S_v$
to determine the final answer. Consequently, this part takes $\mathcal{O}(\tau)$ time only.
On the other hand, range maximum queries let us easily test if a given big ancestor $v$ of $c$ is also an ancestor of $c'$:
we check whether the maximum in the range $R_v$ of $S_v$ is at least $c$. Thus, if we knew inherited ranges for all
$\mathcal{O}(\frac{\log \sigma}{\tau})$ big ancestors of $c$, we could determine the lowest of them which is an ancestor of $c'$
and run the first part of the original algorithm starting at that ancestor and terminating at the next big ancestor (or at $c'$, if none).
Hence, we could determine the lowest common ancestor $w$ in $\mathcal{O}(\frac{\log \sigma}{\tau}+\tau)$ time.
Computing the inherited ranges of big ancestors is where we benefit from the offline setting.
\begin{lemma}
Given a collection of $q$ queries consisting of a range $R_i$ and a character $c_i$,
we can compute the inherited ranges of all big ancestors for all $c_i$ in $\mathcal{O}((n+q)\frac{\log \sigma}{\tau})$ time.
\end{lemma}
\begin{proof}
We only show how to find the right endpoints of the ranges. Computing the left endpoints is analogous.
The algorithm resembles the first phase of the wavelet tree construction.
We compute the endpoints in a level-by-level top-down manner.
Let $R_{i,u}$ be the range induced by $R_i$ at a big node $u$.
In a single step we use the endpoints of the ranges $R_{i,u}$ at the node $u$ to find the endpoints of the ranges at nodes whose deepest (proper) big ancestor is $u$.
There are exactly $2^\tau$ such nodes $v_0,\ldots,v_{2^\tau-1}$.
We process the sequence $S_u$ maintaining a counter for each node $v_t$, $t = 0..2^\tau-1$.
For each $j$ we first increment the counter of the node $v_t$ such that $S_{v_t}$ contains $S_u[j]$.
Then, we process the ranges $R_{i,u}$ whose right endpoint is $j$. We identify the node $v_t$ that is the ancestor of $c_i$ and set the endpoint of $R_{i,v_t}$ to be the current value of the counter for $v_t$.
Such a single step takes $\mathcal{O}(|S_u|+q_u)$ time where $q_u$ is the number of queries with $c_i$ in the subtree of $u$.
This sums up to $\mathcal{O}(n+q)$ per level and $\mathcal{O}((n+q)\frac{\log \sigma}{\tau})$ in total.
\end{proof}
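A single step of this computation, pushing right endpoints from one big node $u$ to its depth-$\tau$ big descendants (identified by the next $\tau$ bits of the query characters), can be sketched in Python as follows; the interface is our own simplification:

```python
def push_endpoints(S_u, tau, bit_hi, queries):
    # queries: list of (j, c) where S_u[0:j] is the prefix delimited by the
    # right endpoint of R_{i,u} and c is the query character; bit_hi is the
    # number of bits of c not yet consumed on the path to u. Returns, per
    # query, the right endpoint of the inherited range at the depth-tau
    # descendant v_t of u lying on the path towards the leaf of c.
    by_j = {}
    for qi, (j, c) in enumerate(queries):
        by_j.setdefault(j, []).append(qi)
    counters = [0] * (1 << tau)          # one counter per descendant v_t
    bucket = lambda c: (c >> (bit_hi - tau)) & ((1 << tau) - 1)
    out = [0] * len(queries)
    for j, ch in enumerate(S_u, start=1):
        counters[bucket(ch)] += 1        # ch belongs to S_{v_t} for t = bucket(ch)
        for qi in by_j.get(j, []):       # queries whose endpoint is j
            out[qi] = counters[bucket(queries[qi][1])]
    return out
```

Repeating this level by level, with the queries sorted by their endpoints once in advance, gives the $\mathcal{O}((n+q)\frac{\log\sigma}{\tau})$ bound of the lemma.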
The total running time, including wavelet tree construction, is $\mathcal{O}(n\frac{\tau \log \sigma}{\log n} + (n+q)\frac{\log \sigma}{\tau} + q\tau)$.
With $\tau = \sqrt{\log \sigma}$, this gives $\mathcal{O}((n+q)\sqrt{\log \sigma} + n\log^{3/2} \sigma/\log n)$ time and yields the following result.
\begin{theorem}
A collection of $q$ range successor queries on a string $s$ of length $n$ over an alphabet~$\Sigma$
can be answered in $\mathcal{O}((n+q)\sqrt{\log \sigma})$ time.
\end{theorem}
\begin{corollary}\label{cor:rsq:off}
Given an array $A[1..n]$ of $n$ integers, we can answer $q$ range successor queries in $\mathcal{O}((n+q)\sqrt{\log n})$ total time.
\end{corollary}
\begin{proof}
A standard reduction lets us assume that the values in the array are in $[0,n-1]$.
This comes at the price of sorting the array, which takes $\mathcal{O}((n+q)\log \log n)$ time using the deterministic sorting algorithm of Han~\cite{Han}.
\end{proof}
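The standard reduction amounts to coordinate compression, which can be sketched in Python as follows (ours; Python's comparison sort stands in for the deterministic integer sorting of Han):

```python
def compress(A):
    # Replace each value by its rank among the distinct values of A,
    # preserving the relative order; vals maps ranks back to originals.
    vals = sorted(set(A))
    rank = {v: r for r, v in enumerate(vals)}
    return [rank[v] for v in A], vals
```

Query values are translated through the same mapping (a predecessor search in `vals`), and answers are translated back via `vals`.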
\section{Wavelet Suffix Trees}\label{sec:wst}
In this section we generalize wavelet trees to obtain wavelet suffix trees.
With logarithmic height and a shape resembling that of the suffix tree, wavelet suffix trees, augmented with additional stringological data structures, become a very powerful tool. In particular, they allow answering the following queries efficiently: (1) find the $k$-th lexicographically minimal suffix of a substring of the given string (\emph{substring suffix selection}), (2) find the rank of one substring among the suffixes of another substring (\emph{substring suffix rank}), and (3) compute the run-length encoding of the Burrows-Wheeler transform of a substring.
\paragraph{Organisation of Section~\ref{sec:wst}.}
In Section~\ref{ssec:prelim} we introduce several, mostly standard, stringological notions
and recall some already known algorithmic results.
Section~\ref{ssec:over} provides a high-level description of the wavelet suffix trees. It forms an interface
between the query algorithms (Section~\ref{ssec:apps}) and the more technical
content: full description of the data structure (Section~\ref{ssec:full}) and its construction algorithm (Section~\ref{ssec:constr}).
Consequently, Sections~\ref{ssec:full} and~\ref{ssec:apps} can be read separately.
The latter additionally contains cell-probe lower bounds for some of the queries (suffix rank \& selection),
as well as a description of a generic transformation of the data structure, which allows replacing the dependence on $n$ with a dependence on $|x|$
in the running times of the query algorithms.
\subsection{Preliminaries}\label{ssec:prelim}
Let $w$ be a string of length $|w| = n$ over the alphabet $\Sigma = [0, \sigma-1]$.
For $1 \le i \le j \le n$, $w[i..j]$ denotes the \emph{substring} of $w$ from position $i$ to position $j$ (inclusive).
For $i = 1$ or $j = |w|$, we use shorthands $w[..j]$ and $w[i..]$.
If $x=w[i..j]$, we say that $x$ \emph{occurs} in $w$ at position $i$.
Each substring $w[..j]$ is called a \emph{prefix} of $w$, and each substring $w[i..]$ is called a \emph{suffix} of $w$.
A substring which occurs both as a prefix and as a suffix of $w$ is called a \emph{border} of $w$.
The length of the longest common prefix of two strings $x,y$ is denoted by $\lcp(x,y)$.
We extend $\Sigma$ with a sentinel symbol $ \$$, which we assume to be smaller than any other character.
The order on $\Sigma$ can be generalized in a standard way to the \emph{lexicographic} order of the strings over $\Sigma$: a string $x$ is lexicographically smaller than $y$ (denoted $x\prec y$) if either $x$ is a proper prefix of $y$, or there exists a position~$i$, $0 \le i < \min\{|x|, |y|\}$, such that $x[1..i] = y[1..i]$ and $x[i+1] \prec y[i+1]$.
The following lemma provides one of the standard tools in stringology.
\begin{lemma}[LCP Queries \cite{AlgorithmsOnStrings}]\label{lem:all}
A~string $w$ of length $n$ can be preprocessed in $\mathcal{O}(n)$ time so that the following queries can be answered in $\mathcal{O}(1)$ time: Given substrings $x$ and $y$ of $w$, compute $\lcp(x, y)$ and decide whether $x \prec y$, $x = y$, or $x \succ y$.
\end{lemma}
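A self-contained Python model of this lemma for suffixes (ours: it combines a suffix array, Kasai's LCP algorithm, and a sparse table for range minima, but sorts the suffixes naively, so the construction is not linear-time; LCPs of general substrings follow by trimming to the substring lengths):

```python
class LCP:
    def __init__(self, w):
        self.w = w
        n = len(w)
        self.sa = sorted(range(n), key=lambda i: w[i:])  # naive suffix sorting
        self.rank = [0] * n
        for r, i in enumerate(self.sa):
            self.rank[i] = r
        # Kasai's algorithm: lcp[r] = lcp of the suffixes sa[r] and sa[r+1]
        lcp = [0] * max(n - 1, 0)
        h = 0
        for i in range(n):
            if self.rank[i] + 1 < n:
                j = self.sa[self.rank[i] + 1]
                while i + h < n and j + h < n and w[i + h] == w[j + h]:
                    h += 1
                lcp[self.rank[i]] = h
                if h:
                    h -= 1
            else:
                h = 0
        # sparse table for range minimum queries over the lcp array
        self.table = [lcp]
        k = 1
        while (1 << k) <= len(lcp):
            prev = self.table[-1]
            self.table.append([min(prev[x], prev[x + (1 << (k - 1))])
                               for x in range(len(lcp) - (1 << k) + 1)])
            k += 1

    def lcp_suffixes(self, i, j):
        # lcp(w[i..], w[j..]) as a range minimum over the lcp array
        if i == j:
            return len(self.w) - i
        a, b = sorted((self.rank[i], self.rank[j]))
        k = (b - a).bit_length() - 1
        return min(self.table[k][a], self.table[k][b - (1 << k)])
```

Comparing two substrings then reduces to one `lcp_suffixes` call followed by a single character (or length) comparison.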
We say that a sequence $p_0< p_1 < \ldots < p_k$ of positions in a string $w$
is a \emph{periodic progression} if $w[p_0 .. p_{1}-1]=\ldots=w[p_{k-1}.. p_k-1]$.
Periodic progressions $p,p'$ are called \emph{non-overlapping} if the maximum term in $p$ is smaller than the minimum term in $p'$ or vice versa, the maximum term in $p'$ is smaller than the minimum term in $p$.
Note that any periodic progression is an arithmetic progression and consequently it can be represented by three integers: $p_0$, $p_1-p_0$, and $k$.
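The triple representation can be sketched as follows (ours; for instance, the occurrences $0,2,4,6$ of \texttt{ab} in \texttt{abababab} form a periodic progression):

```python
def pack_progression(positions, w):
    # Verify that the positions form a periodic progression in w and pack it
    # as the triple (first term, common difference, number of steps).
    p0, diff, k = positions[0], positions[1] - positions[0], len(positions) - 1
    for t in range(k):
        assert positions[t + 1] - positions[t] == diff
        assert w[positions[t]:positions[t] + diff] == w[p0:p0 + diff]
    return p0, diff, k
```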
Periodic progressions appear in our work because of the following result:
\begin{theorem}[\cite{DBLP:journals/corr/KociumakaRRW13}]\label{th:occurrences}
Using a data structure of size $\mathcal{O}(n)$ with $\mathcal{O}(n)$-time randomized (Las Vegas) construction, the following queries can be answered in constant time: Given two substrings $x$ and $y$ such that $|x|=\mathcal{O}(|y|)$, report the positions of all occurrences of $y$ in~$x$,
represented as at most $\frac{|x|+1}{|y|+1}$ non-overlapping periodic progressions.
\end{theorem}
\subsection{Overview of wavelet suffix trees}\label{ssec:over}
For a string $w$ of length $n$, a \emph{wavelet suffix tree} of $w$ is a full binary tree of logarithmic height.
Each of its $n$ leaves corresponds to a non-empty suffix of $w\$$.
The lexicographic order of suffixes is preserved as the left-to-right order of leaves.
Each node $u$ of the wavelet suffix tree stores two bitmasks.
Bits of the first bitmask correspond to suffixes below $u$ sorted by their starting positions,
and bits of the second bitmask correspond to these suffixes sorted by pairs (preceding character, starting position).
The $i$-th bit of either bitmask is set to $0$ if the $i$-th suffix belongs to the left subtree of $u$ and to $1$ otherwise.
Like in the standard wavelet trees, on top of the bitmasks we maintain a rank/select data structure.
See Figure~\ref{fig:wst} for a sample wavelet suffix tree with both bitmasks listed in the nodes.
Each edge $e$ of the wavelet suffix tree is associated with a sorted list $L(e)$ containing substrings of~$w$.
The wavelet suffix tree enjoys an important \emph{lexicographic property}.
Imagine we traverse the tree depth-first, and when going \emph{down} an edge $e$ we write out the contents of $L(e)$,
whereas when visiting a leaf we output the corresponding suffix of $w\$$.
Then, we obtain the lexicographically sorted list of all substrings of $w\$$ (without repetitions).%
\footnote{A similar property holds for suffix trees if we define $L(e)$ so that it contains the labels of
all implicit nodes on $e$ and the label of the lower explicit endpoint of $e$.}
This, in particular, implies that the substrings in $L(e)$ are consecutive prefixes of the longest substring in $L(e)$,
and that for each substring $y$ of $w$ there is exactly one edge $e$ such that $y\in L(e)$.
\begin{figure}[t]
\input{fig_wst}
\caption{A wavelet suffix tree of $w=\texttt{ababbabababb}$.
Leaves corresponding to $w[i..]\$$ are labelled with $i$.
Elements of $L(e)$ are listed next to $e$, with $\dots$
denoting further substrings up to the suffix of $w$. Suffixes of $x=\texttt{\color{red}bababa}$ are marked red, and suffixes of $x=\texttt{\color{blue}abababb}$ blue. Note that the prefixes of $w[i..]$ do not need to lie above the leaf~$i$ (see $w[1..5]=\texttt{ababb}$),
and the substrings above the leaf $i$ do not need to be prefixes of $w[i..]$ (see $w[10..]$ and $\texttt{aba}$).}
\label{fig:wst}
\end{figure}
In the query algorithms, we actually work with $L_x(e)$, containing the suffixes of $x$ among the elements of $L(e)$.
For each edge $e$, starting positions of these suffixes form $\mathcal{O}(1)$ non-overlapping periodic progressions,
and consequently the list $L_x(e)$ admits a constant-space representation.
Nevertheless, we do not store the lists explicitly, but instead generate some of them on the fly.
This is one of the auxiliary operations,
each of which is supported by the wavelet suffix tree in constant~time.
\begin{enumerate}[(1)]\compact
\item For a substring $x$ and an edge $e$, output the list $L_x (e)$ represented as $\mathcal{O}(1)$ non-overlapping periodic progressions;
\item Count the number of suffixes of $x = w[i..j]$ in the left/right subtree of a node (given along with the segment of its first bitmask corresponding to suffixes that start inside $[i, j]$);
\item Count the number of suffixes of $x=w[i..j]$ that are preceded by a character~$c$ and lie in the left/right subtree of a node (given along with the segment of its second bitmask corresponding to suffixes that start inside $[i, j]$ and are preceded by $c$);
\item For a substring $x$ and an edge $e$, compute the run-length encoding of the sequence of characters preceding suffixes in $L_x(e)$.
\end{enumerate}
\subsection{Full description of wavelet suffix trees}\label{ssec:full}
We start the description with Section~\ref{ssec:tools}, where we introduce \emph{string intervals},
a notion central to the definition of wavelet suffix tree.
We also present there corollaries of Lemma~\ref{lem:all} which let us efficiently deal with string intervals.
Then, in Section~\ref{ssec:WSTdefinition}, we give a precise definition of wavelet suffix trees and prove its several combinatorial consequences.
We conclude with Section~\ref{ssec:ops}, where we provide the implementations of auxiliary operations defined in Section~\ref{ssec:over}.
\subsubsection{String intervals}\label{ssec:tools}
To define wavelet suffix trees, we often need to compare substrings of $w$ trimmed to a certain number of characters.
If instead of two substrings $x$ and $y$ themselves we compare their counterparts trimmed to $\ell$ characters,
i.e., $x[1..\min\{\ell,|x|\}]$ and $y[1..\min\{\ell,|y|\}]$,
we use $\ell$ in the subscript of the operator, e.g., $x=_{\ell}y$ or $x \preceq_\ell y$.
For a pair of strings $s, t$ and a positive integer $\ell$ we define a \emph{string interval} $[s, t]_\ell=\{z \in \bar{\Sigma}^*: s \preceq_\ell z \preceq_\ell t\}$ and $(s,t)_\ell = \{z\in \bar{\Sigma}^* : s\prec_\ell z \prec_\ell t\}$. Intervals $[s, t)_\ell$ and $(s, t]_\ell$ are defined analogously. The strings $s,t$ are called the \emph{endpoints} of these intervals.
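For concreteness, the trimmed comparisons and the resulting string intervals can be sketched directly in code (a naive illustration with $\mathcal{O}(\ell)$-time comparisons; the helper names are ours, and the paper performs such comparisons in constant time via Lemma~\ref{lem:all}):

```python
def cmp_trim(x, y, ell):
    """Compare x and y after trimming each to its first ell characters:
    returns -1, 0, or 1 for x <_ell y, x =_ell y, x >_ell y."""
    xt, yt = x[:ell], y[:ell]
    return (xt > yt) - (xt < yt)

def in_closed_interval(z, s, t, ell):
    """Membership test z in [s, t]_ell from the definition above."""
    return cmp_trim(s, z, ell) <= 0 and cmp_trim(z, t, ell) <= 0
```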
In the remainder of this section, we show that the data structure of Lemma~\ref{lem:all} can answer queries
related to string intervals and periodic progressions, which arise in Section~\ref{ssec:ops}.
We start with a simple auxiliary result; here $y^\infty$ denotes a (one-sided) infinite string
obtained by concatenating an infinite number of copies of $y$.
\begin{lemma}\label{lem:corr}
The data structure of Lemma~\ref{lem:all} supports the following queries in $\mathcal{O}(1)$ time:
\begin{enumerate}[(1)]\compact
\item\label{it:lexStrong} Given substrings $x$, $y$ of $w$ and an integer $\ell$, determine if $x \prec_\ell y$, $x =_\ell y$, or $x \succ_\ell y$.
\item\label{it:pp} Given substrings $x,y$ of~$w$,
compute $\lcp(x,y^\infty)$ and determine whether $x\prec y^\infty$ or $x\succ y^\infty$.
\end{enumerate}
\end{lemma}
\begin{proof}
\noindent
(\ref{it:lexStrong}) By Lemma~\ref{lem:all}, we may assume to know $\lcp(x,y)$. If $\lcp(x,y) \ge \ell$, then $x =_\ell y$. Otherwise,
trimming $x$ and $y$ to $\ell$ characters does not influence the order between these two substrings.
\smallskip
\noindent
(\ref{it:pp})
If $\lcp(x,y)< |y|$, i.e., $y$ is not a prefix of $x$,
then $\lcp(x,y^\infty)=\lcp(x,y)$ and the order between $x$ and $y^\infty$ is the same as between $x$ and $y$.
Otherwise, define $x'$ so that $x=yx'$. Then $\lcp(x,y^\infty) = |y|+\lcp(x',x)$ and the order
between $x$ and $y^\infty$ is the same as between $x'$ and~$x$.
Consequently, the query can be answered in constant time in both cases.
\end{proof}
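A direct way to evaluate query~(2) is to compare $x$ against $y$ repeated cyclically (a naive linear-time sketch of the semantics only; the constant-time version follows the case analysis in the proof above):

```python
def lcp_with_power(x, y):
    """Return (lcp(x, y^infinity), order), where order is '<' or '>'
    depending on whether x precedes or follows y^infinity.
    A finite x can never equal the infinite string y^infinity."""
    l = 0
    while l < len(x) and x[l] == y[l % len(y)]:
        l += 1
    if l == len(x):  # x is a prefix of y^infinity, hence strictly smaller
        return l, "<"
    return l, ("<" if x[l] < y[l % len(y)] else ">")
```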
\begin{lemma}\label{lem:IntervalSelection}
The data structure of Lemma~\ref{lem:all} supports the following queries in $\mathcal{O}(1)$ time:
Given a periodic progression $p_0<\ldots<p_k$ in $w$, a position $j\ge p_k$, and a string interval $I$ whose endpoints are substrings of~$w$,
report, as a single periodic progression, all positions $p_i$ such that $w[p_i..j] \in I$.
\end{lemma}
\begin{proof}
If $k=0$, it suffices to apply Lemma~\ref{lem:corr}(\ref{it:lexStrong}).
Thus, we assume $k\ge 1$ in the remainder of the proof.
Let $s$ and $t$ be the endpoints of $I$, let $\ell$ be its trim parameter (i.e., $I$ is one of $[s,t]_\ell$, $[s,t)_\ell$, $(s,t]_\ell$, $(s,t)_\ell$), and let $\rho=w[p_0..p_1-1]$ and $x_i = w[p_i..j]$. Using Lemma~\ref{lem:corr}(\ref{it:pp})
we can compute $r_0=\lcp(x_0, \rho^\infty)$ and $r'=\lcp(s,\rho^\infty)$.
Note that $r_i := \lcp(x_i, \rho^\infty)=r_0-i|\rho|$; in particular, $r_0\ge k|\rho|$.
If $r'\ge \ell$, we distinguish two cases:
\begin{enumerate}[1)]\compact
\item $r_i \ge \ell$. Then $\lcp(x_i, s)\ge \ell$; thus $x_i =_\ell s$.
\item $r_i < \ell$. Then $\lcp(x_i,s)=r_i$; thus $x_i\prec_\ell s$ if $x_0 \prec \rho^\infty$, and $x_i\succ_\ell s$ otherwise.
\end{enumerate}
On the other hand, if $r'< \ell$, we distinguish three cases:
\begin{enumerate}[1)]\compact
\item $r_i > r'$. Then $\lcp(x_i,s)=r'$; thus $x_i \prec_\ell s$ if $\rho^\infty \prec s$, and $x_i\succ_\ell s$ otherwise.
\item $r_i = r'$. Then we use Lemma~\ref{lem:corr}(\ref{it:pp}) to determine the order between $x_i$ and $s$ trimmed to $\ell$ characters. This, however, may happen only for a single value of $i$.
\item $r_i < r'$. Then $\lcp(x_i,s)=r_i$; thus $x_i \prec_\ell s$ if $x_0 \prec \rho^\infty$, and $x_i\succ_\ell s$ otherwise.
\end{enumerate}
Consequently, in constant time we can partition indices $i$ into at most three ranges,
and for each range determine whether $x_i \prec_\ell s$, $x_i=_\ell s$, or $x_i\succ_\ell s$
for all indices $i$ in the range. We ignore from further computations those ranges for which we already know that $x_i \notin I$,
and for the remaining ones repeat the procedure above with $t$ instead of $s$.
We end up with $\mathcal{O}(1)$ ranges of positions $i$ for which $x_i\in I$. However, note that
as the string sequence $(x_i)_{i=0}^k$ is always monotone (decreasing if $x_0 \preceq \rho^\infty$, increasing otherwise),
these ranges (if any) can be merged into a single range, so in the output we end up with a single (possibly empty) periodic progression.
\end{proof}
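The semantics of this interval-selection query can be spelled out naively as follows (an illustrative helper of our own, using 0-based Python indices and a closed interval; the lemma's version runs in $\mathcal{O}(1)$ time via the case analysis above instead of scanning):

```python
def select_in_interval(w, positions, j, s, t, ell):
    """Among the positions p of a periodic progression, keep those with
    w[p..j] in the string interval [s, t]_ell (0-based, j inclusive).
    The result is always a contiguous run of the input progression."""
    def trim(z):
        return z[:ell]
    return [p for p in positions if trim(s) <= trim(w[p:j + 1]) <= trim(t)]
```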
\begin{figure}[ht]
\input{fig_wst2}
\caption{An auxiliary tree $T$ introduced to define the wavelet suffix tree of $w=\texttt{ababbabababb}$.
Levels are written inside nodes. Gray nodes are dissolved during the construction of the wavelet suffix tree.}
\label{fig:wst2}
\end{figure}
\subsubsection{Definition of wavelet suffix trees}
\label{ssec:WSTdefinition}
Let $w$ be a string of length $n$. To define the wavelet suffix tree of $w$, we start from an auxiliary tree $T$ of height $\mathcal{O}(\log n)$ with $\mathcal{O}( n \log n)$ nodes.
Its leaves represent non-empty suffixes of $w\$$, and the left-to-right order of leaves corresponds to the lexicographic order on the suffixes.
Internal nodes of $T$ represent all substrings of $w$ whose length is a power of two, with the exception of the root, which represents
the empty word. Edges in $T$ are defined so that a node representing $v$ is an ancestor of a node representing $v'$
if and only if $v$ is a prefix of $v'$. To each non-root node $u$ we assign a \emph{level} $\ell(u) := 2|v|$,
where $v$ is the substring that $u$ represents. For the root $r$, we set $\ell(r):= 1$.
See Figure~\ref{fig:wst2} for a sample tree $T$ with levels assigned to nodes.
For a node $u$, we define $\S(u)$ to be the set of suffixes of $w\$$ that are represented by descendants of $u$.
Note that $\S(u)$ is a singleton if $u$ is a leaf.
The following observation characterizes the levels and sets $\S(u)$.
\begin{figure}
\input{fig_wst3}
\caption{The wavelet suffix tree of $w=\texttt{ababbabababb}$ (see also Figures~\ref{fig:wst} and~\ref{fig:wst2}).
Levels are written inside nodes. Gray nodes have been introduced as inner nodes of replacement trees.
The corresponding suffix is written down below each leaf. Selected edges $e$ are labelled with the intervals $I(e)$.
}
\label{fig:wst3}
\end{figure}
\begin{observation}\label{obs:ell}
For any node $u$ other than the root:
\begin{enumerate}[(1)]\compact
\item\label{it:mon} $\ell(\mathrm{parent}(u))\le \ell(u)$,
\item\label{it:here} if $y\in \S(u)$ and $y'$ is a suffix of $w\$$ such that $\lcp(y,y')\ge \ell(\mathrm{parent}(u))$, then $y'\in \S(u)$,
\item\label{it:sim} if $y,y'\in \S(u)$, then $\lcp(y,y')\ge \floor{\frac{1}{2}\ell(u)}$.
\end{enumerate}
\end{observation}
\pagebreak
Next, we modify $T$ to obtain a binary tree of $\mathcal{O}(n)$ nodes.
In order to reduce the number of nodes, we dissolve all nodes with exactly one child, i.e., while there is a non-root node $u$ with exactly one child $u'$,
we set $\mathrm{parent}(u'):=\mathrm{parent}(u)$ and remove $u$.
To make the tree binary, for each node $u$ with $k>2$ children, we remove the edges
between $u$ and its children, and introduce a \emph{replacement tree}, a full binary tree
with $k$ leaves whose root is $u$, and leaves are the $k$ children of $u$
(preserving the left-to-right order).
We choose the replacement trees so that the resulting tree still has depth $\mathcal{O}(\log n)$.
In Section~\ref{ssec:ttools} we provide a constructive proof that such a choice is possible.
This procedure introduces new nodes (inner nodes of the replacement trees); their levels are inherited
from the parents.
The obtained tree is the wavelet suffix tree of $w$; see Figure~\ref{fig:wst3}
for an example.
Observe that, as claimed in Section~\ref{ssec:over}, it is a full binary tree of logarithmic height,
whose leaves correspond to non-empty suffixes of $w\$$.
Moreover, it is not hard to see that this tree still satisfies Observation~\ref{obs:ell}.
As described in Section~\ref{ssec:over}, each node $u$ (except for the leaves)
stores two bitmasks. In either bitmask each bit corresponds to a suffix $y\in \S(u)$, and it is equal to 0 if $y\in \S(\mathrm{lchild}(u))$
and to 1 if $y\in \S(\mathrm{rchild}(u))$, where $\mathrm{lchild}(u)$ and $\mathrm{rchild}(u)$ denote the children of $u$.
In the first bitmask the suffixes $y=w[j..]\$$ are ordered by the starting position $j$,
and in the second bitmask~--- by pairs $(w[j-1], j)$ (assuming $w[0]=\$$).
Both bitmasks are equipped with rank/select data structures.
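For readers unfamiliar with rank/select bitmasks, their interface can be sketched as follows (a toy version with linear-time queries; succinct implementations answer both in $\mathcal{O}(1)$ time using $o(n)$ extra bits):

```python
class Bitmask:
    """Toy rank/select bitmask over a list of 0/1 bits."""
    def __init__(self, bits):
        self.bits = list(bits)

    def rank(self, b, i):
        """Number of occurrences of bit b among bits[0..i-1]."""
        return sum(1 for x in self.bits[:i] if x == b)

    def select(self, b, k):
        """Index of the k-th (1-based) occurrence of bit b, or -1."""
        cnt = 0
        for i, x in enumerate(self.bits):
            if x == b:
                cnt += 1
                if cnt == k:
                    return i
        return -1
```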
Additionally, each node and each edge of the wavelet suffix tree are associated
with a string interval whose endpoints are suffixes of $w\$$.
Namely, for an arbitrary node $u$ we define $I(u)=[\min \S(u), \max\S(u)]_{\ell(u)}$.
Additionally, if $u$ is not a leaf, we set
$I(u, \mathrm{lchild}(u)) = [\min \S(u), y]_{\ell(u)}$
and
$I(u, \mathrm{rchild}(u)) = (y, \max\S(u)]_{\ell(u)}$,
where $y=\max\S(\mathrm{lchild}(u))$ is the suffix corresponding to
the rightmost leaf in the left subtree of $u$; see also Figure~\ref{fig:wst3}.
For each node $u$ we store the starting positions of $\min\S(u)$ and $\max\S(u)$
in order to efficiently retrieve a representation of $I(u)$ and $I(e)$ for adjacent edges $e$.
The following lemma characterizes the intervals.
\begin{lemma}\label{lem:intervals}
For any node $u$ we have:
\begin{enumerate}[(1)]\compact
\item\label{it:disj} If $u$ is not a leaf, then $I(u)$ is a disjoint union of $I(u, \mathrm{lchild} (u))$ and $I(u, \mathrm{rchild}(u))$.
\item\label{it:iff} If $y$ is a suffix of $w\$$, then $y\in I(u)$ if and only if $y\in \S(u)$.
\item\label{it:sub} If $u$ is not the root, then $I(u)\subseteq I(\mathrm{parent}(u),u)$.
\end{enumerate}
\end{lemma}
\begin{proof}
\noindent (\ref{it:disj}) is a trivial consequence of the definitions.
\smallskip
\noindent (\ref{it:iff}) Clearly $y\in \S(u)$ iff $y\in [\min \S(u),\max \S(u)]$.
Therefore, it suffices to show that if $\lcp(y, y')\ge \ell(u)$ for $y'=\min\S(u)$ or $y'=\max\S(u)$, then $y\in \S(u)$.
This is, however, a consequence of points (\ref{it:mon}) and (\ref{it:here}) of Observation~\ref{obs:ell}.
\smallskip
\noindent (\ref{it:sub}) Let $\ell_p=\ell(\mathrm{parent}(u))$.
If $u=\mathrm{lchild}(\mathrm{parent}(u))$, then $\S(u)\subseteq \S(\mathrm{parent}(u))$ and, by Observation~\ref{obs:ell}(\ref{it:mon}), $\ell(u)\le \ell_p$, which implies the statement.
Therefore, assume that $u=\mathrm{rchild}(\mathrm{parent}(u))$,
and let $u'$ be the left sibling of $u$. Note that $I(\mathrm{parent}(u), u)=(\max \S(u'), \max\S(u)]_{\ell_p}$
and $I(u)\subseteq [\min \S(u),\max \S(u)]_{\ell_p}$, since $\ell(u)\le \ell_p$.
Consequently, it suffices to prove that $\max \S(u') \prec_{\ell_p} \min \S(u)$.
This is, however, a consequence of Observation~\ref{obs:ell}(\ref{it:here}) for $y=\min \S(u)$ and $y'=\max\S(u')$,
and the fact that the left-to-right order of leaves coincides with the lexicographic order of the corresponding
suffixes of $w\$$.
\end{proof}
For each edge $e=(\mathrm{parent}(u),u)$ of the wavelet suffix tree,
we define $L(e)$ to be the sorted list of those substrings of $w$ which belong to $I(e) \setminus I(u)$.
Recall that the wavelet suffix tree shall enjoy the \emph{lexicographic property}:
if we traverse the tree, and when going \emph{down} an edge $e$ we write out the contents of $L(e)$, whereas when visiting a leaf we output the corresponding suffix of $w\$$, we shall obtain a lexicographically sorted list of all substrings of $w\$$.
This is proved in the following series of lemmas.
\begin{lemma}\label{lem:prelex}
Let $e=(\mathrm{parent}(u),u)$ for a node $u$.
Substrings in $L(e)$ are smaller than any string in~$I(u)$.
\end{lemma}
\begin{proof}
We use a shorthand $\ell_p$ for $\ell(\mathrm{parent}(u))$.
Let $y=\max \S(u)$ be the rightmost suffix in the subtree of~$u$. Consider a substring $s = w[k..j] \in L(e)$,
and let $t=w[k..]\$$.
We first prove that $s\preceq y$. Note that $I(e)=[x,y]_{\ell_p}$ or $I(e)=(x,y]_{\ell_p}$ for some string $x$.
We have $s\in L(e)\subseteq I(e)$, and thus $s\preceq_{\ell_p} y$. If $\lcp(s,y)<\ell_p$, this already implies that $s\preceq y$.
Thus, let us assume that $\lcp(s,y)\ge \ell_p$. The suffix $t$ has $s$ as a prefix, so this also means that $\lcp(t,y)\ge \ell_p$.
By Observation~\ref{obs:ell}(\ref{it:here}), $t\in \S(u)$, so $t\preceq y$.
Thus $s\preceq t \preceq y$, as claimed.
To prove that $s\preceq y$ implies that $s$ is smaller than any string in $I(u)$,
it suffices to note that $y\in \S(u)\subseteq I(u)$, $s\notin I(u)$, and $I(u)$ is an interval.
\end{proof}
\begin{lemma}\label{lem:lex}
The wavelet suffix tree satisfies the lexicographic property.
\end{lemma}
\begin{proof}
Note that for the root $r$ we have $I(r)=[\$, c]_1$ where $c$ is the largest character present in $w$.
Thus, $I(r)$ contains all substrings of $w\$$ and it suffices to show that if we traverse the subtree of $r$,
writing out the contents of $L(e)$ when going down an edge $e$, and the corresponding suffix when visiting a leaf,
we obtain a sorted list of substrings of $w\$$ contained in $I(r)$. In fact, we prove a stronger claim: this property holds for every node $u$ of the tree.
If $u$ is a leaf this is clear, since $I(u)$ consists of the corresponding suffix of $w\$$ only.
Next, if we have already proved the hypothesis for $u$, then prepending the output with the contents of $L(\mathrm{parent}(u),u)$,
by Lemmas~\ref{lem:prelex} and~\ref{lem:intervals}(\ref{it:sub}), we obtain a sorted list of substrings of $w\$$ contained in $I(\mathrm{parent}(u),u)$.
Applying this property for both children of a non-leaf $u'$, we conclude that if the hypothesis
holds for children of $u'$ then, by Lemma~\ref{lem:intervals}(\ref{it:disj}), it also holds for $u'$.
\end{proof}
\begin{corollary}\label{cor:cons_pref}
Each list $L(e)$ contains consecutive prefixes of the largest element of $L(e)$.
\end{corollary}
\begin{proof}
Note that if $x\prec y$ are substrings of $w$ such that $x$ is not a prefix of $y$,
then $x$ can be extended to a suffix $x'$ of $w\$$ such that $x\prec x' \prec y$.
However, $L(e)$ does not contain any suffix of~$w\$$.
By Lemma~\ref{lem:lex}, $L(e)$ contains a consecutive collection of substrings of $w\$$,
so $x$ and $y$ cannot be both present in $L(e)$.
Consequently, each element of $L(e)$ is a prefix of $\max L(e)$.
Similarly, since $L(e)$ contains a consecutive collection of substrings of $w\$$,
it must contain all prefixes of $\max L(e)$ no shorter than $\min L(e)$.
\end{proof}
\subsubsection{Implementation of auxiliary queries}\label{ssec:ops}
Recall that $L_x(e)$ is the sublist of $L(e)$ containing suffixes of $x$.
The wavelet suffix tree shall allow the following four types of queries in constant time:
\begin{enumerate}[(1)]\compact
\item\label{q:lxe} For a substring $x$ and an edge $e$, output the list $L_x (e)$ represented as $\mathcal{O}(1)$ non-overlapping periodic progressions;
\item\label{q:cnt} Count the number of suffixes of $x = w[i..j]$ in the left/right subtree of a node (given along with the segment of its first bitmask corresponding to suffixes that start inside $[i, j]$);
\item\label{q:ccnt} Count the number of suffixes $x=w[i..j]$ that are preceded by a character~$c$ and lie in the left/right subtree of a node (given along with the segment of its second bitmask corresponding to suffixes that start inside $[i, j]$ and are preceded by $c$);
\item\label{q:rle} For a substring $x$ and an edge $e$, compute the run-length encoding of the sequence of characters preceding suffixes in $L_x(e)$.
\end{enumerate}
We start with an auxiliary lemma applied in the solutions to all four queries.
\begin{lemma}\label{lem:aux}
Let $e=(u,u')$ be an edge of a wavelet suffix tree of $w$, with $u'$ being a child of $u$.
The following operations can be implemented in constant time.
\begin{enumerate}[(1)]\compact
\item\label{it:subs} Given a substring $x$ of $w$, $|x|< \ell(u)$, return, as a single periodic progression of starting positions, all suffixes $s$ of $x$ such that $s\in I(e)$.
\item\label{it:range} Given a range of positions $[i,j]$, $j-i\le \ell(u)$, return all positions $k\in [i,j]$ such that $w[k..]\$ \in I(e)$,
represented as at most two non-overlapping periodic progressions.
\end{enumerate}
\end{lemma}
\begin{proof}
Let $p$ be the longest common prefix of all strings in $I(u)$; by Observation~\ref{obs:ell}(\ref{it:sim}), we have $|p|\ge \lfloor\frac{1}{2}\ell(u)\rfloor$.
\noindent
(\ref{it:subs}) Assume $x=w[i..j]$. We apply Theorem~\ref{th:occurrences} to find all occurrences of $p$ within $x$,
represented as a single periodic progression since $|x|+1< 2(|p|+1)$. Then, using Lemma~\ref{lem:IntervalSelection}, we filter positions $k$
for which $w[k..j]\in I(e)$.
\noindent
(\ref{it:range}) Let $x=w[i..j+|p|-1]$ ($x=w[i..]\$$ if $j+|p|-1>|w|$). We apply Theorem~\ref{th:occurrences} to find all occurrences of $p$ within~$x$,
represented as at most two periodic progressions since $|x|+1\le \ell(u)+|p|+1\le 2\lfloor\frac12\ell(u)\rfloor+|p|+2<3(|p|+1)$.
Like previously, using Lemma~\ref{lem:IntervalSelection} we filter positions $k$
for which $w[k..]\$\in I(e)$.
\end{proof}
\begin{lemma}\label{lem:sxe}
The wavelet suffix tree allows us to answer queries (\ref{q:lxe}) in constant time. More precisely, for an edge $e=(\mathrm{parent}(u), u)$, the starting positions of suffixes in $L_x(e)$ form at most three non-overlapping periodic progressions, which can be reported in $\mathcal{O}(1)$ time.
\end{lemma}
\begin{proof}
First, we consider short suffixes.
We use Lemma~\ref{lem:aux}(\ref{it:subs}) to find all suffixes $s$ of~$x$, $|s|<\ell(\mathrm{parent}(u))$,
such that $s\in I(\mathrm{parent}(u),u)$. Then, we apply Lemma~\ref{lem:IntervalSelection} to filter
out all suffixes belonging to $I(u)$. By Lemma~\ref{lem:prelex}, we obtain
at most one periodic progression.
Now, it suffices to generate suffixes $s$, $|s| \ge \ell(\mathrm{parent}(u))$, that belong to $L(e)$. Suppose
$s=w[k..j]$. If $s\in I(e)$, then equivalently $w[k..]\$\in I(e)$,
since $s$ is a long enough prefix of $w[k..]\$$ to determine
whether the latter belongs to $I(e)$.
Consequently, by Lemma~\ref{lem:intervals}, $w[k..]\$\in I(u)$.
This implies $|s|<\ell(u)$ (otherwise we would have $s\in I(u)$), i.e., $k\in [j-\ell(u)+2..j-\ell(\mathrm{parent}(u))+1]$.
We apply Lemma~\ref{lem:aux}(\ref{it:range}) to compute all positions $k$ in this range for which $w[k..]\$\in I(e)$.
Then, using Lemma~\ref{lem:IntervalSelection}, we filter out positions $k$ such that $w[k..j]\in I(u)$.
By Lemma~\ref{lem:prelex}, this cannot increase the number of periodic progressions,
so we end up with three non-overlapping periodic progressions in total.
\end{proof}
\begin{lemma}\label{lem:i}
The wavelet suffix tree allows us to answer queries (\ref{q:cnt}) in constant time.
\end{lemma}
\begin{proof}
Let $u$ be the given node and $u'$ be its right/left child (depending on the variant of the query).
First, we use Lemma~\ref{lem:aux}(\ref{it:subs}) to find all suffixes $s$ of~$x$, $|s|<\ell(u)$,
such that $s\in I(u,u')$, i.e., $s$ lies in the appropriate subtree of $u$.
Thus, it remains to count suffixes of length at least $\ell(u)$.
Suppose $s=w[k..j]$ is a suffix of $x$ such that $|s|\ge \ell(u)$ and $s\in I(u,u')$.
Then $w[k..]\$\in I(u,u')$, and the number of suffixes $w[k..]\$\in I(u,u')$ such that $k \in [i, j]$
is simply the number of 1's or 0's in the given segment of the first bitmask in~$u$,
which we can compute in constant time.
Observe, however, that we have also counted positions $k$ such that $|w[k..j]|<\ell(u)$,
and we need to subtract the number of these positions.
For this, we use Lemma~\ref{lem:aux}(\ref{it:range}) to compute
the positions $k\in [j-\ell(u)+2, j]$ such that $w[k..]\$\in I(u,u')$.
We count the total size of the obtained periodic progressions,
and subtract it from the final result, as described.
\end{proof}
\begin{lemma}
The wavelet suffix tree allows us to answer queries (\ref{q:ccnt}) and (\ref{q:rle}) in $\mathcal{O}(1)$ time.
\end{lemma}
\begin{proof}
Observe that for any periodic progression $p_0<\ldots<p_k$ we have $w[p_1-1]=\ldots=w[p_k-1]$.
Thus, it is straightforward to determine which positions of such a progression are preceded by $c$.
Answering queries (\ref{q:ccnt}) is analogous to answering queries (\ref{q:cnt}), we just use the second bitmask at the given node and consider only positions preceded by $c$ while counting the sizes of periodic progressions.
To answer queries (\ref{q:rle}), it suffices to use Lemma~\ref{lem:sxe} to obtain $L_x(e)$.
By Corollary~\ref{cor:cons_pref}, suffixes in $L_x(e)$ are prefixes of one another,
so the lexicographic order on these suffixes coincides with the order of ascending lengths.
Consequently, the run-length encoding of the piece corresponding to $L_x(e)$ has at most
six phrases and can be easily found in $\mathcal{O}(1)$ time.
\end{proof}
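The run-length encoding returned by queries~(\ref{q:rle}) is the standard one; for concreteness, a naive sketch (the query itself produces its $\mathcal{O}(1)$ phrases directly from the periodic-progression representation rather than by scanning):

```python
def run_length_encode(seq):
    """Run-length encoding as (character, run length) pairs,
    e.g. 'aaabba' -> [('a', 3), ('b', 2), ('a', 1)]."""
    out = []
    for c in seq:
        if out and out[-1][0] == c:
            out[-1] = (c, out[-1][1] + 1)
        else:
            out.append((c, 1))
    return out
```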
\subsection{Construction of wavelet suffix trees}\label{ssec:constr}
The actual construction algorithm is presented in Section~\ref{sec:constr}.
Before, in Section~\ref{ssec:ttools}, we introduce several
auxiliary tools for abstract weighted trees.
\subsubsection{Toolbox for weighted trees}\label{ssec:ttools}
Let $T$ be a rooted ordered tree with positive integer weights on edges, $n$ leaves and no inner nodes of degree one.
We say that $L_1,\ldots,L_{n-1}$ is an \emph{LCA sequence} of $T$,
if $L_i$ is the (weighted) depth of the lowest common ancestor
of the $i$-th and $(i+1)$-th leaves.
The following fact is usually applied to construct the suffix tree of a string from the suffix array and the LCP table~\cite{AlgorithmsOnStrings}.
\begin{fact}\label{fct:lca}
Given a sequence $(L_i)_{i=1}^{n-1}$ of non-negative integers,
one can construct in $\mathcal{O}(n)$ time a tree whose LCA sequence is $(L_i)_{i=1}^{n-1}$.
\end{fact}
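The construction behind Fact~\ref{fct:lca} is the standard stack-based left-to-right scan, the same one used to build a suffix tree from the suffix array and LCP table. A sketch (our own representation: nodes are plain dicts, and leaves carry depth \texttt{None}, playing the role of infinite depth during the scan):

```python
def tree_from_lca_sequence(L):
    """Build, in O(n) time, a rooted ordered tree whose LCA sequence is L.
    The stack holds the rightmost root-to-leaf path; consecutive stack
    entries are parent and child."""
    node = lambda d: {"depth": d, "children": []}
    root = node(0)
    stack = [root]

    def push_leaf():
        leaf = node(None)  # leaves act as "infinitely deep" while scanning
        stack[-1]["children"].append(leaf)
        stack.append(leaf)

    push_leaf()
    for d in L:
        last = None
        while stack[-1]["depth"] is None or stack[-1]["depth"] > d:
            last = stack.pop()
        if stack[-1]["depth"] < d:
            u = node(d)
            stack[-1]["children"][-1] = u  # u replaces `last` as a child
            u["children"].append(last)
            stack.append(u)
        push_leaf()
    return root

def lca_sequence(root):
    """Recover the LCA sequence: the boundary between consecutive children
    of a node is a consecutive-leaf pair whose LCA is that node."""
    seq = []
    def dfs(u):
        for idx, c in enumerate(u["children"]):
            if idx > 0:
                seq.append(u["depth"])
            dfs(c)
    dfs(root)
    return seq
```

Here depths are arbitrary non-negative integers, which covers the weighted case directly.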
The LCA sequence suffices to detect if a tree is binary.
\begin{observation}\label{obs:lca}
A tree is a binary tree if and only if its LCA sequence $(L_i)_{i=1}^{n-1}$ satisfies the following property
for every $i<j$: if $L_i = L_j$ then there exists $k$, $i<k<j$, such that $L_k<L_i$.
\end{observation}
Trees constructed by the following lemma can be
seen as a variant of weight-balanced trees, whose existence for arbitrary weights was proved by Blum and Mehlhorn \cite{DBLP:journals/tcs/BlumM80}.
\begin{lemma}\label{lem:wb}
Given a sequence $w_1,\ldots,w_n$ of positive integers,
one can construct in $\mathcal{O}(n)$ time a binary tree $T$ with $n$ leaves,
such that the depth of the $i$-th leaf is $\mathcal{O}(1+\log \frac{W}{w_i})$, where $W =\sum_{j=1}^{n} w_j$.
\end{lemma}
\begin{proof}
For $i=0,\ldots,n$ define $W_i = \sum_{j=1}^i w_j$.
Let $p_i$ be the position of the most significant bit where the binary representations of $W_{i-1}$ and $W_i$ differ,
and let $P=\max_{i=1}^n p_i$.
Observe that $P=\Theta(\log W)$ and $p_i = \Omega(\log w_i)$.
Using Fact~\ref{fct:lca}, we construct a tree $T$ whose LCA sequence is $L_i=P-p_i$. Note
that this sequence satisfies the condition of Observation~\ref{obs:lca}, and thus the tree is binary.
Next, we insert an extra leaf between the two children of each inner node, making the tree ternary.
The $i$-th of these leaves is inserted at (weighted) depth $1+L_i = \mathcal{O}(1+\log W-\log w_i)$,
which is also an upper bound for its unweighted depth. Next, we remove the original leaves.
This way we get a tree satisfying the lemma, except for the fact that inner nodes may have between one and three children,
rather than exactly two.
In order to resolve this issue, we remove (dissolve) all inner nodes with exactly one child, and for each node $u$ with three children $v_1, v_2, v_3$,
we introduce a new node $u'$, setting $v_1, v_2$ as the children of $u'$ and $u', v_3$ as the children of $u$.
This way we get a full binary tree, and the depth of any node may increase at most twice,
i.e., for the $i$-th leaf it stays~$\mathcal{O}(1+\log\frac{W}{w_i})$.
\end{proof}
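The key quantity in the proof is $p_i$, the most significant bit position where $W_{i-1}$ and $W_i$ differ; since $W_i - W_{i-1} = w_i$, these prefix sums must differ in some bit of index at least $\lfloor \log_2 w_i \rfloor$, which drives the depth bound. A small sketch (helper names are ours):

```python
def msb_diff(a, b):
    """Position of the most significant bit where a and b differ."""
    return (a ^ b).bit_length() - 1

def lca_sequence_for_weights(weights):
    """Compute the sequence L_i = P - p_i used in the proof of the
    weight-balanced tree lemma, together with the values p_i."""
    prefix = [0]
    for w in weights:
        prefix.append(prefix[-1] + w)
    p = [msb_diff(prefix[i - 1], prefix[i]) for i in range(1, len(prefix))]
    P = max(p)
    return [P - pi for pi in p], p
```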
Let $T$ be an ordered rooted tree and let $u$ be a node of $T$, which is neither the root nor a leaf.
Also, let $v$ be the parent of $u$. We say that $T'$ is obtained from $T$ by \emph{contracting} the edge $(v,u)$,
if $u$ is removed and the children of $u$ replace $u$ at its original location in the list of children of $v$.
If $T'$ is obtained from $T$ by a sequence of edge contractions, we say that $T'$ is a \emph{contraction} of $T$.
Note that contraction does not alter the pre-order and post-order of the preserved nodes, which implies
that the ancestor-descendant relation also remains unchanged for these nodes.
\begin{corollary}
\label{cor:wb}
Let $T$ be an ordered rooted tree of height $h$, which has $n$ leaves and no inner node with exactly one child.
Then, in $\mathcal{O}(n)$ time one can construct a full binary ordered rooted tree $T'$ of height $\mathcal{O}(h+\log n)$ such that
$T$ is a contraction of $T'$ and
$T'$ has $\mathcal{O}(n)$ nodes.
\end{corollary}
\begin{proof}
For any node $u$ of $T$ with three or more children, we replace the star-shaped tree joining it with its children $v_1,\ldots,v_k$
by an appropriate replacement tree. Let $W(u)$ be the number of leaves in the subtree of $u$, and let $W(v_i)$ be the number of leaves in the subtrees below $v_i$, $1\le i \le k$. We use Lemma~\ref{lem:wb} for $w_i = W(v_i)$ to construct the replacement tree. Consequently, a node $u$ with depth $d$ in $T$ has depth $\mathcal{O}(d+\log\frac{n}{W(u)})$ in $T'$ (easy top-down induction). The resulting tree has height $\mathcal{O}(h+\log n)$, as claimed.
\end{proof}
\subsubsection{The algorithm}\label{sec:constr}
In this section we show how to construct the wavelet suffix tree of a string $w$ of length $n$ in $\mathcal{O}(n\sqrt{\log n})$ time.
The algorithm is deterministic, but the data structure of Theorem~\ref{th:occurrences},
required by the wavelet suffix tree, has a randomized construction only, running in $\mathcal{O}(n)$ expected time.
The construction algorithm has two phases: first, it builds the \emph{shape}
of the wavelet suffix tree, following a description in Section~\ref{ssec:WSTdefinition},
and then it uses the results of Section~\ref{sec:wt} to obtain the bitmasks.
We start by constructing the suffix array and the LCP table for $w\$$ (see~\cite{AlgorithmsOnStrings}).
Under the assumption that the alphabet size satisfies $\sigma < n$, this takes linear time.
Recall that in the definition of the wavelet suffix tree we started with a tree of size $\mathcal{O}(n\log n)$.
For an $o(n\log n)$-time construction we cannot afford that. Thus, we construct the tree $T$ already without inner nodes
having exactly one child.
Observe that this tree is closely related to the suffix tree of $w\$$. The only difference
is that if the longest common prefix of two consecutive suffixes is~$d$, their root-to-leaf paths diverge at depth $\floor{\log d}$ instead of $d$.
To overcome this difficulty, we use Fact~\ref{fct:lca} for $L_i = \floor{\log LCP[i]}$, rather than $LCP[i]$ which we
would use for the suffix tree. This way an inner node $u$ at depth
$j$ represents a substring of length $2^j$. The level $\ell(u)$ of an inner node $u$ is set to $2^{j+1}$, and if $u$ is a leaf representing a suffix $s$ of $w\$$, we have $\ell(u)=2|s|$.
After this operation, the tree $T$ may have inner nodes of large degree,
so we use Corollary~\ref{cor:wb} to obtain a binary tree such that $T$ is its contraction.
We set this binary tree as the shape of the wavelet suffix tree.
Since $T$ has height $\mathcal{O}(\log n)$, so does the wavelet suffix tree.
To construct the bitmasks, we apply Theorem~\ref{thm:aswt} for $T$ with the leaf representing
$w[i..]\$$ assigned to $i$.
For the first bitmask, we simply set $s[i]=i$.
For the second bitmask,
we sort all positions $i$ with respect to $(w[i-1],i)$ and take the resulting sequence as $s$.
This way, we complete the proof of the main theorem concerning wavelet suffix trees.
\begin{theorem}\label{th:wst}
A wavelet suffix tree of a string $w$ of length $n$ occupies $\mathcal{O}(n)$ space and can be constructed in $\mathcal{O}(n \sqrt{\log{n}})$ expected time.
\end{theorem}
\subsection{Applications}\label{ssec:apps}
\subsubsection{Substring suffix rank/select}
In the substring suffix rank problem, we are asked to find the rank of a substring $y$ among the suffixes of another substring $x$.
The substring suffix selection problem, in contrast,
is to find the $k$-th lexicographically smallest suffix of $x$ given an integer $k$ and a substring $x$ of $w$.
\begin{theorem}
\label{thm:ssrank}
The wavelet suffix tree can solve the substring suffix rank problem in $\mathcal{O}(\log n)$ time.
\end{theorem}
\begin{proof}
Using binary search on the leaves of the wavelet suffix tree of $w$, we locate the minimal suffix $t$ of $w\$$ such that $t\succ y$.
Let $\pi$ denote the path from the root to the leaf corresponding to $t$.
Due to the lexicographic property, the rank of $y$ among the suffixes of $x$ is equal to the sum of two numbers.
The first one is the number of suffixes of $x$ in the left subtrees hanging from the path $\pi$,
whereas the second summand is the number of suffixes not exceeding $y$ in the lists $L_x(e)$ for $e\in \pi$.
To compute those two numbers, we traverse $\pi$ maintaining a segment $[\ell,r]$ of the first bitmask corresponding
to the suffixes of $w\$$ starting within $x$.
When we descend to the left child, we set $[\ell, r] := [\mathrm{rank}_0(\ell), \mathrm{rank}_0(r)]$, while for the right child, we set $[\ell, r] := [\mathrm{rank}_1(\ell), \mathrm{rank}_1(r)]$.
In the latter case, we pass $[\ell,r]$ to type (\ref{q:cnt}) queries, which let us count the suffixes of
$x$ in the left subtree hanging from $\pi$ in the current node.
This way, we compute the first summand.
For the second number, we use type (\ref{q:lxe}) queries to generate all lists $L_x(e)$ for $e\in \pi$.
Note that if we concatenated these lists $L_x(e)$ in the root-to-leaf order of edges,
we would obtain a sorted list of strings.
Thus, while processing the lists in this order (ignoring the empty ones), we add up the sizes of $L_x(e)$ until $\max L_x(e) \succ y$.
For the first encountered list $L_x(e)$ satisfying this property, we binary search within $L_x(e)$
to determine the number of elements not exceeding $y$, and also add this value to the final result.
The described procedure requires $\mathcal{O}(\log n)$ time,
since type (\ref{q:lxe}) and (\ref{q:cnt}) queries, as well as substring comparison queries (Lemma~\ref{lem:all}), run in $\mathcal{O}(1)$ time.
\end{proof}
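As a sanity check on the semantics, a naive reference implementation of substring suffix rank (here the rank of $y$ is taken as the number of suffixes of $x$ not exceeding $y$, matching the proof above; this sketch takes quadratic time, versus $\mathcal{O}(\log n)$ with the wavelet suffix tree):

```python
def substring_suffix_rank(x, y):
    """Number of suffixes of x that are lexicographically at most y."""
    return sum(1 for i in range(len(x)) if x[i:] <= y)
```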
\begin{theorem}
\label{thm:ssselect}
The wavelet suffix tree can solve the substring suffix selection problem in $\mathcal{O}(\log n)$ time.
\end{theorem}
\begin{proof}
The algorithm traverses a path in the wavelet suffix tree of $w$.
It maintains a segment $[\ell,r]$ of the first bitmask corresponding to suffixes of $w$ starting within $x=w[i..j]$,
and a variable $k'$ counting the suffixes of $x$ represented in the left subtrees hanging from the path
and in the lists $L_x(e)$ on the edges of the path.
The algorithm starts at the root initializing $[\ell, r]$ with $[i,j]$ and $k'=0$.
At each node $u$, it first decides to which child of $u$ to proceed.
For this, it performs a type (\ref{q:cnt}) query to determine $k''$, the number of suffixes of $x$
in the left subtree of $u$. If $k'+k''\ge k$, it chooses to go to the left child, otherwise to the right one;
in the latter case it also updates $k':= k'+k''$.
The algorithm additionally updates the segment $[\ell,r]$ using the rank queries on the bitmask.
Let $u'$ be the child of $u$ that the algorithm has chosen to proceed to.
Before reaching $u'$, the algorithm performs a type (\ref{q:lxe}) query to compute $L_x(u,u')$.
If $k'$ summed with the size of this list is at least $k$, then the algorithm
terminates, returning the $(k-k')$-th element of the list (which is easy to retrieve from the representation
as a periodic progression).
Otherwise, it sets $k':= k'+|L_x(u,u')|$, so that $k'$ satisfies the definition for the extended path from the root to $u'$.
The correctness of the algorithm follows from the lexicographic property,
which implies that at the beginning of each step, the sought suffix of $x$ is the $(k-k')$-th
smallest suffix in the subtree of~$u$.
In particular, the procedure always terminates before reaching a leaf.
The running time of the algorithm is $\mathcal{O}(\log n)$ due to $\mathcal{O}(1)$-time implementations
of type (\ref{q:lxe}) and (\ref{q:cnt}) queries.
\end{proof}
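The same left/right descent with the counter $k'$ can be illustrated on a plain wavelet tree answering range-quantile queries. This is a self-contained Python sketch of the bookkeeping only (names and the simplified tree are ours, not the paper's structure):

```python
# Simplified sketch: k-th smallest element in a range of a sequence via
# wavelet-tree descent, mirroring the counter k' of the selection
# algorithm above.

def build(seq, lo, hi):
    """Node = (lo, hi, pref0, left, right); pref0[i] = rank0(i)."""
    if lo == hi:
        return (lo, hi, None, None, None)
    mid = (lo + hi) // 2
    pref0, lseq, rseq = [0], [], []
    for v in seq:
        (lseq if v <= mid else rseq).append(v)
        pref0.append(pref0[-1] + (v <= mid))
    return (lo, hi, pref0, build(lseq, lo, mid), build(rseq, mid + 1, hi))

def kth_smallest(node, l, r, k):
    """k-th smallest value (1-based) among positions [l, r)."""
    while node[3] is not None:          # not a leaf
        lo, hi, pref0, left, right = node
        r0l, r0r = pref0[l], pref0[r]
        in_left = r0r - r0l             # k'': elements in the left subtree
        if k <= in_left:
            node, l, r = left, r0l, r0r
        else:
            k -= in_left                # k' := k' + k''
            node, l, r = right, l - r0l, r - r0r
    return node[0]

seq = [3, 7, 1, 4, 1, 5, 9, 2, 6]
tree = build(seq, min(seq), max(seq))
print(kth_smallest(tree, 2, 7, 3))  # sorted([1, 4, 1, 5, 9])[2] == 4
```

In the wavelet suffix tree the answer can additionally be found inside a list $L_x(u,u')$ mid-edge; the sketch omits this case.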
We now show that the query time for the two considered problems is almost optimal. We start by recalling the lower bounds of P\u{a}tra\c{s}cu and of J{\o}rgensen and Larsen.
\begin{theorem}[\cite{DBLP:journals/siamcomp/Patrascu11,DBLP:conf/stoc/Patrascu07}]
In the cell-probe model with $W$-bit cells, a static data structure
of size $c \cdot n$ must take $\Omega(\frac{\log n}{\log c + \log W})$ time for orthogonal range counting queries.
\end{theorem}
\begin{theorem}[\cite{DBLP:conf/soda/JorgensenL11}]
In the cell-probe model with $W$-bit cells, a static data structure
of size $c \cdot n$ must take $\Omega(\frac{\log n}{\log c + \log W})$ time for orthogonal range selection queries.
\end{theorem}
Both of these results allow the coordinates of points to be in the \emph{rank space}, i.e.,
for each $i\in \{1,\ldots,n\}$ there is one point $(i,A[i])$,
and values $A[i]$ are distinct integers in $\{1,\ldots,n\}$.
This lets us construct a string $w=A[1]\ldots A[n]$ for any given point set $P$. Since $w$ has pairwise distinct characters, comparing suffixes of any substring of $w$ is equivalent to
comparing their first characters. Consequently, the substring suffix selection in $w$ is equivalent to the orthogonal range selection in~$P$, and the substring suffix rank in $w$ is equivalent to the orthogonal range counting in $P$ (we need to subtract
the results of two suffix rank queries to answer an orthogonal range counting query). Thus, we obtain
\begin{corollary}\label{cor:lb}
In the cell-probe model with $W$-bit cells, a static data structure
of size $c \cdot n$ must take $\Omega(\frac{\log n}{\log c + \log W})$ time for the substring suffix rank and the substring suffix select queries.
\end{corollary}
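The reduction underlying the corollary can be checked concretely. In this brute-force Python sketch (names ours), an array with pairwise distinct values plays the role of the string $w$, and suffix comparisons degenerate to first-character comparisons:

```python
# Brute-force sketch of the reduction: for an array A with pairwise
# distinct values (rank space), suffixes of any substring of
# w = A[0] A[1] ... compare exactly like their first characters, so
# substring suffix rank answers orthogonal range counting on {(x, A[x])}.

A = [3, 1, 4, 5, 2]

def range_count_lt(i, j, y):
    """Points with x-coordinate in [i, j] and value A[x] < y."""
    return sum(A[x] < y for x in range(i, j + 1))

def suffix_rank_lt(i, j, y):
    """Suffixes of A[i..j] lexicographically smaller than the
    one-character string (y,), compared as integer tuples."""
    return sum(tuple(A[x:j + 1]) < (y,) for x in range(i, j + 1))

for i in range(len(A)):
    for j in range(i, len(A)):
        for y in range(7):
            assert range_count_lt(i, j, y) == suffix_rank_lt(i, j, y)
print("range counting agrees with substring suffix rank")
```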
\subsubsection{Run-length encoding of the BWT of a substring}
\label{sec:BWTruns}
\renewcommand{\b}{\mathrm{b}}
Wavelet suffix trees can also be used to compute the run-length encoding of the Burrows-Wheeler transform of a substring. We start by recalling the definitions.
The \emph{Burrows-Wheeler transform}~\cite{BWT} (BWT) of a string $x$ is a string $b_0 b_1 \ldots b_{|x|}$, where $b_k$ is the character preceding the $k$-th lexicographically smallest suffix of $x\$$. The BWT tends to contain long segments of equal characters,
called \emph{runs}. This, combined with \emph{run-length encoding}, allows strings to be compressed efficiently.
The \emph{run-length encoding} of a string is obtained by replacing each maximal run by a pair: the character that forms the run and the length of the run.
For example, the BWT of a string $banana$ is $annb\$aa$, and the run-length encoding of $annb\$aa$ is $a1n2b1\$1a2$.
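The definitions can be checked with a direct (quadratic-time) computation; the following Python sketch reproduces the example above:

```python
# Naive computation of the BWT and its run-length encoding, matching the
# definition: b_k is the character preceding the k-th lexicographically
# smallest suffix of x$ (with b_0 corresponding to the suffix $ itself).

def bwt(x):
    s = x + "$"
    suffixes = sorted(range(len(s)), key=lambda i: s[i:])
    return "".join(s[i - 1] for i in suffixes)  # s[-1] = "$" precedes x$

def rle(s):
    out, k = [], 0
    while k < len(s):
        run = k
        while run < len(s) and s[run] == s[k]:
            run += 1
        out.append(s[k] + str(run - k))
        k = run
    return "".join(out)

print(bwt("banana"))       # annb$aa
print(rle(bwt("banana")))  # a1n2b1$1a2
```

Note that "$" sorts before all lowercase letters in ASCII, matching the convention that $\$$ is the smallest character.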
Below, we show how to compute the run-length encoding of the BWT of a substring $x = w[i..j]$ using the wavelet suffix tree of $w$.
Let $x=w[i..j]$ and for $k\in\{1,\ldots,|x|\}$ let $s_k = w[i_k..j]$ be the suffixes of $x$ sorted in the lexicographic order. Then the BWT of $x$ is equal to $b_0 b_1 \ldots b_{|x|}$, where $b_0 = w[j]$, and for $k\ge 1$, $b_{k} = w[i_k-1]$, unless $i_k = i$ when $b_{k} = \$$.
Our algorithm initially generates a string $\b(x)$ which contains $w[i-1]$ instead of $\$$. However, we know that $\$$ should occur at the position equal to the rank of $x$ among all the suffixes of $x$. Consequently, a single substring suffix rank query suffices to find the position which needs to be corrected.
Remember that the wavelet suffix tree satisfies the lexicographic property. Consequently, if we traverse the tree and write out the characters preceding the suffixes in the lists $L_x(e)$, we obtain $\b(x)$ (without the first symbol $b_0$).
Our algorithm simulates such a traversal.
Assume that the last character appended to $\b(x)$ is $c$, and the algorithm is to move down an edge $e=(u,u')$.
Before deciding to do so, it checks whether all the suffixes of $x$ in the appropriate (left or right) subtree of $u$ are preceded with~$c$.
For this, it performs type (\ref{q:cnt}) and (\ref{q:ccnt}) queries, and if both results are equal to some value $q$,
it simply appends $c^q$ to $\b(x)$ and decides not to proceed to $u'$.
In order to make these queries possible, for each node on the path from the root to $u$,
the algorithm maintains segments corresponding to $[i,j]$ in the first bitmasks, and to $(c, [i,j])$ in the second bitmasks.
These segments are computed using rank queries on the bitmasks while moving down the tree.
Before the algorithm continues at $u'$, if it decides to do so, suffixes in $L_x(e)$ need to be handled.
We perform a type (\ref{q:rle}) query to compute the characters preceding these suffixes, and append
the result to $\b(x)$.
This, however, may result in $c$ no longer being the last symbol appended to $\b(x)$.
If so, the algorithm updates the segments of the second bitmask for all nodes on the path from the root to $u'$.
We assume that the root stores all positions $i$ sorted by $(w[i-1],i)$, which
lets us use a binary search to find either endpoint of the segment for the root.
For the subsequent nodes on the path, the rank structures on the second bitmasks are applied.
Overall, this update takes $\mathcal{O}(\log n)$ time and it is necessary at most once per run.
Now, let us estimate the number of edges visited. Observe that if we go down an edge, then the last character of $\b(x)$ changes before we go up this edge. Thus, all the edges traversed down between such character changes form a path.
The length of any path is $\mathcal{O}(\log n)$, and consequently the total number of visited edges is $\mathcal{O}(s \log n)$,
where $s$ is the number of runs.
\begin{theorem}
The wavelet suffix tree can compute the run-length encoding of the BWT of a substring $x$ in
$\mathcal{O}(s \log n)$ time, where $s$ is the size of the encoding.
\end{theorem}
\subsubsection{Speeding up queries}\label{sec:speedup}
Finally, we note that by building wavelet suffix trees for several
substrings of $w$, we can make the query time adaptive to the length of the query substring $x$,
i.e., replace $\mathcal{O}(\log n)$ by $\mathcal{O}(\log |x|)$.
\begin{theorem}
\label{thm:speedup}
Using a data structure of size $\mathcal{O}(n)$, which can be constructed in $\mathcal{O}(n \sqrt{\log n})$ expected time, substring suffix rank and selection problems can be solved in $\mathcal{O}(\log |x|)$ time. The run-length encoding $b(x)$ of the BWT of a substring $x$ can be found in
$\mathcal{O}(|b(x)|\log |x|)$ time.
\end{theorem}
\begin{proof}
We build wavelet suffix trees for some substrings of length $n_k=\lfloor{n^{2^{-k}}}\rfloor$,
$0\le k \le \log\log n$. For each length $n_k$ we choose every $\lfloor \frac{1}{2}n_k\rfloor$-th substring, starting from the prefix
and, additionally, we choose the suffix.
Auxiliary data structures of Lemma~\ref{lem:all} and Theorem~\ref{th:occurrences} are built for $w$ only.
We have $n_{k}=\lfloor\sqrt{n_{k-1}}\rfloor$, so $n_{k-1}\le (n_{k}+1)^2$ and thus any substring $x$ of $w$ lies within a substring $v$, $|v|\le 2(|x|+1)^2$, for which we store the wavelet suffix tree.
For each $m$, $1\le m \le n$, we store an $n_k$ such that $2m\le n_k \le 2(m+1)^2$.
This reduces finding an appropriate substring $v$ to simple arithmetic.
Using the wavelet suffix tree for $v$ instead of the tree for the whole string $w$ gives the announced query times.
The only thing we must be careful about is that the input for the substring suffix rank problem also consists of a string $y$, which need not be a substring of $v$.
However, looking at the query algorithm, it is easy to see that $y$ is only used through the data structure of Lemma~\ref{lem:all}.
It remains to analyze the space usage and construction time.
Observe that the wavelet suffix tree of a substring $v$ is simply a binary tree with two bitmasks at each node
and with some pointers to the positions of the string $w$. In particular, it does not contain any characters of $w$ and,
if all pointers are stored as relative values, it can be stored using $\mathcal{O}(|v|\log |v|)$ bits, i.e., $\mathcal{O}(|v|\frac{\log |v|}{\log n})$ words. For each $n_k$ the total length of selected substrings is $\mathcal{O}(n)$, and thus the space usage is
$\mathcal{O}(n\frac{\log n_k}{\log n})=\mathcal{O}(n 2^{-k})$, which sums up to $\mathcal{O}(n)$ over all lengths $n_k$.
The construction time is $\mathcal{O}(|v|\sqrt{\log |v|})$ for any substring (including alphabet renumbering),
and this sums up to $\mathcal{O}(n \sqrt{2^{-k}\log n})$ for each length, and $\mathcal{O}(n\sqrt{\log n})$ in total.
\end{proof}
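The level scheme from the proof can be illustrated numerically. In this Python sketch the constant $4(m+1)^2$ is ours and slightly looser than the bound $2(|x|+1)^2$ used in the proof; it only demonstrates that the covering length is at most quadratic in $m$:

```python
# Numeric sketch of the level scheme: lengths n_k = floor(n^(2^-k)), and
# a check that every substring length m is served by a stored length n_k
# with 2m <= n_k and n_k = O(m^2).

def iroot(n, r):
    """floor(n**(1/r)), computed exactly to avoid float rounding."""
    x = int(round(n ** (1.0 / r)))
    while x ** r > n:
        x -= 1
    while (x + 1) ** r <= n:
        x += 1
    return x

n = 10**6
lengths, k = [], 0
while True:
    nk = iroot(n, 2 ** k)   # n_k = floor(n^(2^-k))
    lengths.append(nk)
    if nk <= 2:
        break
    k += 1
print(lengths)  # [1000000, 1000, 31, 5, 2]

for m in list(range(1, 2000)) + [500000]:
    nk = min(x for x in lengths if x >= 2 * m)
    assert nk <= 4 * (m + 1) ** 2   # illustrative quadratic bound
```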
\subsection*{Acknowledgement}
The authors gratefully thank Djamal Belazzougui, Travis Gagie, and Simon J. Puglisi who pointed out that wavelet suffix trees can be useful for BWT-related applications.
\bibliographystyle{plain}
\section{Introduction}
The challenge of error-free quantum
computation~\cite{Kitaev2002,Childs2002,Preskill1998} has resulted in a surge of
interest in many physical systems and mathematical models that were previously
considered very exotic. While it is clearly very difficult (if not impossible) to
satisfy the conditions of a low decoherence rate and scalability in simple
physical systems\cite{Ioffe2004}, both can in principle be satisfied if
elementary bits are represented by anyons, particles that undergo
non-trivial transformations when moved adiabatically around each other
(braided)~\cite{Kitaev1997,Mochon2003,Mochon2004}. One of the most famous
examples of such excitations is provided by the fractional Quantum Hall
Effect~\cite{Halperin84,Arovas84}. The difficult part is, of course, to
identify a realistic physical system that has such excitations and allows
their manipulations. This problem should be separated into different layers.
The bottom layer is the physical system itself, the second is the theoretical
model that identifies the low energy processes, the third is the mathematical
model that starts with the most relevant low energy degrees of freedom and
gives the properties of anyons while the fourth deals with construction of the
set of rules on how to move the anyons in order to achieve a set of universal
quantum gates (further lies the layer of algorithms and so on).
One of the most interesting sets of problems of the third layer is provided by
the Chern-Simons theories with discrete groups on the lattice: on one hand in
these theories one expects to have a non-local interaction between fluxes that
gives them the anyonic properties, on the other hand they might describe
physics of some solid state arrays. In particular, we have shown very recently
\cite{Doucot2005} that $\mathbb{Z}_{2}$ Chern-Simons theory can be realistically
implemented in a Josephson junction array on a square lattice. Furthermore,
the ground state of these physical systems is doubly degenerate but locally
indistinguishable, so that even a small-sized lattice provides very good
protection of this degeneracy against the external noise. Such protection is
expected to become ideal for the larger lattice sizes if the gap to
excitations is finite. The lack of the exact solution did not allow us to make
definite conclusions on the properties of this model for larger lattice sizes.
However, the analytical solution in limiting cases combined with the extensive
numerics \cite{Dorier2005} indicates that the gap for the low energy
excitations closes in the thermodynamic limit. Practically, this would imply
that the protection that can be achieved in such arrays does not grow with the
array size beyond a certain limit. In order to get a better understanding of
the properties of such models we consider here a similar ($\mathbb{Z}_{n}$
Chern-Simons) model on a triangular lattice. It turns out that for $n=2$ the
model can be solved exactly by a mapping to Majorana fermions; we find the
gapless spectrum of the photons which acquires a gap if additional
('magnetic') terms are included in the Hamiltonian. We hope that the long-wavelength
properties of this model are analogous to the properties of the square lattice
model and thus the problem of the low energy excitations of the latter can be
remedied by the 'magnetic' field term in the Hamiltonian that plays the role
of a tunable chemical potential for fluxes. Because it is difficult to avoid
boundaries in physical implementations, it is important to understand what
is the effect of edges on the Chern-Simons theory. In particular, from the
viewpoint of topological protection, it is important to understand
whether they lead to gapless excitations such as edge states in Quantum Hall Effect.
Generally, while the lattice gauge theories without Chern-Simons term are well
studied and understood, much less is known about their Chern-Simons
counterparts. The reason for this is that Chern-Simons term implies coupling
of the charge and flux; on a lattice the charge resides on the sites while the
flux is associated with the plaquettes. Furthermore, one usually wants to
preserve the symmetry of the lattice. To satisfy these criteria one has to
couple charge with the total flux of a few adjacent plaquettes which leads to
novel features absent in the continuous theories. We discuss the details of
this construction in Section II. In particular, for a continuous Abelian gauge
group the Chern-Simons term remains quadratic but becomes non-local in space,
i.e. its Fourier transform acquires momentum dependence; moreover it is zero
for some values of the momenta in the Brillouin zone. This leads to the
appearance of the gapless modes in these theories in the absence of magnetic
energy. Our analysis presented in Section III of the exactly solvable
Chern-Simons theory with the $\mathbb{Z}_{2}$ group shows that this qualitative picture
holds for the discrete group as well. Finally, in Section IV we study the
boundary effects and find that they are also similar for continuous and
discrete groups. Namely, for continuous groups, the general arguments show
\cite{Witten89} the appearance of a non-gauge invariant matter field on the
boundary in Chern-Simons theories; similarly, the exact solution of the $\mathbb{Z}_{2}$
model shows that the same phenomenon occurs in discrete models as well.
\section{Model with $\mathbb{Z}_{n}$ symmetry}
Let us begin by discussing the construction of the Chern-Simons model with
$\mathbb{Z}_{n}$ symmetry on the triangular lattice. The main elements of this
construction are the expressions for the electric field operators,
$\mathcal{E}_{kl}^{\pm}$ and the local field translation operators
$\mathcal{T}_{kl}^{\pm}$.\cite{Doucot2005} Both should preserve the symmetry
of the lattice; furthermore, the expression for the electric field operators,
$\mathcal{E}_{kl}^{\pm}$ \ should be gauge invariant while those for
$\mathcal{T}_{kl}^{\pm}$ should preserve the electric fields operators. By
analogy with our previous discussion for the square lattice~\cite{Doucot2005},
we write:
\begin{align*}
\mathcal{E}_{kl}^{\pm} & =\exp\left( \mp i\frac{2\pi}{n}\left( \Pi
_{kl}-\sum_{(mn)}\nu(kl;mn)A_{mn}\right) \right) \\
\mathcal{T}_{kl}^{\pm} & =\exp\left( \mp i\frac{2\pi}{n}\left( \Pi
_{kl}+\sum_{(mn)}\nu(kl;mn)A_{mn}\right) \right)
\end{align*}
Here $A_{kl}$ and $\Pi_{mn}$ are (discrete) gauge potential and canonical
conjugate momentum on lattice link that satisfy usual canonical commutation
relations:
\[
\lbrack A_{kl},\Pi_{mn}]=i\delta_{(kl),(mn)}%
\]
where $(kl)$ and $(mn)$ denote oriented links on the triangular lattice.
Reversing the orientation of a link changes the sign of the corresponding
variables $A$ and $\Pi$. In the case of a $\mathbb{Z}_{n}$ symmetry, the local
vector potentials $A_{kl}$ are constrained to be integer multiples of $2\pi
/n$. The Chern-Simons coefficient $\nu(kl;mn)$ is zero unless $(mn)$ is one of
the four neighbor links of $(kl)$ as illustrated on Fig.~\ref{link}. In this
case, $\nu(kl;mn)=\nu$, if the link $(mn)$ is oriented from right to left for
an observer standing on link (kl) and looking towards site $l$. For the
converse relative orientation, $\nu(kl;mn)=-\nu$.
\begin{figure}[h]
\includegraphics[width=2.0in]{ElectricField}\caption{(Color online) The
orientation convention used in the construction of a Chern-Simons model on a
triangular lattice. The thick (black)\ line represents a $\Pi_{kl}$ operator,
conjugated to the vector potential attached to the link. The arrow is required
since \mbox{$\Pi_{kl}=-\Pi_{lk}$}. The gray (red) lines keep track of signs of
$\nu(kl;lm)$ coefficients which appear in the phase-factors imposed by the
Chern-Simons term in the definition of local electric operators $\mathcal{E}%
_{kl}^{\pm}$ and field translation operators $\mathcal{T}_{kl}^{\pm}$. Gray
(red) lines are oriented from site $m$ to site $n$ whenever the coefficient
$\nu(kl;mn)$ is positive.}%
\label{link}%
\end{figure}
The main consequence of these definitions is that the commutation relations of
electric field operators on nearby links are modified by phase factors. For
$(kl)$ and $(mn)$ oriented as on Fig.~\ref{link}, we have:
\begin{equation}
\mathcal{E}_{kl}^{+}\mathcal{E}_{mn}^{+}=\exp\left( i(\frac{2\pi}{n})^{2}%
2\nu\right) \mathcal{E}_{mn}^{+}\mathcal{E}_{kl}^{+}%
\end{equation}
Similarly, the local field translation operators satisfy:
\begin{equation}
\mathcal{T}_{kl}^{+}\mathcal{T}_{mn}^{+}=\exp\left( -i(\frac{2\pi}{n}%
)^{2}2\nu\right) \mathcal{T}_{mn}^{+}\mathcal{T}_{kl}^{+}%
\end{equation}
We shall focus here on the case where these phase factors are equal to $-1$.
This requires $\nu$ to be of the form:
\begin{equation}
\nu=\pi(\frac{n}{2\pi})^{2}(m+\frac{1}{2}) \label{constnu}%
\end{equation}
where $m$ is an integer.
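For completeness, this condition follows directly from requiring the commutation phase $(2\pi/n)^{2}\,2\nu$ to be an odd multiple of $\pi$ (a short check using only the relations above):

```latex
\left(\frac{2\pi}{n}\right)^{2} 2\nu = \pi(2m+1)
\quad\Longrightarrow\quad
\nu = \frac{\pi(2m+1)}{2}\left(\frac{n}{2\pi}\right)^{2}
    = \pi\left(\frac{n}{2\pi}\right)^{2}\left(m+\tfrac{1}{2}\right),
```

which is Eq.~(\ref{constnu}).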
\begin{figure}[h]
\includegraphics[width=2.0in]{GaugeTransformation}\caption{(Color online)
Construction of the local gauge transformation generator based at the central
site, see Eq.~(\ref{defgauge}). Thick full links correspond to electric field
operators $\mathcal{E}_{kl}^{+}$, and grey (red) oriented links to phase
factors, $\exp(i\frac{8\pi\nu}{n}A_{mn})$, whose total contribution is equal
to $\exp(-i\frac{8\pi\nu}{n}\Phi_{k})$, where $\Phi_{k}$ is the total flux
(counted counterclockwise) through the hexagon.}%
\label{site}%
\end{figure}
To construct the $\mathbb{Z}_{n}$ model, we start from the system where local vector
potentials can be arbitrary (and in particular unbounded) integer multiples of
$2\pi/n$, and keep then only the subspace of periodic states $|\Psi\rangle$
which satisfy:
\begin{equation}
(\mathcal{T}_{kl}^{+})^{n}|\Psi\rangle=|\Psi\rangle
\end{equation}
In general, this condition is not compatible with the possibility to perform
arbitrary local gauge transformations, because the $\mathcal{T}_{kl}^{+}$ no
longer commute with generators of local gauge transformations $U_{k}$ defined
as:
\begin{equation}
U_{k}=\prod_{l}^{(k)}\mathcal{T}_{kl}^{+}=\prod_{l}^{(k)}\mathcal{E}_{kl}%
^{+}\exp(-i\frac{2\pi}{n}4\nu\Phi_{k}) \label{defgauge}%
\end{equation}
where $\Phi_{k}=\sum_{hex}A_{mn}$ is the total flux through the loop composed
by the six first neighbors of site $k$ and oriented counterclockwise, as
illustrated on Fig.~\ref{site}. This is an integer multiple of $2\pi/n$. Note
that some care is required in choosing the order of the six operators involved
in the above products. A convenient choice is to lump together each link
$(kl)$ with its opposite $(kl^{\prime})$. This convention is sufficient to
specify the total phase factor because $\mathcal{T}_{kl}^{+}$ operators on all
but adjacent links commute (so that $[\mathcal{T}_{kl}^{+},\mathcal{T}%
_{kl^{\prime}}^{+}]=0$) while different pairs $\mathcal{T}_{kl}^{+}%
\mathcal{T}_{kl^{\prime}}^{+}$ commute because of the cancellation of the
phase factors for any value of $\nu$. So, this choice yields three pairwise
products which commute among themselves. With the notations of Fig.~\ref{site}%
, we have:
\begin{equation}
U_{k}\mathcal{T}_{mn}=\exp\left( -i(\frac{2\pi}{n})^{2}4\nu\right)
\mathcal{T}_{mn}U_{k}%
\end{equation}
Therefore, the $\mathbb{Z}_{n}$ periodicity conditions are compatible with local
gauge invariance provided the quantity $(2\pi/n)^{2}4\nu n$ is an integer
multiple of $2\pi$, which is clearly the case for $\nu$ chosen according to
Eq.~(\ref{constnu}). Another important consequence of this choice is the very
simple connection between generators of gauge transformations and local
electric fields:
\begin{equation}
U_{k}=\prod_{l}^{(k)}\mathcal{E}_{kl}^{+} \label{gaugelec}%
\end{equation}
A last consequence is that for any \emph{even} value of $n$:
\begin{equation}
(\mathcal{E}_{kl}^{+})^{n}=(\mathcal{T}_{kl}^{+})^{n} \label{elecn}%
\end{equation}
Once this set of basic operators has been defined, the natural gauge-invariant
Hamiltonian is constructed as a sum of two terms: the first one involves local
electric field operators on individual links and the second one the local
magnetic fluxes around plaquettes. Specifically:
\begin{equation}
H=-\frac{1}{2}\sum_{(k,l)}(\mathcal{E}_{kl}^{+}+\mathcal{E}_{kl}^{-}%
)+\sum_{(j,k,l)}f(\Phi(j,k,l)) \label{H}%
\end{equation}
The first sum is over links $(k,l)$ and the second one over elementary
triangular plaquettes $(j,k,l)$ which are oriented counterclockwise. At this
stage, $f$ may be any function and the flux
\mbox{$\Phi(j,k,l)=A_{ij}+A_{jk}+A_{kl}$}. Note that we are not dealing here
with the pure Chern-Simons theory, but rather with a discrete analogue of a
Maxwell-Chern Simons theory. Indeed, the former has only a small Hilbert space
which dimension is independent of the system size, and a vanishing
Hamiltonian. In the pure Chern-Simons theory, only fluxless configurations are
allowed, unless the ambiant space has a non-trivial topology. The Hilbert
space of the Maxwell Chern-Simons theory is much larger, and is much more
likely to correspond to the low energy sector of a real physical system. The
ground-state sector of this model is then expected to be described by a pure
Chern-Simons model.
\section{The $n=2$ case}
From now on, we assume $n=2$. We shall map the subspace of gauge invariant and
$2\pi$ periodic states (the \textquotedblleft physical
subspace\textquotedblright) on a system of Majorana fermions attached to the
plaquettes of the original triangular lattice. The periodicity condition
implies, thanks to Eq.~(\ref{elecn}), that physical states $|\Psi\rangle$
satisfy:
\begin{equation}
(\mathcal{E}_{kl}^{+})^{2}|\Psi\rangle=|\Psi\rangle
\end{equation}
We may then set:
\begin{equation}
\mathcal{E}_{kl}^{+}=\mathcal{E}_{kl}^{-}\equiv\mathcal{E}_{kl}%
\end{equation}
on this physical subspace.
\begin{figure}[h]
\includegraphics[width=3.0in]{FluxCreation}\caption{(Color online) Strings
of electric field operators (thick lines), $S_{\mathbf{r}}$ and
$S_{\mathbf{r^{\prime}}}$ that create or remove $\mathbb{Z}_{2}$ fluxes in the
plaquettes indicated by the shaded triangles. For two arbitrary fluxes the
corresponding strings have a common part (full lines) and individual parts
(gray lines). Two individual parts form a contour that connects two fluxes.
Each of these three substrings anticommutes with the others, which results in
the anticommutation of the flux creation operators (see text).}%
\label{string}%
\end{figure}
As for the square lattice, it is fruitful to introduce string-like operators
$S_{\mathbf{r}}$ which create or remove a $\mathbb{Z}_{2}$ flux on a plaquette
labelled by a site $\mathbf{r}$ on the hexagonal dual lattice. This operator
involves the product of $\mathcal{E}_{kl}$ on a contour that starts from a
specific bond of the lattice, as shown on Fig. \ref{string}. For the gauge
invariant states all such contours are equivalent, so this contour can be
chosen arbitrarily. For instance, one can choose the contours that first go up
and then left toward the plaquette $\mathbf{r}$. Using the fact that electric operators
on nearby links anti-commute, we have:
\begin{equation}
S_{\mathbf{r}}^{2}=(-1)^{l(\mathbf{r})-1} \label{S^2}%
\end{equation}
where $l(\mathbf{r})$ is the length of the contour $\mathbf{r}$, namely it is
the total number of electric operators involved in the construction of
$S_{\mathbf{r}}$. Because a gauge transformation changes the length of the
contour by an even number, the right-hand side of Eq.~(\ref{S^2}) depends only
on the parity, $[\mathbf{r}]\equiv(l(r)-1)\operatorname{mod}2$, of the
plaquette $\mathbf{r}$ and not on the contour leading to it. Note that
$S_{\mathbf{r}}$ is not always a hermitian operator, since
\begin{equation}
S_{\mathbf{r}}^{\dagger}=(-1)^{[\mathbf{r}]}S_{\mathbf{r}}%
\end{equation}
The commutation rules obeyed by these string operators are the following:
\begin{equation}
\{S_{\mathbf{r}},S_{\mathbf{r^{\prime}}}\}=2\delta(\mathbf{r}%
,\mathbf{r^{\prime}})(-1)^{[\mathbf{r}]} \label{S_rS_r'}%
\end{equation}
This crucial property which exhibits a clear fermionic behavior can be proved
in two ways. First, one can deform the contours so that they overlap between
point $0$ and $\mathbf{r}$. Then $S_{\mathbf{r^{\prime}}}=S_{\mathbf{r^{\prime
},r}}S_{\mathbf{r}}$ where $S_{\mathbf{r^{\prime},r}}$ is the contour
beginning at $\mathbf{r}$ and ending at $\mathbf{r}^{\prime}\mathrm{.}$ This
contour $S_{\mathbf{r^{\prime},r}}$ contains exactly one electric field
operator namely its first one, which anticommutes with the last electric field
operator entering in $S_{\mathbf{r}}$, so operators $S_{\mathbf{r^{\prime},r}%
}$ and $S_{\mathbf{r}}$ anticommute which proves (\ref{S_rS_r'}).
Alternatively, one can write the original $S$ operators as products
$S_{\mathbf{r}}=\widetilde{S}_{\mathbf{r}}S_{0}$ and $S_{\mathbf{r}^{\prime}%
}=\widetilde{S}_{\mathbf{r}^{\prime}}S_{0}$ \ where $S_{0}$ is the product of
electric field operators on the common part of the contour leading to
plaquettes $\mathbf{r}$ and $\mathbf{r}^{\prime}$, see Fig.~\ref{string}. The
operators $\widetilde{S}_{\mathbf{r}}$ and $S_{0}$ have exactly one pair of
nearest neighboring electric fields, so they anticommute: $\{\widetilde
{S}_{\mathbf{r}^{\prime}},S_{0}\}=\{\widetilde{S}_{\mathbf{r}},S_{0}\}=0$.
Thus, $S_{\mathbf{r}}S_{\mathbf{r^{\prime}}}=-S_{0}^{2}\widetilde
{S}_{\mathbf{r}}\widetilde{S}_{\mathbf{r}^{\prime}}$ and $S_{\mathbf{r}%
^{\prime}}S_{\mathbf{r}}=-S_{0}^{2}\widetilde{S}_{\mathbf{r}^{\prime}%
}\widetilde{S}_{\mathbf{r}}$, further, noticing that the operators
$\widetilde{S}_{\mathbf{r}}\widetilde{S}_{\mathbf{r}^{\prime}}$ contain
exactly one pair of anticommuting electric fields, we get (\ref{S_rS_r'}).
A very important property of this model restricted to its physical subspace
defined above is that local electric operators on link $(kl)$ are simply
related to bilinear expressions of the form $S_{\mathbf{r}}%
S_{\mathbf{r^{\prime}}}$ where the plaquettes $\mathbf{r}$ and
$\mathbf{r^{\prime}}$ are located on both sides of the link $(kl)$. If
$\mathbf{r}$ and $\mathbf{r^{\prime}}$ are nearest neighbor plaquettes, we
have:
\begin{equation}
S_{\mathbf{r}}=\mathcal{E}_{kl}S_{\mathbf{r^{\prime}}} \label{horizontalhop}%
\end{equation}
which is a direct consequence of the definition of string operators. Thus,
\begin{equation}
\mathcal{E}_{kl}=(-1)^{[\mathbf{r}^{\prime}]}S_{\mathbf{r}}%
S_{\mathbf{r^{\prime}}} \label{E_kl_1}%
\end{equation}
The last stage is to introduce Majorana fermions $\chi_{\mathbf{r}}$ related
to string operators by:
\begin{equation}
S_{\mathbf{r}}=i^{[\mathbf{r}]}\chi_{\mathbf{r}}%
\end{equation}
With this definition, it is easy to check that:
\begin{align}
\chi_{\mathbf{r}}^{\dagger} & =\chi_{\mathbf{r}}\\
\{\chi_{\mathbf{r}},\chi_{\mathbf{r^{\prime}}}\} & = 2 \delta_{\mathbf{r}%
,\mathbf{r^{\prime}}}%
\end{align}
Eq.~(\ref{E_kl_1}) becomes:
\begin{equation}
\mathcal{E}_{kl}=i\chi_{\mathbf{r}}\chi_{\mathbf{r^{\prime}}}
\label{E_kl_Majorana}%
\end{equation}
if site $\mathbf{r}^{\prime}$ is even ($[\mathbf{r}^{\prime}]=0$) and site
$\mathbf{r}$ is therefore odd.
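As a numeric sanity check of this mapping (illustrative only; the two-qubit Jordan-Wigner representation below is our own choice, not taken from the construction above), one can verify that $i\chi_{\mathbf{r}}\chi_{\mathbf{r^{\prime}}}$ is a Hermitian involution, so the mapped electric operators indeed have eigenvalues $\pm1$:

```python
# Represent two Majorana modes on two qubits via Jordan-Wigner:
# chi_1 = X (x) I, chi_2 = Z (x) X. Pure-Python complex matrices.

I2 = [[1, 0], [0, 1]]
X = [[0, 1], [1, 0]]
Z = [[1, 0], [0, -1]]

def kron(a, b):
    return [[a[i][j] * b[k][l] for j in range(2) for l in range(2)]
            for i in range(2) for k in range(2)]

def matmul(a, b):
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def add(a, b):
    return [[x + y for x, y in zip(ra, rb)] for ra, rb in zip(a, b)]

def scal(c, a):
    return [[c * x for x in row] for row in a]

chi1, chi2 = kron(X, I2), kron(Z, X)

# Majorana algebra: {chi_1, chi_2} = 0.
anti = add(matmul(chi1, chi2), matmul(chi2, chi1))
assert all(abs(anti[i][j]) < 1e-12 for i in range(4) for j in range(4))

# E = i chi_1 chi_2 is Hermitian and squares to the identity.
E = scal(1j, matmul(chi1, chi2))
assert all(abs(E[i][j] - E[j][i].conjugate()) < 1e-12
           for i in range(4) for j in range(4))
E2 = matmul(E, E)
assert all(abs(E2[i][j] - (1 if i == j else 0)) < 1e-12
           for i in range(4) for j in range(4))
print("i chi chi' is a Hermitian involution")
```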
The electrical part of the Hamiltonian thus maps into the hopping of Majorana
fermions. The full Hamiltonian also includes the magnetic field part. In the case
of the $\mathbb{Z}_{2}$ model it is given by an operator that takes two values
depending on the flux in a given plaquette, so the general function $f(\Phi)$
from (\ref{H}) reduces to a linear function $f(\Phi)=\mu\Phi$. Writing the
Majorana fermion as a sum of two usual fermion operators
\mbox{$\chi=c+c^{\dagger}$}, we see that the total fermion number $c^{\dagger
}c$ changes by $\pm1$ under the flux creation operator, so in the fermion
language the full Hamiltonian becomes:
\[
H=-i\sum_{(\mathbf{r},\mathbf{s})}(c_{\mathbf{r}}+c_{\mathbf{r}}^{\dagger
})(c_{\mathbf{s}}+c_{\mathbf{s}}^{\dagger})+\mu\sum_{\mathbf{r}}c_{\mathbf{r}%
}^{\dagger}c_{\mathbf{r}}%
\]
where the indices $\mathbf{r},\mathbf{s}$ run over the sites of the dual
(hexagon) lattice and the first sum goes over all nearest neighbours on this
lattice. The spectrum of the excitations is
\begin{equation}
E(k)=\sqrt{\mu^{2}+|t(k)|^{2}}\pm|t(k)| \label{E(k)}%
\end{equation}
where $t(k)$ is the spectrum of fermions on the honeycomb lattice with a purely
nearest neighbour hopping Hamiltonian $H_{t}=\sum_{(\mathbf{r},\mathbf{s}%
)}c_{\mathbf{r}}^{\dagger}c_{\mathbf{s}}$. To compute $t(k)$ we choose
an elementary cell consisting of two sites, for instance the ones that belong to
the same vertical bond. In momentum space the Hamiltonian becomes a matrix
$H=c_{a,k}^{\dagger}K_{ab}c_{b,k}$ with
\[
K=\left(
\begin{array}
[c]{cc}%
0 & 1+e^{i\mathbf{k\xi}}+e^{i\mathbf{k\eta}}\\
1+e^{-i\mathbf{k\xi}}+e^{-i\mathbf{k\eta}} & 0
\end{array}
\right)
\]
where $\mathbf{\xi=(}\frac{\sqrt{3}}{2},\frac{3}{2})$, $\mathbf{\eta=(-}%
\frac{\sqrt{3}}{2},\frac{3}{2})$ are unit vectors of the lattice of vertical
bonds. We get
\[
t(k)=\sqrt{1+4\cos\left( \frac{\sqrt{3}}{2}k_{x}\right) \cos\left( \frac
{3}{2}k_{y}\right) +4\cos^{2}\left( \frac{\sqrt{3}}{2}k_{x}\right) }%
\]
In the absence of the magnetic term the spectrum has a dispersionless mode and a
dispersive one, which vanishes only at the isolated Fermi points $(\pm
4\pi/(3\sqrt{3}),0)$, $(\pm2\pi/(3\sqrt{3}),2\pi/3)$. Near the Fermi point the
spectrum is linear, $E(k)=3|k|$. In the presence of the magnetic term two things
happen: the dispersive mode acquires a gap $2\mu$; near the Fermi point the
spectrum becomes massive, $E(k)=\sqrt{\mu^{2}+(3/2)^{2}|k|^{2}}+(3/2)|k|$, and the
dispersionless mode acquires a $k$-dependence with the gap $\sqrt{\mu^{2}+9}-3$.
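These statements can be checked numerically (an illustrative sketch, not part of the derivation): $t(k)$ vanishes at the quoted Fermi points, and for $\mu>0$ the flat branch $E_-(t)=\sqrt{\mu^2+t^2}-t$ attains its minimum $\sqrt{\mu^2+9}-3$ at the band maximum $|t|=3$.

```python
import math, cmath

XI = (math.sqrt(3) / 2, 1.5)    # lattice vectors of the vertical bonds
ETA = (-math.sqrt(3) / 2, 1.5)

def t(kx, ky):
    """|1 + e^{i k.xi} + e^{i k.eta}| for the honeycomb hopping problem."""
    return abs(1 + cmath.exp(1j * (kx * XI[0] + ky * XI[1]))
                 + cmath.exp(1j * (kx * ETA[0] + ky * ETA[1])))

fermi_points = [(4 * math.pi / (3 * math.sqrt(3)), 0.0),
                (2 * math.pi / (3 * math.sqrt(3)), 2 * math.pi / 3)]
assert all(t(kx, ky) < 1e-12 for kx, ky in fermi_points)

mu = 0.5
E_minus = lambda tt: math.sqrt(mu**2 + tt**2) - tt   # flat branch at mu = 0
vals = [E_minus(3 * x / 1000) for x in range(1001)]
assert min(vals) == vals[-1]        # decreasing in t, minimum at t = 3
print(round(vals[-1], 6), round(math.sqrt(mu**2 + 9) - 3, 6))
```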
The presence of a dispersionless band for $\mu=0$ is directly connected to the
presence of a set of local symmetries in the model. Indeed, if we introduce
the Majorana operators \mbox{$\widetilde{\chi}=i(c-c^{\dagger})$}, they
satisfy:
\begin{align}
\widetilde{\chi}_{\mathbf{r}}^{\dagger} & =\widetilde{\chi}_{\mathbf{r}}\\
\{\widetilde{\chi}_{\mathbf{r}},\widetilde{\chi}_{\mathbf{r^{\prime}}}\} & =
2 \delta_{\mathbf{r},\mathbf{r^{\prime}}}\\
\{\chi_{\mathbf{r}},\widetilde{\chi}_{\mathbf{r}}\} & = 0
\end{align}
As a result, for $\mu=0$, $\widetilde{\chi}_{\mathbf{r}}$ commutes with $H$ for
any $\mathbf{r}$, therefore providing a set of mutually non-commuting local
symmetries, which are destroyed by the presence of the magnetic term $\mu$.
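These relations are easy to verify in a small explicit representation. The Python sketch below (a two-mode Jordan-Wigner construction, assuming $\chi=c+c^{\dagger}$ as in the Hamiltonian) checks the Majorana algebra and the fact that $\widetilde{\chi}_{\mathbf{r}}$ commutes with a hopping term $\chi_{\mathbf{r}}\chi_{\mathbf{s}}$, i.e. with $H$ at $\mu=0$.

```python
import numpy as np

# Two fermion modes via the Jordan-Wigner construction.
I2 = np.eye(2)
a = np.array([[0.0, 1.0], [0.0, 0.0]])  # single-mode annihilation operator
sz = np.diag([1.0, -1.0])               # Jordan-Wigner string

c1 = np.kron(a, I2)
c2 = np.kron(sz, a)

def anti(A, B):
    return A @ B + B @ A

chis, chits = [], []
for c in (c1, c2):
    chi = c + c.conj().T                 # chi = c + c^dagger
    chit = 1j * (c - c.conj().T)         # tilde-chi = i (c - c^dagger)
    assert np.allclose(chit.conj().T, chit)            # Hermitian
    assert np.allclose(anti(chit, chit), 2 * np.eye(4))
    assert np.allclose(anti(chi, chit), np.zeros((4, 4)))
    chis.append(chi)
    chits.append(chit)

# Different sites anticommute: {tilde-chi_r, tilde-chi_r'} = 0 for r != r'.
assert np.allclose(anti(chits[0], chits[1]), np.zeros((4, 4)))

# tilde-chi commutes with the mu = 0 hopping term chi_r chi_s.
hop = chis[0] @ chis[1]
assert np.allclose(chits[0] @ hop - hop @ chits[0], np.zeros((4, 4)))
```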
\section{Edge States}
An interesting feature of the present model is that it exhibits edge states
localized around boundaries. This is known to be a general property of
Chern-Simons theories~\cite{Witten89}, but an advantage of a lattice gauge
theory with a discrete gauge group over a model defined on continuous space is
that it yields a \emph{finite-dimensional} Hilbert space. Therefore, we do not
require any sophisticated analysis to specify boundary conditions. In the
presence of a boundary, the definition of basic operators $\mathcal{E}%
_{kl}^{\pm}$ and $\mathcal{T}_{kl}^{\pm}$ for a link $kl$ along the boundary
is the same as before, with the exception that the associated phase-factor
only involves a smaller set of vector potentials $A_{mn}$ attached to the
neighboring links lying \emph{inside} the system. It is then easy to check
that this does \emph{not} modify the basic commutation relations between these
operators. In particular, we shall still concentrate here on the $n=2$ case
where nearby local field translation operators anticommute. The gauge
generator $U_{k}$ located at site $k$ is still the product of all
$\mathcal{T}_{kl}^{\pm}$ operators such that $kl$ lies inside the system. As
illustrated in Fig.~\ref{EdgeStates}, this implies that $U_{k}$ and $U_{l}$
still commute when at least one of the sites $k$ or $l$ is not on the
boundary, but they now \emph{anticommute} if $k$ and $l$ are nearest-neighbor
sites both located on the boundary. This holds for most possible shapes of the
boundary, with a few exceptions which are depicted in
Fig.~\ref{EdgeStates}. The striking consequence of this is that it is no
longer possible to diagonalize simultaneously all local gauge generators
$U_{k}$ belonging to the same boundary. In particular, gauge singlets no
longer exist, since they are replaced by degenerate multiplets corresponding
to \emph{projective} representations of the gauge group associated with
boundary sites.
\begin{figure}[h]
\includegraphics[width=3.0in]{EdgeStatesPRA}\caption{(Color online)
Commutation relations between gauge generators attached to boundary sites in
the $\mathbb{Z}_2$ Chern-Simons model. We show various possible edge shapes and
indicate the corresponding opening angle expressed in degrees. For each
situation, we display the gauge generators for a pair of nearest-neighbor
sites. These generators involve the electric operators originating from a
given site, and are depicted either as full (red) thick lines or dashed (blue)
thick lines. For all situations appearing on the outer boundary of the
lattice, these gauge generators are found to \emph{anticommute}, which induces
degenerate multiplets for the gauge symmetry. An exception to this rule is the
$-60^{\circ}$ cusp shown on an inner hole, for which the two nearby generators
commute.}%
\label{EdgeStates}%
\end{figure}
Let us now describe these multiplets in some detail. For this, we consider a
finite triangular lattice with an outer boundary, and possibly with some inner
holes, each of them bringing its own boundary. We assume that these holes are
not too close from each other, so that local gauge generators associated to
sites belonging to two different boundaries always commute. This assumption
allows us to treat each boundary separately from the others. To simplify the
discussion, we shall first consider an edge without $-60^{\circ}$ turns that
contains an even number $L$ of sites, which will be labelled by indices $n$
running from 1 to $L$. We suppose that fluxes in each plaquette and through
each inner hole have been fixed, thereby concentrating on the degrees of
freedom attached to gauge transformations. We may then associate to our
boundary a Hilbert space containing $2^{L}$ independent states. In order to
construct irreducible representations of the boundary gauge-group, we have to
find a maximal subset of mutually commuting generators, which are then
diagonalized simultaneously. Such a set may be chosen as follows: first take
the generators $U_{2n}$ on even sites, and then add a global gauge generator
$\prod_{n=1}^{L}U_{n}$ for the boundary. This yields $2^{1+L/2}$ possible sets
of quantum numbers corresponding to:
\begin{align}
U_{2n}|\Psi\rangle & =\tau_{2n}|\Psi\rangle\\
\prod_{n=1}^{L}U_{n}|\Psi\rangle & =\tau|\Psi\rangle
\end{align}
where eigenvalues $\tau_{2n}$ and $\tau$ can be $\pm1$. We may still interpret
$\tau$ as the total $\mathbb{Z}_2$ charge of matter induced on the boundary by the
Chern-Simons term. Starting from a common eigenstate $|\Psi\rangle$ as above,
applying $U_{2n+1}$ produces a new eigenstate in which $\tau_{2n}$ and
$\tau_{2n+2}$ have changed sign simultaneously. Note that we have:
\begin{equation}
\prod_{n=1}^{\frac{L}{2}}U_{2n-1}|\Psi\rangle=\tau\prod_{n=1}^{\frac{L}{2}%
}\tau_{2n}|\Psi\rangle
\end{equation}
so only $(L/2)-1$ generators located on odd sites act independently. We
therefore generate a $2^{(L/2)-1}$ dimensional irreducible multiplet, so the
Hilbert space attached to the boundary splits into $2^{(L/2)+1}$ such
degenerate subspaces.
The above construction provides a basis in each multiplet for which $U_{2n}$
is represented by a diagonal matrix, whereas $U_{2n+1}$'s play the role of
raising or lowering operators. A convenient way to visualize these multiplets
is to describe them in terms of an effective spin 1/2 model, attached to even
boundary sites. This correspondence is given by:
\begin{align}
U_{2n} & =\tau_{2n}^{z},\;\;\;(1\leq n\leq\frac{L}{2})\\
U_{1} & =\tau_{L}^{x}\tau_{2}^{x}\\
U_{2n-1} & =\tau_{2n-2}^{x}\tau_{2n}^{x},\;\;\;(2\leq n\leq\frac{L}{2}-1)\\
U_{L-1} & =\tau\left( \prod_{n=1}^{\frac{L}{2}}\tau_{2n}^{z}\right)
\tau_{L-2}^{x}\tau_{L}^{x}%
\end{align}
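This correspondence can be checked directly with Pauli matrices. The Python sketch below (illustrative, for $L=6$, i.e. three effective spins attached to the even sites $2,4,6$, with $\tau=+1$) verifies that nearest-neighbor generators on the boundary ring anticommute, that all other pairs commute, and the operator identity $U_{1}U_{3}U_{5}=\tau\prod_{n}\tau_{2n}^{z}$ following from the equations above.

```python
import numpy as np
from functools import reduce

X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.diag([1.0, -1.0])
I = np.eye(2)

def op(paulis):
    """Tensor product over 3 qubits; paulis maps qubit index -> 2x2 matrix."""
    return reduce(np.kron, [paulis.get(q, I) for q in range(3)])

tau = 1.0  # fixed Z2 charge of the boundary
U = {1: op({2: X, 0: X}),                             # U_1 = tau_L^x tau_2^x
     2: op({0: Z}),                                   # U_2 = tau_2^z
     3: op({0: X, 1: X}),                             # U_3 = tau_2^x tau_4^x
     4: op({1: Z}),                                   # U_4 = tau_4^z
     5: tau * op({0: Z, 1: Z, 2: Z}) @ op({1: X, 2: X}),  # U_{L-1}
     6: op({2: Z})}                                   # U_6 = tau_6^z

for n in range(1, 7):
    for m in range(n + 1, 7):
        if m - n == 1 or (n, m) == (1, 6):  # nearest neighbours on the ring
            assert np.allclose(U[n] @ U[m] + U[m] @ U[n], 0)
        else:
            assert np.allclose(U[n] @ U[m] - U[m] @ U[n], 0)

# Only (L/2) - 1 odd-site generators are independent:
assert np.allclose(U[1] @ U[3] @ U[5], tau * op({0: Z, 1: Z, 2: Z}))
```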
A given multiplet corresponds to a fixed eigenvalue for $\prod_{n=1}^{\frac
{L}{2}}\tau_{2n}^{z}$, so that the number of independent Ising spins is only
$(L/2)-1$ as discussed above. Note that these conclusions apply to
sufficiently large holes in the lattice. For very small holes consisting of
two triangles glued together to form a rhombus (and of course for a single
triangle itself) all boundary operators commute. The first pair of
anticommuting operators appears in a trapezoidal hole shown in
Fig.~\ref{EdgeStates}. This hole carries an effective spin-$1/2$ degree of freedom
because it is possible to diagonalize all gauge generators except one. Note
that in this special case the product $\prod_{n=1}^{5}U_{n}$
\emph{anticommutes} with the two local generators on upper sites, so the
corresponding charge is no longer a conserved quantum number. In larger holes,
such as a hexagon, all nearest-neighbor gauge generators anticommute. For
instance, the boundary matter of an elementary hexagon can be described as an
effective system composed of \emph{two} spins 1/2.
What is the interaction between these new matter-like degrees of freedom and
the fluxons described in the previous section? Suppose first that these
fluxons are not allowed to jump across boundaries. This corresponds to a
Hamiltonian where the electrical operators attached to boundary links are
removed. The only interaction between the two sets of degrees of freedom
occurs via the usual Aharonov-Casher phase-factor for fluxons going around
inner holes. Each hole generates an effective $\mathbb{Z}_2$ orbital magnetic
field, which produces an adiabatic phase $\phi$ related to the $\mathbb{Z}_2$
charge by $e^{i\phi}=\tau$. This effect does not disturb the internal states
of the boundaries which remain constants of motion. A stronger interaction
occurs when fluxons are allowed to cross boundaries (that is, to disappear
from a boundary plaquette). A simple description of this interaction is
possible if a fluxon moves around a closed loop which crosses a given boundary
twice.
In this process, the final internal state of the boundary is connected to the
initial one by applying the product of all local boundary gauge generators
which are also enclosed in the closed loop. In the example of a hole with the
shape of an elementary hexagon, these scattering processes can be simply
expressed as products of operators taken in the set
\mbox{$\tau^{z}_{2},\tau^{x}_{2},\tau^{z}_{4},\tau^{x}_{4}$}. In connection
with the idea of topological quantum computation, we may wonder whether these
well-defined operations can be used as a physical basis for qubits. The problem
with this idea is that local gauge symmetry operators induce transitions
between the various states of this effective spin system attached to the
boundary. Since in any real implementation, noise acts through basically any
local operator, such states cannot be protected from environment-induced
decoherence. If we now consider the effect of either a large boundary, or many
small inner holes, on the fluxon dynamics, we see that these processes tend to
entangle fluxon states with an effective environment associated to boundary
matter, resulting in the absence of long-ranged phase coherence for the
Majorana fermions describing the fluxons. It is also very unlikely that the
model preserves its integrability, as soon as several fluxons and boundary
matter are simultaneously present.
\section{Conclusions}
In this paper, we have solved exactly a $\mathbb{Z}_2$ gauge theory with a
Chern-Simons term on a triangular lattice by mapping the gauge-invariant
subspace into a free Majorana fermion system. These fermions keep track of the
dynamics of local fluxes. In the absence of an energy cost to create a
$\mathbb{Z}_2$ flux, the spectrum is highly degenerate: it exhibits a flat band at
zero energy and a dispersive one with a vanishing energy at isolated points on
the Brillouin zone boundary. This picture is very similar to the solution
found by Kitaev for a quantum spin 1/2 model on the hexagonal
lattice~\cite{Kitaev03}. In this model, he considered nearest-neighbor
interactions of the form $(\mbox{\boldmath $\sigma$}_{i}%
.\mbox{\boldmath $n$}_{ij})(\mbox{\boldmath $\sigma$}_{j}%
.\mbox{\boldmath $n$}_{ij})$ where $\mathbf{n}_{ij}=\mathbf{e}_{x}%
,\mathbf{e}_{y},\mathbf{e}_{z}$ depending on the unit vector joining sites $i$
and $j$. These operators have the same anticommutation properties as the local
fluxon moving operators
\mbox{$\mathcal{E}_{kl}=i\chi_{\mathbf{r}}\chi_{\mathbf{r^{\prime}}}$}\label{E_kl}
introduced in the Chern-Simons lattice gauge model. However, we have not found
a way to establish a one-to-one mapping between these two models. We do not
see any operator in Kitaev's spin model which could be interpreted as a local
fluxon number, since it would have to anticommute with the three components
$\sigma^{x}$, $\sigma^{y}$, $\sigma^{z}$ of the local spin at any site. In our
gauge theory, fluxons have to be created by non-local operators $S_{\mathbf{r}%
}$ associated to contours, so it does not seem easy to express them in terms
of local spin operators.
The presence of a gapless spectrum in the absence of a magnetic energy term is
reminiscent of the gapless phase found on a square
lattice~\cite{Doucot2005,Dorier2005}. On a square lattice, this $\mathbb{Z}_2$
Chern-Simons model can be mapped into a spin 1/2 model with anisotropic
exchange interactions of the form $\sigma^{x}_{i}\sigma^{x}_{j}$ or
$\sigma^{z}_{i}\sigma^{z}_{j}$ depending on the orientation of the link $ij$.
Adding a magnetic term in the Chern-Simons theory is then equivalent to
imposing a magnetic field along the $\mathbf{y}$ direction in the spin model.
It would be interesting to check whether it also induces a spectral gap.
Unfortunately, such a magnetic field term does not commute with the non-local
symmetry operators associated to rows and columns of the square lattice which
were responsible for the two-fold degeneracy of all energy eigenstates.
Finally, we have shown explicitly that charged matter degrees of freedom are
always induced along edges associated with the external boundary or with inner
holes. These degrees of freedom appear naturally since the presence of an edge
forces the representation of the local gauge symmetry to become projective,
i.e. gauge generators attached to nearest neighbor sites both located along an
edge do not commute. We have shown that these new degrees of freedom provide
static Aharonov-Bohm fluxes for the orbital motion of fluxons, provided the
latter are not allowed to cross edges. Transitions within these degenerate
matter multiplets are induced by processes where a fluxon goes back and forth
across a boundary. Unfortunately, these multiplets are very sensitive to
external noise acting through local operators, so it is unlikely they may
serve the purpose of designing protected qubits.
An immediate extension of this work is to add local static charges to the
system and see how they interact with this fluid of Majorana fermions
associated with fluxons. Very likely, the interaction will be of the
Aharonov-Casher type, which is too weak to induce a confining force for a pair
of opposite charges. So again, we have direct evidence that a Chern-Simons
term destroys the confining regime expected in gauge theories with small
magnetic energy in the absence of a Chern-Simons
term~\cite{Pisarski86,Affleck89}.
The most important question connected to protected quantum computation is the
construction of similar models for a finite non-Abelian
group~\cite{Mochon2003,Mochon2004}. This is the subject of a forthcoming work.
\textbf{Acknowledgments}
We are thankful to M.V. Feigelman and A. Silva for the critical reading of the
paper. LI is thankful to LPTHE, Jussieu for their hospitality while BD has
enjoyed the hospitality of the Physics Department at Rutgers University. We
thank A. Kitaev for kindly sending us his unpublished work on the honeycomb
lattice which encouraged us to study the Chern-Simons theory on the triangular
lattice. This work was made possible by support from NATO CLG grant 979979 and
NSF grant DMR 0210575.
Time crystals are nonequilibrium many-body phases, in which time-translation symmetry is spontaneously broken \cite{Wilczek2012, Shapere2012, Sacha2020, Else20,Khemani2019}.
Time crystals that are induced by periodic driving exhibit a robust subharmonic response in relation to the driving frequency.
Isolated systems under periodic driving, however, generally heat up continuously, causing the time-crystalline order to ``melt'' into a featureless state. One approach to stabilize periodically driven time crystals, also known as Floquet time crystals, consists of adding strong disorder to push the system into a many-body localized phase \cite{Else2016, Yao2017, Khemani2016}. This has enabled the experimental observation of discrete time crystals in various periodically driven systems \cite{Zhang2017,Choi2017,Rovny2018,Randall2021,Mi2022}. Discrete time crystals, which do not rely on many-body localization, have been realized in other experimental platforms \cite{Smits2018,Autti2018,Monroe2021}.
Alternative strategies to stabilize time crystals include coupling the system to an environment \cite{Else2017,Gong2018,Iemini2018,Buca2019,Michal2022} or including long-range interaction \cite{Russomanno2017,Kozin2019,Kelly2021,Pizzi2021a,Ye2021}.
Dissipation, in particular, has been utilized to create a dissipative time crystal (DTC) in an atom-cavity system \cite{Kessler2021}. Due to the approximation of the atom-cavity system via the Dicke model \cite{Mivehvar2021}, this paradigmatic DTC can be regarded as a realisation of the Dicke time crystal \cite{Gong2018}. The open Dicke model describes an ensemble of two-level systems interacting with photons in a leaky cavity \cite{Kirton2019}. Note that the standard Dicke model does not have any notion of spatial dimension as it is a zero-dimensional model. That is, it excludes spatially dependent potential for the atoms. More importantly, it only captures the all-to-all photon-mediated coupling between the atoms.
These approximations provide an idealized limit, in which the spatial $\mathbb{Z}_2$ symmetry breaking in the superradiant phase is intimately tied to the temporal $\mathbb{Z}_2$ symmetry breaking of the period-doubled Dicke time crystal. Moreover, the infinite-range nature of the cavity-mediated interaction makes this approximate description mean-field solvable and, in fact, exactly solvable in the thermodynamic limit \cite{Gong2018,Zhu2019}.
\begin{figure}[!htb]
\centering
\includegraphics[width=1\columnwidth]{fig1}
\caption{(a) Schematic diagram of the atom-cavity system consisting of a Bose-Einstein condensate with a harmonic trap inside a high-finesse optical resonator. The solid curve denotes the particle distribution $\rho(z)$ in the self-organized density wave phase and the dashed curve denotes the combined dipole potential $U(z)$ due to the cavity field and the harmonic trap. The cavity photon loss rate is $\kappa$. The pump and cavity wavelength is $\lambda$. Dynamics of the atomic density for a (b) stable and (c) metastable dissipative time crystal with harmonic trap frequency $\omega = \hbar/m(3.5\lambda)^2$ and short-range interaction energy $E_\mathrm{int}/E_\mathrm{rec} = 0.26$ with $E_\mathrm{int}$ as defined in Sec.~\ref{sec:dia}. In (b), the driving strength is $f_\mathrm{d} = 0.7$ and the driving frequency is $\omega_\mathrm{d}/2\pi = 5$ kHz. In (c), $f_\mathrm{d} = 0.5$ and $\omega_\mathrm{d}/2\pi = 3.5$ kHz.}
\label{fig:schem}
\end{figure}
The absence of an inhomogeneous potential and of other forms of interaction distinguishes the Dicke model from the atom-cavity system of Ref.~\cite{Kessler2021}, and from any other realisation in which an external harmonic trap and inherent collisional interactions between the atoms are present. These subtle yet important distinctions raise the question of the stability of DTCs, and of any prediction based on the Dicke approximation, in realistic atom-cavity setups, on top of the finite lifetime of Bose-Einstein condensates.
On the one hand, inhomogeneous potentials, such as a harmonic trap, break the spatial symmetry, which could affect the $\mathbb{Z}_2$ symmetry broken states responsible for the period-doubling response. On the other hand, short-range interactions break the mean-field solvability of the Dicke model \cite{Zhu2019}. Given that these features are ubiquitous in any atom-cavity system, it is imperative to point out their influence.
In the absence of dissipation, beyond mean-field effects on discrete time crystals in a spin model with competing short- and long-range interactions are studied in Ref.~\cite{Pizzi2021b}.
In this work, we investigate the influence of harmonic confinement and short-range interaction on the DTC found in a periodically driven atom-cavity system. We show that a DTC remains stable in the presence of weak perturbations that explicitly break spatial symmetry and mean-field solvability of the model.
This is in contrast to the absence of a stable period-doubling response predicted in a similar atom-cavity setup but for the bad cavity limit, wherein the cavity dynamics is orders of magnitude faster than the atomic dynamics \cite{Molignini2018}.
While it was mentioned in Ref.~\cite{Kessler2021} that strong contact interaction may lead to a DTC with finite lifetime, similar to a metastable Dicke time crystal \cite{Zhu2019}, a detailed analysis of this phase, which we call metastable DTC, is still lacking.
Aside from the Dicke time crystal, other dynamical phases in dissipative systems are found to exhibit metastability when pushed out of the idealized limit \cite{Sarkar2021,Jamir2022}.
Here, we show that metastable DTCs may emerge not only because of short-range interaction competing with the infinite-range interaction but also due to the influence of harmonic confinement.
This work is organized as follows. We discuss the system, the method for simulating the dynamics, and the driving protocol in Sec.~\ref{sec:system}. The properties of the stable DTC in the ideal atom-cavity system are reviewed in Sec.~\ref{sec:ideal}. In Sec.~\ref{sec:dia}, we map out and analyze the dynamical phase diagram for different combinations of contact interaction strength and harmonic trap frequency. In Sec.~\ref{sec:mdtc}, we further study the metastable dissipative time crystal and its lifetime. Finally, we conclude this paper in Sec.~\ref{sec:conc}.
\section{System}\label{sec:system}
We consider an atom-cavity system with a Bose-Einstein condensate (BEC) of $^{87}$Rb atoms as depicted in Fig.~\ref{fig:schem}(a). An external laser is applied along the $y$ direction, which is perpendicular to the cavity axis aligned in the $z$ direction. Photons leak out of the cavity at a rate of $\kappa$. The cavity wavelength is $\lambda$. An external harmonic trap is present along the $z$ direction. In the following, we consider the one-dimensional limit of the system and investigate only the dynamics along the cavity axis.
We vary the pump intensity $\epsilon$ to investigate the dynamical response of the system. The transversely pumped atom-cavity system hosts a self-organisation phase transition \cite{Ritsch2013}. Above a critical value $\epsilon_\mathrm{crit}$, it becomes energetically favourable for the atoms to self-organise into a chequerboard density wave (DW) phase to scatter photons from the pump into the cavity \cite{Baumann2010, Klinder2015}. This self-organisation phase transition is an approximate emulation of the superradiant phase transition in the Dicke model \cite{Kirton2019}.
In the density wave phase, the system breaks the $\mathbb{Z}_2$ symmetry as the atoms spontaneously localise either in the odd or even sites of the emergent standing wave formed by the cavity photons. These two symmetry broken states can be distinguished by the sign of the density wave order parameter $\Theta= \langle \mathrm{cos}(kz)\rangle$ where $k=2\pi/\lambda$. That is, a non-zero positive (negative) value for $\Theta$ corresponds to an even (odd) DW state \cite{Ritsch2013, Nagy2008, Baumann2010, Klinder2015}.
The Hamiltonian for the system is a combination of the cavity and the atom Hamiltonian, as well as the short-range atom-atom interaction and the atom-cavity interaction, i.e.,
\begin{equation}\label{H}
\hat{H} = \hat{H}_\mathrm{C}+\hat{H}_\mathrm{A}+\hat{H}_\mathrm{AA}+\hat{H}_\mathrm{AC}.
\end{equation}
The Hamiltonian for the cavity with a mode function $\cos(kz)$ is given by
\begin{equation}\label{Hc}
\hat{H}_\mathrm{C} = -\hbar\delta_\mathrm{C}\hat{\alpha}^\dagger \hat{\alpha},
\end{equation}
where $\delta_\mathrm{C}$ is the pump-cavity detuning and $\hat{\alpha}^\dagger$ ($\hat{\alpha}$) is the creation (annihilation) operator for the cavity photon.
We include a harmonic trap for the atoms with trap frequency $\omega$. The single-particle Hamiltonian for the atoms is
\begin{equation}\label{Ha}
\hat{H}_\mathrm{A} = \int \hat{\Psi}^\dagger(z) \left[ -\frac{\hbar^2}{2m}\frac{d^2}{dz^2}+\frac{1}{2}m\omega^2 z^2\right]\hat{\Psi}(z) \;dz,
\end{equation}
where $m$ is the mass of a $^{87}$Rb atom and $\hat{\Psi}(z)$ is the bosonic field operator associated with the BEC.
We are interested in the interplay between the infinite-range cavity-mediated interaction and the inherent short-range collisional interaction between the atoms. The short-range interaction is described by
\begin{equation}
\hat{H}_\mathrm{AA} = \frac{g_\mathrm{aa}}{2} \int \hat{\Psi}^\dagger (z) \hat{\Psi}^\dagger(z) \hat{\Psi} (z) \hat{\Psi}(z)\;dz,\label{Haa}
\end{equation}
where $g_\mathrm{aa}$ is the contact interaction strength. On the other hand, the atom-cavity interaction, which gives rise to a dynamical infinite-range interaction between the atoms, is modeled by
\begin{align}
\hat{H}_\mathrm{AC} =\int \hat{\Psi}^\dagger(z) \hbar U_0 [\mathrm{cos}^2(kz)\hat{\alpha}^\dagger \hat{\alpha} \label{Hac}\\ \nonumber
+\sqrt{\frac{{\epsilon}}{\hbar|U_0|}} \mathrm{cos}(kz)(\hat{\alpha}^\dagger +\hat{\alpha})]\hat{\Psi}(z)\; dz.
\end{align}
The pump frequency is red-detuned with respect to the atomic transition frequency leading to a negative light shift per photon $U_0<0$. Note that the atom-cavity interaction strength depends on $U_0$ and the pump intensity $\epsilon$.
The dynamics of the system is captured by the following Heisenberg-Langevin equations
\begin{align}\label{eq:eom}
\frac{\partial}{\partial t}\hat{\Psi} &= \frac{i}{\hbar}[\hat{H}, \hat{\Psi}] \\
\frac{\partial}{\partial t}\hat{\alpha} &= \frac{i}{\hbar}[\hat{H}, \hat{\alpha}]-\kappa \hat{\alpha}+\xi,
\end{align}
where $\xi$ is the stochastic noise due to the cavity dissipation with $\langle \xi^*(t)\xi(t')\rangle = \kappa \delta(t-t')$ \cite{Ritsch2013}.
We employ the truncated Wigner approximation (TWA) to simulate the quantum dynamics \cite{Polkovnikov2010, Carusotto2013}. The TWA goes beyond the mean-field approximation through the inclusion of quantum noise from the initial state and the fluctuations corresponding to the dissipation in the cavity. This method treats the quantum operators as $c$ numbers and it is applicable for large number of atoms and weak coupling. For the initial states, we choose coherent states for the BEC and the empty cavity mode.
We then propagate an ensemble of initial states, which samples the initial Wigner distributions, according to the coupled stochastic differential equations in Eq.~\eqref{eq:eom}. The TWA has been used to confirm robustness of dissipative time crystals \cite{Cosme2019,Kessler2019,Kessler2020} and in comparison with experiment \cite{Kessler2021,Popla2022}.
We assume the initial state of the BEC to be homogeneous in the absence of a harmonic trap. When a harmonic trap is present, we use imaginary time propagation to initialize the system in the ground state (see Appendix \ref{sec:imag} for details). We consider the pump protocol depicted in Fig.~\ref{fig:cleanDTC}(a). The pump intensity is linearly increased for 2.5 ms until it reaches $\epsilon_0 = 1.02 \epsilon_\mathrm{crit}$, where $\epsilon_\mathrm{crit}$ is the critical pump intensity for self-organisation for a given contact interaction strength and harmonic oscillator frequency. Next, $\epsilon$ is held constant until 30 ms allowing the system to relax to the corresponding DW state. Finally, the pump intensity is periodically modulated according to
\begin{equation}
\epsilon (t) = \epsilon_0(1 + f_\mathrm{d} \sin(\omega_\mathrm{d}t)),
\end{equation}
where $f_\mathrm{d}$ is the driving strength and $\omega_\mathrm{d}$ is the driving frequency. The driving period is $T=2\pi/\omega_{\mathrm{d}}$.
In the following, we use realistic parameters according to the experimental set-up in Ref.~\cite{Kessler2021}. The particle number is $N_\mathrm{a} = 65 \times 10^3$ atoms, recoil frequency $\omega_\mathrm{rec} = 2\pi^2\hbar/m\lambda^2 = 2\pi \times 3.55$ kHz, decay rate $\kappa = 2\pi \times 4.55$ kHz, $U_0 = -2\pi \times 0.36$ Hz, and effective pump detuning $\delta_\mathrm{eff} = \delta_\mathrm{C}-N_\mathrm{a}U_0/2 =-2\pi \times 18.5$ kHz. We simulate the dynamics for 200 driving cycles.
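For concreteness, the structure of such a simulation can be sketched as follows. The Python code below is a minimal \emph{mean-field} version of Eq.~\eqref{eq:eom} for a single $c$-number trajectory (no TWA noise), written in dimensionless units $\hbar=m=k=1$ with illustrative parameter values rather than the experimental ones quoted above; it implements the ramp-hold-modulate pump protocol, a split-step Fourier update for $\psi$, and an exact exponential update for the linear part of the cavity equation.

```python
import numpy as np

# Grid: several cavity wavelengths (lambda = 2*pi in units with k = 1).
M = 256
L = 8 * 2 * np.pi
z = np.linspace(-L / 2, L / 2, M, endpoint=False)
kz = 2 * np.pi * np.fft.fftfreq(M, d=L / M)
dz, dt = L / M, 1e-3

# Illustrative dimensionless parameters (hbar = m = k = 1), not the
# experimental values quoted in the text.
U0, kappa, delta_c, N = -1e-3, 1.3, -5.0, 65e3
omega_tr, g1d = 0.0, 0.0        # harmonic trap and contact term switched off

def pump(t, eps0=1.0, f_d=0.5, om_d=1.0, t_ramp=0.5, t_hold=1.0):
    """Ramp, hold, then sinusoidally modulate the pump intensity."""
    if t < t_ramp:
        return eps0 * t / t_ramp
    if t < t_hold:
        return eps0
    return eps0 * (1.0 + f_d * np.sin(om_d * (t - t_hold)))

# Initial state: near-homogeneous condensate with a small symmetry-breaking
# seed, plus a weak seed cavity field.
psi = (1.0 + 0.01 * np.cos(z)).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dz)
alpha = 1e-3 + 0j

for n in range(2000):
    eps = pump(n * dt)
    eta = np.sqrt(eps * abs(U0))          # pump-cavity coupling
    dens = np.abs(psi)**2
    B = np.sum(dens * np.cos(z)**2) * dz  # bunching integral
    Th = np.sum(dens * np.cos(z)) * dz    # order parameter Theta

    # Cavity: exact update of the linear part, drive held constant over dt.
    lam = 1j * (delta_c - U0 * N * B) - kappa
    drive = 1j * eta * N * Th
    alpha = np.exp(lam * dt) * alpha + (np.exp(lam * dt) - 1.0) / lam * drive

    # Atoms: split-step Fourier (potential half-step, kinetic, half-step).
    V = (U0 * np.abs(alpha)**2 * np.cos(z)**2
         - 2.0 * eta * alpha.real * np.cos(z)
         + 0.5 * omega_tr**2 * z**2 + g1d * N * dens)
    psi *= np.exp(-0.5j * V * dt)
    psi = np.fft.ifft(np.exp(-0.5j * kz**2 * dt) * np.fft.fft(psi))
    psi *= np.exp(-0.5j * V * dt)

# The split-step update is unitary: the norm is conserved.
assert abs(np.sum(np.abs(psi)**2) * dz - 1.0) < 1e-8
assert np.isfinite(alpha)
```

A full TWA calculation would propagate an ensemble of such trajectories with Wigner-sampled initial conditions and the stochastic noise term $\xi$ added to the cavity update.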
\section{Ideal Dissipative Time Crystal}\label{sec:ideal}
In this section, we recall the defining properties of a DTC in the ideal limit, when both contact interaction and harmonic trap are absent, as a preparatory step to determining their influence in Sec.~\ref{sec:dia}. In this limit, the atom-cavity system maps approximately onto the Dicke model, and thus the DTC observed here is equivalent to the paradigmatic Dicke time crystal \cite{Gong2018}.
\begin{figure}[!htb]
\centering
\includegraphics[width=1\columnwidth]{fig2}
\caption{(a) Protocol for the pump intensity. Dynamics of a single TWA trajectory for the (b) order parameter during periodic modulation. Zoom-in of the dynamics for the (c) order parameter, (d) intracavity photon number $|\alpha|^2$, and (e) the correlation function $C(t)$. The driving parameters are $f_\mathrm{d} = 0.5$ and $\omega_\mathrm{d}/2\pi = 4$ kHz in the absence of a harmonic trap and contact interactions.}
\label{fig:cleanDTC}
\end{figure}
Modulation of the pump intensity leads to the formation of a DTC in the atom-cavity system, for a specific regime of driving strength and frequency \cite{Kessler2021,Cosme2019}. This dynamical phase is characterized by a period-doubled switching between the symmetry broken DW states. The mean-field approximation of the ideal DTC is depicted in Fig.~\ref{fig:cleanDTC}(b-c). The periodic switching of the sign of the order parameter in Fig.~\ref{fig:cleanDTC}(c) underpins how the system switches between the odd and even DW states. Moreover, the switching occurs at twice the driving period as seen in Fig.~\ref{fig:cleanDTC}(c). Another important characteristic of a DTC is seen from the dynamics of the cavity mode occupation $|\alpha|^2$, which exhibits pulsating behavior at the driving frequency, see Fig.~\ref{fig:cleanDTC}(d). This means that the DTC relies on the presence of cavity photons, which mediate an infinite-range interaction between the atoms, and thus highlights the many-body nature of the DTC phase.
To quantify the behavior of the time crystal using the TWA, we obtain the two-point temporal correlation function
\begin{equation}
C(t) = \mathrm{Re}\{\langle \hat{\alpha}^\dagger(t) \hat{\alpha}(t_0)\rangle\}/\langle \hat{\alpha}^\dagger(t_0) \hat{\alpha}(t_0)\rangle,
\end{equation}
where $t_0$ is the time before modulation is switched on. In Fig.~\ref{fig:cleanDTC}(e), we present an example of the dynamics of $C(t)$ in a ideal DTC. Note that it closely follows the behavior of the order parameter $\Theta$. In the following, we use $C(t)$ instead of $\Theta$, which averages out in TWA due to the $\mathbb{Z}_2$ symmetry breaking response of the DTC.
\section{Dynamical Phase Diagrams}\label{sec:dia}
\begin{figure*}[!htp]
\centering
\includegraphics[width=2\columnwidth]{fig3}
\caption{Gallery of dynamical phase diagrams for increasing contact interaction strength, from left (zero contact interaction) to right, and increasing harmonic oscillator trap frequency, from top (no harmonic trap, $\omega=0$) to bottom. The oscillator strengths in units of recoil energy from top to bottom are $E_\mathrm{osc}/E_\mathrm{rec}=\{0,0.004,0.013\}$.}
\label{fig:phases}
\end{figure*}
We now explore the influence of harmonic confinement and short-range interactions between the atoms on the stability of the DTC.
The harmonic trap frequency $\omega$ is related to the oscillator length $l_z$ via $\omega = \hbar/(l_z^2 m)$. Alternatively, we measure the confinement strength by comparing $E_\mathrm{osc} = \hbar \omega$ with the recoil energy $E_\mathrm{rec} = \hbar \omega_\mathrm{rec}$.
We quantify the contact interaction via the mean-field interaction energy $E_\mathrm{int} = g_\mathrm{aa} N_\mathrm{a}/\lambda$, where $g_\mathrm{aa}>0$ is the repulsive contact interaction strength.
We use $C(t)$ to classify the phases in the dynamical phase diagrams shown in Fig.~\ref{fig:phases}. Specifically, we classify a stable or persistent DTC if $C(t)$ perfectly switches sign every driving cycle for the final 100 driving periods.
We also observe the emergence of a metastable DTC phase. On the level of a single TWA trajectory, we define a metastable DTC by having a $C(t)$ that switches sign at least six consecutive times (or equivalently three consecutive period doublings) during the initial stage of driving, $t\in[0,6T]$, before eventually becoming chaotic, in which $C(t)$ does not change sign over multiple driving periods with irregular intervals. We identify a completely chaotic phase by the lack of consecutive period doublings over $t\in[0,6T]$ in addition to the obvious irregular dynamics of $C(t)$ (see Appendix \ref{sec:chaos} for an example). In Fig.~\ref{fig:phases}, the dynamical phase diagrams are arranged in increasing contact interaction strength from left to right and increasing harmonic oscillator frequency from top to bottom. The resonant nature of DTCs in atom-cavity systems \cite{Kessler2021,Skulte2021,Popla2021} is evident from the fact that both stable and metastable DTCs are only found in some range of the driving frequency in Fig.~\ref{fig:phases}.
The dynamical phase diagram in the ideal scenario, in which both the harmonic trap and the contact interaction are neglected, is shown in Fig.~\ref{fig:phases}(a). A stable DTC can be found in a large area of the driving parameter space, specifically for $\omega_d/2\pi \in [1,6]~\mathrm{kHz}$. We also observe a DTC in a much smaller region of the parameter space for low driving frequencies, $\omega_d/2\pi < 1.0~\mathrm{kHz}$. This phase is distinct from the usual DTC phase due to the presence of faster but subdominant oscillations in the order parameter corresponding to third harmonics, as exemplified in Appendix \ref{sec:dtclow}. While this phase is robust against the quantum noise included in TWA, it is noticeably less robust against contact interaction and harmonic confinement, as inferred from the gradual disappearance of the relevant region in Fig.~\ref{fig:phases}.
\begin{figure*}[!htbp]
\centering
\includegraphics[width=2\columnwidth]{fig4}
\caption{(a)-(c) Behaviour of the correlation function for varying contact interaction strength $(E_\mathrm{int}>0)$ and zero oscillator frequency $(E_\mathrm{osc}=0)$ obtained using TWA with $10^3$ trajectories. (a) Exemplary dynamics of $C(t)$. (b) Stroboscopic correlation function $\overline{C}(t)$ for different contact interaction strengths. (c) Dependence of the lifetime $\tau$ on the contact interaction strength. The driving parameters are $f_\mathrm{d} = 0.5$ and $\omega_\mathrm{d}/2\pi = 4$ kHz. (d)-(f) Similar to (a)-(c) but for varying harmonic trap frequency $(E_\mathrm{osc}>0)$ and zero contact interaction $(E_\mathrm{int} = 0)$.}
\label{fig:lz}
\end{figure*}
We find a stable DTC in the presence of inhomogeneous trapping and short-range interactions. This suggests that a stable DTC phase can indeed form in the atom-cavity system with a harmonic trap and inherent collisional interaction, and thus supports the experimental observation of a DTC in Ref.~\cite{Kessler2021}.
Moreover, the presence of both short-range collisional interaction and infinite-range cavity-mediated interaction between the atoms implies the departure of the DTC from the mean-field regime.
This persistence of the DTC phase agrees with the prediction of a long-lived Dicke time crystal despite the presence of short-range interaction, which breaks the mean-field solvability of the Dicke model \cite{Zhu2019}. Typical dynamics of the atomic distribution in the stable DTC phase in the nonideal limit is demonstrated in Fig.~\ref{fig:schem}(b), which shows the system periodically switching between the odd and even DW states.
The parameter regime with a stable DTC phase shrinks with increasing short-range interaction strength and harmonic confinement, as seen in Fig.~\ref{fig:phases}. The stable DTC is replaced by either a chaotic phase or a metastable DTC. In general, the region of the chaotic phase expands with increasing $E_\mathrm{int}$, which is a consequence of the nonlinear nature of the short-range interaction that couples the periodic motion to a continuum of excitations of the atomic cloud. Strong driving enables a DTC with a large photon number and thus a deep intracavity field, thereby forming large density modulations in the atomic distribution (see Appendix \ref{sec:dtcfd}).
The energy associated with the collisional interaction is large for a distribution with large density modulation, which means that repulsive contact interaction penalizes its formation. As demonstrated in Figs.~\ref{fig:phases}(a)-(d), this leads to the suppression of stable DTCs in the strong driving regime, $f_\mathrm{d} > 0.5$, for increasingly strong contact interaction.
In addition to contact interaction, strong harmonic confinement can also destabilize a DTC due to trap-induced coupling between relevant momentum modes becoming dominant over cavity-induced coupling, see Appendix \ref{sec:coup}.
Note that strong harmonic confinement increases the density at the center of the trap, while strong contact interaction reduces it. These two system properties therefore have competing influence on the density.
This explains how strong harmonic confinement ``melts'' the DTCs with small density modulation corresponding to weak driving strength $f_\mathrm{d}$, as demonstrated in Fig.~\ref{fig:phases}(i), which is in contrast to the effect of contact interaction shown in Fig.~\ref{fig:phases}(c). Because of their competing effect on the particle density, one may have naively expected that the contact interaction may be tuned appropriately to cancel the effect of the harmonic trap and therefore stabilize the DTC phase.
However, we highlight in Figs.~\ref{fig:phases}(e)-(h) and \ref{fig:phases}(i)-(l) that this is not the case and, in fact, increasing the contact interaction strength leads to further destabilisation of the DTC.
Similarly, for a fixed contact interaction strength, tightening the trap in an attempt to counteract the repulsive interaction shrinks the area in the phase diagram where a stable DTC persists, as seen in Figs.~\ref{fig:phases}(b), \ref{fig:phases}(f), and \ref{fig:phases}(j). The harmonic trap or any inhomogeneous potentials, in general, will couple various momentum modes. Such a coupling may significantly deplete the momentum modes that are important for the $\mathbb{Z}_2$ symmetry broken states participating in the DTC phase, namely the $|\mathbf{k}=0\rangle$ and $|\mathbf{k}=2\pi/\lambda\rangle$ momentum modes. Thus, we demonstrate the importance of ensuring a weak inhomogeneous trap to obtain a stable DTC.
\section{Metastable Dissipative Time Crystal}\label{sec:mdtc}
We now further investigate the metastable dissipative time crystal, the predominant nontrivial dynamical phase for large oscillator frequency and large contact interaction strength, as seen in the phase diagrams in Fig.~\ref{fig:phases}. The metastable DTC exhibits a period-doubled response on a time scale larger than the oscillation period before the dynamics become irregular as shown in Fig.~\ref{fig:schem}(c), for example.
We first focus on the case without a harmonic trap but with a nonzero short-range interaction. In the metastable DTC phase, the irregularity in the dynamics of a single trajectory translates into exponentially decaying oscillations of the temporal correlation $C(t)$ after averaging over multiple trajectories in TWA, as shown in Fig.~\ref{fig:lz}(a). Moreover, we obtain the stroboscopic correlation function $\overline{C}(t)$, defined as the envelope of the oscillations in the correlation function.
A metastable DTC is characterized by having a finite lifetime, $\tau$, which we extract by fitting an exponential decay $\sim \exp(-t/\tau)$ to the corresponding stroboscopic correlation function $\overline{C}(t)$, as depicted in Fig.~\ref{fig:lz}(b).
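The lifetime extraction can be sketched as a least-squares fit of $\exp(-t/\tau)$ to the envelope $\overline{C}(t)$. The version below fits $\log\overline{C}$ linearly in $t$, which assumes a strictly positive envelope; it is an illustration of the procedure, not the actual fitting code.

```python
import numpy as np

def lifetime_from_envelope(t, C_env):
    """Extract the lifetime tau from C_env ~ A * exp(-t / tau) by a
    linear least-squares fit of log(C_env) versus t.  Assumes a strictly
    positive envelope; a sketch of the fitting step in the text."""
    slope, _intercept = np.polyfit(t, np.log(C_env), 1)
    return -1.0 / slope
```

Applied to a noise-free synthetic envelope, the fit recovers the input lifetime exactly.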
We demonstrate in Fig.~\ref{fig:lz}(c) that the lifetime decreases with the contact interaction strength. Similar to the metastable Dicke time crystal \cite{Zhu2019}, we emphasise that the metastable DTC for $E_\mathrm{int}>0$ without harmonic confinement is distinct from the prethermal discrete time crystals, which rely on high driving frequency to increase the relaxation time towards a featureless thermal state \cite{Machado2019,Pizzi2021,Monroe2021}.
Unlike in a prethermal discrete time crystal, there are no visible prethermalization plateaus in Fig.~\ref{fig:lz}(b) for the metastable DTCs induced by contact interaction.
Next, we present in Fig.~\ref{fig:lz}(d) the representative dynamics of $C(t)$ when there is a harmonic confinement but short-range interaction is ignored. The fluctuation in the oscillation amplitude of $C(t)$ is more pronounced during the initial dynamics, $t/T\in[0,20]$, but it stabilizes in the long-time limit for a stable DTC under weak confinement. The fluctuating oscillation amplitude is highlighted in the relatively noisy dynamics of the stroboscopic correlation functions shown in Fig.~\ref{fig:lz}(e). The oscillations inferred from Fig.~\ref{fig:lz}(b) are more stable compared to those in Fig.~\ref{fig:lz}(e), which corroborates the role of the harmonic trap in introducing small irregularity in the period-doubling response and the photon number dynamics observed in the experiment \cite{Kessler2021}.
In Fig.~\ref{fig:lz}(f), we find that the lifetime of a metastable DTC decreases with increasing harmonic oscillator frequency similar to the effect of the short-range interaction.
In Figs.~\ref{fig:lz}(c) and \ref{fig:lz}(f), both energy scales are given relative to the recoil energy, i.e.\ as $E_\mathrm{osc}/E_\mathrm{rec}$ and $E_\mathrm{int}/E_\mathrm{rec}$. We further observe that the typical lifetime of metastable DTCs with harmonic confinement is shorter by an order of magnitude than that of metastable DTCs without the trap but with contact interaction, as inferred from comparing Figs.~\ref{fig:lz}(c) and \ref{fig:lz}(f).
This implies that an inhomogeneous potential, such as the harmonic trap, has a more detrimental effect on the stability of dissipative time crystals that rely on states with spatial long-range order, like the DW phase in the atom-cavity system.
A different kind of metastable DTC emerges for strong harmonic confinement without contact interaction. We observe in Fig.~\ref{fig:lz}(e) the appearance of prethermalization plateaus, wherein the amplitude of the period-doubling response is fixed, reminiscent of those found in prethermal discrete time crystals \cite{Machado2019,Pizzi2021,Monroe2021}. This behavior is different from the exponential decay observed as soon as the periodic driving starts in the \textit{standard} metastable DTCs for strong contact interactions and without harmonic trap, see Fig.~\ref{fig:lz}(b). Thus, we propose a second kind of metastable DTC, a prethermal dissipative time crystal (PDTC). This phase can be understood in the paradigm of prethermalization arising from fast driving. In the presence of a harmonic trap, the energy of the system can be rescaled by the oscillator frequency $\omega$. Then, the ratio between the driving frequency and the oscillator frequency becomes the relevant energy scale for defining the ``fast-driving regime", $\omega_\mathrm{d}/\omega \gg 1$. Similar to a prethermal discrete time crystal \cite{Machado2019,Pizzi2021,Monroe2021}, the relaxation time in the PDTC can be increased by increasing the relative driving frequency, $\omega_\mathrm{d}/\omega$, which is effectively achieved by decreasing the oscillator frequency as demonstrated in Fig.~\ref{fig:lz}(e). This point of view is consistent with the infinitely long-lived DTC found in the mean-field limit $\omega = 0$, in which $\omega_\mathrm{d}/\omega \to \infty$.
\section{Conclusions}\label{sec:conc}
In conclusion, we have investigated the properties of dissipative time crystals under realistic conditions of the atom-cavity system. Specifically, we included a harmonic confining potential and short-range interactions that compete with the infinite-range interaction mediated by the cavity photons.
Our results demonstrate that the DTC phase is robust for a nonzero harmonic potential and contact interaction strength, consistent with the observation of a DTC in a similar setup \cite{Kessler2021}.
We also show that the irregular amplitude of oscillations observed in the experiment \cite{Kessler2021} can be attributed to the harmonic trap.
We point out that in the bad-cavity regime $\kappa \gg \omega_\mathrm{rec}$, and under similar conditions, there seems to be no evidence for a stable DTC phase \cite{Molignini2018}, which may hint at the importance of recoil resolution, $\kappa \sim \omega_\mathrm{rec}$, as considered in this work and in the experimental setup in Ref.~\cite{Kessler2021}.
For sufficiently strong harmonic confinement and contact interaction, a DTC may become unstable towards the formation of two kinds of metastable DTC. Strong contact interactions lead to an exponentially decaying period-doubling response. On the other hand, a strong harmonic trap gives rise to a prethermal DTC with prethermalization plateaus, during which the correlation function exhibits a subharmonic response at fixed amplitude.
Our work sheds light on the crucial role of trapping on the stability of a DTC, which we expect to apply to inhomogeneous potentials, in general.
The metastable DTCs are a genuine many-body phase produced by the interplay between driving, dissipation, and mean-field breaking effects, such as competing ranges of interaction and inhomogeneity in space. We provide not only a strategy for stabilising a DTC but also a route for the systematic realization of a standard metastable DTC and a prethermal DTC. As an outlook, the transition from a stable to a metastable DTC can be experimentally explored using a combination of Feshbach resonances for tuning the contact interaction strength \cite{Chin2010} and digital micromirror devices for creating arbitrary potentials for the atoms \cite{Gauthier16}. Finally, we emphasise that all atom-cavity systems have a confining potential and atomic interactions. Our study demonstrates a general strategy to determine the influence of these inevitable features on any many-body state that is created in these systems.
\begin{acknowledgments}
R.J.L.T. and J.G.C. acknowledge support from the DOST-ASTI's COARE high-performance computing facility. J.S. acknowledges support from the German Academic Scholarship Foundation. L.M. and J.S. are supported by the Deutsche Forschungsgemeinschaft (DFG) in the framework of the Cluster of Excellence ``Advanced Imaging of Matter'' (EXC 2056), Project No. 390715994. L.M. is also supported by the DFG in the framework of SFB 925, Project No. 170620586.
\end{acknowledgments}
\setcounter{equation}{0}
\setcounter{table}{0}
\section*{Abstract}
\textbf{%
There is a contradiction at the heart of our current understanding of individual and collective mobility patterns.
On one hand, a highly influential stream of literature on human mobility driven by analyses of massive empirical datasets finds that human movements show no evidence of characteristic spatial scales. There, human mobility is described as \textit{scale-free}.\cite{brockmann2006scaling, gonzalez2008understanding, song2010modelling} On the other hand, in geography, the concept of \emph{scale}, referring to meaningful levels of description from individual buildings through neighborhoods, cities, regions, and countries, is central for the description of various aspects of human behavior such as socio-economic interactions, or political and cultural dynamics.\cite{paasi2004place, marston2000social} Here, we resolve this apparent paradox by showing that day-to-day human mobility does indeed contain meaningful scales, corresponding to spatial containers restricting mobility behavior. The scale-free results arise from aggregating displacements across containers. We present a simple model, which given a person's trajectory, infers their neighborhoods, cities, and so on, as well as the sizes of these geographical containers.
We find that the containers characterizing the trajectories of more than 700\,000 individuals do indeed have typical sizes.
We show that our model generates highly realistic trajectories without overfitting and provides a new lens through which to understand the differences in mobility behaviour across countries, gender groups, and urban-rural areas.}
It is nearly impossible to overestimate the importance of establishing a quantitative foundation for our understanding of how individuals move from place to place in their everyday lives.
Hundreds of millions of individuals spend billions of collective hours commuting every day.\cite{cresswell2006move}
Goods and food are transported through a global network using shared infrastructure.\cite{kaluza2010complex} Understanding mobility patterns helps us mitigate epidemic spreading\cite{kraemer2020effect}, assist in crisis management\cite{song2014prediction}, prepare for dramatic shifts in modes of transportation\cite{becker2017literature}, and address many other challenges.\cite{cresswell2006move}
For this reason, understanding the origin of scale-free distributions of displacements in empirical mobility traces is crucial, as this issue currently separates the large-scale data-driven human mobility research\cite{barbosa2018human} from the community of human geography\cite{ paasi2004place, marston2000social} and transportation research.\cite{larsen2016mobilities}
Our mental representation of physical space has a hierarchical structure.\cite{hirtle1985evidence}
We describe space referring to \emph{places}\cite{paasi2004place}, meaningful spatial entities with associated typical size, or \textit{scale}, from rooms and buildings -- via neighborhoods, cities, and states -- to nations and continents that are organized in a nested structure.\cite{paasi2004place,von1910isolierte,christaller1980zentralen,berry1967geography,alonso1964location}
Geographical borders confine residential mobility\cite{cadwallader1992migration} and collective mobility fluxes.\cite{thiemann2010structure} Commuting is characterized by a typical travel-time budget, and, as a consequence, there exist characteristic spatial scales that have evolved in connection to the progress of transportation.\cite{marchetti1994anthropological} Further, it has been conjectured that there are fundamental differences between forms of moving at different scales, from moving within a building to traveling across the globe.\cite{berry1967geography, cresswell2006move, larsen2016mobilities}
However, recent empirical research in the field of Human Mobility\cite{barbosa2018human} has found no evidence for characteristic spatial scales in how people travel.\cite{brockmann2006scaling, song2010modelling, gonzalez2008understanding, noulas2012tale} On the contrary, studies have shown that the distribution of displacement lengths $\Delta r$ travelled by an individual has a power law tail $P(\Delta r) \sim \Delta r^ {-\beta}$ over several orders of magnitude, where typically $1 \leq \beta \leq 2$.\cite{alessandretti2017multi} Power law distributions are also called \textit{scale free}, because they are the only mathematical distributions to have no associated typical scale\cite{newman2005power} (see Supplementary Note~1).
\textbf{Nested scales generate power laws}.
So the question becomes:
How is it possible that our intuitive conception of space is clearly hierarchical and characterized by typical scales, when a broad range of empirical datasets, ranging from displacements of dollar bills\cite{brockmann2006scaling} or cell-tower data\cite{gonzalez2008understanding} to public transportation systems\cite{liang2013unraveling}, and GPS data\cite{alessandretti2018evidence,gallotti2016stochastic} all suggest that human mobility is scale free?
To explain this apparent contradiction, we propose that each typical scale of human mobility corresponds to a \textit{container} of a certain mobility behavior.
These containers (rooms, buildings, neighborhoods, cities, countries, and so on) have typical sizes, see Figure~\ref{Figure1}a, and roughly correspond to the notion of \textit{places} in geography.\cite{paasi2004place} The observed power law arises when we aggregate mobility behavior within containers and mobility that transports a person between containers.
Specifically, it is well known that mixtures of normal (or lognormal) distributions with different variances can generate power laws\cite{gheorghiu2004heterogeneity} (see Figure~\ref{Figure1}d).
More specifically, we assume that for each individual, physical space is organized as a nested structure of containers. This structure relates, in part, to the organization of the transportation system\cite{marchetti1994anthropological} and to the concrete structure of our built environment\cite{berry1967geography}, see Figure~\ref{Figure1}a.
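This mechanism is easy to verify numerically: an equal-weight mixture of zero-mean Gaussians whose widths are spread evenly on a logarithmic scale develops a $P(x)\sim x^{-1}$ tail in the intermediate regime, as sketched in Figure~\ref{Figure1}d. The specific widths below are illustrative choices, not fitted container sizes.

```python
import numpy as np

def mixture_pdf(x, sigmas):
    """Equal-weight mixture of zero-mean Gaussian densities with widths `sigmas`."""
    x = np.atleast_1d(x)[:, None]
    comps = np.exp(-x**2 / (2 * sigmas**2)) / (sigmas * np.sqrt(2 * np.pi))
    return comps.mean(axis=1)

# Widths spread evenly on a log scale over six decades, mimicking
# containers of very different sizes (illustrative values).
sigmas = np.logspace(-1.5, 4.5, 80)
x = np.logspace(0.0, 3.0, 40)
p = mixture_pdf(x, sigmas)

# In the intermediate regime the local log-log slope is close to -1,
# i.e. P(x) ~ x^(-beta) with beta close to 1.
slopes = np.gradient(np.log(p), np.log(x))
```

No single component is scale free, yet the aggregate is indistinguishable from a power law over the range where many component scales overlap.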
\begin{figure}[htb!]
\includegraphics[width=\textwidth]{./Figures/Figure1.pdf}
\caption{\textbf{The Scales of Human Mobility.} a) Example of containers for an individual living in Copenhagen, characterized by the size of containers in neighborhoods (blue), cities (orange), urban agglomerations (green) and regions (red). Map data copyrighted OpenStreetMap contributors and available from \href{https://www.openstreetmap.org}{https://www.openstreetmap.org}. b) Distribution of container sizes (left column) and median time spent in the same container (right column) across individuals. Dashed lines correspond to medians. Results, shown here for containers at different hierarchical levels, are obtained by fitting the \emph{container model} to the D1 dataset, consisting of $\sim700\,000$ anonymized GPS traces of individuals distributed across the globe (see Extended Data Figure~\ref{Extended_data_F2} for dataset D2). c) Schematic representation of the container model. Individuals move between locations (black dots) inside a nested set of containers. The probability of transitioning between two locations $j$ and $k$ is the product of two factors, corresponding to choosing level-distance and destination (see main text). d) Gaussian distributions with different variances (left panel) and their mixture (right panel) in log-log scale. The dashed line (right panel) is a power law $P(x)\sim x^{-\beta}$ with $\beta=1$ to guide the eye. \label{Figure1}}
\end{figure}
We propose that these nested containers affect how individuals move, and therefore can be inferred from the raw mobility data.
Specifically, the amount of time spent within a container can depend on its hierarchical level. The connection between hierarchical level and mobility is supported by the literature, which shows that e.g.~transitions between regions are more frequent than transitions between countries\cite{amini2014impact}.
\textbf{A simple model identifies containers}.
We now describe the associated \textit{container model} of mobility, a model which estimates a person's containers from their empirical mobility patterns, see Figure~\ref{Figure1}c. For each individual, we model physical space as a hierarchy of $L$ levels, ordered from the smallest to largest (e.g.~individual locations to countries). At any level $l$, space is partitioned into topologically compact containers, with a characteristic size. For $l<L$, a container is fully included within a single parent container (for example each neighbourhood is part of a single city). Hence, each geographical location $k$ can be identified as a sequence of containers, $k = ( k_1,..., k_l, ..., k_L)$, where container $k_{l}$ is included in $k_{l+1}$.
Next, consistent with most models of human mobility\cite{barbosa2018human, fotheringham1983new}, each container $k_l$ is characterized by its probability to be selected within its parent container, its \emph{attractiveness} $a(k_l)$.
We define the \emph{level-distance} $d(j,k)$ between locations $j$ and $k$ as the highest index at which the two sequences of containers describing $j$ and $k$ differ.\cite{saraccli2013comparison} We model traces individually; each trace results in a unique hierarchical structure.
Based on the assumption that the amount of time spent in a container depends upon its place in the hierarchy, we design a model of trajectories, where the probability of transitioning from location $j$ to location $k$ depends on the level-distance between them.
For an agent located in $j$, we model the probability of moving to $k$ as the product of two factors:
\begin{equation}
\label{equation1}
P( j \rightarrow k)= p_{d(j,k), d(j,h)} \prod _{l \leq d(j,k)}a(k_l),
\end{equation}
see also Methods, section `Model description'.
The first factor, $p_{d(j,k), d(j,h)}$, represents the probability of traveling at level-distance $d(j,k)$, given that the current location $j$ is at level-distance $d(j,h)$ from the individual home-location, $h$.
This probability follows a multinomial distribution, which must depend on level-distance from home to account for the fact that higher-level transitions are more likely when individuals are not in the home container; for example, one is typically more likely to transition at the country scale, when not in the home country.
The second factor $ \prod _{l \leq d(j,k)}a(k_l)$ is the probability of choosing a specific location $k$ at that level-distance, where $a(k_l)$ is the attractiveness of a container at level $l$ including location $k$.
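A minimal sketch of the level-distance and of Equation~(1) for toy container sequences follows. The data structures (tuples of container ids ordered from smallest to largest level, dictionaries for $p$ and $a$) are illustrative choices, not the fitted model implementation.

```python
def level_distance(j, k):
    """Level-distance d(j, k): the highest hierarchical index at which the
    container sequences of locations j and k differ (0 if identical).
    Sequences run from the smallest container (level 1) to the largest."""
    diffs = [l for l in range(1, len(j) + 1) if j[l - 1] != k[l - 1]]
    return max(diffs) if diffs else 0

def transition_prob(j, k, h, p_level, attractiveness):
    """Sketch of Equation (1):
    P(j -> k) = p_{d(j,k), d(j,h)} * prod_{l <= d(j,k)} a(k_l),
    where h is the home location.  `p_level[d_home][d]` holds the
    multinomial level-distance probabilities and `attractiveness[c]`
    the attractiveness a(c) of container c."""
    d = level_distance(j, k)
    prob = p_level[level_distance(j, h)][d]
    for l in range(1, d + 1):
        prob *= attractiveness[k[l - 1]]
    return prob

# Toy two-level hierarchy: locations are (location-id, city-id) pairs.
home = ("a1", "c1")
same_city = ("a2", "c1")      # differs only at level 1
other_city = ("a3", "c2")     # differs up to level 2
```

For an agent at home, moving within the city costs the within-city level probability times the destination's attractiveness, while leaving the city additionally multiplies in the attractiveness of the destination city.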
\textbf{Results: Scales of human mobility}.
We fit this \textit{container model} to the individual GPS traces from two different datasets: dataset D1 which consists of traces of $\sim700\,000$ individuals distributed across the globe, and dataset D2 which consists of traces of $\sim 1000$ students from the Technical University of Denmark (see Methods section `Data description'). We fit the model using maximum likelihood estimation (see Methods section `Likelihood optimization').
For each individual, the fitting procedure outputs the most likely hierarchical spatial structure, along with attractiveness of containers and probabilities of traveling at given level-distance.
We find that empirical individual mobility traces are characterized, on average, by four hierarchical levels.
In contrast, synthetic traces generated by the current state-of-the-art, for example the \textit{Exploration and preferential return} (EPR) model \cite{song2010modelling} and its variations\cite{barbosa2015effect}, are best described by a single hierarchical level grouping individual stop locations (see Extended Data Figure~\ref{Extended_data_F5}).
In both datasets of GPS traces, our model finds characteristic sizes of containers. The sizes of containers -- defined as the maximum distance between two locations in a container at a given level -- are not broadly distributed, but are well described by a lognormal distribution across the population. Our results are robust across datasets (see Extended Data Table~\ref{Extended_data_T1}). We argue that the characteristic sizes of containers are precisely the `scales' of human mobility.
These typical sizes of containers can be characterized by the median value $e^{\mu_l}$, of the lognormal distributions $\textrm{Lognormal}(\mu_l, \sigma_l^2)$\cite{gaddum1945lognormal}, for each hierarchical level $l$. We find $e^{\mu_2}=3.089 \pm 0.006 $~km, $e^{\mu_3}=27.064 \pm 0.006$~km, $e^{\mu_4}=88.442 \pm 0.022$~km and $e^{\mu_5}=161.634 \pm 0.049$~km, see Figure~\ref{Figure1}b and Extended Data Table~\ref{Extended_data_T3}. The coefficient of variation $C_l=\sqrt{e^{\sigma_l^2} - 1}$\cite{romeo2003broad}, characterizing the relative dispersion of the lognormal distribution, is included in the range [2.721,3.042] for $l$ in the range [2,5].
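The summary statistics used here, the lognormal median $e^{\mu_l}$ and the coefficient of variation $C_l=\sqrt{e^{\sigma_l^2}-1}$, can be sketched with a simple moment-matching fit on log-sizes. This is an illustration only; the reported values come from fits across the full population.

```python
import numpy as np

def lognormal_summary(sizes):
    """Median e^mu and coefficient of variation C = sqrt(e^{sigma^2} - 1)
    of a lognormal fitted to positive samples by matching the mean and
    variance of the log-values.  A sketch of the summary statistics."""
    logs = np.log(np.asarray(sizes, dtype=float))
    mu = logs.mean()
    sigma2 = logs.var()
    return np.exp(mu), np.sqrt(np.exp(sigma2) - 1.0)
```

On synthetic lognormal samples with a known median, the estimator recovers both quantities.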
The median time spent within the same container at a given level is also well described by a lognormal distribution (see Figure~\ref{Figure1}b and Extended Data Table~\ref{Extended_data_T2}), implying that there are characteristic temporal scales associated with the spatial scales.
Having shown that we can infer information on geographical scales directly from the raw data, we now demonstrate the usefulness of this novel description of mobility patterns. We approach this task in two steps. First, we argue that the hierarchical description generated by the container model generalizes to unseen data without overfitting, while providing a more expressive and nuanced description of mobility relative to state-of-the-art models according to unbiased
performance estimates. Secondly, drawing on demographic and environmental data, we show that the container model produces results that converge with existing literature on gender differences, urban/rural divides, and walkability scores.
\textbf{Validation: Generation of traces}. First, we explore the ability of the container model to capture key features of empirical mobility patterns and compare to state-of-the-art models. The container model allows us to generate synthetic traces. The realistic nature of these trajectories can be verified by comparing the statistical properties of synthetic and real sequences of locations (see Figure~\ref{Figure2}). For each individual, we fit the container model parameters using a portion of the entire trace with length $1$ year (see Methods section `Likelihood optimization'), and we then generate $1000$ synthetic sequences of $50$ displacements (see Methods section `Generation of traces').
Now, we can compare these synthetic traces with actual traces of the same length, collected in the $1$-year window subsequent to training.
Thus, there is no overlap between the data we used to fit and to validate the model. Comparing synthetic traces to unseen data provides an unbiased performance
estimate, which allows us to compare model performance across multiple models
and confirm that the container model does not overfit (see Supplementary Note~4).
\begin{figure}[htb!]
\includegraphics[width=\textwidth]{./Figures/Figure2.pdf}
\caption{\textbf{The container model generates realistic mobility traces.} (a) The distribution of displacements for the entire population, computed for real traces (black line, dots), traces generated by the container (orange filled area) and EPR\cite{song2010modelling} (blue filled area) models. (b) The median individual radius of gyration vs the number of displacements for data (black line, dots), traces generated by the container (orange filled area) and EPR (blue filled area) models. Dashed lines are logarithmic fits. (c) The average visitation frequency vs the rank of individuals' locations for real traces (black line, dots), container (orange filled area) and EPR (blue filled area) model traces. (d) The distribution of the difference between the temporal entropy $S_{temp}$ and the uncorrelated entropy $S_{unc}$ across individuals for real traces (black line, dots), and synthetic traces generated by the container (orange filled area) and the EPR model (blue filled area). In panels a,c,d, the filled areas for synthetic traces include two standard deviations around the mean computed across $1000$ simulations for each user. In panel b filled areas include the interquartile range. For each individual, we fitted the EPR and container models considering a training period of $1$ year. The data used here for validation corresponds to the $50$ individual displacements following the training period. Results are shown for a random sample of $9000$ individuals.}
\label{Figure2}
\end{figure}
We focus on four key properties of mobility in the generated data: Distribution of displacements, evolution of radius of gyration, time allocation among locations, and entropy.
Considering the distribution of displacement lengths between consecutive locations, a widely studied property of mobility traces\cite{alessandretti2017multi}, the likelihood ratio test\cite{clauset2009power} shows that the container model provides a significantly better description of the data than the EPR model\cite{song2010modelling} and its variations (see Figure~\ref{Figure2}a and Extended Data Figure~\ref{Extended_data_F4} and Table~\ref{Extended_data_T4}, with $p\ll0.01$).
Next, the \textit{radius of gyration}\cite{gonzalez2008understanding} (see Methods section `Metrics'), quantifies the spatial extent of an individual's mobility.
Here we find that while the evolution of individuals' radius of gyration over time is well described by a logarithmic growth in all cases: real\cite{gonzalez2008understanding}, EPR\cite{song2010modelling}, and the container model (see Figure~\ref{Figure2}b), only the fit $f(x) = a+b\cdot \log (x)$ for the container model traces is consistent with the real data within errors (see Supplementary Note~4).
We characterize the way in which individuals allocate time among locations (see Figure~\ref{Figure2}c), and find that distribution of location frequencies is better described by the container model, compared to the EPR model, under the likelihood ratio test\cite{clauset2009power} (with $p\ll0.01$).
The final property of synthetic traces is the individual difference between the uncorrelated entropy $S_\textrm{unc}$, which characterizes the heterogeneity of visitation patterns, and the temporal entropy, $S_\textrm{temp}$, which depends not only on the frequency of visitation, but also on the order in which locations were visited\cite{song2010limits} (see Methods section `Metrics').
The likelihood ratio test\cite{clauset2009power} shows that the distribution of $S_\textrm{unc} - S_\textrm{temp}$ is better described by the container model, compared to the EPR model (with $p\ll0.01$).
The result that the container model provides a better description of mobility compared to the state-of-the-art holds also when considering a comprehensive\cite{barbosa2018human} set of six state-of-the-art individual-level models (see Supplementary Note~4).
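The model comparisons above rely on the likelihood ratio test of Clauset et al. A minimal sketch of that procedure is given below; the two candidate densities are toy stand-ins (an exponential and a standard lognormal), not the fitted container and EPR displacement distributions.

```python
import math

import numpy as np


def loglikelihood_ratio(data, logpdf_a, logpdf_b):
    """Vuong-style likelihood ratio test (Clauset et al. 2009): a positive
    R favours model A; the p-value comes from a normal approximation of
    the normalized ratio."""
    diff = logpdf_a(data) - logpdf_b(data)  # pointwise log-likelihood gaps
    n = len(data)
    R = diff.sum()
    sigma = diff.std()
    p = math.erfc(abs(R) / (math.sqrt(2.0 * n) * sigma))  # two-sided
    return R, p


# Toy check: exponential data judged under the true exponential model (A)
# versus a mismatched standard lognormal model (B); A should win (R > 0).
rng = np.random.default_rng(0)
x = rng.exponential(scale=2.0, size=5000)
log_exp = lambda d: -np.log(2.0) - d / 2.0
log_lognorm = lambda d: -np.log(d) - 0.5 * np.log(2 * np.pi) - np.log(d) ** 2 / 2
R, p = loglikelihood_ratio(x, log_exp, log_lognorm)
```

In the paper's setting, `logpdf_a` and `logpdf_b` would be the log-densities of the fitted container and EPR displacement distributions evaluated on the held-out displacements.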
\textbf{Validation: Demographics and Built Environment}. Now, we aggregate users based on demographics and contextual features and explore the characteristics of containers for each subgroup of users, in order to underscore how the container model reveals patterns which have strong support in the existing literature. We focus on three factors which describe heterogeneity in mobility behavior: gender\cite{gauvin2020gender}, level of urbanization\cite{breheny1995compact}, and \textit{walkability} score\cite{carr2010walk} in the area surrounding one's home-location.
First, we find that gender differences can partly explain the observed heterogeneity, in line with previous findings\cite{gauvin2020gender}, although not in all of the countries under study (see Figure~\ref{Figure3}a, and Supplementary Table~1). A novel finding is that in $21$ out of $53$ countries, females are characterized by a significantly larger number of hierarchical levels than males, while the opposite is not the case for any country (see Supplementary Table~2). As a key observation inviting further research, we find that the difference between genders across countries, measured as the Kullback-Leibler divergence between the distributions of number of levels, is strongly correlated with the Gender Inequality Index (GII)\cite{gaye2010measuring}, which measures the percentage of potential human development loss due to gender inequality (Spearman correlation $\rho=0.69$, $p<10^{-6}$, see Figure~\ref{Figure3}b). Turning next to the size of containers, we find that in $48$ out of $53$ countries, the containers characterizing the mobility of females are smaller compared to males (see Supplementary Note~2), in line with previous research.\cite{gauvin2020gender}
\begin{figure}[htb!]
\includegraphics[width=\textwidth]{./Figures/Figure3.pdf}
\caption{\textbf{Socio-demographic differences and heterogeneity in scales.} (a) The cumulative distribution of number of levels for males (blue dashed line, inset) and females (red dashed line, inset), and the difference between the two (black dashed line). Results are shown for the four countries with the largest (left column, Saudi Arabia and India) and the smallest (right column, Germany and South Africa) gender gap, measured as the Kullback-Leibler (KL) divergence. (b) The gender gap in number of levels, computed as the KL divergence between the number of levels for males and females, versus the Gender Inequality Index.\cite{gaye2010measuring} Each dot represents a different country, orange dots are the countries shown in (a). The black dashed line is a power law fit $P(x)\sim x^\beta$ with $\beta=0.58$. (c) The cumulative distribution of container sizes for individuals living in urban (orange dashed line, inset) and rural (green dashed line, inset) areas, and the difference between the two (black dashed line). Results are shown for hierarchical levels from $2$ to $5$. (d) The size of containers at level $2$ (with level $1$ corresponding to individual locations) versus the walkability score\cite{carr2010walk} around an individual's home location (blue dots). The shaded area corresponds to the $50\%$ interquartile range computed by bootstrapping 500 samples of individuals for each value of the walkability score.}
\label{Figure3}
\end{figure}
Secondly, we find that the urban/rural divide partly explains differences in mobility patterns, in line with the fact that rural areas are characterized by limited accessibility.\cite{velaga2012transport, berry1967geography} Individuals living in rural areas (see Methods section `Other data' for definitions) have significantly larger containers compared to urban individuals, and this difference is more pronounced at the lowest hierarchical levels (see Figure~\ref{Figure3}c, and Supplementary Table~3).
Finally, we find that the walkability score around an individual's home location correlates negatively with the size of containers at the lowest hierarchical level (Spearman correlation $\rho=-0.96$, $p<10^{-9}$, see Figure~\ref{Figure3}d), in line with the finding that improved walkability increases accessibility to goods, services, and activities.\cite{litman2003economic} The correlation between walkability and container-size is significant up to the third level of description (see Supplementary Note~2).
\textbf{Discussion}.
The paradigm of power law descriptions does not stand entirely unchallenged within the quantitative analysis literature. For example, it has been argued that exponential or lognormal functions may be more suitable to describe the distributions of displacements within cities\cite{alessandretti2017multi}, hinting that human mobility may not be completely free of scales.
Until now, however, the nature of the probability distribution of displacements has been unclear.\cite{alessandretti2017multi, gallotti2016stochastic}
For example, it has been suggested that scaling laws could be the signature of L\'{e}vy flights, a type of random walk with scale-free step-size attributed to animal foraging\cite{baronchelli2013levy}, but L\'{e}vy flights do not reproduce all statistical properties of human trajectories.\cite{song2010modelling}
It has also been proposed that the structure of the transportation system\cite{han2011origin, gallotti2016stochastic, zhao2015explaining}, where each mode of transportation corresponds to a typical distance traveled, could explain the observed scaling laws.
Intra-city displacements considering all transportation modes, however, are not scale-free distributed, as this theory would suggest.\cite{liang2013unraveling, noulas2012tale}
Because of the lack of agreement on the functional form of the distribution of displacements, many state-of-the-art agent-based models of individual mobility focus on temporal aspects\cite{barbosa2018human}, including the interplay between exploration and exploitation\cite{song2010modelling, pappalardo2015returners}, recency and memory effects\cite{alessandretti2018evidence, szell2012understanding, barbosa2015effect}, and weekly and circadian rhythms.\cite{jiang2016timegeo}
With few exceptions\cite{han2011origin}, these models do not account for effects due to the spatial distribution of locations.
In this paper, we have proposed a model in which human mobility is organized according to a hierarchical structure of spatial containers, corresponding to the notion of \textit{places} in geography (see Equation 1).
Under this model, the observed power law data arises by merging mobility \textit{within} containers with mobility that transports a person \textit{between} containers. The container model focuses on a specific aspect of mobility, and neglects other important features, including temporal visitation patterns, exploration\cite{song2010modelling}, and the structural connectedness of geographical spaces (e.g. through transportation networks)\cite{pumain2006alternative,batty2006hierarchy,arcaute2016cities}. These could be incorporated in future versions of the model. Fitting the model to trajectories collected in two distinct datasets, consisting of $\sim700\,000$ GPS traces of individuals distributed across the world, we found that -- across individuals -- the containers have typical sizes, representing the `scales' of human mobility.
We showed that our model allows for better understanding of mobility behavior and improves on the state-of-the-art in modeling.
\newpage
\section*{Methods}
\begin{small}
\subsection*{Data description and pre-processing}
\textbf{Mobility data. }Our analyses are based on two mobile phone datasets collecting high-resolution human trajectories. The study procedure follows the guidelines provided by the Danish Data Protection Agency.
The D1 dataset contains anonymized GPS location data for $\sim{5\,000\,000}$ individuals collected by a global smartphone and electronics company between $2017$ and $2019$ (see Extended Data Figure~\ref{Extended_data_F1}). The data consists of anonymized users who self-reported their age, gender, height, weight, and country of residence. Data was extracted through a smartphone app. All data analysis was carried out in accordance with the EU’s General Data Protection Regulation 2016/679 (GDPR) and the regulations set out by the Danish Data Protection Agency. We selected $\sim{700\,000}$ individuals with at least one year of data and whose position is known, every day, at least $50\%$ of the time. Individuals are located across the world and are aged between $18$ and $80$ years old, with an average age of $36$ years. About one-third of individuals are female. Gender and age were provided by the users at the time of registration. Data are not collected at a fixed sampling rate. Instead, the location estimate is updated when there is a change in the motion-state of the device (if the accelerometer registers a change). Location estimation error is below $100$~m for $93\%$ of data points. Informed consent was obtained for all study participants.
The D2 data was collected as part of an experiment that took place between September $2013$ and September $2015$.\cite{stopczynski2014measuring} The experiment involved $851$ Technical University of Denmark students ($\sim22\%$ female and $\sim78\%$ male), typically aged between $19$ and $21$ years old. Participants’ position over time was estimated from a combination of GPS and WiFi information, resulting in samples every $1-2$ minutes. The location estimation error was below $50$~m in $95\%$ of the cases. Data collection was approved by the Danish Data Protection Agency. All participants provided informed consent. Data to produce Figure~\ref{Figure1}a is the location trajectory of one of the authors. We pre-processed all trajectories to obtain stop-locations using the Infostop algorithm.\cite{aslak2020infostop} We used the following algorithm parameters: \texttt{r1}$=30$~m, \texttt{r2}$=30$~m, \texttt{min\_staying\_time}$=10$ minutes, \texttt{max\_time\_between}$=24$ hours. Results are robust with respect to variation of these parameters (Supplementary Note~1).
\textbf{Other data. }We collected data on the walkability score in the area surrounding individuals' home locations using the WalkScore\cite{carr2010walk} API (\href{https://www.walkscore.com/professional/walk-score-apis.php}{https://www.walkscore.com/professional/walk-score-apis.php}). We collected data for $11,511$ individuals living in New Zealand, Australia, Canada and the USA, for which WalkScore data was available.
Data on the urbanization level in the area surrounding individuals' home locations is based on the GHS Settlement Model grid\cite{pesaresi2016operating} that delineates and classifies settlement typologies via a logic of population size, population and built-up area densities. This classification categorizes areas in urban areas, towns, and rural areas. In our analysis, we merged towns and cities into a single category. Data can be downloaded from: \href{https://ghsl.jrc.ec.europa.eu/data.php}{https://ghsl.jrc.ec.europa.eu/data.php}.
The Gender Inequality Index dataset can be downloaded from:\\ \href{http://hdr.undp.org/en/content/gender-inequality-index-gii}{http://hdr.undp.org/en/content/gender-inequality-index-gii}. We used data for $2017$.
\subsection*{The container model}
\textbf{Model description. } The container model describes the trace of an agent transitioning between locations in space. The model is specified by three sets of parameters, which can either be chosen to generate synthetic traces or estimated from an empirical trace through maximum likelihood estimation. The model contains:
\begin{itemize}
\item A hierarchical structure $\boldsymbol{H}$ with $L$ levels, where each level consists of containers encapsulating locations. Accordingly, each location $k$ can be described as a sequence of containers encapsulated within each other, $k = ( k_1,..., k_l, ..., k_L)$, where levels are ordered from the most fine grained $l=1$ to the most coarse grained $l=L$. In analogy, a restaurant can be described as a sequence corresponding to the building, the neighbourhood, the city, etc. where it is located. At each level in the hierarchy, containers have comparable size. In the simplest form, this structure is a nested grid (see Supplementary Note~3).
\item The collection of these containers' attractivenesses, $\boldsymbol{a}$. The attractiveness $a(k_l)$ is the probability of visiting container $k_l$ among all containers encapsulated within $k_{l+1}$. Accordingly, $\sum_{k_l \in k_{l+1}}a(k_l)=1$.
\item The $L \times L$ matrix, $\boldsymbol{p}$ characterizing the probability of travelling at a certain level-distance. Each row in $\boldsymbol{p}$ is a probability vector that describes the probabilities $p(d,d_h)$ of traveling at level-distance $d$ when the level distance from home is $d_h$. Here, home is defined as the location with largest attractiveness at all levels. By level-distance we mean the so-called cophenetic distance\cite{saraccli2013comparison}: the highest level in the hierarchy one travels to reach the destination. It is necessary to maintain separate probability distributions for each level-distance from home. This is, for example, because traveling at the highest-level distance (e.g. intercontinentally) is unlikely when one is near home, but comparatively likely when on a different continent.
\end{itemize}
Under the container model, each transition is the result of a two-stage decision process.
First, the individual selects at which level-distance to travel.
Then, she selects a specific destination based on container attractiveness. Specifically, an individual located in $j$, chooses destination location $k$ with probability:
\[
P_{\boldsymbol{H}, \boldsymbol{a}, \boldsymbol{p}}( j \rightarrow k )= p_{d(j,k), d(j,h)} \frac{a(k_{d(j,k)})}{1 - a(j_{d(j,k)})} \prod_{l=1}^{d(j,k)-1} a(k_l)
\]
The first factor, $p_{d(j,k), d(j,h)}$, is the probability of traveling at level-distance $d(j,k)$.
The second factor $\frac{a(k_{d(j,k)})}{1 - a(j_{d(j,k)})}$ is the probability of choosing container $k_{d(j,k)}$.
Such a container is found at level $d(j,k)$ in the hierarchy and has attractiveness $a(k_{d(j,k)})$.
The renormalization $1-a(j_{d(j, k)})$ accounts for the fact that container $j_{d(j, k)}$ cannot be selected (this detail is not present in the main text for readability).
The third factor $\prod_{l=1}^{d(j,k)-1} a(k_l)$ is the probability of picking all other containers $k_l$ that encapsulate location $k$, for any level in the hierarchy lower than $d(j,k)$. Note that the way we model destination choice in a hierarchical fashion connects to the class of choice models called nested logit models.\cite{train2009discrete} The nested structure of the physical space in the container model relates, in part, to the organization of the transportation system\cite{marchetti1994anthropological,zahavi1978stability,miller2004tobler} and to the concrete structure of our built environment\cite{berry1967geography,batty2006hierarchy}. The importance of these contexts is also gradually being recognized in the human mobility literature, where early studies focused on large datasets, but did not consider the effect of contextual information, e.g. transportation type or other mobility characteristics, which can introduce heterogeneity.\cite{noulas2012tale, zhao2015explaining, gauvin2020gender, kraemer2020mapping, steele2017mapping, lu2016detecting, lu2012predictability, weiss2018global, althoff2017large}
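The transition probability above can be sketched in a few lines of code. The data layout here is an assumption made for illustration: a location is a tuple of container ids ordered from finest to coarsest, `attract[level][container_id]` stores $a(\cdot)$, and `p[d_h - 1][d - 1]` stores $p(d, d_h)$.

```python
import numpy as np


def level_distance(j, k):
    """Cophenetic level-distance: the highest level (1-indexed) at which
    the container sequences of the two locations differ."""
    diffs = [l + 1 for l, (jl, kl) in enumerate(zip(j, k)) if jl != kl]
    return max(diffs) if diffs else 0


def transition_prob(j, k, attract, p, d_home):
    """P(j -> k) for the container model, as in the equation above."""
    d = level_distance(j, k)
    # Renormalized choice of the destination container at level d.
    top = attract[d][k[d - 1]] / (1.0 - attract[d][j[d - 1]])
    # Walk down through the encapsulated containers at levels d-1, ..., 1.
    lower = np.prod([attract[l][k[l - 1]] for l in range(1, d)])
    return p[d_home - 1][d - 1] * top * lower


# Tiny worked example: two level-2 containers, each holding two locations.
attract = {1: {0: 0.6, 1: 0.4, 2: 0.7, 3: 0.3}, 2: {0: 0.5, 1: 0.5}}
p = [[0.8, 0.2], [0.5, 0.5]]  # rows indexed by d_h
probs = {k: transition_prob((0, 0), k, attract, p, d_home=1)
         for k in [(1, 0), (2, 1), (3, 1)]}
```

Because the attractivenesses are normalized within each parent container, the transition probabilities from any location sum to one, which the example above illustrates.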
\textbf{Generating traces. }We model transitions as a two-step decision process. Thus, we can simulate synthetic trajectories given a hierarchical description $\boldsymbol{H}$, container attractivenesses $\boldsymbol{a}$, and the probability matrix $\boldsymbol{p}$ (either designed or obtained by fitting the container model to an empirical trace).
We simulate the mobility of an agent by the following algorithm. To guide the reader we offer an \emph{Example} at each step, describing an agent travelling across a hierarchy where levels correspond to countries, cities, neighbourhoods, buildings and locations.
\begin{enumerate}
\item Initialize the agent in a random location, $j$, at level-distance $d(j, h)$ from the \textit{home} location. \emph{Example: the agent is initialized in location $j$ located in a different country than her home country.}
\item Select a level-distance $l^*$ that the agent should travel at, by drawing from the multinomial distribution, $p_{d(j,h)}$. \emph{Example: the agent chooses to travel at the city-distance.}
\item Select a destination, $k$:
\begin{itemize}
\item {At level $l^*$, select a container $k_{l^*}$, by drawing from the attractiveness distribution over the containers encapsulated in $j_{l^*+1}$. $j_{l^*}$ cannot be selected in this process, so $k_{l^*} \neq j_{l^*}$. \emph{Example: the agent chooses the destination city among other cities in the same country where she is currently located.}}
\item {At level $(l^*-1)$, select a container $k_{l^*-1}$ encapsulated within $k_{l^*}$, by drawing from the attractiveness distribution over containers in $k_{l^*}$. Continue this process until level 1 is reached. \emph{Example: the agent picks a neighborhood, then a building and then a location encapsulated within the destination city chosen in the previous step.}}
\end{itemize}
\item Repeat steps 2 and 3 for any desired number of displacements.
\end{enumerate}
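Steps 1--3 above can be sketched as follows for an illustrative two-level hierarchy; the container ids, attractiveness values, and the fixed home-distance are invented for the example and are not fitted parameters.

```python
import random

# Illustrative two-level hierarchy: level-2 containers 'A', 'B' hold locations.
children = {'root': ['A', 'B'], 'A': [0, 1], 'B': [2, 3]}
parent = {0: 'A', 1: 'A', 2: 'B', 3: 'B', 'A': 'root', 'B': 'root'}
attract = {'A': 0.5, 'B': 0.5, 0: 0.6, 1: 0.4, 2: 0.7, 3: 0.3}
p = {1: [0.8, 0.2], 2: [0.5, 0.5]}  # p(d | d_h) for d = 1, 2


def pick(candidates, exclude=None):
    """Draw a child container by attractiveness, optionally excluding one."""
    candidates = [c for c in candidates if c != exclude]
    return random.choices(candidates, weights=[attract[c] for c in candidates])[0]


def step(loc, d_home):
    d = random.choices([1, 2], weights=p[d_home])[0]   # step 2: level-distance
    if d == 1:                                          # move within the container
        return pick(children[parent[loc]], exclude=loc)
    container = pick(children['root'], exclude=parent[loc])  # step 3, top level
    return pick(children[container])                    # walk down to a location


random.seed(1)
trace = [0]
for _ in range(1000):
    trace.append(step(trace[-1], d_home=1))  # home-distance held fixed for brevity
```

In a full implementation the home-distance would be recomputed after every move; it is frozen here only to keep the sketch short.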
\textbf{Likelihood optimization.} We can fit the container model to an empirical trace and obtain the model parameters $\boldsymbol{H}$, $\boldsymbol{a}$, $\boldsymbol{p}$, using maximum likelihood estimation. We write the likelihood that a sequence of individual locations $T=[k(0),...,k(i),...k(n_T)]$ was generated by an instance of the container model specified by $\boldsymbol{H}$, $\boldsymbol{a}$, $\boldsymbol{p}$, as:
\[
\pazocal{L}(\boldsymbol{H}, \boldsymbol{a}, \boldsymbol{p} \mid T ) = \prod_{i=0}^{n_T-1} P_{\boldsymbol{H}, \boldsymbol{a}, \boldsymbol{p}}(k(i-1) \rightarrow k(i)),
\]
where $P_{\boldsymbol{H}, \boldsymbol{a}, \boldsymbol{p}}(k(i-1) \rightarrow k(i))$ is the probability of a transition to occur.
Unlike spatial clustering methodologies, this method allows us to identify `containers', structures that are compact in size, but also contain mobility behavior.
This optimization task, however, is computationally expensive, therefore we approach the problem according to the following heuristic.
First, we note that, when $n_T$ is large and $\boldsymbol{H}$ is chosen, $\boldsymbol{p}$ and $\boldsymbol{a}$ are trivial to estimate. In fact, for $n_T \to \infty$, element $p_{d,d_h}$ of matrix $\boldsymbol{p}$ equals the fraction of transitions covering a level-distance $d$ among all transitions starting at level-distance $d_h$ from home.
Similarly, for $n_T \to \infty$, the attractiveness of a container equals the fraction of times such container is selected among containers in the same parent container.
Thus, for long enough traces, we can estimate the maximum likelihood by exploring different choices of $\boldsymbol{H}$ only, where $\boldsymbol{H}$ is effectively a spatial hierarchical partition of individual locations.
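For a fixed hierarchy $\boldsymbol{H}$, the estimate of $\boldsymbol{p}$ described above reduces to normalized transition counts. A sketch with a toy transition log (the $(d_h, d)$ pairs are invented):

```python
from collections import Counter


def estimate_p(home_dists, level_dists, L):
    """p(d, d_h): fraction of transitions covering level-distance d among
    all transitions starting at level-distance d_h from home (normalized
    counts; both distances are 1-indexed)."""
    counts = Counter(zip(home_dists, level_dists))
    p = [[0.0] * L for _ in range(L)]
    for dh in range(1, L + 1):
        total = sum(counts[(dh, d)] for d in range(1, L + 1))
        for d in range(1, L + 1):
            p[dh - 1][d - 1] = counts[(dh, d)] / total if total else 0.0
    return p


# Toy transition log for a 2-level hierarchy: aligned (d_h, d) pairs.
home_dists = [1, 1, 1, 1, 2, 2]
level_dists = [1, 1, 1, 2, 2, 2]
p_hat = estimate_p(home_dists, level_dists, L=2)
```

The attractivenesses $\boldsymbol{a}$ are estimated the same way: each container's count is normalized over its siblings within the parent container.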
In order to ensure that clusters are compact, we choose $\boldsymbol{H}$ among the solutions of the complete linkage hierarchical clustering algorithm.\cite{everitt2011hierarchical}
First, we run the complete-linkage algorithm for the set of locations in sequence $T$.
The algorithm initializes each location as a separate cluster.
It then iteratively joins the two clusters whose union has the smallest diameter, defined as the maximum distance between two locations in a cluster, and stores the clustering solution.
It runs for $N$ iterations, where $N$ is the number of locations (and possible clustering solutions). In the final iteration all locations form one cluster.
The result of the complete-linkage algorithm can be visualized as a dendrogram and queried for clusters at any cut-distance (see Extended Data Figure~\ref{Extended_data_F3}a).
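The same construction is available off-the-shelf, e.g. in SciPy; the snippet below runs it on synthetic points (not the paper's data) and takes one cut of the resulting dendrogram.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage

rng = np.random.default_rng(2)
# Toy stop-locations: two well-separated spatial clusters of 20 points each.
pts = np.vstack([rng.normal(0.0, 1.0, (20, 2)),
                 rng.normal(20.0, 1.0, (20, 2))])

# Complete linkage iteratively joins the pair of clusters whose union has
# the smallest diameter (maximum pairwise distance).
Z = linkage(pts, method='complete')

# One cut of the dendrogram at distance 10 recovers the two clusters.
labels = fcluster(Z, t=10.0, criterion='distance')
```

In the likelihood optimization, each candidate cut-distance corresponds to one such `fcluster` call, and the cuts retained across levels define the nested partition $\boldsymbol{H}$.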
We then proceed to find the hierarchical partition $\boldsymbol{H^*}$ corresponding to the maximum likelihood $\pazocal{L}^*$.
Exhaustive search would require computing the likelihood for all possible partitions $\boldsymbol{H}$.
When we let $L$ range from $1$ to $N$, we arrive at the total number of possible partitions by the following logic:
For $L=1$, the dendrogram is cut zero times so there is one partition, which has only individual locations and no containers; for $L=2$ the dendrogram is cut once, so there are $N$ partitions because there are $N$ ways to cut the dendrogram; for $L=3$, there are $\frac{N(N-1)}{2}$ ways to cut the dendrogram two times, and so on. The set of all possible partitions then has size $\sum_{L=1}^{N} {\binom{N}{L-1}}$.
We define a heuristic to reduce the set of candidate partitions $\boldsymbol{H}$, by optimizing the likelihood \emph{one-level-at-a-time}, see Extended Data Figure~\ref{Extended_data_F3}b.
The algorithm works as follows:
First, we compute the likelihood $\pazocal{L}_{1}$ of $T$ in the case $L=1$, corresponding to having no containers.
Then, we test the $N$ possible partitions corresponding to cutting the dendrogram one time (i.e. $L=2$), by computing the corresponding likelihoods.
We find the cut $C_2$ of the dendrogram resulting in the maximum likelihood $\pazocal{L}_2$. If $\pazocal{L}_{1}$ is significantly larger than $\pazocal{L}_{2}$ (tested by bootstrapping), we assign $\pazocal{L}^*=\pazocal{L}_{1}$, conclude that $\boldsymbol{H^*}$ has only $1$ level (individual locations), and stop the algorithm.
Otherwise, we explore the set of partitions corresponding to two cuts of the dendrogram (i.e. $L=3$), where one of them is $C_2$, and find the cut $C_3$ that yields the maximum likelihood $\pazocal{L}_3$.
We compare $\pazocal{L}_2$ and $\pazocal{L}_3$, and stop the algorithm if $\pazocal{L}_2$ is significantly larger than $\pazocal{L}_3$.
We proceed for increasing values of $L$, until $L=N$ or there is no significant improvement in likelihood.
In the worst-case scenario, we explore $N!$ partitions.
We validate the \emph{one-level-at-a-time} algorithm against synthetic data (see Extended Data Figure~\ref{Extended_data_F3} and Supplementary Note~3). We find that the algorithm recovers the correct number of hierarchical levels $\sim95\%$ of the time.
The similarity between the correct and recovered hierarchical structure, measured as their cophenetic correlation\cite{saraccli2013comparison} has median value $1$ (the cophenetic correlation is the correlation between the cophenetic distance computed for all pairs of locations according to two different hierarchical descriptions, and thus is $1$ for identical descriptions). The median absolute error relative to the estimation of the matrix of probabilities $\boldsymbol{p}$ is $0.03$.
\subsection*{Model validation}
\textbf{Metrics.} In Figure~\ref{Figure2}, we compare synthetic and real traces by computing quantities characterizing individual trajectories. \\
The radius of gyration for an individual $u$ is defined as:
\[
r_g^{u} = \sqrt{\frac{1}{N} \sum_{n=0}^{N} (r_n^u - r_{CM,n}^u)^2},
\]
where $N$ is the total number of displacements ($50$ in our analysis), $r_n^u$ is the position of $u$ after $n$ displacements, $r_{CM, n}^u$ is its center of mass after $n$ displacements, defined as:
\[
r_{CM,n}^u = \frac{1}{n}\sum_{j=0}^{n} r_j^u.
\]
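A direct implementation of this definition, with the running centre of mass $r_{CM,n}^u$:

```python
import numpy as np


def radius_of_gyration(r):
    """r: (N, 2) array of positions after each displacement. The centre of
    mass r_CM,n is the running mean over the first n positions, as in the
    definition above."""
    cms = np.cumsum(r, axis=0) / np.arange(1, len(r) + 1)[:, None]
    return np.sqrt(((r - cms) ** 2).sum(axis=1).mean())


# Worked example: a commuter bouncing between two points 2 units apart.
traj = np.array([[0.0, 0.0], [2.0, 0.0], [0.0, 0.0], [2.0, 0.0]])
rg = radius_of_gyration(traj)  # sqrt((0 + 1 + 4/9 + 1) / 4) = sqrt(11/18)
```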
The uncorrelated entropy $S_{unc}$ is defined as:
\[
S_{unc} = - \sum_{i=1}^{N_L} P(i) \log_2 P(i),
\]
where $P(i)$ is the probability of visiting location $i$, and $N_L$ is the total number of locations. The temporal entropy $S_{temp}$, is defined as
\[
S_{temp} = - \sum_{T_{i'} \in T_{i}} P(T_{i'}) \log_2 P(T_{i'}),
\]
where $P(T_{i'})$ is the probability of finding a particular time-ordered subsequence $T_{i'}$ in the trajectory $T_i$. We estimate $S_{temp}$ using the method described by Sekara \textit{et al}.\cite{sekara2016fundamental}
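$S_{unc}$ follows directly from visit frequencies, as the short sketch below shows; $S_{temp}$ additionally requires the sequence-based estimator cited above and is omitted here.

```python
from collections import Counter

import numpy as np


def uncorrelated_entropy(trace):
    """S_unc: Shannon entropy (in bits) of the visitation frequencies,
    ignoring the order in which locations were visited."""
    freqs = np.array(list(Counter(trace).values()), dtype=float)
    prob = freqs / freqs.sum()
    return float(-(prob * np.log2(prob)).sum())


s_even = uncorrelated_entropy(['home', 'work', 'home', 'work'])  # 1 bit
s_skew = uncorrelated_entropy(['home'] * 7 + ['work'])           # < 1 bit
```

As expected, a trace split evenly between two locations attains the maximum of one bit, while a skewed trace has lower uncorrelated entropy.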
\textbf{EPR model. }We generate EPR synthetic traces as follows.
First, we fit the model parameters~\cite{song2014prediction} and determine, for each individual, the number of visited locations $S$ as well as the number of visits $f_i$ per location $i$ using traces with one-year duration. Then, we generate traces using the model described in Song \textit{et al}.\cite{song2014prediction}
At each new displacement, an individual explores a new place with probability $\rho S^{-\gamma}$ and exploits a previously known location with the complementary probability.
In the first case, she chooses a place at distance $\Delta r$, extracted from a power law distribution $P(\Delta r)\sim \Delta r ^{-\beta}$.
In the latter case, she chooses a previously visited location $i$ with probability proportional to the number of visits $f_i$. See Supplementary Note~4 for further details and the implementation of other models.
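The exploration/preferential-return loop described above can be sketched as follows. The power-law jump lengths are omitted (locations are abstract ids), and the parameter values are illustrative, not the fitted ones.

```python
import random


def epr_trace(n_steps, rho=0.6, gamma=0.21, seed=3):
    """EPR sketch: explore a new location with probability rho * S**-gamma,
    otherwise revisit a known location with probability proportional to its
    visit count (preferential return)."""
    rng = random.Random(seed)
    visits = {0: 1}            # location id -> number of visits
    trace, next_id = [0], 1
    for _ in range(n_steps):
        S = len(visits)
        if rng.random() < rho * S ** (-gamma):   # exploration
            visits[next_id] = 1
            trace.append(next_id)
            next_id += 1
        else:                                     # preferential return
            locs = list(visits)
            loc = rng.choices(locs, weights=[visits[l] for l in locs])[0]
            visits[loc] += 1
            trace.append(loc)
    return trace, visits


trace, visits = epr_trace(2000)
```

Preferential return concentrates visits on a few locations while exploration keeps adding new ones at a slowly decreasing rate, reproducing the sublinear growth of the number of distinct locations.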
\subsection*{Declarations}
\textbf{Data Availability.} Derived data that support the findings of this study are available in DTU Data with the identifier \href{https://doi.org/10.11583/DTU.12941993.v1}{https://doi.org/10.11583/DTU.12941993.v1}. Source data for Figures 1, 2 and 3 are provided with the paper. Additional data related to this paper may be requested from the authors. Raw data for dataset D1 are not publicly available to preserve individuals' privacy under the European General Data Protection Regulation. Raw data for dataset D2 are not publicly available due to privacy considerations, but are available to researchers who meet the criteria for access to confidential data, sign a confidentiality agreement, and agree to work under supervision in Copenhagen.
Please direct your queries to the corresponding author.
\textbf{Code Availability.} \\
Code is available at \href{https://github.com/lalessan/scales\_human\_mobility/}{https://github.com/lalessan/scales\_human\_mobility/}\\
An interactive illustration of the \emph{container model} generative process can be found at \\ \href{https://observablehq.com/@ulfaslak/a-model-for-generating-multiscale-mobility-traces}{https://observablehq.com/@ulfaslak/a-model-for-generating-multiscale-mobility-traces}.
For a visual demonstration of how a scale-free distribution can emerge from the aggregation of other distributions, refer to \href{https://observablehq.com/@ulfaslak/a-visual-exploration-of-how-a-power-law-can-emerge-from-aggre}{https://observablehq.com/@ulfaslak/a-visual-exploration-of-how-a-power-law-can-emerge-from-aggre}.
\textbf{Acknowledgements.} We thank Filippo Simini and Lars Kai Hansen for providing insightful comments. We thank Marta C. Gonzalez for help with datasets.
\textbf{Author Contributions.} LA, UA and SL designed the study and the model. LA and UA performed the analyses and implemented the model. LA, UA and SL analysed the results and wrote the paper.
\textbf{Corresponding Author.} Correspondence and requests for materials should be addressed to Sune Lehmann~(email: sljo@dtu.dk).
\end{small}
\bibliographystyle{naturemag}
1,477,468,751,048 | arxiv | \section{Introduction} \label{sec1}
When we have two independent samples and would like to detect whether there is a difference between the means of the two populations, we apply parametric or non-parametric tests. If the samples come from a normal distribution, the $t$ test is the right choice.
In this paper we introduce the cross-variance concept and present a new approach, based on the cross-variance, to detect whether there is a difference between the means of two samples.
This paper is organized as follows. In Section~\ref{sec2} we describe the cross-variance concept and the proposed test, including the new probability density function. Section~\ref{sec3} presents simulation results on the power and the type I error of the proposed test in comparison to the $t$ test, together with some examples. Finally, Section~\ref{sec4} gives a summary and remarks.
\section{An introduction to the cross variance concept and the proposed test} \label{sec2}
\subsection{The cross variance concept} \label{sec21}
\begin{definition}
Suppose we have two independent samples, $X_{i}$ and $Y_{j}$; $i=1,2,...,m$ and $j=1,2,...,n$. Their sample mean and variance are denoted by
$\overline{X}, \overline{Y}$ and $V_{x}, V_{y}$. Let
$$ V_{x}^{*} = \frac{{\sum \limits_{i=1}^{m}(X_{i} -\overline{Y})^{2}}}{m-1} \textrm{ and } V_{y}^{*} = \frac{ {\sum \limits_{i=1}^{n}(Y_{i} -\overline{X})^{2}}}{n-1}$$
be the cross-variances of samples $X$ and $Y$, respectively. The sample cross-variance of groups $X$ and $Y$ is defined as
\begin{align} \label{eq21}
T&= \frac{V_{x}^{a}+V_{y}^{a}}{2},
\end{align}
where $\quad V_{x}^{a} = \frac{V_{x}}{V_{x}^{*}}, \quad V_{y}^{a} = \frac{V_{y}}{V_{y}^{*}}.$
\end{definition}
Clearly
\begin{align}
V_{x}^{*} &= \frac{\sum_{i=1}^{m}(X_{i} -\overline{Y})^{2}}{m-1} \nonumber \\
&= V_{x}+\frac{m(\overline{Y}-\overline{X})^2}{m-1}, \label{eq22}
\end{align}
and
\begin{align}
V_{y}^{*} &=\frac{ \sum_{i=1}^{n}(Y_{i} -\overline{X})^{2}}{n-1} \nonumber \\
&= V_{y}+\frac{n(\overline{Y}-\overline{X})^2}{n-1}. \nonumber
\end{align}
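This decomposition of the cross-variance can be checked numerically; the sample sizes and distributions below are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(5)
x, y = rng.normal(0, 1, 13), rng.normal(1, 2, 7)  # m = 13, n = 7
m = len(x)

# Definition of the cross-variance of X with respect to the mean of Y.
vx_star = ((x - y.mean()) ** 2).sum() / (m - 1)

# Expanded form: V_x plus the squared mean gap scaled by m / (m - 1).
expanded = x.var(ddof=1) + m * (y.mean() - x.mean()) ** 2 / (m - 1)
```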
Thus $T= \frac{V_{x}^{a}+V_{y}^{a}}{2}$ can be re-written as
\begin{align}
T&= \frac{1}{2} \left[\frac{V_{x}}{V_{x} +\frac{m(\overline{Y}-\overline{X})^{2}}{m-1}}+\frac{V_{y}}{V_{y}+\frac{n(\overline{Y}-\overline{X})^{2}}{n-1}}\right] \nonumber \\
&= \frac{V_{x}}{2V_{x} +2\frac{m(\overline{Y}-\overline{X})^{2}}{m-1}}+\frac{V_{y}}{2V_{y}+2\frac{n(\overline{Y}-\overline{X})^{2}}{n-1}} \nonumber \\
&= Z_{1} + Z_{2} \label{eq23}
\end{align}
In what follows, we assume that
\begin{enumerate}
\item{the sample sizes are equal}
\item{${X_{i}}$ and ${Y_{i}}$ are i.i.d. normally distributed with unknown means and known variances $\sigma_{x}^{2}$, $\sigma_{y}^{2}$.}
\end{enumerate}
It follows that
\begin{equation}
\frac{(n-1)V_{x} }{\sigma_{x}^{2}}\sim \chi^{2}_{(n-1)}, \frac{(n-1)V_{y} }{\sigma_{y}^{2}} \sim \chi^{2}_{(n-1)}, \textrm{ and } \frac{n(\overline{Y}-\overline{X})^{2}}{\sigma_{y}^{2} + \sigma_{x}^{2}} \sim \chi^{2}_{(1)} \nonumber
\end{equation}
Therefore Equation (\ref{eq23}) can be written as follows
\begin{align} \label{eq24}
T&= Z_{1}+Z_{2} \\
&= \frac{\frac{(n-1)V_{x} }{\sigma_{x}^{2}}}{2\frac{(n-1)V_{x} }{\sigma_{x}^{2}} +2\frac{(\sigma_{x}^{2}+\sigma_{y}^{2})}{\sigma_{x}^{2}}\frac{n}{(\sigma_{x}^{2}+\sigma_{y}^{2})}(\overline{Y}-\overline{X})^{2}}+\frac{\frac{(n-1)V_{y} }{\sigma_{y}^{2}}}{2\frac{(n-1)V_{y} }{\sigma_{y}^{2}} +2\frac{(\sigma_{x}^{2}+\sigma_{y}^{2})}{\sigma_{y}^{2}}\frac{n}{(\sigma_{x}^{2}+\sigma_{y}^{2})}(\overline{Y}-\overline{X})^{2}} \nonumber
\end{align}
where
\begin{align*}
Z_{1}&=\frac{\frac{(n-1)V_{x} }{\sigma_{x}^{2}}}{\frac{2(n-1)V_{x} }{\sigma_{x}^{2}} +\frac{2(\sigma_{x}^{2}+\sigma_{y}^{2})}{\sigma_{x}^{2}}\frac{n}{(\sigma_{x}^{2}+\sigma_{y}^{2})}(\overline{Y}-\overline{X})^{2}}= \frac{U}{2U+2abV},
\end{align*}
and
\begin{align*}
Z_{2}&=\frac{\frac{(n-1)V_{y} }{\sigma_{y}^{2}}}{\frac{2(n-1)V_{y} }{\sigma_{y}^{2}} +\frac{2(\sigma_{x}^{2}+\sigma_{y}^{2})}{\sigma_{y}^{2}}\frac{n}{(\sigma_{x}^{2}+\sigma_{y}^{2})}(\overline{Y}-\overline{X})^{2}}= \frac{S}{2S+2bcV}
\end{align*}
with
$$
U=\frac{(n-1)V_{x} }{\sigma_{x}^{2}}, \quad S=\frac{(n-1)V_{y} }{\sigma_{y}^{2}}, \quad V=\frac{n(\overline{Y}-\overline{X})^{2}}{(\sigma_{x}^{2}+\sigma_{y}^{2})}
$$
and
$$a=\frac{1}{\sigma_{x}^{2}}, \quad b=(\sigma_{x}^{2}+\sigma_{y}^{2}), \quad c=\frac{1}{\sigma_{y}^{2}}.$$
Hence Equation (\ref{eq24}) can be written as
\begin{align} \label{eq25}
T &=Z_{1}+Z_{2}=\frac{U}{2U+2abV} + \frac{S}{2S+2bcV}
\end{align}
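For concreteness, the statistic $T$ of Equations (\ref{eq23}) and (\ref{eq25}) can be computed directly from two equal-sized samples. The following Python sketch is our illustration (not part of the original derivation); NumPy is assumed and the function name is ours.

```python
import numpy as np

def cross_variance_T(x, y):
    """Cross-variance statistic T = Z1 + Z2, assuming equal sample sizes."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    assert len(y) == n, "equal sample sizes are assumed"
    vx = np.var(x, ddof=1)   # sample variance V_x
    vy = np.var(y, ddof=1)   # sample variance V_y
    d2 = n * (np.mean(y) - np.mean(x)) ** 2 / (n - 1)
    return vx / (2 * vx + 2 * d2) + vy / (2 * vy + 2 * d2)
```

When the two sample means coincide, $T=1$; a large mean difference drives $T$ toward $0$.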
The distribution of $T$ in Equation (\ref{eq25}) can be computed by taking into account the facts that
\begin{enumerate}
\item{$U, V$ and $S$ are independent}
\item{$Z_{1}$ and $Z_{2}$ are dependent}
\end{enumerate}
In this paper we treat the first case, using the fact that $U, V$ and $S$ are independent.
\subsection{The proposed test} \label{sec22}
Under the normality assumption on $X$ and $Y$, the random variables $U, V$ and $S$ are independent, where $V$ is $\chi^{2}_{(1)}$ distributed and $U, S$ are $\chi^{2}_{(n-1)}$ distributed. From Equation (\ref{eq25}), let $V=Z_{3}$; then $U= \frac{2abZ_{1}Z_{3}}{1-2Z_{1}}$ and $S= \frac{2bcZ_{2}Z_{3}}{1-2Z_{2}}$. The Jacobian of this transformation is
$|J|=\begin{vmatrix}
\frac{dU}{dZ_{1}} & \frac{dU}{dZ_{2}} & \frac{dU}{dZ_{3}} \\
\frac{dS}{dZ_{1}} & \frac{dS}{dZ_{2}} & \frac{dS}{dZ_{3}} \\
\frac{dV}{dZ_{1}} & \frac{dV}{dZ_{2}} & \frac{dV}{dZ_{3}}
\end{vmatrix} =\begin{vmatrix}
\frac{2abZ_{3}}{\left(1-2Z_{1}\right)^{2}} & 0 & \frac{2abZ_{1}}{\left(1-2Z_{1}\right)} \\
0 & \frac{2bcZ_{3}}{\left(1-2Z_{2}\right)^{2}} & \frac{2bcZ_{2}}{\left(1-2Z_{2}\right) } \\
0 & 0 & 1
\end{vmatrix}=\frac{4ab^{2}cZ_{3}^{2}}{\left((1-2Z_{1})(1-2Z_{2})\right)^{2}}$
The joint probability density function of $Z_{1}, Z_{2}, Z_{3}$ is
\begin{align} \label{eq26}
&f_{Z_{1}, Z_{2}, Z_{3}}(z_{1}, z_{2}, z_{3}) \nonumber \\
&=f_{U}\left(u=\frac{2abz_{1}z_{3}}{1-2z_{1}}\right)f_{S}\left(s=\frac{2bcz_{2}z_{3}}{1-2z_{2}} \right)f_{V}(v=z_{3})|J| \nonumber \\
&= \frac{\left(4ab^{2}c \right)^{\frac{n-1}{2}}}{2^{n-\frac{1}{2}}\Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{n-1}{2}\right)^{2}} \frac{z_{1}^{\frac{n-1}{2}-1} z_{2}^{\frac{n-1}{2}-1} z_{3}^{\frac{2n-1}{2}-1} e^{-z_{3}\left(\frac{1}{2}+\frac{abz_{1}}{1-2z_{1}}+\frac{cbz_{2}}{1-2z_{2}} \right)}}{\left(1-2z_{1} \right)^{\frac{n+1}{2}} \left( 1-2z_{2}\right)^{\frac{n+1}{2}}} \nonumber \\
\end{align}
The joint density function of $Z_{1}, Z_{2}$ is the marginal density obtained from Equation (\ref{eq26}) by integrating out $z_{3}$. It is computed as follows:
\begin{align} \label{eq27}
&f_{Z_{1}, Z_{2}}(z_{1}, z_{2}) \nonumber \\
&= \int \limits_{0}^{\infty} f_{Z_{1}, Z_{2}, Z_{3}}(z_{1}, z_{2}, z_{3}) dz_{3} \nonumber \\
&= \frac{\left(4ab^{2}c \right)^{\frac{n-1}{2}}}{2^{n-\frac{1}{2}}\Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{n-1}{2}\right)^{2}} \frac{z_{1}^{\frac{n-1}{2}-1} z_{2}^{\frac{n-1}{2}-1} }{\left(1-2z_{1} \right)^{\frac{n+1}{2}} \left( 1-2z_{2}\right)^{\frac{n+1}{2}}} \int \limits_{0}^{\infty} z_{3}^{\frac{2n-1}{2}-1} e^{-z_{3}\left(\frac{1}{2}+\frac{abz_{1}}{1-2z_{1}}+\frac{bcz_{2}}{1-2z_{2}} \right)} dz_{3} \nonumber \\
&= \frac{\left(4ab^{2}c \right)^{\frac{n-1}{2}}\Gamma \left (n-\frac{1}{2} \right) z_{1}^{\frac{n-1}{2}-1} z_{2}^{\frac{n-1}{2}-1} }{2^{n-\frac{1}{2}} \Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{n-1}{2}\right)^{2} \left(1-2z_{1} \right)^{\frac{n+1}{2}} \left( 1-2z_{2}\right)^{\frac{n+1}{2}} \left(\frac{1}{2}+ \frac{abz_{1}}{(1-2z_{1})}+\frac{bcz_{2}}{(1-2z_{2})}\right)^{n-\frac{1}{2}}} \nonumber \\
\end{align}
Therefore the cumulative distribution function (cdf) of $T$, $F_{T}(t)=P(T \le t)$, is computed as follows
\begin{align} \label{eq28}
&F_{T}(t) \nonumber \\
&=P(T \le t) \nonumber \\
&=\int \limits_{-\infty}^{\infty} \int \limits_{-\infty}^{t-z_{1}}f_{Z_{1},Z_{2}} (z_{1},z_{2}) dz_{2} dz_{1} \nonumber \\
&=\int \limits_{0}^{t} \int \limits_{0}^{t-z_{1}} \frac{\left(4ab^{2}c \right)^{\frac{n-1}{2}}\Gamma \left (n-\frac{1}{2} \right) z_{1}^{\frac{n-1}{2}-1} z_{2}^{\frac{n-1}{2}-1} dz_{2} dz_{1}}{2^{n-\frac{1}{2}} \Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{n-1}{2}\right)^{2} \left((1-2z_{1} ) ( 1-2z_{2})\right)^{\frac{n+1}{2}} \left(\frac{1}{2}+ \frac{abz_{1}}{(1-2z_{1})}+\frac{bcz_{2}}{(1-2z_{2})}\right)^{n-\frac{1}{2}}} \nonumber \\
&=\int \limits_{0}^{t} \frac{\left(4ab^{2}c \right)^{\frac{n-1}{2}}\Gamma \left (n-\frac{1}{2} \right) z_{1}^{\frac{n-1}{2}-1}}{2^{n-\frac{1}{2}} \Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{n-1}{2}\right)^{2} \left(1-2z_{1} \right)^{\frac{n+1}{2}}}\textbf{B} dz_{1}
\end{align}
where
\begin{align} \label{eq29}
\textbf{B}&=\int \limits_{0}^{t-z_{1}} \frac{z_{2}^{\frac{n-1}{2}-1} \left( 1-2z_{2}\right)^{\frac{n+1}{2}}dz_{2}}{\left(\frac{1}{2}+ \frac{abz_{1}}{(1-2z_{1})}+\frac{bcz_{2}}{(1-2z_{2})}\right)^{n-\frac{1}{2}}}
\end{align}
To compute the integral in Equation (\ref{eq29}), we first rewrite the base of $\left(\frac{1}{2}+ \frac{abz_{1}}{(1-2z_{1})}+\frac{bcz_{2}}{(1-2z_{2})}\right)^{n-\frac{1}{2}}$ as the product $\left(1+\frac{2(bc-(1-2(1-ab-bc)z_{1}))}{1-2(1-ab)z_{1}} z_{2} \right) \left( \frac{1-2(1-ab)z_{1}}{2(1-2z_{1})}\right)$. Therefore Equation (\ref{eq29}) becomes
\begin{align} \label{eq210}
\frac{\left(2(1-2z_{1})\right)^{n-\frac{1}{2}}}{\left( 1-2(1-ab)z_{1}\right)^{n-\frac{1}{2}}} \int \limits_{0}^{t-z_{1}} \frac{z_{2}^{\frac{n-1}{2}-1} \left( 1-2z_{2}\right)^{\frac{n+1}{2}} dz_{2}} {\left( 1+\frac{2(bc-(1-2(1-ab-bc)z_{1}))}{1-2(1-ab)z_{1}} z_{2} \right)^{n-\frac{1}{2}}}
\end{align}
The integral $\int \limits_{0}^{t-z_{1}} \frac{z_{2}^{\frac{n-1}{2}-1} \left( 1-2z_{2}\right)^{\frac{n+1}{2}} dz_{2}} {\left( 1+\frac{2(bc-(1-2(1-ab-bc)z_{1}))}{1-2(1-ab)z_{1}} z_{2} \right)^{n-\frac{1}{2}}} $ is written as
\begin{align} \label{eq211}
\int \limits_{0}^{t-z_{1}} z_{2}^{\frac{n-1}{2}-1} \left( 1-2z_{2}\right)^{\frac{n+1}{2}} \left( 1+\frac{2(bc-(1-2(1-ab-bc)z_{1}))}{1-2(1-ab)z_{1}} z_{2} \right)^{-\left(n-\frac{1}{2}\right)}dz_{2}
\end{align}
By the binomial expansion, Equation (\ref{eq211}) can be represented as
\begin{align} \label{eq212}
&=\sum \limits_{k=0}^{\infty} \sum \limits_{l=0}^{\infty} \Bigg[ \frac{2^{k+l} (-1)^{k+l} \dbinom{\frac{n+1}{2}}{k}\dbinom{n-\frac{3}{2} +l}{l}(bc-1+2(1-ab-bc)z_{1})^{l}}{\left( 1-2(1-ab)z_{1}\right)^{l}} \times \nonumber \\
& \int \limits_{0}^{t-z_{1}} z_{2}^{\frac{n-1}{2}+k+l}dz_{2} \Bigg] \nonumber \\
&=\sum \limits_{k=0}^{\infty}\sum_{l=0}^{\infty} \Bigg[ \frac{2^{k+l} (-1)^{k+l} \dbinom{\frac{n+1}{2}}{k}\dbinom{n-\frac{3}{2} +l}{l}(bc-1+2(1-ab-bc)z_{1})^{l}}{\left( 1-2(1-ab)z_{1}\right)^{l}} \times \nonumber \\
&\frac{\left(t-z_{1}\right)^{\frac{n+1}{2}+k+l}} {\frac{n+1}{2}+k+l} \Bigg]
\end{align}
Therefore,
\begin{align} \label{eq213}
\textbf{B}&=\frac{\left(2(1-2z_{1})\right)^{n-\frac{1}{2}}}{\left( 1-2(1-ab)z_{1}\right)^{n-\frac{1}{2}}} \times \nonumber \\
& \Bigg[ \sum \limits_{k=0}^{\infty}\sum_{l=0}^{\infty} \Bigg[ \frac{2^{k+l} (-1)^{k+l} \dbinom{\frac{n+1}{2}}{k}\dbinom{n-\frac{3}{2} +l}{l}(bc-1+2(1-ab-bc)z_{1})^{l}}{\left( 1-2(1-ab)z_{1}\right)^{l}}\times \nonumber \\
& \frac{\left(t-z_{1}\right)^{\frac{n+1}{2}+k+l}} {\frac{n+1}{2}+k+l} \Bigg] \Bigg]
\end{align}
and \\
\begin{align} \label{eq214}
F_{T} (t)&=\frac{\left(4ab^{2}c \right)^{\frac{n-1}{2}}\Gamma \left (n-\frac{1}{2} \right)}{\Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{n-1}{2}\right)^{2}}\times \nonumber \\
&\Bigg[\sum \limits_{k=0}^{\infty}\sum_{l=0}^{\infty} \frac{(-2)^{k} \dbinom{\frac{n+1}{2}}{k}\dbinom{n-\frac{3}{2} +l}{l}(-2(bc-1))^{l}t^{\frac{n+1}{2}+k+l} } {\frac{n+1}{2}+k+l}\textbf{G}\Bigg]
\end{align}
where
\begin{align} \label{eq215}
\textbf{G}&= \int \limits_{0}^{t} \Bigg[z_{1}^{\frac{n-1}{2}-1} \left( 1-\frac{z_{1}}{t}\right)^{\frac{n+1}{2}+k+l} (1-2z_{1})^{\frac{n}{2}-1} \left( 1-2(1-ab)z_{1}\right)^{-(n+l-\frac{1}{2})}\times \nonumber \\
& \left(1+\frac{2(1-ab-bc)z_{1}}{bc-1}\right)^{l} dz_{1} \Bigg]
\end{align}
Again, by the binomial expansion, Equation (\ref{eq215}) can be written as follows
\begin{align} \label{eq216}
\textbf{G}&=\sum \limits_{m=0}^{\infty} \sum_{p=0}^{\infty} \sum_{q=0}^{\infty} \Bigg[ \dbinom{\frac{n}{2}}{m} \dbinom{l}{p} \dbinom{n+l+q-\frac{3}{2}}{q} (-2)^{m} \left(\frac{2(1-ab-bc)}{bc-1} \right)^{p} (-2(1-ab))^{q} \times \nonumber \\
&\int \limits_{0}^{t} z_{1}^{\frac{n-1}{2+m+p+q}-1} \left( 1-\frac{z_{1}}{t}\right)^{\frac{n-1}{2}+k+l}dz_{1} \Bigg] \nonumber \\
&=\sum \limits_{m=0}^{\infty} \sum_{p=0}^{\infty} \sum_{q=0}^{\infty} \Bigg[ \dbinom{\frac{n}{2}}{m} \dbinom{l}{p} \dbinom{n+l+q-\frac{3}{2}}{q} (-2)^{m} \left(\frac{2(1-ab-bc)}{bc-1} \right)^{p} (-2(1-ab))^{q} \times \nonumber \\
&\frac {B\left ( \frac{n-1}{2}+m+p+q,\frac{n+1}{2}+k+l \right)} {t^{\frac{n-1}{2}+m+p+q} } \Bigg]
\end{align}
Therefore $F_{T}(t)$ is
\begin{align} \label{eq217}
F_{T} (t)&=\frac{\left(4ab^{2}c \right)^{\frac{n-1}{2}}\Gamma \left (n-\frac{1}{2} \right)}{\Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{n-1}{2}\right)^{2}} \qquad \times \nonumber \\
&\Bigg[ \sum \limits_{k=0}^{\infty}\sum_{l=0}^{\infty} \sum_{m=0}^{\infty} \sum_{p=0}^{\infty} \sum_{q=0}^{\infty} \Bigg[ \frac{2^{k} \dbinom{\frac{n-1}{2}+k}{k}\dbinom{n-\frac{3}{2} +l}{l}(-2(bc-1))^{l}t^{\frac{n-1}{2}+k+l} } {\frac{n-1}{2}+k+l} \qquad \times \nonumber \\
&\frac{\dbinom{\frac{n}{2}}{m} \dbinom{l}{p} \dbinom{n+l+q-\frac{3}{2}}{q} (-2)^{m} \left(\frac{2(1-ab-bc)}{bc-1} \right)^{p} (-2(1-ab))^{q} } {t^{\frac{n-1}{2}+m+p+q}} \qquad \times \nonumber \\
&B \left(\frac{n-1}{2}+m+p+q,\frac{n+1}{2}+k+l \right) \Bigg] \Bigg]\nonumber \\
&=\frac{\left(4ab^{2}c \right)^{\frac{n-1}{2}}\Gamma \left (n-\frac{1}{2} \right)}{\Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{n-1}{2}\right)^{2}} \qquad \times \nonumber \\
&\Bigg[ \sum \limits_{k=0}^{\infty}\sum_{l=0}^{\infty} \sum_{m=0}^{\infty} \sum_{p=0}^{\infty} \sum_{q=0}^{\infty} \Bigg[ \frac{2^{k} \dbinom{\frac{n-1}{2}+k}{k}\dbinom{n-\frac{3}{2} +l}{l}(-2(bc-1))^{l}t^{k+l-m-p-q} } {\frac{n-1}{2}+k+l} \qquad \times \nonumber \\
&\dbinom{\frac{n}{2}}{m} \dbinom{l}{p} \dbinom{n+l+q-\frac{3}{2}}{q} (-2)^{m} \left(\frac{2(1-ab-bc)}{bc-1} \right)^{p} (-2(1-ab))^{q} \qquad \times \nonumber \\
&B \left(\frac{n-1}{2}+m+p+q,\frac{n+1}{2}+k+l \right) \Bigg] \Bigg]
\end{align}
Furthermore, from Equation (\ref{eq217}), the probability density function (pdf) of $T$ is
\begin{align} \label{eq218}
f_{T} (t)&=\frac{\left(4ab^{2}c \right)^{\frac{n-1}{2}}\Gamma \left (n-\frac{1}{2} \right)}{\Gamma \left( \frac{1}{2}\right) \Gamma \left( \frac{n-1}{2}\right)^{2}} \sum \limits_{k=0}^{\infty}\sum_{l=0}^{\infty} \sum_{m=0}^{\infty} \sum_{p=0}^{\infty} \sum_{q=0}^{\infty} \frac{2^{k} \dbinom{\frac{n-1}{2}+k}{k}\dbinom{n-\frac{3}{2} +l}{l}} {\frac{n-1}{2}+k+l} \quad \times \nonumber \\
&(k+l-m-p-q) t^{k+l-m-p-q} (-2(bc-1))^{l} \dbinom{\frac{n}{2}}{m} \dbinom{l}{p} \dbinom{n+l+q-\frac{3}{2}}{q} (-2)^{m} \times \nonumber \\
& \left(\frac{2(1-ab-bc)}{bc-1} \right)^{p} (-2(1-ab))^{q} B \left(\frac{n-1}{2}+m+p+q,\frac{n+1}{2}+k+l \right)
\end{align}
The pdf of $T$ can also be computed as follows:
\begin{align} \label{eq219}
f_{T} (t)&=\int \limits_{-\infty}^{t} f_{Z_{1},Z_{2}} (z_{1},t-z_{1}) dz_{1}\nonumber \\
&=\int \limits_{-\infty}^{t} f_{Z_{1},Z_{2}} (t-z_{2},z_{2}) dz_{2}
\end{align}
We now have the pdf and cdf of $T$, from which the value of the $T$ statistic for hypothesis testing can be computed: the null hypothesis of equality of the means of two independent samples is rejected if $T \le t_{0}$, or equivalently if $P \left(T \le t_{0} \right)=F_{T}(t_{0}) \le \alpha$.
The computation of $F_{T}(t_{0})$ via Equation (\ref{eq217}) involves five nested summations and hence is not simple. The computation simplifies considerably in the case $\sigma_{x}^{2} = \sigma_{y}^{2}$, which is described next.
\subsection{Special case of the proposed test} \label{sec23}
In the case $\sigma_{x}^{2} = \sigma_{y}^{2}$, we estimate $V_{x}$ and $V_{y}$ by the least squares estimator of the pooled variance, $S^{2}_{p} = \frac{V_{x}+V_{y}}{2}$.
Using this estimator, Equations (\ref{eq23}) and (\ref{eq25}) become
\begin{subequations}
\begin{align}
T^{*} &= \frac{\frac{V_{x}+V_{y}}{2}}{\Big[\frac{V_{x}+V_{y}}{2} + \frac{n\left(\overline{Y}-\overline{X}\right)^{2}}{n-1}\Big]} \label{eq220a}\\
&= \frac{U^{*}}{U^{*}+4V^{*}} \label{eq220b}
\end{align}
\end{subequations}
where $U^{*}=\frac{(n-1)\left(V_{x}+V_{y}\right)}{\sigma_{x}^{2}}$ and $V^{*}=\frac{n\left(\overline{Y}-\overline{X}\right)^{2}}{2\sigma_{x}^{2}}$.
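A minimal Python sketch of the special-case statistic $T^{*}$ of Equation (\ref{eq220a}) (our illustration; NumPy assumed, function name ours):

```python
import numpy as np

def t_star(x, y):
    """T* of Eq. (220a): pooled variance / (pooled variance + scaled mean gap)."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    n = len(x)
    sp2 = (np.var(x, ddof=1) + np.var(y, ddof=1)) / 2.0   # pooled variance S_p^2
    d2 = n * (np.mean(y) - np.mean(x)) ** 2 / (n - 1)
    return sp2 / (sp2 + d2)
```

As with $T$, equal sample means give $t^{*}=1$, and large mean differences push $t^{*}$ toward $0$.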
The pdf of $T^{*}$ is derived from the distribution of a ratio of linear combinations of chi-square random variables \cite{Pro94}. First, let $Y=1+4 \frac{V^{*}}{U^{*}}$, where $V^{*}$ is $\chi^{2}_{(1)}$ distributed and $U^{*}$ is $\chi^{2}_{2(n-1)}$ distributed. Second, the pdf of $T^{*}$ is computed by taking $T^{*} = \frac{1}{Y}$.
In the following computation, the chi-square distributions are represented as Gamma distributions: $V^{*}$ is Gamma distributed with parameters $\alpha_{1} =\frac{1}{2}$ and $\beta_{1}=2$, $U^{*}$ is Gamma distributed with parameters $\alpha_{2} =n-1$ and $\beta_{2}=2$, and $U^{*}$ and $V^{*}$ are independent.
Let $G=\frac{V^{*}}{U^{*}}$ and put $U^{*}=H$, so that $V^{*}= GH$. The Jacobian of this change of variables is $h$. Because $V^{*}$ and $U^{*}$ are independent, the joint probability density function of $G$ and $H$ is
\begin{flalign}\label{eq221}
f_{G,H}(g,h)=f_{V^{*},U^{*}}(gh,h).h
\end{flalign}
where
\begin{align}
f_{V^{*},U^{*}}(gh,h)&=f_{V^{*}}(gh).f_{U^{*}}(h), \nonumber \\
f_{V^{*}}(gh)&=\frac{(gh)^{\alpha_{1}-1}e^{-\frac{gh}{\beta_{1}}}}{\beta_{1}^{\alpha_{1}} \Gamma({\alpha_{1}})}, \textrm{and } \nonumber \\
f_{U^{*}}(h)&=\frac{(h)^{\alpha_{2}-1}e^{-\frac{h}{\beta_{2}}}}{\beta_{2}^{\alpha_{2}} \Gamma({\alpha_{2}})} \nonumber
\end{align}
Therefore
\begin{align} \label{eq222}
f_{G,H}(g,h)
&=\frac{(gh)^{\alpha_{1}-1}e^{-\frac{gh}{\beta_{1}}}}{\beta_{1}^{\alpha_{1}} \Gamma({\alpha_{1}})}.\frac{(h)^{\alpha_{2}-1}e^{-\frac{h}{\beta_{2}}}}{\beta_{2}^{\alpha_{2}} \Gamma({\alpha_{2}})} h \nonumber \\
&=\frac{g^{\alpha_{1}-1} h^{\alpha_{1}+\alpha_{2}-1} e^{(-\frac{(1+g)}{\beta}h)}}{\beta^{\alpha_{1}+\alpha_{2}} \Gamma({\alpha_{1}}) \Gamma(\alpha_{2})}, \quad \beta_{1}=\beta_{2}=\beta
\end{align}
The pdf of $G$ is then obtained by integrating out $h$:
\begin{align} \label{eq223}
f_{G}(g)
&= \int \limits_{0}^{\infty} f_{G,H}(g,h) dh \nonumber \\
&= \frac{g^{\alpha_{1}-1}}{B\left(\alpha_{1},\alpha_{2}\right) (1+g)^{\alpha_{1}+\alpha_{2}}} \nonumber \\
&= \frac{g^{\frac{1}{2}-1}}{B\left(\frac{1}{2},n-1\right) (1+g)^{n-\frac{1}{2}}}
\end{align}
$G$ follows a beta distribution of the second kind.
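This claim can be checked by simulation: the ratio $G=V^{*}/U^{*}$ of the independent $\chi^{2}_{(1)}$ and $\chi^{2}_{2(n-1)}$ variables should follow the beta-prime law with parameters $(\frac{1}{2}, n-1)$. The sketch below is ours (SciPy assumed; $n=10$ and the number of draws are arbitrary choices).

```python
import numpy as np
from scipy.stats import chi2, betaprime

rng = np.random.default_rng(0)
n, N = 10, 200_000
v = chi2.rvs(1, size=N, random_state=rng)            # V* ~ chi^2_(1)
u = chi2.rvs(2 * (n - 1), size=N, random_state=rng)  # U* ~ chi^2_(2(n-1))
g = v / u
# fraction of draws below the theoretical beta-prime(1/2, n-1) median:
frac_below = np.mean(g < betaprime.median(0.5, n - 1))  # should be close to 0.5
```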
The next step is to determine the distribution of $Y=1+4G$. Using the change of variables $G=\frac{Y-1}{4}$, the pdf of $Y$ is computed as follows
\begin{align} \label{eq224}
f_{Y}
&=\frac{\left(\frac{y-1}{4}\right)^{\frac{1}{2}-1}}{B\left(\frac{1}{2},n-1\right) \left(1+\left(\frac{y-1}{4}\right )\right)^{n-\frac{1}{2}}}\frac{1}{4} \nonumber \\
&=\frac{4^{n-1} (y-1)^{\frac{1}{2}-1}}{B\left(\frac{1}{2},n-1\right)(y+3)^{n-\frac{1}{2}}}, \quad 1 \leq y < \infty
\end{align}
The pdf of $T^{*}=\frac{1}{Y}$ is obtained
\begin{align} \label{eq225}
f_{T^{*}}(t^{*})
&=\frac{4^{n-1} (\frac{1}{t^{*}}-1)^{\frac{1}{2}-1}}{B\left(\frac{1}{2},n-1\right)(3+\frac{1}{t^{*}})^{n-\frac{1}{2}}}\frac{1}{t^{*2}} \nonumber \\
&=\frac{4^{n-1} t^{*(n-2)} (1-t^{*})^{\frac{1}{2}-1}}{B\left(\frac{1}{2},n-1\right)(1+3t^{*})^{n-\frac{1}{2}}}, \quad 0 \leq t^{*} \leq 1
\end{align}
Furthermore, the cdf of $T^{*}$ is computed analytically as
\begin{align} \label{eq226}
F_{T^{*}}(t^{*}_{0})&=\int \limits_{0}^{t^{*}_{0}} f_{T^{*}}(t^{*})dt^{*} \nonumber \\
&=\frac{4^{n-1}}{B\left(\frac{1}{2},n-1\right)} \int \limits_{0}^{t^{*}_{0}} \frac{t^{*(n-2)} (1-t^{*})^{\frac{1}{2}-1}}{(1+3t^{*})^{n-\frac{1}{2}}} dt^{*} \nonumber \\
&=\frac{4^{n-1}}{B\left(\frac{1}{2},n-1\right)} \sum \limits_{k=0}^{\infty} \left(-3\right)^k \binom{n-\frac{1}{2}+k-1}{k} \int \limits_{0}^{t^{*}_{0}} t^{*(n-1+k-1)} (1-t^{*})^{\frac{1}{2}-1}dt^{*} \nonumber \\
&=\frac{4^{n-1}}{B\left(\frac{1}{2},n-1\right)} \Bigg[\sum \limits_{k=0}^{\infty} \left(-3\right)^k \binom{n+k-\frac{3}{2}}{k} B\left(n-1+k,\frac{1}{2}\right) \times \nonumber \\
& \textrm{pbeta}\left(t^{*}_{0},n-1+k,\frac{1}{2}\right) \Bigg]
\end{align}
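As a numerical cross-check of the series in Equation (\ref{eq226}), $F_{T^{*}}$ can also be evaluated by integrating the density of Equation (\ref{eq225}) directly. The Python sketch below is our illustration (SciPy assumed; function names ours).

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

def pdf_t_star(t, n):
    """Density of T* from Eq. (225), for 0 < t < 1."""
    return (4.0 ** (n - 1) * t ** (n - 2)
            / (beta_fn(0.5, n - 1) * np.sqrt(1.0 - t) * (1.0 + 3.0 * t) ** (n - 0.5)))

def cdf_t_star(t0, n):
    """F_{T*}(t0) by numerical integration."""
    val, _ = quad(pdf_t_star, 0.0, t0, args=(n,))
    return val
```

For any $n$, integrating up to $t^{*}=1$ should return a value very close to $1$, confirming that the density is properly normalized.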
We reject the null hypothesis of equality of the means of the two independent samples if $t^{*}_{0} < t^{*}_{0,\alpha}$, or equivalently if the p-value $P \left(T^{*} \le t^{*}_{0} \right)=F_{T^{*}}(t^{*}_{0})$ is at most $\alpha$.
Observing that $(2(n-1)) \left(\frac{V^{*}}{U^{*}}\right)$ is the square of a random variable having $t_{2(n-1)}$ distribution, a simple calculation shows that the same holds for the random variable
\begin{eqnarray} \label{eq227}
J&=&\sqrt{(n-1) \left( \frac{1}{T^{*}} -1\right)}
\end{eqnarray}
This statistic $J$ can also be used to test the hypothesis $\mu_{x}=\mu_{y}$, and the critical values can be computed from the $t$ table. It also follows that $J$ has a limiting normal distribution as $n \rightarrow \infty$.
\section{Simulation study}\label{sec3}
In this section we describe the results of two simulation studies of the $t$ test and the proposed test. The first simulation, which measures the power of the proposed test, is conducted with $N=500$ replications and $\alpha=0.01$; the results are presented in Section \ref{sec31}. In this simulation, we consider various combinations of sample size and variance. The results are grouped according to
\begin{enumerate}
\item {variance: low ($S=0.2$), medium ($S=1.2$) and high ($ S=7$), }
\item{sample size: low ($5$ and $25$), medium ($100$) and high ($500$) }
\end{enumerate}
Section \ref{sec32} describes the results of the second simulation, which measures the rejection rate under the null hypothesis for the $t$ test and the proposed test, based on $N=500$ replications and $\alpha=0.01$. In this simulation, we choose $\mu_{X}=\mu_{Y}=\mu=9.2$ and $\sigma_{X}=\sigma_{Y}=\sigma$, where the variance takes the three values $1.25, 3.5$ and $10$, representing low, medium and high variance respectively.
Section \ref{sec33} gives some examples on how the proposed test works on the data.
\subsection{Power of the test} \label{sec31}
The simulation study to compute the power of the proposed and the $t$ tests is conducted as follows:
\begin{enumerate}
\item{Choose the $\mu_{X}, \mu_{Y},\sigma_{X}=\sigma_{Y}=\sigma$ of the two-groups independent samples}
\item{The simulation under null hypothesis
\begin{enumerate}
\item{Generate $n$ random samples normally distributed with mean and standard deviation, $\mu_{X_{0}}$ and $\sigma$}
\item{Generate $n$ random samples normally distributed with mean and standard deviation, $\mu_{Y_{0}}$ and $\sigma$}
\item{Compute their mean and variance samples}
\item{Compute $t^{*}_{0}$ values, based on the equation (\ref{eq220a}).}
\end{enumerate}}
\item{The simulation under alternative hypothesis
\begin{enumerate}
\item{Generate $n$ random samples normally distributed with mean and standard deviation, $\mu_{X_{1}}$ and $\sigma$}
\item{Generate $n$ random samples normally distributed with mean and standard deviation, $\mu_{Y_{1}}$ and $\sigma$}
\item{Compute their mean and variance samples}
\item{Compute $t^{*}_{1}$ values, based on the equation (\ref{eq220a}).}
\end{enumerate}}
\item{Repeat steps (2) - (3), $M$ times}
\item{Compute $t^{*}_{0,\alpha}$, the $\alpha$ quantile of $t^{*}_{0}$. Note that $t^{*}_{0,\alpha}$ can also be computed as the $\alpha$ quantile of the distribution of $T^{*}$ given by Equation (\ref{eq225}).}
\item{Compute the power of the proposed test as $1- \frac{\textrm{sum} (t^{*}_{1} \ge t^{*}_{0,\alpha})}{M}$}
\item{Compute the power of $t$ test from samples $X_{1}$ and $Y_{1}$}
\item{Compare the results from steps (6) and (7)}
\item{Repeat steps (1)-(8) for different values of the mean and variance}
\end{enumerate}
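The steps above can be sketched as follows (our Python illustration; the function names and the replication count are our own choices, and the critical value $t^{*}_{0,\alpha}$ is estimated empirically as in step (5)).

```python
import numpy as np

rng = np.random.default_rng(0)

def t_star(x, y):
    n = len(x)
    sp2 = (np.var(x, ddof=1) + np.var(y, ddof=1)) / 2.0
    return sp2 / (sp2 + n * (np.mean(y) - np.mean(x)) ** 2 / (n - 1))

def power_sim(n, mu0, mu1, sigma, M=2000, alpha=0.01):
    # steps (2) and (5): null replications give the critical value t*_{0,alpha}
    t0 = np.array([t_star(rng.normal(mu0, sigma, n), rng.normal(mu0, sigma, n))
                   for _ in range(M)])
    crit = np.quantile(t0, alpha)
    # steps (3) and (6): power = 1 - (# of t*_1 >= crit) / M
    t1 = np.array([t_star(rng.normal(mu0, sigma, n), rng.normal(mu1, sigma, n))
                   for _ in range(M)])
    return 1.0 - np.mean(t1 >= crit)
```

Under the null ($\mu_0=\mu_1$) the rejection rate stays near $\alpha$, while a large mean shift yields power close to $1$.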
Figures \ref{fig1}, \ref{fig2}, \ref{fig3} and \ref{fig4} show the power of the special case of the cross-variance test and of the $t$ test, based on the simulation study. They show that the proposed test and the $t$ test have equal power. In the computation, the power of the proposed test is obtained by the empirical approach, so the resulting curves are not as smooth as those of the $t$ test.
\begin{figure}[!h]
\begin{center}$
\begin{array}{ccc}
\includegraphics[width=1.5in,height=1.75in]{lsn5s02}&
\includegraphics[width=1.5in,height=1.75in]{lsn5s12}&
\includegraphics[width=1.5in,height=1.75in]{lsn5s70}
\end{array}$
\end{center}
\caption{Graphical power both the $t$ and proposed tests, n=5} \label{fig1}
\end{figure}
\newpage{}
\begin{figure}[!h]
\begin{center}$
\begin{array}{ccc}
\includegraphics[width=1.5in,height=1.6in]{lsn25s02}&
\includegraphics[width=1.5in,height=1.6in]{lsn25s12}&
\includegraphics[width=1.5in,height=1.6in]{lsn25s70}
\end{array}$
\end{center}
\caption{Graphical power both the $t$ and proposed tests, n=25} \label{fig2}
\end{figure}
\begin{figure}[!h]
\begin{center}$
\begin{array}{ccc}
\includegraphics[width=1.5in,height=1.6in]{lsn100s02}&
\includegraphics[width=1.5in,height=1.6in]{lsn100s12}&
\includegraphics[width=1.5in,height=1.6in]{lsn100s70}
\end{array}$
\end{center}
\caption{Graphical power both the $t$ and proposed tests, n=100} \label{fig3}
\end{figure}
\begin{figure}[!h]
\begin{center}$
\begin{array}{ccc}
\includegraphics[width=1.5in,height=1.6in]{lsn500s02}&
\includegraphics[width=1.5in,height=1.6in]{lsn500s12}&
\includegraphics[width=1.5in,height=1.6in]{lsn500s70}
\end{array}$
\end{center}
\caption{Graphical power both the $t$ and proposed tests, n=500} \label{fig4}
\end{figure}
\subsection{Type I error rate} \label{sec32}
The simulation study to compute the type I error rate is conducted as follows:
\begin{enumerate}
\item{Choose the $n, M_{1}, \alpha, \mu_{X}=\mu_{Y}=\mu$ and $\sigma_{X}=\sigma_{Y}=\sigma$ of the two-groups independent samples}
\item{Repeat $M_{1}$ times
\begin{enumerate}
\item{Compute p-value of $t^{*}_{o}$ test}
\item{Compute p-value of $t$ test}
\end{enumerate}}
\item{Compute the proportion of $t$ test p-value < $\alpha$ in $M_{1}$ results}
\item{Compute the proportion of the proposed test p-value < $\alpha$ in $M_{1}$ results}
\item{Compare the result of steps $3$ and $4$}
\end{enumerate}
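A sketch of this procedure for the proposed test alone (our Python illustration; the p-value is computed as $F_{T^{*}}(t^{*}_{0})$ by numerically integrating the density of Equation (\ref{eq225}), and all function names are ours):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import beta as beta_fn

rng = np.random.default_rng(0)

def t_star(x, y):
    n = len(x)
    sp2 = (np.var(x, ddof=1) + np.var(y, ddof=1)) / 2.0
    return sp2 / (sp2 + n * (np.mean(y) - np.mean(x)) ** 2 / (n - 1))

def p_value(t0, n):
    # p-value = F_{T*}(t*_0), using the density from Eq. (225)
    f = lambda t: (4.0 ** (n - 1) * t ** (n - 2)
                   / (beta_fn(0.5, n - 1) * np.sqrt(1.0 - t) * (1.0 + 3.0 * t) ** (n - 0.5)))
    return quad(f, 0.0, t0)[0]

def type_one_rate(n, mu, sigma, alpha=0.01, M=500):
    rejections = sum(
        p_value(t_star(rng.normal(mu, sigma, n), rng.normal(mu, sigma, n)), n) < alpha
        for _ in range(M))
    return rejections / M
```

Under the null hypothesis the returned rate should be close to the nominal level $\alpha$.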
Table \ref{Tab1} shows the results of the simulation; the type I error rates of the proposed test and the $t$ test are equal.
\begin{table}[!h]
\begin{center}
\caption{Type I error rate under $500$ simulations for the proposed and $t$ tests} \label{Tab1}
\resizebox{8cm}{3cm}{
\begin{tabular}{clrr|rr}
\hline
\multirow{2}{*}{Sample size} & \multirow{2}{*}{Variance} & \multicolumn{2}{c|}{proposed test} & \multicolumn{2}{|c}{$t$ test} \\ \hhline{~~----}
& & 0.05&0.01&0.05&0.01 \\ \hline
&low& 0.056 &0.012 &0.056 &0.012 \\
5&medium& 0.062 &0.016 &0.062 &0.016 \\
&high& 0.056 &0.020 &0.056 &0.020 \\ \hline
&low& 0.046 &0.010 &0.046 &0.010 \\
25&medium& 0.044 &0.012 &0.044 &0.012 \\
&high& 0.038 &0.010 &0.038 &0.010 \\ \hline
&low& 0.058 &0.006 &0.058 &0.006 \\
100&medium& 0.050 &0.010 &0.050 &0.010 \\
&high& 0.062 &0.012 &0.062 &0.012 \\ \hline
&low& 0.038 &0.004 &0.038 &0.004 \\
500&medium& 0.038 &0.002 &0.038 &0.002 \\
&high& 0.050 &0.010 &0.050 &0.010 \\ \hline
\end{tabular}
}
\end{center}
\end{table}
The distributions of the p-values from the proposed and $t$ tests can be seen in Figures \ref{fig5}, \ref{fig6} and \ref{fig7}.
\begin{figure}[!h]
\centering
\subfloat[n=5]{\includegraphics[width=2.5in,height=1.5in]{lsV125_5}}
\subfloat[n=500]{\includegraphics[width=2.5in,height=1.5in]{lsV125_500}}
\caption{P-values distribution of the proposed and $t$ tests, small variance} \label{fig5}
\end{figure}
\begin{figure}[!h]
\centering
\subfloat[n=5]{\includegraphics[width=2.5in,height=1.5in]{lsV35_5}}
\subfloat[n=500]{\includegraphics[width=2.5in,height=1.5in]{lsV35_500}}
\caption{P-values distribution of the proposed and $t$ tests, medium variance} \label{fig6}
\end{figure}
\begin{figure}[!h]
\centering
\subfloat[n=5]{\includegraphics[width=2.5in,height=1.5in]{lsV10_5}}
\subfloat[n=500]{ \includegraphics[width=2.5in,height=1.5in]{lsV10_500}}
\caption{P-values distribution of the proposed and $t$ tests, high variance} \label{fig7}
\end{figure}
The next section gives some examples of data analysis using the proposed test and compares the results with the $t$ test. We use $14$ artificial data sets, of which the first $10$ were taken randomly from the internet. The data can be seen in Table \ref{Tab2}.
\subsection{Some examples} \label{sec33}
In this section, some example data sets are provided to illustrate how to make a decision using the special case of the cross-variance test $T^{*}$ (i.e.\ the $T$ test when $\sigma_{X}=\sigma_{Y}$). For this $T^{*}$ test, the first step is to verify that the variances of the samples are equal. There are several tests for this purpose; in this paper we use the F test as implemented in R.
The data sets are shown as in Table \ref{Tab2} and the computation results for the F.test are shown in Table \ref{Tab3}. Tables \ref{Tab4} and \ref{Tab5} provide the results of the decision for both methods: the proposed and the $t$ tests.
In the proposed test, the null hypothesis is rejected if $t^{*}_{o} < t^{*}_{\alpha}$, i.e.\ if the p-value of the special case of the cross-variance test is less than $\alpha$, where $t^{*}_{o}$ is the statistic computed from the observed sample. In the computation we use $\alpha = 0.01$.
From Table \ref{Tab3} we infer that data set 13 has unequal variances; it is therefore excluded from further computation.
From Table \ref{Tab4} we see that the p-values and decisions of the proposed test and the $t$ test agree, except for data set 4, which is the only data set with unequal sample sizes. When the two sample sizes differ, we provide two options for $n$: $\max(n_{1},n_{2})$ or the average of $n_{1}$ and $n_{2}$. An example is shown in Table \ref{Tab5}.
\section{Summary and Remarks} \label{sec4}
In this paper we have introduced the cross-variance concept, a new test based on the cross variance, a special case of the cross-variance test (the proposed test) and the corresponding new probability density functions. A simulation study has been conducted to compute the power and the type I error rate of the proposed test and the $t$ test.
The power simulation shows that the proposed test and the $t$ test have the same power. Furthermore, the p-values and the type I error rates under the null hypothesis of the proposed and $t$ tests are exactly equal. These results suggest that the proposed test can be used as an alternative for detecting a difference between the means of two independent normal populations when the variances and sample sizes are equal. When the sample sizes are unequal, we propose choosing $n= \max\left(n_{1},n_{2}\right)$ or the average of $n_{1}$ and $n_{2}$.
\textbf{Acknowledgements:} \\
This paper is part of the author's PhD dissertation written under the direction of Professor Istv\'an Berkes. We would like to thank \textit{Prof. Alexandre G. Patriota} for the discussions, comments and suggestions. Financial support from the Austrian Science Fund (FWF), Project P24302-N18 is gratefully acknowledged.
\textbf{Conflict of interest: -}
\section*{References}
\bibliographystyle{plainnat}
\section{Introduction}
In his landmark study \cite{mazur1978} of the Eisenstein ideal with prime level, Mazur named five ``special settings'' in which ``it would be interesting to develop the theory of the Eisenstein ideal in a broader context'' [pg.\ 39, \textit{loc.\ cit.}], the first of which is the setting of squarefree level. In this paper, we develop such a theory in certain cases.
\subsection{Mazur's results and their squarefree analogues}
\label{subsec:Mazur results}
Let $p \geq 3$ and $\ell$ be primes, and let $\mathbb{T}_\ell$ be the $p$-adic Eisenstein completion of the Hecke algebra acting on modular forms of weight 2 and level $\ell$, and let $\mathbb{T}_\ell\onto \mathbb{T}_\ell^0$ be the cuspidal quotient. Let $I_\ell^0 \subset \mathbb{T}_\ell^0$ be the Eisenstein ideal, and let $\mathfrak{m}_\ell^0 = (p,I_\ell^0)$ be the maximal ideal. Mazur proved the following results \cite{mazur1978}:
\begin{enumerate}
\item $\mathbb{T}^0_\ell/I_\ell^0 \cong \mathbb{Z}_p/(\frac{\ell-1}{12})\mathbb{Z}_p$,
\item $I_\ell^0$ is principal,
\item $\mathbb{T}^0_\ell$ is Gorenstein,
\item $\dim_{\mathbb{F}_p}(J_0(N)[\mathfrak{m}_\ell^0])=2$, and
\item if $q\ne \ell$ is a prime such that $q \not \equiv 1 \pmod{p}$ and such that $q$ is not a $p$-th power modulo $\ell$, then $T_q-(q+1)$ generates $I_\ell^0$.
\end{enumerate}
Mazur calls a prime $q$ as in (5) a \emph{good prime for $(\ell,p)$}. We note that, of course, (5) implies (2) implies (3). We also note that (2) implies that $\mathbb{T}_\ell$ is Gorenstein also.
The analogue of (1) has been proven for squarefree levels by Ohta \cite{ohta2014}. However, as has been noted by many authors, notably Ribet and Yoo \cite{ribet2015,yoo2015}, the statements (2)-(5) are not true in the squarefree setting. Still, in this paper, we prove, in certain cases, analogues of (2)-(5). Namely, we count the minimal number of generators of the Eisenstein ideal, count the dimension of the Eisenstein kernel of the Jacobian, and give sufficient (and sometimes also necessary) conditions for a list of elements $T_q-(q+1)$ to generate the Eisenstein ideal.
\subsection{Pseudomodularity} Our main technical result is an $R=\mathbb{T}$ theorem, where $R$ is a deformation ring for Galois pseudorepresentations and $\mathbb{T}$ is the Eisenstein part of the Hecke algebra. The strategy is similar to that of our previous works \cite{WWE3, WWE4}, where we gave new proofs and refinements of Mazur's results. However, there are several points of interest that are new in this setting.
\begin{enumerate}[label=(\alph*), leftmargin=2em]
\item In the case of prime level $\ell$, Calegari and Emerton \cite{CE2005} have already applied deformation theory to study Mazur's Eisenstein ideal. Their method is to rigidify the deformation theory of Galois representations using auxiliary data coming from the prime level $\ell$. In the case of squarefree level, a similar strategy will not work: the deformation problem at prime level is already rigid, and cannot be further rigidified to account for the additional primes.
\item
In the case of squarefree level, there are multiple Eisenstein series, and one has to account for the possibility of congruences among them.
\item At squarefree level, unlike prime level, the Tate module of the Jacobian may not be free over the Hecke algebra. Since this Tate module is the natural way to construct Galois representations, it is really necessary to work with pseudorepresentations.
\item We prove $R=\mathbb{T}$ even in some cases where the Galois cohomology groups controlling the tangent space of $R$ are all non-cyclic (see Remark \ref{rem:no cyclicity assumption}). In this case, the universal pseudodeformation cannot arise from a representation.
\end{enumerate}
To address issue (a), we have to develop a theory of Cayley-Hamilton representations and pseudorepresentations with squarefree level, which has the required flexibility; for this, we drew inspiration from our previous joint works \cite{WWE1,WWE3,WWE4} and the work of Calegari-Specter \cite{CS2016}. The ideas are discussed later in this introduction in \S \ref{subsec:psdef method}. To address issue (b), we make extensive use of an idea of Ohta \cite{ohta2014}: we use the Atkin-Lehner involutions at $\ell\mid N$ to define $\mathbb{T}$, rather than the usual Hecke operators $U_\ell$.
\subsection{Setup}
We introduce notation in order to state our main results precisely. Throughout the paper we fix a prime $p>3$ and let $N$ denote a squarefree integer with distinct prime factors $\ell_0, \ell_1, \dots, \ell_r$. The case $p \mid N$ is not excluded.
\subsubsection{Eisenstein series and Hecke algebras}
\label{sssec:eisen series and hecke alg defn}
The Eisenstein series of weight two and level $N$ have a basis $\{E_{2,N}^\epsilon\}$, labeled by elements $\epsilon= (\epsilon_0, \dots, \epsilon_r)$ in the set $\mathcal{E} =\{\pm 1\}^{r+1} \setminus \{(1,1, \dots, 1)\}$. The $E_{2,N}^\epsilon$ are characterized in terms of Hecke eigenvalues by the properties that
\begin{enumerate}[leftmargin=2em]
\label{eq:eisenstein eigenvalues}
\item $\displaystyle T_n E_{2,N}^\epsilon =\left (\sum_{0<t\mid n} t \right) E_{2,N}^\epsilon$ for all $n$ with $\gcd(n,N)=1$, and
\item $w_{\ell_i} E_{2,N}^\epsilon = \epsilon_i E_{2,N}^\epsilon$ for the Atkin-Lehner involutions $w_{\ell_0}, \dots, w_{\ell_r}$,
\end{enumerate}
together with the normalization $a_1(E_{2,N}^\epsilon)=1$. The constant coefficients satisfy
\begin{equation}
\label{eq:constant term}
a_0(E_{2,N}^\epsilon) = -\frac{1}{24} \prod_{i=0}^r (\epsilon_i\ell_i+1).
\end{equation}
Based on the philosophy that congruences between Eisenstein series and cusp forms should happen when the constant term is divisible by $p$, we expect the most interesting congruences to occur when $\ell_i \equiv -\epsilon_i \pmod{p}$ for many $i$.
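The constant-term formula and this divisibility heuristic are easy to explore numerically. The following Python sketch (our illustration, with hypothetical function names; it is not part of the Sage code used elsewhere in the paper) computes $a_0(E_{2,N}^\epsilon)$ from \eqref{eq:constant term} and lists the sign vectors $\epsilon \in \mathcal{E}$ whose constant term is divisible by $p$; since $p>3$, the denominator $24$ is a $p$-adic unit, so divisibility can be read off the reduced numerator.

```python
from fractions import Fraction
from itertools import product

def constant_term(signs, primes):
    """a_0(E_{2,N}^eps) = -(1/24) * prod_i (eps_i * ell_i + 1)."""
    prod_term = 1
    for eps, ell in zip(signs, primes):
        prod_term *= eps * ell + 1
    return Fraction(-prod_term, 24)

def interesting_signs(primes, p):
    """Sign vectors eps (excluding (1,...,1)) whose constant term is
    divisible by p.  For p > 3 the denominator 24 is a p-adic unit,
    so it suffices to test the reduced numerator."""
    out = []
    for signs in product((1, -1), repeat=len(primes)):
        if all(s == 1 for s in signs):
            continue  # (1,...,1) is excluded from the basis
        if constant_term(signs, primes).numerator % p == 0:
            out.append(signs)
    return out
```

For instance, with $p=5$ and $N = 11 \cdot 23$ the sign vectors $(-1,1)$ and $(-1,-1)$ are flagged, while $(1,-1)$ is not, matching the expectation that congruences concentrate where $\ell_i \equiv -\epsilon_i \pmod{p}$.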
Consider the Hecke algebra of weight $2$ and level $N$ generated by all $T_n$ with $\gcd(n,N)=1$ and by all Atkin-Lehner involutions $w_{\ell_0}, \dots, w_{\ell_r}$. Let $\mathbb{T}_N^\epsilon$ denote the completion of this algebra at the maximal ideal generated by $p$ together with the annihilator of $E_{2,N}^\epsilon$.
Let $I^\epsilon$ denote the annihilator of $E_{2,N}^\epsilon$ in $\mathbb{T}_N^\epsilon$, so $\mathbb{T}_N^\epsilon/I^\epsilon = \mathbb{Z}_p$, and let $\mathfrak{m}^\epsilon =(I^\epsilon, p)$ be the maximal ideal of $\mathbb{T}_N^\epsilon$. For a Hecke module $M$, let $M_{\mathfrak{m}^\epsilon}$ denote the tensor product of $M$ with $\mathbb{T}_N^\epsilon$ over the Hecke algebra. In particular, let $M_2(N)_{\mathfrak{m}^\epsilon}$ (resp.\ $S_2(N)_{\mathfrak{m}^\epsilon}$) denote the resulting module of modular forms (resp.\ cuspidal forms). Let $\mathbb{T}_N^{\epsilon,0}$ denote the cuspidal quotient of $\mathbb{T}_N^\epsilon$, and let $I^{\epsilon,0}$ be the image of $I^\epsilon$ in $\mathbb{T}_N^{\epsilon,0}$.
\subsubsection{Another Hecke algebra}
\label{subsubsec:T_U}
In contrast with our approach, one often studies a different Hecke algebra $\mathbb{T}^\epsilon_{N,U}$, containing the operators $U_\ell$ instead of $w_\ell$, and with Eisenstein ideal $I_U^\epsilon$ generated by $T_q-(q+1)$ for $q\nmid N$ and $U_{\ell_i}-\ell_i^{\frac{\epsilon_i+1}{2}}$ for $i=0,\dots, r$. We prove that $\mathbb{T}_{N,U}^\epsilon = \mathbb{T}^\epsilon_N$ in some of the cases that we consider --- see Appendix \ref{appendix:U and w}. Our main results together with Appendix \ref{appendix:U and w} can be used to prove results about $\mathbb{T}^{\epsilon}_{N,U}$ that are closely related to results of Ribet \cite{ribet2010, ribet2015}, Yoo \cite{yoo2015, yoo2017, yoo2017b}, Hsu \cite{hsu2018}, and others. In general, when $\mathbb{T}_N^\epsilon \ne \mathbb{T}_{N,U}^\epsilon$, we believe that $\mathbb{T}_N^\epsilon$ is more natural and better behaved, so we mostly consider $\mathbb{T}_N^\epsilon$.
\subsubsection{The number fields $K_i$}
\label{subsubsec:defn of K_i}
Let $\ell$ be a prime such that $\ell \equiv \pm 1 \pmod{p}$. Then there is a unique degree $p$ Galois extension $K_\ell/\mathbb{Q}(\zeta_p)$ such that
\begin{enumerate}
\item $\mathrm{Gal}(\mathbb{Q}(\zeta_p)/\mathbb{Q})$ acts on $\mathrm{Gal}(K_\ell/\mathbb{Q}(\zeta_p))$ via the character $\omega^{-1}$,
\item the prime $(1-\zeta_p)$ of $\mathbb{Q}(\zeta_p)$ splits completely in $K_\ell$, and
\item only the primes above $\ell$ ramify in $K_\ell/\mathbb{Q}(\zeta_p)$.
\end{enumerate}
For each $i$ such that $\ell_i \equiv \pm 1 \pmod{p}$, let $K_i=K_{\ell_i}$ (see also Definition \ref{defn:K_i}).
\subsection{Structure of the Hecke algebra}
\label{subsec:main thms}
Our main results concern the structure of the Hecke algebra $\mathbb{T}_N^\epsilon$.
\begin{thm}
\label{thm:main r primes}
Assume that $\epsilon=(-1,1,\dots,1)$. Let
\[
\mathcal{S}=\{i\in\{1,\dots, r\} \mid \ell_i \equiv -1 \pmod{p}\}
\]
and let $s=\#\mathcal{S}$. Then
\begin{enumerate}
\item $\mathbb{T}_N^\epsilon$ is a complete intersection ring.
\item $\mathbb{T}_N^{\epsilon,0}$ is Gorenstein if and only if $I^{\epsilon}$ is principal.
\item There is a short exact sequence
\begin{equation}
\label{eq:main SES}
0 \to \bigoplus_{i=1}^r \mathbb{Z}_p/(\ell_i+1)\mathbb{Z}_p \to I^\epsilon/{I^\epsilon}^2 \to \mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \to 0.
\end{equation}
\item The minimal number of generators of $I^\epsilon$ is $s+\delta$ where
\[
\delta =\left\{
\begin{array}{lc}
1 & \text{ if } \ell_0 \text{ splits completely in } K_i \text{ for all } i \in \mathcal{S}, \text{ or} \\
0 & \text{ otherwise.}
\end{array}\right.
\]
\end{enumerate}
\end{thm}
\begin{proof}
Parts (1) and (3) are proved in \S\ref{sec:case1} (see especially Theorem \ref{thm:star main}). It is known to experts that Part (2) follows from (1) (see Lemma \ref{lem:goren and I principal}). Part (4) is Theorem \ref{thm:star split}.
\end{proof}
\begin{rem}
In fact, we show that, unless $s=r$, there are no newforms in $M_2(N)_{\mathfrak{m}^\epsilon}$, so we can easily reduce to the case $s=r$ (i.e.~the case that $\ell_i \equiv -1 \pmod{p}$ for all $i>0$). When $s=r$, one could use this theorem to prove that there are newforms in $M_2(N)_{\mathfrak{m}^\epsilon}$, but this is known (see \cite{ribet2015}, \cite[Thm.\ 1.3(3)]{yoo2017}).
\end{rem}
\begin{rem}
The criterion of Part (4) determines whether or not the extension class defined by the sequence \eqref{eq:main SES} is $p$-cotorsion. In fact, one can describe this extension class exactly in terms of algebraic number theory, but we content ourselves with the simpler statement (4).
\end{rem}
\begin{thm}
\label{thm:main 2 primes no new}
Assume $r=1$ and $\epsilon=(-1,-1)$ and that $\ell_0 \equiv 1 \pmod{p}$ but $\ell_1 \not \equiv 1 \pmod{p}$. If $\ell_1$ is not a $p$-th power modulo $\ell_0$, then there are no newforms in $M_2(N)_{\mathfrak{m}^\epsilon}$. In particular, $I^\epsilon$ is principal, and generated by $T_q-(q+1)$ where $q$ is a good prime (of Mazur) for $(\ell_0,p)$.
\end{thm}
\begin{proof}
This is Theorem \ref{thm:one interesting prime}.
\end{proof}
\begin{rem}
In the case $\ell_1 \ne p$, this is a theorem of Ribet \cite{ribet2010} and Yoo \cite[Thm.\ 2.3]{yoo2017}. Yoo has informed us that the method should work for the case $\ell_1 =p$ as well. In any case, our method is completely different.
\end{rem}
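The $p$-th power hypotheses in this theorem and the next are mechanical to verify: for a prime $\ell \equiv 1 \pmod{p}$, a unit $a$ modulo $\ell$ is a $p$-th power if and only if $a^{(\ell-1)/p} \equiv 1 \pmod{\ell}$, since the $p$-th powers form the unique index-$p$ subgroup of the cyclic group $(\mathbb{Z}/\ell\mathbb{Z})^\times$. A minimal Python helper (our illustration; the function name is ours) is:

```python
def is_pth_power(a, ell, p):
    """Test whether a (coprime to the prime ell) is a p-th power mod ell.
    If p | ell - 1, the p-th powers form the index-p subgroup of the
    cyclic group (Z/ell)^*, so a lies in it iff a^((ell-1)/p) == 1.
    If p does not divide ell - 1, the p-th power map is a bijection,
    so every unit is a p-th power."""
    if (ell - 1) % p != 0:
        return True
    return pow(a, (ell - 1) // p, ell) == 1
```

For example, with $p=5$ this confirms the residue facts quoted in \S\ref{subsec:examples}: $23$ is a $5$-th power modulo $11$ (indeed $23\equiv 1$), $5$ is a $5$-th power modulo $31$, and neither of $11$, $41$ is a $5$-th power modulo the other.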
\begin{thm}
\label{thm:main 2 primes new}
Assume $r=1$ and $\epsilon=(-1,-1)$ and that $\ell_0 \equiv \ell_1 \equiv 1 \pmod{p}$. Assume further that
\begin{center}
$\ell_i$ is not a $p$-th power modulo $\ell_j$ for $(i,j) \in \{(0,1),(1,0)\}$.
\end{center}
Then
\begin{enumerate}
\item there are newforms in $M_2(N)_{\mathfrak{m}^\epsilon}$.
\item $\mathbb{T}_N^\epsilon$ is a complete intersection ring.
\item $\mathbb{T}_N^{\epsilon,0}$ is not a Gorenstein ring.
\item $I^{\epsilon,0}/{I^{\epsilon,0}}^2 \cong \mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \oplus \mathbb{Z}_p/(\ell_1-1)\mathbb{Z}_p$.
\end{enumerate}
\end{thm}
\begin{proof}
Parts (2) and (4) are proven in Theorem \ref{thm:thm2}. Part (1), the precise meaning of which is given in Definition \ref{defn:no newforms}, follows from Part (2) by Theorem \ref{thm:newforms in case 2}. Part (3) follows from (2) and (4) by Lemma \ref{lem:goren and I principal}.
\end{proof}
\begin{rem}
\label{rem:no cyclicity assumption}
The proof of this theorem may be of particular interest for experts in the deformation theory of Galois representations. As far as we are aware, it is the first example of an $R=\mathbb{T}$ theorem where $R$ is a universal pseudodeformation ring and where we do not rely on certain Galois cohomology groups being cyclic. (This cyclicity ensures that the pseudorepresentations come from true representations.) In fact, under the assumptions of the theorem, the relevant cohomology groups are not cyclic. However, see \cite[Thm.\ 8.2]{BerKlo2015}, which proves $R' = \mathbb{T}$ for a certain quotient $R'$ of a universal pseudodeformation ring.
\end{rem}
\begin{rem}
Outside of the cases considered in these theorems, we cannot expect that $\mathbb{T}_N^\epsilon$ is a complete intersection ring, as the examples in \S \ref{subsec:examples} below illustrate. Our method, which applies Wiles's numerical criterion \cite{wiles1995}, proves that $\mathbb{T}_N^\epsilon$ is a complete intersection ring as a byproduct. A new idea is needed to proceed beyond these cases.
\end{rem}
\subsection{Applications to multiplicity one}
For an application of the main result, we let $J_0(N)$ be the Jacobian of the modular curve $X_0(N)$.
\begin{cor}
\label{cor:mult one fails}
In the following cases, we can compute $\dim_{\mathbb{F}_p} J_0(N)(\overline{\mathbb{Q}}_p)[\mathfrak{m}^\epsilon]$:
\begin{enumerate}[leftmargin=2em]
\item With the assumptions of Theorem \ref{thm:main r primes}, we have
\[
\dim_{\mathbb{F}_p} J_0(N)(\overline{\mathbb{Q}}_p)[\mathfrak{m}^\epsilon] =1+s+\delta,
\]
where $s$ and $\delta$ are as in Theorem \ref{thm:main r primes}.
\item With the assumptions of Theorem \ref{thm:main 2 primes no new}, we have $\dim_{\mathbb{F}_p} J_0(N)(\overline{\mathbb{Q}}_p)[\mathfrak{m}^\epsilon] =2$.
\item With the assumptions of Theorem \ref{thm:main 2 primes new}, we have $\dim_{\mathbb{F}_p} J_0(N)(\overline{\mathbb{Q}}_p)[\mathfrak{m}^\epsilon] =3$.
\end{enumerate}
\end{cor}
\begin{proof}
This follows from the named theorems together with Lemma \ref{lem:I and Jacobian kernel} (which is known to experts).
\end{proof}
One says that ``multiplicity one holds'' if $\dim_{\mathbb{F}_p} J_0(N)(\overline{\mathbb{Q}}_p)[\mathfrak{m}^\epsilon] =2$. This corollary implies that multiplicity one holds in case (1) if and only if $s+\delta=1$, always holds in case (2), and always fails in case (3).
\subsubsection{Ribet's Conjecture} Previous works on multiplicity one have used a different Hecke algebra $\mathbb{T}_{N,U}^\epsilon$, defined in \S \ref{subsubsec:T_U} (see, for example, \cite{yoo2015}). Let $\mathfrak{m}_U^\epsilon=(I_U^\epsilon,p) \subset \mathbb{T}^\epsilon_{N,U}$ be its maximal ideal. The previous corollary together with Proposition \ref{prop:T=T_U first case} give the following
\begin{cor}[Generalized Ribet's Conjecture]
With the assumptions of Theorem \ref{thm:main r primes}, assume in addition that $\ell_i \not \equiv 1 \pmod{p}$ for $i>0$. Then
\[
\dim_{\mathbb{F}_p} J_0(N)(\overline{\mathbb{Q}}_p)[\mathfrak{m}_U^\epsilon]=1+s+\delta,
\]
where $s$ and $\delta$ are as in Theorem \ref{thm:main r primes}.
\end{cor}
The case $s=r=1$ of this corollary was conjectured by Ribet \cite{ribet2015} (see also \cite[pg.~4]{yoo2017b}).
\begin{rem}
After we told Yoo about the results of this paper, he found an alternate proof of this corollary in the case $s=r=1$, under the assumption that $I_U^\epsilon$ is principal if and only if $\mathbb{T}^{\epsilon,0}_{N,U}$ is Gorenstein (this assumption follows from Theorem \ref{thm:main r primes} and Proposition \ref{prop:T=T_U first case}). Yoo's proof involves a delicate study of the geometry of $J_0(N)$ and, unlike our proof, does not make use of the fact that $\mathbb{T}_N^\epsilon$ is Gorenstein. The fact that our proof is simpler demonstrates the power of the Gorenstein property and is a reason for our interest in using $\mathbb{T}_N^\epsilon$ rather than $\mathbb{T}_{N,U}^\epsilon$.
\end{rem}
\subsubsection{Gorensteinness, and multiplicity one for the generalized Jacobian} The following observations are neither used nor proven in this paper (although they are familiar to experts), but we include them to illustrate the arithmetic significance of the Gorenstein property for $\mathbb{T}_N^\epsilon$ proved in Theorems \ref{thm:main r primes}, \ref{thm:main 2 primes no new} and \ref{thm:main 2 primes new}. We learned this point of view from papers of Ohta, especially \cite{ohta2005}.
As is well-known, and as we explain in \S \ref{subsec:gorenstein and jacobian}, multiplicity one holds if and only if $\mathbb{T}_N^{\epsilon,0}$ is Gorenstein. The nomenclature ``multiplicity one'' comes from representation theory. It is related to the question of whether $H^1_{\mathrm{\acute{e}t}}(X_0(N)_{\overline{\mathbb{Q}}},\mathbb{Z}_p(1))_{\mathfrak{m}^\epsilon}$ is a free $\mathbb{T}_N^{\epsilon,0}$-lattice in the free $\mathbb{T}_N^{\epsilon,0}[\frac{1}{p}]$-module $H^1_{\mathrm{\acute{e}t}}(X_0(N)_{\overline{\mathbb{Q}}},\mathbb{Q}_p(1))_{\mathfrak{m}^\epsilon}$.
There is another natural lattice to consider, namely $H^1_{{\mathrm{\acute{e}t}}}(Y_0(N)_{\overline{\mathbb{Q}}},\mathbb{Z}_p(1))_{\mathfrak{m}^\epsilon,\mathrm{DM}}$, the image of $H^1_{{\mathrm{\acute{e}t}}}(Y_0(N)_{\overline{\mathbb{Q}}},\mathbb{Z}_p(1))_{\mathfrak{m}^\epsilon}$ under the Drinfeld-Manin splitting
\[
H^1_{{\mathrm{\acute{e}t}}}(Y_0(N)_{\overline{\mathbb{Q}}},\mathbb{Q}_p(1))_{\mathfrak{m}^\epsilon} \longrightarrow H^1_{{\mathrm{\acute{e}t}}}(X_0(N)_{\overline{\mathbb{Q}}},\mathbb{Q}_p(1))_{\mathfrak{m}^\epsilon}.
\]
In a similar manner to the proof of Lemma \ref{lem:failure of mult one = gorenstein defect}, one can show that $\mathbb{T}_N^\epsilon$ is Gorenstein if and only if $H^1_{{\mathrm{\acute{e}t}}}(Y_0(N)_{\overline{\mathbb{Q}}},\mathbb{Z}_p(1))_{\mathfrak{m}^\epsilon,\mathrm{DM}}$ is a free $\mathbb{T}_N^{\epsilon,0}$-module, if and only if
\[
\dim_{\mathbb{F}_p} GJ_0(N)(\overline{\mathbb{Q}}_p)[\mathfrak{m}^\epsilon]=2,
\]
where $GJ_0(N)$ is the generalized Jacobian of $X_0(N)$ relative to the cusps (see e.g.\ \cite[\S3]{ohta1999} for a discussion of generalized Jacobians). Hence our result that $\mathbb{T}_N^\epsilon$ is Gorenstein can be thought of as a multiplicity one result for $GJ_0(N)$.
Finally, we note that these ideas illustrate why the failure of multiplicity one in Corollary \ref{cor:mult one fails} is related to the failure of $I^\epsilon$ to be principal: if $\mathbb{T}_N^\epsilon$ is Gorenstein,
\[
H^1_{{\mathrm{\acute{e}t}}}(X_0(N)_{\overline{\mathbb{Q}}},\mathbb{Z}_p(1))_{\mathfrak{m}^\epsilon} \hookrightarrow H^1_{{\mathrm{\acute{e}t}}}(Y_0(N)_{\overline{\mathbb{Q}}},\mathbb{Z}_p(1))_{\mathfrak{m}^\epsilon,\mathrm{DM}}
\]
has the form, as $\mathbb{T}^{\epsilon,0}_N$-modules, of
\[
\mathbb{T}_N^{\epsilon,0} \oplus I^{\epsilon,0} \hookrightarrow \mathbb{T}_N^{\epsilon,0} \oplus \mathbb{T}_N^{\epsilon,0}.
\]
Hence $H^1_{{\mathrm{\acute{e}t}}}(X_0(N)_{\overline{\mathbb{Q}}},\mathbb{Z}_p(1))_{\mathfrak{m}^\epsilon}$ is free if and only if $I^{\epsilon,0}$ is principal.
\subsection{Good primes} We also prove analogues of Mazur's good prime criterion (statement (5) of \S \ref{subsec:Mazur results}). In the situation of Theorem \ref{thm:main r primes}, the list of conditions is cumbersome to write down, so we are not precise here. We refer the reader to \S\ref{subsubsec:r primes examples} for some specific examples and \S \ref{subsec:good primes for r primes} for the complete criterion.
\begin{thm}
\label{thm:main good primes r}
With the assumptions of Theorem \ref{thm:main r primes}, we can specify sufficient conditions on a set of primes $q_1,\dots, q_{s+\delta}$ not dividing $N$ such that the elements $T_{q_1}-(q_1+1),\dots, T_{q_{s+\delta}}-(q_{s+\delta}+1)$ together generate $I^\epsilon$.
\end{thm}
\begin{rem}
We can also write down a necessary and sufficient condition, but cannot compute with it, so we doubt its practical use.
\end{rem}
In the situation of Theorem \ref{thm:main 2 primes new}, the sufficient condition is very simple to state, and also necessary. To state it, we let
\[
\log_\ell:(\mathbb{Z}/\ell\mathbb{Z})^\times \twoheadrightarrow \mathbb{F}_p
\]
denote an arbitrary surjective homomorphism, for any prime $\ell$ that is congruent to $1$ modulo $p$ (the statement below will not depend on the choice).
\begin{thm}
\label{thm:main good primes 2}
With the assumptions of Theorem \ref{thm:main 2 primes new}, fix primes $q_0,q_1$ not dividing $N$ (but possibly dividing $p$). Then the elements $T_{q_0}-(q_0+1)$ and $T_{q_1}-(q_1+1)$ together generate $I^\epsilon$ if and only if
\[
(q_0-1)(q_1-1)\det\ttmat{\log_{\ell_0}(q_0)}{\log_{\ell_0}(q_1)}{\log_{\ell_1}(q_0)}{\log_{\ell_1}(q_1)} \in \mathbb{F}_p^\times.
\]
\end{thm}
\begin{rem}
For a single prime $\ell$, Mazur's criterion for $q$ to be a good prime can be written as $(q-1)\log_\ell(q) \in \mathbb{F}_p^\times$, so this is a natural generalization.
\end{rem}
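Whether the determinant vanishes does not depend on the choices of $\log_{\ell_0}$ and $\log_{\ell_1}$ (a different choice rescales a row by a unit of $\mathbb{F}_p$), so the criterion is effectively computable. The following brute-force Python sketch (our illustration; all function names are ours, and a realization of $\log_\ell$ is obtained by taking a discrete logarithm to a fixed generator and reducing modulo $p$) implements it for small primes:

```python
def primitive_root(ell):
    """Smallest generator of the cyclic group (Z/ell)^*, ell prime."""
    factors, m, d = [], ell - 1, 2
    while d * d <= m:
        if m % d == 0:
            factors.append(d)
            while m % d == 0:
                m //= d
        d += 1
    if m > 1:
        factors.append(m)
    g = 2
    while any(pow(g, (ell - 1) // q, ell) == 1 for q in factors):
        g += 1
    return g

def log_mod_p(a, ell, p):
    """One choice of surjection log_ell : (Z/ell)^* ->> F_p, assuming
    p | ell - 1 and gcd(a, ell) = 1: discrete log of a with respect to
    a fixed generator, reduced modulo p (brute force, small ell only)."""
    g = primitive_root(ell)
    x, k = 1, 0
    while x != a % ell:
        x = (x * g) % ell
        k += 1
    return k % p

def good_pair(q0, q1, ell0, ell1, p):
    """Criterion of the theorem: (q0-1)(q1-1) det(log matrix) != 0 in F_p."""
    det = (log_mod_p(q0, ell0, p) * log_mod_p(q1, ell1, p)
           - log_mod_p(q1, ell0, p) * log_mod_p(q0, ell1, p)) % p
    return (q0 - 1) % p != 0 and (q1 - 1) % p != 0 and det != 0
```

For $p=5$, $\ell_0=11$, $\ell_1=41$, this recovers the computations in \S\ref{subsubsec:2 primes new examples}: the pair $(q_0,q_1)=(2,7)$ satisfies the criterion, while $(2,13)$ does not.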
\subsection{Relation to Hida Hecke algebras} The reader will note that we have allowed for the possibility that $p\mid N$. When $p\mid N$, in Appendix \ref{appendix:U and w}, we also consider a related Hecke algebra $\mathbb{T}_{N,H}^\epsilon$ that contains $U_p$ instead of $w_p$ (but still has all other $w_\ell$ for $\ell \mid \frac{N}{p}$) and show that, in many cases we consider, $\mathbb{T}_{N,H}^\epsilon= \mathbb{T}_{N}^\epsilon$.
This is related to Hida theory because (as is well-known for the Hecke algebra $\mathbb{T}_{N,U}^\epsilon$) there is a Hida-theoretic Hecke algebra $\mathbb{T}_\Lambda^\epsilon$, free of finite rank as a module over $\Lambda \simeq \mathbb{Z}_p {[\![} T {]\!]}$, that satisfies a control theorem with respect to $\mathbb{T}_{N,H}^\epsilon$: there is an element $\omega_2 \in \Lambda$ such that $\mathbb{T}_{N,H}^\epsilon = \mathbb{T}_\Lambda^\epsilon/\omega_2 \mathbb{T}_\Lambda^\epsilon$. (A proof of this control theorem will appear in forthcoming work of the first-named author with Rob Pollack.)
Our results about $\mathbb{T}_N^\epsilon$ (including its Gorensteinness and the number of generators of its Eisenstein ideal) then translate directly to $\mathbb{T}_\Lambda^\epsilon$. These results can subsequently be specialized to higher weights, as is usual in Hida theory.
\subsection{Method of pseudodeformation theory}
\label{subsec:psdef method}
Like our previous work \cite{WWE3}, the method of proof of the theorems in \S \ref{subsec:main thms} is to construct a pseudodeformation ring $R$ and prove that $R =\mathbb{T}$ using the numerical criterion. The ring $R$ is the deformation ring of the residual pseudorepresentation ${\bar D} = \psi(\omega \oplus 1)$ associated to $E^\epsilon_{2,N}$ that is universal subject to certain conditions (here $\psi$ is the functor associating a pseudorepresentation to a representation, and $\omega$ is the mod $p$ cyclotomic character). These conditions include the conditions considered in our previous works \cite{WWE1,WWE3} (having cyclotomic determinant, being flat at $p$, being ordinary at $p$), but they also include new conditions at $\ell$ dividing $N$ that are of a different flavor, as we now explain.
\subsubsection{The Steinberg at $\ell$ condition}
Fix $\ell=\ell_i \mid N$, assume $\ell \neq p$, and let $G_\ell\subset G_\mathbb{Q}$ be a decomposition group at $\ell$. Let $f$ be a normalized cuspidal eigenform of weight 2 and level $\Gamma_0(N)$, and let $\rho_f:G_\mathbb{Q} \to \GL_2(\mathcal{O}_f)$ be the associated Galois representation, where $\mathcal{O}_f$ is a finite extension of $\mathbb{Z}_p$.
If $f$ is old at $\ell$, then $\rho_f|_{G_\ell}$ is unramified. If $f$ is new at $\ell$, we have
\begin{equation}
\label{eq:f on G_ell}
\rho_f|_{G_\ell} \sim \ttmat{ \lambda(a_\ell(f))\kcyc}{*}{0}{\lambda(a_\ell(f))}
\end{equation}
where $\lambda(x)$ is the unramified character of $G_\ell$ sending a Frobenius element $\sigma_\ell$ to $x$, and $a_\ell(f)$ is the coefficient of $q^\ell$ in the $q$-expansion of $f$ (see Lemma \ref{lem:normalization of bT0}). Note that since $\det(\rho_f)=\kcyc$, we have $\lambda(a_\ell(f))^2=1$. In fact, $a_\ell(f)$ is the negative of the $w_\ell$-eigenvalue of $f$. We call representations as in \eqref{eq:f on G_ell} ``$\pm1$-Steinberg at $\ell$'', where $\pm 1=\mp a_\ell(f)$ is the $w_\ell$-eigenvalue of $f$.
Now assume in addition that $f \in S_2(N)_{\mathfrak{m}^\epsilon}$, so that the semi-simplification of the residual representation of $\rho_f$ is $\omega \oplus 1$ and $w_\ell f=\epsilon f$, where $\epsilon=\epsilon_i$. We want to impose a condition on pseudorepresentations that encapsulates the condition that $\rho_f|_{G_\ell}$ is either unramified or $\epsilon$-Steinberg. The main observation is the following, and is inspired by the work of Calegari-Specter \cite{CS2016}.
\begin{obs}
Suppose that $\rho: G_\ell \to \GL_2(\mathcal{O})$ is either unramified or $\epsilon$-Steinberg. Then
\begin{equation}
\label{eq:unram-or-stein}
(\rho(\sigma)-(\lambda(-\epsilon)\kcyc)(\sigma))(\rho(\tau)-\lambda(-\epsilon)(\tau))=0
\end{equation}
for all $\sigma, \tau \in G_\ell$ with at least one of $\sigma$ or $\tau$ in the inertia group $I_\ell$.
\end{obs}
This is clear if $\rho$ is unramified: the factor involving whichever of $\sigma$ or $\tau$ lies in $I_\ell$ vanishes. If $\rho$ is $\epsilon$-Steinberg, then the product \eqref{eq:unram-or-stein} has the form
\[
\ttmat{0}{*}{0}{*} \ttmat{*}{*}{0}{0}
\]
and any such product is zero (note that the order is important!).
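This vanishing, and the failure of vanishing in the opposite order, can be sanity-checked mechanically. The following Python snippet (our illustration only) multiplies generic $2\times 2$ matrices of the two displayed shapes:

```python
def mat_mul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Generic matrices of the two shapes in the Steinberg case:
# the first factor has zero left column, the second has zero bottom row.
for a, b, c, d in [(1, 2, 3, 4), (-5, 7, 0, 9), (2, 0, 11, -1)]:
    left = [[0, a], [0, b]]
    right = [[c, d], [0, 0]]
    assert mat_mul(left, right) == [[0, 0], [0, 0]]

# The opposite order need not vanish, illustrating that order matters:
assert mat_mul([[1, 2], [0, 0]], [[0, 1], [0, 1]]) != [[0, 0], [0, 0]]
```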
To impose the unramified-or-$\epsilon$-Steinberg condition on the pseudodeformation ring $R$, we impose the condition \eqref{eq:unram-or-stein} on the universal Cayley-Hamilton algebra, using the theory of \cite{WWE4} (see \S \ref{sec:pseudodef ring}).
\subsubsection{The ordinary at $p$ condition} When $p \mid N$ and $f \in S_2(N)_{\mathfrak{m}^\epsilon}$ is a newform, we have $\epsilon_p = -1$ and the representation $\rho_f\vert_{G_p}$ is ordinary. In this paper, we define ``ordinary pseudorepresentation'' exactly as we define the unramified-or-$\epsilon$-Steinberg condition, following ideas of Calegari-Specter. In our previous paper \cite{WWE1}, we gave a different definition of ordinary, and we prove in this paper that the two definitions coincide (see Lemma \ref{lem:WWE ord = CS ord}). This gives an answer to a question of Calegari-Specter \cite[pg.\ 2]{CS2016}.
\subsection{Examples}
\label{subsec:examples}
We conclude this introduction with examples that illustrate the theorems and show that the hypotheses are necessary. For examples where we show that $\mathbb{T}_N^\epsilon$ is not Gorenstein, it is helpful to note that $\mathbb{T}_N^\epsilon$ is Gorenstein if and only if $\mathrm{Soc}(\mathbb{T}_N^\epsilon/p\mathbb{T}_N^\epsilon)$ is 1-dimensional, where $\mathrm{Soc}(\mathbb{T}_N^\epsilon/p\mathbb{T}_N^\epsilon)$ is the annihilator of the maximal ideal (see \S\ref{subsec:Gorenstein defect}).
All computations were carried out using algorithms we have written for the Sage computer algebra system \cite{SAGE}.
\subsubsection{Examples illustrating Theorem \ref{thm:main r primes}}
\label{subsubsec:r primes examples}
\begin{eg}
Let $p=5$, $\ell_0=41$, $\ell_1=19$, so $N=19\cdot 41$, and let $\epsilon=(-1,1)$. In this case, we compute that $K_{19}$ is the field cut out by
\begin{align*}
x^{20} - x^{19} & - 7x^{18} + 21x^{17} + 22x^{16} + 223x^{15} - 226x^{14} - 1587x^{13} + 4621x^{12} \\
& + 5202x^{11} - 91x^{10} - 3142x^9 - 439x^8 - 2143x^7 - 2156x^6 - 58x^5 \\
& + 1237x^4 + 414x^3 + 148x^2 + 56x + 16
\end{align*}
and that $41$ splits completely in $K_{19}$. The theorem says that $I^\epsilon$ has 2 generators. Moreover, Theorem \ref{thm:main good primes r} says, in this case, that $I^\epsilon$ is generated by $T_{q_0}-(q_0+1)$ and $T_{q_1}-(q_1+1)$ where $q_0$ is a good prime for $(41,5)$ and where $q_1$ satisfies
\begin{enumerate}[label=(\alph*)]
\item $q_1$ is a prime such that $q_1 \equiv 1 \pmod{5}$,
\item $41$ is not a $5$-th power modulo $q_1$, and
\item $q_1$ does not split completely in $K_{19}$.
\end{enumerate}
A quick search yields that $q_0=2$ and $q_1=11$ satisfy these criteria. And indeed, we compute that there is an isomorphism
\[
\frac{\mathbb{F}_5[x,y]}{(y^2-2x^2,xy)} \buildrel\sim\over\lra \mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon, \quad (x,y) \mapsto (T_{2}-3, T_{11}-12).
\]
\end{eg}
\begin{eg}
Let $p=5$, $\ell_0=11$, $\ell_1=19$, $\ell_2=29$, so $N=11\cdot 19 \cdot 29$, and let $\epsilon=(-1,1,1)$. In this case, $11$ does not split completely in either of the fields $K_{19},K_{29}$, and the theorem says that $I^\epsilon$ has 2 generators. Moreover, Theorem \ref{thm:main good primes r} says, in this case, that $I^\epsilon$ is generated by $T_{q_0}-(q_0+1)$ and $T_{q_1}-(q_1+1)$ where $q_0$ is a good prime for $(11,5)$ (for example $q_0=2$) and where the prime $q_1$ satisfies:
\begin{enumerate}[label=(\alph*)]
\item $q_1 \equiv 1 \pmod{5}$,
\item $11$ is not a $5$-th power modulo $q_1$,
\item $q_1$ does not split completely in $K_{19}$, and
\item $q_1$ does split completely in $K_{29}$.
\end{enumerate}
In this case, $K_{19}$ is the field computed in the previous example and $K_{29}$ is the field cut out by
\begin{align*}
x^{20} - x^{19}& - 11x^{18} + 9x^{17} + 124x^{16} - 223x^{15} - 1244x^{14} + 2111x^{13} + 14291x^{12} \\
&- 19804x^{11} + 7169x^{10} + 7938x^9 - 10937x^8 + 15603x^7 - 9472x^6 \\
&- 2582x^5 + 8233x^4 - 3732x^3 + 1808x^2 - 832x + 256.
\end{align*}
A quick search finds that $q_1=181$ satisfies the conditions (a)-(d). And indeed, we compute that there is an isomorphism
\[
\frac{\mathbb{F}_5[x,y]}{(x^3+2x^2, y^3, xy+y^2)} \buildrel\sim\over\lra \mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon, \quad (x,y) \mapsto (T_2-3,T_{181}-182).
\]
Note that these conditions are far from necessary. For example $T_2-3$ and $T_7-8$ also generate the Eisenstein ideal.
\end{eg}
\subsubsection{Examples related to Theorem \ref{thm:main 2 primes no new}}
\label{subsubsec:2 primes no new examples}
We give examples illustrating that the assumption is necessary.
In fact, it seems that the assumption is necessary even for the Gorensteinness of $\mathbb{T}_N^\epsilon$.
\begin{eg}
Let $p=5$, $\ell_0=11$, $\ell_1=23$, so $N=11\cdot 23$, and let $\epsilon=(-1,-1)$. Then $\ell_1 \equiv 1 \pmod{11}$, so $\ell_1$ is a $5$-th power modulo $\ell_0$ and the theorem does not apply. We can compute that
\[
\frac{\mathbb{F}_5[x,y]}{(x^2,xy,y^2)} \buildrel\sim\over\lra \mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon, \quad (x,y) \mapsto (T_2 - 3, T_3-4)
\]
has dimension 3. Since $\mathbb{T}_{11}^0=\mathbb{Z}_5$, we see that the space of oldforms has dimension 2, so there must be a newform at level $N$. Moreover, $\mathrm{Soc}(\mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon)=x\mathbb{F}_5\oplus y \mathbb{F}_5$, so $\mathbb{T}_N^\epsilon$ is not Gorenstein.
\end{eg}
\begin{eg}
Let $p=5$, $\ell_0=31$, $\ell_1=5$, so $N=5 \cdot 31$, and let $\epsilon=(-1,-1)$. Then note that $\ell_1=5 \equiv 7^5 \pmod{31}$, so the theorem does not apply. We can compute that
\[
\frac{\mathbb{F}_5[x,y]}{(x^3,xy,y^2)} \buildrel\sim\over\lra \mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon, \quad (x,y) \mapsto (T_2-3, 2T_2+T_3)
\]
has dimension 4. Since $\mathrm{rank}_{\mathbb{Z}_5}(\mathbb{T}_{31}^0)=2$, we see that the space of oldforms has dimension 3, and there must be a newform at level $N$. Moreover, $\mathrm{Soc}(\mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon)=x^2\mathbb{F}_5\oplus y \mathbb{F}_5$, so $\mathbb{T}_N^\epsilon$ is not Gorenstein.
\end{eg}
In this last example, the reader may think that $\ell_0=31$ is special because the rank of $\mathbb{T}_{31}^0$ is 2. However, we can take $p=\ell_1=5$ and $\ell_0=191$ (note that $\mathbb{T}^0_{191}=\mathbb{Z}_p$). Noting that $5 \equiv 18^5 \pmod{191}$, we again see that the theorem does not apply, and we can compute that $\mathbb{T}_N^\epsilon$ is also not Gorenstein in this case.
\subsubsection{Examples related to Theorem \ref{thm:main 2 primes new}}
\label{subsubsec:2 primes new examples} First, we give examples illustrating that the assumption is necessary. Again, it seems that the assumption is necessary even for the Gorenstein property of $\mathbb{T}_N^\epsilon$.
\begin{eg}
Let $p=5$, $\ell_0=11$, $\ell_1=61$, so $N=11 \cdot 61$, and let $\epsilon=(-1,-1)$. Then note that $11 \equiv 8^5 \pmod{61}$ so the theorem does not apply (but note that $61$ is not a $5$-th power modulo $11$). We can compute that
\[
\frac{\mathbb{F}_5[x,y]}{(x^2,xy,y^3)} \isoto \mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon, \quad (x,y) \mapsto (T_3-T_2-1, T_2-3).
\]
We see that $\mathrm{Soc}(\mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon)=x\mathbb{F}_5 \oplus y^2\mathbb{F}_5$, so $\mathbb{T}_N^\epsilon$ is not Gorenstein.
\end{eg}
\begin{eg}
Let $p=5$, $\ell_0=31$, $\ell_1=191$, so $N=31 \cdot 191$, and let $\epsilon=(-1,-1)$. We have $191 \equiv 7^5 \pmod{31}$ and $31 \equiv 61^5 \pmod{191}$, so the assumption of the theorem fails most spectacularly. We can compute that
\begin{align*}
\frac{\mathbb{F}_5[x,y]}{((x,y)^4,2x^3+xy^2+3y^3,x^3-x^2y+2y^3)} &\isoto \mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon, \\ (x,y) &\mapsto (T_2-3, T_7-8).
\end{align*}
Letting $\bar{\mathfrak{m}}^\epsilon$ denote the maximal ideal of $\mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon$, we see that $(\bar{\mathfrak{m}}^\epsilon)^4=0$ but that $(\bar{\mathfrak{m}}^\epsilon)^3$ is 2-dimensional, so $\dim_{\mathbb{F}_5} \mathrm{Soc}(\mathbb{T}_N^\epsilon/5\mathbb{T}_N^\epsilon) >1$ and $\mathbb{T}_N^\epsilon$ is not Gorenstein.
\end{eg}
Finally, we give an example illustrating Theorem \ref{thm:main good primes 2}.
\begin{eg}
Let $p=5$, $\ell_0=11$, $\ell_1=41$, so $N=11\cdot 41$, and let $\epsilon=(-1,-1)$. We see that neither of $11$ or $41$ is a $5$-th power modulo the other, so Theorem \ref{thm:main good primes 2} applies. We consider the primes $2,3,7$ and $13$, none of which are congruent to 1 modulo $5$.
\begin{center}
\begin{tabular}{c|c|c}
$q$ & Is $5$-th power modulo $11$? & Is $5$-th power modulo $41$?\\ \hline
$2$ & No & No \\
$3$ & No & Yes \\
$7$ & No & No \\
$13$ & No & No \\
\end{tabular}
\end{center}
From this we see that
\[
\det\ttmat{\log_{11}(3)}{\log_{11}(q)}{\log_{41}(3)}{\log_{41}(q)}=\log_{11}(3)\cdot \log_{41}(q) \ne 0
\]
for any $q\in\{2,7,13\}$. By Theorem \ref{thm:main good primes 2}, $\{T_3-4,T_q-(q+1)\}$ generates $I^\epsilon$ for any $q\in\{2,7,13\}$, and we can see by direct computation that this is true.
More subtly, we can compute that
\[
\det\ttmat{\log_{11}(2)}{\log_{11}(7)}{\log_{41}(2)}{\log_{41}(7)} \ne 0, \quad \det\ttmat{\log_{11}(2)}{\log_{11}(13)}{\log_{41}(2)}{\log_{41}(13)} = 0.
\]
By Theorem \ref{thm:main good primes 2}, this implies that $\{T_2-3,T_7-8\}$ generates $I^\epsilon$, but that $\{T_2-3,T_{13}-14\}$ does not, and we again verify this by direct computation.
\end{eg}
\subsection{Acknowledgements}
We thank Akshay Venkatesh for interesting questions that stimulated this work, and Ken Ribet for his inspiring talk \cite{ribet2015}. We thank Hwajong Yoo for bringing our attention to his work on the subject, and Frank Calegari for clarifying the provenance of Ribet's Conjecture. We thank Shekhar Khare for helpful discussions about the Steinberg condition, Matt Emerton for encouragement and for asking us about implications for newforms, and Rob Pollack for asking us about the case $p \mid N$.
We thank Jo\"el Bella\"iche, Tobias Berger, Frank Calegari, K\c{e}stutis \v{C}esnavi\v{c}ius, Emmanuel Lecouturier, Barry Mazur, Rob Pollack, Ken Ribet, and Hwajong Yoo and for comments on and corrections to an early draft.
P.W.\ was supported by the National Science Foundation under the Mathematical Sciences Postdoctoral Research Fellowship No.~1606255. C.W.E.\ was supported by Engineering and Physical Sciences Research Council grant EP/L025485/1.
\subsection{Notation and Conventions}
\label{subsec:notation}
We let $\partial_{ij}$ denote the Kronecker delta, which is $1$ if $i=j$ and $0$ otherwise.
For each prime $\ell\mid Np$, we fix $G_\ell \subset G_\mathbb{Q}$, a decomposition group at $\ell$, and let $I_\ell \subset G_\ell$ denote the inertia subgroup. We fix elements $\sigma_\ell \in G_{\ell}$ whose image in $G_{\ell}/I_{\ell} \cong \mathrm{Gal}(\overline{\mathbb{F}}_{\ell}/\mathbb{F}_{\ell})$ is the Frobenius. For $\ell \ne p$, we fix elements $\gamma_\ell \in I_{\ell}$ whose image in the maximal pro-$p$ quotient $I_{\ell}^{\mathrm{pro}-p}$ (which is well-known to be pro-cyclic) is a topological generator. Let $\gamma_p \in I_p$ be an element such that the image of $\gamma_p$ in $\mathrm{Gal}(\mathbb{Q}_p^\mathrm{nr}(\sqrt[p]{p})/\mathbb{Q}_p^\mathrm{nr})$ is non-trivial and $\omega(\gamma_p) =1$. When $\ell = \ell_i$ for $i \in \{0,\dotsc, r\}$ (i.e.\ $\ell \mid N$), we also write $\sigma_i := \sigma_{\ell_i}$ and $\gamma_i := \gamma_{\ell_i}$ for these elements. We write $G_{\mathbb{Q},S}$ for the Galois group of the maximal extension of $\mathbb{Q}$ unramified outside of the set $S$ of places of $\mathbb{Q}$ supporting $Np\infty$, and use the induced maps $G_\ell \rightarrow G_{\mathbb{Q},S}$. For primes $q \nmid Np$, we write $\mathrm{Fr}_q \in G_{\mathbb{Q},S}$ for a Frobenius element at $q$.
As in \cite{WWE4}, all representations, Cayley-Hamilton representations, actions on modules, pseudorepresentations, and cochains/cocycles/cohomology of profinite groups $G$ discussed here are implicitly meant to be continuous, without further comment. All of the targets are finitely generated $A$-modules for some Noetherian local (continuous) $\mathbb{Z}_p$-algebra $A$ with ideal of definition $I$, and the $I$-adic topology is used on the target. Profinite groups used in the sequel satisfy the $\Phi_p$-finiteness condition (i.e.\ the maximal pro-$p$ quotient of every finite-index subgroup is topologically finitely generated), which allows the theory of \cite{WWE4} to be applied.
We write
\[
H^i(\mathbb{Z}[1/Np], M)=H^i(C^\bullet(\mathbb{Z}[1/Np],M))=\frac{Z^i(\mathbb{Z}[1/Np],M)}{B^i(\mathbb{Z}[1/Np],M)}
\]
for (continuous) cohomology of a $G_{\mathbb{Q},S}$-module $M$, together with this notation for cochains, cocycles, and coboundaries. We write $x_1 \smile x_2 \in C^*(\mathbb{Z}[1/Np], M_1 \otimes M_2)$ for the cup product of $x_i \in C^*(\mathbb{Z}[1/Np],M_i)$, and $a_1 \cup a_2 \in H^*(\mathbb{Z}[1/Np], M_1 \otimes M_2)$ for cup product of cohomology classes $a_i \in H^*(\mathbb{Z}[1/Np],M_i)$.
\section{Modular forms}
\label{sec:modular}
In this section, we recall some results about modular curves and modular forms. Our reference is the paper of Ohta \cite{ohta2014}.
\subsection{Modular curves, modular forms, and Hecke algebras} The statements given here are all well-known. We review them here to fix notation.
\subsubsection{Modular curves}
Let $Y_0(N)_{/\mathbb{Z}_p}$ be the $\mathbb{Z}_p$-scheme representing the functor taking a $\mathbb{Z}_p$-scheme $S$ to the set of pairs $(E,C)$, where $E$ is an elliptic curve over $S$ and $C \subset E[N]$ is a finite flat subgroup scheme of rank $N$ that is cyclic in the sense of Katz-Mazur \cite{KM1985}. Let $X_0(N)_{/\mathbb{Z}_p}$ be the usual compactification of $Y_0(N)_{/\mathbb{Z}_p}$, and let $\{\mathrm{cusps}\}$ denote the complement of $Y_0(N)_{/\mathbb{Z}_p}$ in $X_0(N)_{/\mathbb{Z}_p}$, considered as an effective Cartier divisor on $X_0(N)_{/\mathbb{Z}_p}$. Finally, let $X_0(N)=X_0(N)_{/\mathbb{Z}_p} \otimes \mathbb{Q}_p$.
\subsubsection{Modular forms and Hecke algebras}
\label{subsubsec:mf}
The map $X_0(N)_{/\mathbb{Z}_p} \to \Spec(\mathbb{Z}_p)$ is known to be a local complete intersection (LCI), and we let $\Omega$ be the sheaf of regular differentials. Let
\[
S_2(N;\mathbb{Z}_p) = H^0(X_0(N)_{/\mathbb{Z}_p},\Omega), \quad M_2(N;\mathbb{Z}_p) = H^0(X_0(N)_{/\mathbb{Z}_p},\Omega(\{\mathrm{cusps}\}))
\]
Let $\mathbb{T}_N'$ and $\mathbb{T}_N'^0$ be the subalgebras of
\[
\mathrm{End}_{\mathbb{Z}_p}(M_2(N; \mathbb{Z}_p)), \quad \mathrm{End}_{\mathbb{Z}_p}(S_2(N; \mathbb{Z}_p)),
\]
respectively, generated by the standard Hecke operators $T_n$ with $(N,n)=1$, and all Atkin-Lehner operators $w_\ell$ for $\ell\mid N$ (we do not include any $U_\ell$ for $\ell \mid N$). These are commutative $\mathbb{Z}_p$-algebras that are semi-simple after inverting $p$ (see, e.g.~\cite{AL1970}).
\subsubsection{Eisenstein series and Eisenstein parts}
For each $\epsilon \in \{\pm 1\}^{r+1} \setminus \{(1,1,\dots,1)\}$, there is an element $E^\epsilon_{2,N} \in M_2(N;\mathbb{Z}_p)$ that is an eigenform for all $T_n$ with $(N,n)=1$, and has $q$-expansion
\begin{equation}
\label{eq:eisenstein series}
E^\epsilon_{2,N} = -\frac{1}{24} \prod_{i=0}^r (\epsilon_i\ell_i +1)+ \sum_{n=1}^\infty a_n q^n
\end{equation}
where $a_n = \sum_{0<d\mid n} d$ when $\gcd(n,N)=1$ (in particular, $a_1=1$), and $w_{\ell_i}E^\epsilon_{2,N} = \epsilon_i E^\epsilon_{2,N}$ (see \cite[Lem.\ 2.3.4]{ohta2014}).
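To make the constant term concrete in the simplest case, suppose $N = \ell_0$ is prime, so that $r=0$ and necessarily $\epsilon = (-1)$. Then \eqref{eq:eisenstein series} gives
\[
a_0(E^{(-1)}_{2,\ell_0}) = -\frac{1}{24}(-\ell_0+1) = \frac{\ell_0-1}{24},
\]
the constant term familiar from Mazur's study of the Eisenstein ideal at prime level.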
Let ${I'}^\epsilon=\Ann_{\mathbb{T}_N'}(E^\epsilon_{2,N})$, and let $\mathbb{T}_N^\epsilon$ be the completion of ${\mathbb{T}_N'}$ at the maximal ideal $({I'}^\epsilon,p)$, and let $\mathbb{T}_N^{0,\epsilon}=\mathbb{T}_N'^0\otimes_{\mathbb{T}_N'}\mathbb{T}_N^\epsilon$. Let $I^\epsilon = {I'}^\epsilon\mathbb{T}_N^\epsilon$ and let $I^{0,\epsilon}$ be the image of $I^\epsilon$ in $\mathbb{T}_N^{0,\epsilon}$. For a $\mathbb{T}_N'$-module $M$, let $M^\epsilon_\mathrm{Eis}=M \otimes_{\mathbb{T}_N'} \mathbb{T}_N^\epsilon$. The map $\mathbb{T}_N^\epsilon \twoheadrightarrow \mathbb{Z}_p$ induced by $E^\epsilon_{2,N}$ is a surjective ring homomorphism with kernel $I^\epsilon$. We refer to this as the \emph{augmentation map} for $\mathbb{T}_N^\epsilon$.
Note that we have $w_{\ell_i} = \epsilon_i$ as elements of $\mathbb{T}_N^\epsilon$. Indeed, this follows from $w_{\ell_i}^2=1$, $w_{\ell_i} - \epsilon_i \in I^\epsilon$, and $p \ne 2$: consider $(w_{\ell_i}-\epsilon_i)(w_{\ell_i}+\epsilon_i)=0$ and observe that $w_{\ell_i}+\epsilon_i \in (\mathbb{T}_N^\epsilon)^\times$. Consequently, $\mathbb{T}_N^\epsilon$ is generated as a $\mathbb{Z}_p$-algebra by $T_q$ for $q \nmid N$.
If $p\nmid N$, let $U_p \in \mathbb{T}_N^\epsilon$ denote the unit root of the polynomial
\[
X^2-T_pX+p=0,
\]
which exists and is unique by Hensel's lemma. Since $T_p-(p+1) \in I^\epsilon$, we see that $U_p - 1 \in I^\epsilon$. Moreover, we see that $T_p = U_p + p U_p^{-1}$.
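For clarity, the Hensel factorization behind these statements is
\[
X^2 - T_pX + p = (X - U_p)(X - pU_p^{-1}) \in \mathbb{T}_N^\epsilon[X],
\]
where $U_p$ is the root that is a unit (it is congruent to $1$ modulo the maximal ideal, since $T_p \equiv p+1$ and the polynomial reduces to $X(X-1)$); comparing coefficients recovers $T_p = U_p + pU_p^{-1}$.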
\subsubsection{Duality}
As in \cite[Thm.\ 2.4.6]{ohta2014}, there are perfect pairings of free $\mathbb{Z}_p$-modules
\begin{equation}
\label{eq:M and T duality}
M_2(N;\mathbb{Z}_p)^\epsilon_\mathrm{Eis} \times \mathbb{T}_N^\epsilon \longrightarrow \mathbb{Z}_p, \quad S_2(N;\mathbb{Z}_p)^\epsilon_\mathrm{Eis} \times \mathbb{T}_N^{0,\epsilon} \longrightarrow \mathbb{Z}_p
\end{equation}
given by $(f,t) \mapsto a_1(t\cdot f)$, where $a_1(-)$ refers to the coefficient of $q$ in the $q$-expansion. In particular, $M_2(N;\mathbb{Z}_p)^\epsilon_\mathrm{Eis}$ (resp.\ $S_2(N;\mathbb{Z}_p)_\mathrm{Eis}^\epsilon$) is a dualizing $\mathbb{T}_N^\epsilon$-module (resp.\ $\mathbb{T}_N^{0,\epsilon}$-module).
\subsubsection{Oldforms and stabilizations}
\label{subsub:stabilizations}
If $\ell \mid N$ is a prime and $f \in S_2(N/\ell; \mathbb{Z}_p)$ is an eigenform for all $T_n$ with $(n,N/\ell)=1$, then the subspace
\[
\{g \in S_2(N;\mathbb{Z}_p) \ : \ a_n(g)=a_n(f) \text{ for all } (n,N/\ell)=1\}
\]
has rank two, with basis $f(z),f(\ell z)$. If we let $f_\pm(z) = f(z) \pm \ell f(\ell z)$, then $w_\ell f_\pm(z)=\pm f_\pm (z)$. Note that, since $p \ne 2$, we have $f_+ \not \equiv f_- \pmod{p}$. In particular, if $\epsilon'\in \{\pm 1\}^r$ is the tuple obtained from $\epsilon$ by deleting the entry corresponding to $\ell$, then there are injective homomorphisms given by $f \mapsto f_{\epsilon_\ell}$,
\[
M_2(N/\ell;\mathbb{Z}_p)^{\epsilon'}_\mathrm{Eis} \hookrightarrow M_2(N;\mathbb{Z}_p)^{\epsilon}_\mathrm{Eis} \quad \text{and} \quad S_2(N/\ell;\mathbb{Z}_p)^{\epsilon'}_\mathrm{Eis} \hookrightarrow S_2(N;\mathbb{Z}_p)^{\epsilon}_\mathrm{Eis}.
\]
\subsection{Congruence number} We recall this theorem of Ohta, and related results.
\begin{thm}[Ohta]
\label{thm:congruence number}
There is an isomorphism $\mathbb{T}_N^{0,\epsilon}/I^{0,\epsilon} \cong \mathbb{Z}_p/a_0(E^\epsilon_{2,N})\mathbb{Z}_p$.
\end{thm}
This is \cite[Thm.\ 3.1.3]{ohta2014}. His method of proof can actually be used to give the following stronger result, exactly as in \cite[Lem.\ 3.2.2]{WWE3}. See Lemma \ref{lem:fiber prods} for a discussion of fiber products of rings.
\begin{lem}
\label{lem:bT is a pull-back}
The composition of the augmentation map $\mathbb{T}_N^\epsilon \to \mathbb{Z}_p$ with the quotient map $\mathbb{Z}_p \to \mathbb{Z}_p/a_0(E^\epsilon_{2,N})\mathbb{Z}_p$ factors through $\mathbb{T}_N^{0,\epsilon}$ and induces an isomorphism
\[
\mathbb{T}_N^\epsilon \buildrel\sim\over\lra \mathbb{T}_N^{0,\epsilon} \times_{\mathbb{Z}_p/a_0(E^\epsilon_{2,N})\mathbb{Z}_p} \mathbb{Z}_p.
\]
In particular, $\ker(\mathbb{T}_N^{\epsilon} \to \mathbb{T}_N^{0,\epsilon})=\Ann_{\mathbb{T}_N^\epsilon}(I^\epsilon)$.
\end{lem}
\subsection{Eigenforms and associated Galois representations}
Let $\nu: \mathbb{T}_N^{0,\epsilon} \hookrightarrow \tilde{\mathbb{T}}_N^{0,\epsilon}$ denote the normalization of $\mathbb{T}^{0, \epsilon}_N$.
\begin{lem}
\label{lem:normalization of bT0}
We record facts about $\tilde{\mathbb{T}}_N^{0,\epsilon}$ and associated Galois representations.
\begin{enumerate}[leftmargin=2em]
\item Letting $q$ vary over primes $q \nmid Np$, there is an isomorphism
\[
h: \tilde{\mathbb{T}}_N^{0,\epsilon} \buildrel\sim\over\lra \bigoplus_{f\in \Sigma} \mathcal{O}_f, \quad \nu(T_q) \mapsto (a_q(f))_{f \in \Sigma},
\]
where $\Sigma \subset S_2(N;\overline{\mathbb{Q}}_p)^\epsilon_\mathrm{Eis}$ is the set of normalized eigenforms, and $\mathcal{O}_f$ is the valuation ring of the finite extension $\mathbb{Q}_p(a_q(f)_{q \nmid Np})/\mathbb{Q}_p$.
\item For each $f \in \Sigma$, there is an absolutely irreducible representation $\rho_f:G_{\mathbb{Q},S} \to \GL_2(\mathcal{O}_f[1/p])$ such that the characteristic polynomial of $\rho_f(\mathrm{Fr}_q)$ is $X^2-a_q(f) X +q$ for any $q \nmid Np$.
\item Assume $\ell_i \neq p$. The representation $\rho_f|_{G_{\ell_i}}$ is unramified if $f$ is old at $\ell_i$. Otherwise, $f$ is new at $\ell_i$ and there is an isomorphism
\begin{equation}
\label{eq:Steinberg G_ell rep}
\rho_f|_{G_{\ell_i}} \simeq \ttmat{\lambda(a_{\ell_i}(f))\kcyc}{*}{0}{\lambda(a_{\ell_i}(f))},
\end{equation}
where $a_{\ell_i}(f)=-\epsilon_i$.
\item There is an isomorphism
\begin{equation}
\label{eq:Steinberg G_p rep}
\rho_f|_{G_{p}} \simeq \ttmat{\lambda(a_p(f)^{-1})\kcyc}{*}{0}{\lambda(a_p(f))}.
\end{equation}
Moreover,
\begin{enumerate}
\item $\rho_f\vert_{G_p}$ is finite-flat if and only if either
\begin{enumerate}
\item $p \nmid N$, in which case $h: \nu(U_p) \mapsto (a_p(f))_{f \in \Sigma}$, or
\item $p \mid N$ and $f$ is old at $p$.
\end{enumerate}
\item If $p \mid N$ and $f$ is new at $p$, then $a_p(f) = -\epsilon_p=+1$, i.e.\ $\epsilon_p = -1$.
\end{enumerate}
\end{enumerate}
\end{lem}
\begin{proof}
For (1)-(3) and (4a) see, for example, \cite[Thm.\ 3.1]{DDT1994}. In (4b), the fact that $a_p(f) = -\epsilon_p$ is \cite[Thm.~3]{AL1970}. To see that $\epsilon_p=-1$, note that the semi-simple residual representation $\bar{\rho}^\mathrm{ss}_f$ is $\omega \oplus 1$, but \eqref{eq:Steinberg G_p rep} implies $\bar{\rho}^\mathrm{ss}_f|_{G_p} = \lambda(-\epsilon_p)\omega \oplus \lambda(-\epsilon_p)$. Since $\omega|_{G_p}$ is ramified, this implies that $\lambda(-\epsilon_p)=1$, so $\epsilon_p=-1$.
\end{proof}
Combining Lemmas \ref{lem:bT is a pull-back} and \ref{lem:normalization of bT0}, we obtain an injective homomorphism
\begin{equation}
\label{eq:bT in product}
\mathbb{T}_N^\epsilon \to \mathbb{Z}_p \oplus \mathbb{T}_N^{0,\epsilon} \to \mathbb{Z}_p \oplus \bigoplus_{f\in \Sigma} \mathcal{O}_f
\end{equation}
determined by sending $T_q$ to $(q+1,a_q(f)_{f \in \Sigma})$ for $q \nmid Np$ and, if $p\nmid N$, sending $U_p$ to $(1,a_p(f)_{f \in \Sigma})$.
\subsection{The kernel of $\mathfrak{m}^\epsilon$ on the modular Jacobian and the Gorenstein condition}
\label{subsec:gorenstein and jacobian}
In this section, we use some results of Ohta (following ideas of Mazur) to relate the structure of the rings $\mathbb{T}_N^{\epsilon}$ and $\mathbb{T}_N^{0,\epsilon}$ to the geometry of the N\'{e}ron model $J_0(N)_{/\mathbb{Z}_p}$ of the Jacobian of $X_0(N)$. Let $J_0(N) = J_0(N)_{/\mathbb{Z}_p} \otimes \mathbb{Q}_p$.
For a $\mathbb{Z}_p$-module $M$, let $\mathrm{Ta}_p(M)=\mathrm{Hom}(\mathbb{Q}_p/\mathbb{Z}_p,M)$ be the Tate module of $M$, let $M^*=\mathrm{Hom}_{\mathbb{Z}_p}(M,\mathbb{Q}_p/\mathbb{Z}_p)$ be the Pontrjagin dual, and let $M^\vee=\mathrm{Hom}_{\mathbb{Z}_p}(M,\mathbb{Z}_p)$ be the $\mathbb{Z}_p$-dual. If $M$ is a free $\mathbb{Z}_p$-module, there is an identification $M^*\cong \mathrm{Ta}_p(M)^\vee$.
Let $\cT=H^1_{{\mathrm{\acute{e}t}}}(X_0(N)_{\overline{\mathbb{Q}}},\mathbb{Z}_p(1)) \cong \mathrm{Ta}_p(J_0(N)(\overline{\mathbb{Q}}_p))$.
\begin{lem}
\label{lem:failure of mult one = gorenstein defect}
There is an exact sequence of $\mathbb{T}_N^{0,\epsilon}[I_p]$-modules
\[
0 \longrightarrow \mathbb{T}_N^{0,\epsilon}(1) \longrightarrow \cT_{\mathfrak{m}^\epsilon} \longrightarrow (\mathbb{T}_N^{0,\epsilon})^\vee \longrightarrow 0.
\]
The sequence splits as $\mathbb{T}_N^{0,\epsilon}$-modules. In particular, we have
\[
\dim_{\mathbb{F}_p} J_0(N)[\mathfrak{m}^\epsilon](\overline{\mathbb{Q}}_p) = \dim_{\mathbb{F}_p}( \cT/{\mathfrak{m}^\epsilon}\cT) = 2+ \delta(\mathbb{T}_N^{0,\epsilon})
\]
where $\delta(\mathbb{T}_N^{0,\epsilon})$ is the Gorenstein defect of $\mathbb{T}_N^{0,\epsilon}$. (See \S\ref{subsec:Gorenstein defect} for a discussion of Gorenstein defect.)
\end{lem}
\begin{proof}
Ohta has shown in \cite[Prop.\ 3.5.9]{ohta2014} that
\[
\dim_{\mathbb{F}_p} J_0(N)_{/\mathbb{Z}_p}(\overline{\mathbb{F}}_p)[\mathfrak{m}^\epsilon] \le 1.
\]
This implies the result, following \cite[\S\S II.7-II.8]{mazur1978} (see also \cite{mazur1997}).
\end{proof}
\begin{lem}
\label{lem:goren and I principal}
Suppose that $\mathbb{T}_N^{\epsilon}$ is Gorenstein. Then there is an isomorphism of $\mathbb{T}_N^\epsilon$-modules
\[
I^\epsilon \buildrel\sim\over\lra (\mathbb{T}_N^{0,\epsilon})^\vee.
\]
In particular, the minimal number of generators of $I^\epsilon$ is $\delta(\mathbb{T}_N^{0,\epsilon})+1$, and $I^\epsilon$ is principal if and only if $\mathbb{T}_N^{0,\epsilon}$ is Gorenstein.
\end{lem}
\begin{proof}
As in the proof of \cite[Lem.\ 3.2.5]{ohta2014}, there is an exact sequence of $\mathbb{T}_N^\epsilon$-modules
\[
0 \longrightarrow S_2(N;\mathbb{Z}_p)^\epsilon_\mathrm{Eis} \longrightarrow M_2(N;\mathbb{Z}_p)^\epsilon_\mathrm{Eis} \xrightarrow{\mathrm{Res}} \mathbb{Z}_p \longrightarrow 0
\]
where $\mathbb{T}_N^\epsilon$ acts on $\mathbb{Z}_p$ via the augmentation map $\mathbb{T}_N^\epsilon \to \mathbb{T}_N^\epsilon/I^\epsilon=\mathbb{Z}_p$. Since we assume that $\mathbb{T}_N^{\epsilon}$ is Gorenstein, we see by the duality \eqref{eq:M and T duality} that $M_2(N;\mathbb{Z}_p)^\epsilon_\mathrm{Eis}$ is a free $\mathbb{T}_N^\epsilon$-module of rank $1$. We may choose a generator $f$ of $M_2(N;\mathbb{Z}_p)^\epsilon_\mathrm{Eis}$ such that $\mathrm{Res}(f)=1$. Then we obtain a surjective $\mathbb{T}_N^\epsilon$-module homomorphism
\[
\mathbb{T}_N^\epsilon \twoheadrightarrow \mathbb{Z}_p, \quad T \mapsto \mathrm{Res}(Tf)
\]
whose kernel is isomorphic to $S_2(N;\mathbb{Z}_p)^\epsilon_\mathrm{Eis}$. Because this map sends $1$ to $1$, it is a ring homomorphism, and it must be the augmentation map $\mathbb{T}_N^\epsilon \twoheadrightarrow \mathbb{Z}_p$. Thus $I^\epsilon \cong S_2(N;\mathbb{Z}_p)^\epsilon_\mathrm{Eis}$, so duality \eqref{eq:M and T duality} yields the isomorphism of the lemma. The remaining parts follow from \S \ref{subsec:Gorenstein defect}.
\end{proof}
Combining the preceding two lemmas, we obtain the following
\begin{lem}
\label{lem:I and Jacobian kernel}
Assume that $\mathbb{T}_N^\epsilon$ is Gorenstein. Then
\[
\dim_{\mathbb{F}_p}J_0(N)[\mathfrak{m}^\epsilon](\overline{\mathbb{Q}}_p) = 1 + \dim_{\mathbb{F}_p}(I^\epsilon/\mathfrak{m}^\epsilon I^\epsilon).
\]
\end{lem}
\section{The pseudodeformation ring}
\label{sec:pseudodef ring}
In this section, we set up the deformation theory of Galois pseudorepresentations modeling those that arise from Hecke eigenforms of weight 2 and level $N$ that are congruent to the Eisenstein series $E^\epsilon_{2,N}$. These are the Galois representations of Lemma \ref{lem:normalization of bT0}. See \S\ref{subsec:psdef method} for further introduction.
\subsection{Theory of Cayley-Hamilton representations}
\label{subsec:CH setup}
This section is a summary of \cite{WWE4}. Only for this section, we work with a general profinite group $G$ satisfying condition $\Phi_p$ (of \S\ref{subsec:notation}). All pseudorepresentations are assumed to have dimension $2$, for simplicity.
\subsubsection{Pseudorepresentations}
A pseudorepresentation $D:E \to A$ is the data of an associative $A$-algebra $E$ along with a homogeneous multiplicative polynomial law $D$ from $E$ to $A$. This definition is due to Chenevier \cite{chen2014}; see \cite{WWE4} and the references therein. Despite the notation, the pseudorepresentation $D$ includes the data of a multiplicative function $D:E\to A$, but is not characterized by this function alone. It is characterized by the pair of functions $\mathrm{Tr}_D,D:E \to A$, where $\mathrm{Tr}_D$ is defined by the \emph{characteristic polynomial}:
\begin{equation}
\label{eq:x+1 identity}
D(x-t)=t^2 - \mathrm{Tr}_D(x)t + D(x) \in A[t].
\end{equation}
A pseudorepresentation $D:E \to A$ is said to be \emph{Cayley-Hamilton} if, for every commutative $A$-algebra $B$, every element $x\in E \otimes_A B$ satisfies its characteristic polynomial. We also denote by $D : G \rightarrow A$ a pseudorepresentation $D : A[G] \rightarrow A$.
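The prototypical example may help fix ideas: for $E = M_2(A)$ and $D = \det$, the function $\mathrm{Tr}_D$ recovered from \eqref{eq:x+1 identity} is the usual trace, since
\[
\det(x - t) = t^2 - \mathrm{tr}(x)t + \det(x),
\]
and the classical Cayley-Hamilton theorem states exactly that every $x \in M_2(B)$, for any commutative $A$-algebra $B$, satisfies this characteristic polynomial; thus $\det : M_2(A) \to A$ is a Cayley-Hamilton pseudorepresentation.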
\subsubsection{Cayley-Hamilton representations}
In the category of \emph{Cayley-Hamilton representations of a profinite group $G$}, an object is a triple
\[
(\rho: G \rightarrow E^\times, E, D : E\rightarrow A),
\]
which is sometimes referred to more briefly as ``$\rho$.'' Here $\rho$ is a homomorphism (continuous, as always), $E$ is an associative $A$-algebra that is finitely generated as an $A$-module, $(A,\mathfrak{m}_A)$ is a Noetherian local $\mathbb{Z}_p$-algebra, and $D$ is a Cayley-Hamilton pseudorepresentation. We call $A$ the \emph{scalar ring} of $E$. The \emph{induced pseudorepresentation of $\rho$} is $D \circ \rho : G \rightarrow A$, also denoted $\psi(\rho)$. The functor $\psi$ is essentially surjective. The Cayley-Hamilton representation $\rho$ is said to be \emph{over} $\psi(\rho)\otimes_A A/\mathfrak{m}_A$, and $\psi(\rho)$ is said to be a \emph{pseudodeformation} of $\psi(\rho)\otimes_A A/\mathfrak{m}_A$.
Given a pseudorepresentation ${\bar D}:G \to \mathbb{F}$ for a field $\mathbb{F}$, there is a universal object in the category of Cayley-Hamilton representations over ${\bar D}$. This is denoted by
\[
(\rho^u_{\bar D} : G \longrightarrow (E^u_{\bar D})^\times, E^u_{\bar D}, D_{E^u_{\bar D}} : E^u_{\bar D} \rightarrow R^u_{\bar D}),
\]
and the induced pseudorepresentation $D^u_{\bar D} := \psi(\rho^u_{\bar D})$ is the universal pseudodeformation of ${\bar D}$.
\subsubsection{Generalized matrix algebras (GMA)} An important example of a Cayley-Hamilton algebra is a \emph{generalized matrix algebra (GMA)}. An $A$-GMA $E$ is given by the data $(B,C,m)$, where $B$ and $C$ are finitely generated $A$-modules, $m:B \otimes_A C \to A$ is an $A$-module homomorphism satisfying certain conditions, and $E =\sm{A}{B}{C}{A}$ (see \cite[Example 3.1.7]{WWE4}). There is a Cayley-Hamilton pseudorepresentation $D:E \to A$ given by the usual formula for the characteristic polynomial. We write a homomorphism $\rho: G \to E^\times$ as $\rho = \sm{\rho_{1,1}}{\rho_{1,2}}{\rho_{2,1}}{\rho_{2,2}}$.
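For orientation, and assuming the conventions of \cite[Example 3.1.7]{WWE4} (in particular, that the products from $B \otimes_A C$ and $C \otimes_A B$ into $A$ are both induced by $m$), multiplication in $E = \sm{A}{B}{C}{A}$ is the usual matrix formula
\[
\ttmat{a}{b}{c}{d} \ttmat{a'}{b'}{c'}{d'} = \ttmat{aa' + m(b \otimes c')}{ab' + bd'}{ca' + dc'}{m(b' \otimes c) + dd'},
\]
and the associated Cayley-Hamilton pseudorepresentation is given by the determinant-like formula $D\left(\sm{a}{b}{c}{d}\right) = ad - m(b \otimes c)$.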
If ${\bar D}$ is multiplicity-free (see \cite[Defn.\ 3.2.1]{WWE4}), then $E^u_{{\bar D}}$ has a GMA structure whose associated pseudorepresentation is $D_{E^u_{\bar D}}$ \cite[Thm.\ 3.2.2]{WWE4}.
\subsubsection{Reducibility}
We will refer to the condition that a Cayley-Hamilton representation or a pseudorepresentation is \emph{reducible}. We also refer to the \emph{reducibility ideal} in rings receiving a pseudorepresentation. For these definitions, see \cite[\S4.2]{WWE4} or \cite[\S5.7]{WWE1}. The important case for this paper is that, if $(\rho, E, D:E\to A)$ is a Cayley-Hamilton representation where $E$ is the GMA associated to $(B,C,m)$, then the reducibility ideal of $D$ is the image of $m$. There are also universal objects, denoted $\rho^\mathrm{red}$, etc.
\subsubsection{Conditions on Cayley-Hamilton representations}
\label{sssec:cond on CH}
We consider two flavors of conditions $\mathcal{P}$ imposed on Cayley-Hamilton representations of $G$:
\begin{enumerate}
\item $\mathcal{P}$ is a condition that certain elements vanish, e.g.\ Definition \ref{defn:US-CH ell}.
\item $\mathcal{P}$ is a property applying to finite-length $\mathbb{Z}_p[G]$-modules and satisfying a basic stability condition, e.g.\ \S\ref{subsec:flat case}.
\end{enumerate}
In case (1), one produces the universal Cayley-Hamilton representation $\rho_{\bar D}^\mathcal{P}$ of $G$ satisfying $\mathcal{P}$ by taking the quotient by the two-sided ideal of $E^u_{\bar D}$ generated by the relevant elements, and then taking a further quotient so that a pseudorepresentation exists. This final quotient is known as the \emph{Cayley-Hamilton quotient of $\rho^u_{\bar D}$ for $\mathcal{P}$}. See \cite[Defn.\ 2.4.7]{WWE4} for details; cf.\ also \cite[Defn.\ 5.9.5]{WWE1}.
In case (2), we consider $E^u_{\bar D}$ as a $G$-module using its left action on itself by multiplication, and find in \cite[\S2.4]{WWE4} that the maximal left quotient module satisfying $\mathcal{P}$ can be defined and is an algebra quotient. The subsequent Cayley-Hamilton quotient is then shown to satisfy the desired properties of $\rho_{\bar D}^\mathcal{P}$.
\subsubsection{Conditions on pseudorepresentations}
As discussed in \cite[\S2.5]{WWE4}, one says that a pseudorepresentation $D$ of $G$ satisfies $\mathcal{P}$ if there exists a Cayley-Hamilton representation $\rho$ of $G$ such that $\psi(\rho) = D$ and $\rho$ satisfies $\mathcal{P}$. Then the universal pseudodeformation of ${\bar D}$ with property $\mathcal{P}$ turns out to be $\psi(\rho^\mathcal{P}_{\bar D})$.
\subsection{Universal Cayley-Hamilton representations of Galois groups}
\label{subsec:CH Gal setup}
Let $\ell \mid Np$ be a prime. Recall from \S\ref{subsec:notation} the decomposition group $G_\ell \rightarrow G_{\mathbb{Q},S}$. Let ${\bar D}: G_{\mathbb{Q},S} \to \mathbb{F}_p$ denote the pseudorepresentation $\psi(\mathbb{F}_p(1)\oplus \mathbb{F}_p)$.
We denote by
\[
(\rho_{\bar D} : G_{\mathbb{Q},S} \longrightarrow E_{\bar D}^\times, E_{\bar D}, D_{E_{\bar D}} : E_{\bar D} \rightarrow R_{\bar D})
\]
the universal Cayley-Hamilton representation of $G_{\mathbb{Q},S}$ over ${\bar D}$. The scalar ring $R_{\bar D}$ is the universal pseudodeformation ring of ${\bar D}$, with universal pseudorepresentation $D_{\bar D} := \psi(\rho_{\bar D})$. Similarly, we let the triple
\[
(\rho_\ell : G_\ell \rightarrow E_\ell^\times, E_\ell, D_{E_\ell} : E_\ell \rightarrow R_\ell)
\]
denote the universal Cayley-Hamilton representation of $G_\ell$ over ${\bar D}\vert_{G_\ell}$, so that $D_\ell := \psi(\rho_\ell) : G_\ell \rightarrow R_\ell$ is the universal pseudodeformation of ${\bar D}\vert_{G_\ell}$.
\begin{defn}
\label{def:GMA structure}
Note that ${\bar D}$ is multiplicity-free, and that, if $\ell \not \equiv 1 \pmod{p}$, then ${\bar D}|_{G_\ell}$ is multiplicity-free. In this case, $E_\ell$ and $E_{\bar D}$ have the structure of a GMA. In this paper, whenever we fix such a structure, we assume that $(\rho_\ell)_{1,1} \otimes_{R_\ell} \mathbb{F}_p \cong \omega|_{G_\ell}$ (resp.\ $(\rho_{\bar D})_{1,1}\otimes_{R_{\bar D}} \mathbb{F}_p \cong \omega$).
\end{defn}
\subsection{Case $\ell \nmid Np$: unramified}
\label{subsec:unram}
For $\ell \nmid Np$, we want Galois representations to be unramified at $\ell$. We impose this by considering representations of $G_{\mathbb{Q},S}$, as opposed to $\mathrm{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$.
\subsection{Case $\ell \neq p$ and $\ell \mid N$: the unramified-or-Steinberg condition}
\label{subsec:US ell}
In this subsection, we write $\ell$ for one of the factors of $N$ referred to elsewhere in this manuscript as $\ell_i$. Likewise, we write $\epsilon_\ell$ for $\epsilon_i$.
\begin{defn}
\label{defn:US-CH ell}
Let $(\rho : G_\ell \rightarrow E^\times, E, D_E: E \rightarrow A)$ be a Cayley-Hamilton representation of $G_\ell$ over ${\bar D}\vert_{G_\ell}$. We call $\rho$ \emph{unramified-or-$\epsilon_\ell$-Steinberg} (or $\mathrm{US}^{\epsilon_\ell}_\ell$) if
\begin{equation}
\label{eq:US test elts}
V^{\epsilon_\ell}_\rho(\sigma,\tau) := (\rho(\sigma)-\lambda(-\epsilon_\ell)(\sigma)\kcyc(\sigma))(\rho(\tau)-\lambda(-\epsilon_\ell)(\tau)) \in E
\end{equation}
is equal to $0$ for all $(\sigma, \tau)$ ranging over the set
\[
(I_{\ell} \times G_{\ell}) \cup (G_{\ell} \times I_{\ell}) \ \subset \ G_{\ell} \times G_{\ell}.
\]
Write $V^{\epsilon_\ell}_\rho$ for the set of all elements $V^{\epsilon_\ell}_\rho(\sigma,\tau)$ over this range.
A pseudodeformation $D : G_\ell \rightarrow A$ of ${\bar D}\vert_{G_\ell}$ is called $\mathrm{US}^{\epsilon_\ell}_\ell$ if there exists a $\mathrm{US}^{\epsilon_\ell}_\ell$ Cayley-Hamilton representation $\rho$ of $G_\ell$ such that $\psi(\rho) = D$.
\end{defn}
\begin{defn}
\label{defn:US-CH ell UO}
Let $(E^{\epsilon_\ell}_\ell, D_{E^{\epsilon_\ell}_\ell}: E^{\epsilon_\ell}_\ell \rightarrow R^{\epsilon_\ell}_\ell)$ be the Cayley-Hamilton quotient of $(E_\ell, D_\ell)$ by $V^{\epsilon_\ell}_{\rho_\ell}$. Let
\[
(\rho^{\epsilon_\ell}_\ell : G_\ell \rightarrow (E^{\epsilon_\ell}_\ell)^\times, E^{\epsilon_\ell}_\ell, D_{E^{\epsilon_\ell}_\ell} : E^{\epsilon_\ell}_\ell \rightarrow R^{\epsilon_\ell}_\ell),
\]
be the corresponding Cayley-Hamilton representation, with induced pseudorepresentation of $G_\ell$ denoted $D^{\epsilon_\ell}_\ell := \psi(\rho^{\epsilon_\ell}_\ell) : G_\ell \rightarrow R^{\epsilon_\ell}_\ell$.
\end{defn}
By the theory of \S\ref{sssec:cond on CH}, $\rho^{\epsilon_\ell}_\ell$ is the universal $\mathrm{US}_\ell^{\epsilon_\ell}$ Cayley-Hamilton representation over ${\bar D}\vert_{G_\ell}$, and $D^{\epsilon_\ell}_\ell$ is the universal $\mathrm{US}_\ell^{\epsilon_\ell}$ pseudodeformation of ${\bar D}\vert_{G_\ell}$.
\begin{lem}
\label{lem:USell implies pseudo-unram}
If $\ell \neq p$, then, for any $\epsilon_\ell$, we have $D_\ell^{\epsilon_\ell}(\tau)=1$ and $\mathrm{Tr}_{D_\ell^{\epsilon_\ell}}(\tau)=2$ for all $\tau \in I_\ell$. That is, $(D^{\epsilon_\ell}_\ell)\vert_{I_\ell} = \psi(1 \oplus 1)$.
\end{lem}
\begin{proof}
Let $\tau \in I_\ell$. Since $\ell \neq p$, both $\lambda(-\epsilon_\ell)$ and $\kcyc$ are trivial on $I_\ell$, so \eqref{eq:US test elts} gives $V^{\epsilon_\ell}_{\rho^{\epsilon_\ell}_\ell}(\tau,\tau)=(\rho^{\epsilon_\ell}_\ell(\tau)-1)^2 = 0$. Thus by \cite[Lem.\ 2.7(iv)]{chen2014}, we see $\mathrm{Tr}_{D^{\epsilon_\ell}_\ell}(\tau-1) = D^{\epsilon_\ell}_\ell(\tau-1) = 0$. As traces are additive, we have $\mathrm{Tr}_{D^{\epsilon_\ell}_\ell}(\tau) = \mathrm{Tr}_{D^{\epsilon_\ell}_\ell}(1)=2$. Applying \eqref{eq:x+1 identity} with $x = \tau$ and $t=1$, we find that $D^{\epsilon_\ell}_\ell(\tau) = 1$.
\end{proof}
\begin{lem}
\label{lem:epsilon=1 and unram}
Suppose that $\epsilon_\ell = +1$ and $\ell \not \equiv -1,0 \pmod{p}$. Then $\rho_\ell^{\epsilon_\ell}$ is unramified (i.e. $\rho_\ell^{\epsilon_\ell}|_{I_{\ell}} =1$).
\end{lem}
\begin{proof}
Let $\sigma \in G_\ell$ be the element $\sigma_\ell$ defined in \S\ref{subsec:notation}. By definition of $E_\ell^{\epsilon_\ell}$,
\[
V^{\epsilon_\ell}_{\rho^{\epsilon_\ell}_\ell}(\tau,\sigma) = (\rho_\ell^{\epsilon_\ell}(\tau)-1)(\rho_\ell^{\epsilon_\ell}(\sigma)+1)= 0,
\]
for any $\tau \in I_\ell$. To prove the lemma, it suffices to show that $(\rho_\ell^{\epsilon_\ell}(\sigma)+1) \in (E_\ell^{\epsilon_\ell})^\times$.
By the Cayley-Hamilton property, we know that any element $x \in E_\ell^{\epsilon_\ell}$ satisfies $x^2-\mathrm{Tr}_{D^{\epsilon_\ell}_\ell}(x)x+D^{\epsilon_\ell}_\ell(x)=0$. In particular, we see that $x \in (E_\ell^{\epsilon_\ell})^\times$ if and only if $D^{\epsilon_\ell}_\ell(x) \in (R_\ell^{\epsilon_\ell})^\times$. Hence it will suffice to show that $D^{\epsilon_\ell}_\ell(\sigma + 1) \in (R_\ell^{\epsilon_\ell})^\times$.
Writing $\mathfrak{m} \subset R^{\epsilon_\ell}_\ell$ for the maximal ideal, we know that $D^{\epsilon_\ell}_\ell \equiv {\bar D} \pmod{\mathfrak{m}}$, so it will suffice to show that ${\bar D}(\sigma+1) \in \mathbb{F}_p^\times$.
Because $\ell \neq p$ and ${\bar D} = \psi(\omega \oplus 1)$, we apply \eqref{eq:x+1 identity} with $x = \sigma$ and $t=-1$, calculating that ${\bar D}(\sigma + 1)= 2(\ell+1) \in \mathbb{F}_p$. This is a unit because $p$ is odd and $\ell \not\equiv -1 \pmod{p}$.
\end{proof}
\subsection{The finite-flat case: $\ell = p$ and $p \nmid N$}
\label{subsec:flat case}
A finite-length $\mathbb{Z}_p[G_p]$-module $V$ is said to be \emph{finite-flat} when it arises as ${\mathcal{G}}(\overline{\mathbb{Q}}_p)$, where ${\mathcal{G}}$ is a finite flat group scheme over $\mathbb{Z}_p$. In \cite[\S5.2]{WWE4} we check that the theory of \S\ref{sssec:cond on CH} can be applied to the finite-flat condition. This theory gives us
\[
(\rho_p^{\mathrm{flat}}: G_p \to (E_p^{\mathrm{flat}})^\times, E_p^{\mathrm{flat}}, D_{E^{\mathrm{flat}}_p} : E_p^{\mathrm{flat}} \rightarrow R_p^{\mathrm{flat}}),
\]
the universal finite-flat Cayley-Hamilton representation of $G_p$ over ${\bar D} \vert_{G_p}$. The pseudorepresentation $D^{\mathrm{flat}}_p := \psi(\rho^{\mathrm{flat}}_p) : G_p \rightarrow R^{\mathrm{flat}}_p$ is the universal finite-flat pseudodeformation of ${\bar D} \vert_{G_p}$.
Consider a GMA structure on $E_p^{\mathrm{flat}}$ as in Definition \ref{def:GMA structure}, which we write as
\[
\rho_p^{\mathrm{flat}} = \ttmat{\rho^{\mathrm{flat}}_{p,1,1}}{\rho^{\mathrm{flat}}_{p,1,2}}{\rho^{\mathrm{flat}}_{p,2,1}}{\rho^{\mathrm{flat}}_{p,2,2}} : G_p \longrightarrow
\ttmat{R_p^{\mathrm{flat}}}{B_p^{\mathrm{flat}}}{C_p^{\mathrm{flat}}}{R_p^{\mathrm{flat}}}^\times.
\]
\begin{lem}
\label{lem:flat implies upper tri}
For any such GMA structure on $E_p^{\mathrm{flat}}$, $C_p^{\mathrm{flat}} = 0$.
\end{lem}
\begin{proof}
The proof is implicit in \cite{WWE3} but not stated in this form there. One simply combines the following facts. See \cite[\S B.4]{WWE3} for the notation.
\begin{itemize}
\item As the maximal ideal of $R_p^{\mathrm{flat}}$ contains the reducibility ideal, we have $\mathrm{Hom}_{R_p^{\mathrm{flat}}}(C_p^{\mathrm{flat}}, \mathbb{F}_p) =\mathrm{Ext}^1_{\mathrm{ffgs}/\mathbb{Z}_p}(\mu_p,\mathbb{Z}/p\mathbb{Z})$, where $\mathrm{ffgs}/\mathbb{Z}_p$ is the category of finite flat group schemes over $\mathbb{Z}_p$, by \cite[Thm.\ 4.3.5]{WWE4}.
\item We see in \cite[Lem.\ 6.2.1(1)]{WWE3} that $\mathrm{Ext}^1_{\mathrm{ffgs}/\mathbb{Z}_p}(\mu_p,\mathbb{Z}/p\mathbb{Z}) = 0$.
\end{itemize}
As $C_p^{\mathrm{flat}}$ is a finitely-generated $R_p^{\mathrm{flat}}$-module, this implies that $C_p^{\mathrm{flat}} = 0$.
\end{proof}
Now that we know that $C_p^{\mathrm{flat}} = 0$, the diagonal coordinates $\rho_{p,i,i}^{\mathrm{flat}}$, for $i = 1,2$, are $R_p^{\mathrm{flat}}$-valued characters of $G_p$. Arguing as in \cite[\S5.1]{WWE3}, using the fact that $\omega\vert_{G_p} \neq 1$, we deduce the following
\begin{lem}
\label{lem:flat psrep form}
A pseudodeformation $D$ of ${\bar D}|_{G_p}$ is finite-flat if and only if $D=\psi(\kcyc \chi_1 \oplus \chi_2)$ where $\chi_1,\chi_2$ are unramified deformations of the trivial character.
\end{lem}
\subsection{The finite-flat case: $\ell = p$, $p \mid N$, and $\epsilon_p=+1$}
\label{subsec:epsilon_p=1}
By Lemma \ref{lem:normalization of bT0}(4), we see that, if $\epsilon_p=+1$, then the residually Eisenstein cusp forms are old at $p$ with associated $G_{\mathbb{Q},S}$-representation being finite-flat at $p$. We impose this condition exactly as in \S\ref{subsec:flat case}. Namely, we say that a Cayley-Hamilton representation of $G_p$ is \emph{unramified-or-$(+1)$-Steinberg} (or $\mathrm{US}_p^{+1}$) if it is finite-flat.
\subsection{The ordinary case: $\ell = p$, $p \mid N$, and $\epsilon_p=-1$}
\label{subsec:ord case}
Based on the form of Galois representations arising from $p$-ordinary eigenforms given in Lemma \ref{lem:normalization of bT0}(4), we proceed exactly as in the case $\ell \neq p$ given in \S\ref{subsec:US ell}.
\begin{defn}
\label{defn:US-CH p}
We say that a Cayley-Hamilton representation or a pseudodeformation over ${\bar D}\vert_{G_p}$ is \emph{ordinary} (or $\mathrm{US}_p^{-1}$) when it satisfies Definition \ref{defn:US-CH ell}, simply letting $\ell = p$.
\end{defn}
Similarly to Definition \ref{defn:US-CH ell UO}, let $(E^\ord_p, D_{E^\ord_p})$ be the Cayley-Hamilton quotient of $(E_p,D_{E_p})$ by $V_{\rho_p}^{-1}$, and let $(\rho^\ord_p, E^\ord_p, D_{E^\ord_p} : E^\ord_p \rightarrow R^\ord_p)$ be the corresponding Cayley-Hamilton representation. As per \S\ref{sssec:cond on CH}, $\rho^\ord_p$ is the universal ordinary Cayley-Hamilton representation over ${\bar D}\vert_{G_p}$, and $D^\ord_p := \psi(\rho^\ord_p) : G_p \rightarrow R^\ord_p$ is the universal ordinary pseudodeformation of ${\bar D}\vert_{G_p}$.
\begin{rem}
If one applies $V^{+1}_{\rho_p} = 0$ in the case $\epsilon_p = +1$, one does not get the desired finite-flat condition of \S \ref{subsec:epsilon_p=1} that agrees with Lemma \ref{lem:normalization of bT0}(4b). Instead, one finds that $E_p^{+1} = 0$ (i.e.\ no deformations of ${\bar D}$ satisfy this condition).
\end{rem}
We now set up notation covering all cases: $p \mid N$ with $\epsilon_p = \pm 1$, or $p \nmid N$.
\begin{defn}
\label{defn:E_p^epsilon}
For any $N$ and $\epsilon$, we establish notation
\[
(\rho^{\epsilon_p}_p, E^{\epsilon_p}_p, D_{E^{\epsilon_p}_p}, R^{\epsilon_p}_p, D^{\epsilon_p}_p) := \left\{\begin{array}{ll}
(\rho^\ord_p, E^\ord_p, D_{E^\ord_p}, R^\ord_p, D^\ord_p) & \text{if } p \mid N, \epsilon_p=-1, \\
(\rho^{\mathrm{flat}}_p, E^{\mathrm{flat}}_p, D_{E^{\mathrm{flat}}_p}, R^{\mathrm{flat}}_p, D^{\mathrm{flat}}_p) & \text{otherwise}.
\end{array}\right.
\]
\end{defn}
In \cite[\S5]{WWE1}, we developed an alternative definition of ordinary Cayley-Hamilton algebra. (This definition applies to general weight, which we specialize to weight 2 here.) Choose a GMA structure on $E_p$, as in Definition \ref{def:GMA structure}. Let $J_p^\ord \subset E_p$ be the two-sided ideal generated by the subset
\[
\rho_{p,2,1}(G_p) \cup (\rho_{p,1,1}-\kcyc)(I_p) \cup (\rho_{p,2,2} - 1)(I_p).
\]
As in \cite[Lem.\ 5.9.3]{WWE1}, $J^{\ord}_p$ is independent of the choice of GMA-structure.
\begin{lem}
\label{lem:WWE ord = CS ord}
The Cayley-Hamilton quotient of $E_p$ by $J^{\ord}_p$ is equal to $E^\ord_p$.
\end{lem}
\begin{proof}
Let $(V^\ord_{\rho_p})$ denote the kernel of $E_p \twoheadrightarrow E_p^\ord$, which contains (but may not be generated by) the values of $V^\ord_{\rho_p}$ (see \S\ref{sssec:cond on CH}). It will suffice to show that $(V^\ord_{\rho_p}) = J^\ord_p$. The inclusion $(V^\ord_{\rho_p})\subset J^\ord_p$ is straightforward: see the calculations in \cite[\S5.9]{WWE1}, from which it is evident that the Cayley-Hamilton quotient of $\rho_p$ by $J^\ord_p$ is a Cayley-Hamilton representation that is ordinary (in the sense of Definition \ref{defn:US-CH ell}). It remains to show that $J^{\ord}_p \subset (V^\ord_{\rho_p})$.
First we will show that $D^\ord_p\vert_{I_p} = \psi(\kcyc \oplus 1)\vert_{I_p} \otimes_{\mathbb{Z}_p} R^\ord_p$. For any $\tau \in I_p$, $\rho^\ord_p(\tau)$ satisfies both polynomials
\[
T^2 - \mathrm{Tr}_{D^\ord_p}(\tau)T - D^\ord_p(\tau) \quad \text{ and } \quad
(T -\kcyc(\tau))(T-1),
\]
the first by the Cayley-Hamilton condition and the second by Definition \ref{defn:US-CH p}. If $\omega(\tau) \neq 1$, Hensel's lemma implies that these two polynomials are identical. For such $\tau$, we have $D^\ord_p(\tau)=\kcyc(\tau)$ and $\mathrm{Tr}_{D^\ord_p}(\tau)=\kcyc(\tau)+1$. Now choose an arbitrary element of $I_p$ and write it as $\sigma\tau$ with $\omega(\sigma), \omega(\tau) \neq 1$. We immediately see that $D^\ord_p(\sigma\tau) = \kcyc(\sigma\tau)$, since both sides are multiplicative. Let $r_\sigma =\rho_p^\ord(\sigma)$ and $r_\tau=\rho_p^\ord(\tau)$. Since $E_p^\ord$ is Cayley-Hamilton, we have
\[
(t_\sigma r_\sigma + t_\tau r_\tau)^2-\mathrm{Tr}_{D_p^\ord}(t_\sigma r_\sigma + t_\tau r_\tau)(t_\sigma r_\sigma + t_\tau r_\tau)+D_p^\ord(t_\sigma r_\sigma + t_\tau r_\tau)=0
\]
in the polynomial ring $E_p^\ord[t_\sigma, t_\tau]$. We can expand $D_p^\ord(t_\sigma r_\sigma + t_\tau r_\tau)$ using \cite[Example 1.8]{chen2014}. Taking the coefficient of $t_\sigma t_\tau$ and writing $\mathrm{Tr} = \mathrm{Tr}_{D^\ord_p}$ for brevity,
\[
r_\sigma r_\tau + r_\tau r_\sigma - \mathrm{Tr}(\sigma)r_\tau - \mathrm{Tr}(\tau)r_\sigma - \mathrm{Tr}(\sigma\tau) + \mathrm{Tr}(\sigma)\mathrm{Tr}(\tau) = 0.
\]
Substituting for $r_\sigma r_\tau$ using $V^\ord_{\rho_p}(\sigma, \tau) = 0$ and for $r_\tau r_\sigma$ using $V^\ord_{\rho_p}(\tau, \sigma) = 0$, one obtains the desired conclusion $\mathrm{Tr}(\sigma\tau) = \kcyc(\sigma\tau) + 1$.
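In detail, these substitutions read $r_\sigma r_\tau = r_\sigma + \kcyc(\sigma) r_\tau - \kcyc(\sigma)$ and $r_\tau r_\sigma = r_\tau + \kcyc(\tau) r_\sigma - \kcyc(\tau)$. Inserting them into the displayed relation and using $\mathrm{Tr}(\sigma) = \kcyc(\sigma) + 1$ and $\mathrm{Tr}(\tau) = \kcyc(\tau) + 1$, the coefficients of $r_\sigma$ and $r_\tau$ cancel, leaving
\[
-\kcyc(\sigma) - \kcyc(\tau) - \mathrm{Tr}(\sigma\tau) + (\kcyc(\sigma)+1)(\kcyc(\tau)+1) = 0,
\]
that is, $\mathrm{Tr}(\sigma\tau) = \kcyc(\sigma)\kcyc(\tau) + 1 = \kcyc(\sigma\tau) + 1$.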
Let $\sigma \in I_p$, and let $\tau \in I_p$ be such that $\omega(\tau) \ne 1$. Using the fact that $\rho^\ord_p\vert_{I_p}$ is reducible, we see that the $(1,1)$-coordinate of $V^\ord_{\rho^\ord_p}(\sigma, \tau)$ is
\[
(\rho_{p,1,1}^\ord(\sigma)-\kcyc(\sigma))(\rho^\ord_{p,1,1}(\tau)-1)=0.
\]
Since $\rho_{p,1,1}^{\ord}$ is a deformation of $\omega$, we have $\rho_{p,1,1}^\ord(\tau)-1 \in (R_p^\ord)^\times$, so this implies $\rho_{p,1,1}^\ord(\sigma)-\kcyc(\sigma) =0$. This shows that $(\rho_{p,1,1}-\kcyc)(I_p) \subset (V^\ord_{\rho_p})$, and a similar argument gives $(\rho_{p,2,2}^\ord-1)(I_p) \subset (V^\ord_{\rho_p})$.
It remains to show that $\rho^\ord_{p,2,1}(G_p) = 0$. Let $\mathfrak{m} \subset R^\ord_p$ be the maximal ideal. In fact, we will show that $C^\ord_p/\mathfrak{m} C^\ord_p = 0$, which is equivalent, by Nakayama's lemma, because $\rho^\ord_{p,2,1}(G_p)$ generates the finitely generated $R^\ord_p$-module $C^\ord_p$. We work with $\bar{\rho}^\ord := \rho^\ord_p \pmod{\mathfrak{m}}$. Since $\bar{\rho}^\ord$ is reducible, we can consider $\bar\rho^{\ord}_{2,1} \in Z^1(G_p, C_p^\ord/\mathfrak{m} C_p^\ord \otimes_{\mathbb{F}_p} \mathbb{F}_p(-1))$, and \cite[Thm.\ 1.5.5]{BC2009} implies that there is an injection
\[
\mathrm{Hom}_{\mathbb{F}_p}(C_p^\ord/\mathfrak{m} C_p^\ord, \mathbb{F}_p) \hookrightarrow H^1(G_p, \mathbb{F}_p(-1))
\]
sending $\phi$ to the class of the cocycle $\phi \circ \bar\rho^{\ord}_{2,1}$. So to show that $C^\ord_p/\mathfrak{m} C^\ord_p$ is zero, it is enough to show that $\bar\rho^{\ord}_{2,1}$ is a coboundary, or, equivalently, that $\bar\rho^{\ord}_{2,1}(\sigma)=0$ for all $\sigma \in \ker(\omega) \subset G_p$. However, we compute that the $(2,1)$-entry of $V^\ord_{\rho_p}(\sigma, \tau)$ is
\[
\rho_{p,2,1}^\ord(\sigma)(\rho_{p,1,1}^\ord(\tau)-1)+(\rho_{p,2,2}^\ord(\sigma)-\kcyc(\sigma))\rho_{p,2,1}^\ord(\tau).
\]
Taking $\sigma \in \ker(\omega)$ and $\tau \in I_p$ such that $\omega(\tau)\ne 1$, we see that $\rho_{p,1,1}^\ord(\tau)-1 \equiv \omega(\tau) -1 \not \equiv 0 \pmod{\mathfrak{m}}$ and $\rho_{p,2,2}^\ord(\sigma)-\kcyc(\sigma) \in \mathfrak{m}$, so this implies $\bar\rho^{\ord}_{2,1}(\sigma)=0$.
\end{proof}
We have the following consequence, following \cite[\S5.9]{WWE1}.
\begin{prop}
\label{prop:ord C-H form}
A Cayley-Hamilton representation $(\rho: G_p \rightarrow E^\times, E, D: E \rightarrow A)$ over ${\bar D}\vert_{G_p}$ is ordinary if and only if it admits a GMA structure such that
\begin{enumerate}
\item it is upper triangular, i.e.\ $\rho_{2,1} = 0$, and
\item the diagonal character $\rho_{1,1}$ (resp.\ $\rho_{2,2}$) is the product of $\kcyc \otimes_{\mathbb{Z}_p} A$ (resp.\ the constant character $A$) and an unramified $A$-valued character.
\end{enumerate}
\end{prop}
\begin{cor}
\label{cor:ord to flat compare}
Any finite-flat Cayley-Hamilton representation of $G_p$ over ${\bar D}\vert_{G_p}$ is ordinary. The resulting morphism of universal Cayley-Hamilton representations of $G_p$, $(\rho^\ord_p, E^\ord_p, D_{E^\ord_p}) \rightarrow (\rho^{\mathrm{flat}}_p, E^{\mathrm{flat}}_p, D_{E^{\mathrm{flat}}_p})$, induces an isomorphism on universal pseudodeformation rings $R^\ord_p \buildrel\sim\over\ra R^{\mathrm{flat}}_p$. The universal pseudodeformations $D^\ord_p \cong D^{\mathrm{flat}}_p$ of ${\bar D}\vert_{G_p}$ have the form $\psi(\kcyc \chi_1 \oplus \chi_2)$, where $\chi_1, \chi_2$ are unramified deformations of the trivial character $1 : G_p \rightarrow \mathbb{F}_p^\times$.
\end{cor}
\begin{proof}
The Cayley-Hamilton representation $\rho_p^{\mathrm{flat}}$ satisfies conditions (1) and (2) of Proposition \ref{prop:ord C-H form} by Lemmas \ref{lem:flat implies upper tri} and \ref{lem:flat psrep form}, respectively. The isomorphism of universal pseudorepresentations becomes evident by comparing Lemma \ref{lem:flat psrep form} and Proposition \ref{prop:ord C-H form}(2).
\end{proof}
\subsection{Global formulation}
We now combine the local constructions to define what it means for a global Cayley-Hamilton representation or pseudorepresentation to be unramified-or-Steinberg of level $N$ and type $\epsilon$.
\begin{defn}
\label{defn:US global}
Let $(\rho : G_{\mathbb{Q},S} \rightarrow E^\times, E, D_E : E \rightarrow A)$ be a Cayley-Hamilton representation over ${\bar D}$. We say that $\rho$ is \emph{unramified-or-Steinberg of level $N$ and type $\epsilon$} (or $\mathrm{US}_N^\epsilon$) when $\rho\vert_{G_\ell}$ is $\mathrm{US}_\ell^{\epsilon_\ell}$ for all primes $\ell \mid N$, and, if $p \nmid N$, $\rho|_{G_p}$ is finite-flat.
Let $D : G_{\mathbb{Q},S} \rightarrow A$ be a pseudodeformation of ${\bar D}$. We say that $D$ is \emph{unramified-or-Steinberg of level $N$ and type $\epsilon$} (or $\mathrm{US}_N^\epsilon$) when there exists a Cayley-Hamilton representation $(\rho: G_{\mathbb{Q},S} \rightarrow E^\times, E, D_E : E \rightarrow A)$ such that $D = \psi(\rho)$ and $\rho$ is $\mathrm{US}^\epsilon_N$.
\end{defn}
Recall the Cayley-Hamilton representation $\rho_{\bar D}$ set up in \S\ref{subsec:CH Gal setup}. There are maps of Cayley-Hamilton algebras $\iota_\ell : (E_\ell, D_{E_\ell}) \rightarrow (E_{\bar D}, D_{E_{\bar D}})$ arising from the fact that $\rho_{\bar D}\vert_{G_\ell}$ is a Cayley-Hamilton representation of $G_\ell$ over ${\bar D}\vert_{G_\ell}$. For any $\ell \mid Np$, write $J^\epsilon_\ell$ for the kernel of $E_\ell \rightarrow E_\ell^{\epsilon_\ell}$ (refer to Definition \ref{defn:E_p^epsilon} for $E_p^{\epsilon_p}$).
\begin{defn}
\label{defn:global US univ objects}
Let $(E^\epsilon_N, D_{E^\epsilon_N})$ denote the Cayley-Hamilton algebra quotient of $E_{\bar D}$ by the union of $\iota_\ell(J^\epsilon_\ell)$ over all primes $\ell \mid Np$. We denote the quotient Cayley-Hamilton representation of $G_{\mathbb{Q},S}$ by
\[
(\rho^\epsilon_N : G_{\mathbb{Q},S} \longrightarrow (E^\epsilon_N)^\times, E^\epsilon_N, D_{E^\epsilon_N} : E^\epsilon_N \longrightarrow R^\epsilon_N)
\]
and its induced pseudorepresentation by $D^\epsilon_N = \psi(\rho^\epsilon_N) : G_{\mathbb{Q},S} \rightarrow R^\epsilon_N$.
\end{defn}
Using \S\ref{sssec:cond on CH}, we see that $\rho_N^\epsilon$ (resp.\ $D^\epsilon_N$) is the universal $\mathrm{US}^\epsilon_N$ Cayley-Hamilton representation (resp.\ pseudodeformation) over ${\bar D}$. In particular, a homomorphism $R_{\bar D} \to A$ factors through $R_N^\epsilon$ if and only if the corresponding pseudodeformation $D:G_{\mathbb{Q},S} \to A$ of ${\bar D}$ satisfies $\mathrm{US}^\epsilon_N$.
\begin{prop}
\label{prop:global US det is kcyc}
Let $D : G_{\mathbb{Q},S} \rightarrow A$ be a pseudodeformation of ${\bar D}$ satisfying $\mathrm{US}^\epsilon_N$. Then $D(\tau)=\kcyc(\tau)$ for all $\tau \in G_{\mathbb{Q},S}$.
\end{prop}
\begin{proof}
It suffices to show that $D(\tau)=\kcyc(\tau)$ for all $\tau \in I_\ell$ and all $\ell \mid Np$, since this will show that $G_{\mathbb{Q},S} \ni \sigma \mapsto D(\sigma) \kcyc^{-1}(\sigma) \in A^\times$ is a character of $G_{\mathbb{Q},S}$ that is unramified everywhere and hence trivial. For $\ell \ne p$, this follows from Lemma \ref{lem:USell implies pseudo-unram}, and for $\ell=p$ this follows from Corollary \ref{cor:ord to flat compare}.
\end{proof}
\subsection{Information about $B_N^\epsilon$ and $C_N^\epsilon$}
Recall that we fixed a GMA structure on $E_p$ in \S \ref{subsec:ord case}. This defines a GMA structure on $E_p^{\epsilon_p}$ and $E^\epsilon_N$ via the Cayley-Hamilton algebra morphisms $E_p \to E_p^{\epsilon_p}$ and $E_p^{\epsilon_p} \rightarrow E^\epsilon_N$. We write this GMA structure as
\begin{equation}
\label{eq:GMA}
E^\epsilon_N= \ttmat{R^\epsilon_N}{B^\epsilon_N}{C^\epsilon_N}{R^\epsilon_N}, \quad \rho_N^\epsilon(\tau) = \ttmat{a_{\tau}}{b_{\tau}}{c_{\tau}}{d_{\tau}}.
\end{equation}
\subsubsection{Computation of $B_{\mathrm{flat}}^{\min}$ and $C_{\mathrm{flat}}^{\min}$}
\label{sssec:comp of Bfl and Cfl}
First we work in the case that either $p \nmid N$ or $\epsilon_p=+1$, so $E^{\epsilon_p}_p = E^{\mathrm{flat}}_p$, with a GMA structure chosen. Let $(E_{\mathrm{flat}}, D_{E_{\mathrm{flat}}})$ represent the Cayley-Hamilton quotient of $E_{\bar D}$ by $\iota_p(J^\epsilon_p)$, with a GMA structure coming from $E^{\mathrm{flat}}_p \rightarrow E_{\mathrm{flat}}$. Write this GMA structure as
\begin{equation}
\label{eq:GMA structure}
E_{\mathrm{flat}} \cong \ttmat{R_{\mathrm{flat}}}{B_{\mathrm{flat}}}{C_{\mathrm{flat}}}{R_{\mathrm{flat}}}, \quad \rho_{\mathrm{flat}}(\tau) = \ttmat{a_{{\mathrm{flat}},\tau}}{b_{{\mathrm{flat}},\tau}}{c_{{\mathrm{flat}},\tau}}{d_{{\mathrm{flat}},\tau}}.
\end{equation}
Let $J^{\min}_{\mathrm{flat}} =\ker(R_{\mathrm{flat}} \to \mathbb{Z}_p)$, where $R_{\mathrm{flat}} \to \mathbb{Z}_p$ corresponds to $\psi(\mathbb{Z}_p(1) \oplus \mathbb{Z}_p)$, which is obviously finite-flat. Let
\[
B_{\mathrm{flat}} ^{\min} =B_{\mathrm{flat}}/J^{\min}_{\mathrm{flat}} B_{\mathrm{flat}}, \quad C_{\mathrm{flat}} ^{\min} =C_{\mathrm{flat}}/J^{\min}_{\mathrm{flat}} C_{\mathrm{flat}} .
\]
By \cite[Prop.\ 2.5.1]{WWE3}, we have, for any finitely-generated $\mathbb{Z}_p$-module $M$, isomorphisms
\begin{align}
\label{eq:BC and exts}
\begin{split}
&\mathrm{Hom}_{\mathbb{Z}_p}(B_{\mathrm{flat}} ^{\min},M) \cong H^1_{\mathrm{flat}}(\mathbb{Z}[1/Np],M(1)), \\
&\mathrm{Hom}_{\mathbb{Z}_p}(C_{\mathrm{flat}} ^{\min},M) \cong H^1_{(p)}(\mathbb{Z}[1/Np],M(-1)),
\end{split}
\end{align}
where
\[
H^1_{\mathrm{flat}}(\mathbb{Z}[1/Np],M(1))= \ker\left(H^1(\mathbb{Z}[1/Np],M(1)) \to \frac{H^1(\mathbb{Q}_p,M(1))}{\mathrm{Ext}_{\mathrm{ffgs}/\mathbb{Z}_p}(M,\mathrm{Ta}_p(\mu_{p^\infty}))}\right)
\]
and
\[
H^1_{(p)}(\mathbb{Z}[1/Np],M(1))=\ker(H^1(\mathbb{Z}[1/Np], M(-1)) \to H^1(\mathbb{Q}_p,M(-1))).
\]
The Galois cohomology computations of \cite[\S6.3]{WWE3} allow us to compute these.
\begin{lem}
\label{lem:B_fl and C_fl}
There are isomorphisms
\[
\mathbb{Z}_p^{\oplus r+1} \isoto B_{\mathrm{flat}} ^{\min}, \quad \bigoplus_{i=0}^r \mathbb{Z}_p/(\ell_i^2-1)\mathbb{Z}_p \isoto C_{\mathrm{flat}} ^{\min}
\]
given by $e_i \mapsto b_{{\mathrm{flat}},\gamma_i}$ and $e_i \mapsto c_{{\mathrm{flat}},\gamma_i}$, where $e_i \in \mathbb{Z}_p^{\oplus r+1}$ is the $i$-th standard basis vector.
\end{lem}
\subsubsection{Computation of $B_\ord^{\min}$ and $C_\ord^{\min}$}
\label{sssec:comp of Bord and Cord}
Next we compute in the case $p \mid N$ and $\epsilon_p = -1$, so $E^{\epsilon_p}_p = E^\ord_p$. Let $(E_\ord, D_{E_\ord})$ be the Cayley-Hamilton quotient of $E_{\bar D}$ by $\iota_p(J^\epsilon_p)$, receiving a GMA structure via $E^\ord_p \rightarrow E_\ord$. Write this GMA structure as
\begin{equation}
\label{eq:GMA structure ord}
E_\ord \cong \ttmat{R_\ord}{B_\ord}{C_\ord}{R_\ord}, \quad \rho_\ord(\tau) = \ttmat{a_{\ord,\tau}}{b_{\ord,\tau}}{c_{\ord,\tau}}{d_{\ord,\tau}}.
\end{equation}
Let $J^{\min}_\ord =\ker(R_\ord \to \mathbb{Z}_p)$, where $R_\ord \to \mathbb{Z}_p$ corresponds to $\psi(\mathbb{Z}_p(1) \oplus \mathbb{Z}_p)$, which is obviously ordinary. Let
\[
B_\ord^{\min} =B_\ord/J^{\min}_\ord B_\ord, \quad C_\ord ^{\min} =C_\ord/J^{\min}_\ord C_\ord .
\]
Just as in \cite[Lem.\ 4.1.5]{WWE2}, we have, for any finitely-generated $\mathbb{Z}_p$-module $M$, isomorphisms
\begin{align}
\label{eq:BC and exts ord}
\begin{split}
&\mathrm{Hom}_{\mathbb{Z}_p}(B_\ord ^{\min},M) \cong H^1(\mathbb{Z}[1/Np],M(1)),\\
& \mathrm{Hom}_{\mathbb{Z}_p}(C_\ord^{\min},M) \cong H^1_{(p)}(\mathbb{Z}[1/Np], M(-1)).
\end{split}
\end{align}
The Galois cohomology computations of \cite[\S6.3]{WWE3} allow us to compute these. Recall that $\gamma_i$ is defined in \S\ref{subsec:notation}, even when $\ell_i = p$.
\begin{lem}
\label{lem:B_ord and C_ord}
There are isomorphisms
\[
\mathbb{Z}_p^{\oplus r+1} \isoto B_\ord ^{\min}, \quad \bigoplus_{i=0}^r \mathbb{Z}_p/(\ell_i^2-1)\mathbb{Z}_p \isoto C_\ord ^{\min}
\]
given by $e_i \mapsto b_{\ord,\gamma_i}$ and $e_i \mapsto c_{\ord,\gamma_i}$, where $e_i \in \mathbb{Z}_p^{\oplus r+1}$ is the $i$-th standard basis vector.
\end{lem}
\subsubsection{Information about $B_N^{\epsilon,{\min}}$ and $C_N^{\epsilon,{\min}}$}
Let $J^{\min} := \ker(R^\epsilon_N \rightarrow \mathbb{Z}_p)$, where this homomorphism is induced by the $\mathrm{US}^\epsilon_N$ pseudodeformation $\psi(\mathbb{Z}_p(1) \oplus \mathbb{Z}_p)$ of ${\bar D}$.
\begin{lem}
\label{lem:info about B and C}
We consider $B_N^{\epsilon,{\min}}=B_N^\epsilon/{J^{\min{}}} B_N^\epsilon$ and $C_N^{\epsilon,{\min}}=C_N^\epsilon/{J^{\min{}}} C_N^\epsilon$.
\begin{enumerate}
\item If $\epsilon_i=1$ and $\ell_i \ne p$, then the image of $b_{\gamma_i}$ in $B_N^{\epsilon,{\min}}$ is $0$.
\item If $\epsilon_i=-1$ and $\ell_i \equiv -1 \pmod{p}$, then the image of $c_{\gamma_i}$ in $C_N^{\epsilon,{\min}}$ is $0$.
\end{enumerate}
Moreover, there are surjections
\[
\bigoplus_{i=0}^r \mathbb{Z}_p/(\epsilon_i+1)\mathbb{Z}_p \onto B_N^{\epsilon,{\min}}, \qquad \bigoplus_{i=0}^r \mathbb{Z}_p/(\ell_i+\epsilon_i)\mathbb{Z}_p \onto C_N^{\epsilon,{\min}},
\]
given by $e_i \mapsto b_{\gamma_i}$ and $e_i \mapsto c_{\gamma_i}$, respectively.
\end{lem}
\begin{proof}
Note that for $\rho_N^{\epsilon,{\min}} = \rho_N^\epsilon \otimes_{R_N^\epsilon} R_N^{\epsilon,{\min}}$, in the GMA structure, we have
\[
\rho_N^{\epsilon,{\min}} = \ttmat{\kcyc}{b}{c}{1}.
\]
Note that we have
\[
V^{\epsilon_i}_{\rho_N^{\epsilon, {\min}}}(\gamma_i, \sigma_i) = (\rho_N^{\epsilon,{\min}}(\gamma_i)-1)(\rho_N^{\epsilon,{\min}}(\sigma_i)+\epsilon_i)= 0.
\]
In GMA notation, this is
\[
0=\ttmat{0}{b_{\gamma_i}}{c_{\gamma_i}}{0} \ttmat{\ell_i+\epsilon_i}{b_{\sigma_i}}{c_{\sigma_i}}{1+\epsilon_i} = \ttmat{0}{(1+\epsilon_i)b_{\gamma_i}}{(\ell_i+\epsilon_i)c_{\gamma_i}}{0}.
\]
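Note that the diagonal entries $b_{\gamma_i}c_{\sigma_i}$ and $c_{\gamma_i}b_{\sigma_i}$ of this product vanish: the image of the multiplication map $B_N^\epsilon \otimes C_N^\epsilon \to R_N^\epsilon$ generates the reducibility ideal of $D_N^\epsilon$ (cf.\ \cite[\S1.5]{BC2009}), which maps to zero in $R_N^{\epsilon,{\min}} = \mathbb{Z}_p$ because $\psi(\mathbb{Z}_p(1) \oplus \mathbb{Z}_p)$ is reducible.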
In case (1), $(1+\epsilon_i)$ is invertible, so $b_{\gamma_i}=0$. In case (2), $(\ell_i+\epsilon_i)$ is invertible, so $c_{\gamma_i}=0$.
The final statement follows from (1) and (2) and Lemma \ref{lem:B_ord and C_ord} if $p \mid N$ and $\epsilon_p=-1$; otherwise, it follows from Lemma \ref{lem:B_fl and C_fl}.
\end{proof}
\subsection{Labeling some cohomology classes}
\label{subsec:label}
Later, in \S\ref{sec:GP}, it will be convenient to have notation for the extension classes, taken as Galois cohomology classes, arising from homomorphisms $B_N^{\epsilon,{\min}} \rightarrow \mathbb{F}_p$ and $C_N^{\epsilon,{\min}} \rightarrow \mathbb{F}_p$.
\begin{defn}
\label{defn:bc in H1}
We call a cohomology class $x \in H^i(\mathbb{Z}[1/Np],M)$ \emph{ramified at a prime $\ell$} when its image in $H^i(I_\ell, M)$ is non-zero. For certain $i, 0 \leq i \leq r$, we designate $b_i$ and $c_i$ as follows.
\begin{itemize}[leftmargin=2em]
\item For $i=0,\dots, r$, let $\tilde{b}_i$ denote the $\mathbb{F}_p^\times$-multiple of the Kummer cocycle of $\ell_i$ normalized so that $\tilde b_i(\gamma_i) = 1$, and let $b_i \in H^1(\mathbb{Z}[1/Np],\mathbb{F}_p(1))$ be the class of $\tilde{b}_i$.
\item Let $T=\{0\le i \le r \ : \ \ell_i \equiv \pm 1\pmod{p}\}$. For $i \in T$, let $c_i \in H^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1))$ be an element that is ramified exactly at $\ell_i$ and such that $\tilde c_i(\gamma_i) = 1$ for any cocycle $\tilde c_i$ representing $c_i$.
\end{itemize}
\end{defn}
\begin{lem}
The sets $\{b_i\}_{i=0}^r$ and $\{c_i\}_{i \in T}$ are well-defined and satisfy the following properties:
\begin{enumerate}[label=(\roman*)]
\item $b_i$ is characterized up to $\mathbb{F}_p^\times$-scaling by being ramified at $\ell_i$ and unramified outside $\{\ell_i, p\}$.
\item If $p \mid N$, the set $\{b_i\}_{i=0}^r$ is a basis of $H^1(\mathbb{Z}[1/Np],\mathbb{F}_p(1))$.
\item The subset $\{b_i \ : \ \ell_i \ne p\}$ is a basis of $H^1_{\mathrm{flat}}(\mathbb{Z}[1/Np],\mathbb{F}_p(1))$.
\item The set $\{c_i\}_{i \in T}$ is a basis of $H^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1))$.
\end{enumerate}
\end{lem}
\begin{proof}
See \cite[Prop.\ 6.3.3]{WWE3}, combined with the Tate duality of Thm.\ B.3.2 of \emph{loc.\ cit.}, for the existence of $c_i \in H^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1))$ characterized up to $\mathbb{F}_p^\times$-scaling by being ramified exactly at $\ell_i$. These statements also imply Part (iv). Because $\omega\vert_{I_{\ell_i}} = 1$, $\tilde c_i\vert_{I_{\ell_i}} : I_{\ell_i} \twoheadrightarrow \mathbb{F}_p$ is a homomorphism not depending on the choice of $\tilde c_i$.
The value of $\tilde b_i(\gamma_i)$ is well-defined for the same reason when $\ell_i \neq p$, and $b_p(\gamma_p)$ is well-defined by the choice of $\gamma_p$ (in \S\ref{subsec:notation}). Parts (i), (ii), and (iii) follow from Kummer theory.
\end{proof}
The stated bases are almost dual bases, with the exception arising from the possibility that $b_i$ is ramified at $p$ even when $\ell_i \neq p$.
\begin{lem}
\label{lem:dual B C and H1}
Under the perfect pairings
\begin{enumerate}
\item $B_{\mathrm{flat}} \otimes_{R_{\mathrm{flat}}} \mathbb{F}_p \times H^1_{\mathrm{flat}}(\mathbb{Z}[1/Np],\mathbb{F}_p(1)) \to \mathbb{F}_p$,
\item $C_\ord \otimes_{R_\ord} \mathbb{F}_p \times H^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1)) \to \mathbb{F}_p$
\item $C_{\mathrm{flat}} \otimes_{R_{\mathrm{flat}}} \mathbb{F}_p \times H^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1)) \to \mathbb{F}_p$,
\end{enumerate}
defined by \eqref{eq:BC and exts} and \eqref{eq:BC and exts ord}, the following are respective dual basis pairs
\begin{enumerate}
\item $\{b_{{\mathrm{flat}},\gamma_i} \ : \ i=0,\dots, r \text{ if } \ell_i \ne p\}$ and $\{b_{i} \ : \ i=0,\dots, r \text{ if } \ell_i \ne p\}$
\item $\{c_{\ord,\gamma_i} \ : \ i\in T\}$ and $\{c_{i} \ : \ i\in T\}$
\item $\{c_{{\mathrm{flat}},\gamma_i} \ : \ i\in T\}$ and $\{c_{i} \ : \ i\in T\}$
\end{enumerate}
Also, for $0 \leq i, j \leq r$ such that $\ell_i = p$ or $\ell_j \neq p$, we have $b_i(b_{\ord, \gamma_j}) = \delta_{ij}$.
\end{lem}
\begin{proof}
We give the proof for (1), the other parts being similar. The pairing \eqref{eq:BC and exts} sends a class $x \in H^1_{\mathrm{flat}}(\mathbb{Z}[1/Np],\mathbb{F}_p(1))$ to a homomorphism $B_{\mathrm{flat}} \to \mathbb{F}_p$ that sends $b_\tau$ to $\tilde{x}(\tau)$, where $\tilde{x}$ is a particular cocycle representing $x$ (the choice is determined by the choice of GMA structure on $E_{\mathrm{flat}}$). However, if $\omega(\tau)=1$, the value of $\tilde{x}(\tau)$ is independent of the choice of cocycle, and we may write this value as $x(\tau)$. Hence we see that $b_i(b_{{\mathrm{flat}},\gamma_j})=b_i(\gamma_j)=\delta_{ij}$.
\end{proof}
\begin{defn}
\label{defn:K_i}
For each $i \in T$, let $K_i$ be the fixed field of $\ker(\tilde c_i \vert_{G_{\mathbb{Q}(\zeta_p),S}})$, where $\tilde c_i$ is any cocycle $\tilde c_i : G_{\mathbb{Q},S} \rightarrow \mathbb{F}_p(-1)$ representing $c_i$.
\end{defn}
One readily verifies that $K_i$ is the unique extension of $\mathbb{Q}(\zeta_p)$ satisfying the properties of \S\ref{subsubsec:defn of K_i}.
\section{Toward $R=\mathbb{T}$}
\subsection{The map $R_N^\epsilon \to \mathbb{T}_N^\epsilon$} We prove the following proposition, following the construction technique of Calegari-Emerton \cite[Prop.\ 3.12]{CE2005}.
\begin{prop}
\label{prop:R to T}
There is a surjective homomorphism $R_N^\epsilon \twoheadrightarrow \mathbb{T}_N^\epsilon$ of augmented $\mathbb{Z}_p$-algebras. Moreover, $\mathbb{T}_N^\epsilon$ is generated as a $\mathbb{Z}_p$-algebra by the operators $T_q$, where $q$ ranges over any cofinite set of primes not dividing $Np$.
\end{prop}
\begin{proof}
For this proof, it is important to note that the elements $\mathrm{Tr}_{D^\epsilon_N}(\mathrm{Fr}_q)$, for $q$ ranging over any such set of primes, generate $R_N^\epsilon$ as a $\mathbb{Z}_p$-algebra. This follows from the fact that $R^\epsilon_N$ is a quotient of the (unrestricted) universal pseudodeformation ring $R_{\bar D}$, that the traces $\{\mathrm{Tr}_{D_{\bar D}}(\sigma) : \sigma \in G_{\mathbb{Q},S}\}$ of the universal pseudodeformation generate $R_{\bar D}$ (because the residue characteristic is not 2, see \cite[Prop.\ 1.29]{chen2014}), and Chebotarev density.
In the rest of the proof, we use the notation $\Sigma$, $\rho_f$ and $\mathcal{O}_f$ established in Lemma \ref{lem:normalization of bT0}. We proceed in three steps:
\begin{itemize}
\item[{\bf Step 1.}] Construct a homomorphism $R_N^\epsilon \to \mathcal{O}_f$ for each $f \in \Sigma$ that sends $\mathrm{Tr}_{D^\epsilon_N}(\mathrm{Fr}_q)$ to $a_q(f)$ for each prime $q \nmid Np$.
\item[{\bf Step 2.}] Show that the resulting map $R_N^\epsilon \to \mathbb{Z}_p \oplus \bigoplus_f \mathcal{O}_f$ sends $\mathrm{Tr}_{D^\epsilon_N}(\mathrm{Fr}_q)$ to the image of $T_q$ under the map $\mathbb{T}_N^\epsilon \to \mathbb{Z}_p \oplus \bigoplus_f\mathcal{O}_f$ of \eqref{eq:bT in product}, for each $q \nmid Np$. This gives a homomorphism $R_N^\epsilon \to \mathbb{T}_N^\epsilon$ whose image is the $\mathbb{Z}_p$-subalgebra generated by the $T_q$ for all $q \nmid Np$. This completes the proof if $p \mid N$.
\item[{\bf Step 3.}] In the case that $p \nmid N$, show that the image of $R_N^\epsilon \to \mathbb{T}_N^\epsilon$ contains $U_p$ and $U_p^{-1}$. This shows both that $R_N^\epsilon \to \mathbb{T}_N^\epsilon$ is surjective and that $\mathbb{T}_N^\epsilon$ is generated as a $\mathbb{Z}_p$-algebra by $T_q$ for $q \nmid Np$.
\end{itemize}
\begin{proof}[Proof of Step 1]
Let $f \in \Sigma$. Then $\psi(\bar{\rho}_f)={\bar D}$, so $\psi(\rho_f)$ induces a map $R_{\bar D} \to \mathcal{O}_f$. For each prime $q \nmid Np$, we have $\mathrm{Tr}(\rho_f(\mathrm{Fr}_q))=a_q(f)$, so $R_{\bar D} \to \mathcal{O}_f$ sends $\mathrm{Tr}_{D_{\bar D}}(\mathrm{Fr}_q)$ to $a_q(f)$.
In order to show that $R_{\bar D} \to \mathcal{O}_f$ factors through $R_N^\epsilon$, we prove that $\psi(\rho_f)$ and $\rho_f$ are $\mathrm{US}^\epsilon_N$ by verifying local conditions, as per Definition \ref{defn:US global}.
\begin{itemize}[leftmargin=2em]
\item For $\ell \mid N$ with $\ell \ne p$, $\rho_f\vert_{G_\ell}$ is $\mathrm{US}_\ell^{\epsilon_\ell}$ by Lemma \ref{lem:normalization of bT0}(3).
\item If $p \nmid N$, or if $p \mid N$ and $f$ is old at $p$, then $\rho_f\vert_{G_p}$ is finite-flat by Lemma \ref{lem:normalization of bT0}(4a). Also, when $p\mid N$, this implies that $\rho_f\vert_{G_p}$ is $\mathrm{US}_p^{\epsilon_p}$ by definition if $\epsilon_p=+1$ and by Corollary \ref{cor:ord to flat compare} if $\epsilon_p=-1$.
\item If $f$ is new at $p$, then $\epsilon_p=-1$ and $\rho_f\vert_{G_p}$ is $\mathrm{US}_p^{-1}$ by Lemma \ref{lem:normalization of bT0}(4b). \qedhere
\end{itemize}
\end{proof}
\begin{proof}[Proof of Step 2]
By construction, the map $R_N^\epsilon \to \mathbb{Z}_p \oplus \bigoplus_f\mathcal{O}_f$ sends $\mathrm{Tr}_{D^\epsilon_N}(\mathrm{Fr}_q)$ to $(1+q,\bigoplus_f a_q(f))$, which, by \eqref{eq:bT in product}, is the image of $T_q$.
\end{proof}
\begin{proof}[Proof of Step 3]
Let $\tau \in I_p$ be an element such that $\omega(\tau) \ne 1$. Let $x=\kcyc(\tau) \in \mathbb{Z}_p$, so that $1-x \in \mathbb{Z}_p^\times$. Let $\sigma_p \in G_p$ be the element defined in \S\ref{subsec:notation} and let $z=\kcyc(\sigma_p)$. By Lemma \ref{lem:normalization of bT0}(4), we see that $\mathrm{Tr}(\rho_f(\sigma_p))=za_p(f)^{-1}+a_p(f)$ and $\mathrm{Tr}(\rho_f(\tau\sigma_p))=xza_p(f)^{-1}+a_p(f)$. Hence we have
\begin{align*}
&a_p(f)=\frac{1}{x-1}\big( x\mathrm{Tr}(\rho_f(\sigma_p))-\mathrm{Tr}(\rho_f(\tau\sigma_p))\big) \quad \text{and} \\
&a_p(f)^{-1} = \frac{1}{z-xz} \big(\mathrm{Tr}(\rho_f(\sigma_p)) -\mathrm{Tr}(\rho_f(\tau\sigma_p))\big).
\end{align*}
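Indeed, from the two trace identities, $x\mathrm{Tr}(\rho_f(\sigma_p))-\mathrm{Tr}(\rho_f(\tau\sigma_p)) = (x-1)a_p(f)$ and $\mathrm{Tr}(\rho_f(\sigma_p))-\mathrm{Tr}(\rho_f(\tau\sigma_p)) = (z-xz)a_p(f)^{-1}$, and both $x-1$ and $z-xz = z(1-x)$ lie in $\mathbb{Z}_p^\times$, so the displayed formulas follow.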
We see that $U_p$ is the image of $\frac{1}{x-1}(x\mathrm{Tr}_{D^\epsilon_N}(\sigma_p)-\mathrm{Tr}_{D^\epsilon_N}(\tau\sigma_p))$ and $U_p^{-1}$ is the image of $\frac{1}{z-xz}(\mathrm{Tr}_{D^\epsilon_N}(\sigma_p)-\mathrm{Tr}_{D^\epsilon_N}(\tau\sigma_p))$. Since $\mathbb{T}_N^\epsilon$ is generated by $T_q$ for $q \nmid Np$ along with $T_p = U_p + pU_p^{-1}$, we see that $R_N^\epsilon \to \mathbb{T}_N^\epsilon$ is surjective.
\end{proof}
\let\qed\relax
\end{proof}
\subsection{Computation of $(R_N^\epsilon)^\mathrm{red}$}
\label{subsec:compute Rred}
In this section, we will frequently make use of the elements $\sigma_i$ and $\gamma_i$ defined in \S \ref{subsec:notation}. We denote by $M^{p\text{-part}}$ the maximal $p$-primary quotient of a finite abelian group $M$.
Consider the group $\mathrm{Gal}(\mathbb{Q}(\zeta_N)/\mathbb{Q})^{p\text{-part}}$. We have isomorphisms
\[
\mathrm{Gal}(\mathbb{Q}(\zeta_N)/\mathbb{Q})^{p\text{-part}} \buildrel\sim\over\lra \prod_{i=0}^r \mathrm{Gal}(\mathbb{Q}(\zeta_{\ell_i})/\mathbb{Q})^{p\text{-part}} \buildrel\sim\over\lra \prod_{i=0}^r \mathbb{Z}_p/(\ell_i-1)\mathbb{Z}_p.
\]
Since $\mathbb{Q}(\zeta_{\ell_i})/\mathbb{Q}$ is totally ramified at $\ell_i$, we can and do choose the second isomorphism so that the image of $\gamma_i$ is $(0,\dots,0,1,0,\dots,0)$ (with $1$ in the $i$-th factor). We define $\alpha_j^i$ to be the $j$-th factor of the image of $\sigma_i$, so that $\sigma_i \mapsto (\alpha_0^i,\alpha_1^i,\dots,\alpha_r^i)$ (we can and do assume that $\alpha_i^i=0$).
\begin{rem}
\label{rem:log tame}
Note that if $\ell_j \equiv 1 \pmod{p}$, we may choose a surjective homomorphism $\log_{\ell_j}:(\mathbb{Z}/\ell_j\mathbb{Z})^\times \twoheadrightarrow \mathbb{F}_p$ such that $\log_{\ell_j}(\ell_i) \equiv \alpha_j^i \pmod{p}$. By abuse of notation, we denote by $\log_j = \log_{\ell_j}$ the $\mathbb{F}_p$-valued function on $G_{\mathbb{Q},S}$ produced by composition with the canonical surjection $G_{\mathbb{Q},S} \twoheadrightarrow \mathrm{Gal}(\mathbb{Q}(\zeta_{\ell_j})/\mathbb{Q}) \buildrel\sim\over\ra (\mathbb{Z}/\ell_j\mathbb{Z})^\times$.
\end{rem}
This isomorphism determines an isomorphism of group rings
\[
\mathbb{Z}_p[\mathrm{Gal}(\mathbb{Q}(\zeta_N)/\mathbb{Q})^{p\text{-part}}] \isoto \mathbb{Z}_p\left[\prod_{i=0}^r \mathbb{Z}_p/(\ell_i-1)\mathbb{Z}_p\right] \cong \mathbb{Z}_p[y_0,\dots,y_r]/(y_i^{p^{v_i}}-1)
\]
where $v_i=v_p(\ell_i-1)$, and where the second isomorphism sends $y_i$ to the group-like element $(0,\dots,0,1,0,\dots,0)$ (with $1$ in the $i$-th factor). Let
\[
\dia{-}:G_{\mathbb{Q},S} \to (\mathbb{Z}_p[y_0,\dots,y_r]/(y_i^{p^{v_i}}-1))^\times
\]
be the character obtained by the quotient $G_{\mathbb{Q},S} \onto \mathrm{Gal}(\mathbb{Q}(\zeta_N)/\mathbb{Q})^{p\text{-part}}$ followed by this isomorphism. Note that
\[
\dia{\gamma_i}=y_i, \qquad \dia{\sigma_i}=\prod_{j=0}^r y_j^{\alpha_j^i}.
\]
Let $R^\mathrm{red}_{\mathrm{flat}}(\kcyc)$ (resp.\ $R^\mathrm{red}_\ord(\kcyc)$) be the quotient of the finite-flat global deformation ring $R_{\mathrm{flat}}$ (resp.\ ordinary global deformation ring $R_\ord$) defined in \S\ref{sssec:comp of Bfl and Cfl} (resp.\ \S\ref{sssec:comp of Bord and Cord}) by the ideal generated by the reducibility ideal along with $\{D_{\bar D}(\gamma) - \kcyc(\gamma) : \gamma \in G_{\mathbb{Q},S}\}$. That is, we are insisting that the determinant is $\kcyc$.
\begin{lem}
\label{lem:calculate R-red-flat}
The surjection $R_\ord \twoheadrightarrow R_{\mathrm{flat}}$ induces an isomorphism $R_{\ord}^\mathrm{red}(\kcyc) \buildrel\sim\over\ra R_{\mathrm{flat}}^\mathrm{red}(\kcyc)$. Moreover, they are both isomorphic as rings to
\[
\mathbb{Z}_p[y_0,\dots,y_r]/(y_i^{p^{v_i}}-1)
\]
and the universal reducible pseudorepresentation pulls back to $D^\mathrm{red}=\psi(\kcyc \dia{-}^{-1} \oplus \dia{-})$ via these isomorphisms.
\end{lem}
\begin{proof}
The quotient map $R_\ord \twoheadrightarrow R_{\mathrm{flat}}$ comes from the first part of Corollary \ref{cor:ord to flat compare}, and the two rings differ only in the local condition at $p$. After imposing the reducibility and determinant conditions, the universal pseudodeformations both have the form $\psi(\kcyc \chi^{-1} \oplus \chi)$ for a character $\chi$ that deforms the trivial character. By the latter parts of the corollary, the finite-flat and ordinary conditions on such pseudodeformations are identical. The last statement is proven just as in \cite[Lem.\ 5.1.1]{WWE3}.
\end{proof}
Let $Y_i=1+y_i$.
\begin{lem}
\label{lem:computation of Rred}
There is an isomorphism
\[
(R_N^\epsilon)^\mathrm{red} \cong \mathbb{Z}_p[Y_0,\dots,Y_r]/\mathfrak{a}
\]
where $\mathfrak{a}$ is the ideal generated by the elements
\[
Y_i^2,\ (\ell_i-1)Y_i,\ (\epsilon_i+1)Y_i,\ Y_i\left(\prod_{j=0}^r(1-\tilde{\alpha}_j^iY_j)-1\right),\ Y_i\left(\prod_{j=0}^r(1+\tilde{\alpha}_j^iY_j)-1\right),
\]
for $i=0,\dots,r$, where $\tilde{\alpha}_j^i \in \mathbb{Z}_p$ is any lift of ${\alpha_j^i}\in \mathbb{Z}_p/(\ell_j-1)\mathbb{Z}_p$ (note that $\mathfrak{a}$ is independent of the choice of this lift).
\end{lem}
\begin{proof}
We consider $(E_N^\epsilon)^\mathrm{red} =E_N^\epsilon \otimes_{R_N^\epsilon} (R_N^\epsilon)^\mathrm{red}$. We write the base-change of $\rho_N^\epsilon$ to this algebra as $\rho^\mathrm{red}$, for simplicity. Write $\odia{-}: G_{\mathbb{Q},S} \to ((R_N^\epsilon)^\mathrm{red})^\times$ for the composite of $\dia{-}$ with the quotient $R_{\mathrm{flat}}^\mathrm{red}(\kcyc) \to (R_N^\epsilon)^\mathrm{red}$, which exists by Proposition \ref{prop:global US det is kcyc}. (We use $R_{\mathrm{flat}}^\mathrm{red}(\kcyc)$ even in the ordinary case, in light of Lemma \ref{lem:calculate R-red-flat}.)
First we show that the map $R_{\mathrm{flat}}^\mathrm{red}(\kcyc) \to (R_N^\epsilon)^\mathrm{red}$ factors through $\mathbb{Z}_p[Y_0,\dots,Y_r]/\mathfrak{a}$. We can write $\rho^\mathrm{red}$ in GMA notation as
\[
\rho^\mathrm{red} = \ttmat{\kcyc\odia{-}^{-1}}{*}{*}{\odia{-}}.
\]
Since $V^{\epsilon_i}_{\rho^\mathrm{red}}(\gamma_i, \gamma_i) = (\rho^\mathrm{red}(\gamma_i)-1)^2=0$ in $(E_N^\epsilon)^\mathrm{red}$, we see that $Y_i^2=0$ in $(R_N^\epsilon)^\mathrm{red}$. Since $(1+Y_i)^{p^{v_i}}-1=0$, this implies that $p^{v_i}Y_i=0$ in $(R_N^\epsilon)^\mathrm{red}$. Moreover, by Lemma \ref{lem:epsilon=1 and unram}, if $\epsilon_i=+1$ and $v_i>0$, then $\rho^\mathrm{red}(\gamma_i)=1$; for such $i$, this implies that $Y_i=0$ in $(R_N^\epsilon)^\mathrm{red}$. We can rephrase this as $(\epsilon_i+1)Y_i=0$ for all $i$.
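To spell out the deduction $p^{v_i}Y_i=0$: since $Y_i^2=0$ in $(R_N^\epsilon)^\mathrm{red}$, the binomial expansion collapses to
\[
0=(1+Y_i)^{p^{v_i}}-1=p^{v_i}Y_i+\binom{p^{v_i}}{2}Y_i^2+\dots+Y_i^{p^{v_i}}=p^{v_i}Y_i.
\]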
From now on, consider $i$ such that $\epsilon_i =-1$. Already, we see that
\[
\odia{\sigma_i} = \prod_{j=0}^r y_j^{\alpha_j^i} = \prod_{j=0}^r (1+\tilde{\alpha}_j^iY_j).
\]
Since $V^{\epsilon_i}_{\rho^\mathrm{red}}(\gamma_i, \sigma_i) = (\rho^\mathrm{red}(\gamma_i)-1)(\rho^\mathrm{red}(\sigma_i)-1)=0$ in $(E_N^\epsilon)^\mathrm{red}$, we obtain
\[
(\odia{\gamma_i}^{-1}-1)(\ell_i\odia{\sigma_i}^{-1}-1)=0, \quad (\odia{\gamma_i}-1)(\odia{\sigma_i}-1)=0.
\]
These imply
\[
0=Y_i\left(\prod_{j=0}^r(1-\tilde{\alpha}_j^iY_j)-1\right)=Y_i\left(\prod_{j=0}^r(1+\tilde{\alpha}_j^iY_j)-1\right).
\]
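In more detail: since $Y_j^2=0$ for all $j$, we have $\odia{\gamma_i}^{-1}-1=-Y_i$ and $\odia{\sigma_i}^{-1}=\prod_{j=0}^r(1-\tilde{\alpha}_j^iY_j)$, so the first relation becomes
\[
0=-Y_i\left(\ell_i\prod_{j=0}^r(1-\tilde{\alpha}_j^iY_j)-1\right)=-Y_i\left(\prod_{j=0}^r(1-\tilde{\alpha}_j^iY_j)-1\right),
\]
where the second equality uses $(\ell_i-1)Y_i=0$; the second relation yields the product with $1+\tilde{\alpha}_j^iY_j$ in the same way.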
This shows that $R_{\mathrm{flat}}^\mathrm{red}(\kcyc) \to (R_N^\epsilon)^\mathrm{red}$ factors through $\mathbb{Z}_p[Y_0,\dots,Y_r]/\mathfrak{a}$. It remains to verify that the pseudorepresentation $D:G_{\mathbb{Q},S} \to \mathbb{Z}_p[Y_0,\dots,Y_r]/\mathfrak{a}$ defined by $\psi(\kcyc\odia{-}^{-1}\oplus \odia{-})$ satisfies $\mathrm{US}_N^\epsilon$. This is easily checked.
\end{proof}
\section{The case $\epsilon=(-1,1,1,\dots,1)$}
\label{sec:case1}
In this section, we consider the case where $\epsilon_0=-1$ and $\epsilon_i=1$ for $0 < i \leq r$. Without loss of generality, we can and do, for this section, assume that the primes $\{\ell_i\}_{i=0}^r$ are ordered so that $\ell_i \equiv -1 \pmod{p}$ for $i=1,\dots,s$ and $\ell_i \not \equiv -1 \pmod{p}$ for $s < i \leq r$. Here $s$ is an integer, $0 \leq s \leq r$. The most interesting case is $s=r$, and, in fact, we immediately reduce to this case.
\subsection{Reduction to the case $s=r$}
Let $N(s)=\prod_{i=0}^s \ell_i$ and $\epsilon(s) \in \{ \pm 1\}^{s+1}$ be defined by $\epsilon(s)_0=-1$ and $\epsilon(s)_i=1$ for $0 < i \leq s$. There is a natural map $\mathbb{T}_N^\epsilon \onto \mathbb{T}_{N(s)}^{\epsilon(s)}$ by restricting to the space of forms that are old at $\ell_i$ for $s < i \leq r$. There is also a natural surjection $R_N^\epsilon \onto R_{N(s)}^{\epsilon(s)}$, since $\rho_{N(s)}^{\epsilon(s)}$ is unramified (resp.\ finite-flat) at $\ell_i$ when $\ell_i \neq p$ (resp.\ $\ell_i = p$) and $s < i \leq r$.
\begin{lem}
\label{lem:reduction to lower level}
The natural map $R_N^\epsilon \onto R_{N(s)}^{\epsilon(s)}$ is an isomorphism. Moreover, if the map $R_{N(s)}^{\epsilon(s)} \twoheadrightarrow \mathbb{T}_{N(s)}^{\epsilon(s)}$ is an isomorphism, then the surjections $R_{N}^{\epsilon} \twoheadrightarrow \mathbb{T}_{N}^{\epsilon}$ and $\mathbb{T}_N^\epsilon \onto \mathbb{T}_{N(s)}^{\epsilon(s)}$ of Proposition \ref{prop:R to T} are isomorphisms.
\end{lem}
\begin{proof}
The statement that $R_N^\epsilon \onto R_{N(s)}^{\epsilon(s)}$ is an isomorphism can be rephrased as saying that, for all $s < i \leq r$, $\rho_N^\epsilon$ is unramified (resp.\ finite-flat) at $\ell_i$ if $\ell_i \neq p$ (resp.\ if $\ell_i = p$). This follows from Lemma \ref{lem:epsilon=1 and unram} and \S\ref{subsec:epsilon_p=1}. For the second statement, consider the commutative diagram of surjective ring homomorphisms below; since the left vertical map and the bottom map are isomorphisms, the composite $R_N^\epsilon \to \mathbb{T}_{N(s)}^{\epsilon(s)}$ is an isomorphism, and hence so are the two surjections through $\mathbb{T}_N^\epsilon$:
\begin{align*}
\begin{split}
\xymatrix{
R_N^\epsilon \ar[r] \ar[d]^(.4)\wr & \mathbb{T}_{N}^{\epsilon} \ar[d] \\
R_{N(s)}^{\epsilon(s)} \ar[r] & \mathbb{T}_{N(s)}^{\epsilon(s)}.
}
\end{split}
\qedhere
\end{align*}
\end{proof}
\subsection{The case $s=r$}
\label{subsec:case1 r=s}
Now we assume that $s=r$ (i.e.\ that $\ell_i \equiv -1 \pmod{p}$ for $i=1,\dots, r$). We write ${J^{\min{}}} \subset R_{N}^{\epsilon}$ for the augmentation ideal, and $J^\mathrm{red} = \ker(R_{N}^{\epsilon} \onto (R_{N}^{\epsilon})^\mathrm{red})$. We have the following consequence of Wiles's numerical criterion \cite[Appendix]{wiles1995}.
\begin{prop}
\label{prop:numerical crit}
The surjection $R_{N}^{\epsilon} \twoheadrightarrow \mathbb{T}_{N}^{\epsilon}$ is an isomorphism of complete intersection rings if and only if
\[
\# {J^{\min{}}}/{J^{\min{}}}^2 \le p^{v_p(\ell_0-1)} \cdot \prod_{i=1}^r p^{v_p(\ell_i+1)}.
\]
If this is the case, then equality holds.
\end{prop}
\begin{proof}
The surjection comes from Proposition \ref{prop:R to T}. Note that
\[
p^{v_p(\ell_0-1)} \cdot \prod_{i=1}^r p^{v_p(\ell_i+1)} = \# \mathbb{Z}_p/a_0(E^\epsilon)\mathbb{Z}_p.
\]
The proposition follows from Theorem \ref{thm:congruence number} and the numerical criterion, as in \cite[Thm.\ 7.1.1]{WWE3}.
\end{proof}
\begin{lem}
\label{lem:Jm/Jred}
There is an isomorphism
\[
{J^{\min{}}}/J^\mathrm{red} \cong \mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p
\]
sending $d_{\gamma_0}-1$ to $1$, and ${J^{\min{}}}^2 \subset J^\mathrm{red}$.
\end{lem}
\begin{proof}
By Lemma \ref{lem:computation of Rred}, we have
\[
(R_N^\epsilon)^\mathrm{red} = \mathbb{Z}_p[Y_0]/((\ell_0-1)Y_0,Y_0^2),
\]
and we can easily see that $d_{\gamma_0}-1$ maps to $Y_0$ and generates the image of ${J^{\min{}}}$. Since $Y_0^2=0$, we have the second statement.
\end{proof}
\begin{lem}
\label{lem:Jred/JmJred}
There is a surjection
\[
\mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \oplus \left(\bigoplus_{i=1}^r \mathbb{Z}_p/(\ell_i+1)\mathbb{Z}_p \right) \onto J^\mathrm{red}/{J^{\min{}}} J^\mathrm{red}
\]
given by $e_i \mapsto b_{\gamma_0}c_{\gamma_i}$.
\end{lem}
\begin{proof}
By Lemma \ref{lem:info about B and C}, we have surjections
\[
\mathbb{Z}_p \onto B_N^{\epsilon,{\min}}, \quad 1 \mapsto b_{\gamma_0}
\]
and
\[
\mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \oplus \left(\bigoplus_{i=1}^r \mathbb{Z}_p/(\ell_i+1)\mathbb{Z}_p \right) \onto C_N^{\epsilon,{\min}}, \quad e_i \mapsto c_{\gamma_i}.
\]
As is true in any GMA (see e.g.\ \cite[Prop.\ 1.5.1]{BC2009}), we have a surjection
\begin{equation}
\label{eq:GMA cotangent control}
B_N^{\epsilon,{\min}} \otimes C_N^{\epsilon,{\min}} \onto J^\mathrm{red}/{J^{\min{}}} J^\mathrm{red}, \quad b \otimes c \mapsto bc.
\end{equation}
Combining these, we have the lemma.
\end{proof}
\begin{lem}
\label{lem:bc in Jm^2}
The element $b_{\gamma_0}c_{\gamma_0} \in R_N^\epsilon$ is in ${J^{\min{}}}^2$.
\end{lem}
\begin{proof}
Since $V^{\epsilon_0}_{\rho^\epsilon_N}(\gamma_0, \gamma_0) = (\rho_N^\epsilon(\gamma_0)-1)^2=0$, we see that $(a_{\gamma_0}-1)^2+b_{\gamma_0}c_{\gamma_0} =0$. Since $a_{\gamma_0}-1 \in {J^{\min{}}}$, we have the lemma.
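Explicitly, in the GMA coordinates of \eqref{eq:GMA},
\[
(\rho_N^\epsilon(\gamma_0)-1)^2=\ttmat{a_{\gamma_0}-1}{b_{\gamma_0}}{c_{\gamma_0}}{d_{\gamma_0}-1}^2,
\]
whose $(1,1)$-entry is $(a_{\gamma_0}-1)^2+b_{\gamma_0}c_{\gamma_0}$.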
\end{proof}
We have arrived at the main theorem.
\begin{thm}
\label{thm:star main}
Let $N=\ell_0\ell_1 \dotsm \ell_r$ and $\epsilon=(-1,1,\dots,1)$. Then the map $R_{N}^{\epsilon} \twoheadrightarrow \mathbb{T}_{N}^{\epsilon}$ is an isomorphism of augmented $\mathbb{Z}_p$-algebras, and both rings are complete intersection. The ideal $J^{\min}$ is generated by the elements $b_{\gamma_0}c_{\gamma_i}$ for $i=1,\dots,r$ together with $d_{\gamma_0}-1$. There is an exact sequence
\begin{equation}
\label{eq:star ses}
0 \to \bigoplus_{i=1}^r \mathbb{Z}_p/(\ell_i+1)\mathbb{Z}_p \to I^\epsilon/{I^\epsilon}^2 \to \mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \to 0.
\end{equation}
\end{thm}
\begin{proof}
By Lemma \ref{lem:Jm/Jred}, there is an exact sequence
\begin{equation}
\label{eq:proof ses}
0 \to J^\mathrm{red}/{J^{\min{}}}^2 \to {J^{\min{}}}/{J^{\min{}}}^2 \to \mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \to 0.
\end{equation}
Combining Lemmas \ref{lem:Jred/JmJred} and \ref{lem:bc in Jm^2}, we see that there is a surjection
\begin{equation}
\label{eq:onto Jred/Jm^2}
\bigoplus_{i=1}^r \mathbb{Z}_p/(\ell_i+1)\mathbb{Z}_p \onto J^\mathrm{red}/{J^{\min{}}}^2
\end{equation}
given by $e_i \mapsto b_{\gamma_0}c_{\gamma_i}$. This shows that
\[
\# {J^{\min{}}}/{J^{\min{}}}^2 \le p^{v_p(\ell_0-1)} \cdot \prod_{i=1}^r p^{v_p(\ell_i+1)}.
\]
By Proposition \ref{prop:numerical crit}, this shows that $R_{N}^{\epsilon} \twoheadrightarrow \mathbb{T}_{N}^{\epsilon}$ is an isomorphism of complete intersection rings, and that this inequality is actually an equality. This implies that \eqref{eq:onto Jred/Jm^2} is an isomorphism. Combining Lemma \ref{lem:Jm/Jred} with Nakayama's lemma, we conclude that $J^{\min}$ is generated by the stated elements. Since ${J^{\min{}}}$ maps isomorphically onto $I^\epsilon$, the desired sequence follows from \eqref{eq:proof ses}.
\end{proof}
\section{The case $\epsilon=(-1,-1)$}
\label{sec:ep=(-1,-1)}
In this section, we assume that $r=1$ and also that $\epsilon=(-1,-1)$.
\subsection{No interesting primes} If $\ell_i \not \equiv 1 \pmod{p}$ for $i=0,1$, then there are no cusp forms congruent to the Eisenstein series.
\begin{thm}
If $\ell_i \not \equiv 1 \pmod{p}$ for $i=0,1$, then $\mathbb{T}_N^{\epsilon}= \mathbb{Z}_p$ and $\mathbb{T}_N^{\epsilon,0}=0$.
\end{thm}
\begin{proof}
It is enough to show that $R_N^\epsilon=\mathbb{Z}_p$. By Lemma \ref{lem:computation of Rred}, we have $R_N^{\epsilon,\mathrm{red}} = \mathbb{Z}_p$ and by Lemma \ref{lem:info about B and C} we have $C_N^{\epsilon}=0$, so $J^\mathrm{red}=0$. This implies $R_N^\epsilon=\mathbb{Z}_p$.
\end{proof}
\subsection{Generators of $B_N^\epsilon$} Since nothing interesting happens if there are no interesting primes, we now assume that $\ell_0 \equiv 1 \pmod{p}$. We emphasize that, in this section, we do not assume that $\ell_1 \ne p$. Recall the notation $a_\tau,b_\tau,c_\tau, d_\tau$ for $\tau \in G_{\mathbb{Q},S}$ from \eqref{eq:GMA} and the elements $\gamma_i, \sigma_i \in G_{\mathbb{Q},S}$ from \S \ref{subsec:notation}.
\begin{lem}
\label{lem:b_sig and b_gam generate}
Assume that $\ell_1$ is not a $p$-th power modulo $\ell_0$. Then the subset $\{b_{\gamma_0},b_{\sigma_0}\} \subset B_N^\epsilon$ generates $B_N^\epsilon$ as an $R_N^\epsilon$-module.
\end{lem}
\begin{proof}
We give the proof in the case $\ell_1=p$; the case $\ell_1 \ne p$ is exactly analogous, changing `ordinary' to `finite-flat' everywhere. Since $B^{\min}_{\ord}$ surjects onto $B^\epsilon_N$, by Nakayama's lemma it is enough to show that the images $\bar b_{\ord,\gamma_0},\bar b_{\ord,\sigma_0}$ of $b_{\ord,\gamma_0}, b_{\ord, \sigma_0}$ in $B^{\min}_\ord/pB^{\min}_\ord$ generate $B^{\min}_\ord/pB^{\min}_\ord$.
Using $b_i$, $\tilde b_i$ defined in \S\ref{subsec:label} and the lemmas there, we know that $\{\bar b_{\ord,\gamma_0}, \bar b_{\ord,\gamma_1}\}$ is a basis for $B^{\min}_\ord/pB^{\min}_\ord$ and $b_1(\bar b_{\ord,\gamma_j}) = \delta_{1j}$ for $j = 0,1$. Hence it is enough to show that $b_1(\bar b_{\ord,\sigma_0}) \neq 0$. As in the proof of Lemma \ref{lem:dual B C and H1}, the fact that $\omega(\sigma_0)=1$ implies that $b_1(\bar b_{\ord,\sigma_0})=\tilde{b}_1(\sigma_0)$. Because $\ell_1$ is not a $p$-th power modulo $\ell_0$, class field theory implies that $\tilde{b}_1(\sigma_0) \neq 0$.
\end{proof}
\begin{prop}
\label{prop:bcs in Jm^2}
Assume that $\ell_1$ is not a $p$-th power modulo $\ell_0$. Then
\[
b_{\gamma_0}c_{\gamma_0},\ b_{\gamma_1}c_{\gamma_1}, \ b_{\gamma_1}c_{\gamma_0} \in {J^{\min{}}}^2.
\]
If, in addition, $\ell_1 \equiv 1 \pmod{p}$ and $\ell_0$ is not a $p$-th power modulo $\ell_1$, then $b_{\gamma_0}c_{\gamma_1} \in {J^{\min{}}}^2$ as well.
\end{prop}
\begin{proof}
The proof for $b_{\gamma_0}c_{\gamma_0},b_{\gamma_1}c_{\gamma_1}$ is just as in Lemma \ref{lem:bc in Jm^2}. If we prove that $b_{\gamma_1}c_{\gamma_0} \in {J^{\min{}}}^2$, then we get $b_{\gamma_0}c_{\gamma_1} \in {J^{\min{}}}^2$ in the second statement by symmetry. So it suffices to prove $b_{\gamma_1}c_{\gamma_0} \in {J^{\min{}}}^2$.
Let $X=a_{\sigma_0}-\ell_0$ and $W = a_{\gamma_0}-1$, and note that $X,W \in {J^{\min{}}}$. From the $(1,1)$-coordinate of the equation $V^{\epsilon_{\ell_0}}_{\rho^\epsilon_N}(\sigma_0, \gamma_0) = 0$ defined in \eqref{eq:US test elts}, we see that $XW+b_{\sigma_0}c_{\gamma_0}=0$. In particular, $b_{\sigma_0}c_{\gamma_0} \in {J^{\min{}}}^2$.
By Lemma \ref{lem:b_sig and b_gam generate}, we know that $b_{\gamma_1}$ is in the $R^\epsilon_N$-linear span of $b_{\sigma_0}$ and $b_{\gamma_0}$. Because both $b_{\sigma_0}c_{\gamma_0}$ and $b_{\gamma_0}c_{\gamma_0}$ lie in ${J^{\min{}}}^2$, so does $b_{\gamma_1}c_{\gamma_0}$.
\end{proof}
\subsection{One interesting prime} We assume that $\ell_0 \equiv 1 \pmod{p}$ and $\ell_1 \not \equiv 1 \pmod{p}$ (including the possibility that $\ell_1=p$). There is a natural surjective homomorphism $\mathbb{T}_N^\epsilon \onto \mathbb{T}_{\ell_0}$ by restricting to forms that are old at $\ell_1$.
\begin{thm}
\label{thm:one interesting prime}
Assume that $\ell_0 \equiv 1 \pmod{p}$, that $\ell_1 \not \equiv 1 \pmod{p}$, and that $\ell_1$ is not a $p$-th power modulo $\ell_0$. Then the natural map $\mathbb{T}_N^\epsilon \twoheadrightarrow \mathbb{T}_{\ell_0}$ is an isomorphism. In particular, $I^\epsilon$ is principal, $\mathbb{T}_N^\epsilon$ and $\mathbb{T}_N^{\epsilon,0}$ are complete intersections, and there are no newforms in $S_2(N)_{\rm Eis}^\epsilon$.
\end{thm}
\begin{proof}
Just as in the proof of Lemma \ref{lem:reduction to lower level}, it suffices to show that the map $R_N^\epsilon \onto \mathbb{T}_{\ell_0}$ is an isomorphism. By Lemma \ref{lem:computation of Rred}, there is an isomorphism
\[
R_N^{\epsilon,\mathrm{red}} \cong \mathbb{Z}_p[Y_0]/(Y_0^2,(\ell_0-1)Y_0),
\]
where the image of ${J^{\min{}}}$ is the ideal generated by $Y_0$. This implies that ${J^{\min{}}}^2 \subset J^\mathrm{red}$ and that there is an isomorphism
\[
\mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \buildrel\sim\over\lra {J^{\min{}}}/J^\mathrm{red}, \quad 1 \mapsto Y_0.
\]
On the other hand, we know that $J^\mathrm{red}$ is generated by the set $\{b_{\gamma_0}c_{\gamma_0},b_{\gamma_1}c_{\gamma_0}\}$ by Lemma \ref{lem:info about B and C} and the surjection \eqref{eq:GMA cotangent control}. By Proposition \ref{prop:bcs in Jm^2}, we see that this set is contained in ${J^{\min{}}}^2$. Hence $J^\mathrm{red} \subset {J^{\min{}}}^2$, and so $J^\mathrm{red}= {J^{\min{}}}^2$.
Now we have $\#{J^{\min{}}}/{J^{\min{}}}^2 = p^{v_p(\ell_0-1)}$ and, by the numerical criterion (Proposition \ref{prop:numerical crit}), $R_N^\epsilon \onto \mathbb{T}_{\ell_0}$ is an isomorphism.
\end{proof}
\begin{rem}
The assumption that $\ell_1$ is not a $p$-th power modulo $\ell_0$ is necessary: see the examples in \S\ref{subsubsec:2 primes no new examples}.
\end{rem}
\subsection{Two interesting primes}
\label{subsec:two int primes}
We consider the case $\ell_i \equiv 1 \pmod{p}$ for $i=0,1$.
\begin{thm}
\label{thm:thm2}
Let $N=\ell_0\ell_1$ and $\epsilon=(-1,-1)$. Assume that $\ell_i \equiv 1 \pmod{p}$ for $i=0,1$ and assume that neither prime is a $p$-th power modulo the other. Then the map $R_{N}^{\epsilon} \twoheadrightarrow \mathbb{T}_{N}^{\epsilon}$ is an isomorphism of complete intersection rings augmented over $\mathbb{Z}_p$, and there is an isomorphism
\[
I^\epsilon/{I^\epsilon}^2 \cong \mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \oplus \mathbb{Z}_p/(\ell_1-1)\mathbb{Z}_p.
\]
\end{thm}
\begin{proof}
By Lemma \ref{lem:computation of Rred}, we see that there is an isomorphism
\[
R_N^{\epsilon,\mathrm{red}} \cong \mathbb{Z}_p[Y_0,Y_1]/(Y_0^2,Y_0Y_1,Y_1^2,(\ell_0-1)Y_0,(\ell_1-1)Y_1)
\]
and that the image of ${J^{\min{}}}$ is the ideal generated by $(Y_0,Y_1)$. In particular ${J^{\min{}}}^2 \subset J^\mathrm{red}$ and
\[
{J^{\min{}}}/J^\mathrm{red} \cong \mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \oplus \mathbb{Z}_p/(\ell_1-1)\mathbb{Z}_p.
\]
Moreover, by Proposition \ref{prop:bcs in Jm^2} and Lemma \ref{lem:info about B and C}, we see that $J^\mathrm{red} \subset {J^{\min{}}}^2$ so we have
\[
{J^{\min{}}}/{J^{\min{}}}^2 = {J^{\min{}}}/J^\mathrm{red} \cong \mathbb{Z}_p/(\ell_0-1)\mathbb{Z}_p \oplus \mathbb{Z}_p/(\ell_1-1)\mathbb{Z}_p.
\]
In particular, $\#{J^{\min{}}}/{J^{\min{}}}^2 = p^{v_p(\ell_0-1)+v_p(\ell_1-1)}$.
Now the numerical criterion of Proposition \ref{prop:numerical crit} implies that $R_{N}^{\epsilon} \twoheadrightarrow \mathbb{T}_{N}^{\epsilon}$ is an isomorphism of complete intersection augmented $\mathbb{Z}_p$-algebras. It follows that $I^\epsilon = {J^{\min{}}}$, and so the description of $I^\epsilon/{I^\epsilon}^2$ also follows.
\end{proof}
\begin{rem}
Again, the assumptions are necessary. See the examples in \S\ref{subsubsec:2 primes new examples}.
\end{rem}
\begin{defn}
\label{defn:no newforms} We say there are \emph{no newforms in $M_2(N;\mathbb{Z}_p)_{\mathrm{Eis}}^\epsilon$} if
\[
M_2(N;\mathbb{Z}_p)_{\mathrm{Eis}}^\epsilon = M_2(\ell_0;\mathbb{Z}_p)_{\mathrm{Eis}} + M_2(\ell_1;\mathbb{Z}_p)_{\mathrm{Eis}},
\]
where the latter are considered as submodules of the former via the stabilizations in \S \ref{subsub:stabilizations}. Otherwise, we say there are \emph{newforms in $M_2(N;\mathbb{Z}_p)_{\mathrm{Eis}}^\epsilon$}.
\end{defn}
\begin{thm}
\label{thm:newforms in case 2}
Let $N=\ell_0\ell_1$ and $\epsilon=(-1,-1)$ and assume that $\ell_i \equiv 1 \pmod{p}$ for $i=0,1$. If there are no newforms in $M_2(N;\mathbb{Z}_p)_{\mathrm{Eis}}^\epsilon$, then $\mathbb{T}_N^\epsilon$ is not Gorenstein. In particular, if neither prime $\ell_i$ is a $p$-th power modulo the other, then there are newforms in $M_2(N;\mathbb{Z}_p)_{\mathrm{Eis}}^\epsilon$.
\end{thm}
\begin{proof}
The second statement follows from the first statement by Theorem \ref{thm:thm2}. Now assume that there are no newforms in $M_2(N;\mathbb{Z}_p)_{\mathrm{Eis}}^\epsilon$. We count that
\[\mathrm{rank}_{\mathbb{Z}_p}( M_2(N; \mathbb{Z}_p)_{\rm Eis}^{\epsilon}) = \mathrm{rank}_{\mathbb{Z}_p}(M_2(\ell_0;\mathbb{Z}_p)_{\rm Eis})+ \mathrm{rank}_{\mathbb{Z}_p}(M_2(\ell_1;\mathbb{Z}_p)_{\rm Eis})-1
\]
(by Lemma \ref{lem:normalization of bT0}, for example).
We claim that, under this assumption, we have an isomorphism $\mathbb{T}_N^{\epsilon} \isoto \mathbb{T}_{\ell_0} \times_{\mathbb{Z}_p} \mathbb{T}_{\ell_1}$. To see this, consider the commutative diagram of free $\mathbb{Z}_p$-modules, where the right square consists of canonical surjective homomorphisms of commutative $\mathbb{Z}_p$-algebras and the rows are exact:
\[\xymatrix{
0 \ar[r] & \mathfrak{a}_1 \ar[r] \ar[d] & \mathbb{T}_N^{\epsilon} \ar@{->>}[r] \ar@{->>}[d] & \mathbb{T}_{\ell_1} \ar[r] \ar@{->>}[d] & 0 \\
0 \ar[r] & I_0 \ar[r] & \mathbb{T}_{\ell_0} \ar@{->>}[r] & \mathbb{Z}_p \ar[r] & 0.
}\]
By Lemma \ref{lem:fiber prods}, it is enough to show that $\mathfrak{a}_1 \to I_0$ is an isomorphism. From this diagram and the above rank count, we see that $\mathrm{rank}_{\mathbb{Z}_p}(\mathfrak{a}_1) = \mathrm{rank}_{\mathbb{Z}_p}( I_0)$. Thus it suffices to show that the $\mathbb{Z}_p$-dual map is surjective. By duality \eqref{eq:M and T duality}, the dual map is identified with the map
\[
M_2(\ell_0;\mathbb{Z}_p)_{\rm Eis}/\mathbb{Z}_p E_{2,\ell_0} \to M_2(N;\mathbb{Z}_p)_{\rm Eis}^{\epsilon}/M_2(\ell_1;\mathbb{Z}_p)_{\rm Eis}
\]
induced by stabilization, which is surjective by our assumption $M_2(N;\mathbb{Z}_p)_{\mathrm{Eis}}^\epsilon = M_2(\ell_0;\mathbb{Z}_p)_{\mathrm{Eis}} + M_2(\ell_1;\mathbb{Z}_p)_{\mathrm{Eis}}$. This proves that $\mathfrak{a}_1 \to I_0$ is an isomorphism.
Using this isomorphism $\mathbb{T}_N^{\epsilon} \isoto \mathbb{T}_{\ell_0} \times_{\mathbb{Z}_p} \mathbb{T}_{\ell_1}$ and Mazur's results (\S\ref{subsec:Mazur results}) on the structure of $\mathbb{T}_{\ell_i}$, it is then a simple computation to see that
\[
\mathbb{T}_N^{\epsilon}/p\mathbb{T}_N^{\epsilon} \cong \mathbb{F}_p[y_0,y_1]/(y_0^{e_0+1},y_1^{e_1+1},y_0y_1), \quad \text{for some } e_0,e_1 >0.
\]
Thus $\mathrm{Soc}(\mathbb{T}_N^{\epsilon}/p\mathbb{T}_N^{\epsilon})=\mathbb{F}_p y_0^{e_0} \oplus \mathbb{F}_p y_1^{e_1}$. By Lemma \ref{lem:Gorenstein defects equality}, $\mathbb{T}_N^{\epsilon}$ is not Gorenstein.
\end{proof}
\section{Generators of the Eisenstein ideal}
\label{sec:GP}
In this section, we prove Part (4) of Theorem \ref{thm:main r primes} about the number of generators of the Eisenstein ideal, as well as Theorems \ref{thm:main good primes r} and \ref{thm:main good primes 2}, about specific generators.
\subsection{Determining the number of generators of $I^\epsilon$ when $\epsilon = (-1,1,\dotsc, 1)$}
\label{subsec:splitting}
In this subsection, we prove Part (4) of Theorem \ref{thm:main r primes}. Assume we are in the setting of that theorem, so $\epsilon=(-1,1,\dots,1)$. Recall the fields $K_i$ of Definition \ref{defn:K_i}.
\begin{thm}
\label{thm:star split}
Assume that $\ell_i \equiv -1 \pmod{p}$ for $i=1,\dots, r$. The minimal number of generators of $I^\epsilon$ is $r+\delta$ where
\begin{equation}
\label{eq:delta}
\delta =\left\{
\begin{array}{lc}
1 & \text{ if } \ell_0 \text{ splits completely in } K_i \text{ for } i=1,\dots,r \\
0 & \text{ otherwise.}
\end{array}\right.
\end{equation}
\end{thm}
This immediately implies Part (4) of Theorem \ref{thm:main r primes} by Lemma \ref{lem:reduction to lower level}. For the rest of \S \ref{subsec:splitting}, we assume that $r>0$ and $\ell_i \equiv -1\pmod{p}$ for $i=1,\dots, r$, and we use $\delta$ to refer to the integer \eqref{eq:delta}.
Note that because $\mathfrak{m}={J^{\min{}}}+pR_N^\epsilon \subset R_N^\epsilon$ is the maximal ideal, we have
\[
{J^{\min{}}}/\mathfrak{m}{J^{\min{}}} \cong \mathfrak{m}/(p,\mathfrak{m}^2).
\]
By Nakayama's lemma, the minimal cardinality of a generating subset of ${J^{\min{}}}$ is $\dim_{\mathbb{F}_p}\mathfrak{m}/(p,\mathfrak{m}^2)$. By Theorem \ref{thm:star main}, it suffices to show that $\dim_{\mathbb{F}_p}\mathfrak{m}/(p,\mathfrak{m}^2) =r+\delta$, and this is what we will prove.
Recall the notation of \S\ref{subsec:label}, in particular, the class $b_0 \in H^1(\mathbb{Z}[1/Np],\mathbb{F}_p(1))$ and the representing cocycle $\tilde{b}_0$, as well as the classes $c_0,\dots,c_r \in H^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1))$ (note that $c_0$ is only defined if $\ell_0 \equiv \pm 1 \pmod{p}$). The starting point is the following proposition, which is proven in Appendix \ref{appendix:cohomology}.
\begin{prop}
\label{prop:bc cup}
Let $i\in\{1,\dots,r\}$. Then $\ell_0$ splits completely in $K_i$ if and only if $\ell_0 \equiv 1 \pmod{p}$ and $b_0 \cup c_i$ vanishes in $H^2(\mathbb{Z}[1/Np],\mathbb{F}_p)$.
\end{prop}
We can now prove one implication of Theorem \ref{thm:star split}.
\begin{prop}
\label{prop:r+1 generators implies delta=1}
Suppose that the minimal number of generators of $I^\epsilon$ is $r+1$. Then $\delta=1$.
\end{prop}
\begin{proof}
By Theorem \ref{thm:star main}, we see that the minimal number of generators of $I^\epsilon$ is $r+1$ if and only if $\ell_0 \equiv 1 \pmod{p}$ and the images of the elements $b_{\gamma_0}c_{\gamma_i}$ for $i=1,\dots,r$ in $\mathfrak{m}/(p,\mathfrak{m}^2)$ are linearly independent. In particular, for each $i$, the image of $b_{\gamma_0}c_{\gamma_i}$ in $\mathfrak{m}/(p,\mathfrak{m}^2)$ is non-zero. Fix such an $i$, and let (writing $\mathbb{F}_p[\varepsilon]$ for $\mathbb{F}_p[\varepsilon]/(\varepsilon^2)$)
\[
\alpha: R_N^\epsilon/(p,\mathfrak{m}^2) \to \mathbb{F}_p[\varepsilon]
\]
be a ring homomorphism sending $b_{\gamma_0}c_{\gamma_i}$ to $\varepsilon$.
Let $E=\sm{\mathbb{F}_p[\varepsilon]}{\mathbb{F}_p}{\mathbb{F}_p}{\mathbb{F}_p[\varepsilon]}$ be the $\mathbb{F}_p[\varepsilon]$-GMA with data $(\mathbb{F}_p,\mathbb{F}_p,m)$ where $m:\mathbb{F}_p \times \mathbb{F}_p \to \mathbb{F}_p[\varepsilon]$ is the map $(x,y) \mapsto xy\varepsilon$. By Lemma \ref{lem:dual B C and H1}, we have a homomorphism of GMAs
$A:E_N^\epsilon \to E$ given by
\[
A=\ttmat{\alpha}{\tilde b_0}{\tilde c_i}{\alpha}.
\]
Let $D_A=\psi(A \circ \rho_N^\epsilon):G_{\mathbb{Q},S} \to \mathbb{F}_p[\varepsilon]$ be the corresponding deformation of ${\bar D}$. Then $D_A$ contributes a non-zero element to the tangent space $\mathfrak{t}_{\bar D}$ of $R_{\bar D}/pR_{\bar D}$. Examining \cite{bellaiche2012}, the image of $D_A$ under $\iota$ in the exact sequence of \cite[Thm.\ A]{bellaiche2012}
\[\xymatrix{
\mathfrak{t}_{\bar D} \ar[r]^-{\iota} & H^1(\mathbb{F}_p(1)) \otimes_{\mathbb{F}_p} H^1(\mathbb{F}_p(-1)) \ar[rrr]^-{b' \otimes c' \mapsto (b' \cup c', c' \cup b')} & & & H^2(\mathbb{F}_p) \oplus H^2(\mathbb{F}_p)
}\]
is $b_0 \otimes c_i$, and hence $b_0 \cup c_i=0$. Since this is true for all $i$, Proposition \ref{prop:bc cup} implies that $\delta=1$.
\end{proof}
The remainder of the proof of Theorem \ref{thm:star split} relies on the following construction.
\subsubsection{Construction of a maximal first-order pseudodeformation}
\label{sssec:constr rho_M}
Let $H$ be the kernel of the map
\begin{align*}
\begin{split}
H^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1)) &\longrightarrow H^2(\mathbb{Z}[1/Np],\mathbb{F}_p) \oplus H^1(I_{\ell_0}, \mathbb{F}_p(-1)), \\
x &\mapsto (b_0 \cup x, \ x\vert_{I_{\ell_0}}).
\end{split}
\end{align*}
\begin{lem}
\label{lem:basis of H}
If $\ell_0 \equiv 1 \pmod{p}$ and $\delta=0$, then $b_0 \cup c_i \ne 0$ for some $i$. In that case, there are elements $\alpha_j \in \mathbb{F}_p$ such that the set $\{c_j-\alpha_jc_i\}$ for $j \in \{1,\dots,r\}\setminus\{i\}$ is a basis for $H$. Otherwise, the set $\{c_1,\dots,c_r\}$ is a basis for $H$.
\end{lem}
\begin{proof}
The first statement follows from Proposition \ref{prop:bc cup}. Recall that $c_i$ is ramified at $\ell_0$ if and only if $i = 0$, so $H$ is contained in the span of the linearly independent set $\{c_1,\dots,c_r\}$. Since
\[
\dim_{\mathbb{F}_p} H^2(\mathbb{Z}[1/Np],\mathbb{F}_p) = \left\{
\begin{array}{ll}1 &\text{ when } \ell_0 \equiv 1 \pmod{p} \\
0 &\text{ when } \ell_0 \not\equiv 1 \pmod{p},
\end{array}\right.
\]
the lemma follows.
\end{proof}
\begin{lem}
\label{lem:H cocycles trivial on ell0}
If $\ell_0 \ne p$ and $h \in H$, the image $h\vert_{G_{\ell_0}} \in H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(-1))$ is zero.
\end{lem}
\begin{proof}
If $\ell_0 \not \equiv \pm 1 \pmod{p}$, then $H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(-1))=0$. If $\ell_0 \equiv -1 \pmod{p}$, then $H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(-1))=H^1(\mathbb{Q}_{\ell_0}, \mathbb{F}_p(1))$, and so this follows from Lemma \ref{lem:Kummer injective}. Now assume $\ell_0 \equiv 1 \pmod{p}$. Then, since $h \cup b_0=0$ in $H^2(\mathbb{Z}[1/Np],\mathbb{F}_p)$, $b_0$ is ramified at $\ell_0$, and $h$ is unramified at $\ell_0$, Lemma \ref{lem:local tate desc} implies that $h\vert_{G_{\ell_0}} = 0$. \end{proof}
\begin{constr}
\label{constr:C and F}
We construct a cocycle $C: G_{\mathbb{Q},S} \to H^*(-1)$, where $H^*=\mathrm{Hom}_{\mathbb{F}_p}(H,\mathbb{F}_p)$ with trivial $G_{\mathbb{Q},S}$-action, and a cochain $F:G_{\mathbb{Q},S} \to H^*$ such that:
\begin{enumerate}
\item $C|_{G_p}=0$,
\item if $\ell_0 \ne p$, then $C|_{G_{\ell_0}}$ is a coboundary,
\item $dF = \tilde{b}_0 \smile C$,
\item $F|_{I_p}= 0$,
\item For any cocycle $\tilde{h}$ whose cohomology class $h$ is in $H$, and any $\tau \in G_{\mathbb{Q},S}$ with $\omega(\tau)=1$, we have $C(\tau)(h)=\tilde{h}(\tau)$.
\end{enumerate}
\end{constr}
\begin{proof}
For any $G_{\mathbb{Q},S}$-module $M$, let
\[
Z^1_{(p)}(\mathbb{Z}[1/Np],M) =\{(a,m) \in Z^1(\mathbb{Z}[1/Np],M) \times M \ | \ a(\tau)=(\tau-1)m, \ \forall\, \tau \in G_p\}.
\]
There is a surjection $Z^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1)) \onto H^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1))$ sending $(a,m)$ to the class of $a$. Choose a linear section $s: H \hookrightarrow Z^1_{(p)}(\mathbb{Z}[1/Np],\mathbb{F}_p(-1))$, and write $s(h)=(s(h)_1,s(h)_2) \in Z^1(\mathbb{Z}[1/Np],\mathbb{F}_p(-1)) \times \mathbb{F}_p(-1)$.
Define an element $(C',x) \in C^1(\mathbb{Z}[1/Np],H^*(-1)) \times H^*(-1)$ by $C'(\tau)(h)=s(h)_1(\tau)$ and $x(h)=s(h)_2$ for $h \in H$. One observes $(C',x)\in Z^1_{(p)}(\mathbb{Z}[1/Np],H^*(-1))$. Then let $C=C'-dx$, so that $C|_{G_p}=0$ and (1) holds. We also see that (5) holds, since the value $\tilde{h}(\tau)$ is independent of the choice of cocycle. Computing with dual vector spaces, it is easy to see that $b_0 \cup C = 0$ in $H^2(\mathbb{Z}[1/Np],H^*)$ and that Lemma \ref{lem:H cocycles trivial on ell0} implies (2).
Finally, to see (3) and (4), let $y$ be any cochain such that $dy=\tilde{b}_0 \smile C$. Note that the restriction map
\[
H^1(\mathbb{Z}[1/Np],H^*)\to H^1(I_p,H^*)
\]
is surjective, and that, since $H^*$ has trivial action, we may and do identify a cohomology class with its representing cocycle. Since $C|_{I_p}=0$ and $dy=\tilde{b}_0 \smile C$, we see that $y|_{I_p} \in H^1(I_p,H^*)$. Hence there is a cocycle $y' \in H^1(\mathbb{Z}[1/Np],H^*)$ with $y'|_{I_p}=y|_{I_p}$. Letting $F=y-y'$, we have $dF=dy=\tilde{b}_0 \smile C$ and $F|_{I_p}=0$.
\end{proof}
Let $M = H^* \oplus \mathbb{Z}/(p, \ell_0-1)$, and let $\mathbb{F}_p[M]$ be the vector space $\mathbb{F}_p \oplus M$ thought of as a local $\mathbb{F}_p$-algebra with square-zero maximal ideal $M$. We write elements of $\mathbb{F}_p[M]$ as triples $(x,y,z)$ with $x \in \mathbb{F}_p$, $y \in H^*$ and $z \in \mathbb{Z}/(p, \ell_0-1)\mathbb{Z}$.
Let $E_M$ be the $\mathbb{F}_p[M]$-GMA given by the data $(\mathbb{F}_p,H^*,m)$ where $m$ is the homomorphism
\[
m: \mathbb{F}_p \otimes_{\mathbb{F}_p} H^* \cong H^* \buildrel\sim\over\ra H^* \oplus \{0\} \subset M \hookrightarrow \mathbb{F}_p[M].
\]
Let $\rho_M: G_{\mathbb{Q},S} \longrightarrow E_M^\times$ be the function
\begin{equation}
\label{eq:rho_M}
\rho_M(\tau) = \ttmat{\omega(\tau)(1,F(\tau),\log_{\ell_0}(\tau))}{\tilde b_0(\tau)}{\omega(\tau) C(\tau)} {(1,\tilde{b}_0(\tau)C(\tau)-F(\tau),-\log_{\ell_0}(\tau))}.
\end{equation}
Then $\rho_M$ is a homomorphism by Construction \ref{constr:C and F}. Let $D_M : G_{\mathbb{Q},S} \rightarrow \mathbb{F}_p[M]$ denote the pseudorepresentation $D_M := \psi(\rho_M)$.
\begin{lem}
\label{lem:rho_M US}
$\rho_M$ satisfies $\mathrm{US}^\epsilon_N$.
\end{lem}
\begin{proof}
As per Definition \ref{defn:US global}, we verify $\mathrm{US}^\epsilon_N$ by proving that $\rho_M\vert_{G_p}$ is finite-flat if $\ell_0 \ne p$, and that $\rho_M\vert_{G_\ell}$ satisfies condition $\mathrm{US}^{\epsilon_\ell}_\ell$ for all $\ell \mid N$.
\noindent
\textbf{If $\ell_0 \ne p$, $\rho_M\vert_{G_p}$ is finite-flat:} For this, we will make frequent use of the
notion of a Cayley-Hamilton module, developed in \cite[\S2.6]{WWE4}.
Let $E_M'$ be the $\mathbb{F}_p[M]$-sub-GMA of $E_M$ given by $E'_M=\sm{\mathbb{F}_p[M]}{\mathbb{F}_p}{0}{\mathbb{F}_p[M]}$. Since $C|_{G_p}=0$, we see that the action of $G_p$ on $E_M$ via $\rho_M$ factors through $E_M'$. Hence $(\rho_M|_{G_p} : G_p \rightarrow E_M^{\prime\times}, E'_M, D_{E'_M} : E'_M \rightarrow \mathbb{F}_p[M])$, which we denote by $\rho'_{M,p}$ for convenience, is a Cayley-Hamilton representation of $G_p$. Then $E_M$ is a faithful Cayley-Hamilton module of $\rho'_{M,p}$; by \cite[Thm.\ 2.6.3]{WWE4}, it is enough to show that $\rho'_{M,p}$ is finite-flat.
Consider the extension $\mathcal{E}_{\tilde{b}_0}$ defined by $\tilde{b}_0$:
\[
0 \longrightarrow \mathbb{F}_p(1) \longrightarrow \mathcal{E}_{\tilde{b}_0} \longrightarrow \mathbb{F}_p \longrightarrow 0,
\]
which is finite-flat by Kummer theory. Let $W_\omega=\mathbb{F}_p[M]$ and $W_1=\mathbb{F}_p[M]$ with $G_p$ acting by the characters $\omega(1,F,\log_{\ell_0})$ and $(1,-F,-\log_{\ell_0})$, respectively. Since $F|_{I_p}$ and $\log_{\ell_0}|_{I_p}$ are zero, $W_\omega$ and $W_1$ are finite-flat. We have exact sequences of $\mathbb{F}_p[M][G_p]$-modules
\[
0 \to M(1) \to W_\omega \to \mathbb{F}_p(1) \to 0, \qquad 0 \to M \to W_1 \to \mathbb{F}_p \to 0.
\]
Let $l: \mathbb{F}_p \hookrightarrow M$ be an injective linear map. This induces an injection $\mathbb{F}_p(1) \hookrightarrow W_\omega$ of $\mathbb{F}_p[M][G_p]$-modules. Taking the pushout of $\mathcal{E}_{\tilde{b}_0}$ by this injection, we obtain an exact sequence
\[
0 \longrightarrow W_\omega \longrightarrow \mathcal{E}_{\tilde{b}_0,\omega} \longrightarrow \mathbb{F}_p \longrightarrow 0.
\]
Pulling back this sequence by $W_1 \twoheadrightarrow \mathbb{F}_p$, we obtain an exact sequence
\[
0 \longrightarrow W_\omega \longrightarrow \mathcal{E}_{\tilde{b}_0,\omega,1} \longrightarrow W_1 \longrightarrow 0.
\]
Following \cite[Appendix C]{WWE3}, we see that $\mathcal{E}_{\tilde{b}_0,\omega,1}$ is finite-flat and that there is an isomorphism $\mathcal{E}_{\tilde{b}_0,\omega,1} \cong \mathbb{F}_p[M]^{\oplus 2}$ under which the action of $G_p$ is given by
\begin{equation}
\label{eq:action on V}
\left.\ttmat{\omega(1,F,\log_{\ell_0})}{(0,\tilde{b}_0 \cdot l(1)) }{0} {(1,-F,-\log_{\ell_0})}\right\vert_{G_p} : G_p \rightarrow \GL_2(\mathbb{F}_p[M]).
\end{equation}
We now use this isomorphism $\mathcal{E}_{\tilde{b}_0,\omega,1} \cong \mathbb{F}_p[M]^{\oplus 2}$ as an identification.
We have an injective $\mathbb{F}_p[M]$-GMA homomorphism $l': E_M' \to \mathrm{End}_{\mathbb{F}_p[M]}(\mathcal{E}_{\tilde{b}_0,\omega,1})=M_{2\times 2}(\mathbb{F}_p[M])$ given by
\[
l'=\ttmat{\mathrm{id}_{\mathbb{F}_p[M]}}{l}{0}{\mathrm{id}_{\mathbb{F}_p[M]}} .
\]
By \eqref{eq:action on V}, we see that the action of $G_p$ on $\mathcal{E}_{\tilde{b}_0,\omega,1}$ factors through $l'$. In other words, $\mathcal{E}_{\tilde{b}_0,\omega,1}$ is a faithful Cayley-Hamilton module of $\rho'_{M,p}$. Since $\mathcal{E}_{\tilde{b}_0,\omega,1}$ is finite-flat, $\rho'_{M,p}$ is finite-flat by \cite[Thm.\ 2.6.3]{WWE4}.
\noindent
\textbf{If $\ell_0 = p$, then $\rho_M|_{G_p}$ is ordinary:} This follows from Proposition \ref{prop:ord C-H form} and Construction \ref{constr:C and F}.
\noindent
\textbf{If $\ell_0 \equiv 1 \pmod{p}$, then $\rho_M|_{G_{\ell_0}}$ is $\mathrm{US}_{\ell_0}^{-1}$:} Since $\ell_0 \equiv 1 \pmod{p}$, $\omega|_{G_{\ell_0}}=1$. By Construction \ref{constr:C and F}, we have $C|_{G_{\ell_0}}=0$. Then, for any $\sigma, \tau \in G_{\ell_0}$, we have
\[
V^{-1}_{\rho_M}(\sigma, \tau) := (\rho_M(\sigma) - \omega(\sigma))(\rho_M(\tau) - 1) = \ttmat{\varepsilon_1}{\tilde b_0(\sigma)}{0}{\varepsilon_2}\ttmat{\varepsilon_3}{\tilde b_0(\tau)}{0}{\varepsilon_4} = 0,
\]
where $\varepsilon_i \in M \subset \mathbb{F}_p[M]$.
\noindent
\textbf{If $\ell_0 \not \equiv 0,1 \pmod{p}$, then $\rho_M|_{G_{\ell_0}}$ is $\mathrm{US}_{\ell_0}^{-1}$:}
By assumption, we have $M=H^*$ and $\log_{\ell_0}=0$, so we write elements of $\mathbb{F}_p[M]$ as pairs $(x,y)$ with $x\in \mathbb{F}_p$ and $y \in H^*$. Since $C|_{G_{\ell_0}}$ is a coboundary, there exists $z \in H^*$ such that $C(\tau)=(\omega^{-1}(\tau)-1)z$ for all $\tau \in G_{\ell_0}$.
Let $\rho_M':G_{\mathbb{Q},S} \to E_M^\times$ be the composite of $\rho_M$ with the automorphism $E_M \buildrel\sim\over\ra E_M$ given by conjugation by $\sm{1}{0}{z}{1} \in E_M^\times$. By explicit computation, we see that
\[
\rho_M' = \ttmat{\omega(1,F_a)}{\tilde b_0}{\omega(C-(\omega^{-1}-1)z)}{(1,F_d)},
\]
where $F_a=F-\omega^{-1}\tilde{b}_0z$ and $F_d=\tilde{b}_0C-F+\omega\tilde{b}_0 z$; in particular, the $(2,1)$-coordinate of $\rho_M'|_{G_{\ell_0}}$ is zero. This implies that $F_a\vert_{G_{\ell_0}}, F_d\vert_{G_{\ell_0}} : G_{\ell_0} \rightarrow H^*$ are homomorphisms. Because $\ell_0 \not\equiv 0,1 \pmod{p}$ and $H^*$ has exponent $p$, they are unramified.
For any $(\sigma,\tau) \in G_{\ell_0}\times I_{\ell_0}$, we compute that
\[
V^{-1}_{\rho'_M}(\sigma,\tau) =
\ttmat{\varepsilon}{*}{0}{*}
\ttmat{0}{*}{0}{0} = 0
\]
where $\varepsilon \in M$. Equivalently, $V^{-1}_{\rho_M}(\sigma,\tau) = 0$. A similar computation shows that $V^{-1}_{\rho_M}(\sigma,\tau)=0$ for $(\sigma,\tau) \in I_{\ell_0}\times G_{\ell_0}$.
\noindent
\textbf{If $\ell \mid N$ and $\ell \ne \ell_0$, then $\rho_M|_{G_\ell}$ is $\mathrm{US}_{\ell}^{+1}$:}
In this case we have $\ell \equiv -1 \pmod{p}$, and hence $\omega|_{G_\ell}=\lambda(-1)$. Since $\ell \ne \ell_0$, we have $b_0|_{I_\ell}=0$, so $b_0|_{G_\ell}=0$ by Lemma \ref{lem:Kummer injective}. Hence there exists $z \in \mathbb{F}_p$ such that $\tilde{b}_0(\tau)=(\omega(\tau)-1)z$ for all $\tau \in G_\ell$. Exactly as in the previous case, we can show that $V^{+1}_{\rho_M}(\sigma,\tau)=0$ for all $(\sigma,\tau) \in G_{\ell}\times I_\ell \cup I_{\ell}\times G_\ell$ by conjugating $\rho_M$ by $\sm{1}{z}{0}{1} \in E_M^\times$.
\end{proof}
\subsubsection{End of the proof} We will show that $D_M$ is, in a sense, the universal $\mathrm{US}_N^\epsilon$ first-order deformation of ${\bar D}$.
\begin{prop}
\label{prop:first-order ps}
The pseudodeformation $D_M$ of ${\bar D}$ induces an isomorphism $R^\epsilon_N/(p, \mathfrak{m}^2) \buildrel\sim\over\ra \mathbb{F}_p[M]$.
\end{prop}
\begin{proof}
By Lemma \ref{lem:rho_M US}, $\rho_M$ is $\mathrm{US}_N^\epsilon$, so $D_M$ is also $\mathrm{US}_N^\epsilon$ by Definition \ref{defn:US global}, and there is an induced map $E_N^\epsilon \to E_M$. This gives us a local homomorphism $R^\epsilon_N \rightarrow \mathbb{F}_p[M]$, and any such map factors through $R^\epsilon_N/(p, \mathfrak{m}^2) \to \mathbb{F}_p[M]$. Let $f$ denote the restriction $\mathfrak{m}/(p, \mathfrak{m}^2) \to M$. It suffices to show that $f$ is an isomorphism.
Assume that the GMA structure on $E_N^\epsilon$ is chosen so that $E_N^\epsilon \to E_M$ is a morphism of GMAs (such a GMA structure is known to exist by \cite[Thm.\ 3.2.2]{WWE4}). By Theorem \ref{thm:star main}, we see that the elements $b_{\gamma_0}c_{\gamma_i}$ for $i=1,\dots,r$ together with the element $d_{\gamma_0}-1$ generate $\mathfrak{m}/(p,\mathfrak{m}^2)$, and, moreover, if $\ell_0\not \equiv 1 \pmod{p}$, the elements $b_{\gamma_0}c_{\gamma_i}$ for $i=1,\dots,r$ are a basis.
By construction, we see that $f(b_{\gamma_0}c_{\gamma_i})=(0,\tilde{b}_0(\gamma_0)C(\gamma_i),0)=(0,C(\gamma_i),0)$, and that $f(d_{\gamma_0}-1)=(0,0,-\log_{\ell_0}(\gamma_0))$ (which is non-zero if $\ell_0\equiv 1 \pmod{p}$). By Lemma \ref{lem:C spans H*} below, $f$ is surjective.
Now we count dimensions. By Theorem \ref{thm:star main} and Proposition \ref{prop:r+1 generators implies delta=1}, we have
\[
\dim_{\mathbb{F}_p}(\mathfrak{m}/(p, \mathfrak{m}^2)) = \left\{ \begin{array}{ll}
r & \text{ if } \delta=0 \\
r \text{ or } r+1 & \text{ if } \delta=1.
\end{array}\right.
\]
By Lemma \ref{lem:basis of H}, we have
\begin{equation}
\label{eq:dim of M}
\dim_{\mathbb{F}_p}(M) = \left\{ \begin{array}{ll}
r & \text{ if } \delta=0 \\
r+1 & \text{ if } \delta=1.
\end{array}\right.
\end{equation}
Since $f$ is surjective, this implies that $f$ is an isomorphism in all cases.
\end{proof}
\begin{lem}
\label{lem:C spans H*}
Let $\tau_1,\dots,\tau_r \in G_{\mathbb{Q},S}$ be any elements such that:
\begin{itemize}
\item $\omega(\tau_i)=1$ for $i=1,\dots, r$, and
\item $\tilde{c}_j(\tau_i)=\partial_{ij}$ for all $1 \le i,j \le r$.
\end{itemize}
If $\delta=1$ or $\ell_0 \not \equiv 1 \pmod{p}$, then the set $\{C(\tau_i) \ : \ i=1,\dots,r\}$ is a basis for $H^*$. Otherwise $b_0 \cup c_j \neq 0$ for some $j$ and the set $\{C(\tau_i) \ : \ i=1,\dots,r, i \ne j\}$ is a basis for $H^*$.
\end{lem}
\begin{proof}
Indeed, if $c_j-\alpha c_k \in H$ for some $\alpha \in \mathbb{F}_p$ and $j,k\in \{1,\dots,r\}$, then by Construction \ref{constr:C and F}(5) we have
\[
C(\tau_i)(c_j-\alpha c_k) = \tilde{c}_j(\tau_i)-\alpha \tilde{c}_k(\tau_i) = \partial_{ij}-\alpha\partial_{ik}.
\]
Using the explicit basis of $H$ constructed in Lemma \ref{lem:basis of H}, the lemma follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm:star split}]
By Proposition \ref{prop:first-order ps}, we have $\mathfrak{m}/(p, \mathfrak{m}^2) \buildrel\sim\over\ra M$, and the dimension of $M$ is given by \eqref{eq:dim of M}. This completes the proof.
\end{proof}
\subsection{Good sets of primes in the case $\epsilon = (-1, 1, \dotsc, 1)$}
\label{subsec:good primes for r primes}
In this section, we prove Theorem \ref{thm:main good primes r} in precise form. Assume that $\epsilon=(-1,1,\dots,1)$, and, as in \S \ref{sec:case1}, order the primes $\ell_i$ so that $\ell_i \equiv -1\pmod{p}$ for $i=1,\dots,s$ and $\ell_i \not \equiv -1 \pmod{p}$ for $s < i \leq r$. We use the number fields $K_i$ set up in Definition \ref{defn:K_i}.
\begin{defn}
\label{defn:good primes}
Consider an ordered set of primes $\cQ'=\{q_0,q_1,\dots,q_s\}$ disjoint from the primes dividing $N$ and satisfying the following conditions:
\begin{enumerate}
\item $q_0 \not \equiv 1 \pmod{p}$, and
\item $q_0$ is not a $p$-th power modulo $\ell_0$;
\end{enumerate}
and, for $i=1,\dots,s$,
\begin{enumerate}[resume]
\item $q_i \equiv 1 \pmod{p}$,
\item $\ell_0$ is not a $p$-th power modulo $q_i$,
\item $q_i$ does not split completely in $K_i$, and
\item $q_i$ does split completely in each $K_j$ for $j=1,\dots,s$ with $j \ne i$.
\end{enumerate}
In the following cases, the described ordered subset $\cQ$ of $\cQ'$ is called a \emph{good set of primes for $(N,p,\epsilon)$}:
\begin{itemize}
\item if $\delta=1$, $\cQ:=\cQ'$,
\item if $\delta=0$ and $\ell_0 \equiv 1 \pmod{p}$, then $\cQ :=\cQ'\setminus\{q_j\}$ for an index $j>0$ such that $b_0 \cup c_j \ne 0$,
\item if $\ell_0 \not \equiv 1 \pmod{p}$, then $\cQ :=\cQ'\setminus\{q_0\}$.
\end{itemize}
\end{defn}
Note that, by Chebotarev density, there is an infinite set of primes $q_0$ satisfying (1)-(2); and, for each $i$, there is an infinite set of primes $q_i$ satisfying (3)-(6). Note that when $p \nmid N$ and $\ell_0 \equiv 1 \pmod{p}$, it is possible that $p \in \cQ$.
\begin{thm}
\label{thm:good1}
Let $\cQ$ be a good set of primes for $(N,p,\epsilon)$. Then $\{T_q - (q+1) \mid q \in \cQ\} \subset \mathbb{T}^\epsilon_N$ is a minimal set of generators for $I^\epsilon$.
\end{thm}
\begin{proof}
We freely refer to $\rho_M$ and related objects in this proof (see \eqref{eq:rho_M}). Let $J$ be the index set of $\cQ$ (i.e.~ $J=\{0,\dots,s\}$, $J=\{0,\dots,s\}\setminus\{j\}$ or $J=\{1,\dots,s\}$ in the three cases of Definition \ref{defn:good primes}, respectively).
By Theorem \ref{thm:star main}, Proposition \ref{prop:first-order ps}, and Nakayama's lemma, it suffices to show that the projections $\Upsilon(q)$ of $T_q - (q+1)$ under $\mathbb{T}^\epsilon_N \buildrel\sim\over\ra R^\epsilon_N \twoheadrightarrow \mathbb{F}_p[M]$ comprise a basis $\{\Upsilon(q)\}_{q \in \cQ}$ of $M$. The conditions (1)-(6) on $\cQ$ have been chosen so that:
\begin{enumerate}[label=(\roman*), leftmargin=2em]
\item If $0 \in J$ and $q_0 \neq p$, then $\omega(\mathrm{Fr}_{q_0}) \neq 1$ and $\log_{\ell_0}(\mathrm{Fr}_{q_0}) \neq 0$. This follows from (1) and (2).
\item $\omega(\mathrm{Fr}_{q_i}) = 1$ for $i \in J$ with $i>0$. This follows from condition (3).
\item $\tilde b_0(\mathrm{Fr}_{q_i}) \neq 0$ for $i \in J$ with $i>0$. This follows from (4) by class field theory.
\item $\{C(\mathrm{Fr}_{q_i}) : i\in J, i>0\}$ is a basis for $H^*$. This follows from Lemma \ref{lem:C spans H*} by (ii), (5), and (6).
\end{enumerate}
When $q_i \neq p$, it is clear that $\Upsilon(q_i) = \mathrm{Tr}\rho_M(\mathrm{Fr}_{q_i}) - (q_i + 1)$, and we calculate:
\begin{enumerate}[label=(\alph*), leftmargin=2em]
\item By (ii), $\Upsilon(q_i) = (0,\tilde b_0(\mathrm{Fr}_{q_i})\cdot C(\mathrm{Fr}_{q_i}),0) \in \mathbb{F}_p[M]$ for $i \in J$ with $i>0$. By (iii) and (iv), these elements form a basis of $H^*$.
\item If $0 \in J$ and $q_0 \neq p$, then $\Upsilon(q_0) \in \mathbb{F}_p[M]$ lies in $M$ and projects via $M \twoheadrightarrow \mathbb{Z}/(p,\ell_0-1)$ to $(\omega(\mathrm{Fr}_{q_0}) - 1)\log_{\ell_0}(\mathrm{Fr}_{q_0})$. This is non-zero, by (i).
\item If $0 \in J$ and $q_0 = p$, we claim that $\Upsilon(p) \in \mathbb{F}_p[M]$ lies in $M$ and maps to $\log_{\ell_0} p \neq 0$ under the summand projection $M \twoheadrightarrow \mathbb{Z}/(p, \ell_0-1)$. This follows from the same argument as in Case $q_0 = p$ of the proof in \S\ref{subsec:prove thm good primes 2}, but is simpler. \qedhere
\end{enumerate}
\end{proof}
\begin{rem}
The reader will note that, in this proof, our conditions are used to ensure that a certain matrix is diagonal with non-zero diagonal entries. Of course, the necessary and sufficient condition is simply that this same matrix is invertible.
\end{rem}
\subsection{Good pairs of primes in the case $\epsilon = (-1,-1)$}
\label{subsec:prove thm good primes 2}
In this section, we prove Theorem \ref{thm:main good primes 2}. We assume we are in the setting of Theorem \ref{thm:thm2}.
\begin{proof}[Proof of Theorem \ref{thm:main good primes 2}]
By Theorem \ref{thm:thm2} and Nakayama's lemma, $\mathbb{T}^\epsilon_N$ is generated by $\{T_{q_i} - (q_i+1)\}_{i=0,1}$ if and only if their images $\{\Upsilon(q_i)\}_{i=0,1}$ via $\mathbb{T}^\epsilon_N \buildrel\sim\over\ra R^\epsilon_N \twoheadrightarrow R^\epsilon_N/(p, \mathfrak{m}^2)$ are a basis of $\mathfrak{m}/(p,\mathfrak{m}^2)$. We see in the proof of Theorem \ref{thm:thm2} that $J^\mathrm{red} ={J^{\min{}}}^2$. In particular, as $\mathfrak{m} = {J^{\min{}}} + (p) \subset R^\epsilon_N$, there are isomorphisms
\[
R_N^\epsilon/(p,\mathfrak{m}^2) \buildrel\sim\over\ra R_N^{\epsilon,\mathrm{red}}/(p) \cong \mathbb{F}_p[Y_0,Y_1]/(Y_0^2,Y_0Y_1,Y_1^2), \quad \mathfrak{m}/(p,\mathfrak{m}^2) \buildrel\sim\over\ra (Y_0, Y_1),
\]
which we use as identifications. Then $D_N^\epsilon$ pulls back to the pseudorepresentation $D=\psi(\omega\dia{-}^{-1}\oplus \dia{-}) : G_{\mathbb{Q},S} \rightarrow R^{\epsilon,\mathrm{red}}_N/(p)$, where, for particular choices of $\log_{\ell_i}$,
\[
G_{\mathbb{Q},S} \ni \tau \mapsto \dia{\tau} := 1+\log_{\ell_0}(\tau)Y_0+\log_{\ell_1}(\tau)Y_1 \in (R_N^{\epsilon,\mathrm{red}}/(p))^\times.
\]
We see that if $q_i \neq p$, then $\Upsilon(q_i) = \mathrm{Tr}_D(\mathrm{Fr}_{q_i}) - (q_i+1)$.
\noindent
\textbf{Case $q_0,q_1 \neq p$.} One computes that the matrix expressing $\{\Upsilon(q_0), \Upsilon(q_1)\}$ in the basis $\{Y_0,Y_1\}$ of $\mathfrak{m}/(p,\mathfrak{m}^2) \cong (Y_0, Y_1)$ is
\[
\ttmat{(q_0-1)\log_{\ell_0} q_0}{(q_1-1)\log_{\ell_0} q_1}{(q_0-1)\log_{\ell_1} q_0}{(q_1-1)\log_{\ell_1} q_1} \in M_2(\mathbb{F}_p),
\]
which completes the proof in this case.
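Concretely, the invertibility of this matrix amounts to the non-vanishing of its determinant, namely
\[
(q_0-1)(q_1-1)\big(\log_{\ell_0} q_0 \cdot \log_{\ell_1} q_1 - \log_{\ell_0} q_1 \cdot \log_{\ell_1} q_0\big) \neq 0
\]
in $\mathbb{F}_p$.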
\noindent
\textbf{Case $q_0 = p$.} We note that the images of $T_p-(p+1)$ and $U_p-1$ in $I^\epsilon/\mathfrak{m} I^\epsilon$ are equal, so we may replace $T_p-(p+1)$ by $U_p-1$ in the statement. We recall from Step 3 of the proof of Proposition \ref{prop:R to T} that $U_p$ is the image under $R_N^\epsilon \buildrel\sim\over\ra \mathbb{T}^\epsilon_N$ of $\frac{1}{x-1}(x\mathrm{Tr}(\rho_N^\epsilon)(\sigma_p)-\mathrm{Tr}(\rho_N^\epsilon)(\tau\sigma_p))$, where $\tau \in I_p$ is such that $\omega(\tau) \ne 1$ and $x=\kcyc(\tau)$. We compute that
\[
\Upsilon(p) = \frac{1}{x-1}\big(x\mathrm{Tr}_D(\sigma_p)-\mathrm{Tr}_D(\tau\sigma_p)\big) - 1 = \log_{\ell_0}(p)Y_0+\log_{\ell_1}(p)Y_1.
\]
Thus, the matrix expressing $\{\Upsilon(p), \Upsilon(q_1)\}$ in the basis $\{Y_0,Y_1\}$ of $\mathfrak{m}/(p,\mathfrak{m}^2)$ is
\[
\ttmat{\log_{\ell_0} p}{(q_1-1)\log_{\ell_0} q_1}{\log_{\ell_1} p}{(q_1-1)\log_{\ell_1} q_1} \in M_2(\mathbb{F}_p). \qedhere
\]
\end{proof}
Measurements of the photoproduction of the $K^+\Lambda$ final
state with high statistics have become possible in the past
decade due to high-flux photon beams in the GeV range. Recently,
the CLAS collaboration published a compendium of data
for this reaction over a wide range of
angles and photon energies \cite{bradford}.
One motivation for more complete $K^+\Lambda$ data is to study the
details of $N^*$ resonances that were predicted to couple weakly
to pion decay and more strongly to kaon decay.
Some $N^*$ resonances that were predicted in quark models
\cite{capstick}, but were not seen in partial wave analysis
of pion scattering data, might be seen in kaon production.
The CLAS data do not show definitive evidence
for new $N^*$ resonances, but do exhibit a few broad
energy-dependent structures in the differential cross sections,
suggesting that there is a more complicated mechanism
contributing to the $\gamma p \to K\Lambda$ reaction.
The present results extend the existing data for
$K^+\Lambda$ photoproduction to far-backward kaon angles.
Theoretical progress for $K^+\Lambda$ photoproduction has been
published recently \cite{bruno} showing that coupled channel
effects can no longer be ignored. This approach, the
dynamical coupled-channels (DCC) formalism, includes a proper
treatment of off-shell effects. The most important multistep
transition, $\gamma N \to \pi N \to K \Lambda$, has a comparable
cross section to direct production, $\gamma N \to K \Lambda$
\cite{tabakin}. In this case, the DCC formalism is
necessary for a correct interpretation of $K^+\Lambda$
photoproduction data.
The CLAS detector acceptance does not allow measurements at
either very forward or very backward kaon angles.
Recently, the LEPS collaboration published cross sections
for forward angles \cite{sumihama}, showing good
agreement with the CLAS data where the two data sets overlap.
Here, cross sections for backward angles of this reaction are
measured using a different experimental technique,
where the $\Lambda$ is reconstructed from its decay products
which are detected in the LEPS spectrometer. The final state
kaon, which goes backward in the center-of-mass frame, is
measured through the missing mass technique.
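As an illustration of the missing-mass technique, the mass of the unobserved kaon can be computed from the tagged photon energy and the reconstructed $\Lambda$ four-momentum. This is a minimal sketch with illustrative kinematics, not values from the data:

```python
import math

M_P = 0.938272  # proton mass, GeV/c^2 (PDG value)

def missing_mass(e_gamma, p_lam, e_lam):
    """Missing mass M_X for gamma p -> Lambda X, with the target proton
    at rest and the photon beam along +z.

    e_gamma -- tagged photon energy (GeV)
    p_lam   -- Lambda momentum (px, py, pz) in GeV/c
    e_lam   -- Lambda energy (GeV)
    """
    # four-momentum of the unobserved system X = gamma + p(target) - Lambda
    e_x = e_gamma + M_P - e_lam
    px, py, pz = -p_lam[0], -p_lam[1], e_gamma - p_lam[2]
    return math.sqrt(e_x**2 - (px**2 + py**2 + pz**2))
```

For genuine $\gamma p \to K^+\Lambda$ events the returned value clusters at the known $K^+$ mass, 0.494 GeV/c$^2$.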
\begin{figure}[tb]
\includegraphics[width=8.5cm]{fig1-diag.eps}
\caption{ Diagram of the $u$-channel for the reaction
$\gamma p \to K^+\Lambda$ which contributes at backward kaon angles.
}
\label{fig:diag}
\end{figure}
In the kinematics that LEPS can access, we can study certain reaction
dynamics selectively. For instance, in meson production at forward
angles, $t$-channel diagrams dominate, and the nature of the
exchange particle can be studied (see Ref. \cite{mibe} for an
example in $\phi$ photoproduction). In contrast, the far backward
angles in meson photoproduction are associated with $u$-channel diagrams.
In this paper, we will examine whether the data exhibit
any $u$-channel signature in the cross sections.
At low energies, the $u$-channel amplitude is expected to be
dominated by diagrams where a hyperon, such as a ground state
$\Lambda$ or $\Sigma^0$, is exchanged. These diagrams are shown
in Fig. \ref{fig:diag}. The coupling constants for the hadronic
vertices in Fig. \ref{fig:diag} are inferred from the $t$-channel
in kaon photoproduction data, which are determined either by
theoretical models (e.g. SU(3) symmetry) or by an independent fit in
analyses such as the DCC formalism for $\pi N \to KY$ reactions.
The exchange baryon is neutral, and so the electromagnetic vertex
is dominated by the M1 multipole which acts to flip the spin of
the $\Lambda$ or $\Sigma^0$. The magnetic moments of the $\Lambda$
and $\Sigma$ hyperons are known, so this vertex has little ambiguity.
There is not much freedom in calculations of the diagram shown in
Fig. \ref{fig:diag}.
Of course, we expect that there are further dynamical processes
which may contribute to the reaction at this kinematical region.
In the low energy regime, for instance, coupled channel effects and
nucleon resonances may be important as emphasized in Ref. \cite{bruno}.
The Regge model has also been used in the study of reactions in
this energy region as well as with higher energy photons
\cite{guidal,mart}. In this description, one can study the nature
of baryon trajectories exchanged in the $u$-channel.
The purpose of this paper is to provide the cross sections and
spin asymmetries at far backward angles and discuss reaction
mechanisms which may be relevant to $u$-channel dynamics.
Direct detection of the $\Lambda$
provides additional information. At LEPS, the incident
photon is highly polarized, and so
the reaction plane of the $K^+\Lambda$ can be compared with the plane
of the photon polarization, giving new information on the
reaction mechanism. The beam asymmetry for kaon photoproduction
has been measured at forward angles by LEPS \cite{zegers}, and
is measured here in $u$-channel kinematics for the first time.
The experimental data were taken using the LEPS (laser electron
photons at SPring-8) detector in Japan \cite{nakano}. Ultraviolet
light from an Ar laser was linearly polarized and directed onto the
8 GeV stored electron beam. Backward Compton scattering produced
a narrow beam of photons up to 2.4 GeV. The struck electron
was detected in a tagging spectrometer, giving the energy of
individual photons in the range 1.5-2.4 GeV. The linear polarization
of the photons is calculated for Compton scattering and was
typically 97\% at the maximum energy. The photon beam was
incident on a 16 cm long liquid hydrogen target. Details of
the geometry are given elsewhere \cite{sumihama}.
The LEPS spectrometer consists of a wide-gap dipole magnet, with
charged-particle tracking detectors both before and after the
magnet. An array of scintillator bars were placed 4 meters
downstream of the target, and along with a start counter (SC)
scintillator 5 cm downstream of the target, provided a time-of-flight
(TOF) measurement. The trigger was a coincidence between the
tagger, the SC, and the TOF wall. Electron-positron pairs
were vetoed by an aerogel cerenkov detector just after the SC.
The total number of photons
on target was $1.18 \times 10^{12}$, after correcting for the
transmission factor (for material between the beam production point
and the target) and tagger inefficiencies.
\begin{figure}[tb]
\includegraphics[width=8.5cm]{fig2-lam.eps}
\caption{ Invariant mass of the $p\pi^-$ events, showing a
peak for the $\Lambda$ riding on top of a smooth background:
a) raw data with a reconstructed vertex from the target,
b) with an additional requirement that the missing mass is a kaon.
}
\label{fig:lam}
\end{figure}
Using momentum and TOF, the mass of each detected
particle was measured, giving particle identification (PID).
Events with a proton and a $\pi^-$, each having a measured
mass within $3\sigma$ (where $\sigma$ is the momentum dependent
mass resolution) of its known mass were kept for further
analysis. The point of closest approach between these two
tracks was calculated, and this vertex position was required
to be within the target or downstream of the target.
Because of the long lifetime of the $\Lambda$,
the vertex position can be downstream of the target.
A cut on vertex position before the SC was required in the analysis.
Empty target runs showed that the contribution
of the target windows and the SC was less than about 4\%.
Monte Carlo simulations showed good agreement with the
distribution of vertex position for events with
$\Lambda \to p\pi^-$ decay in the experiment.
In addition to mass cuts, several additional requirements
are used to ensure good PID. For example, when the track
is extrapolated to the TOF wall, the position obtained from
timing measurements on either end of the hit TOF bar must
be within 8 cm of the expected position. The same
PID requirements as described in Ref. \cite{sumihama} are
used in the present analysis.
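The mass determination from momentum and time of flight described above follows from $m = p\sqrt{1/\beta^2 - 1}$ with $\beta = L/(ct)$. A minimal sketch (the $\sim$4 m flight path is taken from the geometry above, and the momentum-dependent resolutions entering the $3\sigma$ cut are ignored):

```python
import math

def tof_mass(p, path_len, tof):
    """Mass (GeV/c^2) from momentum and time of flight.

    p        -- measured momentum (GeV/c)
    path_len -- flight path length (m)
    tof      -- time of flight (ns)
    """
    c = 0.299792458            # speed of light, m/ns
    beta = path_len / (c * tof)
    return p * math.sqrt(1.0 / beta**2 - 1.0)
```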
Fig. \ref{fig:lam} shows the invariant mass of the $p\pi^-$
pair for: (a) all events with good PID and the above vertex cut
and (b) for those events where the missing mass is consistent
with the kaon mass. The smooth background
in Fig. \ref{fig:lam}a likely comes from reactions like
$\gamma p \to p \pi^+\pi^-$ where the $\pi^+$ is not detected.
The spectrum in Fig. \ref{fig:lam}b has much less background because
the missing particle is now required to have the mass of a $K^+$.
The technique of sideband subtraction can be used to remove
the remaining background. Let the $\Lambda$ region be given by
a cut on the mass from 1.110 to 1.120 GeV/c$^2$ (shown by the
vertical lines in Fig. \ref{fig:lam}). The left and right
sideband regions on either side of the peak (of equal width)
were analyzed in the same way as the $\Lambda$
region and subtracted from the final results.
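A minimal sketch of this sideband subtraction, assuming two sidebands whose combined width equals that of the signal window (the published analysis may differ in detail):

```python
def sideband_subtract(masses, lo=1.110, hi=1.120):
    """Background-subtracted yield from a list of p pi^- invariant
    masses (GeV/c^2).  The signal window is [lo, hi); the two
    sidebands, each half as wide, sit immediately on either side, so
    their summed counts estimate the flat background under the peak."""
    half = (hi - lo) / 2.0
    sig = sum(lo <= m < hi for m in masses)
    left = sum(lo - half <= m < lo for m in masses)
    right = sum(hi <= m < hi + half for m in masses)
    return sig - (left + right)
```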
\begin{figure}[htb]
\includegraphics[width=9.0cm]{fig3-fits.eps}
\caption{
Missing mass of the $\gamma p \to \Lambda X$ reaction
for the angular bin $0.90 < \cos\theta_{CM} < 1.0$ where
$\theta_{CM}$ is the center-of-mass angle of the $\Lambda$.
Each figure is for the photon energy shown by the label.
}
\label{fig:fits}
\end{figure}
The missing mass for the reaction $\gamma p \to \Lambda X$ is
shown in Fig. \ref{fig:fits} for six equal bins in the photon energy.
The photon energy bins are 150 MeV wide, in six equal steps
from 1.5 to 2.4 GeV. The number of photons in each energy
bin was measured by the tagger, corrected for the inefficiencies
of each tagger element. The plots in Fig. \ref{fig:fits} are
shown in order of increasing energy bin, from lowest (top left)
to highest (lower right).
The data in Fig. \ref{fig:fits} span the range
$0.9 < \cos\theta_{CM} < 1.0$
where $\theta_{CM}$ is the center-of-mass (CM) polar angle of the
$\Lambda$ momentum vector. These spectra have not yet been
sideband subtracted, which mostly removes counts below a mass
of 0.4 GeV/c$^2$. A clear peak is seen in the missing
mass spectra at the value of the known $K^+$ mass. The
strength at higher mass corresponds to a combination of
$K\Sigma$ (followed by $\Sigma \to \Lambda\gamma$), $K^*\Lambda$
and $KY^*$ photoproduction.
The $K^+$ peak appears to decrease rapidly with increasing
photon energy. These spectra are not corrected
for the the acceptance for $\Lambda$ detection in the LEPS
spectrometer, which is weakly dependent on photon energy.
Data in a second angular bin, $0.8 < \cos\theta_{CM} < 0.9$,
are of similar quality and statistics.
The LEPS acceptance was calculated based on Monte Carlo
simulations for $K^+\Lambda$ production uniformly distributed
in energy and center-of-mass angle. More realistic distributions
are possible, and studies with a phenomenological energy-dependent
event generator showed that the systematic uncertainty
of the acceptance is on the order of 4\% or less. The
simulations were carried out using the GEANT software \cite{geant}
with input for the detector geometry and resolutions.
A realistic Compton scattered photon beam distribution was used.
The simulated peak widths for the invariant mass ($\Lambda$)
and missing mass ($K^+$) are in good agreement with those
shown in Figs. \ref{fig:lam} and \ref{fig:fits}.
In addition to the $K^+\Lambda$ final state, the three reactions
at higher missing mass mentioned above ($K\Sigma$, $K^*\Lambda$, $KY^*$)
were simulated along with a general 3-body phase space for
the $K\pi\Lambda$ final state. Missing mass spectra from all
of the simulations were used as input to a template fit of the
experimental data, where each template spectrum was multiplied
by an overall factor to minimize the reduced $\chi^2$, which was
typically in the range of 1-2. The number of counts in the
$K^+$ peak was extracted from the template fits. The $K^+$ fit
is affected primarily by background from the $K^+\Sigma^0$ reaction,
which was constrained largely by the events at masses about
0.1 GeV/c$^2$ higher than the $K^+$ peak. Simulations show
that the other reactions, including 3-body phase space, have
significant strength only at higher missing mass than for $K\Sigma$.
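The template fit can be sketched as a least-squares problem for the overall scale factors; here it is reduced to two templates and unweighted bins, whereas the actual fit uses more templates and $\chi^2$ weights per bin:

```python
def template_scales(data, t1, t2):
    """Scale factors (a1, a2) minimizing
    sum_i (data_i - a1*t1_i - a2*t2_i)^2, via the 2x2 normal equations.
    data, t1, t2 -- histograms as equal-length lists of bin contents."""
    s11 = sum(x * x for x in t1)
    s22 = sum(x * x for x in t2)
    s12 = sum(x * y for x, y in zip(t1, t2))
    d1 = sum(d * x for d, x in zip(data, t1))
    d2 = sum(d * x for d, x in zip(data, t2))
    det = s11 * s22 - s12 * s12
    return ((d1 * s22 - d2 * s12) / det, (d2 * s11 - d1 * s12) / det)
```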
The systematic uncertainty of the fitting procedure was estimated by
doing fits with gaussians for the background of
$K\Sigma$ production and of the 3-body final states at higher missing
mass. Comparison of the Gaussian and template fit results gives a
mean systematic uncertainty of 5\%. Other systematic uncertainties
due to target thickness, photon flux, and event selection cuts
added in quadrature contribute 4\%. The overall systematic
uncertainty, including that of the Monte Carlo simulations, is 7.5\%.
For three photon energy ranges, see Fig. \ref{fig:dsdu},
the cross section was binned for several values of
$u = (p_\gamma - p_\Lambda)^2$. The maximum
value, $u_{max}$ occurs when the $\Lambda$ goes forward at $0^\circ$
from the photon direction. The cross sections in Fig. \ref{fig:dsdu}
are shown as a function of $u-u_{max}$.
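For reference, $u$ and $u_{max}$ follow from two-body kinematics. A sketch, assuming PDG masses and a photon incident on a proton at rest; $u_{max}$ corresponds to $\cos\theta_{CM}=+1$ for the $\Lambda$ (kaon backward):

```python
import math

def mandelstam_u(e_gamma, cos_theta_cm):
    """u = (p_gamma - p_Lambda)^2 in GeV^2 for gamma p -> K+ Lambda,
    from two-body kinematics with the photon on a proton at rest.
    cos_theta_cm is the CM angle of the Lambda relative to the photon;
    cos_theta_cm = +1 gives u_max."""
    m_p, m_lam, m_k = 0.938272, 1.115683, 0.493677   # PDG masses, GeV
    s = m_p**2 + 2.0 * m_p * e_gamma
    rs = math.sqrt(s)
    e_g = (s - m_p**2) / (2.0 * rs)              # photon CM energy = momentum
    e_l = (s + m_lam**2 - m_k**2) / (2.0 * rs)   # Lambda CM energy
    p_l = math.sqrt(e_l**2 - m_lam**2)           # Lambda CM momentum
    return m_lam**2 - 2.0 * (e_g * e_l - e_g * p_l * cos_theta_cm)
```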
Theoretical calculations \cite{hosaka} for the $u$-channel only,
shown by the solid line, are far below the data and suggest that
the diagram of Fig. \ref{fig:diag} is not dominant. These calculations
include exchange baryons of $\Lambda$, $\Sigma^0$, and $\Sigma(1385)$
along with a form factor with a cut-off mass of about 0.9 GeV (the
theoretical values increase for higher cut-off mass).
It appears that $s$-channel diagrams still contribute strongly
to the cross section even at these far backward kaon angles, at
least at the lower photon energies.
At photon energies above 4.3 GeV, $K^+\Lambda$ data clearly show
a rise for $u$ between $-0.2$ to $-0.7$ (GeV/c)$^2$ \cite{anderson},
but the present data do not exhibit this $u$-channel signature.
The fact that the cross sections are nearly constant as a function
of $u-u_{max}$ can be interpreted as additional evidence for the
lack of dominance by the $u$-channel diagram of Fig. \ref{fig:diag}
for backward-angle $K^+\Lambda$ data at photon energies below about
3 GeV.
\begin{figure}[htb]
\includegraphics[width=8.5cm]{fig4-dsdu.eps}
\caption{ (color online)
Differential cross sections as a function of the Mandelstam variable
$u$ for the given photon beam energy. The curves are calculations
of the contribution from the diagram in Fig. 1 only, using exchange
of $\Lambda$, $\Sigma^0$ and $\Sigma^*$ hyperons.
}
\label{fig:dsdu}
\end{figure}
Differential cross sections as a function of six photon energy bins,
for each of two $\Lambda$ angle bins (see above),
are plotted in Fig. \ref{fig:xsec}. Here, the cross sections
are presented in the same format as for the CLAS data \cite{bradford}.
The angle-dependent acceptance ranged from about 2-3\% in
the lowest energy bin up to 6-8\% at the highest energy bin.
\begin{figure}[htb]
\includegraphics[width=8.5cm]{fig5-xsec.eps}
\caption{ (color online)
Differential cross sections at the angle shown as a function of
photon beam energy. Calculations in the model of Ref. \cite{bruno}
are shown without (dashed) and with (solid line) coupled channels.
}
\label{fig:xsec}
\end{figure}
Since the $u$-channel diagram alone cannot explain the present
data,
theoretical curves from Ref. \cite{bruno} are shown in
Fig. \ref{fig:xsec}, converted to the present units. The
solid curve is for the full dynamical coupled-channels (DCC)
model, whereas the dashed curves do not include DCC effects
(see Fig. 14 of Ref. \cite{bruno}). The present data are closer to
the full DCC calculation, except for one point at the lowest photon
energy ($E_\gamma=1.575$ GeV) and most backward kaon angle.
The top plot, for $\cos\theta_{CM}=0.85$, agrees within error bars
with the CLAS data, which are conveniently tabulated in Ref.
\cite{bradford} (but not shown in Fig. \ref{fig:xsec}a).
The lower plot, for $\cos\theta_{CM}=0.95$,
goes beyond the angular region covered by CLAS.
In general the data follow the energy and angle
dependence predicted by the DCC model, but are still
significantly different in the range of $E_\gamma=2.0$-2.2 GeV.
Alternatively, we have performed a simple estimation using
the Regge model, where we fit the energy dependence by the
power law
$$ \frac{d \sigma}{du} \propto s^{2(\alpha(0)-1)} $$
where $\alpha(0)$ is the intercept for the hyperon
trajectory exchanged in the $u$-channel. By choosing
$\alpha(0)=-0.84$, the energy dependence can be fit and
extrapolated to the previous data at 4.3 GeV \cite{anderson}.
This value of $\alpha(0)$ is not too far from the value $-0.68$
extrapolated from the mass dependence of the $\Lambda^*$
resonances \cite{guidal}.
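As an aside (our own illustration, not part of the original analysis), the Regge scaling can be checked numerically. The sketch below uses the falling power law $d\sigma/du \propto s^{2(\alpha(0)-1)}$ with $\alpha(0)=-0.84$ and the photoproduction relation $s=m_p^2+2m_pE_\gamma$; the overall normalization cancels in the ratio.

```python
import numpy as np

def regge_ratio(s1, s2, alpha0=-0.84):
    # Ratio of d(sigma)/du between two values of s for the falling
    # Regge power law d(sigma)/du ~ s**(2*(alpha0 - 1)).
    return (s2 / s1) ** (2.0 * (alpha0 - 1.0))

def s_from_egamma(e_gamma, m_p=0.938272):
    # Mandelstam s (GeV^2) for a real photon on a proton at rest.
    return m_p**2 + 2.0 * m_p * e_gamma

# Extrapolate from E_gamma = 2.0 GeV to the 4.3 GeV data of Anderson et al.
drop = regge_ratio(s_from_egamma(2.0), s_from_egamma(4.3))
```

With $\alpha(0)=-0.84$ the cross section falls by roughly an order of magnitude between these two energies, which is the kind of steep drop the extrapolation must reproduce.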
\begin{figure}[htb]
\includegraphics[width=8.5cm]{fig6-asym.eps}
\caption{
Beam asymmetry at the angles shown as a function of
photon beam energy. The points have been averaged over a range
of photon energy as described in the text. Calculations in the
model of Ref. \cite{bruno} are shown for comparison.
}
\label{fig:asym}
\end{figure}
Fig. \ref{fig:asym} shows the beam asymmetry, which was measured
by binning the $K^+\Lambda$ data as a function of $\phi$, the
azimuthal angle of the $\Lambda$ measured
from the linear polarization plane of the photon beam. In order
to gain sufficient statistics for the $\phi$-fit, only two
bins in photon energy were used, one from 1.5-2.0 GeV, and a
second one from 2.0-2.4 GeV. We follow the same procedure
to extract the beam asymmetry as described previously \cite{zegers}.
The results are a positive beam asymmetry, $\Sigma_{K\Lambda}$,
in the first energy bin, and a slightly negative polarization
at higher photon energy. The positive asymmetry means that
more $K^+\Lambda$ reactions are produced perpendicular to
the linear polarization direction
({\it i.e.}, parallel to the magnetic field of the photon)
than parallel to the beam polarization.
Physically, when the magnetic interaction dominates, the
asymmetry becomes positive, while if the electric interaction
dominates, it becomes negative.
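A minimal sketch of such a $\phi$-fit (our own illustration, with hypothetical function and variable names) assumes the standard yield form $N(\phi)=N_0[1-P_\gamma\Sigma\cos 2\phi]$, where $\phi$ is measured from the polarization plane and $P_\gamma$ is the degree of linear polarization:

```python
import numpy as np

def fit_beam_asymmetry(phi, counts, p_gamma):
    # Least-squares fit of N(phi) = N0 * (1 - p_gamma * Sigma * cos(2*phi)).
    # The model is linear in (N0, N0*p_gamma*Sigma), so a direct
    # lstsq solve suffices.
    A = np.column_stack([np.ones_like(phi), -np.cos(2.0 * phi)])
    coef, *_ = np.linalg.lstsq(A, counts, rcond=None)
    n0, n1 = coef
    return n1 / (n0 * p_gamma)      # extracted Sigma
```

A positive fitted $\Sigma$ then corresponds to more yield perpendicular to the polarization plane, as described above.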
The DCC model calculations predict a slightly
negative polarization over most of the photon energy range in
Fig. \ref{fig:asym}. These results will constrain the
backward angle predictions of theoretical models,
which were largely unconstrained until now.
In contrast to Ref. \cite{zegers}, where the beam asymmetry
increases with increasing photon energy, here we see that
$\Sigma_{K\Lambda}$ decreases with photon energy.
As shown in Fig. 11 of Ref. \cite{bruno}, where calculations
are done without the inclusion of various $N^*$ resonances,
the backward angle beam asymmetry is strongly affected by
the inclusion of a third $D_{13}$ resonance (at 1954 MeV).
The data in Fig. \ref{fig:asym} at lower photon energies
agree better with calculations that do not include this
third $D_{13}$ resonance. However, conclusive results can
only be obtained by including the new data in overall fits
to all $K\Lambda$ photoproduction data.
In summary, measurements of forward-angle $\Lambda$ production have
been carried out at the LEPS spectrometer at SPring-8. These data
correspond to backward-angle kaons in the CM frame. The data are
in good agreement with CLAS data \cite{bradford} where the
angular ranges overlap, and go beyond the CLAS angles
to far backward angles.
The cross sections agree with the
general trend of the calculations in the DCC model \cite{bruno}
but are significantly higher around $E_\gamma=2.1$ GeV.
The Regge model can explain the energy dependence of the present
data, and even extrapolates to the previous data at 4.3 GeV, whereas the
effective Lagrangian models cannot reproduce the data over this
range of photon energies.
The beam asymmetries show a positive sign at photon
energies below 2.0 GeV, in contrast with theoretical predictions
of the DCC model.
Meson photoproduction at backward angles (and to some extent
forward angles as well) has not been explored much so far.
Therefore, the present data are useful to constrain various
models of strangeness production. In the present paper, we
have shown that coupled-channels effects are important and
that a possible resonance with mass around 2 GeV may be
studied in the DCC model \cite{bruno}.
In particular, the $D_{13}$ in the $s$-channel was shown to
affect the cross sections.
The Regge model can also provide information on hyperon
trajectories, which have not been fully studied for the
$u$-channel. Further experiments on kaon photoproduction
and other mesons at backward angles will stimulate further
theoretical progress.
The authors thank the SPring-8 staff
for supporting the BL33LEP beam line and the LEPS experiment. We thank
H. Toki (RCNP) for fruitful discussions. This
research was supported in part by the Ministry of Education, Science,
Sports and Culture of Japan, by the National Science Council of
Republic of China (Taiwan), Korea Research Foundation (KRF)
Grant (2006-312-C00507), MEC (Spain) Grant No. FIS2005-03142,
European Hadron Physics Project (RII3-CT-2004-506078)
and the National Science Foundation (NSF Award PHY-0555558).
\section{Introduction}
\IEEEPARstart{A}{daptive} Volterra filters have been thoroughly studied in diverse applications \cite{karakucs2017bayesian,mallouki2016analysis,sicuranza2004filtered,sicuranza2005multichannel}. However, such filters become computationally expensive when a large number of multidimensional coefficients are required. To overcome this problem, many methods have been developed \cite{batista2016reduced,wang2016class,chen2016kernel,scarpiniti2018spline}. Among these, the second-order Volterra (SOV) filter has been widely applied to identify nonlinear systems with an acceptable error level \cite{ogunfunmi2007}.
Impulsive noise is a great challenge for nonlinear system identification. It has been shown that impulsive noise can be better modeled by the $\alpha$-stable distribution \cite{shao1993signal}. A symmetric $\alpha$-stable probability density function (PDF) is defined by means of its characteristic function \cite{shao1993signal} $\psi(\omega) = \exp \left\{-\gamma|\omega|^{\alpha}\right\}$, where $0 < \alpha \le 2$ is the \emph{characteristic exponent}, controlling the heaviness of the PDF tails, and $\gamma>0$ is the \emph{dispersion}, which plays a role analogous to the variance. Such $\alpha$-stable noise tends to produce ``outliers''. The recursive least squares (RLS) algorithm, based on the second-order moment, is not robust against outliers \cite{lu2016improved}. To address the stability problem in impulsive noise environments, several RLS-based algorithms have been proposed \cite{singh2010closed,navia2012combination,zou2001robust}. In \cite{navia2012combination}, a recursive least $p$-norm (RL$p$N) algorithm was proposed. However, this algorithm only achieves improved performance when $p$ is close to $\alpha$, where $p$ is the order of the moments \cite{navia2012combination}. Another strategy is the recursive least M-estimate (RLM) algorithm, which exploits the M-estimate function to suppress outliers \cite{zou2001robust}. Although it is superior to several existing outlier-resistant methods, it suffers from performance degradation in highly impulsive noise environments.
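For readers who wish to reproduce the noise model, a minimal sketch of a symmetric $\alpha$-stable generator is given below, based on the standard Chambers-Mallows-Stuck method. The $\gamma^{1/\alpha}$ scaling is our assumption for matching the dispersion in the characteristic function above; for $\alpha=2$ it reduces to a Gaussian with variance $2\gamma$.

```python
import numpy as np

def symmetric_alpha_stable(alpha, gamma, size, rng):
    # Chambers-Mallows-Stuck generator for symmetric alpha-stable noise.
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)
    W = rng.exponential(1.0, size)
    if alpha == 1.0:
        X = np.tan(V)                      # Cauchy special case
    else:
        X = (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
             * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))
    return gamma ** (1.0 / alpha) * X      # dispersion gamma sets the scale
```

Samples drawn with $\alpha<2$ exhibit the occasional large outliers that motivate the robust estimator developed below.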
Motivated by these considerations, we employ another M-estimator, the Geman-McClure estimator \cite{li2016learning}, for nonlinear system identification. Like the $\mathcal L_p$ estimator, the Geman-McClure estimator is a non-convex M-estimator, which is more efficient for learning systems \cite{mandanas2017robust}. To the best of our knowledge, no existing adaptive algorithm achieves improved performance in both Gaussian and $\alpha$-stable noise environments. By integrating the Geman-McClure estimator in the SOV filter structure, the proposed SOV recursive Geman-McClure algorithm achieves a smaller steady-state kernel error as compared with state-of-the-art algorithms. Such a reduction is in fact vital from an engineering application perspective, and would likely justify the increase in computational complexity required to run the proposed algorithm. The amount of mathematics needed to derive the algorithm is of no consequence for practical applications, where the cost of hardware principally matters. In this paper, by proper application of mathematical concepts (even if at times cumbersome), we show that key information about the signal environment can be extracted from observations. Our main contributions are listed as follows:
\noindent(i) The Geman-McClure estimator is applied in the SOV filter for the first time, yielding improved performance in the presence of $\alpha$-stable and Gaussian noises.
\noindent(ii) The steady-state behaviour of the proposed algorithm is analyzed.
\noindent(iii) We validate the theoretical findings and effectiveness of the proposed algorithm through simulations.
\vspace{-3mm}
\section{Problem Formulation}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.65] {Fig01.eps}
\caption{\label{1} Diagram of nonlinear system identification using SOV filter.}
\label{Fig01}
\end{figure}
Fig. \ref{Fig01} shows the nonlinear system identification model based on the SOV filter, where $x(n)$ and $y(n)$ denote the input and output data, $e(n)=d(n)-y(n)$ denotes the error signal, $d(n)$ is the desired signal, and $\xi(n)$ is the noise signal. Given a desired signal $d(n)$ that satisfies a model of the form
\begin{equation}
d(n) = \bm h_o^{\mathrm T}\bm x(n)+\xi(n)
\label{001}
\end{equation}
we want to estimate an $L \times 1$ unknown vector $\bm h_o$, where $L$ denotes the length of the SOV filter. The expanded input vector $\bm x(n)$ and the corresponding expanded coefficient vector ${\bm {\hat h}}(n)$ of the SOV system at time $n$ are expressed as
\begin{equation}
\begin{aligned}
\bm x(n)=&[x(n),x(n-1),\ldots,x(n-M+1), \\
& x^2(n),x(n)x(n-1),\ldots,x^2(n-M+1)]^{\mathrm T}
\label{002}
\end{aligned}
\end{equation}
\vspace{-3mm}
\begin{equation}
\begin{aligned}
{\bm {\hat h}}(n)=[h_1(0),\ldots,h_1(M-1),h_2(0,0),\ldots\\
,h_2(M-1,M-1)]^{\mathrm T}
\label{003}
\end{aligned}
\end{equation}
where $M$ denotes the length of the linear kernel, and $h_r$ stands for the $r$th-order Volterra kernel. Thus, the output of the SOV filter is expressed as
\begin{equation}
\begin{array}{l}
y(n) = {\bm {\hat h}}^{\mathrm T}(n)\bm x(n) = \sum\limits_{m_1=0}^{M-1} {h_1(m_1)} x(n - m_1) \\
+ \sum\limits_{m_1=0}^{M-1} {\sum\limits_{m_2=m_1}^{M-1} {h_2(m_1,m_2)} x(n-m_2)x(n - m_1)}.
\end{array}
\label{004}
\end{equation}
In this case, $L=M(M+3)/2$. In practice, the noise signal $\xi(n)$ may be either Gaussian or non-Gaussian. Hence, it is worthwhile to pursue a more effective SOV algorithm that achieves faster convergence and smaller misalignment.
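As a concrete illustration of Eqs. (\ref{002})-(\ref{004}), the following sketch (function names are ours) builds the expanded SOV regressor and evaluates the filter output:

```python
import numpy as np

def sov_regressor(x, n, M):
    # Expanded SOV input vector, Eq. (2): M linear taps followed by the
    # M*(M+1)/2 upper-triangular quadratic products.
    taps = np.array([x[n - m] for m in range(M)])
    quad = np.array([taps[m1] * taps[m2]
                     for m1 in range(M) for m2 in range(m1, M)])
    return np.concatenate([taps, quad])     # length L = M*(M+3)/2

def sov_output(h, x, n, M):
    # Filter output y(n), Eq. (4): linear in the coefficient vector h.
    return h @ sov_regressor(x, n, M)
```

Because $y(n)$ is linear in $\bm{\hat h}(n)$, any linear-in-parameters adaptive algorithm can be applied directly to this expanded regressor.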
\vspace{-5mm}
\section{Proposed algorithm}
The conventional Geman-McClure estimator has the following form \cite{li2016learning}:
\begin{equation}
\Theta(e) = \frac {e^2}{\sigma^2+e^2}
\label{005}
\end{equation}
where $\sigma$ is a constant that modulates the shape of the loss function. Fig. \ref{Fig02} shows the score functions $\phi(e)$ of the M-estimator and the Geman-McClure estimator, where $\phi(e)=\partial\Theta(e)/\partial e$. It can be observed that for large values of $e$ the weight update is small, and thus the algorithm is stable in the presence of impulsive noise. For performance improvement, recursive algorithms are usually preferred. Inspired by these merits, the cost function of the proposed algorithm is defined as follows:
\begin{equation}
\begin{array}{rcl}
J(n) \triangleq \sum\limits_{k=1}^n {\lambda^{n-k}\frac{e^2(k,n)}{\sigma^2+e^2(k,n)}}
\end{array}
\label{006}
\end{equation}
where $0\ll\lambda<1$ is the forgetting factor. The error signal $e(k,n)$ can be expressed as
\begin{equation}
e(k,n) = d(k) - \bm x^{\mathrm T}(k)\bm {\hat h}(n).
\label{007}
\end{equation}
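The boundedness of the estimator in (\ref{005}), and the redescending behaviour of its score function, can be checked numerically with the small sketch below; the closed form of the score follows from differentiating (\ref{005}) (function names are ours):

```python
import numpy as np

def gm_loss(e, sigma=1.0):
    # Geman-McClure loss, Eq. (5): bounded by 1, so a single outlier
    # can contribute at most a fixed amount to the cost.
    return e**2 / (sigma**2 + e**2)

def gm_score(e, sigma=1.0):
    # Score phi(e) = dTheta/de = 2*sigma^2*e / (sigma^2 + e^2)^2;
    # it redescends to zero for large |e|, suppressing impulsive samples.
    return 2.0 * sigma**2 * e / (sigma**2 + e**2) ** 2
```

This redescending score is what distinguishes the estimator from the quadratic loss of RLS, whose score grows without bound.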
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.4] {Fig02.eps}
\caption{\label{2} Score functions of Hampel's three-part redescending M-estimator and the Geman-McClure estimator ($\sigma$=1, and the three threshold parameters in the M-estimate are set to 0.6, 1.3 and 1.8).}
\label{Fig02}
\end{figure}
Taking the gradient of $J(n)$ with respect to the weight vector ${\bm {\hat h}}(n)$, and letting the equation be zero, we have
\begin{equation}
\begin{array}{l}
\sum\limits_{k=1}^n \lambda^{n-k}\rho(k,n)\bm x(k){\bm x^{\mathrm T}}(k){\bm {\hat h}}(n) = \sum\limits_{k=1}^n {\lambda^{n-k}\rho(k,n)\bm x(k)d(k)}
\end{array}
\label{008}
\end{equation}
where $\rho(k,n)=\frac{\sigma^2}{(\sigma^2+e^2(k,n))^2}$ is the weighting factor. Then, the expression of ${\bm {\hat h}}(n)$ can be rewritten as:
\begin{equation}
\begin{array}{rcl}
\begin{array}{l}
{\bm {\hat h}}(n) = {\bm P}(n){\bm \theta}(n) = {\bm \Phi}^{-1}(n){\bm \theta}(n)
\end{array}
\end{array}
\label{009}
\end{equation}
where ${\bm P}(n) = {\bm \Phi}^{-1}(n)$, ${\bm \Phi}(n) = \sum\limits_{k = 1}^n \lambda^{n-k}\rho(k,n)\bm x(k){\bm x^{\mathrm T}}(k)$
and ${\bm \theta}(n) = \sum\limits_{k = 1}^n \lambda^{n-k}\rho(k,n)d(k)\bm x(k)$. If $\rho(k,n) = 1$, the above update equation becomes the conventional RLS algorithm. If $\rho(k,n)\ne1$, ${\bm \Phi}(n)$ and ${\bm \theta}(n)$ are the weighted autocorrelation matrix of the input and the weighted cross-correlation vector, with weights $\rho(k,n)$. We have to recalculate (\ref{009}) at each iteration. In our previous studies, an online recursive method was considered to overcome this limitation \cite{lu2016improved}. By using this approach, ${\bm \Phi}(n)$ and ${\bm \theta}(n)$ can be adapted by
\begin{equation}
\begin{array}{l}
\bm \Phi(n)\approx \sum\limits_{k = 1}^n \lambda^{n-k}\rho(k,k)\bm x(k){\bm x^{\mathrm T}}(k) \\
\;\;\;\;\;\;\;\;\;= \lambda{\bm \Phi}(n-1) + \rho(n,n)\bm x(n){\bm x^{\mathrm T}}(n),\\
\label{010}
\end{array}
\end{equation}
\vspace{-6mm}
\begin{equation}
\begin{array}{l}
{\bm \theta}(n) \approx \sum\limits_{k = 1}^n \lambda^{n-k}\rho(k,k)d(k)\bm x(k) \\
\;\;\;\;\;\;\;\;= \lambda {\bm \theta}(n-1) + \rho(n,n)\bm x(n)d(n).
\label{011}
\end{array}
\end{equation}
Using the matrix inversion lemma \cite{sayed2003fundamentals}, the adaptation equation of ${\bm P}(n)$ can be obtained as
\begin{equation}
{\bm P}(n) = \lambda^{-1}{\bm P}(n-1) - \lambda^{-1}{\bm \Psi}(n){\bm x^{\mathrm T}}(n){\bm P}(n-1)
\label{012}
\end{equation}
where ${\bm P}(0) = \zeta^{-1}{\bf I}$, ${\bf I}$ is an identity matrix, $\zeta=0.01$ is a small positive number, and the gain vector is
\begin{equation}
{\bm \Psi}(n) = \frac{\rho(n,n){\bm P}(n-1)\bm x(n)}{\lambda + \rho(n,n){\bm x^{\mathrm T}}(n){\bm P}(n-1)\bm x(n)}.
\label{013}
\end{equation}
Rewriting (\ref{009}) in a recursive way, we can obtain the following update equation for ${\bm {\hat h}}(n)$
\begin{equation}
{\bm {\hat h}}(n) = {\bm {\hat h}}(n-1) + {\bm \Psi}(n)[d(n) - {\bm x^{\mathrm T}}(n){\bm {\hat h}}(n-1)].
\label{014}
\end{equation}
$\emph{Remark 1}$: In the expression (\ref{014}), one can see an implicit relationship between ${\bm {\hat h}}(n)$ and $\rho(n,n)$. The algorithm uses an iterative approximation to the optimal solution: $\rho(n,n)$ is calculated by using ${\bm {\hat h}}(n-1)$, and the new value of ${\bm {\hat h}}(n)$ is then obtained with this value of $\rho(n,n)$.
$\emph{Remark 2}$: The proposed algorithm is easy to implement in the SOV filter, since it does not require any a priori information on the noise characteristics. It requires only two parameters ($\sigma$ and $\lambda$) to improve the overall nonlinear filtering performance.
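A minimal sketch of one iteration of the resulting update, Eqs. (\ref{012})-(\ref{014}), is given below for a generic regressor vector; in the SOV case the regressor is the expanded vector of Eq. (\ref{002}). Variable names are ours, and the outer-product form of the $\bm P$ update uses the symmetry of $\bm P$.

```python
import numpy as np

def rgm_step(h, P, x_vec, d, sigma=0.3, lam=0.99):
    # One iteration of the recursive Geman-McClure update.
    e = d - x_vec @ h                              # a priori error
    rho = sigma**2 / (sigma**2 + e**2) ** 2        # weighting factor rho(n,n)
    Px = P @ x_vec
    psi = rho * Px / (lam + rho * (x_vec @ Px))    # gain vector, Eq. (13)
    h = h + psi * e                                # weight update, Eq. (14)
    P = (P - np.outer(psi, Px)) / lam              # Eq. (12), P symmetric
    return h, P
```

With $\rho(n,n)\equiv 1$ this reduces to standard RLS; for large $|e(n)|$ the weighting factor shrinks the gain, which is the outlier-suppression mechanism discussed above.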
\vspace{-5mm}
\section{Performance analysis}
In this section, we theoretically study the performance of the proposed SOV algorithm under the assumptions listed below. Because the output of a Volterra-based algorithm depends linearly on the coefficients of the filter itself, we can follow the approach in \cite{sayed2003fundamentals,lee1993fast} for analyzing the mean performance and the steady-state excess mean square error (EMSE) of the SOV recursive Geman-McClure algorithm.
To begin with, we define the weight deviation vector as $\widetilde{{\bm h}}(n) \triangleq {\bm h_o} - {\bm {\hat h}}(n)$. In steady-state, the \emph{a priori} error $e_a(n)$ and the \emph{a posterior} error $e_p(n)$ are defined as $e_a(n) \triangleq {\bm x^{\mathrm T}}(n)\widetilde{{\bm h}}(n-1),\;e_p(n) \triangleq {\bm x^{\mathrm T}}(n)\widetilde{{\bm h}}(n)$. The mathematical analysis needs the following assumptions.\\
\noindent \textbullet\; \emph{Assumptions}\\
a) The input signal $\bm x(n)$ is independent and identically distributed (i.i.d.) with zero-mean, and $\left \lVert\bm x(n)\right \rVert_{\bm \Omega}^2$ is approximately independent of the \emph{a priori} error $e_a(n)$ at steady-state, where $\left \lVert \bm x\right \rVert_{\bm \Omega}^2={\bm x}^{\mathrm T}{\bm \Omega}{\bm x}$ stands for the squared-weighted Euclidean norm of a vector.
\noindent b) The noise signal $\xi(n)$ is i.i.d. with zero-mean and variance $\sigma_\xi^2$.
\noindent c) $\xi(n)$ and $\bm x(n)$ are mutually independent.
\vspace{-3.5mm}
\subsection{Mean stability}
We can use $\widetilde{{\bm h}}(n)$ to rewrite the adaptation equation (\ref{014}) as
\begin{equation}
\begin{array}{rcl}
&\widetilde{{\bm h}}(n) = \widetilde{{\bm h}}(n-1)-\frac{\rho(n,n){\bm P}(n-1)\bm x(n)}{\lambda+\rho(n,n){\bm x^{\mathrm T}}(n){\bm P}(n-1)\bm x(n)} \\
&\bm\cdot [{d(n) - {\bm x^{\mathrm T}}(n){\bm {\hat h}}(n-1)}].
\label{017}
\end{array}
\end{equation}
Based on the definition of (\ref{012}), and using the matrix inversion lemma, we obtain the adaptation of ${{\bm P}^{-1}}(n)$, which is ${{\bm P}^{-1}}(n) = \lambda^{n}\zeta{\bf I} + \sum\limits_{k=1}^n \lambda^{n-k}\rho(k,k)\bm x(k)\bm x^{\mathrm T}(k)$. Then, we rewrite (\ref{017}) as
\begin{equation}
\begin{array}{l}
\widetilde{{\bm h}}(n) = \widetilde{{\bm h}}(n-1) - \mu(n){\bm P}(n-1){\bm x^{\mathrm T}}(n)e(n)
\label{019}
\end{array}
\end{equation}
where $\mu(n)=\frac{1}{\frac{\lambda}{\rho(n,n)}+{\bm x^{\mathrm T}}(n){\bm P}(n-1)\bm x(n)}$ and $e(n)=\bm x^{\mathrm T}(n)\widetilde{{\bm h}}(n-1)+\xi(n)$. Taking expectations of both sides of (\ref{019}) and using $e(n) \approx \bm x^{\mathrm T}(n)\widetilde{{\bm h}}(n-1)$ during the transient state, yields
\begin{equation}
\hskip-1em
\begin{array}{l}
{\mathrm E} \{\widetilde{{\bm h}}(n)\} = {\mathrm E} \{\widetilde{{\bm h}}(n-1)\} - {\mathrm E} \{\mu(n){\bm P}(n-1){\bm x^{\mathrm T}}(n)e(n)\}\\
\approx {\mathrm E} \{\widetilde{{\bm h}}(n-1)\} - {\mathrm E} \left\{\mu(n){\bm P}(n-1){\bm x^{\mathrm T}}(n)\bm x(n)\right\} {\mathrm E} \{\widetilde{{\bm h}}(n-1)\}
\label{019a}
\nonumber
\end{array}
\end{equation}
where ${\mathrm E}\{\cdot\}$ denotes the statistical expectation operator. Therefore, the proposed algorithm can converge in the sense of mean if and only if
\begin{equation}
\begin{aligned}
0 < \lambda_{\max} \left({\mathrm E}\left\{\mu(n){\bm P}(n-1){\bm x^{\mathrm T}}(n)\bm x(n)\right\}\right) <2
\label{019b}
\end{aligned}
\end{equation}
where $\lambda_{\max}(\cdot)$ is the largest eigenvalue of a matrix. Based on the fact that $\lambda_{\max}({\bf A}{\bf B})< {\mathrm {Tr}}({\bf A}{\bf B})$ in (\ref{019b}) \footnote{$\lambda_{\max}({\bf A}{\bf B}) < {\mathrm {Tr}}({\bf A}{\bf B})$ is not true in the general case. However, if the input signal is persistently exciting, $\bm P(n)>0$ for all finite $n$ and the matrix $\bm x(n)\bm x^{\mathrm T}(n)$ is nonnegative definite in (\ref{019b}). Hence, we have such an inequality.}, we obtain
\begin{equation}
\begin{array}{l}
\lambda_{\max} \left({\mathrm E}\left\{ \frac{{\bm P}(n-1){\bm x^{\mathrm T}}(n)\bm x(n)}{\frac{\lambda}{\rho(n,n)}+{\bm x^{\mathrm T}}(n){\bm P}(n-1)\bm x(n)} \right\} \right) \\
<{\mathrm E}\left\{ \frac{{\mathrm {Tr}}({\bm x^{\mathrm T}}(n){\bm P}(n-1) \bm x(n))}{\frac{\lambda}{\rho(n,n)}+{\bm x^{\mathrm T}}(n){\bm P}(n-1)\bm x(n)} \right\} <1.
\label{019c}
\end{array}
\end{equation}
Consequently, the mean error weight vector of the proposed algorithm is convergent if the input signal is persistently exciting \cite{sayed2003fundamentals}.
\subsection{Mean-square stability}
Multiplying both sides of (\ref{019}) by $\bm x(n)$, we have the relationship between the \emph{a priori} and the \emph{a posteriori} errors, as below
\begin{equation}
\begin{array}{l}
e_p(n) = e_a(n) - \left \lVert \bm x(n)\right \rVert_{{\bm P}(n-1)\mu(n)}^2e(n).
\label{020}
\end{array}
\end{equation}
Using the established energy conservation argument, the proposed algorithm can be expressed as
\begin{equation}
\begin{array}{l}
\widetilde{\bm h}(n) + \frac{{\mu(n)\bm P(n-1){\bm x^{\mathrm T}}(n)}}{\left \lVert \bm x(n)\right \rVert_{{\bm P}(n-1)\mu(n)}^2}e_a(n) = \\
{\quad\quad\quad\quad\quad\quad\quad}\widetilde{\bm h}(n-1) + \frac{\mu(n)\bm P(n-1){\bm x^{\mathrm T}}(n)}{\left \lVert \bm x(n)\right \rVert_{\bm P(n-1)\mu(n)}^2}e_p(n).
\label{021}
\end{array}
\end{equation}
Combining (\ref{019}) and (\ref{020}) and employing $\mu^{-1}(n){\bm P}^{-1}(n-1)$ as a weighting matrix for the squared-weighted Euclidean norm of a vector, we can get
\begin{equation}
\begin{array}{l}
|| \widetilde{{\bm h}}(n) ||_{{\mu^{-1}}(n){{\bm P}^{-1}}(n-1)}^2 + \frac{e^2_a(n)}{\left \lVert \bm x(n)\right \rVert_{\mu(n){\bm P}(n-1)}^2} \\
= || \widetilde{{\bm h}}(n-1) ||_{\mu^{-1}(n){{\bm P}^{-1}}(n-1)}^2 + \frac{e^2_p(n)}{\left \lVert\bm x(n)\right \rVert_{\mu(n){\bm P}(n-1)}^2}.
\label{022}
\end{array}
\end{equation}
In the SOV filter with the recursive Geman-McClure algorithm, the adaptive filter will converge to the optimum (minimum) EMSE at steady-state. Therefore, we have
\begin{equation}
\begin{array}{l}
{\mathrm E}\left\{{||\widetilde{{\bm h}}(n)||_{\mu^{-1}(n){\bm P}^{-1}(n-1)}^2} \right\}\\
\approx {\mathrm E}\left\{{||\widetilde{{\bm h}}(n-1)||_{{\mu^{-1}}(n){\bm P}^{-1}(n-1)}^2}\right\}
\label{023}
\end{array}
\end{equation}
Taking expectations of both sides of (\ref{022}), and substituting (\ref{023}) into (\ref{022}), yields
\begin{equation}
\begin{array}{l}
{\mathrm E}\left\{{\frac{e^2_a(n)}{\left \lVert \bm x(n)\right \rVert_{\mu(n){\bm P}(n-1)}^2}} \right\} = {\mathrm E}\left\{{\frac{e^2_p(n)}{\left \lVert \bm x(n)\right \rVert_{\mu(n){\bm P}(n-1)}^2}} \right\}.
\label{024}
\end{array}
\end{equation}
Substituting (\ref{020}) into (\ref{024}) results in
\begin{equation}
\begin{array}{l}
{\mathrm E}\left\{{\frac{e^2_a(n)}{\left \lVert \bm x(n)\right \rVert_{\mu(n){\bm P}(n-1)}^2}} \right\}
={\mathrm E}\left\{{\frac{e^2_a(n)}{\left \lVert \bm x(n)\right \rVert_{\mu(n){\bm P}(n-1)}^2}} \right\} \\ + {\mathrm E}\left\{{\left \lVert \bm x(n)\right \rVert_{\mu(n){\bm P}(n-1)}^2e^2(n)}\right\} - 2 {{\mathrm E}\left\{{e_a(n)e(n)} \right\}}.
\label{025}
\end{array}
\end{equation}
Therefore, in the steady-state ($n\to\infty$), (\ref{025}) can be reduced to
\begin{equation}
\begin{array}{l}
{\mathrm E}\left\{{\left \lVert \bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2e^2(\infty)}\right\} = 2 {{\mathrm E}\left\{{e_a(\infty)e(\infty)} \right\}}.
\label{026}
\end{array}
\end{equation}
Considering Assumption c) and using $e(n)=e_a(n)+\xi(n)$, the left side of (\ref{026}) can be expressed as
\begin{equation}
\begin{aligned}
&{\mathrm E}\left\{{\left \lVert \bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2(e_a(\infty)+\xi(\infty))^2}\right\}\\
=&\;{\mathrm E}\left\{{\left \lVert \bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2e_a^2(\infty)}\right\}
+{\mathrm E}\left\{{\left \lVert \bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2\xi^2(\infty)}\right\}\\
&+2{\mathrm E}\left\{{\left \lVert \bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2e_a(\infty)\xi(\infty)}\right\}.
\label{027}
\end{aligned}
\end{equation}
According to Assumption a), (\ref{027}) is reduced to
\begin{equation}
\begin{array}{l}
\sigma_\xi^2{\mathrm E}\left\{ {\left \lVert\bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2} \right\} + {\mathrm E}\left\{ \left \lVert\bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2\right\}
{\mathrm E}\left\{e_a^2(\infty)\right\}.
\label{028}
\end{array}
\end{equation}
Reusing $e(n)=e_a(n)+\xi(n)$ and considering Assumption b), the right side of (\ref{026}) can be expressed as
\begin{equation}
\begin{array}{c}
2{{\mathrm E}\left\{{e_a(\infty)e(\infty)} \right\}} \approx 2{\mathrm E}\left\{{e^2_a(\infty)}\right\}.
\label{029}
\end{array}
\end{equation}
Using (\ref{028}) and (\ref{029}), it can be shown that
\begin{equation}
\begin{array}{l}
\sigma_\xi^2{\mathrm E}\left\{{\left \lVert\bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2} \right\} + {\mathrm E}\left\{{\left \lVert \bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2}\right\}\\
\bm\cdot {\mathrm E}\left\{e_a^2(\infty)\right\} = 2{\mathrm E}\left\{e_a^2(\infty) \right\}.
\label{030}
\end{array}
\end{equation}
Eq. (\ref{030}) can be rewritten as
\begin{equation}
\tau = \frac{\vartheta\sigma_\xi^2}{2 - \vartheta}
\label{031}
\end{equation}
where $\tau = {\mathrm E}\left\{{e_a^2(\infty)}\right\}$ and $\vartheta = {\mathrm E}\left\{{\left \lVert \bm x(n)\right \rVert_{\mu(\infty){\bm P}(\infty)}^2} \right\}
= {\mathrm E}\left\{{\frac{{\sigma^2{\mathrm {Tr}}(\bm x(n){\bm x^{\mathrm T}}(n){\bm{P}}(\infty))}}{{{({\sigma^2} + e^2(\infty))}^2}\lambda + \sigma^2{\mathrm {Tr}}(\bm x(n){\bm x^{\mathrm T}}(n){\bm P}(\infty))}} \right\}$.
where ${\mathrm {Tr}}(\cdot)$ denotes the trace operation. Now, let us define the steady-state mean value of ${{\bm P}^{-1}}(n)$ as
${\bm {\mathcal P}^{-1}} \buildrel \Delta \over = \mathop {\lim}\limits_{n \to \infty} {\mathrm E}\left\{{\bm P}^{-1}(n)\right\} = \frac{{{\mathrm E}\{\rho(n,n)\} {\bm \Xi}(n)}}{1-\lambda}$, where ${\bm \Xi}(n) = {\mathrm E}\{\bm x(n){\bm x^{\mathrm T}}(n)\}$ is the input covariance matrix. When the algorithm operates close to the optimal EMSE, ${\mathrm E}\left\{{{\bm P}(\infty)} \right\}$ can be approximated as
\begin{equation}
\begin{array}{l}
{\mathrm E}\left\{{{\bm P}(\infty)} \right\} \approx {\left( {{\mathrm E}\left\{{{\bm P}^{-1}(\infty)}\right\}} \right)^{-1}} = \bm {\mathcal P} = \frac{(1 - \lambda){\bm \Xi}^{-1}(\infty)}{{\mathrm E}\left\{ {\frac{\sigma^2}{(\sigma^2+e^2(\infty))^2}} \right\}} \\
\approx \frac{{\mathrm E}\{{(\sigma^2 + e^2(\infty))}^2\} (1 - \lambda){\bm \Xi}^{-1}(\infty)}{\sigma^2}.
\label{034}
\end{array}
\end{equation}
For $0\ll\lambda<1$, we have $|e_a(n)| \ll |\xi(n)|$ at steady-state. Finally, substituting (\ref{034}) into (\ref{031}), we arrive at
\begin{equation}
\epsilon = \frac{\sigma_\xi^2(1-\lambda)L \varphi}{2-(1-\lambda)L \varphi}
\label{035}
\end{equation}
where $\varphi = {\mathrm E}\left[{\frac{{\mathrm E}\left[\left(\sigma^2+\xi^2(n)\right)^2\right]}{\lambda {{(\sigma^2+\xi^2(n))}^2} + {\mathrm E}\left[\left(\sigma^2+\xi^2(n)\right)^2\right](1-\lambda)L}} \right]$.
Note that it is very difficult to further simplify (\ref{035}). The theoretical result contains the random variable $\xi(n)$, but after the expectation operation we obtain an exact value. Furthermore, (\ref{035}) is also applicable to the analysis of the linear recursive Geman-McClure algorithm.
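Equation (\ref{035}) can be evaluated numerically by Monte Carlo. The sketch below assumes zero-mean Gaussian noise of variance $\sigma_\xi^2$ (the Gaussian scenario of Section V-A); the mapping from SNR to noise power depends on the signal power and is left to the caller.

```python
import numpy as np

def theoretical_emse(sigma, lam, L, noise_var, n_mc=200000, seed=0):
    # Monte Carlo evaluation of the steady-state EMSE, Eq. (35),
    # for zero-mean Gaussian noise of variance noise_var.
    rng = np.random.default_rng(seed)
    xi = rng.normal(0.0, np.sqrt(noise_var), n_mc)
    g = (sigma**2 + xi**2) ** 2
    e_g = g.mean()                                   # E[(sigma^2 + xi^2)^2]
    phi = np.mean(e_g / (lam * g + e_g * (1.0 - lam) * L))
    return noise_var * (1.0 - lam) * L * phi / (2.0 - (1.0 - lam) * L * phi)
```

For instance, $\sigma=0.5$, $\lambda=0.99$, $L=14$ and noise variance $10^{-2.5}$ (25 dB SNR under a unit-power-signal assumption, which is our reading of the simulation setup) give an EMSE near $-37$ dB, in line with Table \ref{Table02}.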
Finally, we compare the computational complexities of the algorithms, as shown in Table \ref{Table01}, where $N_w$ denotes the length of the sliding window in the SOV-RLM algorithm. The arithmetic complexity of the proposed algorithm is comparable to that of the SOV-RLS algorithm, apart from $L+3$ more multiplications, 1 more addition and 1 more division in (\ref{013}).
\begin{table}[tbp]
\scriptsize
\centering
\caption{Summary of the computational complexity in each iteration.}
\doublerulesep=0.5pt
\begin{tabular}{p{1.2cm}|c c p{1.8cm}}
\hline
\hline
\textbf {Algorithms} &\textbf {Multiplications} &\textbf {Additions} &\textbf {Other operations} \\ \hline
SOV-RLS &$2L^2+4L$ &$2L^2+2L$ &1 division \\ \hline
SOV-RLM &$2L^2+5L$ &$2L^2+2L$ &1 division and ${\mathcal O}(N_w{\mathrm {log}}N_w)$ \\ \hline
SOV-RL$p$N &$2L^2+5L+2$ &$2L^2+2L+1$ &2 divisions and $p$-power operation \\ \hline
\rowcolor{mygray}
Proposed algorithm &$2L^2+5L+3$ &$2L^2+2L+1$ &2 divisions \\
\hline
\hline
\end{tabular}
\label{Table01}
\end{table}
\section{Simulation results}
To verify the theoretical findings and to illustrate the effectiveness of the proposed algorithm, we present simulations implemented in Matlab R2013a running on a 2.1GHz AMD processor with 4GB of RAM. The EMSE and the normalized mean square deviation (NMSD) $= 20{\mathrm {log}}_{10} {\mathrm E}\{{|| {\bm {\hat h}}(n)-\bm h_o ||_2}\}/{||\bm h_o||_2}$ are employed to evaluate the performance. The results are averaged over 300 independent simulations.
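The NMSD figure of merit can be computed per run as in the sketch below (the ensemble average over independent trials is then taken outside; the helper name is ours):

```python
import numpy as np

def nmsd_db(h_hat, h_o):
    # Normalized mean square deviation in dB for one trial,
    # 20*log10(||h_hat - h_o|| / ||h_o||).
    return 20.0 * np.log10(np.linalg.norm(h_hat - h_o) / np.linalg.norm(h_o))
```

A perfect estimate gives $-\infty$ dB, while an all-zero estimate gives 0 dB, so more negative values indicate better identification.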
\subsection{Gaussian scenarios}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.4] {Fig03.eps}
\caption{\label{3} Theoretical and simulation EMSEs for proposed algorithm.}
\label{Fig03}
\end{figure}
\begin{table} [tbp]
\scriptsize
\centering
\caption{EMSEs for the proposed algorithm ($\lambda$=0.99).}
\doublerulesep=0.4pt
\begin{tabular}{ccc|cc}
\hline
\hline
\multicolumn{1}{ c }{\multirow{2}{*} {\textbf{SNR}}} & \multicolumn{1}{c}{\multirow{2}{*} {$\sigma$}} & \multicolumn{1}{ c }{\multirow{2}{*} {$L$}} & \multicolumn{2}{|c}{\textbf{EMSE}} \\ \cline{4-5}
& & & \textbf {Theory} & \textbf {Simulation} \\
\hline
25dB& 0.5 & 14 \cite{lu2016improved}& $-$36.66dB & $-$36.75dB \\
40dB& 1.8 & 14 \cite{lu2016improved}& $-$51.64dB & $-$51.83dB \\
20dB& 0.2 & 14 \cite{lee1993fast}& $-$32.27dB & $-$31.77dB \\
30dB& 0.45 & 14 \cite{lee1993fast}& $-$41.70dB & $-$41.83dB \\
10dB& 0.9 & 20 \cite{kalluri1999general}& $-$20.65dB & $-$20.14dB \\
30dB& 0.6 & 20 \cite{kalluri1999general}& $-$39.83dB & $-$40.40dB \\
\hline
\hline
\end{tabular}
\label{Table02}
\end{table}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.43] {Fig04.eps}
\caption{\label{4} NMSD curves of the algorithms for Gaussian environments.}
\label{Fig04}
\end{figure}
In this example, we provide simulation verification for the analysis. The Gaussian distribution with zero-mean and unit variance is adopted to generate $\bm x(n)$ and $\xi(n)$. Fig. \ref{Fig03} plots the simulation and theoretical results of the EMSEs for different signal-to-noise ratio (SNR) and parameter settings. The unknown plant is a 14-tap nonlinear system, which is given in \cite{lu2016improved}. It is observed that the simulations agree well with the analysis results in all scenarios. Next, we compare the theoretical and the simulation results of the steady-state EMSEs for different unknown systems. The input and noise signals used have similar characteristics as in Fig. \ref{Fig03}. The unknown parameter vector $\bm h_o$ has $L=14$ or $20$ entries and is defined by \cite{lee1993fast,kalluri1999general}. Table \ref{Table02} provides the simulation and theoretical EMSE values with different Volterra systems. These show agreement in the SOV filter despite the different SNRs, parameter settings and unknown systems. The difference between simulation results and theory arises from the approximations and assumptions used in deriving (\ref{035}).
We compare the convergence rate and steady-state kernel behaviour of the proposed algorithm with those of existing algorithms. The unknown system is a 14-tap nonlinear system \cite{lu2016improved}. A zero-mean white Gaussian noise (WGN) signal is used as the input. We observe that the proposed algorithm outperforms the standard SOV-RLS algorithm by nearly 5dB in steady-state when the noise signal is WGN with different SNRs (Fig. \ref{Fig04}).
\vspace{-5.5mm}
\subsection{Non-Gaussian scenarios}
\begin{table}[tbp]
\scriptsize
\centering
\caption{Steady-state NMSDs of the proposed algorithms for different $\sigma$ with similar convergence rate ($\alpha=1.25, \gamma=1/15$).}
\doublerulesep=0.1pt
\begin{tabular}{c|p{0.67cm}<{\centering}|p{0.67cm}<{\centering}|p{0.67cm}<{\centering}|p{0.67cm}<{\centering}|p{0.67cm}<{\centering}|p{0.67cm}<{\centering}}
\hline \hline
$\sigma$ & $0.05$ & $0.2$ & $0.3$ & $0.4$ & $1$ & $1.5$ \\ \hline
\begin{tabular}[c]{@{}l@{}} \textbf{Steady-state}\\ \textbf{NMSD (dB)}\end{tabular} &$-44.24$ &$-43.00$ &$-45.53$ &$-41.19$ &$-38.76$ &$-37.57$ \\ \hline \hline
\end{tabular}
\label{Table03}
\end{table}
\begin{figure}[!htb]
\centering
\includegraphics[scale=0.43] {Fig05.eps}
\caption{\label{6} NMSD curves of the algorithms in $\alpha$-stable noise ($\alpha=1.25, \gamma=1/15$).}
\label{Fig06}
\end{figure}
In the second example, we repeat the same simulations as above, but with $\alpha$-stable noise as the noise signal. First, we study the effect of $\sigma$ on the performance of the SOV recursive Geman-McClure algorithm under impulsive noise. For a fair comparison, we use the same fixed forgetting factor $\lambda=0.99$ to obtain similar convergence rates. The results are shown in Table \ref{Table03}. They indicate that the performance deteriorates quickly as $\sigma$ increases beyond 0.4. Balancing steady-state NMSD performance against convergence rate, the best choice in this example is $\sigma=0.3$, so we fix $\sigma=0.3$ in the following simulations. Fig. \ref{Fig06} illustrates the NMSD performance of the algorithms in the presence of $\alpha$-stable noise. In this impulsive noise scenario, the proposed SOV recursive Geman-McClure algorithm again achieves better performance than its SOV-based counterparts\footnote{The SOV-RLM and SOV-RL$p$N algorithms can be derived by extending the algorithms of \cite{navia2012combination,zou2001robust} to the SOV filter structure.}. In the legend of the figure, RT denotes the run time of each algorithm; the run time of the proposed algorithm falls between those of the RLS and RL$p$N. Since the proposed algorithm achieves improved performance in both cases, we conclude that the SOV recursive Geman-McClure algorithm is robust across various scenarios and is an excellent alternative to the SOV-RLS algorithm for nonlinear system identification tasks.
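For readers wishing to reproduce the impulsive-noise setting above ($\alpha=1.25$, $\gamma=1/15$), symmetric $\alpha$-stable samples can be drawn with the classical Chambers--Mallows--Stuck method. The sketch below is a reading aid rather than the code used in the paper, and the convention that $\gamma$ enters as a dispersion with scale $\gamma^{1/\alpha}$ is our assumption.

```python
import numpy as np

def sas_noise(alpha, gamma, size, rng=None):
    """Symmetric alpha-stable samples via the Chambers-Mallows-Stuck method.

    `gamma` is taken as the dispersion, so the scale equals gamma**(1/alpha)
    (a parameterization assumption, not fixed by the text above).
    """
    rng = np.random.default_rng() if rng is None else rng
    V = rng.uniform(-np.pi / 2, np.pi / 2, size)  # uniform phase
    W = rng.exponential(1.0, size)                # unit-mean exponential
    if alpha == 1.0:                              # alpha = 1 reduces to Cauchy
        X = np.tan(V)
    else:
        X = (np.sin(alpha * V) / np.cos(V) ** (1.0 / alpha)
             * (np.cos((1.0 - alpha) * V) / W) ** ((1.0 - alpha) / alpha))
    return gamma ** (1.0 / alpha) * X

# The impulsive-noise setting of the figures: alpha = 1.25, gamma = 1/15
noise = sas_noise(1.25, 1 / 15, 10000, rng=np.random.default_rng(0))
```

The occasional very large samples produced this way are what make the noise ``impulsive'' and what break mean-square-based algorithms such as SOV-RLS.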
\section{Conclusion}
In this paper, we proposed a recursive Geman-McClure algorithm based on the SOV filter, derived by minimizing the Geman-McClure estimator of the error signal, and presented a detailed steady-state analysis. One advantage of the proposed algorithm is that it has only two parameters, the constant $\sigma$ and the forgetting factor $\lambda$, each of which can be chosen from a wide range while retaining excellent performance. Computer simulations support the analytical findings and confirm the effectiveness of the proposed algorithm. Note that the variance of $\alpha$-stable noise is infinite, so it is very difficult, if not impossible, to conduct a steady-state performance analysis of the proposed algorithm in that setting. Likewise, theoretical analysis of the global stability and convergence of Volterra filters in the presence of $\alpha$-stable noise is very hard, and rigorous mathematical results have long been lacking. We leave these investigations for future work.
\ifCLASSOPTIONcaptionsoff
\newpage
\fi
\footnotesize
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec:intro}
The logistic fictitious play (LFP) algorithm (a.k.a.\ stochastic fictitious play with best logit response), first introduced by Fudenberg and Kreps in 1993~\cite{Fuden_93}, is a classical algorithm in game theory (see~\cite{Ny_06} and references therein). In this work we focus on the two-player zero-sum version, which is shown in Algorithm~\ref{algo:LFP}.
Specifically,
players I and II are given finite action spaces $[n]:=\{1,2,\ldots,n\}$ and $[m]$, respectively, and a payoff matrix $A\in\mathbb{R}^{m\times n}$. At the beginning,
players I and II choose their initial actions $i_0\in[n]$ and $j_0\in[m]$, respectively, and their initial ``histories of actions'' are denoted by $x^0 := e_{i_0}$ and $y^0 := e_{j_0}$, respectively (where $e_i$ denotes the $i$-th standard coordinate vector).
At any time $t\ge 0$, in order for player I
to choose the next action $i_{t+1}$,
she first computes the best-logit-response distribution $v^t$ based on player II's history of actions $y^t$:
\begin{equation}
v^t:= \mathsf{P}_\mathsf{x}(y^t):= {\argmin}_{x\in\Delta_n} \; \lranglet{A^\top y^t}{x} + \eta h_\mathsf{x}(x),\label{eq:def_vt}
\end{equation}
where $h_\mathsf{x}(x):= \sum_{i=1}^n x_i\ln(x_i)$
is the (negative) entropic function with domain $\mathsf{dom}\, h_\mathsf{x}:= \mathbb{R}^n_+$ (where $\mathbb{R}_+:= [0,+\infty)$), $\Delta_n:= \{x\in\mathbb{R}_+^n: \sum_{i=1}^n x_i=1\}$ denotes the $(n-1)$-dimensional probability simplex, and $\eta>0$ is the regularization parameter.
Then, based on $v^t$, she randomly chooses $i_{t+1}$ by sampling from the distribution $v^t$, such that $\Pr(i_{t+1} = i) = v^t_{i}$ for $i\in[n]$. After obtaining $i_{t+1}$, she updates her history of actions from $x^t$ to $x^{t+1}$ by a convex combination of $x^t$ and $e_{i_{t+1}}$:
\begin{equation}
x^{t+1} := (1-\alpha_t)x^t + \alpha_t e_{i_{t+1}},
\end{equation}
where $\alpha_t\in[0,1]$ can be interpreted as the ``step-size'' of player I, and is required to satisfy
\begin{equation}
\textstyle\sum_{t=0}^{+\infty} \,\alpha_t = +\infty\quad \mbox{and}\quad \textstyle\sum_{t=0}^{+\infty} \,\alpha_t^2 < +\infty, \label{eq:alpha_t_cond}
\end{equation}
with a typical choice being $\alpha_t = 1/(t+1)$. For player II, the update of her history of actions from $y^t$ to $y^{t+1}$ is symmetric to that of player I. Specifically, based on player I's history of actions $x^t$, she computes her best-logit-response distribution $s^t$ as
\begin{equation}
s^t:= \mathsf{P}_\mathsf{y}(x^t):={\argmax}_{y\in\Delta_m} \; \lranglet{A x^t}{y} - \eta h_\mathsf{y}(y) . \label{eq:def_st}
\end{equation}
Then she samples her next action $j_{t+1}$ from $s^t$, and updates her history of actions from $y^t$ to $y^{t+1}$ as follows:
\begin{equation}
y^{t+1} := (1-\alpha_t)y^t + \alpha_t e_{j_{t+1}}.
\end{equation}
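Before discussing convergence, it may help to see how little code Algorithm~\ref{algo:LFP} requires: for the entropic regularizer, the logit responses~\eqref{eq:def_vt} and~\eqref{eq:def_st} have the standard closed forms $\mathsf{P}_\mathsf{x}(y)=\mathrm{softmax}(-A^\top y/\eta)$ and $\mathsf{P}_\mathsf{y}(x)=\mathrm{softmax}(Ax/\eta)$. The following minimal sketch (ours, not part of the original presentation) uses the typical step-size $\alpha_t=1/(t+1)$.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())          # shift for numerical stability
    return e / e.sum()

def lfp(A, eta, T, rng=None):
    """Run T iterations of LFP; returns the action histories (x^T, y^T)."""
    rng = np.random.default_rng() if rng is None else rng
    m, n = A.shape
    x = np.eye(n)[rng.integers(n)]   # x^0 = e_{i_0}
    y = np.eye(m)[rng.integers(m)]   # y^0 = e_{j_0}
    for t in range(T):
        v = softmax(-(A.T @ y) / eta)    # P_x(y^t): best logit response
        s = softmax((A @ x) / eta)       # P_y(x^t)
        i = rng.choice(n, p=v)           # sample i_{t+1} ~ v^t
        j = rng.choice(m, p=s)           # sample j_{t+1} ~ s^t
        a = 1.0 / (t + 1)                # step-size alpha_t = 1/(t+1)
        x = (1 - a) * x + a * np.eye(n)[i]
        y = (1 - a) * y + a * np.eye(m)[j]
    return x, y
```

The updates are convex combinations, so the histories remain in the simplices $\Delta_n$ and $\Delta_m$ at every iteration.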
In the literature, the asymptotic convergence of LFP in Algorithm~\ref{algo:LFP} has been well studied. For example, Hofbauer and Sandholm~\cite{Hof_02} prove the following theorem:
\begin{theorem}[Hofbauer and Sandholm~{\cite[Theorem~6.1(ii)]{Hof_02}}]\label{thm:asymp_as}
Consider the fixed-point equation
\begin{equation}
\mathsf{P}_\mathsf{x}(y) = x, \quad \mathsf{P}_\mathsf{y}(x) = y,\label{eq:fixed_pt}
\end{equation}
and its unique solution $(x^*,y^*)\in \mathsf{ri}\,\Delta_n\times\mathsf{ri}\,\Delta_m$ (where $\mathsf{ri}\,\Delta_p$ denotes the relative interior of $\Delta_p$). Then in Algorithm~\ref{algo:LFP}, for any initialization $i_0\in[n]$ and $j_0\in[m]$, and any step-sizes $\{\alpha_t\}_{t\ge 0}$ satisfying~\eqref{eq:alpha_t_cond}, we have
\begin{equation}
\Pr\left({\lim}_{t\to+\infty}\, x^t = x^* \;\; \mbox{and}\;\; {\lim}_{t\to+\infty}\, y^t = y^*\right) = 1. \label{eq:as_conv}
\end{equation}
\end{theorem}
The proof of this result (and similar ones, e.g.,~\cite[Theorem~4.7]{Perkins_14}) typically follows two steps. First, we consider the deterministic variant of LFP, which is denoted by DLFP and shown in Algorithm~\ref{algo:det_LFP},
and its associated continuous-time dynamic system
\begin{equation}
\frac{\mathrm{d} x(s)}{\mathrm{d} s} = \mathsf{P}_{\mathsf{x}}(y) - x, \quad \frac{\mathrm{d} y(s)}{\mathrm{d} s} = \mathsf{P}_{\mathsf{y}}(x) - y, \label{eq:det_dynamics}
\end{equation}
where $s\ge 0$ denotes the continuous time index. Then we show that~\eqref{eq:det_dynamics} admits a unique rest point $(x^*,y^*)\in \mathsf{ri}\,\Delta_n\times\mathsf{ri}\,\Delta_m$ (which is the same as the unique fixed point of~\eqref{eq:fixed_pt}), and starting from any $(x(0),y(0))\in\Delta_n\times\Delta_m$, the differential equations in~\eqref{eq:det_dynamics} always admit a unique solution $\{(x(s),y(s)):s\ge 0\}$ such that $(x(s),y(s))\to(x^*,y^*)$ as $s\to+\infty$. Second, we show that with probability one, the continuous-time linear interpolation of the iterates $\{(x^t,y^t)\}_{t\ge 0}$ in Algorithm~\ref{algo:LFP} is an {\em asymptotic pseudo-trajectory} to the (unique) solution $\{(x(s),y(s)):s\ge 0\}$ of~\eqref{eq:det_dynamics} (see Bena{\"\i}m~\cite[Section~4]{Benaim_99} for details), and hence $(x^t,y^t)\to (x^*,y^*)$ as $t\to+\infty$.
In contrast to the well-understood asymptotic convergence properties of LFP, a non-asymptotic convergence analysis is lacking in the literature. (See Section~\ref{sec:literature} for a detailed review of related work.) Indeed, while the asymptotic pseudo-trajectory approach serves as a powerful tool to analyze the asymptotic convergence of LFP (and stochastic approximation in general; see e.g.,~\cite{Benaim_99,Kushner_03}), it does not easily yield any (local or global) non-asymptotic convergence rate.
Therefore, the purpose of this work is to provide a complementary perspective on LFP from the theory of modern convex optimization. Leveraging this perspective, we first conduct
a global non-asymptotic analysis of DLFP (namely Algorithm~\ref{algo:det_LFP}). Then, viewing LFP (i.e., Algorithm~\ref{algo:LFP}) as the stochastic version of DLFP, we obtain a local convergence rate of LFP by utilizing the techniques used in analyzing Algorithm~\ref{algo:det_LFP} (together with the asymptotic almost sure convergence result in Theorem~\ref{thm:asymp_as}).
Our contributions are summarized as follows.
\begin{enumerate}
\item We provide a global non-asymptotic analysis of Algorithm~\ref{algo:det_LFP}.
Specifically, we derive a class of global convergence rates of this algorithm based on different choices of the step-size $\alpha_t$. In particular, we show that Algorithm~\ref{algo:det_LFP} converges linearly
with constant step-sizes $\{\alpha_t\}_{t\ge 0}$.
\item Leveraging the techniques for analyzing Algorithm~\ref{algo:det_LFP}, we obtain a local convergence rate of the LFP algorithm (i.e., Algorithm~\ref{algo:LFP}). To the best of our knowledge, this is the first (local) non-asymptotic convergence result of LFP. Our result indicates that although some step-sizes are ``asymptotically equivalent'' (i.e., they all satisfy the conditions in~\eqref{eq:alpha_t_cond}), they may yield different local convergence rates (cf.\ Remark~\ref{rmk:as_equiv}).
\item As a result of independent interest,
we extend DLFP (i.e., Algorithm~\ref{algo:det_LFP}) to solve a class of strongly convex composite problems. The resulting algorithm can be regarded as a simple variant of the generalized Frank-Wolfe method in Nesterov~\cite[Section~5]{Nest_18}, in the sense that
it has an additional dual interpolation step. We show that with this additional step, somewhat surprisingly, DLFP converges significantly faster than the original method in~\cite{Nest_18} (see Section~\ref{sec:FW} for details). We believe that this result may provide new insight in designing Frank-Wolfe-type methods.
\end{enumerate}
\noindent {\bf Notations}.\; For any $x\in\mathbb{R}^n$, we define $\normt{x}_1 := \sum_{i=1}^n \abst{x_i}$ and $\normt{x}_\infty := \max_{i=1}^n \abst{x_i}$. We define $e:= (1,1,\ldots,1)$ and $e_i$ as the $i$-th standard coordinate vector. We denote the relative interior of a nonempty set $\mathcal{X}\subseteq\mathbb{R}^n$ by $\mathsf{ri}\,\mathcal{X}$ and the indicator function of $\mathcal{X}$ by $\iota_\mathcal{X}$, i.e., $\iota_\mathcal{X}(x) = 0$ if $x\in\mathcal{X}$ and $+\infty$ otherwise.
\begin{algorithm}[t!]
\caption{Logistic fictitious play (LFP)}\label{algo:LFP}
\begin{algorithmic}
\State {\bf Input}: Initial actions $i_0\in[n]$ and $j_0\in[m]$, and step-sizes $\{\alpha_t\}_{t\ge 0}\subseteq [0,1]$
\State {\bf Initialize}: $x^0 := e_{i_0}$ and $y^0 := e_{j_0}$
\State {\bf At iteration $t\in\{0,1,\ldots\}$}:
\begin{align}
\label{eq:upd_x}
\begin{split}
x^{t+1} &:= (1-\alpha_t)x^t + \alpha_t e_{i_{t+1}},\quad\mbox{where}\quad i_{t+1}\sim v^t \;\; \mbox{and}\;\; v^t:= \mathsf{P}_\mathsf{x}(y^t):={\argmin}_{x\in\Delta_n} \; \lranglet{A^\top y^t}{x} + \eta h_\mathsf{x}(x),
\end{split}\\
\label{eq:upd_y}
\begin{split}
y^{t+1} &:= (1-\alpha_t)y^t + \alpha_t e_{j_{t+1}},\quad\mbox{where}\quad j_{t+1}\sim s^t\;\; \mbox{and}\;\; s^t:= \mathsf{P}_\mathsf{y}(x^t):={\argmax}_{y\in\Delta_m} \; \lranglet{A x^t}{y} - \eta h_\mathsf{y}(y) .
\end{split}
\end{align}
\end{algorithmic}
\end{algorithm}
\begin{algorithm}[t!]
\caption{Deterministic version of LFP (DLFP)}\label{algo:det_LFP}
\begin{algorithmic}
\State {\bf Input}: Starting point $x^0 \in\Delta_n$ and $y^0 \in\Delta_m$, and step-sizes $\{\alpha_t\}_{t\ge 0}\subseteq [0,1]$
\State {\bf At iteration $t\in\{0,1,\ldots\}$}:
\begin{align}
x^{t+1}& := (1-\alpha_t)x^t + \alpha_t v^t = (1-\alpha_t)x^t + \alpha_t \mathsf{P}_\mathsf{x}(y^t),\label{eq:upd_x_exp}\\
y^{t+1}& := (1-\alpha_t)y^t + \alpha_t s^t = (1-\alpha_t)y^t + \alpha_t \mathsf{P}_\mathsf{y}(x^t). \label{eq:upd_y_exp}
\end{align}
\end{algorithmic}
\end{algorithm}
\section{Preliminaries}
First, note that we can view DLFP (i.e., Algorithm~\ref{algo:det_LFP}) as an algorithm solving a class of saddle-point problems, which reads
\begin{equation}
\quad {\min}_{x\in\Delta_n}\; {\max}_{y\in\Delta_m} \; [S(x,y):=\eta h_\mathsf{x}(x) + \lranglet{Ax}{y} - \eta h_\mathsf{y}(y)]. \tag{SPP}\label{eq:SPP}
\end{equation}
Note that ${\rm (SPP)}$
has a unique saddle point $(\bar{x},\bar{y})\in\mathsf{ri}\,\Delta_n\times\mathsf{ri}\,\Delta_m$ that satisfies
\begin{equation}
S(\bar{x},y)\le S(\bar{x},\bar{y}) \le S(x,\bar{y}), \quad \forall\,(x,y)\in\Delta_n\times\Delta_m.
\end{equation}
Recall that $(x^*,y^*)\in \mathsf{ri}\,\Delta_n\times\mathsf{ri}\,\Delta_m$ is the unique solution of the fixed-point equation~\eqref{eq:fixed_pt}. From the definitions of $\mathsf{P}_\mathsf{x}$ and $\mathsf{P}_\mathsf{y}$, we easily see that $(\bar{x},\bar{y}) = (x^*,y^*)$. Thus existing results (e.g.,~\cite[Theorem~4.4]{Perkins_14}) already show that in DLFP, $(x^t,y^t)\to (\bar{x},\bar{y})$ as $t\to+\infty$. To measure the rate of such a convergence, however, we will use the duality gap $G(\cdot,\cdot)$ of ${\rm (SPP)}$, which is defined as
\begin{align*}
G(x,y) &= {\max}_{y'\in\Delta_m} \; S(x,y') - {\min}_{x'\in\Delta_n}\; S(x',y)\\
&= \eta h_\mathsf{x}(x) + g^\eta_\mathsf{y}(Ax) + g^\eta_\mathsf{x}(-A^\top y) +\eta h_\mathsf{y}(y),\nt\label{eq:def_G}
\end{align*}
where
\begin{align}
g^\eta_\mathsf{x}(w):= {\max}_{x\in\Delta_n}\; \lranglet{w}{x} -\eta h_\mathsf{x}(x)\quad\mbox{and}\quad g^\eta_\mathsf{y}(u): = {\max}_{y\in\Delta_m}\; \lranglet{u}{y} - \eta h_\mathsf{y}(y) \label{eq:def_gx_gy}
\end{align}
denote the Fenchel conjugate functions of $\eta h_\mathsf{x}+\iota_{\Delta_n}$ and $\eta h_\mathsf{y}+\iota_{\Delta_m}$, respectively. Note that the duality gap in~\eqref{eq:def_G} is also commonly used as the Lyapunov function for analyzing the asymptotic convergence of LFP in previous works (e.g.,~\cite{Hof_05,Perkins_14}).
Also, by the definitions of $\mathsf{P}_\mathsf{x}$ and $\mathsf{P}_\mathsf{y}$,
we have
\begin{align}
g^\eta_\mathsf{x}(-A^\top y)= -\lranglet{A\mathsf{P}_\mathsf{x}(y)}{y} -\eta h_\mathsf{x}(\mathsf{P}_\mathsf{x}(y))\quad\mbox{and}\quad g^\eta_\mathsf{y}(Ax)=\lranglet{Ax}{\mathsf{P}_\mathsf{y}(x)} - \eta h_\mathsf{y}(\mathsf{P}_\mathsf{y}(x)),
\end{align}
and consequently,
\begin{align}
G(x,y)&= \eta h_\mathsf{x}(x)-\eta h_\mathsf{x}(\mathsf{P}_\mathsf{x}(y)) + \lranglet{Ax}{\mathsf{P}_\mathsf{y}(x)} -\lranglet{A\mathsf{P}_\mathsf{x}(y)}{y}+\eta h_\mathsf{y}(y) - \eta h_\mathsf{y}(\mathsf{P}_\mathsf{y}(x)) \label{eq:def_G2} \\
&= S(x,\mathsf{P}_\mathsf{y}(x)) - S(\mathsf{P}_\mathsf{x}(y),y).\label{eq:def_G3}
\end{align}
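The duality gap is also easy to evaluate numerically, since the conjugates in~\eqref{eq:def_gx_gy} have the standard log-sum-exp closed form $g^\eta(w)=\eta\ln\sum_i e^{w_i/\eta}$. A minimal sketch (ours) of~\eqref{eq:def_G}:

```python
import numpy as np

def neg_entropy(p):
    q = p[p > 0]                      # uses the convention 0 * log 0 = 0
    return float(np.sum(q * np.log(q)))

def conj(w, eta):
    """g^eta(w) = eta * log sum_i exp(w_i / eta), computed stably."""
    m0 = float(np.max(w))
    return m0 + eta * np.log(np.sum(np.exp((w - m0) / eta)))

def duality_gap(A, x, y, eta):
    """G(x, y) = eta*h_x(x) + g^eta_y(Ax) + g^eta_x(-A^T y) + eta*h_y(y)."""
    return (eta * neg_entropy(x) + conj(A @ x, eta)
            + conj(-A.T @ y, eta) + eta * neg_entropy(y))
```

For matching pennies ($A$ the $2\times 2$ sign matrix) the uniform pair is the unique saddle point, where the gap evaluates to zero; elsewhere it is strictly positive.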
In the following, let us introduce some well-known properties of the functions $h_\mathsf{x}$, $h_\mathsf{y}$, $g^\eta_\mathsf{x}$ and $g^\eta_\mathsf{y}$ that will be useful in our analyses in Sections~\ref{sec:det_glocal} and~\ref{sec:stoc_local}.
\begin{lemma} \label{lem:sc_smooth}
The functions $\eta h_\mathsf{x}$ and $\eta h_\mathsf{y}$ are $\eta$-strongly convex with respect to $\normt{\cdot}_1$ on $\Delta_n$ and $\Delta_m$, respectively, namely
\begin{align}
\eta h_\mathsf{x}(x')\ge \eta h_\mathsf{x}(x) + \eta \lranglet{\nabla h_\mathsf{x}(x)}{x'-x} + (\eta/2) \normt{x'-x}_1^2,\quad \forall\,x\in\mathsf{ri}\,\Delta_n,\;\forall\,x'\in\Delta_n,\label{eq:sc_hx}\\
\eta h_\mathsf{y}(y')\ge \eta h_\mathsf{y}(y) + \eta \lranglet{\nabla h_\mathsf{y}(y)}{y'-y} + (\eta/2) \normt{y'-y}_1^2, \quad \forall\,y\in\mathsf{ri}\,\Delta_m,\;\forall\,y'\in\Delta_m. \label{eq:sc_hy}
\end{align}
Consequently, the functions $g^\eta_\mathsf{x}$ and $g^\eta_\mathsf{y}$ are differentiable on $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively, with $\eta^{-1}$-Lipschitz gradients
with respect to $\normt{\cdot}_\infty$. Namely,
\begin{align}
\normt{\nabla g^\eta_\mathsf{x}(w) - \nabla g^\eta_\mathsf{x}(w')}_1&\le \eta^{-1}\normt{w-w'}_\infty, \quad \forall\,w,w'\in\mathbb{R}^n, \label{eq:smooth_gx}\\
\normt{\nabla g^\eta_\mathsf{y}(u) - \nabla g^\eta_\mathsf{y}(u')}_1&\le \eta^{-1}\normt{u-u'}_\infty, \quad \forall\,u,u'\in\mathbb{R}^m. \label{eq:smooth_gy}
\end{align}
In other words, the functions $g^\eta_\mathsf{x}$ and $g^\eta_\mathsf{y}$ are {\em $\eta^{-1}$-smooth} with respect to $\normt{\cdot}_\infty$ on $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively.
\end{lemma}
\begin{proof}
The strong convexity of $h_\mathsf{x}$ and $h_\mathsf{y}$ is known as Pinsker's inequality~\cite{Pinsker_64}, and the smoothness of $g^\eta_\mathsf{x}$ and $g^\eta_\mathsf{y}$ follows from Nesterov~\cite[Theorem~1]{Nest_05}.\hfill$\square$
\end{proof}
\noindent Note that the $\eta^{-1}$-smoothness of $g^\eta_\mathsf{x}$ and $g^\eta_\mathsf{y}$
is equivalent to the following upper bounds on $g^\eta_\mathsf{x}$ and $g^\eta_\mathsf{y}$ (see e.g.,~\cite[Lemma~1.30]{Peyp_15}).
\begin{lemma}\label{lem:descent}
The functions $g^\eta_\mathsf{x}$ and $g^\eta_\mathsf{y}$ are $\eta^{-1}$-smooth with respect to $\normt{\cdot}_\infty$ on $\mathbb{R}^n$ and $\mathbb{R}^m$, respectively, if and only if
\begin{alignat}{2}
g^\eta_\mathsf{x}(w')&\le g^\eta_\mathsf{x}(w) + \lranglet{\nabla g^\eta_\mathsf{x}(w)}{w'-w} + (\eta^{-1}/2) \normt{w'-w}_\infty^2,&&\quad\forall\,w,w'\in\mathbb{R}^n,\label{eq:descent_gx}\\
g^\eta_\mathsf{y}(u')&\le g^\eta_\mathsf{y}(u) + \lranglet{\nabla g^\eta_\mathsf{y}(u)}{u'-u} + (\eta^{-1}/2) \normt{u'-u}_\infty^2,&&\quad\forall\,u,u'\in\mathbb{R}^m.\label{eq:descent_gy}
\end{alignat}
\end{lemma}
\noindent In addition, by Danskin's theorem~\cite{Danskin_67} and the strict convexity of $h_\mathsf{x}$ and $h_\mathsf{y}$, the following holds.
\begin{lemma}\label{lem:grad}
For any $w\in\mathbb{R}^n$ and any $u\in\mathbb{R}^m$, we can compute $\nabla g^\eta_\mathsf{x}(w)$ and $\nabla g^\eta_\mathsf{y}(u)$ as follows:
\begin{equation}
\nabla g^\eta_\mathsf{x}(w):= {\argmax}_{x\in\Delta_n}\; \lranglet{w}{x} - \eta h_\mathsf{x}(x)\quad\mbox{and}\quad \nabla g^\eta_\mathsf{y}(u): = {\argmax}_{y\in\Delta_m}\; \lranglet{u}{y} - \eta h_\mathsf{y}(y).
\end{equation}
\end{lemma}
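For the entropic regularizer, the maximizers in Lemma~\ref{lem:grad} are softmax maps, i.e., $\nabla g^\eta_\mathsf{x}(w)=\mathrm{softmax}(w/\eta)$ (a standard closed form). A quick finite-difference sanity check of this fact, included here only as a reading aid:

```python
import numpy as np

def conj(w, eta):                     # g^eta_x(w) = eta * log-sum-exp(w/eta)
    m0 = float(np.max(w))
    return m0 + eta * np.log(np.sum(np.exp((w - m0) / eta)))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

# Central finite differences should match softmax(w / eta) coordinate-wise.
rng = np.random.default_rng(1)
w, eta, eps = rng.normal(size=5), 0.3, 1e-6
num_grad = np.array([
    (conj(w + eps * np.eye(5)[i], eta) - conj(w - eps * np.eye(5)[i], eta))
    / (2 * eps) for i in range(5)])
```

Since the gradient is a point of $\Delta_n$, its entries are nonnegative and sum to one, consistent with Lemma~\ref{lem:grad}.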
\section{Global convergence rate of DLFP} \label{sec:det_glocal}
Let the sequence $\{(x^t,y^t)\}_{t\ge 0}$ be produced in DLFP (i.e., Algorithm~\ref{algo:det_LFP}), with any starting point $(x^0,y^0)\in\Delta_n\times\Delta_m$. For convenience, for all $t\ge 0$, define $V_t:= G(x^t,y^t)$, namely the duality gap of $(\rm SPP)$ at $(x^t,y^t)$.
A crucial step to obtain the global convergence rate of DLFP is to show the following two important recursions about $\{V_t\}_{t\ge 0}$.
\begin{prop} \label{prop:recursion}
In Algorithm~\ref{algo:det_LFP}, for any starting point $(x^0,y^0)\in\Delta_n\times\Delta_m$, we have
\begin{alignat}{2}
V_{t+1} &\le (1-\alpha_t+\kappa^2 \alpha_t^2)V_t, &&\quad \forall\,t\ge 0,\label{eq:detV_1}\\
V_{t+1} &\le (1-\alpha_t) V_t + 4\alpha_t^2\eta\kappa^2,&&\quad \forall\,t\ge 0,\label{eq:detV_2}
\end{alignat}
where $\kappa:= {\normt{A}_{1\to\infty}}/{\eta}$ and $\normt{A}_{1\to\infty}:= \max_{\normt{x}_1\le 1} \normt{Ax}_{\infty} = \max_{i\in[m],j\in[n]}\abst{A_{i,j}}$.
\end{prop}
\begin{proof}
First, note that the convexity of $h_\mathsf{x}$ and $h_\mathsf{y}$ implies that
\begin{align}
h_\mathsf{x}(x^{t+1}) - h_\mathsf{x}(x^t)\le (1-\alpha_t) h_\mathsf{x}(x^t) + \alpha_t h_\mathsf{x}(v^t) - h_\mathsf{x}(x^t)= \alpha_t(h_\mathsf{x}(v^t)- h_\mathsf{x}(x^t)),\label{eq:Jensen_hx}\\
h_\mathsf{y}(y^{t+1}) - h_\mathsf{y}(y^t)\le (1-\alpha_t) h_\mathsf{y}(y^t) + \alpha_t h_\mathsf{y}(s^t) - h_\mathsf{y}(y^t) = \alpha_t(h_\mathsf{y}(s^t)- h_\mathsf{y}(y^t)). \label{eq:Jensen_hy}
\end{align}
Also, using~\eqref{eq:descent_gx} and~\eqref{eq:descent_gy} in Lemma~\ref{lem:descent}, together with Lemma~\ref{lem:grad}, we have
\begin{align}
g^\eta_\mathsf{y}(Ax^{t+1})&\le g^\eta_\mathsf{y}(Ax^{t}) + \lranglet{\nabla g^\eta_\mathsf{y}(Ax^{t})}{A(x^{t+1}-x^t)} + (\eta^{-1}/2) \normt{A (x^{t+1}-x^t)}_\infty^2,\nn\\
&\le g^\eta_\mathsf{y}(Ax^{t}) + \lranglet{s^t}{A(x^{t+1}-x^t)} + (\eta^{-1}/2) \normt{A}_{1\to\infty}^2\normt{x^{t+1}-x^t}_1^2,\nn\\
&= g^\eta_\mathsf{y}(Ax^{t}) + \alpha_t\lranglet{s^t}{A(v^t-x^t)} + \alpha_t^2(\eta^{-1}/2) \normt{A}_{1\to\infty}^2\normt{v^t-x^t}_1^2, \label{eq:smooth_gx_A}\\
g^\eta_\mathsf{x}(-A^\top y^{t+1})&\le g^\eta_\mathsf{x}(-A^\top y^{t}) - \lranglet{\nabla g^\eta_\mathsf{x}(-A^\top y^{t})}{A^\top (y^{t+1}-y^{t})} + (\eta^{-1}/2) \normt{A^\top(y^{t+1}-y^t)}_\infty^2,\nn\\
&\le g^\eta_\mathsf{x}(-A^\top y^{t}) - \lranglet{v^t}{A^\top (y^{t+1}-y^{t})} + (\eta^{-1}/2) \normt{A}_{1\to\infty}^2\normt{y^{t+1}-y^t}_1^2,\nn\\
&=g^\eta_\mathsf{x}(-A^\top y^{t}) - \alpha_t\lranglet{v^t}{A^\top (s^t-y^{t})} + \alpha_t^2(\eta^{-1}/2) \normt{A}_{1\to\infty}^2\normt{s^t-y^t}_1^2.\label{eq:smooth_gy_A}
\end{align}
Now, combining~\eqref{eq:Jensen_hx} to~\eqref{eq:smooth_gy_A}, and by the definition of $V_t=G(x^t,y^t)$ in~\eqref{eq:def_G}, we have
\begin{align*}
V_{t+1} - V_t =& (\eta h_\mathsf{x}(x^{t+1}) - \eta h_\mathsf{x}(x^t)) + (\eta h_\mathsf{y}(y^{t+1}) - \eta h_\mathsf{y}(y^t))\\
& + (g^\eta_\mathsf{y}(Ax^{t+1}) - g^\eta_\mathsf{y}(Ax^{t})) + (g^\eta_\mathsf{x}(-A^\top y^{t+1})- g^\eta_\mathsf{x}(-A^\top y^{t}))\\
\le & \alpha_t\big\{\eta h_\mathsf{x}(v^{t}) - \eta h_\mathsf{x}(x^t) + \eta h_\mathsf{y}(s^{t}) - \eta h_\mathsf{y}(y^t)+ \lranglet{s^t}{A (v^t-x^t)}- \lranglet{v^t}{A^\top (s^t-y^{t})}\big\}\\
& + \alpha_t^2 \big\{(\eta^{-1}/2) \normt{A}_{1\to\infty}^2\normt{v^t-x^t}_1^2 + (\eta^{-1}/2) \normt{A}_{1\to\infty}^2\normt{s^t-y^t}_1^2\big\}\\
= &-\alpha_t V_t + \alpha_t^2 \kappa^2 \big\{(\eta/2) \normt{v^t-x^t}_1^2 + (\eta/2) \normt{s^t-y^t}_1^2\big\}, \nt\label{eq:V_t+1_V_t}
\end{align*}
where the last step uses the definition of $G(\cdot,\cdot)$ in~\eqref{eq:def_G2} and the definition of $\kappa$.
Now, we have two ways to upper bound $C_t:= (\eta/2) \normt{v^t-x^t}_1^2 + (\eta/2) \normt{s^t-y^t}_1^2$. The first way is rather simple. Indeed, we note that
\begin{align}
\normt{v^t-x^t}_1\le D_1(\Delta_n):= {\max}_{x,x'\in\Delta_n}\;\normt{x-x'}_1 = 2\quad \mbox{and}\quad \normt{s^t-y^t}_1\le D_1(\Delta_m) = 2, \label{eq:diam_ub}
\end{align}
so that $C_t\le 4\eta$, and this leads to the recursion in~\eqref{eq:detV_2}. For the second way, we note that due to the $\eta$-strong convexity of $\eta h_\mathsf{x}$ and $\eta h_\mathsf{y}$ (with respect to $\normt{\cdot}_1$), both $S(\cdot,y^t)$ and $-S(x^t,\cdot)$ are $\eta$-strongly convex on $\Delta_n$ and $\Delta_m$, respectively. Since $v^t= \argmin_{x\in\Delta_n}S(x,y^t)$ and $s^t= \argmax_{y\in\Delta_m}S(x^t,y)$, we have
\begin{align*}
S(x^t,y^t)\ge S(v^t,y^t) + (\eta/2)\normt{x^t-v^t}_1^2,\\
-S(x^t,y^t)\ge -S(x^t,s^t) + (\eta/2)\normt{s^t-y^t}_1^2,
\end{align*}
and consequently,
\begin{equation}
C_t = (\eta/2) \normt{v^t-x^t}_1^2 + (\eta/2) \normt{s^t-y^t}_1^2 \le S(x^t,s^t) - S(v^t,y^t) \stackrel{\rm(a)}{=} G(x^t,y^t) = V_t,
\end{equation}
where in (a) we use the definition of $G(\cdot,\cdot)$ in~\eqref{eq:def_G3}. This leads to~\eqref{eq:detV_1}. \hfill$\square$
\end{proof}
Equipped with Proposition~\ref{prop:recursion}, we now present the global convergence rates of Algorithm~\ref{algo:det_LFP} for different step-sizes $\{\alpha_t\}_{t\ge 0}$.
\begin{theorem}\label{thm:global_det_rate}
In Algorithm~\ref{algo:det_LFP}, for any starting point $(x^0,y^0)\in\Delta_n\times\Delta_m$:
\begin{enumerate}[label=(\alph*)]
\item\label{item:det_rate1} If $\alpha_t = \min\{1/(2\kappa^2),1\}$ for all $t\ge 0$, then
\begin{equation}
V_{t}\le
\rho(\kappa^2)^t V_0, \quad \forall\,t\ge 1, \quad \mbox{where}\;\; \rho(z):= \begin{cases}
1-{1}/(4z), & \mbox{if} \;\; z\ge 1/2\\
z, & \mbox{if} \;\; 0\le z < 1/2
\end{cases}.
\end{equation}
\item \label{item:det_rate2} If $\alpha_t = 1/(t+1)$ for all $t\ge 0$, then
\begin{equation}
V_{t}\le \frac{4\eta\kappa^2 (1+\ln t)}{t}, \quad \forall\,t\ge 1.
\end{equation}
\item \label{item:det_rate3} If $\alpha_t = 2/(t+2)$ for all $t\ge 0$, then
\begin{equation}
V_{t}\le \frac{16\eta\kappa^2 }{t+1}, \quad \forall\,t\ge 1.
\end{equation}
\end{enumerate}
\end{theorem}
The proof of Theorem~\ref{thm:global_det_rate} requires the following lemma, whose proof can be found in e.g.,~\cite[Section~3]{Freund_16}. However, for completeness, we provide its proof in Appendix~\ref{app:proof}.
\begin{lemma} \label{lem:recur_V}
Let $\{V_t\}_{t\ge 0}$ be a nonnegative sequence that satisfies the following recursion:
\begin{equation}
V_{t+1}\le (1-\alpha_t)V_t + \alpha_t^2 C,\quad\forall\,t\ge 0, \label{eq:recur_V}
\end{equation}
where $C\ge 0$ and $\alpha_t\in[0,1]$ for $t\ge 0$. Then
\begin{alignat}{3}
V_t &\le \frac{C(1+\ln t)}{t}, \quad &&\forall\,t\ge 1,\qquad && \mbox{if} \;\;\alpha_t = \frac{1}{t+1},\quad \forall\,t\ge 0,\\
V_t &\le \frac{4C}{t+1}, \quad &&\forall\,t\ge 1,\qquad && \mbox{if} \;\;\alpha_t = \frac{2}{t+2},\quad \forall\,t\ge 0.
\end{alignat}
\end{lemma}
\noindent {\em Proof of Theorem~\ref{thm:global_det_rate}.} Part~\ref{item:det_rate1} follows from the recursion~\eqref{eq:detV_1}. Specifically, we set $$\alpha_t:= {\argmin}_{\alpha\in[0,1]} \, [G(\alpha):=1-\alpha+\kappa^2 \alpha^2]= \min\{1/(2\kappa^2),1\}, $$ and let $\rho(\kappa^2):=G(\alpha_t)$.
If $\kappa^2\ge 1/2$, then $\alpha_t = 1/(2\kappa^2)$ and $G(\alpha_t) = 1-1/(4\kappa^2)$; otherwise, $\alpha_t = 1$ and $G(\alpha_t) = \kappa^2$. This proves part~\ref{item:det_rate1}. Parts~\ref{item:det_rate2} and~\ref{item:det_rate3} follow from the recursion~\eqref{eq:detV_2} and Lemma~\ref{lem:recur_V}. \hfill$\square$
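Lemma~\ref{lem:recur_V} is easy to check numerically by iterating the recursion~\eqref{eq:recur_V} with equality (the worst case) for $\alpha_t=2/(t+2)$ and comparing against the bound $4C/(t+1)$; the sketch below is a reading aid, not part of the proof.

```python
C, V = 1.0, 5.0                         # any C >= 0; since alpha_0 = 1, V_0 is
ok = True                               # forgotten after the first iteration
for t in range(2000):
    a = 2.0 / (t + 2)                   # alpha_t = 2/(t+2)
    V = (1 - a) * V + a * a * C         # recursion with equality (worst case)
    ok = ok and (V <= 4 * C / (t + 2))  # Lemma bound at index t+1: 4C/((t+1)+1)
```

The bound holds at every iteration, and the iterates decay at the stated $O(1/t)$ rate.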
\begin{remark} \label{rmk:cst_det_step}
Note that in part~\ref{item:det_rate1}, the linear rate function $\rho:\mathbb{R}_+\to [0,1)$ is strictly increasing on $\mathbb{R}_+$, and hence the smaller the $\kappa$, the better the linear rate. However, in practice, the parameter $\eta>0$ is usually set to be very small (see e.g.,~\cite{Perkins_14}). Thus we typically have $\kappa^2 \gg 1/2$, and hence $\alpha_t = 1/(2\kappa^2)< 1$ and $\rho(\kappa^2) = 1-1/(4\kappa^2)$.
\end{remark}
\begin{remark} \label{rmk:det_to_stoc_step}
From Theorem~\ref{thm:global_det_rate}, we see that the constant step-size $\alpha_t = \min\{1/(2\kappa^2),1\}$ yields a linear convergence rate of Algorithm~\ref{algo:det_LFP}, which is better than the sub-linear convergence rates resulting from the two decreasing step-sizes $\alpha_t = 1/(t+1)$ and $\alpha_t = 2/(t+2)$. However, in the stochastic case (i.e., LFP in Algorithm~\ref{algo:LFP}), the constant step-size does not work. To see this, note that it does not satisfy the conditions in~\eqref{eq:alpha_t_cond}, and hence cannot guarantee the asymptotic almost sure convergence~\eqref{eq:as_conv} of Algorithm~\ref{algo:LFP}. In fact, this can also be seen from our local non-asymptotic analysis of LFP in Theorem~\ref{thm:stoc} below.
\end{remark}
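To illustrate part~\ref{item:det_rate1} of Theorem~\ref{thm:global_det_rate}, the sketch below (ours, using the standard softmax and log-sum-exp closed forms for $\mathsf{P}_\mathsf{x}$, $\mathsf{P}_\mathsf{y}$ and the conjugates) runs DLFP with the constant step-size $\alpha=\min\{1/(2\kappa^2),1\}$ and records the duality gap $V_t$; by~\eqref{eq:detV_1}, each iteration contracts the gap by at least the factor $1-\alpha+\kappa^2\alpha^2=\rho(\kappa^2)$.

```python
import numpy as np

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def neg_entropy(p):
    q = p[p > 0]
    return float(np.sum(q * np.log(q)))

def conj(w, eta):                        # eta * log-sum-exp(w / eta), stably
    m0 = float(np.max(w))
    return m0 + eta * np.log(np.sum(np.exp((w - m0) / eta)))

def dlfp_gaps(A, eta, T):
    """DLFP with constant alpha = min(1/(2 kappa^2), 1); returns [V_0..V_T]."""
    m, n = A.shape
    kappa2 = (np.abs(A).max() / eta) ** 2
    a = min(1.0 / (2.0 * kappa2), 1.0)
    gap = lambda x, y: (eta * neg_entropy(x) + conj(A @ x, eta)
                        + conj(-A.T @ y, eta) + eta * neg_entropy(y))
    x, y = np.eye(n)[0], np.eye(m)[0]    # start at vertices
    gaps = [gap(x, y)]
    for _ in range(T):
        v = softmax(-(A.T @ y) / eta)    # P_x(y^t)
        s = softmax((A @ x) / eta)       # P_y(x^t)
        x, y = (1 - a) * x + a * v, (1 - a) * y + a * s
        gaps.append(gap(x, y))
    return gaps
```

For matching pennies with $\eta=1$ we have $\kappa^2=1$, $\alpha=1/2$ and $\rho(\kappa^2)=3/4$; the observed contraction is in fact noticeably faster than this worst-case bound.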
\section{Local convergence rate of LFP} \label{sec:stoc_local}
To begin with, let us note that from Theorem~\ref{thm:asymp_as}, since $(x^*,y^*)\in \mathsf{ri}\,\Delta_n\times\mathsf{ri}\,\Delta_m$, there exist $r_\mathsf{x},r_\mathsf{y}>0$ such that $\mathcal{B}_1(x^*,r_\mathsf{x})\times \mathcal{B}_1(y^*,r_\mathsf{y})\subseteq \mathsf{ri}\,\Delta_n\times\mathsf{ri}\,\Delta_m$, where
\begin{equation}
\mathcal{B}_1(x^*,r_\mathsf{x}):= \{x\in\Delta_n:\normt{x-x^*}_1\le r_\mathsf{x}\}
\end{equation}
and $\mathcal{B}_1(y^*,r_\mathsf{y})$ is defined similarly. With simple algebra, one can easily show that there exist $0\le L_\mathsf{x},L_\mathsf{y}<+\infty$ such that $h_\mathsf{x}$ and $h_\mathsf{y}$ are $L_\mathsf{x}$- and $L_\mathsf{y}$-smooth (with respect to $\normt{\cdot}_1$) on $\mathcal{B}_1(x^*,r_\mathsf{x})$ and $\mathcal{B}_1(y^*,r_\mathsf{y})$, respectively. Therefore, by~\cite[Lemma~1.30]{Peyp_15}, we have
\begin{alignat}{2}
h_\mathsf{x}(x')&\le h_\mathsf{x}(x) + \lranglet{\nabla h_\mathsf{x}(x)}{x'-x} + (L_\mathsf{x}/2)\normt{x'-x}_1^2,\quad &&\forall\,x,x'\in \mathcal{B}_1(x^*,r_\mathsf{x}), \label{eq:smooth_hx}\\
h_\mathsf{y}(y')&\le h_\mathsf{y}(y) + \lranglet{\nabla h_\mathsf{y}(y)}{y'-y} + (L_\mathsf{y}/2)\normt{y'-y}_1^2,\quad &&\forall\,y,y'\in \mathcal{B}_1(y^*,r_\mathsf{y}). \label{eq:smooth_hy}
\end{alignat}
Let $\{(x^t,y^t)\}_{t\ge 0}$ be produced in LFP (i.e., Algorithm~\ref{algo:LFP}), for any initial actions $i_0\in[n]$ and $j_0\in[m]$, and any step-sizes $\{\alpha_t\}_{t\ge 0}$ satisfying the conditions in~\eqref{eq:alpha_t_cond}. Our analysis starts with a simple corollary of Theorem~\ref{thm:asymp_as}, which states that with high probability, after a sufficiently long time $T$, $\{(x^t,y^t)\}_{t\ge T}$ will lie inside the neighborhood $\mathcal{B}_1(x^*,r_\mathsf{x})\times \mathcal{B}_1(y^*,r_\mathsf{y})$ of $(x^*,y^*)$.
\begin{corollary}\label{cor:whp}
For any $\delta\in(0,1)$, there exists $T(\delta)<+\infty$ such that $\Pr\big(\mathcal{A}_{T(\delta)}\big)\ge 1-\delta,$ where
\begin{equation}
\mathcal{A}_T:= \big\{\forall\,t\ge T,\;\; (x^t,y^t)\in \mathcal{B}_1(x^*,r_\mathsf{x})\times \mathcal{B}_1(y^*,r_\mathsf{y})\big\}, \quad\forall\,T\ge 0. \label{eq:event_A}
\end{equation}
\end{corollary}
\begin{proof}
Indeed, from standard results (e.g.,~\cite[Theorem~3.3]{Hunter_06}), we know that the almost sure convergence result in~\eqref{eq:as_conv}
is equivalent to $\lim_{T\to\infty}\Pr(\mathcal{A}_T) = 1$. Therefore, for any $\delta\in(0,1)$, there exists $T(\delta)<+\infty$ such that for all $T\ge T(\delta)$, $\Pr(\mathcal{A}_T)\ge 1-\delta$. This completes the proof. \hfill$\square$
\end{proof}
For convenience of our analysis, we rewrite the iterations~\eqref{eq:upd_x} and~\eqref{eq:upd_y} in Algorithm~\ref{algo:LFP} as
\begin{alignat}{2}
x^{t+1}&:= (1-\alpha_t)x^t + \alpha_t (v^t + \zeta_\mathsf{x}^t) = x^t + \alpha_t (v^t + \zeta_\mathsf{x}^t-x^t),&&\quad\mbox{where}\quad \zeta_\mathsf{x}^t:= e_{i_{t+1}} - v^t,\\
y^{t+1}&:= (1-\alpha_t)y^t + \alpha_t (s^t + \zeta_\mathsf{y}^t) = y^t + \alpha_t (s^t + \zeta_\mathsf{y}^t - y^t),&&\quad\mbox{where}\quad \zeta_\mathsf{y}^t:= e_{j_{t+1}} - s^t.
\end{alignat}
In addition, let us define a filtration $\{\mathcal{F}_t\}_{t\ge 0}$ such that for all $t\ge 0$, $\mathcal{F}_t:= \sigma\big(\{(x^i,y^i)\}_{i=0}^t\big)$, namely the $\sigma$-field generated by the set of random variables $\{(x^i,y^i)\}_{i=0}^t$. Since $i_{t+1}\sim v^t$ and $j_{t+1}\sim s^t$, we easily see that $\{\zeta_\mathsf{x}^t\}_{t\ge 0}$ and $\{\zeta_\mathsf{y}^t\}_{t\ge 0}$ are martingale difference sequences (MDS) with uniformly bounded (conditional) variances. Namely,
\begin{align}
\mathbb{E}_t[{\zeta_\mathsf{x}^t}]: =\mathbb{E}[{\zeta_\mathsf{x}^t}\,|\,\mathcal{F}_t] = 0 \quad \mbox{and}\quad \mathbb{E}_t[{\zeta_\mathsf{y}^t}]: =\mathbb{E}[{\zeta_\mathsf{y}^t}\,|\,\mathcal{F}_t] = 0, \quad \forall\,t\ge 0,\label{eq:unbiased}
\end{align}
and there exist $0\le \sigma_\mathsf{x},\sigma_\mathsf{y}<+\infty$ such that
\begin{align}
\mathbb{E}_t[\normt{\zeta_\mathsf{x}^t}_1^2] \le \sigma_\mathsf{x}^2 \quad \mbox{and}\quad \mathbb{E}_t[\normt{\zeta_\mathsf{y}^t}_1^2] \le \sigma_\mathsf{y}^2, \quad \forall\,t\ge 0.\label{eq:bounded_var}
\end{align}
Let us now present the local convergence rate of Algorithm~\ref{algo:LFP}.
\begin{theorem}\label{thm:stoc}
In Algorithm~\ref{algo:LFP}, choose any initial actions $i_0\in[n]$ and $j_0\in[m]$. For any $\delta\in(0,1)$, there exists $T(\delta)<+\infty$ such that $\Pr(\mathcal{A}_{T(\delta)})\ge 1-\delta$
and we have that
\begin{alignat}{3}
\mathbb{E}[V_{T(\delta)+t}|\mathcal{A}_{T(\delta)}] &\le \frac{\bar{C}(1+\ln t)}{t},\quad &&\forall\,t\ge 1, \qquad && \mbox{if} \;\;\alpha_t = \frac{1}{t+1},\quad\forall\,t\ge 0,\\
\mathbb{E}[V_{T(\delta)+t}|\mathcal{A}_{T(\delta)}] &\le \frac{4\bar{C}}{t+1},\quad &&\forall\,t\ge 1, \qquad && \mbox{if} \;\;\alpha_t = \frac{2}{t+2}, \quad\forall\,t\ge 0, \label{eq:stoc_2/(t+2)}
\end{alignat}
where
\begin{equation}
\bar C:=(\kappa^2 + \kappa_\mathsf{x}) \eta(4 + \sigma_\mathsf{x}^2) + (\kappa^2 + \kappa_\mathsf{y}) \eta(4 + \sigma_\mathsf{y}^2),\quad \kappa_\mathsf{x}:=L_\mathsf{x}/\eta \quad\mbox{and} \quad\kappa_\mathsf{y}:= L_\mathsf{y}/\eta.
\end{equation}
\end{theorem}
\begin{proof}
Using the smoothness conditions of $h_\mathsf{x}$ and $h_\mathsf{y}$ in~\eqref{eq:smooth_hx} and~\eqref{eq:smooth_hy}, respectively, we have
\begin{align*}
h_\mathsf{x}(x^{t+1}) - h_\mathsf{x}(x^t)&\le \lranglet{\nabla h_\mathsf{x}(x^t)}{x^{t+1}-x^t} + ({L_\mathsf{x}}/{2})\normt{x^{t+1} - x^t}_1^2\\
&= \alpha_t\lranglet{\nabla h_\mathsf{x}(x^t)}{v^t + \zeta_\mathsf{x}^t - x^t} + \alpha_t^2(L_\mathsf{x}/{2}) \normt{v^t + \zeta_\mathsf{x}^t - x^t}_1^2\\
&\le \alpha_t(h_\mathsf{x}(v^t) - h_\mathsf{x}(x^t)) + \alpha_t\lranglet{\nabla h_\mathsf{x}(x^t)}{\zeta_\mathsf{x}^t} + \alpha_t^2{L_\mathsf{x}} (\normt{v^t - x^t}_1^2 + \normt{\zeta_\mathsf{x}^t}_1^2),\nt\label{eq:ub_hx}\\
h_\mathsf{y}(y^{t+1}) - h_\mathsf{y}(y^t)&\le \lranglet{\nabla h_\mathsf{y}(y^t)}{y^{t+1} - y^t} + (L_\mathsf{y}/2) \normt{y^{t+1} - y^t}_1^2\\
&= \alpha_t\lranglet{\nabla h_\mathsf{y}(y^t)}{s^t + \zeta_\mathsf{y}^t - y^t} + \alpha_t^2({L_\mathsf{y}}/{2}) \normt{s^t + \zeta_\mathsf{y}^t - y^t}_1^2\\
&\le \alpha_t(h_\mathsf{y}(s^t) - h_\mathsf{y}(y^t)) + \alpha_t\lranglet{\nabla h_\mathsf{y}(y^t)}{\zeta_\mathsf{y}^t} + \alpha_t^2{L_\mathsf{y}} (\normt{s^t - y^t}_1^2 + \normt{\zeta_\mathsf{y}^t}_1^2),\nt \label{eq:ub_hy}
\end{align*}
where in~\eqref{eq:ub_hx} and~\eqref{eq:ub_hy} we use the convexity of $h_\mathsf{x}$ and $h_\mathsf{y}$, and the inequality $\normt{a-b}_1^2\le 2(\normt{a}_1^2+\normt{b}_1^2)$. In addition, similar to~\eqref{eq:smooth_gx_A} and~\eqref{eq:smooth_gy_A}, we have
\begin{align}
g^\eta_\mathsf{y}(Ax^{t+1})&\le g^\eta_\mathsf{y}(Ax^{t}) + \alpha_t\lranglet{s^t}{A(v^t+\zeta_\mathsf{x}^t-x^t)} + \alpha_t^2(\eta^{-1}/2) \normt{A}_{1\to\infty}^2\normt{v^t+\zeta_\mathsf{x}^t-x^t}_1^2, \nn\\
\label{eq:smooth_gx_A_stoc}
\begin{split}
&\le g^\eta_\mathsf{y}(Ax^{t}) + \alpha_t(\lranglet{s^t}{A(v^t-x^t)}+\lranglet{s^t}{A\zeta_\mathsf{x}^t})\\
&\hspace{3cm} + \alpha_t^2\eta^{-1}\normt{A}_{1\to\infty}^2(\normt{v^t-x^t}_1^2+\normt{\zeta_\mathsf{x}^t}_1^2),\end{split}\\
g^\eta_\mathsf{x}(-A^\top y^{t+1})&\le g^\eta_\mathsf{x}(-A^\top y^{t}) - \alpha_t\lranglet{v^t}{A^\top (s^t+ \zeta_\mathsf{y}^t-y^{t})} + \alpha_t^2(\eta^{-1}/2) \normt{A}_{1\to\infty}^2\normt{s^t+ \zeta_\mathsf{y}^t-y^t}_1^2,\nn\\
\label{eq:smooth_gy_A_stoc}
\begin{split}
&\le g^\eta_\mathsf{x}(-A^\top y^{t}) - \alpha_t(\lranglet{v^t}{A^\top (s^t-y^{t})} + \lranglet{v^t}{A^\top\zeta_\mathsf{y}^t})\\
&\hspace{3cm} + \alpha_t^2\eta^{-1} \normt{A}_{1\to\infty}^2(\normt{s^t-y^t}_1^2+\normt{\zeta_\mathsf{y}^t}_1^2).
\end{split}
\end{align}
Therefore, by combining~\eqref{eq:ub_hx} to~\eqref{eq:smooth_gy_A_stoc}, we have
\begin{align*}
V_{t+1} - V_t &\le \alpha_t\big\{h_\mathsf{x}(v^t)-h_\mathsf{x}(x^t)+\lranglet{\nabla h_\mathsf{x}(x^t)}{\zeta_\mathsf{x}^t} + h_\mathsf{y}(s^t) -h_\mathsf{y}(y^t)+\lranglet{\nabla h_\mathsf{y}(y^t)}{\zeta_\mathsf{y}^t}\big\}\\
&+ \alpha_t\big\{\lranglet{s^t}{A (v^t-x^t)} + \lranglet{s^t}{A \zeta_\mathsf{x}^t} - \lranglet{A v^t}{ (s^t-y^t)} - \lranglet{A v^t}{\zeta_\mathsf{y}^t}\big\}\\
&+ \alpha_t^2\left\{ (\kappa^2 + \kappa_\mathsf{x}) \eta(\normt{v^t - x^t}_1^2 + \normt{\zeta_\mathsf{x}^t}_1^2) + (\kappa^2 + \kappa_\mathsf{y}) \eta(\normt{s^t - y^t}_1^2 + \normt{\zeta_\mathsf{y}^t}_1^2)\right\}.
\end{align*}
Consequently, taking expectation (by conditioning on $\mathcal{F}_t$) and using both~\eqref{eq:unbiased} and~\eqref{eq:bounded_var}, we have
\begin{align*}
\mathbb{E}_t[V_{t+1}]&\le (1-\alpha_t)V_t + \alpha_t^2\left\{ (\kappa^2 + \kappa_\mathsf{x}) \eta(\normt{v^t - x^t}_1^2 + \sigma_\mathsf{x}^2) + (\kappa^2 + \kappa_\mathsf{y}) \eta(\normt{s^t - y^t}_1^2 + \sigma_\mathsf{y}^2)\right\}\\
&\le (1-\alpha_t)V_t + \alpha_t^2 \bar C,\nt \label{eq:V_t+1_V_t_stoc}
\end{align*}
where
in~\eqref{eq:V_t+1_V_t_stoc} we use~\eqref{eq:diam_ub}. Therefore, using Lemma~\ref{lem:recur_V}, we complete the proof. \hfill$\square$
\end{proof}
\begin{remark}\label{rmk:as_equiv}
From Theorem~\ref{thm:stoc}, we see that between the two decreasing step-sizes $\alpha_t = 1/(t+1)$ and $\alpha_t = 2/(t+2)$, the latter yields a faster convergence rate than the former, although the former is more commonly used in the literature on LFP (see e.g.,~\cite{Hof_02}).
Therefore, our analysis suggests that, in terms of the local convergence rate, $\alpha_t = 2/(t+2)$ is preferable to $\alpha_t = 1/(t+1)$, although both step-sizes are ``asymptotically equivalent'': both satisfy the conditions in~\eqref{eq:alpha_t_cond} and hence guarantee the asymptotic almost-sure convergence of LFP (cf.\ Theorem~\ref{thm:asymp_as}). In addition, one can easily check (from the proof of Lemma~\ref{lem:recur_V} in Appendix~\ref{app:proof}) that the $O(1/t)$ convergence rate resulting from $\alpha_t = 2/(t+2)$ cannot be improved by choosing $\alpha_t = q/(t+q)$ for some integer $q>2$ (see also~\cite[Section~2]{Nest_18}).
\end{remark}
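The comparison in the remark above can be checked numerically. The following sketch (with illustrative values $\bar C = 1$, $V_0 = 1$, not tied to any particular game) iterates the worst case of the recursion $V_{t+1} \le (1-\alpha_t)V_t + \alpha_t^2\bar C$ under both schedules:

```python
import math

# Iterate the worst case of the recursion V_{t+1} = (1 - a_t) V_t + a_t^2 * Cbar
# under both step-size schedules (illustrative values Cbar = 1, V_0 = 1).
Cbar, T = 1.0, 100_000
V1 = V2 = 1.0
for t in range(T):
    a1 = 1.0 / (t + 1)          # first schedule
    a2 = 2.0 / (t + 2)          # second schedule
    V1 = (1 - a1) * V1 + a1 ** 2 * Cbar
    V2 = (1 - a2) * V2 + a2 ** 2 * Cbar

# a_t = 1/(t+1) gives O(ln t / t); a_t = 2/(t+2) gives the cleaner 4*Cbar/(t+1).
assert V1 <= Cbar * (1 + math.log(T)) / T
assert V2 <= 4 * Cbar / (T + 1)
assert V2 < V1                  # the 2/(t+2) schedule wins at this horizon
```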
\section{Extension of DLFP for solving strongly convex composite problems}
\label{sec:FW}
Let us first observe that DLFP (i.e., Algorithm~\ref{algo:det_LFP}) can be extended to solve
the following class of saddle-point problems (that subsumes the problem in~\eqref{eq:SPP} as a special case):
\begin{equation}
{\min}_{x\in\mathcal{X}}\; {\max}_{y\in\mathcal{Y}} \; [S(x,y):= f_\mathsf{x}(x) + \lranglet{Ax}{y} - f_\mathsf{y}(y)], \label{eq:SPP2}
\end{equation}
where $\mathcal{X}\subseteq\mathbb{R}^n$ and $\mathcal{Y}\subseteq\mathbb{R}^m$ are non-empty, closed and convex sets, $f_\mathsf{x}$ is $\mu_\mathsf{x}$-strongly convex on $\mathcal{X}$ and $f_\mathsf{y}$ is $\mu_\mathsf{y}$-strongly convex on $\mathcal{Y}$ ($\mu_\mathsf{x},\mu_\mathsf{y}>0$). (Note, however, that neither $f_\mathsf{x}$ nor $f_\mathsf{y}$ needs to be differentiable.) The extended algorithm is identical to Algorithm~\ref{algo:det_LFP}, except that one replaces $\Delta_n$, $\Delta_m$, $\eta h_\mathsf{x}$ and $\eta h_\mathsf{y}$ in Algorithm~\ref{algo:det_LFP} with $\mathcal{X}$, $\mathcal{Y}$, $f_\mathsf{x}$ and $f_\mathsf{y}$, respectively.
Indeed, let us observe that the proof of Proposition~\ref{prop:recursion} only uses the strong convexity of the functions $h_\mathsf{x}$ and $h_\mathsf{y}$, and the convexity and closedness of the constraint sets $\Delta_n$ and $\Delta_m$. As a result, the extension of Algorithm~\ref{algo:det_LFP} enjoys recursions similar to those in~\eqref{eq:detV_1} and~\eqref{eq:detV_2}. In particular, defining $\bar{\kappa}^2:= \normt{A}_{1\to\infty}^2/(\mu_\mathsf{x}\mu_\mathsf{y})$, we have
\begin{equation}
V_{t+1} \le (1-\alpha_t+\bar{\kappa}^2 \alpha_t^2)V_t, \quad \forall\,t\ge 0.
\end{equation}
Consequently, if $\alpha_t = 1/(2\bar{\kappa}^2)$ for all $t\ge 0$, we have
\begin{equation}
V_{t}\le \left(1-\frac{1}{4\bar{\kappa}^2}\right)^t V_0, \quad \forall\,t\ge 1 . \label{eq:lin_conv}
\end{equation}
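As a sanity check of the algebra, with $\alpha = 1/(2\bar{\kappa}^2)$ the contraction factor $1-\alpha+\bar{\kappa}^2\alpha^2$ equals exactly $1 - 1/(4\bar{\kappa}^2)$; the following sketch (with an illustrative value of $\bar{\kappa}^2$) confirms this and the resulting geometric decay:

```python
# With a = 1/(2*kbar2), the factor 1 - a + kbar2*a^2 equals 1 - 1/(4*kbar2).
kbar2 = 5.0                      # illustrative value (needs kbar2 >= 1/2 so a <= 1)
a = 1.0 / (2 * kbar2)
rho = 1 - a + kbar2 * a ** 2
assert abs(rho - (1 - 1 / (4 * kbar2))) < 1e-12

# Geometric decay of the duality gap: V_t <= rho^t * V_0.
V = 1.0
for _ in range(200):
    V = rho * V
assert V < 1e-4                  # 0.95^200 is about 3.5e-5
```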
Next, let us define the Fenchel conjugate of $f_\mathsf{y}+\iota_{\mathcal{Y}}$ as
\begin{equation}
\ell(u):= {\max}_{y\in\mathcal{Y}} \; \lranglet{u}{y} - f_\mathsf{y}(y), \quad \forall\,u\in\mathbb{R}^m.
\end{equation}
With this definition, we can write down the primal problem associated with~\eqref{eq:SPP2}:
\begin{equation}
F^*:={\min}_{x\in\mathcal{X}}\; [F(x):=\ell(Ax) + f_\mathsf{x}(x)]. \label{eq:P}
\end{equation}
Note that by~\cite[Theorem~1]{Nest_05}, $\ell$ is $\mu_\mathsf{y}^{-1}$-smooth on $\mathbb{R}^m$ since $f_\mathsf{y}$ is $\mu_\mathsf{y}$-strongly convex on $\mathcal{Y}$. Therefore,~\eqref{eq:P} is an instance of the strongly convex composite problem that has received considerable attention over the past 15 years (see e.g.,~\cite{Nest_13,Nest_18}). In particular, in~\cite[Section~5]{Nest_18}, Nesterov proposed a generalized Frank-Wolfe method for solving~\eqref{eq:P} that reads:
\begin{align}
x^{t+1}& := (1-\alpha_t)x^t + \alpha_t \bar{v}^t, \quad\mbox{where}\quad \bar{v}^t:= \mathsf{P}_\mathsf{x}(s^t)= {\argmin}_{x\in\mathcal{X}} \; \lranglet{A^\top s^t}{x} + f_\mathsf{x}(x) \label{eq:Nest}
\end{align}
and $s^t= \mathsf{P}_\mathsf{y}(x^t)$ (cf.~\eqref{eq:upd_y}). Nesterov showed that for any starting point $x^0\in\mathcal{X}$, this method converges at the {\em sub-linear} rate $O(1/t^2)$ in terms of the primal sub-optimality gap $F(x^t)-F^*$, with step-sizes $\alpha_t = {6(t+1)}/({(t+2)(2t+3)})$ for $t\ge 0$. Note that DLFP is almost identical to~\eqref{eq:Nest}, with only one difference: in DLFP, $\bar{v}^t$ is defined to be $\mathsf{P}_\mathsf{x}(y^t)$, rather than $\mathsf{P}_\mathsf{x}(s^t)$.
Therefore, we can interpret DLFP as a variant of the generalized Frank-Wolfe method for solving~\eqref{eq:P} with an additional dual interpolation step. However,
the benefit of this simple interpolation step is rather significant: DLFP achieves {\em linear} convergence in terms of the duality gap of~\eqref{eq:SPP2} (see~\eqref{eq:lin_conv}). Since $V_t\ge F(x^t) -F^*$, DLFP also converges linearly in terms of the primal sub-optimality gap $F(x^t)-F^*$. We believe that this result may provide new insight into designing Frank-Wolfe-type methods for solving problems similar to~\eqref{eq:P}.
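To make the comparison concrete, the following sketch runs the extended DLFP on an illustrative instance of our own construction (not from the text): $f_\mathsf{x} = (\mu_\mathsf{x}/2)\normt{\cdot}_2^2$ and $f_\mathsf{y} = (\mu_\mathsf{y}/2)\normt{\cdot}_2^2$ with $\mathcal{X}=\mathbb{R}^n$, $\mathcal{Y}=\mathbb{R}^m$, and with the spectral norm standing in for $\normt{A}_{1\to\infty}$, so both best responses have closed forms. It uses $\bar{v}^t = \mathsf{P}_\mathsf{x}(y^t)$ (rather than $\mathsf{P}_\mathsf{x}(s^t)$) and converges to the unique saddle point $(0,0)$ of this instance:

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 4, 3
A = rng.standard_normal((m, n))  # illustrative payoff matrix
mu_x = mu_y = 2.0

def P_x(y):
    # Best response: argmin over x in R^n of <A^T y, x> + (mu_x/2)*||x||^2
    return -(A.T @ y) / mu_x

def P_y(x):
    # Best response: argmax over y in R^m of <A x, y> - (mu_y/2)*||y||^2
    return (A @ x) / mu_y

# Euclidean stand-in for kbar^2 = ||A||^2 / (mu_x * mu_y); spectral norm here.
kbar2 = np.linalg.norm(A, 2) ** 2 / (mu_x * mu_y)
alpha = min(1.0, 1.0 / (2 * kbar2))

x, y = rng.standard_normal(n), rng.standard_normal(m)
for _ in range(2000):
    v, s = P_x(y), P_y(x)        # note: v^t = P_x(y^t), not P_x(s^t)
    x = (1 - alpha) * x + alpha * v
    y = (1 - alpha) * y + alpha * s

# The unique saddle point of this instance is (0, 0); the iterates reach it linearly.
assert np.linalg.norm(x) < 1e-8 and np.linalg.norm(y) < 1e-8
```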
\section{Related Work} \label{sec:literature}
\subsection{Exponential Weight Algorithm (EWA)}\label{sec:MD}
On the surface, it may seem that DLFP (in Algorithm~\ref{algo:det_LFP}) is closely related to EWA~\cite{Freund_99} for solving two-player zero-sum matrix games, as both methods involve an entropic projection step (onto the probability simplex; cf.~\eqref{eq:def_vt}). (Depending on the field, EWA goes by different names, e.g., entropic mirror descent (EMD) in optimization~\cite{Nemi_79,Beck_03} and follow-the-regularized-leader in online learning~\cite{Shalev_12}.) At iteration $t$, given $(x^t,y^t)\in \Delta_n\times \Delta_m$, EWA reads
\begin{equation}
\begin{split}
x^{t+1}&:= {\argmin}_{x\in\Delta_n} \; \lranglet{A^\top y^t}{x} + \beta_t D_{h_\mathsf{x}}(x,x^t) = {\argmin}_{x\in\Delta_n} \; \lranglet{A^\top y^t - \beta_t \nabla h_\mathsf{x} (x^t)}{x} + \beta_t h_\mathsf{x}(x), \\
y^{t+1}&:= {\argmax}_{y\in\Delta_m} \; \lranglet{A x^t}{y} - \beta_t D_{h_\mathsf{y}}(y,y^t)={\argmax}_{y\in\Delta_m} \; \lranglet{A x^t+\beta_t \nabla h_\mathsf{y}(y^t)}{y} - \beta_t {h_\mathsf{y}}(y),
\end{split}
\label{eq:EWA}
\end{equation}
where $\{\beta_t\}_{t\ge 0}$ is the sequence of ``step-sizes'', and
\begin{equation}
D_h(z',z):= h(z') - h(z) - \lranglet{\nabla h(z)}{z'-z}, \quad \forall z'\in\mathsf{dom}\, h, \; z\in\mathsf{int}\,\mathsf{dom}\, h,
\end{equation}
is the Bregman divergence induced by the function $h$.
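For the negative-entropy choice of $h$, $D_h$ reduces to the Kullback--Leibler divergence on the simplex, and the EWA $x$-update in~\eqref{eq:EWA} has the familiar multiplicative-weights closed form $x^{t+1}\propto x^t\odot\exp(-A^\top y^t/\beta_t)$. A small sketch (with arbitrary illustrative data) verifies this closed form against the variational definition:

```python
import numpy as np

rng = np.random.default_rng(2)
n, m, beta = 4, 3, 1.0
A = rng.standard_normal((m, n))  # illustrative payoff matrix
x = np.full(n, 1.0 / n)          # current iterate x^t
y = np.full(m, 1.0 / m)          # current iterate y^t

# For negative entropy h_x, D_{h_x}(x', x) = KL(x', x) on the simplex, and the
# EWA x-update has the closed form x^{t+1} proportional to x^t * exp(-A^T y^t / beta).
g = A.T @ y
x_new = x * np.exp(-g / beta)
x_new /= x_new.sum()

def obj(z):
    # <A^T y, z> + beta * KL(z, x): the objective minimized by the x-update.
    return z @ g + beta * np.sum(z * np.log(z / x))

# Crude optimality check: no random feasible point does better.
for _ in range(100):
    z = rng.dirichlet(np.ones(n))
    assert obj(x_new) <= obj(z) + 1e-9
```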
However, a close inspection of DLFP reveals that it has (at least) the following two key differences from EWA (and any of its weighted-average variants).
First, DLFP solves a different problem from EWA. Specifically, EWA
solves the following saddle-point problem:
\begin{equation}
\quad \quad {\min}_{x\in\Delta_n}\; {\max}_{y\in\Delta_m} \; [\widetilde{S}(x,y):=\lranglet{Ax}{y}]. \tag{SPP$'$}\label{eq:SPP'}
\end{equation}
Note that~\eqref{eq:SPP'} is different from~\eqref{eq:SPP} as we require $\eta >0$ in~\eqref{eq:SPP}. As such, the iterates produced by EWA converge to the set of saddle points of~\eqref{eq:SPP'}, whereas the iterates produced by DLFP converge to the unique saddle point of~\eqref{eq:SPP}. Also note that in EWA, the entropic functions $h_\mathsf{x}$ and $h_\mathsf{y}$ as well as the sequence of ``step-sizes'' $\{\beta_t\}_{t\ge 0}$ are not part of the saddle function $\widetilde{S}(\cdot,\cdot)$, but are introduced only for algorithmic purposes. This is in stark contrast with DLFP, where the entropic functions $h_\mathsf{x}$ and $h_\mathsf{y}$ as well as the parameter $\eta$ are part of the saddle function $S(\cdot,\cdot)$.
Second, the algorithmic structure of DLFP is very different from that of EWA, in at least two aspects. For discussion purposes, let $\beta_t\equiv \eta$ for all $t\ge 0$ in EWA. The first aspect lies in the entropic projection step. Specifically, in EWA, this step involves the gradients of the entropy functions (namely $\nabla h_\mathsf{x} (x^t)$ and $\nabla h_\mathsf{y}(y^t)$), which is not the case for DLFP. Indeed, in EWA, the iterate $x^{t+1}$ depends on both $x^t$ and $y^t$, whereas in DLFP, it depends on $y^t$ only, and the same is true for $y^{t+1}$. The second aspect is that DLFP involves an interpolation step with parameter $\alpha_t$, with the typical choice of $\alpha_t$ lying in $(0,1)$ (cf.\ Remarks~\ref{rmk:cst_det_step} and~\ref{rmk:det_to_stoc_step}). This interpolation step is absent in EWA; even if such a step were added, it would differ from running a weighted average of EWA on the fly, since in DLFP the averaged iterates are immediately used in the next iteration. Due to these differences, the analysis techniques for DLFP are very different from those for EWA. Indeed, the proof of Proposition~\ref{prop:recursion} heavily leverages the smoothness of the Fenchel conjugate functions $g_\mathsf{x}$ and $g_\mathsf{y}$ (defined in~\eqref{eq:def_gx_gy}), which is not the case for the typical analysis of EWA (see e.g.,~\cite{Nemi_79}). In addition, we need to develop novel techniques to handle the interpolation step, and judiciously choose the parameter $\alpha_t$ to ensure the different convergence rates of DLFP (cf.\ Theorem~\ref{thm:global_det_rate}).
Finally, we remark that although the above discussions are mainly about DLFP, they naturally extend to show that LFP, which can be regarded as a stochastic version of DLFP, is very different from stochastic EMD (see e.g.,~\cite{Nemi_09}) and any of its weighted-average variants.
\subsection{Game-Theoretic Algorithms Related to LFP} \label{sec:learning_algo}
In the following, we briefly point out some examples of game-theoretic algorithms that are related to LFP. Leslie and Collins~\cite{Leslie_06} considered a generalized weakened fictitious play process that includes LFP as a special case. In another paper~\cite{Leslie_05}, they considered an individual Q-learning procedure whose limiting behavior is similar to that of LFP. Coucheney et al.~\cite{Couch_15} considered a penalty-regulated continuous-time learning dynamics that involves the smooth best responses defined in~\eqref{eq:def_vt} and~\eqref{eq:def_st}. Cominetti et al.~\cite{Com_10} considered a payoff-based learning procedure whose rest point is the same as that of LFP. Note that in all of the above works, the convergence analyses are asymptotic, namely they show that the learning algorithms eventually converge to the Nash equilibrium with probability one. In addition, Bravo et al.~\cite{Bravo_18} considered a mirror-descent-based learning procedure under first-order and bandit feedback. They showed that under first-order feedback, Euclidean projection and strong monotonicity of the game, this procedure converges to the Nash equilibrium at rate $O(1/t)$ in terms of the squared Euclidean distance. However, as discussed in Section~\ref{sec:MD}, the learning procedure in~\cite{Bravo_18} is very different from LFP. Moreover, the convergence rate in~\cite{Bravo_18} and ours hold under different projections (namely, Euclidean vs.\ entropic) and different criteria (squared Euclidean distance vs.\ duality gap).
\section{Conclusion and Future Work}
In this work, we provide a local convergence rate analysis of LFP (namely Algorithm~\ref{algo:LFP}). Note that our local convergence rate easily yields an estimate of the total number of iterations that LFP needs to achieve an expected $\varepsilon$-duality gap. To see this, first note that the duality gap $G(\cdot,\cdot)$ is uniformly bounded on $\Delta_n\times\Delta_m$, namely
\begin{equation}
\bar{V}:= {\max}_{(x,y)\in \Delta_n\times\Delta_m} \; G(x,y)<+\infty.
\end{equation}
For any $\delta\in (0,1)$, let $T(\delta)<+\infty$ satisfy $\Pr(\mathcal{A}_{T(\delta)})\ge 1-\delta$, and let the accuracy $\varepsilon<\mathbb{E}[V_{T(\delta)}]$. By the law of total expectation, we have
\begin{align}
\mathbb{E}[V_{T(\delta)+t}]\le \mathbb{E}[V_{T(\delta)+t}|\mathcal{A}_{T(\delta)}] + \delta\mathbb{E}[V_{T(\delta)+t}|\mathcal{A}^\mathrm{c}_{T(\delta)}]\le \mathbb{E}[V_{T(\delta)+t}|\mathcal{A}_{T(\delta)}] + \delta\bar{V}.
\end{align}
If we choose $\alpha_t = 2/(t+2)$ and $\delta=\varepsilon/(2\bar{V})$, then by~\eqref{eq:stoc_2/(t+2)} in Theorem~\ref{thm:stoc}, we have
\begin{equation}
\mathbb{E}[V_{T(\varepsilon/(2\bar{V}))+t}]\le \frac{4\bar{C}}{t+1}+ \frac{\varepsilon}{2}.
\end{equation}
Thus if $t\ge \lceil 8\bar{C}/\varepsilon\rceil - 1$, then $\mathbb{E}[V_{T(\varepsilon/(2\bar{V}))+t}]\le \varepsilon$. In other words, LFP finds a stochastic primal-dual pair $(x,y)\in \Delta_n\times\Delta_m$ with $\mathbb{E}[G(x,y)]\le \varepsilon$ in no more than
\begin{equation}
T(\varepsilon/(2\bar{V})) + \lceil 8\bar{C}/\varepsilon\rceil - 1 \label{eq:global_comp}
\end{equation}
iterations. As such, an important future research topic would be upper bounding $T(\delta)$ in terms of $\delta$, $r_\mathsf{x}$, $r_\mathsf{y}$ and other problem parameters. Such an upper bound, together with~\eqref{eq:global_comp}, would provide a global iteration complexity of LFP.
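The arithmetic behind the iteration count above can be verified directly: whenever $t\ge \lceil 8\bar C/\varepsilon\rceil - 1$ we have $t+1\ge 8\bar C/\varepsilon$, hence $4\bar C/(t+1)\le \varepsilon/2$. A one-line numerical check (with illustrative values of $\bar C$ and $\varepsilon$):

```python
import math

# With alpha_t = 2/(t+2) and delta = eps/(2*Vbar), taking
# t >= ceil(8*Cbar/eps) - 1 gives 4*Cbar/(t+1) <= eps/2, hence E[V] <= eps.
for Cbar in (0.5, 1.0, 7.3):                 # illustrative values
    for eps in (1e-1, 1e-2, 1e-3):
        t = math.ceil(8 * Cbar / eps) - 1
        assert 4 * Cbar / (t + 1) <= eps / 2
```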
\newcommand{\mathbsf{T}}{\mathbsf{T}}
\newcommand{\mathsf{u}}{\mathsf{u}}
\newcommand{\mathsf{U}}{\mathsf{U}}
\newcommand{\mathbsf{u}}{\mathbsf{u}}
\newcommand{\mathbsf{U}}{\mathbsf{U}}
\newcommand{\mathsf{v}}{\mathsf{v}}
\newcommand{\mathsf{V}}{\mathsf{V}}
\newcommand{\mathbsf{v}}{\mathbsf{v}}
\newcommand{\mathbsf{V}}{\mathbsf{V}}
\newcommand{\mathsf{w}}{\mathsf{w}}
\newcommand{\mathsf{W}}{\mathsf{W}}
\newcommand{\mathbsf{w}}{\mathbsf{w}}
\newcommand{\mathbsf{W}}{\mathbsf{W}}
\newcommand{\mathsf{x}}{\mathsf{x}}
\newcommand{\mathsf{X}}{\mathsf{X}}
\newcommand{\mathbsf{x}}{\mathbsf{x}}
\newcommand{\mathbsf{X}}{\mathbsf{X}}
\newcommand{\mathsf{y}}{\mathsf{y}}
\newcommand{\mathsf{Y}}{\mathsf{Y}}
\newcommand{\mathbsf{y}}{\mathbsf{y}}
\newcommand{\mathbsf{Y}}{\mathbsf{Y}}
\newcommand{\mathsf{z}}{\mathsf{z}}
\newcommand{\mathsf{Z}}{\mathsf{Z}}
\newcommand{\mathbsf{z}}{\mathbsf{z}}
\newcommand{\mathbsf{Z}}{\mathbsf{Z}}
\newcommand{\ssfTheta}{\mathsf{\Theta}}
\newcommand{\Theta}{\Theta}
\newcommand{\bsfTheta}{\mathbsf{\Theta}}
\newcommand{\boldsymbol{\Theta}}{\boldsymbol{\Theta}}
\newcommand{\ssfPhi}{\mathsf{\Phi}}
\newcommand{\Phi}{\Phi}
\newcommand{\bsfPhi}{\mathbsf{\Phi}}
\newcommand{\boldsymbol{\Phi}}{\boldsymbol{\Phi}}
\newcommand{\mathsf{\Xi}}{\mathsf{\Xi}}
\newcommand{\widehat{a}}{\widehat{a}}
\newcommand{\widehat{A}}{\widehat{A}}
\newcommand{\widetilde{a}}{\widetilde{a}}
\newcommand{\widetilde{A}}{\widetilde{A}}
\newcommand{\widehat{\ba}}{\widehat{\mathbf{a}}}
\newcommand{\widehat{\bA}}{\widehat{\mathbf{A}}}
\newcommand{\widetilde{\ba}}{\widetilde{\mathbf{a}}}
\newcommand{\widetilde{\bA}}{\widetilde{\mathbf{A}}}
\newcommand{\widehat{b}}{\widehat{b}}
\newcommand{\widehat{B}}{\widehat{B}}
\newcommand{\widetilde{b}}{\widetilde{b}}
\newcommand{\widetilde{B}}{\widetilde{B}}
\newcommand{\widehat{\bb}}{\widehat{\mathbf{b}}}
\newcommand{\widehat{\bB}}{\widehat{\mathbf{B}}}
\newcommand{\widetilde{\bb}}{\widetilde{\mathbf{b}}}
\newcommand{\widetilde{\bB}}{\widetilde{\mathbf{B}}}
\newcommand{\underline{\tilb}}{\underline{\widetilde{b}}}
\newcommand{\widehat{c}}{\widehat{c}}
\newcommand{\widehat{C}}{\widehat{C}}
\newcommand{\widetilde{c}}{\widetilde{c}}
\newcommand{\widetilde{C}}{\widetilde{C}}
\newcommand{\widehat{\bc}}{\widehat{\mathbf{c}}}
\newcommand{\widehat{\bC}}{\widehat{\mathbf{C}}}
\newcommand{\widetilde{\bc}}{\widetilde{\mathbf{c}}}
\newcommand{\widetilde{\bC}}{\widetilde{\mathbf{C}}}
\newcommand{\widehat{d}}{\widehat{d}}
\newcommand{\widehat{D}}{\widehat{D}}
\newcommand{\widetilde{d}}{\widetilde{d}}
\newcommand{\widetilde{D}}{\widetilde{D}}
\newcommand{\widehat{\bd}}{\widehat{\mathbf{d}}}
\newcommand{\widehat{\bD}}{\widehat{\mathbf{D}}}
\newcommand{\widetilde{\bd}}{\widetilde{\mathbf{d}}}
\newcommand{\widetilde{\bD}}{\widetilde{\mathbf{D}}}
\newcommand{\widehat{e}}{\widehat{e}}
\newcommand{\widehat{E}}{\widehat{E}}
\newcommand{\widetilde{e}}{\widetilde{e}}
\newcommand{\widetilde{E}}{\widetilde{E}}
\newcommand{\widehat{\be}}{\widehat{\mathbf{e}}}
\newcommand{\widehat{\bE}}{\widehat{\mathbf{E}}}
\newcommand{\widetilde{\be}}{\widetilde{\mathbf{e}}}
\newcommand{\widetilde{\bE}}{\widetilde{\mathbf{E}}}
\newcommand{\widehat{f}}{\widehat{f}}
\newcommand{\widehat{F}}{\widehat{F}}
\newcommand{\widetilde{f}}{\widetilde{f}}
\newcommand{\widetilde{F}}{\widetilde{F}}
\newcommand{\widehat{\boldf}}{\widehat{\mathbf{f}}}
\newcommand{\widehat{\bF}}{\widehat{\mathbf{F}}}
\newcommand{\widetilde{\boldf}}{\widetilde{\mathbf{f}}}
\newcommand{\widetilde{\bF}}{\widetilde{\mathbf{F}}}
\newcommand{\widehat{g}}{\widehat{g}}
\newcommand{\widehat{G}}{\widehat{G}}
\newcommand{\widetilde{g}}{\widetilde{g}}
\newcommand{\widetilde{G}}{\widetilde{G}}
\newcommand{\widehat{\bg}}{\widehat{\mathbf{g}}}
\newcommand{\widehat{\bG}}{\widehat{\mathbf{G}}}
\newcommand{\widetilde{\bg}}{\widetilde{\mathbf{g}}}
\newcommand{\widetilde{\bG}}{\widetilde{\mathbf{G}}}
\newcommand{\hat{h}}{\hat{h}}
\newcommand{\widehat{H}}{\widehat{H}}
\newcommand{\widetilde{h}}{\widetilde{h}}
\newcommand{\widetilde{H}}{\widetilde{H}}
\newcommand{\widehat{\bh}}{\widehat{\mathbf{h}}}
\newcommand{\widehat{\bH}}{\widehat{\mathbf{H}}}
\newcommand{\widetilde{\bh}}{\widetilde{\mathbf{h}}}
\newcommand{\widetilde{\bH}}{\widetilde{\mathbf{H}}}
\newcommand{\widehat{i}}{\widehat{i}}
\newcommand{\widehat{I}}{\widehat{I}}
\newcommand{\widetilde{i}}{\widetilde{i}}
\newcommand{\widetilde{I}}{\widetilde{I}}
\newcommand{\widehat{\bi}}{\widehat{\mathbf{i}}}
\newcommand{\widehat{\bI}}{\widehat{\mathbf{I}}}
\newcommand{\widetilde{\bi}}{\widetilde{\mathbf{i}}}
\newcommand{\widetilde{\bI}}{\widetilde{\mathbf{I}}}
\newcommand{\widehat{j}}{\widehat{j}}
\newcommand{\widehat{J}}{\widehat{J}}
\newcommand{\widetilde{j}}{\widetilde{j}}
\newcommand{\widetilde{J}}{\widetilde{J}}
\newcommand{\widehat{\bj}}{\widehat{\mathbf{j}}}
\newcommand{\widehat{\bJ}}{\widehat{\mathbf{J}}}
\newcommand{\widetilde{\bj}}{\widetilde{\mathbf{j}}}
\newcommand{\widetilde{\bJ}}{\widetilde{\mathbf{J}}}
\newcommand{\widehat{k}}{\widehat{k}}
\newcommand{\widehat{K}}{\widehat{K}}
\newcommand{\widetilde{k}}{\widetilde{k}}
\newcommand{\widetilde{K}}{\widetilde{K}}
\newcommand{\widehat{\bk}}{\widehat{\mathbf{k}}}
\newcommand{\widehat{\bK}}{\widehat{\mathbf{K}}}
\newcommand{\widetilde{\bk}}{\widetilde{\mathbf{k}}}
\newcommand{\widetilde{\bK}}{\widetilde{\mathbf{K}}}
\newcommand{\widehat{l}}{\widehat{l}}
\newcommand{\widehat{L}}{\widehat{L}}
\newcommand{\widetilde{l}}{\widetilde{l}}
\newcommand{\widetilde{L}}{\widetilde{L}}
\newcommand{\widehat{\bl}}{\widehat{\mathbf{l}}}
\newcommand{\widehat{\bL}}{\widehat{\mathbf{L}}}
\newcommand{\widetilde{\bl}}{\widetilde{\mathbf{l}}}
\newcommand{\widetilde{\bL}}{\widetilde{\mathbf{L}}}
\newcommand{\widehat{m}}{\widehat{m}}
\newcommand{\widehat{M}}{\widehat{M}}
\newcommand{\widetilde{m}}{\widetilde{m}}
\newcommand{\widetilde{M}}{\widetilde{M}}
\newcommand{\widehat{\boldm}}{\widehat{\mathbf{m}}}
\newcommand{\widehat{\bM}}{\widehat{\mathbf{M}}}
\newcommand{\widetilde{\boldm}}{\widetilde{\mathbf{m}}}
\newcommand{\widetilde{\bM}}{\widetilde{\mathbf{M}}}
\newcommand{\widehat{n}}{\widehat{n}}
\newcommand{\widehat{N}}{\widehat{N}}
\newcommand{\widetilde{n}}{\widetilde{n}}
\newcommand{\widetilde{N}}{\widetilde{N}}
\newcommand{\widehat{\bn}}{\widehat{\mathbf{n}}}
\newcommand{\widehat{\bN}}{\widehat{\mathbf{N}}}
\newcommand{\widetilde{\bn}}{\widetilde{\mathbf{n}}}
\newcommand{\widetilde{\bN}}{\widetilde{\mathbf{N}}}
\newcommand{\widehat{o}}{\widehat{o}}
\newcommand{\widehat{O}}{\widehat{O}}
\newcommand{\widetilde{o}}{\widetilde{o}}
\newcommand{\widetilde{O}}{\widetilde{O}}
\newcommand{\widehat{\bo}}{\widehat{\mathbf{o}}}
\newcommand{\widehat{\bO}}{\widehat{\mathbf{O}}}
\newcommand{\widetilde{\bo}}{\widetilde{\mathbf{o}}}
\newcommand{\widetilde{\bO}}{\widetilde{\mathbf{O}}}
\newcommand{\widehat{p}}{\widehat{p}}
\newcommand{\widehat{P}}{\widehat{P}}
\newcommand{\widetilde{p}}{\widetilde{p}}
\newcommand{\widetilde{P}}{\widetilde{P}}
\newcommand{\widehat{\bp}}{\widehat{\mathbf{p}}}
\newcommand{\widehat{\bP}}{\widehat{\mathbf{P}}}
\newcommand{\widetilde{\bp}}{\widetilde{\mathbf{p}}}
\newcommand{\widetilde{\bP}}{\widetilde{\mathbf{P}}}
\newcommand{\widehat{q}}{\widehat{q}}
\newcommand{\widehat{Q}}{\widehat{Q}}
\newcommand{\widetilde{q}}{\widetilde{q}}
\newcommand{\widetilde{Q}}{\widetilde{Q}}
\newcommand{\widehat{\bq}}{\widehat{\mathbf{q}}}
\newcommand{\widehat{\bQ}}{\widehat{\mathbf{Q}}}
\newcommand{\widetilde{\bq}}{\widetilde{\mathbf{q}}}
\newcommand{\widetilde{\bQ}}{\widetilde{\mathbf{Q}}}
\newcommand{\widehat{r}}{\widehat{r}}
\newcommand{\widehat{R}}{\widehat{R}}
\newcommand{\widetilde{r}}{\widetilde{r}}
\newcommand{\widetilde{R}}{\widetilde{R}}
\newcommand{\widehat{\br}}{\widehat{\mathbf{r}}}
\newcommand{\widehat{\bR}}{\widehat{\mathbf{R}}}
\newcommand{\widetilde{\br}}{\widetilde{\mathbf{r}}}
\newcommand{\widetilde{\bR}}{\widetilde{\mathbf{R}}}
\newcommand{\widehat{s}}{\widehat{s}}
\newcommand{\widehat{S}}{\widehat{S}}
\newcommand{\widetilde{s}}{\widetilde{s}}
\newcommand{\widetilde{S}}{\widetilde{S}}
\newcommand{\widehat{\bs}}{\widehat{\mathbf{s}}}
\newcommand{\widehat{\bS}}{\widehat{\mathbf{S}}}
\newcommand{\widetilde{\bs}}{\widetilde{\mathbf{s}}}
\newcommand{\widetilde{\bS}}{\widetilde{\mathbf{S}}}
\newcommand{\widehat{t}}{\widehat{t}}
\newcommand{\widehat{T}}{\widehat{T}}
\newcommand{\widetilde{t}}{\widetilde{t}}
\newcommand{\widetilde{T}}{\widetilde{T}}
\newcommand{\widehat{\bt}}{\widehat{\mathbf{t}}}
\newcommand{\widehat{\bT}}{\widehat{\mathbf{T}}}
\newcommand{\widetilde{\bt}}{\widetilde{\mathbf{t}}}
\newcommand{\widetilde{\bT}}{\widetilde{\mathbf{T}}}
\newcommand{\widehat{u}}{\widehat{u}}
\newcommand{\widehat{U}}{\widehat{U}}
\newcommand{\widetilde{u}}{\widetilde{u}}
\newcommand{\widetilde{U}}{\widetilde{U}}
\newcommand{\widehat{\bu}}{\widehat{\mathbf{u}}}
\newcommand{\widehat{\bU}}{\widehat{\mathbf{U}}}
\newcommand{\widetilde{\bu}}{\widetilde{\mathbf{u}}}
\newcommand{\widetilde{\bU}}{\widetilde{\mathbf{U}}}
\newcommand{\widehat{v}}{\widehat{v}}
\newcommand{\widehat{V}}{\widehat{V}}
\newcommand{\widetilde{v}}{\widetilde{v}}
\newcommand{\widetilde{V}}{\widetilde{V}}
\newcommand{\widehat{\bv}}{\widehat{\mathbf{v}}}
\newcommand{\widehat{\bV}}{\widehat{\mathbf{V}}}
\newcommand{\widetilde{\bv}}{\widetilde{\mathbf{v}}}
\newcommand{\widetilde{\bV}}{\widetilde{\mathbf{V}}}
\newcommand{\widehat{w}}{\widehat{w}}
\newcommand{\widehat{W}}{\widehat{W}}
\newcommand{\widetilde{w}}{\widetilde{w}}
\newcommand{\widetilde{W}}{\widetilde{W}}
\newcommand{\widehat{\bw}}{\widehat{\mathbf{w}}}
\newcommand{\widehat{\bW}}{\widehat{\mathbf{W}}}
\newcommand{\tilde{\bw}}{\tilde{\mathbf{w}}}
\newcommand{\widetilde{\bW}}{\widetilde{\mathbf{W}}}
\newcommand{\hat{x}}{\hat{x}}
\newcommand{\widehat{X}}{\widehat{X}}
\newcommand{\widetilde{x}}{\widetilde{x}}
\newcommand{\widetilde{X}}{\widetilde{X}}
\newcommand{\widehat{\bx}}{\widehat{\mathbf{x}}}
\newcommand{\widehat{\bX}}{\widehat{\mathbf{X}}}
\newcommand{\widetilde{\bx}}{\widetilde{\mathbf{x}}}
\newcommand{\widetilde{\bX}}{\widetilde{\mathbf{X}}}
\newcommand{\hat{y}}{\hat{y}}
\newcommand{\widehat{Y}}{\widehat{Y}}
\newcommand{\widetilde{y}}{\widetilde{y}}
\newcommand{\widetilde{Y}}{\widetilde{Y}}
\newcommand{\widehat{\by}}{\widehat{\mathbf{y}}}
\newcommand{\widehat{\bY}}{\widehat{\mathbf{Y}}}
\newcommand{\widetilde{\by}}{\widetilde{\mathbf{y}}}
\newcommand{\widetilde{\bY}}{\widetilde{\mathbf{Y}}}
\newcommand{\widehat{z}}{\widehat{z}}
\newcommand{\widehat{Z}}{\widehat{Z}}
\newcommand{\widetilde{z}}{\widetilde{z}}
\newcommand{\widetilde{Z}}{\widetilde{Z}}
\newcommand{\widehat{\bz}}{\widehat{\mathbf{z}}}
\newcommand{\widehat{\bZ}}{\widehat{\mathbf{Z}}}
\newcommand{\widetilde{\bz}}{\widetilde{\mathbf{z}}}
\newcommand{\widetilde{\bZ}}{\widetilde{\mathbf{Z}}}
\newcommand{\widehat{\bm{\xi}}}{\widehat{\bm{\xi}}}
\newcommand{\bar{\bm{\xi}}}{\bar{\bm{\xi}}}
\newcommand{\bar{a}}{\bar{a}}
\newcommand{\bar{b}}{\bar{b}}
\newcommand{\bar{c}}{\bar{c}}
\newcommand{\bar{d}}{\bar{d}}
\newcommand{\bar{e}}{\bar{e}}
\newcommand{\bar{f}}{\bar{f}}
\newcommand{\bar{g}}{\bar{g}}
\newcommand{\bar{h}}{\bar{h}}
\newcommand{\bar{i}}{\bar{i}}
\newcommand{\bar{j}}{\bar{j}}
\newcommand{\bar{k}}{\bar{k}}
\newcommand{\bar{l}}{\bar{l}}
\newcommand{\bar{m}}{\bar{m}}
\newcommand{\bar{n}}{\bar{n}}
\newcommand{\bar{o}}{\bar{o}}
\newcommand{\bar{p}}{\bar{p}}
\newcommand{\bar{q}}{\bar{q}}
\newcommand{\bar{r}}{\bar{r}}
\newcommand{\bar{s}}{\bar{s}}
\newcommand{\bar{t}}{\bar{t}}
\newcommand{\bar{u}}{\bar{u}}
\newcommand{\bar{v}}{\bar{v}}
\newcommand{\bar{w}}{\bar{w}}
\newcommand{\bar{x}}{\bar{x}}
\newcommand{\bar{y}}{\bar{y}}
\newcommand{\bar{z}}{\bar{z}}
\newcommand{\bar{A}}{\bar{A}}
\newcommand{\bar{B}}{\bar{B}}
\newcommand{\bar{C}}{\bar{C}}
\newcommand{\bar{D}}{\bar{D}}
\newcommand{\bar{E}}{\bar{E}}
\newcommand{\bar{F}}{\bar{F}}
\newcommand{\bar{G}}{\bar{G}}
\newcommand{\bar{H}}{\bar{H}}
\newcommand{\bar{I}}{\bar{I}}
\newcommand{\bar{J}}{\bar{J}}
\newcommand{\bar{K}}{\bar{K}}
\newcommand{\bar{L}}{\bar{L}}
\newcommand{\bar{M}}{\bar{M}}
\newcommand{\bar{N}}{\bar{N}}
\newcommand{\bar{O}}{\bar{O}}
\newcommand{\bar{P}}{\bar{P}}
\newcommand{\bar{Q}}{\bar{Q}}
\newcommand{\bar{R}}{\bar{R}}
\newcommand{\bar{S}}{\bar{S}}
\newcommand{\bar{T}}{\bar{T}}
\newcommand{\bar{U}}{\bar{U}}
\newcommand{\bar{V}}{\bar{V}}
\newcommand{\bar{W}}{\bar{W}}
\newcommand{\bar{X}}{\bar{X}}
\newcommand{\bar{Y}}{\bar{Y}}
\newcommand{\bar{Z}}{\bar{Z}}
\newcommand{\bar{\mu}}{\bar{\mu}}
\newcommand{\bar{\rho}}{\bar{\rho}}
\newcommand{\bm{\alpha}}{\bm{\alpha}}
\newcommand{\bm{\beta}}{\bm{\beta}}
\newcommand{\bm{\gamma}}{\bm{\gamma}}
\newcommand{\bm{\delta}}{\bm{\delta}}
\newcommand{\bm{\theta}}{\bm{\theta}}
\newcommand{\bm{\tau}}{\bm{\tau}}
\newcommand{\bm{\pi}}{\bm{\pi}}
\newcommand{\bm{\epsilon}}{\bm{\epsilon}}
\newcommand{\varepsilon}{\varepsilon}
\newcommand{\bm{\varepsilon}}{\bm{\varepsilon}}
\newcommand{\bm{\sigma}}{\bm{\sigma}}
\newcommand{\bm{\zeta}}{\bm{\zeta}}
\newcommand{\bm{\eta}}{\bm{\eta}}
\newcommand{\bm{\kappa}}{\bm{\kappa}}
\newcommand{\bm{\chi}}{\bm{\chi}}
\newcommand{\bm{\phi}}{\bm{\phi}}
\newcommand{\bm{\psi}}{\bm{\psi}}
\newcommand{\bm{\omega}}{\bm{\omega}}
\newcommand{\bm{\xi}}{\bm{\xi}}
\newcommand{\bm{\lambda}}{\bm{\lambda}}
\newcommand{\bm{\rho}}{\bm{\rho}}
\newcommand{\bm{\Gamma}}{\bm{\Gamma}}
\newcommand{\bm{\Lambda}}{\bm{\Lambda}}
\newcommand{\bSigma }{\bm{\Sigma}}
\newcommand{\bm{\Psi}}{\bm{\Psi}}
\newcommand{\bm{\Delta}}{\bm{\Delta}}
\newcommand{\bm{\Xi}}{\bm{\Xi}}
\newcommand{\bm{\Upsilon}}{\bm{\Upsilon}}
\newcommand{\bm{\Omega}}{\bm{\Omega}}
\newcommand{\bm{\Phi}}{\bm{\Phi}}
\newcommand{\bm{\Pi}}{\bm{\Pi}}
\newcommand{\bm{\Theta}}{\bm{\Theta}}
\newcommand{\bar{\bomega}}{\bar{\bm{\omega}}}
\newcommand{\widetilde{\blambda}}{\widetilde{\bm{\lambda}}}
\newcommand{\widetilde{\alpha}}{\widetilde{\alpha}}
\newcommand{\widetilde{\beta}}{\widetilde{\beta}}
\newcommand{\widetilde{\gamma}}{\widetilde{\gamma}}
\newcommand{\widetilde{\Gamma}}{\widetilde{\Gamma}}
\newcommand{\widetilde{\delta}}{\widetilde{\delta}}
\newcommand{\widetilde{\theta}}{\widetilde{\theta}}
\newcommand{\widetilde{\tau}}{\widetilde{\tau}}
\newcommand{\widetilde{\pi}}{\widetilde{\pi}}
\newcommand{\widetilde{\epsilon}}{\widetilde{\epsilon}}
\newcommand{\widetilde{\varepsilon}}{\widetilde{\varepsilon}}
\newcommand{\widetilde{\sigma}}{\widetilde{\sigma}}
\newcommand{\widetilde{\zeta}}{\widetilde{\zeta}}
\newcommand{\widetilde{\eta}}{\widetilde{\eta}}
\newcommand{\widetilde{\kappa}}{\widetilde{\kappa}}
\newcommand{\widetilde{\chi}}{\widetilde{\chi}}
\newcommand{\widetilde{\phi}}{\widetilde{\phi}}
\newcommand{\widetilde{\psi}}{\widetilde{\psi}}
\newcommand{\widetilde{\omega}}{\widetilde{\omega}}
\newcommand{\widetilde{\xi}}{\widetilde{\xi}}
\newcommand{\widetilde{\lambda}}{\widetilde{\lambda}}
\newcommand{\widetilde{\rho}}{\widetilde{\rho}}
\newcommand{\widetilde{\nu}}{\widetilde{\nu}}
\newcommand{\widetilde{\iota}}{\widetilde{\iota}}
\newcommand{\widetilde{\bdelta}}{\widetilde{\bm{\delta}}}
\newcommand{\widetilde{\Delta}}{\widetilde{\Delta}}
\newcommand{\widetilde{\mu}}{\widetilde{\mu}}
\newcommand{\widetilde{\bx}}{\widetilde{\mathbf{x}}}
\newcommand{\widehat{\alpha}}{\widehat{\alpha}}
\newcommand{\widehat{\beta}}{\widehat{\beta}}
\newcommand{\widehat{\gamma}}{\widehat{\gamma}}
\newcommand{\widehat{\delta}}{\widehat{\delta}}
\newcommand{\widehat{\theta}}{\widehat{\theta}}
\newcommand{\widehat{\tau}}{\widehat{\tau}}
\newcommand{\widehat{\pi}}{\widehat{\pi}}
\newcommand{\widehat{\epsilon}}{\widehat{\epsilon}}
\newcommand{\widehat{\varepsilon}}{\widehat{\varepsilon}}
\newcommand{\widehat{\sigma}}{\widehat{\sigma}}
\newcommand{\widehat{\zeta}}{\widehat{\zeta}}
\newcommand{\widehat{\eta}}{\widehat{\eta}}
\newcommand{\widehat{\kappa}}{\widehat{\kappa}}
\newcommand{\widehat{\chi}}{\widehat{\chi}}
\newcommand{\widehat{\phi}}{\widehat{\phi}}
\newcommand{\widehat{\psi}}{\widehat{\psi}}
\newcommand{\widehat{\omega}}{\widehat{\omega}}
\newcommand{\widehat{\xi}}{\widehat{\xi}}
\newcommand{\widehat{\lambda}}{\widehat{\lambda}}
\newcommand{\widehat{\rho}}{\widehat{\rho}}
\newcommand{\underline{a}}{\underline{a}}
\newcommand{\underline{A}}{\underline{A}}
\newcommand{\underline{b}}{\underline{b}}
\newcommand{\underline{B}}{\underline{B}}
\newcommand{\underline{c}}{\underline{c}}
\newcommand{\underline{C}}{\underline{C}}
\newcommand{\underline{d}}{\underline{d}}
\newcommand{\underline{D}}{\underline{D}}
\newcommand{\underline{e}}{\underline{e}}
\newcommand{\underline{E}}{\underline{E}}
\newcommand{\underline{f}}{\underline{f}}
\newcommand{\underline{F}}{\underline{F}}
\newcommand{\underline{g}}{\underline{g}}
\newcommand{\underline{G}}{\underline{G}}
\newcommand{\underline{h}}{\underline{h}}
\newcommand{\underline{\bh}}{\underline{\mathbf{h}}}
\newcommand{\underline{H}}{\underline{H}}
\newcommand{\underline{i}}{\underline{i}}
\newcommand{\underline{I}}{\underline{I}}
\newcommand{\underline{j}}{\underline{j}}
\newcommand{\underline{J}}{\underline{J}}
\newcommand{\underline{k}}{\underline{k}}
\newcommand{\underline{K}}{\underline{K}}
\newcommand{\underline{l}}{\underline{l}}
\newcommand{\underline{L}}{\underline{L}}
\newcommand{\underline{m}}{\underline{m}}
\newcommand{\underline{M}}{\underline{M}}
\newcommand{\underline{n}}{\underline{n}}
\newcommand{\underline{N}}{\underline{N}}
\newcommand{\underline{o}}{\underline{o}}
\newcommand{\underline{O}}{\underline{O}}
\newcommand{\underline{P}}{\underline{P}}
\newcommand{\underline{q}}{\underline{q}}
\newcommand{\underline{Q}}{\underline{Q}}
\newcommand{\underline{r}}{\underline{r}}
\newcommand{\underline{R}}{\underline{R}}
\newcommand{\underline{s}}{\underline{s}}
\newcommand{\underline{S}}{\underline{S}}
\newcommand{\underline{t}}{\underline{t}}
\newcommand{\underline{T}}{\underline{T}}
\newcommand{\underline{u}}{\underline{u}}
\newcommand{\underline{U}}{\underline{U}}
\newcommand{\underline{v}}{\underline{v}}
\newcommand{\underline{V}}{\underline{V}}
\newcommand{\underline{w}}{\underline{w}}
\newcommand{\underline{W}}{\underline{W}}
\newcommand{\underline{x}}{\underline{x}}
\newcommand{\underline{X}}{\underline{X}}
\newcommand{\underline{y}}{\underline{y}}
\newcommand{\underline{Y}}{\underline{Y}}
\newcommand{\underline{z}}{\underline{z}}
\newcommand{\underline{Z}}{\underline{Z}}
\newcommand{\underline{\bE}}{\underline{\mathbf{E}}}
\newcommand{\underline{\bW}}{\underline{\mathbf{W}}}
\newcommand{\underline{\bH}}{\underline{\mathbf{H}}}
\newcommand{\underline{\lambda}}{\underline{\lambda}}
\newcommand{\dot{B}}{\dot{B}}
\newcommand{\dot{c}}{\dot{c}}
\newcommand{\dot{P}}{\dot{P}}
\newcommand{\dot{L}}{\dot{L}}
\newcommand{\dot{\bA}}{\dot{\mathbf{A}}}
\newcommand{\dot{\bx}}{\dot{\mathbf{x}}}
\newcommand{\dot{\by}}{\dot{\mathbf{y}}}
\newcommand{\dot{\bz}}{\dot{\mathbf{z}}}
\def\, \cdot \,{\, \cdot \,}
\def\, \diamond \,{\, \diamond \,}
\def\, \star \,{\, \star \,}
\newcommand{\eexp}[1]{e^{#1}}
\newcommand{i.i.d.\ }{i.i.d.\ }
\newcommand{\stackrel{\mathrm{p}}{\longrightarrow}}{\stackrel{\mathrm{p}}{\longrightarrow}}
\newcommand{\stackrel{\mathrm{w.p.1}}{\longrightarrow}}{\stackrel{\mathrm{w.p.1}}{\longrightarrow}}
\newcommand{\xrightarrow{\mathrm{a.s.}}}{\xrightarrow{\mathrm{a.s.}}}
\newcommand{\stackrel{\mathrm{d}}{\longrightarrow}}{\stackrel{\mathrm{d}}{\longrightarrow}}
\newcommand{\stackrel{\mathrm{D}}{\longrightarrow}}{\stackrel{\mathrm{D}}{\longrightarrow}}
\newcommand{\ceil}[1]{\lceil{#1}\rceil}
\newcommand{\floor}[1]{\lfloor{#1}\rfloor}
\newcommand{\lrangle}[2]{\left\langle{#1},{#2}\right\rangle}
\newcommand{\lranglet}[2]{\langle{#1},{#2}\rangle}
\newcommand{\stackrel{.}{\leq}}{\stackrel{.}{\leq}}
\newcommand{\stackrel{.}{<}}{\stackrel{.}{<}}
\newcommand{\stackrel{.}{\geq}}{\stackrel{.}{\geq}}
\newcommand{\stackrel{.}{>}}{\stackrel{.}{>}}
\newcommand{\stackrel{\,..}{=}}{\stackrel{\,..}{=}}
\newcommand{\stackrel{\rm(a)}{=}}{\stackrel{\rm(a)}{=}}
\newcommand{\stackrel{\rm(b)}{=}}{\stackrel{\rm(b)}{=}}
\newcommand{\stackrel{\rm(c)}{=}}{\stackrel{\rm(c)}{=}}
\newcommand{\stackrel{\rm(d)}{=}}{\stackrel{\rm(d)}{=}}
\newcommand{\stackrel{\rm(e)}{=}}{\stackrel{\rm(e)}{=}}
\newcommand{\stackrel{\rm(f)}{=}}{\stackrel{\rm(f)}{=}}
\newcommand{\stackrel{\rm(g)}{=}}{\stackrel{\rm(g)}{=}}
\newcommand{\stackrel{\rm(h)}{=}}{\stackrel{\rm(h)}{=}}
\newcommand{\stackrel{\rm(a)}{\le}}{\stackrel{\rm(a)}{\le}}
\newcommand{\stackrel{\rm(b)}{\le}}{\stackrel{\rm(b)}{\le}}
\newcommand{\stackrel{\rm(c)}{\le}}{\stackrel{\rm(c)}{\le}}
\newcommand{\stackrel{\rm(d)}{\le}}{\stackrel{\rm(d)}{\le}}
\newcommand{\stackrel{\rm(e)}{\le}}{\stackrel{\rm(e)}{\le}}
\newcommand{\stackrel{\rm(f)}{\le}}{\stackrel{\rm(f)}{\le}}
\newcommand{\stackrel{\rm(g)}{\le}}{\stackrel{\rm(g)}{\le}}
\newcommand{\stackrel{\rm(h)}{\le}}{\stackrel{\rm(h)}{\le}}
\newcommand{\les}[1]{\stackrel{#1}{\le}}
\newcommand{\stackrel{\rm(a)}{\ge}}{\stackrel{\rm(a)}{\ge}}
\newcommand{\stackrel{\rm(b)}{\ge}}{\stackrel{\rm(b)}{\ge}}
\newcommand{\stackrel{\rm(c)}{\ge}}{\stackrel{\rm(c)}{\ge}}
\newcommand{\stackrel{\rm(d)}{\ge}}{\stackrel{\rm(d)}{\ge}}
\newcommand{\stackrel{\rm(e)}{\ge}}{\stackrel{\rm(e)}{\ge}}
\newcommand{\stackrel{\rm(f)}{\ge}}{\stackrel{\rm(f)}{\ge}}
\newcommand{\stackrel{\rm(g)}{\ge}}{\stackrel{\rm(g)}{\ge}}
\newcommand{\stackrel{\rm(h)}{\ge}}{\stackrel{\rm(h)}{\ge}}
\newcommand{P_{\mathrm{e}}^{(n)}}{P_{\mathrm{e}}^{(n)}}
\newcommand{P_{\mathrm{e}, 1}^{(n)}}{P_{\mathrm{e}, 1}^{(n)}}
\newcommand{P_{\mathrm{e}, 2}^{(n)}}{P_{\mathrm{e}, 2}^{(n)}}
\DeclareMathOperator*{\argmax}{arg\,max}
\DeclareMathOperator*{\argmin}{arg\,min}
\DeclareMathOperator*{\argsup}{arg\,sup}
\DeclareMathOperator*{\arginf}{arg\,inf}
\DeclareMathOperator{\minimize}{minimize}
\DeclareMathOperator{\maximize}{maximize}
\DeclareMathOperator{\st}{s.t.}
\DeclareMathOperator{\erfc}{erfc}
\DeclareMathOperator{\cum}{cum}
\DeclareMathOperator{\sgn}{sgn}
\DeclareMathOperator{\tr}{tr}
\DeclareMathOperator{\spn}{span}
\DeclareMathOperator{\supp}{supp}
\DeclareMathOperator{\adj}{adj}
\DeclareMathOperator{\var}{\mathsf{Var}}
\DeclareMathOperator{\Vol}{Vol}
\DeclareMathOperator{\cov}{\mathsf{Cov}}
\DeclareMathOperator{\sech}{sech}
\DeclareMathOperator{\sinc}{sinc}
\DeclareMathOperator{\rank}{rank}
\DeclareMathOperator{\poly}{poly}
\DeclareMathOperator{\polylog}{polylog}
\DeclareMathOperator{\vect}{vec}
\newcommand{\Hb}{H_{\mathrm{b}}}
\newcommand{\mathrm{Bern}}{\mathrm{Bern}}
\DeclareMathOperator*{\lms}{l.i.m.\,}
\newcommand{\varop}[1]{\var\left[{#1}\right]}
\newcommand{\covop}[2]{\cov\left({#1},{#2}\right)}
\newcommand{\mathbf{0}}{\mathbf{0}}
\newcommand{\mathbf{1}}{\mathbf{1}}
\def\independenT#1#2{\mathrel{\rlap{$#1#2$}\mkern5mu{#1#2}}}
\newcommand\indep{\protect\mathpalette{\protect\independenT}{\perp}}
\newtheorem{prop}{Proposition}
\newcommand{\qednew}{\nobreak \ifvmode \relax \else
\ifdim\lastskip<1.5em \hskip-\lastskip
\hskip1.5em plus0em minus0.5em \fi \nobreak
\vrule height0.75em width0.5em depth0.25em\fi}
\label{sec:intro}
Several real-world systems, such as social interactions, brain connectomes, epidemic spread, recommender systems, and traffic patterns, are best represented by structured data in the form of large graphs. Graph neural networks (GNNs) are deep neural network architectures that leverage these graph structures to learn meaningful representations of node and edge data \cite{kipf17-classifgcnn,hamilton2017inductive,defferrard17-cnngraphs,gama18-gnnarchit}. GNNs have shown remarkable empirical performance in a number of graph machine learning tasks, but can be hard to train on large-graph data. Recent research efforts have attempted to understand the large-graph behavior of GNNs, and in particular why GNNs trained on small graphs scale well to large networks \cite{ruiz20-transf,levie2019transferability,keriven2020convergence}. However, the specific learning dynamics of a GNN trained directly on the large network, a setting known to be challenging in practice, are not as well understood.
Modern deep neural networks (DNNs) are typically overparametrized. The benefits of overparametrization include faster convergence \cite{allen2019convergence, arora2018optimization} and better generalization \cite{cao2019generalization}, but on the other hand
the parameters can be difficult to interpret and the learning dynamics harder to understand. A remarkable contribution of \cite{jacot2018neural} was the observation that, in the infinite width limit, learning the weights of a DNN via gradient descent reduces to kernel regression with a deterministic and fixed kernel called the \emph{neural tangent kernel} (NTK), which captures the first-order approximation of the neural network's evolution during gradient descent \cite{lee2019wide}.
Since its introduction, the NTK has been an important and widely-studied tool in the machine learning toolbox, and kernel regression using the NTK has shown strong performance on small datasets \cite{arora2019harnessing}.
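In finite width, the NTK described above can be computed directly from parameter gradients: the feature map of input $\bbx$ is the gradient of the network output with respect to all weights, and the kernel is the inner product of two such gradients. The sketch below is our own illustration (not code from the cited works) for a one-hidden-layer ReLU network with $m$ hidden units and NTK-style $1/\sqrt{m}$ scaling; all names are illustrative.

```python
import numpy as np

def empirical_ntk(x, xp, W1, w2):
    """Empirical NTK Theta(x, x') = <grad_params f(x), grad_params f(x')>
    for the network f(x) = w2 . relu(W1 x) / sqrt(m)."""
    m = w2.shape[0]
    h, hp = W1 @ x, W1 @ xp
    a, ap = np.maximum(h, 0.0), np.maximum(hp, 0.0)
    s, sp = (h > 0).astype(float), (hp > 0).astype(float)
    # Gradient w.r.t. the output weights w2 is relu(W1 x) / sqrt(m).
    out_term = (a @ ap) / m
    # Gradient w.r.t. row i of W1 is w2[i] * 1[h_i > 0] * x / sqrt(m).
    hid_term = np.sum(w2**2 * s * sp) * (x @ xp) / m
    return out_term + hid_term
```

Because the kernel is an inner product of gradient feature maps, any Gram matrix built from it is symmetric and positive semidefinite by construction.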
In the graph case, it is straightforward to define the NTK associated with a GNN, or the graph neural tangent kernel (GNTK) \cite{NEURIPS2019_663fd3c5}. The GNTK allows studying the training dynamics of the GNN when the number of features (the analog of width in the DNN) is large. Nonetheless, the GNTK does not give much insight into the case in which \textit{the underlying graph is large}, which can itself be thought of as another ``width dimension''. Herein, we propose to understand the training dynamics of GNNs that are wide in both of these senses by combining the NTK formalism with the theory of graphons \cite{lovasz2012large,borgs2008convergent}.
A graphon is a symmetric bounded measurable function~$\bbW: [0,1]^2 \rightarrow [0,1]$ representing the limit of a sequence of dense graphs. It can also be interpreted as a random graph model, in which case we can use the graphon to sample stochastic graphs. The interpretation of $\bbW$ as both a graph limit and a random graph model makes it so that each graphon defines a family of similar graphs. Hence, one can expect properties of a graphon to generalize, in a probabilistic sense, the properties of graphs belonging to its family. Graphons have been used to study the limit behavior of GNNs, which converge to so-called graphon neural networks (WNNs) \cite{ruiz20-transf}. The fact that GNNs have a limit on the graphon implies that they are transferable across graphs in the same family, thus allowing a GNN to be trained on a graph of moderate size and transferred to a larger graph.
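The random-graph interpretation of $\bbW$ can be made concrete as follows: draw latent node positions $u_i \sim \mathrm{Unif}[0,1]$ and connect nodes $i$ and $j$ with probability $\bbW(u_i, u_j)$. A minimal sampling sketch (the particular graphon used here is just an example of ours):

```python
import numpy as np

def sample_graph(W, n, rng):
    """Sample an n-node undirected graph from a graphon W: [0,1]^2 -> [0,1]."""
    u = rng.uniform(size=n)                       # latent node positions
    P = W(u[:, None], u[None, :])                 # edge-probability matrix
    A = (rng.uniform(size=(n, n)) < P).astype(float)
    A = np.triu(A, 1)                             # upper triangle, no self-loops
    return A + A.T                                # symmetrize

# Example graphon: edge probability decays with latent distance.
W = lambda u, v: 0.8 * np.exp(-3.0 * np.abs(u - v))
A = sample_graph(W, 50, np.random.default_rng(0))
```

Graphs sampled this way for increasing $n$ form a sequence converging to $\bbW$, which is the regime in which our convergence results apply.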
\subsection{Contributions}
In this paper, our first contribution is to define the graphon NTK (WNTK) associated with the WNN (Sec. \ref{sec:ntk}).
We then prove, using mathematical induction, that the GNTK converges to the WNTK (Thm. \ref{theorem1}). In practice, this implies that GNTKs, like GNNs, are transferable across graphs of different sizes associated with the same graphon. That is to say, one can subsample a small graph and the corresponding data from a large graph, then fit the subsampled data to the small-graph GNTK via kernel regression, and then transfer the fitted model to the large-graph GNTK.
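The transfer procedure described above amounts to kernel ridge regression with the small-graph GNTK Gram matrix. A minimal sketch, assuming the Gram matrix \texttt{K} on the subsampled nodes and the cross-kernel \texttt{K\_cross} between new and subsampled nodes have been computed elsewhere (e.g., from a GNTK recursion):

```python
import numpy as np

def kernel_ridge_fit(K, y, reg=1e-3):
    """Solve (K + reg*I) alpha = y for the dual coefficients alpha."""
    n = K.shape[0]
    return np.linalg.solve(K + reg * np.eye(n), y)

def kernel_ridge_predict(K_cross, alpha):
    """Predictions at new points are K_cross @ alpha."""
    return K_cross @ alpha
```

Any positive semidefinite kernel can be plugged in; transferability means that the GNTK fitted on the small graph remains a good kernel for the large graph from the same graphon family.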
A more important implication of the convergence of the GNTK is that it is possible to understand the training dynamics of GNNs on large graphs by analyzing the behavior of the corresponding GNTK in graphs of moderate size.
For instance, the eigenvalues of the NTK are associated with the speed of convergence along the corresponding eigendirections \cite{jacot2018neural}. In Thm. \ref{theorem2}, we show that the eigenvalues of the graph GNTK, which indicate the directions of fastest convergence of the GNN, converge to the eigenvalues of the WNTK. This allows these eigenvalues to be estimated from GNTKs associated with smaller graphs. Lastly, we verify our theoretical results in two numerical applications: prediction of opinion dynamics on random graphs and node classification on the Cora, CiteSeer and PubMed networks. We observe the convergence of the GNTK (Sec. \ref{sbs:sims_conv}), the effect of width in kernel regression and GNN training (Sec. \ref{sbs:wide_sims}), and the convergence of the GNTK eigenvalues on sequences of graphs (Sec. \ref{sbs:eig_sims}).
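The role of the kernel eigenvalues can be seen in the linearized training dynamics: with kernel gradient descent at learning rate $\eta$ and zero initialization, the residual along eigenvector $\bbv_i$ of the kernel decays as $(1 - \eta\lambda_i)^t$, so larger eigenvalues correspond to faster-converging directions. A small sketch of this standard computation:

```python
import numpy as np

def mode_wise_residuals(K, y, lr, steps):
    """Residuals of linearized kernel gradient descent with f_0 = 0.
    In the eigenbasis of K, component i at step t is (1 - lr*lam_i)^t * r0_i."""
    lam, V = np.linalg.eigh(K)          # eigenvalues ascending
    r0 = -(V.T @ y)                     # initial residual f_0 - y in eigenbasis
    return np.stack([(1.0 - lr * lam) ** t * r0 for t in range(steps)])
```

Since Thm. \ref{theorem2} shows the GNTK eigenvalues converge to the WNTK eigenvalues, these decay rates can be estimated on moderately sized graphs.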
\section{Related Work}
\label{sec:related}
\noindent \textbf{Neural tangent kernels.} The connection between infinitely wide DNNs and kernel methods (Gaussian processes) has been known since the 1990s \cite{neal1996priors, williams1996computing}, but a more theoretical formulation was presented in \cite{jacot2018neural}, which introduced the NTK and proved its constancy property in the infinite width limit, with \cite{liu2020linearity} later showing that constancy only holds for architectures with a linear output layer. Several works have derived NTKs for generalized classes of neural networks, including convolutional neural networks \cite{arora2019exact,li2019enhanced}, ResNets \cite{huang2020deep} and, most closely related to our work, GNNs \cite{NEURIPS2019_663fd3c5}.
\noindent \textbf{Graphons in deep learning.} Graphons have been used to understand graph neural network convergence and transferability \cite{ruiz20-transf,ruiz2020graphonsp,maskey2023transferability}, to analyze the generalization properties of GNNs in large graphs \cite{maskey2022generalization}, and to propose more computationally efficient training algorithms for large-scale GNNs \cite{cervino2022training}. More recently, \cite{xia2022implicit} proposed implicit graphon neural representations, which use neural networks to estimate graphons.
\section{Graph and Graphon Neural Networks}
\label{sec:prelim}
Let $\bbG_n=(\ccalV,\ccalE,w)$ be a graph where $\ccalV$, $|\ccalV|=n$, is the set of nodes or vertices, $\ccalE \subseteq \ccalV \times \ccalV$ is the set of edges and $w:\ccalE\to\reals$ is a map assigning weights to the edges in $\ccalE$. The \textit{size-normalized} adjacency matrix of $\bbG_n$, denoted $\bbA_n \in \reals^{n\times n}$, is given by $[\bbA_n]_{ij}=w(i,j)/n$. In this paper we focus on undirected graphs $\bbG_n$, with symmetric $\bbA_n$.
\subsection{Graph Neural Networks}
Supported on the graph $\bbG_n$ with $n$ nodes, we define node data $\bbx_n \in \reals^n$---also called \textit{graph signals} \cite{shuman13-mag}---where $[\bbx_n]_i$ is the value of the signal at node $i$.
GNNs iteratively update each node's data by aggregating the data from its neighbors using \textit{graph convolutions}.
An order-$K$ graph convolution is defined as
\begin{equation} \label{eqn:graph_convolution}
\bby_n = H(\bbx_n) = \sum_{k=0}^{K-1} h_k \bbA_n^k \bbx_n
\end{equation}
where $h_0,\dots,h_{K-1}$ are the convolution coefficients. When $K=2$ and $\bbA_n$ is binary, \eqref{eqn:graph_convolution} can be seen as an aggregation operation akin to the $\texttt{AGGREGATE}$ operation in, e.g., \cite{xu2018how,hamilton2017inductive}, or as the message-passing operation in message-passing neural networks (MPNNs) \cite{gilmer2017neural}.
More generally, let $\bbX_n \in \reals^{n \times F}$ and $\bbY_n \in \reals^{n \times G}$ be $F$- and $G$-dimensional signals respectively, where the $f$th ($g$th) column is a \textit{feature} $\bbx_n^f$ ($\bby_n^g$).
In this case, the graph convolution generalizes to
\begin{equation} \label{eqn:gen_graph_convolution}
\bbY_n = H(\bbX_n) = \sum_{k=0}^{K-1} \bbA_n^k \bbX_n \bbH_k
\end{equation}
with weights $\bbH_0, \bbH_1, \ldots, \bbH_{K-1} \in \reals^{F \times G}$. Note that the numbers of parameters of the graph convolutions in \eqref{eqn:graph_convolution} and \eqref{eqn:gen_graph_convolution}, $K$ and $KFG$ respectively, are independent of $n$.
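For concreteness, the generalized convolution \eqref{eqn:gen_graph_convolution} can be implemented by accumulating successive applications of $\bbA_n$ to $\bbX_n$, which avoids ever forming the powers $\bbA_n^k$ explicitly. The following NumPy sketch is purely illustrative; the function name, dimensions, and random example data are our own choices.

```python
import numpy as np

def graph_convolution(A, X, H):
    """Order-K graph convolution Y = sum_k A^k X H_k.

    A: (n, n) size-normalized adjacency; X: (n, F) graph signal;
    H: list of K weight matrices, each of shape (F, G).
    """
    Y = np.zeros((X.shape[0], H[0].shape[1]))
    AkX = X                       # A^0 X = X
    for Hk in H:
        Y = Y + AkX @ Hk          # accumulate the term A^k X H_k
        AkX = A @ AkX             # next power of A applied to X
    return Y

rng = np.random.default_rng(0)
n, F, G, K = 8, 3, 2, 2
W = rng.random((n, n)); W = (W + W.T) / 2     # symmetric edge weights
A = W / n                                      # size-normalized adjacency
X = rng.standard_normal((n, F))
H = [rng.standard_normal((F, G)) for _ in range(K)]
Y = graph_convolution(A, X, H)
```

Note that the cost is $K$ (sparse) matrix products, and the parameter count $KFG$ does not grow with $n$, consistent with the discussion above.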
A GNN consists of $L$ layers, each of which composes a graph convolution and a nonlinear activation function. Explicitly, the $l$th layer of a GNN can be written as
\begin{align} \label{eqn:gcn_layer}
\begin{split}
\bbX_{l,n} &= \sigma \left(\bbU_{l,n}\right) \\
\bbU_{l,n} &= H(\bbX_{l-1,n}) = \sum_{k=0}^{K-1} \bbA_n^k \bbX_{l-1,n} \bbH_{l,k}
\end{split}
\end{align}
where $\bbX_{l-1,n} \in \reals^{n \times F_{l-1}}$ is the layer input, $\bbX_{l,n} \in \reals^{n \times F_l}$ is the layer output, $\bbH_{l,k} \in \reals^{F_{l-1} \times F_l}$ are the layer weights and $\sigma$ is a pointwise nonlinear activation function, i.e., $[\sigma(\bbX)]_{ij}=\sigma([\bbX]_{ij})$. Typical choices for $\sigma$ include ReLU, tanh, and sigmoid.
At the first layer of the GNN, the input $\bbX_{0,n}$ is the input data $\bbX_n$. Similarly, the GNN output $\bbY_n$ is given by the last layer output $\bbX_{L,n}$.
In the following, we represent the entire GNN consisting of the concatenation of $L$ layers like \eqref{eqn:gcn_layer} as the parametric map $\bbY_n = f(\bbX_n; \bbA_n, \ccalH)$, where $\ccalH = \{\bbH_{l,k}\}_{l,k}$ groups the learnable weights $\bbH_{l,k}$ at all layers. This more concise representation highlights the independence between the adjacency matrix $\bbA_n$ (i.e., the graph) and the parameters $\ccalH$, which the GNN inherits from the graph convolution.
\subsection{Graphon Neural Networks}
Graphons are bounded, symmetric, measurable functions~$\bbW:[0,1]^2\to [0,1]$ representing limits of sequences of dense graphs \cite{lovasz2012large,borgs2008convergent}.
A graph sequence $\{\bbG_n\}$ converges to a graphon in the sense that the densities of homomorphisms of any finite, unweighted and undirected graph $\bbF=(\ccalV',\ccalE')$ into $\bbG_n$ converge to the densities of homomorphisms of $\bbF$ into $\bbW$. Explicitly,
let $\mbox{hom}(\bbF,\bbG_n)$ denote the total number of homomorphisms of $\bbF$ into $\bbG_n$.
The density of such homomorphisms is given by $t(\bbF,\bbG_n) = {\mbox{hom}(\bbF,\bbG_n)}/{n^{|\ccalV'|}}$, and we can similarly define $t(\bbF,\bbW)$ (see \cite{borgs2008convergent}). Then, we say that $\bbG_n \to \bbW$ if and only if
\begin{equation}\label{eqn:graph_convergence}
\lim_{n \to \infty} t(\bbF,\bbG_n) = t(\bbF, \bbW)
\end{equation}
for all simple $\bbF$ \cite{lovasz2006limits}.
Alternatively, the graphon can also be seen as a generative model for stochastic (also called $\bbW$-random) graphs. Nodes are picked by sampling points $u_i$, $1 \leq i \leq n$, from the unit interval and edges are drawn between nodes $i$ and $j$ with probability $\bbW(u_i,u_j)$. Importantly, sequences of stochastic graphs generated in this way converge almost surely to the graphon \cite[Cor. 10.4]{lovasz2012large}.
The notion of graph signals is extended to graphons by defining graphon signals, which are functions $X:[0,1] \to\reals$ \cite{ruiz2020graphonsp}. We restrict attention to graphon signals with finite energy, i.e., $X \in L^2([0,1])$.
Analogously to \eqref{eqn:gen_graph_convolution}, given $F$- and $G$-dimensional graphon signals $X:[0,1]\to\reals^F$ and $Y:[0,1]\to\reals^G$, the graphon convolution is defined as \cite{ruiz2020graphonsp}
\begin{align} \label{eqn:gen_graphon_convolution}
\begin{split}
Y = T_H X &= \sum_{k=0}^{K-1} T_{W}^{(k)} X \bbH_k \\
\left(T_{W}^{(k)}X\right)(v) &= \int_0^1 \bbW(u,v)\left(T_{W}^{(k-1)} X\right)(u)\,du
\end{split}
\end{align}
where $T_{W}^{(0)}=\bbI$ is the identity and the weights are collected in the matrices $\bbH_0, \bbH_1, \ldots, \bbH_{K-1} \in \reals^{F \times G}$.
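For intuition, the single-feature ($F=G=1$) graphon convolution can be approximated numerically by discretizing the integral operator $T_W$ on a grid. The NumPy sketch below is illustrative only; the constant example graphon and signal at the end are our own choices, picked because the result is easy to verify by hand.

```python
import numpy as np

def graphon_convolution(W, X, h, m=1000):
    """Approximate Y = sum_k h_k T_W^{(k)} X on an m-point grid.

    W: callable graphon [0,1]^2 -> [0,1]; X: callable signal on [0,1];
    h: convolution coefficients h_0, ..., h_{K-1}.
    """
    u = (np.arange(m) + 0.5) / m            # midpoint quadrature nodes
    Wm = W(u[:, None], u[None, :]) / m      # discretized operator, du = 1/m
    x = X(u)
    y = np.zeros(m)
    Tk_x = x                                 # T_W^{(0)} X = X
    for hk in h:
        y = y + hk * Tk_x
        Tk_x = Wm @ Tk_x                     # apply T_W one more time
    return u, y

# Example: constant graphon W = 1 and constant signal X = 1, for which
# T_W X = X and hence y = (h_0 + h_1) X = 1.5 everywhere.
u, y = graphon_convolution(lambda s, t: np.ones_like(s * t),
                           lambda s: np.ones_like(s), h=[1.0, 0.5])
```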
The extension of the GNN to graphon data is the graphon neural network (WNN). Akin to the GNN, the WNN is formed by $L$ layers each of which composes a graphon convolution and a nonlinear activation function. Explicitly, the $l$th layer of the WNN is given by
\begin{align}\label{eqn:wcn_layer}
\begin{split}
X_{l} &= \sigma\left(U_l\right) \\
U_l &= T_{H_l} X_{l-1} = \sum_{k=0}^{K-1} T_{W}^{(k)} X_{l-1}\bbH_{l,k}
\end{split}
\end{align}
where $X_{l-1}: [0,1] \to \reals^{F_{l-1}}$ is the layer input, $X_{l}: [0,1] \to \reals^{F_{l}}$ is its output, $\bbH_{l,k} \in \reals^{F_{l-1}\times F_l}$ are its weights and $\sigma$ is a pointwise nonlinearity (e.g., the ReLU).
The input of the first layer of the WNN is $X_{0}=X$, and the output of the WNN is the last layer output, i.e., $Y=X_L$.
We can describe the WNN more compactly as the map $Y = f(X;\bbW,\ccalH)$, with $\ccalH = \{\bbH_{l,k}\}_{l,k}$ the set of learnable parameters at all layers. Note that, if the weights $\ccalH$ are the same, the WNN map $f(X;\bbW,\ccalH)$ has the same parametrization as the GNN map $f(\bbx_n;\bbA_n,\ccalH)$, with $\bbW$ in place of $\bbA_n$. This is important because it implies that, similarly to how graphons are generative models for graphs, \emph{WNNs are generative models for GNNs}. Indeed, we can use the WNN $f(X;\bbW,\ccalH)$ to sample the GNN $\bby_n = f(\bbx_n;\bbA_n,\ccalH)$ with
\begin{align} \label{eqn:gcn_obtained}
&[\bbA_n]_{ij} \sim \mbox{Ber}(\bbW(u_i,u_j))\nonumber \\
&[\bbx_n]_i = X(u_i)
\end{align}
where the $u_i$ are sampled uniformly and independently at random from $[0,1]$ and $\mbox{Ber}(p)$ denotes the Bernoulli distribution with parameter $p$.
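The sampling procedure in \eqref{eqn:gcn_obtained} is straightforward to implement. In the NumPy sketch below, the exponential graphon and cosine signal are arbitrary choices made for illustration only.

```python
import numpy as np

def sample_graph_signal(W, X, n, rng):
    """Sample an n-node stochastic graph and node signal from (W, X)."""
    u = np.sort(rng.uniform(0, 1, n))           # latent labels u_i ~ U[0,1]
    P = W(u[:, None], u[None, :])               # edge probabilities W(u_i, u_j)
    E = rng.random((n, n)) < P                  # Bernoulli trials
    A = np.triu(E, 1).astype(float)             # upper triangle, no self-loops
    A = (A + A.T) / n                           # symmetrize and size-normalize
    return A, X(u)

W = lambda s, t: 0.5 * np.exp(-np.abs(s - t))   # example graphon (our choice)
X = lambda s: np.cos(2 * np.pi * s)             # example graphon signal
A_n, x_n = sample_graph_signal(W, X, 50, np.random.default_rng(1))
```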
As $n \to \infty$, sequences of GNNs sampled from the WNN as described above converge to the WNN. In fact, as long as $\ccalH$ is the same, any GNN $f(\bbx_n;\bbA_n,\ccalH)$ applied to a sequence $\{(\bbG_n,\bbx_n)\}$ converging to $(\bbW,X)$ converges to $f(X;\bbW,\ccalH)$ \cite{ruiz2020graphonsp}. A more important result in practice is that this convergence implies that GNNs are transferable across graphs associated with the same graphon, i.e., they can be trained on graphs $\bbG_n$ and executed on graphs $\bbG_m$ with an error that decreases asymptotically with $n$ and $m$ \cite{ruiz2021transferability,ruiz20-transf}.
In Sec. \ref{sec:convergence}, we prove a similar convergence result for the neural tangent kernels (NTKs) associated with $f(\bbx_n;\bbA_n,\ccalH)$ and $f(X;\bbW,\ccalH)$, but before doing so, we need to introduce \textit{induced WNNs}.
The WNN induced by the GNN $f(\bbx_n;\bbA_n,\ccalH)$ is defined as $Y_n = f(X_n;\bbW_n,\ccalH)$, with
\begin{align} \label{eqn:wnn_induced}
\begin{split}
&\bbW_n(u,v) = \sum_{i=1}^n\sum_{j=1}^n [\bbA_n]_{ij} \mbI(u \in I_i) \mbI(v \in I_j), \\
&X_n(u) = \sum_{i=1}^n [\bbx_n]_i \mbI(u \in I_i).
\end{split}
\end{align}
and where $\mbI$ is the indicator function, $I_i=[(i-1)/n,i/n)$ for $1 \leq i \leq n-1$, and $I_n = [(n-1)/n,1]$. The graphon $\bbW_n$ is \textit{induced by the graph} $\bbG_n$, and the graphon signals $X_n$ and $Y_n$ are \textit{induced by the graph signals} $\bbx_n$ and $\bby_n$.
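The induced graphon and signal in \eqref{eqn:wnn_induced} are simply piecewise-constant functions on the partition $\{I_i\}$, which makes them easy to evaluate numerically. The sketch below is illustrative; the two-node example at the end is our own.

```python
import numpy as np

def induced_graphon(A, u, v):
    """Evaluate W_n(u, v), the step-function graphon induced by adjacency A."""
    n = A.shape[0]
    i = np.minimum((np.asarray(u) * n).astype(int), n - 1)  # index of I_i with u in I_i
    j = np.minimum((np.asarray(v) * n).astype(int), n - 1)
    return A[i, j]

def induced_signal(x, u):
    """Evaluate X_n(u), the step-function signal induced by graph signal x."""
    n = len(x)
    i = np.minimum((np.asarray(u) * n).astype(int), n - 1)
    return x[i]

A2 = np.array([[0.0, 1.0], [1.0, 0.0]]) / 2     # size-normalized 2-node graph
w_val = induced_graphon(A2, 0.25, 0.75)         # u in I_1, v in I_2
```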
\section{Graph and Graphon Neural Tangent Kernel} \label{sec:ntk}
Consider a general, fully-connected neural network $f(\bbx;\ccalH)$, with layers given by $\bbx_l = \sigma(\bbH_l\bbx_{l-1})$, input $\bbx_0 = \bbx \in \reals^{d_0}$, and learnable parameters $\ccalH=\{\bbH_l\}_l$, where $\bbH_l \in \reals^{d_{l} \times d_{l-1}}$ and $[\bbH_l]_{pq} = h_{l,pq}$. For a training set $\{\bbx_i, \tby_i\}_{i=1}^M$, assume that the loss to be minimized is the mean squared error (MSE) or quadratic loss
\begin{equation}
\min_\ccalH \ell(\ccalH) = \min_\ccalH \sum_{i=1}^M (f(\bbx_i;\ccalH) - \tby_i)^2.
\end{equation}
As the training progresses, the output of the neural network for input $\bbx_i$ is updated as $\bby_i(t) = f(\bbx_i;\ccalH(t))$. As such, the weight update rule is given by
$$\frac{\partial \ccalH}{\partial t} = - \nabla \ell(\ccalH(t))$$
and so the output evolves as
\begin{align}
\begin{split}
&\frac{\partial \bby_i (t)}{\partial t} = -\textstyle\sum_j \Theta(\bbx_i,\bbx_j;\ccalH(t)) (f(\bbx_j;\ccalH(t)) - \tby_j), \\
&\Theta(\bbx_i,\bbx_j; \ccalH)\ = \sum_{l,p,q} \frac{\partial f(\bbx_i;\ccalH)}{\partial h_{l,pq}} \frac{\partial f(\bbx_j;\ccalH)}{\partial h_{l,pq}}.
\label{eq:NTK}
\end{split}
\end{align}
In the infinite-width limit as $d_l \to \infty$, \cite{jacot2018neural,liu2020linearity} showed that, provided that the last layer has a linear output (i.e., it does not have an activation $\sigma$), $\Theta(\bbx_i,\bbx_j;\ccalH)$ converges to a limiting kernel, the NTK. This kernel stays constant during training---a property called constancy---which is equivalent to replacing the outputs of the wide neural network by their first-order Taylor expansion in parameter space \cite{lee2019wide}. Hence, in the infinite-width limit the training dynamics of \eqref{eq:NTK} reduce to kernel ridge regression\footnote{In practice, it is not necessary to consider the MSE for this derivation. It suffices for the loss to be such that the norm of the training direction $f(\ccalH_0)-f(\ccalH^\star)$ is strictly decreasing during training \cite{jacot2018neural}. E.g., if we consider the cross-entropy loss, the training dynamics reduce to kernel logistic regression.} on the NTK, which has a closed-form solution. This facilitates understanding the learning dynamics of overparametrized neural networks, which are notoriously difficult to study directly, by analysis of the corresponding NTK.
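To make this concrete, the sketch below computes the empirical NTK of a small two-layer network with linear output from explicit parameter gradients, and then fits the training data by kernel ridge regression on it. The architecture, sizes, and toy targets are our own choices, for illustration only.

```python
import numpy as np

def param_gradients(X, W1, w2):
    """Stack per-sample parameter gradients of f(x) = w2^T tanh(W1 x) as rows."""
    Z = np.tanh(X @ W1.T)                         # hidden activations, (m, d1)
    dZ = 1.0 - Z ** 2                             # tanh' at the pre-activations
    gW1 = (dZ * w2)[:, :, None] * X[:, None, :]   # df/dW1, shape (m, d1, d0)
    return np.concatenate([gW1.reshape(len(X), -1), Z], axis=1)

def empirical_ntk(X1, X2, W1, w2):
    """Theta[i, j] = <grad_H f(x_i), grad_H f(x_j)>, cf. the NTK definition."""
    return param_gradients(X1, W1, w2) @ param_gradients(X2, W1, w2).T

rng = np.random.default_rng(0)
d0, d1, m = 2, 200, 10                            # input width, hidden width, samples
W1 = rng.standard_normal((d1, d0)) / np.sqrt(d0)
w2 = rng.standard_normal(d1) / np.sqrt(d1)
X = rng.standard_normal((m, d0))
y = np.sin(X[:, 0])                               # toy regression targets

Theta = empirical_ntk(X, X, W1, w2)
# Wide-network training reduces to kernel ridge regression on Theta:
alpha = np.linalg.solve(Theta + 1e-8 * np.eye(m), y)
y_hat = Theta @ alpha                             # fitted training outputs
```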
\subsection{Graph Neural Tangent Kernel}
For simplicity, we will consider a GNN with only one feature per layer; the generalization to multiple features is more involved but straightforward.
Recall that the $L$-layer GNN supported on $\bbA_n \in \reals^{n\times n}$ is written as
\begin{align*}
&\bbx_{0,n} = \bbx_n \quad \quad \quad \ \ \ \bbu_{l,n} = H_l (\bbx_{l - 1,n}) \\
&\bbx_{l,n} = \sigma(\bbu_{l,n}) \quad \quad f(\bbx_n; \bbA_n, \ccalH) = \bbx_{L,n} \text{.}
\end{align*}
{
\setlength{\jot}{11pt}
The graph NTK (GNTK) associated with this GNN is defined as
\begin{align*}
\Theta (\bbx_n,\bbx'_n;\bbA_n,\ccalH) = &\nabla_\ccalH f(\bbx_n;\bbA_n,\ccalH)^T \nabla_\ccalH f(\bbx'_n;\bbA_n,\ccalH) \\
& \hspace{-2cm} = \textstyle\sum_{k} \sigma'(\bbu_{L,n}) \bbA_n^{k} \bbx_{L-1,n} \otimes \sigma'(\bbu_{L,n}') \bbA_n^{k} \bbx_{L-1,n}' \\
&\quad \hspace{-2cm} + \sigma'(\bbu_{L,n}) H_L(\sigma'(\bbu_{L-1,n}) \bbA_n^{k} \bbx_{L-2,n}) \\
&\quad \quad \hspace{-2cm} \otimes \sigma'(\bbu_{L,n}') H_L(\sigma'(\bbu_{L-1,n}') \bbA_n^{k} \bbx_{L-2,n}') + \ldots
\end{align*}
where there are $L$ terms; the first term corresponds to the last layer, the second term to the second-last layer, and so on.
We have used $\sigma'(\bbu)$ to denote the diagonal matrix whose $j$th diagonal entry is $\sigma'([\bbu]_j)$, where $\sigma'$ is the derivative of $\sigma$. Note that $\nabla_\ccalH f$ is a $|\ccalH| \times n$ matrix, hence the GNTK is a matrix $\Theta(\bbx_n, \bbx_n';\bbA_n,\ccalH) \in \reals^{n \times n}$.
}
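The GNTK above can be checked numerically without deriving gradients by hand. The sketch below is a naive finite-difference implementation for a single-feature GNN, illustrative only (function names and the tiny example graph are ours); it builds the Jacobian of the GNN with respect to its coefficients and forms the inner product of Jacobians.

```python
import numpy as np

def gnn_forward(x, A, H, sigma=np.tanh):
    """Single-feature L-layer GNN: x_l = sigma(sum_k H[l][k] A^k x_{l-1})."""
    for Hl in H:
        u = np.zeros_like(x)
        Akx = x
        for hk in Hl:
            u = u + hk * Akx
            Akx = A @ Akx
        x = sigma(u)
    return x

def gntk(x1, x2, A, H, eps=1e-6):
    """Theta(x1, x2; A, H) via a central finite-difference Jacobian."""
    theta = np.asarray(H, dtype=float).ravel()   # flattened coefficients
    L, K = np.asarray(H).shape

    def jac(x):
        rows = []
        for p in range(theta.size):
            e = np.zeros_like(theta); e[p] = eps
            fp = gnn_forward(x, A, (theta + e).reshape(L, K))
            fm = gnn_forward(x, A, (theta - e).reshape(L, K))
            rows.append((fp - fm) / (2 * eps))
        return np.stack(rows)                    # |H| x n Jacobian

    return jac(x1).T @ jac(x2)                   # n x n kernel matrix

rng = np.random.default_rng(2)
n = 6
A = rng.random((n, n)); A = (A + A.T) / (2 * n)  # symmetric, size-normalized
x = rng.standard_normal(n)
Theta = gntk(x, x, A, [[0.5, 0.3], [0.2, 0.4]])  # L = K = 2, arbitrary weights
```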
\subsection{Graphon Neural Tangent Kernel}
To derive the WNTK, we will also consider a WNN with only one feature per layer for simplicity. Recall that the $L$-layer WNN associated with the graphon $\bbW$ is given by
\begin{align*}
&X_0 = X \quad \quad \quad \ \ U_l = \textstyle T_{H_l}X_{l - 1}\\
&X_l = \sigma(U_l) \quad \quad f(X;\bbW,\ccalH) = X_L \text{.}
\end{align*}
{
\setlength{\jot}{11pt}
Therefore, the WNTK is given by
\begin{align} \label{eqn:wntk}
\begin{split}
\Theta(X,X';\bbW,\ccalH) &= \\
& \hspace{-2.5cm} \textstyle\sum_{l,k} \partial_{h_{l,k}} f(X;\bbW,\ccalH) \otimes \partial_{h_{l,k}} f(X';\bbW,\ccalH) \text{.}
\end{split}
\end{align}
Calculating the derivative with respect to the $k$th weight in layer $l_j=L-j$, we get
\begin{equation*}
\partial_{h_{l_j,k}} U_L =
\textstyle T_{W}^{(k)} X_{L-1}
\end{equation*}
for $j=0$, and similarly for $0 < j < L$,
\begin{equation*}
\partial_{h_{l_j,k}} U_L = \textstyle T_{H_L} \sigma^\prime(U_{L-1}) \cdots \textstyle T_{H_{l_{j-1}}} \sigma'(U_{l_j}) \textstyle T_W^{(k)} X_{l_j-1} \text{.}
\end{equation*}
Hence, \eqref{eqn:wntk} has $L$ terms in total, explicitly
\begin{align*}
\begin{split}
\Theta(X,X';\bbW,\ccalH) &= \\
& \hspace{-1.8cm} \textstyle\sum_{k} \sigma'(U_L) T_{W}^{(k)} X_{L-1} \otimes \sigma'(U_L') T_{W}^{(k)} X_{L-1}' \\
&\quad \hspace{-1.8cm} + \sigma'(U_L) T_{H_L}(\sigma'(U_{L-1}) T_W^{(k)} X_{L-2}) \\
&\quad \hspace{-1.8cm} \quad \otimes \sigma'(U_L') T_{H_L} (\sigma'(U_{L-1}') T_W^{(k)} X_{L-2}') + \ldots
\end{split}
\end{align*}
where $\sigma'(U_L)$ is now evaluated and multiplied pointwise.
Note that $\Theta$ is a linear operator on graphon signals, i.e., given a signal $Y \in L^2([0,1])$, the WNTK applied to this signal yields
\begin{align*}
\Theta(X,X';\bbW,\ccalH)Y &= \textstyle\int_0^1 \textstyle\sum_{k} \sigma'(U_L(u)) T_{W}^{(k)} X_{L-1}(u)\\
& \quad \, \, \times \sigma'(U_L'(v)) T_{W}^{(k)} X_{L-1}'(v) Y(v) \\
& \hspace{-3.1cm} \quad + \sigma'(U_L(u)) T_{H_L}(\sigma'(U_{L-1}) T_W^{(k)} X_{L-2})(u) \\
& \quad \, \hspace{-3.1cm} \, \times \sigma'(U_L'(v)) T_{H_L}(\sigma'(U_{L-1}') T_W^{(k)} X_{L-2}')(v)Y(v) +\ldots \, dv
where the first term corresponds to the last layer, the second corresponds to the second-last layer, and so on.
For a graph $\bbA_n$ sampled from $\bbW$ with associated induced graphon $\bbW_n$, and for input features $\bbx_n$ sampled from a graphon signal $X$ and associated induced graphon signal $X_n$ [cf. \eqref{eqn:wnn_induced}],
the induced WNTK is given by the same formula as \eqref{eqn:wntk} but with $\bbW$, $X$ replaced by $\bbW_n$, $X_n$.
}
\section{Graph NTK Converges to Graphon NTK}
\label{sec:convergence}
Next, we consider a sequence of graph signals $\{(\bbG_n,\bbx_n)\}$ converging to a graphon signal $(\bbW,X)$ to evaluate the convergence of the GNTK $\Theta(\bbx_n,\bbx_n';\bbA_n,\ccalH)$ to the WNTK $\Theta(X,X';\bbW,\ccalH)$.
First, for each pair of signals $X,X'$, the WNTK $\Theta(X,X';\bbW,\ccalH)$---which we denote $\Theta(X,X')$ to simplify notation---is a linear operator on functions in $L^2$. Hence, to prove convergence to this operator we introduce the operator norm
\begin{align}
\norm{\Theta(X,X')} = \sup_{Y \neq 0} \frac{\norm{\Theta(X,X')Y}}{\norm{Y}} \text{.}
\label{eq:normdef}
\end{align}
\begin{figure*}[t]
\centering
\includegraphics[width=0.47\linewidth]{figures/transf_kernel_geom_sbm0.pdf}
\includegraphics[width=0.49\linewidth]{figures/transf_kernel_cora_cite_pub0.pdf}
\caption{Difference between the test error achieved by the GNTK fitted on the $n$-node graph when used for prediction on that same graph, and the test error achieved by the same GNTK when used for prediction on the $N$-node graph. (left) Opinion dynamics on geometric and SBM graphs, where the test error is the MSE and $N=300$. (right) Node classification on Cora, CiteSeer and Pubmed, where the test error is the CE and $N=2708$, $3327$, and $10000$, respectively.}
\label{fig:convergence}
\end{figure*}
Let $\bbW$ be a graphon and $X,X'$ be fixed graphon signals.
Consider a sequence of graph signals $\{(\bbG_n,\bbx_n)\}$ (respectively $\{(\bbG_n,\bbx_n')\}$) converging to $(\bbW,X)$ (respectively $(\bbW,X')$) in the sense of \cite[Def. 2]{ruiz2020graphonsp}. The notion of convergence of graph signals on a sequence of dense graphs is simple: $\bbG_n$ converges to $\bbW$ in the homomorphism density sense [cf. \eqref{eqn:graph_convergence}], and $X_n$, the graphon signal induced by $\bbx_n$, converges to $X$ in the $L^2$ norm, i.e.,
\begin{equation} \label{eqn:signal_convergence}
\|X_n-X\| \to 0
\end{equation}
up to node relabelings (see \cite{ruiz2020graphonsp} for further details, and in particular Lemma 2 for the relationship between $(\bbG_n,\bbx_n)$ and the induced signal $(\bbW_n,X_n)$).
Let $(\bbW_n,X_n)$ be the graphon signal induced by the graph signal $(\bbG_n,\bbx_n)$ [cf. \eqref{eqn:wnn_induced}].
The main result of this paper is the following theorem that shows that the WNTK induced by the GNTK converges to the limiting WNTK. The proof is deferred to Appendix \ref{appendix:convergence_proof}.
\begin{theorem}
Let $\textbf{W}$ be a graphon and $X,X' \in L^2([0,1])$ be arbitrary graphon signals.
Suppose that $\set{\bbG_n}$ is a sequence of graphs converging to $\bbW$ [cf. \eqref{eqn:graph_convergence}] and $\set{\bbx_n}, \set{\bbx_n'}$ are sequences of graph signals converging to $X$ and $X'$, respectively [cf. \eqref{eqn:signal_convergence}].
Then, for any $L$-layer GNN with finite $K$, fixed weights $\ccalH$ and $1$-Lipschitz nonlinearity $\sigma$, the associated GNTKs $\Theta(\bbx_n,\bbx'_n;\bbA_n,\ccalH)$ converge in the operator norm:
\begin{align*}
\lim_{n \to \infty}\norm{\Theta(X_n,X'_n;\bbW_n,\ccalH)-\Theta(X,X';\bbW,\ccalH)} = 0
\end{align*}
where $\Theta(X_n,X'_n;\bbW_n,\ccalH)$ is the WNTK induced by the GNTK $\Theta(\bbx_n,\bbx_n';\bbA_n,\ccalH)$.
\label{theorem1}
\end{theorem}
The convergence of the GNTK to a limit object---the WNTK---has two important implications for machine learning on large-scale graphs. First, consider that we have a large graph $\bbG_N$ on which we want to predict signals $\bby_N$ using the GNTK, but that we do not have enough computational resources to compute the full-sized GNTK $\Theta(\bbx_N,\bbx_N';\bbA_N,\ccalH)$. This is expected, since calculating the kernel regression weights on this graph requires inverting a matrix with dimension proportional to $M$, the number of training samples, and the graph size $N$; see Sec. \ref{sec:ntk}. Thm. \ref{theorem1} implies that we can subsample the GNTK and the labels $\bby_N$ on a smaller graph $\bbG_n$, $n \ll N$---as $\Theta(\bbx_n,\bbx_n';\bbA_n,\ccalH)$ and $\bby_n$ respectively---and fit the data to the GNTK on this smaller graph. Once the kernel regression weights are obtained, they can then be \textit{transferred} to make predictions on $\bbG_N$. Naturally, there will be an error associated with this transfer; however, due to convergence, this error vanishes asymptotically in $n$.
A second, perhaps more important, implication of Thm. \ref{theorem1} is that a GNTK fitted on a smaller graph can be used to infer details about the training dynamics of wide (overparametrized) GNNs on larger graphs. This follows from Thm. \ref{theorem1} in conjunction with theoretical results from \cite{liu2020linearity}, which prove that in the infinite-width limit the training dynamics of general convolutional models (including GNNs) reduce to kernel ridge regression on the corresponding NTK, and empirical results from \cite{sabanayagam2021new} showing that even when the output layer is nonlinear, constancy of the GNTK can still be observed.
An example of a property of a large-graph GNTK $\Theta(\bbx_N,\bbx_N'; \bbA_N, \ccalH)$ that can be estimated from a small-graph GNTK $\Theta(\bbx_n,\bbx_n'; \bbA_n, \ccalH)$ is its eigenvalue spectrum. As we show in Thm. \ref{theorem2}, the spectrum of $\Theta(X_n,X_n'; \bbW_n, \ccalH)$ converges to the spectrum of $\Theta(X,X'; \bbW, \ccalH)$. The proof is deferred to Appendix \ref{appendix:eigenvalue_proof}.
\begin{theorem} \label{theorem2}
Let $\{\bbx_{i,n}\}_{i=1}^M$ be a set of $M$ signals on the graph $\bbG_n$, and $\{X_{i,n}\}_{i=1}^M$ the corresponding induced signals on the induced graphon $\bbW_n$. Assume that $\bbG_n \to \bbW$ [cf. \eqref{eqn:graph_convergence}] and $X_{i,n} \to X_i$ [cf. \eqref{eqn:signal_convergence}] for all $i$. Define the operators $\bbTheta_n $ and $\bbTheta$ where $[\bbTheta_n]_{ij} = \Theta(X_{i,n},X_{j,n};\bbW_n,\ccalH)$ and $[\bbTheta]_{ij} = \Theta(X_{i},X_{j};\bbW,\ccalH)$. Let $\lambda_p(T)$, $p \in \mbZ \setminus \{0\}$, denote the $p$th eigenvalue of a compact, self-adjoint operator $T$, with $\lambda_{-1} < \lambda_{-2} < \ldots \leq 0 \leq \ldots < \lambda_2 < \lambda_1$.
If the weights in $\ccalH$ are bounded, then, for all $p$,
\begin{equation*}
\lim_{n\to \infty} |\lambda_p(\bbTheta_n)-\lambda_p(\bbTheta)| \to 0 \text{.}
\end{equation*}
Moreover, the corresponding eigenspaces converge in the sense that the spectral projectors converge.
\end{theorem}
This is an important result because, as proved in \cite{jacot2018neural}, the convergence of kernel gradient descent follows the kernel principal components. Thus, if the GNN $f(\ccalH)$ is sufficiently wide, and if $f(\ccalH_0)-f(\ccalH^\star)$ is aligned with the $i$th GNTK principal component, we can expect gradient descent to converge with rate proportional to $\lambda_i$, the $i$th eigenvalue of the GNTK. Thm. \ref{theorem2} shows that the GNTK eigenvalues converge to the eigenvalues of the WNTK. As such, we can estimate the speed of convergence of a large-graph GNN $f(\bbx_N; \bbA_N, \ccalH)$ from the eigenvalues of a small-graph GNTK $\Theta(\bbx_n,\bbx_n'; \bbA_n, \ccalH)$. A numerical example of this application of Thms. \ref{theorem1} and \ref{theorem2} is given in Sec. \ref{sbs:eig_sims}.
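A quick numerical illustration of this point: under linearized (kernel) gradient descent, the residual component along the $p$th eigenvector of the kernel contracts at a rate set by $\lambda_p$. The sketch below uses a generic random PSD matrix as a stand-in for the GNTK; all sizes and hyperparameters are our own illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
Theta = M @ M.T / n                      # stand-in PSD kernel (not a real GNTK)
lam, V = np.linalg.eigh(Theta)           # eigenvalues lam[0] <= ... <= lam[-1]

y = rng.standard_normal(n)               # training direction f(H_0) - y*
eta, T = 0.01, 200                       # step size and number of steps
r = y.copy()
for _ in range(T):
    r = r - eta * Theta @ r              # linearized (kernel) training step

# In the eigenbasis, each residual component contracts by (1 - eta*lam_p)
# per step, so large-eigenvalue directions are fit fastest:
predicted = (1.0 - eta * lam) ** T * (V.T @ y)
```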
\section{Numerical Results}
\label{sec:numerical}
\begin{figure*}[t]
\centering
\includegraphics[width=0.33\linewidth]{figures/op_dyn_sbm/final_projection-gnn0-3.pdf}
\includegraphics[width=0.33\linewidth]{figures/op_dyn_sbm/final_projection-gnn1-3.pdf}
\includegraphics[width=0.33\linewidth]{figures/op_dyn_sbm/final_projection-gnn2-3.pdf}
\caption{Projections of the inputs and outputs of the GNTK and GNN, and of the target or true labels onto the second eigenvector of the adjacency matrix of a $80$-node SBM graph for widths $F=10$ (left), $F=50$ (center) and $F=250$ (right).}
\label{fig:width}
\end{figure*}
In the following, we illustrate the convergence of the GNTK to the WNTK on both a simulated opinion dynamics model on random graphs and on the Cora, CiteSeer and PubMed citation networks (Sec. \ref{sbs:sims_conv}). In the opinion dynamics problem, we further vary the width of the network to analyze the large width behavior of the GNTK and how it relates to the behavior of the GNN (Sec. \ref{sbs:wide_sims}). We conclude with an example of how the convergence of the GNTK can be used in practice to estimate the eigenvalues of the GNTK, and thus the speed of GNN training along its principal components, on large-scale graphs (Sec. \ref{sbs:eig_sims}).
\noindent \textbf{Opinion dynamics.} This is a node-level task based on a mathematical model for studying the evolution of ideologies, affiliations, and opinions in society \cite{lorenz2007continuous}, including topics of important practical interest such as political ideologies and misinformation spread. On an undirected $n$-node graph $\bbG = (\ccalV,\ccalE)$, we consider an opinion dynamics process $\bbx_t \in \reals^n$. The node data $[\bbx_t]_i$ is the opinion of individual $i$ under a standard bounded-confidence model \cite{hegselmann2002opinion}:
\begin{equation} \label{eqn:op_dyn}
[\bbx_t]_i = \sum_{j \in \ccalS_{i,t-1}} c [\bbx_{t-1}]_j
\end{equation}
where $c>0$ is the so-called influence parameter and $\ccalS_{i,t-1} = \ccalN_i \cap \ccalX_{i,t-1}$ is the intersection of $\ccalN_i = \{j \in \ccalV\ |\ (i,j) \in \ccalE\}$, the neighborhood of $i$, and $\ccalX_{i,t} = \{j \in \ccalV\ |\ |[\bbx_t]_j-[\bbx_t]_i| \leq \epsilon\}$, the set of nodes whose opinion at time $t$ differs by at most $\epsilon$, $0\leq \epsilon \leq 1$, from the opinion of node $i$. This model reduces to the classic DeGroot opinion model \cite{degroot1974reaching} for $\epsilon=1$. We fix $\epsilon=0.3$ and $c=0.1$.
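For concreteness, one synchronous update of \eqref{eqn:op_dyn} can be implemented directly as below. This is an unoptimized, illustrative sketch; the function name and the three-node example are ours.

```python
import numpy as np

def opinion_step(x, adj, c=0.1, eps=0.3):
    """One bounded-confidence update: node i sums, scaled by c, the
    opinions of its neighbors whose opinions lie within eps of x_i."""
    x_new = np.empty_like(x)
    for i in range(len(x)):
        confid = np.abs(x - x[i]) <= eps          # confidence set X_{i,t-1}
        nbrs = adj[i].astype(bool) & confid       # S_{i,t-1} = N_i ∩ X_{i,t-1}
        x_new[i] = c * x[nbrs].sum()
    return x_new

# Small example: 3-node complete graph; node 2's opinion is too far from
# the others to exert (or receive) any influence.
adj = np.ones((3, 3)) - np.eye(3)
x1 = opinion_step(np.array([0.0, 0.1, 5.0]), adj)
```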
After drawing the initial opinions $\bbx_0$ from a Gaussian distribution with $\bbmu = \boldsymbol{0}$ and $\bbSigma = 2\bbI$, we run the process \eqref{eqn:op_dyn} until convergence (for at most $T=1000$ iterations) to obtain the final opinions $\bbx_T$. The goal of the learning problem is then to predict $\bby=\bbx_T$ from $\bbx=\bbx_0$. We fix the training set size to $300$ samples, and use $30$ samples for both validation and testing.
\noindent \textbf{Citation networks.} The second problem we consider is a node classification problem on the Cora, CiteSeer and PubMed networks, where nodes represent documents and undirected edges represent citations between papers in either direction.
Each node $i$ has a bag-of-words representation $[\bbX]_i \in \{0,1\}^{F}$, where $\bbX \in \{0,1\}^{n \times F}$, and is associated with one of $C$ classes. We consider the networks and train/test splits from the Planetoid distribution \cite{yang2016revisiting}, but only sample $F=1000$ features for CiteSeer and $N=10000$ nodes for PubMed due to memory limitations.
\noindent \textbf{Architectures and experiment details.} In all experiments, the GNNs have $L=1$ layer with $K=2$ [cf. \eqref{eqn:gcn_layer}] and ReLU nonlinearity, followed by a perceptron layer. In the citation network experiment, the GNN architecture includes a softmax layer followed by an argmax. For opinion dynamics, we consider the MSE loss and fit the GNTK using linear regression. For node classification, we consider the cross-entropy (CE) loss and fit the GNTK using logistic regression. All reported results are averaged over $5$ and $10$ realizations for opinion dynamics and citation networks, respectively. Additional architecture and experiment details are listed in each subsection. The code can be found in \href{https://github.com/luanaruiz9/wntk.git}{this} repository. All experiments were run on an NVIDIA RTX A6000 GPU.
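Since the layer equation \eqref{eqn:gcn_layer} is referenced but not reproduced in this excerpt, the sketch below assumes a standard graph-convolutional layer with $K$ filter taps, $\bbY = \sigma\big(\sum_{k=0}^{K-1}\bbS^k\bbX\bbH_k\big)$; both this form and the helper name are assumptions for illustration:

```python
import numpy as np

def gnn_layer(X, S, H):
    """One graph convolutional layer with K filter taps:
    Y = ReLU( sum_{k=0}^{K-1} S^k X H_k ).
    X: (n, F_in) node features, S: (n, n) graph shift operator
    (e.g. the adjacency matrix), H: list of K matrices (F_in, F_out)."""
    Z = np.zeros((X.shape[0], H[0].shape[1]))
    SkX = X
    for Hk in H:
        Z = Z + SkX @ Hk   # add the k-th diffusion term S^k X H_k
        SkX = S @ SkX      # prepare S^{k+1} X for the next tap
    return np.maximum(Z, 0.0)  # ReLU, as in the experiments
```

With $K=2$ the layer mixes each node's own features with one hop of neighborhood information before the pointwise nonlinearity.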
\subsection{Convergence} \label{sbs:sims_conv}
To visualize the convergence of the GNTK, we fit a GNTK with $F_1=10$ on the training set of a small $n$-node graph, and transfer this GNTK, without retraining, to predict the outputs on the test set of both the same $n$-node graph and a larger $N$-node graph. In Fig. \ref{fig:convergence}, we plot the absolute difference between the test errors attained on the $n$-node and the $N$-node graph (normalized by the test error on the $N$-node graph) as a function of the size of the smaller graph $n$, for both the opinion dynamics (left) and node classification (right) experiments.
\begin{figure}[t]
\centering
\includegraphics[width=\linewidth]{figures/eigenvalues-0.pdf}
\caption{The leading eigenvalue of the GNTK with width $F_1=10$ for Cora, CiteSeer and PubMed as a function of the graph size $n$.}
\label{fig:eigs}
\end{figure}
In opinion dynamics, $N=300$. We consider two types of graphs, corresponding to two graphon families: (a) a symmetrized geometric $k$-nearest neighbor graph where nodes are drawn at random from a $50 \times 50$ square and $k=n/10$; and (b) a stochastic block model (SBM) graph with intra-community probability $p=0.1$ and inter-community probability $q=0.05$. As $n$ increases from $20$ to $100$, we observe that the difference between the MSEs for the $n$-node and the $N$-node graphs decreases steadily
from $\sim 70$\% to $\sim 20$\% on the geometric graphs and from $\sim 90$\% to $\sim 40$\% on the SBM graphs. For reference, the MSE achieved by the GNTK fitted on the $100$-node graph when applied to the $N$-node graph was $0.22$ for SBM and $0.03$ for geometric.
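A minimal sketch of the two random-graph families follows. The number of SBM communities is not stated in this excerpt, so the two-community choice (and the function names) are assumptions made for illustration:

```python
import numpy as np

def sbm(n, p_in=0.1, q_out=0.05, n_comm=2, seed=0):
    """Stochastic block model with intra-community probability p_in
    and inter-community probability q_out."""
    rng = np.random.default_rng(seed)
    z = rng.integers(n_comm, size=n)                       # community labels
    P = np.where(z[:, None] == z[None, :], p_in, q_out)    # edge probabilities
    A = (rng.random((n, n)) < P).astype(float)
    A = np.triu(A, 1)                                      # keep upper triangle
    return A + A.T                                         # symmetric, no self-loops

def knn_graph(n, k, seed=0):
    """Symmetrized geometric k-nearest neighbor graph on a 50 x 50 square."""
    rng = np.random.default_rng(seed)
    pts = rng.uniform(0, 50, size=(n, 2))
    D = np.linalg.norm(pts[:, None] - pts[None], axis=-1)
    np.fill_diagonal(D, np.inf)                            # exclude self
    nbrs = np.argsort(D, axis=1)[:, :k]                    # k nearest neighbors
    A = np.zeros((n, n))
    A[np.repeat(np.arange(n), k), nbrs.ravel()] = 1.0
    return np.maximum(A, A.T)                              # symmetrize
```

In the experiments above, one would call these with $k = n/10$ and the stated probabilities to draw graphs of increasing size from each family.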
In the node classification experiment, $N=2708$ for Cora, $N=3327$ for CiteSeer, and $N=10000$ for PubMed (sampled from the $19717$ nodes in the original network due to memory limitations). For all datasets, the difference between the CE loss for the $n$-node and the $N$-node graph also decreases as $n$ increases, but the errors are significantly smaller to begin with (even at $n=400$). Specifically, the CE difference drops to less than $1$\% on average with $n=1600$ nodes for all datasets, independently of the full graph size $N$.
This can be interpreted to mean that node classification on these citation networks depends largely on local information, so beyond a critical size $n$, the transference error saturates. An interesting point to make is that although these citation networks are not dense (i.e., are not best modeled by a graphon), we still see the empirical manifestation of our theoretical results.
For reference, the CE achieved by the GNTK fitted on the $2000$-node graph when applied to the $N$-node graph was $1.49$ for Cora, $1.32$ for CiteSeer, and $0.59$ for PubMed.
\subsection{Wide Network Behavior} \label{sbs:wide_sims}
Next, we analyze the effect of width on both the GNN and the GNTK when they are trained/fitted on a small graph and transferred to a large graph.
From the NTK analysis in \cite{jacot2018neural}, as the GNN width increases we expect (i) the GNTK and the GNN to exhibit less variance over multiple weight initializations and (ii) the GNN outputs to approach those of kernel regression with the GNTK.
For the opinion dynamics experiment on SBMs, we consider three GNNs with widths (number of features) $F^{(1)}_1=10$, $F^{(2)}_1=50$, and $F^{(3)}_1=250$ supported on the same $80$-node graph. Using their initial weights, we construct the corresponding GNTKs. We train the three GNNs by minimizing the MSE loss over $20$ epochs and with batch size $32$, using ADAM \cite{kingma17-adam} with learning rate $10^{-3}$ and weight decay $5 \times 10^{-3}$. Simultaneously, we fit the GNTKs to the training set,
and then transfer both the GNNs and the corresponding GNTKs to the $N$-node graph and compute their outputs on the test set.
In Fig. \ref{fig:width}, we plot the projection of the outputs $\bby$ onto the second graph eigenvector $\bbv_2$, $[\hby]_2=\bbv_2^T\bby$ against the projections of the inputs onto the same vector $[\hbx]_2 = \bbv_2^T\bbx$ (sorted in ascending order) for both the GNNs and the GNTKs for $F^{(1)}_1=10$ (left), $F^{(2)}_1=50$ (center), and $F^{(3)}_1=250$ (right). The second graph eigenvector was chosen as it captures the community structure; note, however, that the same behavior was observed upon projecting onto other eigenvectors, as shown in Appendix \ref{appendix:wide}. The results are averaged over $5$ GNN initializations, with the solid lines representing the mean and the shaded areas representing the standard deviation. The behavior is as expected: as the width increases, the GNN and the GNTK exhibit less variance across weight initializations, and the GNN and GNTK curves align. In Appendix \ref{appendix:wide}, we include additional plots displaying the mean and variance over different initializations as we vary the graph size.
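The projection used in these plots can be computed directly. The sketch below assumes a symmetric shift operator and orders eigenvectors by descending eigenvalue (so that the second eigenvector is the one associated with the second-largest eigenvalue); this ordering convention is an assumption:

```python
import numpy as np

def eigvec_projection(S, y, idx=1):
    """Projection v_idx^T y of a graph signal y onto the idx-th eigenvector
    of the symmetric shift operator S, with eigenvalues sorted in
    descending order (idx=1 gives the second eigenvector)."""
    w, V = np.linalg.eigh(S)        # eigh returns ascending eigenvalues
    order = np.argsort(w)[::-1]     # reorder to descending
    v = V[:, order[idx]]
    return float(v @ y)
```

Note that eigenvectors are only defined up to sign, so compared curves should use a consistent sign convention.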
\subsection{Application: Eigenvalue Convergence} \label{sbs:eig_sims}
In this section, we elucidate an important application of the convergence of the GNTK. \cite{jacot2018neural} shows that the convergence of kernel gradient descent follows the kernel principal components. Therefore, if the GNN is sufficiently wide, and $f(\bbX_n;\bbA_n,\ccalH_0)-f(\bbX_n;\bbA_n,\ccalH^\star)$ is aligned with the $i$th GNTK principal component, we can expect gradient descent to converge with rate proportional to $\lambda_i$, the $i$th eigenvalue of the GNTK.\footnote{Theoretically, constancy of the NTK in the infinite-width limit and equivalence between GNN training and kernel regression only hold for architectures with a linear output layer \cite{liu2020linearity}. Although this is not the case for the GNNs used here, GNTKs associated with GNNs with nonlinear output layers seem to exhibit constancy empirically \cite{sabanayagam2021new}.}
Recall that we prove, in Thm. \ref{theorem2}, that the spectra of the GNTKs converge in the graphon limit. In this context, a simple application of the convergence of the GNTK spectrum is as follows: say that we know that $f(\bbX_N;\bbA_N,\ccalH_0)-f(\bbX_N;\bbA_N,\ccalH^\star)$ is aligned with the $p$th GNTK eigenvector, and we want to estimate its speed of convergence, but the kernel $\Theta(\bbX_N,\bbX_N';\bbA_N,\ccalH)$ is too expensive to compute. We could in this case sample a graph $\bbG_n$, with $n \ll N$, from the induced graphon $\bbW_N$ and calculate $\Theta(\bbX_n,\bbX_n';\bbA_n,\ccalH)$. From Thm. \ref{theorem2}, $\lambda_{n,p}$ converges to $\lambda_{N,p}$. Hence, for large enough $n$, $\lambda_{n,p}$ should provide a good approximation of $\lambda_{N,p}$.
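The estimation procedure just described can be sketched as follows, assuming access to the Jacobian of the network outputs with respect to the weights (the empirical NTK is the Gram matrix of these Jacobians). The helpers `empirical_ntk` and `leading_eigenvalue` are illustrative, not the paper's code:

```python
import numpy as np

def empirical_ntk(jac):
    """Empirical NTK Gram matrix: Theta_ij = <df(x_i)/dtheta, df(x_j)/dtheta>,
    where jac is the (n_samples, n_params) Jacobian of the network outputs
    with respect to its weights."""
    return jac @ jac.T

def leading_eigenvalue(K, iters=500, seed=0):
    """Leading eigenvalue of a PSD kernel matrix K via power iteration."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=K.shape[0])
    for _ in range(iters):
        v = K @ v
        v = v / np.linalg.norm(v)
    return float(v @ K @ v)  # Rayleigh quotient at the converged vector
```

One would then compute `leading_eigenvalue(empirical_ntk(jac_n))` on the small sampled graph and use it as a proxy for $\lambda_{N,1}$ on the large graph.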
To illustrate this empirically, in Fig. \ref{fig:eigs} we plot the dominant eigenvalues of the GNTK associated with a GNN with $F_1=10$ features supported on graphs of increasing size $n$ sampled from the Cora, CiteSeer, and PubMed networks (averaged over 10 initializations). As $n$ grows, we see that the difference between the dominant eigenvalues of consecutive GNTKs reduces. I.e., for each citation dataset, the value of the dominant eigenvalue converges as $n$ increases, indicating that the speed of convergence of the GNN along the dominant GNTK eigenvector converges as the graph grows. For all but the PubMed dataset, the $1600$-node graph gives an approximation of $\lambda_{N,1}$ that is accurate to within $20$\%.
\section{Conclusions}
\label{sec:conclusion}
In this paper, we define WNTKs as the limiting objects of GNTKs, and study how the GNTK evolves as the underlying graph of the corresponding GNN grows. We show that GNTKs converge to the WNTK and that their spectra also converge, thus providing theoretical insight into how the learning dynamics of a GNN evolve as an important dimension---the graph size---grows. In practice, these convergence results imply that one can transfer the GNTK of a smaller graph to solve the same task on a larger graph without any further optimization, with theoretical guarantees of performance and insight into the rates of learning along eigendirections that converge as the graph grows.
These results were demonstrated through simulations on synthetic and real-world graphs. To conclude, a limitation of this work is that graphons are only good models for dense graphs. In future work, we plan to extend our results to more general graph models, and to better understand the relationship between the spectra of the graph and of the GNTK.
\subsubsection*{\bibname}}
\usepackage[utf8]{inputenc}
\usepackage[T1]{fontenc}
\usepackage{hyperref}
\usepackage{url}
\usepackage{booktabs}
\usepackage{amsfonts}
\usepackage{nicefrac}
\usepackage{graphicx}
\usepackage{subcaption}
\usepackage{chngcntr}
\usepackage{xargs}
\usepackage[pdftex,dvipsnames]{xcolor}
\usepackage[colorinlistoftodos,prependcaption,textsize=tiny]{todonotes}
\usepackage{amsmath, amsthm, amssymb, bbm}
\usepackage{color}
\usepackage{algorithmic,algorithm}
\newtheorem{example}{Example}[section]
\newtheorem{definition}{Definition}
\newtheorem{theorem}{Theorem}[section]
\newtheorem{lemma}[theorem]{Lemma}
\newtheorem{condition}[theorem]{Condition}
\theoremstyle{remark}
\newtheorem{remark}{Remark}[section]
\defMTP$_2${MTP$_2$}
\def\mathbf{R}{\mathbf{R}}
\def\mathbb{R}{\mathbb{R}}
\def\mathbf{P}{\mathbf{P}}
\def\mathcal{E}{\mathcal{E}}
\def\text{TP}{\text{TP}}
\def\text{TN}{\text{TN}}
\def\text{FP}{\text{FP}}
\def\text{FN}{\text{FN}}
\newcommand\indep{\protect\mathpalette{\protect\independenT}{\perp}}
\newcommand{\boldsymbol{\rho}}{\boldsymbol{\rho}}
\newcommand{\mathbf{L}}{\mathbf{L}}
\newcommandx{\addition}[2][1=]{\todo[linecolor=Plum,backgroundcolor=Plum!25,bordercolor=Plum,#1]{#2}}
\def\independenT#1#2{\mathrel{\rlap{$#1#2$}\mkern2mu{#1#2}}}
\newcommand{\widehat{\Sigma}}{\widehat{\Sigma}}
\newcommand{\widehat{\Omega}}{\widehat{\Omega}}
\DeclareMathOperator{\tr}{trace}
\def\textrm{adj}{\textrm{adj}}
\runningtitle{Learning High-dimensional GGMs under MTP$_2$ without Adjustment of Tuning Parameters}
\begin{document}
\twocolumn[
\aistatstitle{Learning High-dimensional Gaussian Graphical Models \\ under Total Positivity without Adjustment of Tuning Parameters}
\aistatsauthor{ Yuhao Wang \And Uma Roy \And Caroline Uhler}
\aistatsaddress{
University of Cambridge \\
\texttt{yw505@cam.ac.uk}
\And
Google Research\\
\texttt{uma.roy.us@gmail.com}
\And
Massachusetts Institute of Technology\\
\texttt{cuhler@mit.edu}} ]
\begin{abstract}
We consider the problem of estimating an undirected Gaussian graphical model when the underlying distribution is multivariate totally positive of order 2 (MTP$_2$), a strong form of positive dependence. Such distributions are relevant for example for portfolio selection, since assets are usually positively dependent. A large body of methods has been proposed for learning undirected graphical models without the MTP$_2$ constraint. A major limitation of these methods is that their structure recovery guarantees in the high-dimensional setting usually require a particular choice of a tuning parameter, which is unknown a priori in real world applications.
We here propose a new method to estimate the underlying undirected graphical model under MTP$_2$ and show that it is provably consistent in structure recovery without adjusting the tuning parameters. This is achieved by a constraint-based estimator that infers the structure of the underlying graphical model by testing the signs of the empirical partial correlation coefficients. We evaluate the performance of our estimator in simulations and on financial data.
\end{abstract}
\section{Introduction}
\label{sec:intro}
Gaining insights into complex phenomena often requires characterizing the relationships among a large number of variables. Gaussian graphical models offer a powerful framework for representing high-dimensional distributions by capturing the conditional dependencies between the variables of interest in the form of a network. These models have been extensively used in a wide variety of domains ranging from speech recognition~\citep{johnson2012mathematical} to genomics~\citep{kishino2000correspondence} and finance~\citep{wang2011dynamic}.
In this paper we consider the problem of learning a Gaussian graphical model under the constraint that the distribution is multivariate totally positive of order 2 (MTP$_2$), or equivalently, that all partial correlations are non-negative. Such models are also known as attractive Gaussian random fields. MTP$_2$ was first studied in~\citep{B82,FKG71,KarlinRinott80,KR83} and later also in the context of graphical models~\citep{Fallat17,LUZ17}. MTP$_2$ is a strong form of positive dependence, which is relevant for modeling in various applications including phylogenetics or portfolio selection, where shared ancestry or a latent global market variable often leads to positive dependence among the observed variables~\citep{MS05,Zwiernik15}.
Due to the explosion of data where the number of variables $p$ is comparable to or larger than the number of samples $N$, the problem of learning undirected Gaussian graphical models in the high-dimensional setting has been a central topic in machine learning, statistics and optimization. There are two main classes of algorithms for structure estimation for Gaussian graphical models in the high-dimensional setting. A first class of algorithms attempts to explicitly recover which edges exist in the graphical model, for example using conditional independence tests~\citep{Anand12,ST18} or neighborhood selection~\citep{MB06}. A second class of algorithms instead focuses on estimating the precision matrix.
The most prominent of these algorithms is \emph{graphical lasso}~\citep{BGA08,FHT08,RWRY11,YL07}, which applies an $\ell_1$ penalty to the log-likelihood function to estimate the precision matrix. Other algorithms include linear programming based approaches such as graphical Dantzig~\citep{Yuan10} and CLIME~\citep{CLL11,CLZ16}; optimization with non-convex penalties like~\citep{FFW09,LF09,LW17}; as well as greedy methods like~\citep{JJR12,SPZ12}.
The main limitation of all aforementioned approaches is the requirement of a specific tuning parameter to obtain consistency guarantees in estimating the edges of the underlying graphical model. In most real-world applications, the correct tuning parameter is unknown and difficult to discover. To make the estimate less sensitive to misspecification of tuning parameters,~\citet{LWang17} and~\citet{SZ13} proposed estimating high-dimensional precision matrices using square-root lasso~\citep{BCW11} and scaled lasso~\citep{SZ12} respectively. These estimators have the advantage that their theoretical guarantees do not rely on an unknown tuning parameter, thereby allowing them to consistently estimate precision matrices without tuning parameter adjustment. While the estimated precision matrices from these methods are guaranteed to converge to the true precision matrix, the zero patterns of the estimated matrices are not guaranteed to recover the underlying graph.
The algorithms described above are for learning the underlying undirected graph in \emph{general} Gaussian models. In this paper, we consider the special setting of MTP$_2$ Gaussian models. Several algorithms have been proposed
that are able to exploit the additional structure imposed by MTP$_2$
with the goal of obtaining stronger results than for general Gaussian graphical models.
In particular,~\citet{LUZ17} showed that the MLE exists whenever the sample size $N > 2$ (independent of the number of variables $p$), which is striking given that $N > p$ is required for the MLE to exist in general Gaussian graphical models.
Since the MLE under MTP$_2$ is not a consistent estimator for the structure of the graph~\citep{SH15},~\citet{SH15} considered applying thresholding to entries in the MLE, but this procedure requires a tuning parameter and does not have consistency guarantees.
The three main contributions of this paper are:
\begin{enumerate}
\item[1)] we provide a new algorithm for learning Gaussian graphical models under MTP$_2$ that is based on conditional independence testing;
\item[2)] we prove that this algorithm does not require adjusting any tuning parameters for the theoretical consistency guarantees in structure recovery;
\item[3)] we show that our algorithm compares favorably to other methods for learning graphical models on both simulated data and financial data.
\end{enumerate}
\section{Preliminaries and Related Work}
\label{sec:pre}
\paragraph{Gaussian graphical models:} Given a graph $G = ([p], \mathcal{E})$ with vertex set $[p] = \{1, \cdots, p\}$ and edge set $\mathcal{E}$ we associate to each node $i$ in $G$ a random variable $X_i$. A distribution $\mathbf{P}$ on the nodes $[p]$ forms an \emph{undirected graphical model} with respect to $G$ if
\begin{equation}
\label{eq_pair}
X_i\indep X_j\mid X_{[p]\setminus\{i,j\}} \quad \textrm{for all } (i,j)\notin \mathcal{E}.
\end{equation}
When $\mathbf{P}$ is Gaussian with mean zero, covariance matrix $\Sigma$ and precision matrix $\Theta:=\Sigma^{-1}$, the setting we concentrate on in this paper, then (\ref{eq_pair}) is equivalent to $\Theta_{ij}= 0$ for all $(i,j)\notin \mathcal{E}$. By the Hammersley-Clifford Theorem, for strictly positive densities such as the Gaussian, (\ref{eq_pair}) is equivalent to
\begin{equation*}
X_i\indep X_j\mid X_S \quad \textrm{for all } S\subseteq[p]\setminus\{i,j\} \textrm{ that separate } i,j,
\end{equation*}
where $i,j$ are separated by $S$ in $G$ when $i$ and $j$ are in different connected components of $G$ after removing the nodes $S$ from $G$.
In the Gaussian setting, $X_i\indep X_j\mid X_S$ if and only if the corresponding \emph{partial correlation coefficient} $\rho_{ij \mid S}$ is zero, which can be calculated from submatrices of $\Sigma$, namely
\begin{align*}
\rho_{ij \mid S} = & - \frac{((\Sigma_{M,M})^{-1})_{i, j}}{\sqrt{((\Sigma_{M,M})^{-1})_{i, i} ((\Sigma_{M,M})^{-1})_{j, j}}}, \\
& \textrm{where } M=S\cup\{i,j\}.
\end{align*}
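The display above translates directly into code; a minimal sketch (the function name is our own):

```python
import numpy as np

def partial_corr(Sigma, i, j, S):
    """Partial correlation rho_{ij|S}, computed from the submatrix of the
    covariance Sigma indexed by M = S union {i, j}, as in the display above."""
    M = sorted(set(S) | {i, j})
    P = np.linalg.inv(Sigma[np.ix_(M, M)])   # inverse of Sigma_{M,M}
    a, b = M.index(i), M.index(j)
    return -P[a, b] / np.sqrt(P[a, a] * P[b, b])
```

When $S = [p]\setminus\{i,j\}$, the submatrix is all of $\Sigma$ and the formula reduces to the familiar expression in terms of the precision matrix $\Theta$.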
\paragraph{MTP$_2$ distributions:}
A density function $f$ on $\mathbb{R}^p$ is MTP$_2$ if
$$ f(x) f(y) \leq f(x \wedge y) f(x \vee y) \quad \textrm{for all } x, y \in \mathbb{R}^p,$$
where $\vee, \wedge$ denote the coordinate-wise minimum and maximum respectively~\citep{FKG71,KarlinRinott80}.
In particular, a Gaussian distribution is MTP$_2$ if and only if its precision matrix $\Theta$ is an $M$-matrix, i.e. $\Theta_{ij} \leq 0$ for all $i \neq j$~\citep{B82,KR83}. This implies that all partial correlation coefficients are non-negative, i.e., $\rho_{ij \mid S} \geq 0$ for all $i, j, S$~\citep{KR83}. In addition, for MTP$_2$ distributions it holds that $X_i \indep X_j \mid X_S$ if and only if $i,j$ are separated in $G$ given $S$~\citep{Fallat17}.
Hence $i,j$ are connected in $G$ given $S$ if and only if
$\rho_{ij \mid S} > 0$.
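The M-matrix characterization of Gaussian MTP$_2$ distributions can be checked in one line; a minimal sketch:

```python
import numpy as np

def is_mtp2_gaussian(Theta, tol=1e-12):
    """A Gaussian with (symmetric positive definite) precision matrix Theta
    is MTP2 iff Theta is an M-matrix, i.e. all off-diagonal entries <= 0."""
    off_diag = Theta - np.diag(np.diag(Theta))
    return bool(np.all(off_diag <= tol))
```

Equivalently, one can verify that all partial correlations $-\Theta_{ij}/\sqrt{\Theta_{ii}\Theta_{jj}}$ are non-negative.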
MTP$_2$ distributions are relevant for various applications. In particular, Gaussian tree models with latent variables are MTP$_2$ up to sign~\citep{LUZ17}; this includes the important class of single factor analysis models. As an example, in~\citep{SH15} MTP$_2$ was used for data measuring students' performance on different math subjects, an application where a factor analysis model with a single latent factor measuring general mathematical ability seems fitting. In addition, factor analysis models are used frequently in psychology and finance; the MTP$_2$ constraint has been applied to a dataset from psychology in~\citep{LUZ17} and auctions in~\citep{HLP12}.
MTP$_2$ was also used in the modelling of global stock prices, motivated by the fact that asset price changes are usually positively correlated~\citep{ARU19}; in particular, the authors reported that the correlation matrix of the daily returns of $5$ global stocks is an inverse M-matrix~\citep[Figure~1]{ARU19}. In the same paper, the authors also showed that using a covariance matrix among stocks estimated under MTP$_2$ achieves better performance at portfolio selection than other state-of-the-art methods.
\paragraph{Algorithms for learning Gaussian graphical models:} An algorithm is called \emph{consistent} if the estimated graph converges to the true graph $G$ as the sample size $N$ goes to infinity. \emph{CMIT}, an algorithm proposed in~\citep{Anand12}, is most related to the approach in this paper. Starting in the complete graph, edge $(i,j)$ is removed if there exists $S\subseteq [p]\setminus\{i,j\}$ with $|S| \leq \eta$ (for a tuning parameter $\eta$ that represents the maximum degree of the underlying graph) such that the corresponding empirical partial correlation coefficient satisfies $|\hat{\rho}_{ij \mid S}| \leq \lambda_{N,p}$. For consistent estimation, the tuning parameter $\lambda_{N,p}$ needs to be selected carefully depending on the sample size $N$ and number of nodes $p$. Intuitively, if $(i,j) \notin G$, then $\rho_{ij | S} = 0$ for all $S$ that separate $(i,j)$. Since $\hat{\rho}_{ij \mid S}$ concentrates around $\rho_{ij \mid S}$, it holds with high probability that there exists $S\subseteq [p]\setminus\{i,j\}$ for which $|\hat{\rho}_{ij \mid S}|\leq \lambda_{N,p}$,
so that edge $(i,j)$ is removed from $G$. Other estimators such as graphical lasso~\citep{RWRY11} and neighborhood selection~\citep{MB06} also require a tuning parameter: $\lambda_{N,p}$ represents the coefficient of the $\ell_1$ penalty and critically depends on $N$ and $p$ for consistent estimation.
Finally, with respect to estimation specifically under the MTP$_2$ constraint, the authors in~\citep{SH15} propose thresholding the MLE $\widehat{\Omega}$ of the precision matrix, which can be obtained by solving the following convex optimization problem:
\begin{equation} \label{eq:SH_obj}
\widehat{\Omega} := \operatorname*{arg\,min}_{\Omega \succeq 0, \; \Omega_{ij} \leq 0\; \forall i \neq j} - \log \det (\Omega) + \tr (\Omega \hat{\Sigma}),
\end{equation}
where $\hat{\Sigma}$ is the sample covariance matrix. The threshold quantile $q$ is a tuning parameter, and apart from empirical evidence that thresholding works well, there are no known theoretical consistency guarantees for this procedure.
In addition to relying on a specific tuning parameter for consistent estimation, existing estimators require additional conditions with respect to the underlying distribution. The consistency guarantees of graphical lasso~\citep{RWRY11} and moment matching approaches such as CLIME~\citep{CLL11} require that the diagonal elements of $\Sigma$ are upper bounded by a constant and that the minimum edge weight
$
\min_{i\neq j, \Theta_{ij} \neq 0} |\Theta_{ij}| \geq C \sqrt{\log (p)/N}
$
for some positive constant $C$. Consistency of CMIT~\citep{Anand12} also requires the minimum edge weight condition. Consistency of CLIME requires a bounded matrix $L_1$ norm of the precision matrix $\Theta$, which implies that all diagonal elements of $\Theta$ are bounded.
\paragraph{Learning a precision matrix without adjusting any tuning parameters:} Another recent line of work similar to ours considers estimating high-dimensional Gaussian precision matrices without the tuning of parameters. The most prominent such approach is TIGER~\citep{LWang17} and related works include scaled and organic lasso~\citep{SZ12,YB19}.
These estimators have the desirable property that the estimated precision matrix $\hat{\Theta}$ is guaranteed to converge to the true $\Theta$ without requiring any adjustment of the regularization parameter. However, the support of the estimated $\hat{\Theta}$ is not guaranteed to converge to the underlying graph $G$ (see e.g. Theorem~4.3 of~\citep{LWang17}), which is precisely the task considered in this paper.
\section{Algorithm and Consistency Guarantees}
\label{sec:alg}
Algorithm~\ref{alg:mtp2} is our proposed procedure for learning a Gaussian graphical model under the MTP$_2$ constraint. In the following, we first describe Algorithm~\ref{alg:mtp2} in detail and then prove its consistency without the need of performing any adjustment of tuning parameters.
\begin{algorithm*}[!t]
\caption{Structure learning under total positivity}
\label{alg:mtp2}
\textbf{Input:} Matrix of observations $\hat{X} \in \mathbf{R}^{N \times p}$ with sample size $N$ on $p$ nodes. \\
\textbf{Output:} Estimated graph $\hat{G}$.
\begin{algorithmic}[1]
\STATE Set $\hat{G}$ as the completely connected graph over the vertex set $[p]$; set $\ell := -1$;
\REPEAT
\STATE set $\ell = \ell + 1$;
\REPEAT
\STATE select a (new) ordered pair $(i,j)$ that are adjacent in $\hat{G}$ and such that $|\textrm{adj}_i(\hat{G}) \setminus \{j\}| \geq \ell$;
\REPEAT
\STATE choose a (new) subset $S \subseteq \textrm{adj}_i(\hat{G}) \setminus \{j\}$ with $|S| = \ell$ and then choose a (new) node $k \in [p] \setminus (S \cup \{i,j\})$;
\STATE calculate the empirical partial correlation coefficient $\hat{\rho}_{ij \mid S \cup \{k\}}$ using randomly drawn data with batch size $M := N^\gamma$; if $\hat{\rho}_{ij \mid S \cup \{k\}} < 0$, delete $i - j$ from $\hat{G}$;
\UNTIL{edge $i - j$ is deleted from $\hat{G}$ or all $S$ and $k$ are considered;}
\UNTIL{all ordered pairs $i,j$ that are adjacent in $\hat{G}$ with $|\textrm{adj}_i(\hat{G}) \setminus \{j\}| \geq \ell$ are considered;}
\UNTIL{for each $i,j$, $|\textrm{adj}_i(\hat{G}) \setminus \{j\}| < \ell$.}
\end{algorithmic}
\end{algorithm*}
Similar to CMIT~\citep{Anand12}, Algorithm~\ref{alg:mtp2} starts with the fully connected graph $\hat{G}$ and sequentially removes edges based on conditional independence tests. The algorithm iterates with respect to a parameter $\ell$ that starts at $\ell = 0$. In each iteration, for all pairs of nodes $i, j$ such that the edge $(i,j) \in \hat{G}$ and node $i$ has at least $\ell$ neighbors (denoted by $\textrm{adj}_i(\hat{G})$), the algorithm considers all combinations of subsets $S$ of $\textrm{adj}_i(\hat{G})$ excluding $j$ that have size $\ell$ and all nodes $k \neq i,j $ that are not in $S$. For each combination of subset $S$ and node $k$, it calculates the empirical partial correlation coefficient $\hat{\rho}_{ij \mid S \cup \{ k \}}$.
Importantly, $\hat{\rho}_{ij \mid S \cup \{ k \}}$
is calculated only on a \emph{subset} (which we refer to as a \emph{batch}) of size $M := N^\gamma$ that we draw randomly from the $N$ samples.
If any of these empirical partial correlation coefficients are negative, then edge $i - j$ is deleted from $\hat G$ (and no further tests are performed on $(i,j)$). Each iteration of the algorithm increases $\ell$ by~1 and the algorithm terminates when for all nodes $i,j$ such that $(i,j) \in \hat{G}$, the neighborhood of $i$ excluding $j$ has size strictly less than $\ell$.
The basic intuition behind Algorithm~\ref{alg:mtp2} is that if there is an edge $i-j$ in $G$, then all partial correlations $\rho_{ij \mid S}$ are positive because of the basic properties of MTP$_2$. In the limit of large $N$, this implies that all $\hat{\rho}_{ij \mid S}$ are positive. On the other hand, when $i$ and $j$ are not connected in the true underlying graph, then there exists a list of conditioning sets $S_1, \cdots, S_K$ such that $\rho_{ij \mid S_k}=0$ for all $1\leq k\leq K$. When $K$ is large enough, then intuitively there should exist $1\leq k\leq K$ such that $\hat{\rho}_{ij \mid S_k} < 0$ with high probability. However, since for overlapping conditioning sets the empirical partial correlations are highly correlated, we use separate batches of data for their estimation.
This leads to a procedure for learning the underlying Gaussian graphical model by deleting edges based on the signs of empirical partial correlation coefficients.
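A compact, unoptimized sketch of Algorithm~\ref{alg:mtp2} along these lines follows. The batch-size floor guarding invertibility of the sample covariance is our own addition for this sketch, not part of the algorithm as stated, and the helper names are illustrative:

```python
import numpy as np
from itertools import combinations

def partial_corr(Sigma, i, j, S):
    """Partial correlation rho_{ij|S} from the submatrix Sigma_{M,M}, M = S u {i,j}."""
    M = sorted(set(S) | {i, j})
    P = np.linalg.inv(Sigma[np.ix_(M, M)])
    a, b = M.index(i), M.index(j)
    return -P[a, b] / np.sqrt(P[a, a] * P[b, b])

def mtp2_structure(X, gamma=7 / 9, seed=0):
    """Delete edge (i, j) as soon as some empirical partial correlation
    rho_{ij | S u {k}}, computed on a random batch of size M = N^gamma,
    is negative; conditioning sets grow with the iteration counter ell."""
    rng = np.random.default_rng(seed)
    N, p = X.shape
    batch = max(int(N ** gamma), p + 2)  # floor keeps sample covariance invertible
    adj = {i: set(range(p)) - {i} for i in range(p)}
    ell = 0
    while any(len(adj[i] - {j}) >= ell for i in adj for j in adj[i]):
        for i in range(p):
            for j in sorted(adj[i]):
                if j not in adj[i] or len(adj[i] - {j}) < ell:
                    continue
                deleted = False
                for S in combinations(sorted(adj[i] - {j}), ell):
                    for k in sorted(set(range(p)) - set(S) - {i, j}):
                        idx = rng.choice(N, size=min(batch, N), replace=False)
                        Sigma_hat = np.cov(X[idx], rowvar=False)
                        if partial_corr(Sigma_hat, i, j, set(S) | {k}) < 0:
                            adj[i].discard(j)  # delete edge in both directions
                            adj[j].discard(i)
                            deleted = True
                            break
                    if deleted:
                        break
        ell += 1
    return adj
```

Because deletions are always applied to both endpoints, the returned adjacency structure is symmetric by construction; the `partial_corr` helper is repeated here so the sketch is self-contained.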
Having provided the high level intuition behind Algorithm~\ref{alg:mtp2}, we now prove its consistency under common assumptions on the underlying data generating process. Let $d$ denote the maximum degree of the true underlying graph $G$. For any positive semidefinite matrix $A$, let $\lambda_{\min}(A)$ and $\lambda_{\max}(A)$ denote the minimum and maximum eigenvalues of $A$ respectively.
\begin{condition}\label{cd:eigen}
There exist positive constants $\sigma_{\min}$ and $\sigma_{\max}$ such that for any subset of nodes $S \subseteq [p]$ with $|S| \leq d + 4$, the true underlying covariance matrix satisfies
\begin{align*}
\lambda_{\min}(\Sigma_S) \geq \sigma_{\min} \quad\textrm{and}\quad \lambda_{\max}(\Sigma_S) \leq \sigma_{\max}.
\end{align*}
\end{condition}
Note that since $\lambda_{\max}(\Sigma_S) \leq \tr(\Sigma_S)$ and $|S| \leq d + 4$, it is straightforward to show that a sufficient condition for $\lambda_{\max}(\Sigma_S)\leq\sigma_{\max}$ is that all diagonal entries of $\Sigma$ scale as a constant. This condition is also required by many existing methods including graphical lasso and CLIME; see Section~\ref{sec:pre}.
Similarly, a sufficient condition for $\lambda_{\min}(\Sigma_S)\geq \sigma_{\min}$ is that all diagonal entries of $\Theta$ scale as a constant (see the Supplementary Material for a proof); this assumption is also required by CLIME.
\begin{condition}\label{cd:signal}
There exists a positive constant $c_\rho$ such that for any two nodes $i, j \in [p]$, if $(i,j) \in G$, then $\rho_{ij \mid [p] \setminus \{i,j\}} \geq c_\rho \sqrt{(\log p)/(N^{3 / 4})}$.
\end{condition}
Condition~\ref{cd:signal} is a standard condition for controlling the minimum edge weight in $G$ as required, for example, by graphical lasso.
While the minimum threshold in our condition scales as $\sqrt{(\log p)/(N^{3 / 4})}$, graphical lasso only requires $\sqrt{(\log p)/N}$ (but instead requires a particular choice of tuning parameter and the incoherence condition).
\begin{condition}\label{cd:size}
The number of nodes $p$ satisfies $p \geq N^{\frac{1}{8}} + d + 2$.
\end{condition}
Condition~\ref{cd:size} implies that the high-dimensional consistency guarantees of Algorithm~\ref{alg:mtp2} cannot be directly generalized to the low-dimensional setting where $p$ scales as a constant.
We now provide the main result of our paper, namely consistency of Algorithm~\ref{alg:mtp2}.
\begin{theorem} \label{thm:main}
Assume that the maximum neighbourhood size $d$ scales as a constant and let Conditions~\ref{cd:eigen}-\ref{cd:size} be satisfied with $c_\rho$ sufficiently large. Then for any $\gamma \in (\frac{3}{4}, 1)$, there exist positive constants $\tau$ and $C$ that depend on $(c_\rho, \sigma_{\max}, \sigma_{\min}, d, \gamma)$ such that with probability at least $1 - p^{-\tau} - p^2 e^{-C N^{\frac{1 - \gamma}{2} \wedge (4 \gamma - 3)}}$, the graph estimated by Algorithm~\ref{alg:mtp2} is the same as the underlying graph $G$.
\end{theorem}%
\begin{remark}\label{rmk:gamma}
The consistency guarantees of our algorithm hold for any $\gamma\in(\frac{3}{4},1)$. This means that our algorithm does not require tuning of the parameter $\gamma$ to consistently estimate the underlying graph $G$.
Note that this is in contrast to other methods like graphical lasso or CLIME,
where the consistency guarantees require a specific choice of the tuning parameter in the algorithm, which is unknown a priori. This is advantageous, since our algorithm can consistently estimate the graph without running any computationally expensive tuning parameter selection approaches, such as stability selection~\citep{MB10}. By setting $\frac{1 - \gamma}{2} = (4 \gamma - 3)$, we obtain that the \emph{theoretically optimal} value is $\gamma=7/9$, as this leads to the best asymptotic rate. However, as seen in Section~\ref{sec:eval}, in practice different values of $\gamma$ can lead to different results. In particular, higher values of $\gamma$ empirically lead to removing fewer edges, since the overlap between batches is higher and thus the empirical partial correlation coefficients are more correlated with each other.
\end{remark}
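The computation of the theoretically optimal $\gamma$ in Remark~\ref{rmk:gamma} can be verified with exact rational arithmetic; the short sketch below (illustrative only) solves $\frac{1-\gamma}{2} = 4\gamma - 3$ and reports the resulting rate exponent $\frac{1-\gamma}{2} \wedge (4\gamma - 3)$.

```python
from fractions import Fraction

# Solve (1 - gamma)/2 = 4*gamma - 3:
#   1 - gamma = 8*gamma - 6  =>  7 = 9*gamma  =>  gamma = 7/9.
gamma = Fraction(7, 9)
assert (1 - gamma) / 2 == 4 * gamma - 3

# At gamma = 7/9 both branches of the min coincide, so the
# rate exponent min((1-gamma)/2, 4*gamma-3) is maximized.
rate = min((1 - gamma) / 2, 4 * gamma - 3)
print(gamma, rate)  # 7/9 1/9
```

At $\gamma = 7/9$ the probability bound in Theorem~\ref{thm:main} thus decays as $\exp(-C N^{1/9})$, up to the polynomial factors in $p$.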
\begin{remark}
In applications where domain knowledge regarding the graph sparsity is available, $\gamma$ can still be tuned to incorporate such knowledge and improve estimation accuracy. We see it as a benefit of our method that a tuning parameter can be used when one has access to domain knowledge, but does not have to be tuned in order to obtain consistent estimates, since the method is provably consistent for all $\gamma\in(\frac{3}{4},1)$.
\end{remark}
\paragraph{Proof of Theorem~\ref{thm:main}:} In the following, we provide an overview of the proof of our main result. %
Theorems~\ref{thm:fn} and~\ref{thm:fp} show that at iteration $\ell = d + 1$, the graph $\hat{G}$ estimated by Algorithm~\ref{alg:mtp2} is exactly the same as the underlying graph $G$. The proof is then completed by showing that Algorithm~\ref{alg:mtp2} stops exactly at iteration $\ell = d + 1$. All proofs are provided in the Supplementary Material.
We start with Theorem~\ref{thm:fn}, which bounds the \emph{false negative rate} of Algorithm~\ref{alg:mtp2}, i.e.~showing that all edges $(i,j)$ in the true graph $G$ are retained.
\begin{theorem}[False negative rate] \label{thm:fn}
Under Conditions~\ref{cd:eigen} and~\ref{cd:signal} and $c_\rho$ sufficiently large, there exists a positive constant $\tau$ that depends on $(c_\rho, \sigma_{\max}, \sigma_{\min}, d)$ such that with probability at least $1 - p^{-\tau}$, the graph $\hat G$ estimated by Algorithm~\ref{alg:mtp2} at iteration $\ell = d + 1$ contains all edges $(i,j) \in G$.
\end{theorem}
The proof of Theorem~\ref{thm:fn} is based on concentration inequalities in estimating partial correlation coefficients.
The high-level intuition behind the proof is that the empirical partial correlation coefficients concentrate exponentially around the true partial correlation coefficients; hence, with high probability, if an edge exists, no empirical partial correlation coefficient will be negative, and as a consequence Algorithm~\ref{alg:mtp2} will not eliminate the edge.
The following theorem bounds the \emph{false positive rate}; namely, it shows that with high probability Algorithm~\ref{alg:mtp2} will delete all edges $(i,j)$ that are not in the true graph $G$.
\begin{theorem}[False positive rate]\label{thm:fp}
Under the same conditions as Theorem~\ref{thm:main}, there exist positive constants $C, \tau$ that depend on $(c_\rho, \sigma_{\max}, \sigma_{\min}, d, \gamma)$ such that with probability at least $1 - p^{-\tau} - p^2e^{-C N^{\frac{1 - \gamma}{2} \wedge (4 \gamma - 3)}}$, the graph $\hat G$ estimated by Algorithm~\ref{alg:mtp2} at iteration $\ell = d + 1$ does not contain any edges $(i,j) \notin G$.
\end{theorem}
The proof of Theorem~\ref{thm:fp} relies heavily on the following lemma, which considers the orthant probability of partial correlation coefficients. Recall that in Algorithm~\ref{alg:mtp2}, for a particular edge $i - j$ in the estimated graph $\hat G$ at a given iteration, we calculate a series of empirical partial correlation coefficients with different conditioning sets. The only way Algorithm~\ref{alg:mtp2} will not delete the edge is if all empirical partial correlation coefficients are $\geq 0$. Thus, given two nodes $i, j$ for which $(i, j) \notin G$, we need to upper bound the orthant probability that all empirical partial correlation coefficients computed by Algorithm~\ref{alg:mtp2} are non-negative. As we will discuss next, the use of batches is critical for this result.
\begin{lemma}\label{lem:fn}
Consider a pair of nodes $(i,j)\notin G$. Assume that there exist $K := N^{\frac{1 - \gamma}{2}}$ sets of nodes $S_1, \cdots, S_K \subseteq [p] \setminus \{i,j\}$ with $|S_k| \leq d + 2$ that satisfy $\rho_{ij \mid S_k} = 0$. Then there exist positive constants $C$ and $N_0$ that depend on $(\sigma_{\max}, \sigma_{\min},d)$ such that
\begin{align}\label{eq:exp}
\Pr(\hat{\rho}_{ij \mid S_k} > 0\quad \forall k \in [K]) \leq \exp(- C N^{\frac{1 - \gamma}{2} \wedge (4 \gamma - 3)}).
\end{align}
\end{lemma}
\begin{figure*}[!t]
\centering
\subcaptionbox{Random graphs}{\includegraphics[width=0.31\textwidth]{MCC_random.png}}
\hfill
\subcaptionbox{Chain graphs}{\includegraphics[width=0.31\textwidth]{MCC_chain.png}}%
\hfill
\subcaptionbox{Grid graphs}{\includegraphics[width=0.31\textwidth]{MCC_grid.png}}%
\caption{Comparison of different algorithms evaluated on MCC across (a) random, (b) chain, (c) grid graphs with $p=100$ and $N \in \{ 25, 50, 100, 200, 500, 1000\}$. For each graph and choice of $p$ and $N$, results are shown as an average across $20$ trials. The shaded areas correspond to $\pm 1$ standard deviation of MCC over $20$ trials.}
\label{fig:mcc}
\vspace{-0.2cm}
\end{figure*}
To provide intuition for the proof of Lemma~\ref{lem:fn}, consider a scenario where the batch size $M$ is chosen small enough such that the batches used to estimate the different $\hat{\rho}_{ij \mid S_k}$'s have no overlap. Since in this case all $\hat{\rho}_{ij \mid S_k}$'s are independent, the bound in Lemma~\ref{lem:fn} can easily be proven, namely: for some positive constant $\delta < 1$, it holds that
\begin{align*}
\Pr(\hat{\rho}_{ij \mid S_k} & > 0 \quad \forall k \in [K]) = \prod_{k=1}^K \Pr(\hat{\rho}_{ij \mid S_k} > 0) \\
& \leq \delta^K = \exp\big(-\log (1 / \delta) \cdot N^{\frac{1 - \gamma}{2}}\big).
\end{align*}
However, for small batch size $M$ the empirical partial correlation coefficients $\hat{\rho}_{ij \mid S}$ do not concentrate around $\rho_{ij \mid S}$, which may result in false negatives. In the proof of Lemma~\ref{lem:fn} we show that choosing a batch size of $M = N^\gamma$ guarantees the required concentration result as well as a sufficiently weak dependence among the empirical partial correlation coefficients $\hat{\rho}_{ij \mid S_k}$ to obtain the exponential upper bound in~\eqref{eq:exp}, as in the independent case.
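To make these scalings concrete, the sketch below (illustrative variable names; this is not the paper's implementation of Algorithm~\ref{alg:mtp2}) computes the batch size $M = N^\gamma$ and the number $K = N^{(1-\gamma)/2}$ of conditioning sets from Lemma~\ref{lem:fn}, together with the independent-case orthant bound $\delta^K$ discussed above.

```python
import math

def batch_scaling(N, gamma=7/9):
    """Batch size and number of conditioning sets implied by the theory."""
    M = int(N ** gamma)              # samples per batch
    K = int(N ** ((1 - gamma) / 2))  # number of conditioning sets
    return M, K

def independent_orthant_bound(N, gamma=7/9, delta=0.5):
    """Bound P(all K empirical coefficients > 0) <= delta**K when the
    batches are disjoint; delta = 1/2 corresponds to each coefficient
    being symmetrically distributed around its true value of zero."""
    _, K = batch_scaling(N, gamma)
    return delta ** K

for N in (100, 1000, 10000):
    M, K = batch_scaling(N)
    print(N, M, K, independent_orthant_bound(N))
```

Note how slowly $K$ grows at $\gamma = 7/9$: the exponential improvement over a single-batch test kicks in only for fairly large $N$, which is consistent with the finite-sample behavior observed in Section~\ref{sec:eval}.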
Lemma~\ref{lem:fn} implies Theorem~\ref{thm:fp} by taking uniform control over all edges $(i,j) \not\in G$.
Finally, to complete the proof of Theorem~\ref{thm:main}, it remains to show that Algorithm~\ref{alg:mtp2} terminates at iteration $\ell = d + 1$.
\begin{proof}[Proof of Theorem~\ref{thm:main}]
It follows from Theorem~\ref{thm:fn} and Theorem~\ref{thm:fp} that with probability at least $1 - p^{-\tau} - p^2 e^{-CN^{\frac{1 - \gamma}{2} \wedge (4 \gamma - 3)}}$, the graph estimated by Algorithm~\ref{alg:mtp2} at iteration $\ell = d + 1$ is exactly the same as $G$. Since the maximum degree of $G$ is at most $d$, it matches the stopping criterion of Algorithm~\ref{alg:mtp2}. As a consequence, Algorithm~\ref{alg:mtp2} terminates at iteration $\ell = d + 1$.
\end{proof}
\section{Empirical Evaluation}
\label{sec:eval}
In the following, we evaluate the performance of our algorithm for structure recovery in MTP$_2$ Gaussian graphical models in the high-dimensional, sparse regime. We first compare the performance of Algorithm~\ref{alg:mtp2} to various other methods on synthetically generated datasets and then present an application to graphical model estimation on financial data. The code to reproduce our experimental results is available at \url{https://github.com/puma314/MTP2-no-tuning-parameter}.
\subsection{Synthetic Data}
\label{sec_synthetic}
\begin{figure*}[!t]
\centering
\subcaptionbox{ROC curve
}{\includegraphics[width=0.3\textwidth]{ROC_curve_500_all.png}}
\hfill
\subcaptionbox{MCC
}{\includegraphics[width=0.3\textwidth]{ALL_MCC_curves_500_all.png}}%
\hfill
\subcaptionbox{True positive rate
}{\includegraphics[width=0.3\textwidth]{ALL_TPR_curves_500_all.png}}%
\caption{(a) ROC curves, (b) MCC, and (c) true positive rate versus normalized tuning parameter for random graphs with $p=100$ and $N=500$ across $30$ trials. The shaded regions correspond to $\pm 1$ standard deviation of MCC (TPR resp.) across $30$ trials.}
\label{fig:roc}
\vspace{-0.2cm}
\end{figure*}
Given a precision matrix $\Theta \in \mathbb{R}^{p \times p}$, we generate $N$ i.i.d. samples $x^{(1)}, \ldots, x^{(N)} \sim \mathcal{N}(0, \Theta^{-1})$. We let $\hat{\Sigma} = \frac{1}{N} \sum_{i=1}^{N} (x^{(i)})(x^{(i)})^T$ denote the sample covariance matrix. To analyze the performance of our algorithm in various scenarios, we vary $N$ for $p=100$. In addition, we consider three different sparsity patterns in the underlying precision matrix $\Theta$ that are similarly considered by~\citet{SH15}, namely:
\emph{Grid: } Let $B$ be the adjacency matrix of a 2d-grid of size $\sqrt{p}$. Let $\delta := 1.05 \cdot \lambda_1(B)$, $\tilde{\Theta} := \delta I - B$ and $\Theta = D \tilde{\Theta} D$, where $D$ is a diagonal matrix such that $\Sigma = \Theta^{-1}$ has unit diagonal entries.
\emph{Random: } Same as for \emph{grid} above, but with $B$ replaced by a symmetric matrix with zero diagonal in which one percent of the off-diagonal entries, chosen uniformly at random, are non-zero with values drawn uniformly from $[0,1]$.
\emph{Chain: } We let $\Sigma^* := (\sigma^*_{jk}) = (0.9^{|j-k|}), j,k = 1, \ldots, p$. Then we take $\Theta := (\Sigma^*)^{-1}$.
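These designs translate directly into code; the sketch below follows the descriptions above using only numpy, with `sample` drawing the $N$ i.i.d. observations and forming $\hat\Sigma$ (function names are illustrative).

```python
import numpy as np

def chain_precision(p, rho=0.9):
    """Chain graph: Sigma*_{jk} = rho**|j-k|, Theta = (Sigma*)^{-1}."""
    idx = np.arange(p)
    sigma = rho ** np.abs(idx[:, None] - idx[None, :])
    return np.linalg.inv(sigma)

def grid_precision(side):
    """2d-grid graph: Theta-tilde = delta*I - B with delta = 1.05*lambda_1(B),
    rescaled so that Sigma = Theta^{-1} has unit diagonal entries."""
    p = side * side
    B = np.zeros((p, p))
    for i in range(side):
        for j in range(side):
            k = i * side + j
            if i + 1 < side:                 # vertical grid edge
                B[k, k + side] = B[k + side, k] = 1.0
            if j + 1 < side:                 # horizontal grid edge
                B[k, k + 1] = B[k + 1, k] = 1.0
    delta = 1.05 * np.max(np.linalg.eigvalsh(B))
    theta_tilde = delta * np.eye(p) - B
    # D rescales so that Sigma has unit diagonal: d_i = sqrt((Theta~^{-1})_ii).
    D = np.diag(np.sqrt(np.diag(np.linalg.inv(theta_tilde))))
    return D @ theta_tilde @ D

def sample(theta, N, rng=None):
    """N i.i.d. draws from N(0, Theta^{-1}) and the sample covariance."""
    rng = np.random.default_rng(rng)
    X = rng.multivariate_normal(np.zeros(theta.shape[0]),
                                np.linalg.inv(theta), size=N)
    return X, (X.T @ X) / N
```

Note that $\delta I - B$ has non-positive off-diagonal entries, i.e.\ it is an M-matrix, so the grid and random designs satisfy the MTP$_2$ constraint.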
Our primary interest in comparing different algorithms is their performance at recovering the underlying graph structure associated with $\Theta$.
As in~\citep{SH15}, in Figure~\ref{fig:mcc} we evaluate their performance using the Matthews correlation coefficient (MCC):
\begin{align*}
& \text{MCC} =\\
& \frac{\text{TP} \cdot \text{TN} - \text{FP} \cdot \text{FN}}{ \left( (\text{TP} + \text{FP})(\text{TP} + \text{FN})(\text{TN} + \text{FP})(\text{TN} + \text{FN})\right)^{1/2}},
\end{align*}
where $\text{TP}$, $\text{TN}$, $\text{FP}$ and $\text{FN}$ denote the number of true positives, true negatives, false positives and false negatives respectively. Intuitively, MCC measures the correlation between the presence of edges in the true and estimated graphs. Thus, a higher MCC score means fewer false positives \emph{and} false negatives. Since MCC combines true positive rates (TPR) and false positive rates (FPR), we consider it a compelling metric. MCC has also been used in similar work~\citep{SH15}. In the appendix, we also provide evaluation results based on TPR and FPR.
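In code, the MCC of an estimated graph can be computed from the four edge-classification counts as follows (a minimal sketch):

```python
import math

def mcc(tp, tn, fp, fn):
    """Matthews correlation coefficient from edge-classification counts."""
    denom = math.sqrt((tp + fp) * (tp + fn) * (tn + fp) * (tn + fn))
    if denom == 0:
        return 0.0  # common convention when a marginal count is zero
    return (tp * tn - fp * fn) / denom

# A perfect estimate gives MCC = 1; a fully wrong one gives -1.
print(mcc(10, 80, 0, 0))   # 1.0
```

The zero-denominator convention matters in practice: a very sparse estimate on a sparse graph can make one of the marginal counts vanish.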
\emph{Choice of Parameters:} We fix $p=100$ and vary $N = 25, 50, 100, 200, 500, 1000$ to analyze how the ratio $p/N$ affects performance for the various algorithms. For each setup and value of $N$, we do $20$ trials of each algorithm and report the average of the MCCs across the trials.
\emph{Methods Compared:} We benchmark our algorithm against a variety of state-of-the-art methods for structure learning in Gaussian graphical models (see Section~\ref{sec:pre}) for a range of tuning parameters:
\begin{itemize}
\item \emph{SH:} Slawski and Hein~\citep{SH15} considered the same problem as in this paper. For comparison to their algorithm we use the same range of tuning parameters as considered by them, namely $q \in \{ 0.7, 0.75, 0.8, 0.85, 0.9, 0.95, 0.99\}$.
\item \emph{glasso:} For graphical lasso~\citep{FHT08} we vary the sparsity parameter around the theoretically motivated tuning parameter of $\sqrt{\log(p)/n}$, namely $\lambda \in \{ 0.055, 0.16, 0.45, 1.26, 3.55, 10 \}$.
\item \emph{nbsel:} For neighborhood selection~\citep{MB06} we use the same $\lambda$ values as for \emph{glasso}.
\item \emph{TIGER:} For TIGER~\citep{LWang17}, we use the theoretically optimal value $\lambda := \pi \sqrt{\frac{\log (p)}{n}}$.
\item \emph{CMIT: } This algorithm~\citep{Anand12} has two tuning parameters. Since the run-time scales as $p^{\eta+2}$ in the maximal size $\eta$ of the conditioning set, we set $\eta = 1$ for computational reasons. For $\lambda$, we use the same values as for \emph{glasso}.
\item \emph{Our algorithm:} We use the asymptotically optimal choice of $\gamma = 7/9$ (see Remark \ref{rmk:gamma}) and also compare to $\gamma = 0.85$, which falls in the allowable range $(0.75,1)$.
\end{itemize}
For the comparison based on MCC in Figure~\ref{fig:mcc}, we use \emph{stability selection} \citep{MB10}, where an algorithm is run multiple times with different subsamples of the data for each tuning parameter and an edge is included in the estimated graph if it is selected often enough (we used $80\%$).
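The stability-selection wrapper used for the comparison can be sketched as follows; the `estimate_edges` callback is a hypothetical interface standing in for any of the compared structure-learning methods, and the subsampling details (half-samples, number of repetitions) are illustrative rather than taken from the paper.

```python
import numpy as np

def stability_selection(X, estimate_edges, lambdas, n_subsamples=50,
                        threshold=0.8, rng=None):
    """Keep an edge if it is selected in at least `threshold` of the
    subsample runs for some tuning parameter. `estimate_edges(X, lam)`
    is a hypothetical interface returning a set of frozenset edges."""
    rng = np.random.default_rng(rng)
    N = X.shape[0]
    selected = set()
    for lam in lambdas:
        counts = {}
        for _ in range(n_subsamples):
            idx = rng.choice(N, size=N // 2, replace=False)
            for e in estimate_edges(X[idx], lam):
                counts[e] = counts.get(e, 0) + 1
        selected |= {e for e, c in counts.items()
                     if c / n_subsamples >= threshold}
    return selected
```

This also makes the computational cost explicit: each tuning parameter multiplies the run-time of the base estimator by the number of subsamples, which is the overhead our fixed-$\gamma$ algorithm avoids.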
\emph{Discussion: } Figure~\ref{fig:mcc} compares the performance of the various methods based on MCC for random graphs, chain graphs and grid graphs. Compared with the algorithm that has theoretical properties similar to ours, namely TIGER, our algorithm has better overall performance across all simulation set-ups. For the other state-of-the-art methods, Figure~\ref{fig:mcc}(a) shows that our algorithm offers a significant improvement for random graphs over competing methods. Also on chain graphs (Figure~\ref{fig:mcc}(b)) our algorithm is competitive with the other algorithms, with \emph{SH} and \emph{nbsel} performing comparably. For the grid graph (Figure~\ref{fig:mcc}(c)), for $N \leq 200$ \emph{SH} with stability selection outperforms our algorithm with $\gamma = 7/9$. However, it is important to note that stability selection is a major advantage for the compared algorithms and comes at a significant computational cost. Moreover, by varying $\gamma$ in our algorithm its performance can be improved to be competitive with \emph{SH} with stability selection. Both points are discussed in more detail in the Supplementary Material. Another interesting phenomenon is that in Figure~\ref{fig:mcc}(c), our algorithm with $\gamma = 0.85$ performs better than the ``theoretically optimal'' $\gamma = 7 / 9$, which may seem to contradict our theoretical results. Notice, however, that ``theoretical optimality'' holds for $N\to\infty$. In the finite sample regime considered here, factors such as $\sigma_{\min}$, $\sigma_{\max}$ and $d$ can influence the optimal choice.
To evaluate the sensitivity of the various algorithms to their respective tuning parameters,
we generate an ROC curve for each algorithm on random graphs with $p=100$ and $N \in \{25, 50, 100, 200, 500, 1000\}$, of which $N=500$ is shown in Figure~\ref{fig:roc}(a); see the Supplementary Material for more details and plots. All algorithms perform similarly in terms of their ROC curves. Note that since our algorithm can only choose $\gamma$ from the range $(0.75, 1)$, its false positive rate is upper bounded and thus it is impossible to get a full ``ROC'' curve.
Figure~\ref{fig:roc}(b) and (c) show the MCC and true positive rate (TPR) for each algorithm as a function of the tuning parameter normalized to vary between $[0,1]$.
Our algorithm is the least sensitive to variations in the tuning parameter: it has one of the smallest ranges in both MCC and TPR (the $y$-axes) compared to the other algorithms. Our algorithm also shows the smallest standard deviations in MCC and TPR, demonstrating its consistency across trials (especially compared to \emph{SH}). Here we concentrate on TPR since the variation in FPR across trials is small for all algorithms.
Taken together, it is quite striking that our algorithm with fixed $\gamma$ generally outperforms methods with stability selection.
\subsection{Application to Financial Data}
We now examine an application of our algorithm to financial data. The MTP$_2$ constraint is relevant for such data, since the presence of a latent global market variable leads to positive dependence among stocks~\citep{HL02,MS05}. We consider the daily closing prices for $p=452$ stocks that were consistently in the S\&P 500 index from January 1, 2003 to January 1, 2018, which results in a sample size of $N=1257$. Due to computational limitations of stability selection, primarily with \emph{CMIT}, we performed the analysis on the first $p = 100$ of the $452$ stocks.
The $100$ stocks are categorized into 10 sectors, known as the Global Industry Classification Standard (GICS) sectors. This dataset is gathered from Yahoo Finance and has also been analyzed in~\citep{LHZ12}.
A common task in finance is to estimate the covariance structure between the log returns of stocks. Let $S_j^{(t)}$ denote the closing price of stock $j$ on day $t$ and let $X_j^{(t)} := \log ( S_j^{(t)} / S_j^{(t-1)} )$ denote the log return of stock $j$ from day $t-1$ to $t$. Denoting by $X := (X_1, \ldots, X_{100})^T$ the random vector of daily log returns of the $100$ stocks in the data set, then our goal is to estimate the undirected graphical model of $X$. We do this by treating the $1257$ data points $X^{(t)} := (X_1^{(t)}, \ldots, X_{100}^{(t)})$ corresponding to the days $t=1, \ldots, 1257$ as i.i.d.~realizations of the random vector $X$.
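The preprocessing step above is a one-liner with numpy; this sketch assumes the closing prices are arranged in a $(T, p)$ array (the array shapes are illustrative).

```python
import numpy as np

def log_returns(prices):
    """Daily log returns X_j^{(t)} = log(S_j^{(t)} / S_j^{(t-1)}).

    `prices` has shape (T, p): T daily closing prices for p stocks;
    the result has shape (T - 1, p)."""
    prices = np.asarray(prices, dtype=float)
    return np.diff(np.log(prices), axis=0)

# 1258 daily closing prices yield N = 1257 log-return observations.
X = log_returns(np.full((1258, 3), 100.0))
print(X.shape)  # (1257, 3)
```

The resulting rows $X^{(t)}$ are then treated as i.i.d.\ realizations, as described above.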
As in Section~\ref{sec_synthetic}, we compare our method to \emph{SH}, \emph{glasso} (using both stability selection and cross-validation), \emph{nbsel}, \emph{CMIT} (using both stability selection and the hyperparameter with the best performance) and TIGER.
Note that here we cannot assess the performance of the various methods using MCC since the graph structure of the true underlying graphical model is unknown. Instead, we assess each estimated graph based on its \emph{modularity coefficient}, namely the performance at grouping stocks from the same sector together. Table~\ref{tab:modularity} shows that our method using fixed $\gamma = 7 / 9$ outperforms all other methods in grouping the stocks. For further details on the analysis see the Supplementary Material.
\begin{table}[t!]
\centering
\begin{tabular}{ c c }
Method & Modularity \\ & Coefficient \\ \hline \hline
Our Algorithm ($\gamma = 7/9$) & 0.482 \\ \hline
Slawski-Hein with st.~sel. & 0.418 \\ \hline
Neighborhood selection with st.~sel. & 0.350\\ \hline
Graphical Lasso with st.~sel. & 0. \\ \hline
Cross-validated graphical lasso & 0.253\\ \hline
\emph{CMIT} with st.~sel. & -0.0088 \\ \hline
\emph{CMIT} with best hyperparameter & -0.0085 \\ \hline
TIGER & -0.5 \\
\end{tabular}
\vspace{-0.1cm}
\caption{Modularity scores of the estimated graphs; higher score indicates better clustering performance; ``st.~sel'' stands for ``stability selection''. For our algorithm we used the theoretically optimal value of $\gamma=7/9$.}\label{tab:modularity}
\vspace{-0.3cm}
\end{table}
\section{Discussion}
\label{sec:discuss}
In this paper, we proposed a tuning-parameter free, constraint-based estimator for learning the structure of the underlying Gaussian graphical model under the constraint of MTP$_2$. We proved consistency of our algorithm in the high-dimensional setting without relying on an unknown tuning parameter. We further benchmarked our algorithm against existing algorithms in the literature on both simulated and real financial data, thereby showing that it outperforms existing algorithms in both settings. A limitation of our algorithm is that its time complexity scales as $O(p^d)$; it would be interesting in future work to develop a more computationally efficient algorithm for graphical model estimation under MTP$_2$. Another limitation is that our algorithm is only provably consistent in the high-dimensional setting. However, the strong empirical performance of our algorithm as compared to existing algorithms is quite striking, particularly given that these results are obtained with a fixed $\gamma$. To our knowledge, this is the first tuning-parameter free algorithm for structure recovery in Gaussian graphical models with consistency guarantees.
\subsubsection*{Acknowledgements}
We thank Dheeraj Nagaraj, Cheng Mao, Philippe Rigollet, and the anonymous reviewers for helpful discussions. The authors acknowledge support by NSF (DMS-1651995), ONR (N00014-17-1-2147 and N00014-18-1-2765), IBM, a Sloan Fellowship and a Simons Investigator Award. At the time this research was completed, Yuhao Wang and Uma Roy were at the Massachusetts Institute of Technology.
\section{Introduction}
The detection of gravitational waves (GWs) by the Advanced Laser Interferometer
Gravitational-Wave Observatory (aLIGO) has enabled some of the first
experimental studies of gravity in the highly dynamical and strong-field
regimes \cite{gw150914, gw151226, o1bbh, gw170104, gw150914_tgr}. These first
few detections have already been used to place some of the most stringent
constraints on deviations from the general theory of relativity (GR) in this
domain, which is inaccessible to laboratory, Solar System or cosmological tests
of gravity.
However, it has not been possible to use LIGO signals to learn about the
polarization content of GWs \cite{gw150914_tgr}, a measurement highly relevant
when comparing GR to many of its alternatives \cite{tegp, Will2006}. In fact,
all existing observations are so far consistent with the extreme case of purely
non-GR polarizations. The reason for this is that the two LIGO instruments are
nearly coaligned, meaning that they are sensitive to approximately the
same linear combination of polarizations. This makes it nearly impossible to
unequivocally characterize the polarization content of transient GW signals
like the compact-binary coalescences (CBCs) observed so far, at least not
without making assumptions about the way the signals were sourced
\cite{Will2006, Chatziioannou2012}.
Existing observations that are usually taken to constrain the amount of allowed
non-GR polarizations can do so only in an indirect manner. For example,
measurements of the orbital decay of binary systems are sensitive to the total
radiated GW power, but do not probe the geometric effect (namely, the
directions in which space is stretched and squeezed) of the waves directly (see
e.g.\ \cite{Weisberg2010, Freire2012}, or \cite{Stairs2003, Wex2014} for
reviews). In the context of specific alternative theories (e.g.\ scalar-tensor)
such observations can indeed constrain the power contained in extra
polarizations. However, such measurements provide no direct, model-independent
information on the actual polarization content of the gravitational radiation.
Thus, there may be multiple theories, with different polarization content, that
still predict the correct observed GW emitted power.
To see that the above is the case, consider a scenario in which GWs are emitted
precisely as in GR, but where the polarizations change during propagation: the
phase evolution would be similar to GR, but the geometric effect of the wave
would be completely different \cite{Berezhiani2007, Hassan2013, Max2017,
Brax2017}. (This polarization mutation could take place if the linear
polarization basis does not diagonalize the kinetic matrix of the theory, as is
the case for neutrino oscillations \cite{Pontecorvo1957, Pontecorvo1967}, or
for the circular GW polarization states in dynamical Chern-Simons gravity
\cite{Alexander2009}.) Because the same limitations of pulsar binary analyses
apply to studies of the details in the phasing of signals previously detected
with LIGO, and other traditional tests of GR (like Solar System tests) have no
bearing on GWs, there currently exist no direct measurements of GW
polarizations.
Prospects for the direct measurement of GW polarizations are improved by the
addition of Advanced Virgo to the detector network. In principle, at least five
noncoaligned differential-arm detectors would be needed to break {\em all} the
degeneracies among the five nondegenerate polarizations allowed by generic
metric theories of gravity \cite{Eardley1973a, Eardley1973b}, if transient
signals are used \cite{Isi2017, Callister2017}. However, as we will show, the
current Advanced-LIGO--Advanced-Virgo network can already be used to
distinguish between {\em some} of the possible combinations of polarizations
without the need to use specific knowledge about the phase evolution of the
source.
In this note, we present a simple Bayesian method to extract information about
GW polarizations directly from strong CBC signals by using the relative
amplitudes and timing at the different detectors.
\section{Background} \label{sec:background}
\subsection{Polarizations} \label{sec:polarizations}
\begin{figure}
\includegraphics[width=0.33\columnwidth]{polcircles_p}\hfill
\includegraphics[width=0.33\columnwidth]{polcircles_x}\hfill
\includegraphics[width=0.33\columnwidth]{polcircles_b}\\
\includegraphics[width=0.33\columnwidth]{polcircles_c}\hfill
\includegraphics[width=0.33\columnwidth]{polcircles_y}\hfill
\includegraphics[width=0.33\columnwidth]{polcircles_l}
\caption{{\em Effect of different GW polarizations on a ring of free-falling
test particles}. Plus (+) and cross ($\times$) tensor modes (green); vector-x
(x) and vector-y (y) modes (red); breathing (b) and longitudinal (l) scalar
modes (black). In all of these diagrams the wave propagates in the
\emph{z} direction. This decomposition into polarizations was first proposed
for generic metric theories in \cite{Eardley1973b}.}
\label{fig:circles}
\end{figure}
In all theories that respect Einstein's equivalence principle, including GR,
gravitational interactions may be fully described via the universal coupling of
matter to a metric tensor \cite{Thorne1973, tegp}. Because of this, it may be
shown that, in any such {\em metric theory}, a (nearly-)null plane GW may be
encoded in at most six independent components of the Riemann tensor at any
given point in spacetime \cite{Eardley1973a, Eardley1973b, tegp}. These degrees
of freedom give rise to six geometrically distinct polarizations, corresponding
to the six linearly independent components of an arbitrary metric perturbation.
At any given spacetime point $\fvec{x}$, the metric perturbation may thus be
written as
\begin{equation} \label{eq:hab}
h_{ab}(\fvec{x}) = h_{A}(\fvec{x})\, e^A_{~ab}\, ,
\end{equation}
for six independent amplitudes, $h_{A}(\fvec{x})$, and six polarization tensors
$e^A_{~ab}$ (implicit sum over polarizations $A$). For instance, letting
${\bf w}_z={\bf w}_x \times {\bf w}_y$ be a spatial unit vector in the
direction of propagation of the wave, we may consider the set of linear
polarization tensors
\begin{equation}
{\bf e}^{+}= {\bf w}_x \otimes {\bf w}_x - {\bf w}_y \otimes {\bf w}_y\, ,
\end{equation}
\begin{equation}
{\bf e}^{\times}= {\bf w}_x \otimes {\bf w}_y + {\bf w}_y \otimes {\bf w}_x\, ,
\end{equation}
\begin{equation}
{\bf e}^{\rm x}= {\bf w}_x \otimes {\bf w}_z + {\bf w}_z \otimes {\bf w}_x\, ,
\end{equation}
\begin{equation}
{\bf e}^{\rm y}= {\bf w}_y \otimes {\bf w}_z + {\bf w}_z \otimes {\bf w}_y\, ,
\end{equation}
\begin{equation}
{\bf e}^{\rm b}= {\bf w}_x \otimes {\bf w}_x + {\bf w}_y \otimes {\bf w}_y\, ,
\end{equation}
\begin{equation}
{\bf e}^{\rm l}= {\bf w}_z \otimes {\bf w}_z\, .
\end{equation}
Then \eq{hab} implies that there exists some
gauge in which,
in a local Lorentz frame with Cartesian coordinates along $({\bf w}_x,\,
{\bf w}_y,\, {\bf w}_z)$,
\begin{equation} \label{eq:polarizations}
[h_{ij}] = \begin{pmatrix}
h_{\rm b} + h_+ & h_\times & h_{\rm x} \\
h_\times & h_{\rm b} - h_+ & h_{\rm y} \\
h_{\rm x} & h_{\rm y} & h_{\rm l}
\end{pmatrix} ,
\end{equation}
where the $h_A$'s represent the amplitudes of the linear polarizations: plus
($+$), cross ($\times$), vector x (x), vector y (y), breathing (b) and
longitudinal (l). The effect of each of these modes on a ring of
freely-falling particles is represented in \fig{circles}.
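The decomposition in \eq{hab} and the matrix in \eq{polarizations} can be reproduced with a few outer products; the sketch below (numpy only, with illustrative dictionary keys) builds the six linear polarization tensors for a frame aligned with the coordinate axes and assembles $h_{ij}$ from a set of amplitudes.

```python
import numpy as np

def polarization_tensors(wx, wy, wz):
    """Six linear polarization tensors e^A for propagation along wz,
    with (wx, wy, wz) an orthonormal right-handed triad."""
    o = np.outer
    return {
        "plus":  o(wx, wx) - o(wy, wy),
        "cross": o(wx, wy) + o(wy, wx),
        "vx":    o(wx, wz) + o(wz, wx),   # vector-x
        "vy":    o(wy, wz) + o(wz, wy),   # vector-y
        "b":     o(wx, wx) + o(wy, wy),   # breathing
        "l":     o(wz, wz),               # longitudinal
    }

def metric_perturbation(amplitudes, tensors):
    """h_ij = sum_A h_A e^A_ij for a dict of amplitudes h_A."""
    return sum(amplitudes[A] * tensors[A] for A in amplitudes)

# In the frame aligned with (wx, wy, wz), nonzero h_b and h_+ give the
# diagonal block (h_b + h_+, h_b - h_+, 0) of the matrix above.
e = polarization_tensors(*np.eye(3))
h = metric_perturbation({"plus": 0.1, "b": 0.2}, e)
```

Each tensor is symmetric, and only the tensor modes are traceless, in line with the transverse-traceless character of the GR polarizations.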
Polarizations may be characterized by their behavior under Lorentz
transformations, and different theories may be classified according to the
polarizations they allow, as seen by different observers; this is known as the
E(2) or {\em Eardley} classification \cite{Eardley1973a, Eardley1973b}. From a
field-theoretic perspective, the two tensor modes, the two vector modes and the
breathing (transverse) scalar mode correspond to the helicity $\pm2$, helicity
$\pm1$, and helicity $0$ states of a massive spin-2 particle (the graviton).
The remaining longitudinal scalar mode is usually linked to a ghost-like degree
of freedom (associated with the trace).
This correspondence between geometric (Eardley's classification) and
field-theoretic (Wigner's classification) language is, however, limited because
the E(2) classification is only semi-Lorentz-invariant (although it is
usually taken to hold, at least in the weak field regime) \cite{Eardley1973b}.
Einstein's theory only allows for the existence of linear combinations of the
tensor $+$ and $\times$ polarizations \cite{tegp}. On the other hand,
scalar-tensor theories famously predict the presence of some breathing
component associated with the theory's extra scalar field \cite{Brans1961}, as
do some theories with extra dimensions \cite{Andriot2017}. On top of tensor and
scalar modes, bimetric theories, like Rosen or Lightman-Lee theories, may also
predict vector modes \cite{Lightman1973, Rosen1974, Chatziioannou2012}. The
same is true in general for massive-graviton frameworks \cite{DeRham2014}.
Furthermore, less conventional theories might, in principle, predict the
existence of vector or scalar modes \emph{only}, while still possibly being in
agreement with all other non-GW tests of GR (see e.g.~\cite{Mead2015}, for an
unconventional example).
\subsection{Antenna patterns} \label{sec:aps}
Because different polarizations have geometrically distinct effects, as
illustrated in \fig{circles}, GW detectors will react differently to each mode.
The {\em strain} produced by a GW metric perturbation $h_{ab}$ on certain
detector $I$ spatially located at ${\bf x}_I$, is given by
\begin{equation}
h_I(t) = D^{ab}_{I} h_{ab}(t, {\bf x}_I) = h_A(t, {\bf x}_I) D^{ab}_I e^A_{ab}.
\end{equation}
The detector tensor, $D^{ab}$, encodes the geometry of the instrument and the
measurement it makes; for differential-arm detectors (sometimes called {\em
quadrupolar antennas}, because of the symmetries of their angular response
functions, cf.\ \fig{aps}), like LIGO and Virgo, this is
\begin{equation}
D^{ab} = \frac{1}{2}\left( d_x^{~a} d_x^{~b} - d_y^{~a} d_y^{~b} \right),
\end{equation}
where ${\bf d}_x$ and ${\bf d}_y$ are spatial unit vectors along the detector
arms (with common origin at the vertex ${\bf x}_I$). Although $D^{ab}$ is
technically also a function of time due to the motion of Earth with respect to
the fixed stars, in practice it can be taken as constant when treating
short-lived CBC signals, as is done here.
The $h_A(t)$'s are determined by a nontrivial combination of the source dynamics,
the details of the matter-gravity coupling, and the vacuum structure of the
theory. However, the response ({\em antenna pattern}) of detector $I$ to
polarization $A$,
\begin{equation} \label{eq:response}
F^A \equiv D^{ab}_I e^A_{ab}\, ,
\end{equation}
depends {\em only} on the local geometry of the gravitational wave and the
detector, irrespective of the properties of the source. This decoupling makes
the antenna patterns a unique resource for studying GW polarizations directly.
The response functions, \eq{response}, encode the effect of a linearly
$A$-polarized GW with unit amplitude, $h_A=1$. Ground-based GW detectors, like
LIGO and Virgo are quadrupolar antennas that perform low-noise measurements of
the strain associated with the differential motion of two orthogonal arms.
Their detector response functions can thus be written as \cite{Nishizawa2009,
Blaut2012, Isi2015, Poisson2014}:
\begin{equation} \label{eq:Fp}
F_{+} = \frac{1}{2} \left[ ({\bf w}_x \cdot {\bf d}_x)^2-({\bf w}_x \cdot
{\bf d}_y)^2 - ({\bf w}_y \cdot {\bf d}_x)^2+({\bf w}_y \cdot {\bf d}_y)^2
\right],
\end{equation}
\begin{equation} \label{eq:Fc}
F_{\times}=({\bf w}_x \cdot {\bf d}_x) ({\bf w}_y \cdot {\bf d}_x)-({\bf w}_x
\cdot {\bf d}_y) ({\bf w}_y \cdot {\bf d}_y),
\end{equation}
\begin{equation} \label{eq:Fx}
F_{\rm x}= ({\bf w}_x \cdot {\bf d}_x) ({\bf w}_z \cdot {\bf d}_x)- ({\bf w}_x
\cdot {\bf d}_y) ({\bf w}_z \cdot {\bf d}_y),
\end{equation}
\begin{equation} \label{eq:Fy}
F_{\rm y}= ({\bf w}_y \cdot {\bf d}_x) ({\bf w}_z \cdot {\bf d}_x)- ({\bf w}_y
\cdot {\bf d}_y) ({\bf w}_z \cdot {\bf d}_y),
\end{equation}
\begin{equation} \label{eq:Fb}
F_{\rm b}= \frac{1}{2} \left[ ({\bf w}_x \cdot {\bf d}_x)^2-({\bf w}_x \cdot
{\bf d}_y)^2+({\bf w}_y \cdot {\bf d}_x)^2-({\bf w}_y \cdot {\bf
d}_y)^2\right],
\end{equation}
\begin{equation} \label{eq:Fl}
F_{\rm l}=\frac{1}{2}\left[ ({\bf w}_z \cdot {\bf d}_x)^2- ({\bf w}_z
\cdot {\bf d}_y)^2 \right].
\end{equation}
Here, as before, the spatial vectors ${\bf d}_x$, ${\bf d}_y$ have unit norm
and point along the detector arms such that ${\bf d}_z={\bf d}_x\times{\bf
d}_y$ is the local zenith; the direction of propagation of the wave from a
source at known sky location (specified by right ascension $\alpha$, and
declination $\delta$) is given by ${\bf w}_z$, and ${\bf w}_x$, ${\bf w}_y$ are
such that ${\bf w}_z={\bf w}_x\times{\bf w}_y$.
We choose ${\bf w}_x$ to lie along the intersection of the equatorial plane of
the source with the plane of the sky, and let the angle between ${\bf w}_y$ and
the celestial north be $\psi$, the {\em polarization angle}.
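As an illustrative sketch (ours, not part of the original text), the six dot-product responses above can be evaluated directly from the wave-frame and detector-frame vectors; for a wave arriving from the local zenith with the wave frame aligned with the arms, only $F_+$ survives:

```python
import numpy as np

def antenna_patterns(wx, wy, wz, dx, dy):
    """Responses of a quadrupolar detector in the dot-product form above."""
    F_plus = 0.5 * ((wx @ dx)**2 - (wx @ dy)**2 - (wy @ dx)**2 + (wy @ dy)**2)
    F_cross = (wx @ dx) * (wy @ dx) - (wx @ dy) * (wy @ dy)
    F_x = (wx @ dx) * (wz @ dx) - (wx @ dy) * (wz @ dy)
    F_y = (wy @ dx) * (wz @ dx) - (wy @ dy) * (wz @ dy)
    F_b = 0.5 * ((wx @ dx)**2 - (wx @ dy)**2 + (wy @ dx)**2 - (wy @ dy)**2)
    F_l = 0.5 * ((wz @ dx)**2 - (wz @ dy)**2)
    return F_plus, F_cross, F_x, F_y, F_b, F_l

# Wave from the local zenith, wave frame aligned with the detector arms:
dx, dy = np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.0])
Fp, Fc, Fx, Fy, Fb, Fl = antenna_patterns(dx, dy, np.array([0.0, 0.0, 1.0]),
                                          dx, dy)
# F_+ = 1 (maximal response); all vector and scalar responses vanish.
```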
\begin{figure}
\subfloat[][Plus (+)]{\includegraphics[width=0.33\columnwidth]{ap_p}}\hspace{1cm}
\subfloat[][Cross ($\times$)]{\includegraphics[width=0.33\columnwidth]{ap_c}}\\
\subfloat[][Vector-x (x)]{\includegraphics[width=0.33\columnwidth]{ap_x}}\hspace{1cm}
\subfloat[][Vector-y (y)]{\includegraphics[width=0.33\columnwidth]{ap_y}}\\
\subfloat[][Scalar (s)]{\includegraphics[width=0.33\columnwidth]{ap_b}}
\caption{{\em Angular response of a quadrupolar detector to each GW
polarization}. The radial distance represents the response of a single
quadrupolar antenna to a unit-amplitude gravitational signal of a tensor (top),
vector (middle), or scalar (bottom) polarization, i.e.\ $|F_A|$ for each
polarization $A$ as given by Eqs.\ (\ref{eq:Fp_ifo}--\ref{eq:Fs_ifo}) for
$\psi=0$. The polar and azimuthal coordinates correspond to the source
location with respect to the detector, which is to be imagined as placed with
its vertex at the center of each plot and arms along the $x$ and $y$-axes. The
response is plotted to scale, such that the black lines representing the
detector arms have unit length in all plots. The response to breathing and
longitudinal modes is identical, so we only display it once and label it
``scalar''. (Reproduced from \cite{Isi2017}.)}
\label{fig:aps}
\end{figure}
Because of their symmetries, the breathing and longitudinal modes are fully
degenerate to networks of quadrupolar antennas (see e.g.\ Sec.\ VI of
\cite{Chatziioannou2012}). This means that no model-independent measurement
with such a network can possibly distinguish between the two, so it is enough
for us to consider just one of them explicitly; we will refer to the scalar
modes jointly by the subscript ``s''. (This degeneracy may not be present
for detectors with different geometries \cite{Lee2008, Chamberlin2012}.)
The response of a given differential-arm detector to signals of certain linear
polarization and direction of propagation can be written, in the local Lorentz
frame of the detector itself, as [see e.g.\ Eqs.\ (13.98) in
\cite{Poisson2014} with $\psi \rightarrow -\psi-\pi/2$, to account for the
different wave-frame definition]:
\begin{align} \label{eq:Fp_ifo}
F_+(\vartheta, \varphi, \psi) = &-\frac{1}{2}\left(1+\cos^2\vartheta \right)
\cos 2\varphi \cos2\psi \nonumber\\ &-\cos\vartheta \sin2\varphi \sin2\psi~,
\end{align}
\begin{align}
F_\times(\vartheta, \varphi, \psi) &= \frac{1}{2}\left(1+\cos^2\vartheta \right)
\cos 2\varphi \sin2\psi \nonumber \\ &-\cos \vartheta \sin 2\varphi \cos2\psi~,
\end{align}
\begin{align}
F_{\rm x}(\vartheta, \varphi, \psi) &= -\sin\vartheta \sin 2\varphi \cos\psi
\nonumber\\ &+\sin\vartheta \cos\vartheta\cos 2\varphi \sin\psi~,
\end{align}
\begin{align}
F_{\rm y}(\vartheta, \varphi, \psi) &= \sin\vartheta \sin 2\varphi \sin\psi
\nonumber\\ &+\sin\vartheta \cos\vartheta
\cos 2\varphi\cos\psi~,
\end{align}
\begin{equation} \label{eq:Fs_ifo}
F_{\rm b/l}(\vartheta, \varphi, \psi) = \mp \frac{1}{2} \sin^2\vartheta
\cos 2\varphi~,
\end{equation}
where $\vartheta$ and $\varphi$ are the polar and azimuthal coordinates of the
source with respect to the antenna at any given time (with detector arms along
the $x$ and $y$-axes). The tensor, vector and scalar nature of the
different polarizations is evident in this form, given how each mode
depends on $\psi$ (i.e.\ how it transforms under rotations around the
direction of propagation).
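These expressions can be checked numerically; the short Python sketch below (ours, with arbitrary angle values) verifies the quadrupolar symmetry under $\varphi \to \varphi + \pi$ and the scalar relation $F_{\rm b} = -F_{\rm l}$:

```python
import numpy as np

def F_plus(th, ph, psi):
    return (-0.5 * (1 + np.cos(th)**2) * np.cos(2*ph) * np.cos(2*psi)
            - np.cos(th) * np.sin(2*ph) * np.sin(2*psi))

def F_cross(th, ph, psi):
    return (0.5 * (1 + np.cos(th)**2) * np.cos(2*ph) * np.sin(2*psi)
            - np.cos(th) * np.sin(2*ph) * np.cos(2*psi))

def F_x(th, ph, psi):
    return (-np.sin(th) * np.sin(2*ph) * np.cos(psi)
            + np.sin(th) * np.cos(th) * np.cos(2*ph) * np.sin(psi))

def F_y(th, ph, psi):
    return (np.sin(th) * np.sin(2*ph) * np.sin(psi)
            + np.sin(th) * np.cos(th) * np.cos(2*ph) * np.cos(psi))

def F_b(th, ph, psi):
    return -0.5 * np.sin(th)**2 * np.cos(2*ph)

def F_l(th, ph, psi):
    return +0.5 * np.sin(th)**2 * np.cos(2*ph)

# Arbitrary source direction and polarization angle:
th, ph, psi = 0.7, 1.1, 0.3
# All responses are invariant under phi -> phi + pi (quadrupolar symmetry),
# and the two scalar responses differ only by a sign.
```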
Equations \eqref{eq:Fp_ifo}--\eqref{eq:Fs_ifo} are represented in \fig{aps} by
a spherical polar plot in which the radial coordinate corresponds to the
sensitivity given by the magnitude $|F_A|$, shown for $\psi=0$. The angular
response functions have quadrupolar symmetry around the detector's zenith,
regardless of the helicity of the polarization itself. This figure also makes it
clear that differential-arm detectors will generally be more sensitive to some
polarizations than others, although this will vary with the sky location of the
source. For example, for all but a few sky locations, quadrupolar antennas will
respond significantly less to a breathing signal than a plus or cross signal.
\fig{aps} shows the response of a single differential-arm detector to waves
coming from different directions in the local frame of the instrument. However,
we are usually interested in the sensitivity of a {\em network} of detectors,
and its ability to distinguish the different polarizations. To visualize this,
define the effective response to each of the helicities, for a given source
sky-location $(\alpha,\, \delta)$ and detector $I$:
\begin{equation}
|F_{\rm t}^I(\alpha, \delta)| \equiv \sqrt{F_+^I(\alpha, \delta)^2 +
F_\times^I(\alpha, \delta)^2}\, ,
\end{equation}
\begin{equation}
|F_{\rm v}^I(\alpha, \delta)| \equiv \sqrt{F_{\rm x}^I(\alpha, \delta)^2 + F_{\rm
y}^I(\alpha, \delta)^2}\, ,
\end{equation}
\begin{align}
|F_{\rm s}^I(\alpha, \delta)| &\equiv \sqrt{F_{\rm b}^I(\alpha, \delta)^2 + F_{\rm
l}^I(\alpha, \delta)^2} \\
&= \sqrt{2}\, |F^I_{\rm b}(\alpha, \delta)|\, , \nonumber
\end{align}
for tensor, vector and scalar waves respectively. (Here, since we are not
dealing with any specific source, we {\em define} our polarization frame
by letting $\psi=0$.) For a network of $N$ detectors, we may then construct an
effective response vector for each of the polarization sets above,
\begin{equation} \label{eq:ap_vector}
\vec{F}_H(\alpha, \delta) \equiv \left( |F_H^1(\alpha, \delta)|,\, \dots,
|F_H^N(\alpha, \delta)| \right),
\end{equation}
for $H \in \{\rm t,\, v,\, s\}$. Finally, we may compare the overall sensitivity of
the network to different polarizations by defining the {\em overlap}, as a
normalized inner product between two of these vectors.
For instance, to compare the effective scalar or vector network sensitivity to
the tensor one, we may look at the overlap factor:
\begin{equation} \label{eq:ap_overlap}
{\cal F}_{H/{\rm t}}(\alpha, \delta) = \frac{\vec{F}_H(\alpha, \delta) \cdot
\vec{F}_{\rm t}(\alpha, \delta)}{\vec{F}_{\rm t}(\alpha, \delta) \cdot
\vec{F}_{\rm t}(\alpha, \delta)},
\end{equation}
which will take values greater (less) than unity if the response to
polarizations $H$ is better (worse) than to tensor, with ${\cal F}_{\rm
t/t}(\alpha, \delta)=1$ by construction. The scalar and vector overlaps with
tensor are displayed for the LIGO-Virgo network in the skymap of
\fig{ap_skymap}, over a map of Earth for reference. Colored regions roughly
correspond to areas in the sky for which the tensor and nontensor responses of
the network are highly distinguishable. The patterns are anchored to angular
locations with respect to Earth (not the fixed stars), and are determined by the
specific location and orientation of the three detectors.
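To make this construction concrete, here is a small Python sketch (ours, using made-up response values for illustration) of the effective responses and the overlap factor of \eq{ap_overlap}:

```python
import numpy as np

def effective_responses(Fp, Fc, Fx, Fy, Fb, Fl):
    """Effective tensor, vector and scalar responses of a single detector."""
    return np.hypot(Fp, Fc), np.hypot(Fx, Fy), np.hypot(Fb, Fl)

def overlap(F_H, F_t):
    """Normalized inner product of two network response vectors."""
    F_H, F_t = np.asarray(F_H), np.asarray(F_t)
    return np.dot(F_H, F_t) / np.dot(F_t, F_t)

# Since F_l = -F_b for quadrupolar antennas, |F_s| = sqrt(2) |F_b|:
Ft1, Fv1, Fs1 = effective_responses(0.4, -0.2, 0.1, 0.05, 0.31, -0.31)

# Illustrative (made-up) effective responses |F_H^I| of a 3-detector network:
F_t = [0.8, 0.6, 0.5]   # tensor
F_s = [0.3, 0.2, 0.4]   # scalar
calF_st = overlap(F_s, F_t)  # < 1: weaker scalar than tensor response here
```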
Averaged over all sky locations, the response of the network is worse for
scalar signals than tensor ones, which is apparent from the top skymap in
\fig{ap_skymap} and the distribution in \fig{ap_histogram}. This is expected
given that each interferometer is individually less sensitive to scalar waves,
as seen in \fig{aps}. On average, there is no significant difference between
vector and tensor responses.
\begin{figure}
\subfloat[][Scalar]{\includegraphics[width=\columnwidth]{ap_norm-overlap_scalar}}\\
\subfloat[][Vector]{\includegraphics[width=\columnwidth]{ap_norm-overlap_vector}}\\
\caption{{\em Overlaps of LIGO-Virgo network effective antenna patterns.} The
normalized inner-products of \eq{ap_overlap} for the three-instrument network.
The top plot compares scalar to tensor (${\cal F}_{\rm s/t}$), and the bottom
one compares vector to tensor (${\cal F}_{\rm v/t}$). Blue (red) marks regions
for which the effective nontensor response is greater (less) than tensor. A map
of Earth is overlaid for reference.}
\label{fig:ap_skymap}
\end{figure}
\begin{figure}
\includegraphics[width=\columnwidth]{ap_norm-overlap_histogram}
\caption{{\em Distribution of LIGO-Virgo network antenna-pattern overlaps.}
Distribution over sky locations of the normalized inner-products of
\eq{ap_overlap} for the three-instrument network, comparing scalar to tensor
(${\cal F}_{\rm s/t}$) and vector to tensor (${\cal F}_{\rm v/t}$).}
\label{fig:ap_histogram}
\end{figure}
\section{Method} \label{sec:method}
Ideally, we would like to unequivocally measure the polarizations of the GW
that produced a given transient strain signal in our detector network.
Formally, this would mean finding which of the seven possible Bayesian
hypotheses the data favor: pure tensor ($\hyp{t}$), pure vector ($\hyp{v}$),
pure scalar ($\hyp{s}$), scalar-tensor ($\hyp{st}$), vector-tensor
($\hyp{tv}$), scalar-vector ($\hyp{sv}$), or scalar-vector-tensor
($\hyp{stv}$). A comprehensive Bayesian treatment of this polarization
model-selection problem was presented in \cite{Isi2017} for the case of
continuous signals from known pulsars, later applied to stochastic GW
backgrounds in \cite{Callister2017}, and could easily be adapted to the case
of transient signals considered here.
Yet, a simple counting argument is enough to show that three detectors are not
sufficient to break {\em all} degeneracies between the five distinguishable GW
polarizations using transient signals \cite{Chatziioannou2012, tegp}.
Therefore, with the current LIGO-Virgo network, we expect the results of an
all-encompassing model-selection analysis, as discussed above, to be
inconclusive or dominated by priors. Nevertheless, we may still attempt to
distinguish between {\em some} of the possible hypotheses.
As mentioned in the introduction, {\em all LIGO-only observations so far are
consistent with the extreme scenario of GWs being composed of purely vector or
purely scalar polarizations}.
Therefore, here we will focus on the problem of directly distinguishing between
these theoretically far-fetched, yet phenomenologically valid, possibilities.
That is, we will study our ability to choose between $\hyp{s}$ vs $\hyp{t}$,
and between $\hyp{v}$ vs $\hyp{t}$. Importantly, this is qualitatively distinct
from the more standard question about the presence of small nontensorial
components in addition to the tensor wave predicted by GR. Although perhaps not
as interesting as these ``mixed'' polarization studies (which, as explained
above, will not fully succeed with current detectors), the problem of
distinguishing between the ``pure'' polarization cases is well-defined and
experimentally valuable.
We would like to ask the question: {\em is it geometrically possible that a
\underline{given} strain signal observed in the LIGO-Virgo network was produced
by a GW with polarization other than GR's tensor $+$ and $\times$?}
The only way for us to answer this question is to probe the
antenna patterns of our instruments, \eq{response}, which are a direct
manifestation of local geometry only (polarizations and detector geometry),
independent of source or the details of the underlying theory (see Sec.\
\ref{sec:aps}). We may thus exploit the difference in the response of the
network to the different polarizations (\fig{ap_skymap}).
One way to extract polarization information using the antenna patterns would be
to construct linear combinations of the detector outputs that are guaranteed to
contain no tensorial signal \cite{Chatziioannou2012}. If coherent power (as
seen by, e.g.\ a wavelet analysis) remains in such a {\em null-stream}, then
that signal could not have been produced by a tensor (GR) wave. This approach
has the strong advantage that it requires no knowledge of the spectral features
of the signal whatsoever. However, to construct null-streams one needs to very
accurately know the location of the source {\em a priori}, which is never the
case without an electromagnetic counterpart (or more detectors).
Alternatively, one could carry out a morphology-independent sine-Gaussian
analysis (e.g.~using \texttt{BayesWave} \cite{Cornish2014, Littenberg2016}) to
reconstruct the best-fit unmodeled waveform from the data, and use that to
extract information about times of arrival, phase offsets and relative
amplitudes at different detectors. One could then just replace the tensor
antenna patterns used in the signal reconstruction by their scalar or vector
counterparts, and see how well each case fits the data (as measured by a Bayes
factor). In such a test, {\em no polarization information is extracted from the
phase evolution}. In particular, the waveform reconstruction is only used to
infer the source location from the time lag between detectors, and the
best-fitting combination of antenna patterns from the amplitudes and phases at
peak energy. (See pedagogical example in Sec.\ \ref{sec:example} below.) An
analysis like this was implemented for scalar modes and applied to the GW150914
signal, yielding no conclusive results as mentioned above \cite{gw150914_tgr}.
However, all signals observed by LIGO so far are exceptionally well described
by GR CBC waveforms \cite{o1bbh, gw150914_tgr, gw151226, gw170104}. This match
is established on a case-by-case basis through comparisons between the GR
templates and morphology-independent burst reconstructions of the signal in the
data, and is largely independent of the polarization. In fact, for any of these
confident detections, the waveform reconstructed from burst analyses is
effectively identical to a GR template. As emphasized above, in the
pure-polarization test ($\hyp{s}$ vs $\hyp{t}$, or $\hyp{v}$ vs $\hyp{t}$) all
that matters is that most of the signal power is captured by the template,
regardless of small potential mismatches in the phasing. Therefore, we may
carry out the same study proposed in the previous paragraph using GR waveforms
to fit the data, while replacing the tensor antenna patterns with those of
different polarizations.
In other words, when the signal is clearly well-captured by a GR template, we
may use that directly to extract polarization information from the antenna
patterns in a model independent way, without implicitly assuming that the GW
that caused it was tensor polarized as GR predicts. The waveform reconstruction
will be dominated by the measurement at the most sensitive detector, while the
amplitude information is encoded in the relations between measurements by
different detectors.
Whether we use GR templates or a collection of sine-Gaussians to reconstruct
the waveform, the effect of changing the antenna patterns will always result in
a different inferred sky location and orientation for the source. Yet, not all
antenna patterns will be equally consistent with the observed relative
amplitudes, phase offsets and delays between the signals in our three
detectors---this will result in a poorer signal likelihood, and hence odds
favoring tensor vs nontensor. Precisely because the waveform used to capture
the signal is the same, we know that any difference between the tensor and
nontensor results {\em must} come from the antenna patterns (polarizations).
{\em This approach does not extract any information from the specific phase
evolution of the signal}, and is insensitive to small changes in the waveform.
Therefore, using a GR template to measure the signal power is justified, and does
not imply a contradiction when testing for nontensorial polarizations. For
the purpose of this study, the CBC signal is just probing the impulse response
function of our network, and the same results would be obtained if the waveform
were just a delta function rather than a chirp.
\subsection{Toy example} \label{sec:example}
For concreteness, consider the example of an elliptically-polarized,
two-component GW (e.g.\ two tensor modes, or two vector modes) with waveform
roughly described by a simple sine-Gaussian wavepacket, with some
characteristic frequency $\Omega$ and relaxation time $\tau$. Letting $t$ be
the time measured at Earth's center, then the strain measured by a given
detector $I$ will be:
\begin{equation} \label{eq:toy_h}
h_I(t) = \Re\left[ A \left( F_{1}^I + i \epsilon F_{2}^I \right)
e^{i\Omega (t-t_0-\delta t_I)} \right] e^{-(t-t_0-\delta t_I)^2/\tau^2},
\end{equation}
where $F_1^I$ and $F_2^I$ are the responses of detector $I$ to the two
polarizations, $A\equiv|A|e^{i\phi_0}$ is a complex-valued amplitude,
$\epsilon$ is an ellipticity parameter controlling the relative amounts of each
polarization, and $\Re$ denotes the real part. Also, $t_0$ marks the time of
arrival at Earth's center, which is delayed with respect to each interferometer
by
\begin{equation}
\delta t_I = \hat{{\bf n}} \cdot {\bf x}_I / c\, ,
\end{equation}
where $\hat{{\bf n}}$ is a unit vector from Earth to the source, and ${\bf
x}_I$ joins Earth's center to the detector (with magnitude equal to Earth's
radius). Here we are assuming that the GW travels at the speed of light, $c$.
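For illustration (our own sketch; the constants are approximate), the maximal delay implied by this formula, one Earth light-radius, is about 21 ms:

```python
import numpy as np

C = 299792458.0    # speed of light, m/s
R_EARTH = 6.371e6  # mean Earth radius, m (approximate)

def time_delay(n_hat, x_I):
    """Arrival-time delay of detector I relative to Earth's center."""
    return np.dot(n_hat, x_I) / C

# Detector located directly along the line of sight to the source:
dt = time_delay(np.array([0.0, 0.0, 1.0]), np.array([0.0, 0.0, R_EARTH]))
```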
The signal of \eq{toy_h} may be written more simply as
\begin{equation}
h_I(t) = {\cal A}_I \cos [\Omega (t - \Delta t_I) + \Phi_I] \, e^{-(t-\Delta
t_I)^2/\tau^2}\, ,
\end{equation}
after defining the three main observables at each detector:
\begin{equation}
{\cal A}_I \equiv |A| \left| F_{1}^I + i \epsilon F_{2}^I \right|\, ,
\end{equation}
\begin{equation}
\Phi_I \equiv \phi_0 + \arctan (\epsilon F_2^I/F^I_1)\, ,
\end{equation}
\begin{equation}
\Delta t_I \equiv t_0 + \delta t_I\, .
\end{equation}
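As a sanity check (our own sketch, with arbitrary parameter values), the three observables can be computed and compared against the explicit strain of \eq{toy_h} at its peak, where the Gaussian envelope equals one:

```python
import numpy as np

def observables(F1, F2, A_mag, phi0, eps, t0, dt_I):
    """Peak amplitude, phase and time at detector I (valid for F1 > 0)."""
    calA = A_mag * abs(F1 + 1j * eps * F2)
    Phi = phi0 + np.arctan(eps * F2 / F1)
    return calA, Phi, t0 + dt_I

# Arbitrary illustrative values for one detector:
F1, F2, A_mag, phi0, eps, t0, dt = 0.7, -0.4, 2.0, 0.5, 0.3, 10.0, 0.01
calA, Phi, Delta_t = observables(F1, F2, A_mag, phi0, eps, t0, dt)

# Explicit strain; at t = Delta_t it should equal calA * cos(Phi).
Omega, tau = 2 * np.pi * 100.0, 0.05
def h(t):
    A = A_mag * np.exp(1j * phi0)
    return (np.real(A * (F1 + 1j * eps * F2) * np.exp(1j * Omega * (t - t0 - dt)))
            * np.exp(-((t - t0 - dt) ** 2) / tau ** 2))
```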
From the output of three detectors ($H$, $L$, $V$), we may implement a simple
inference analysis to extract these three numbers for the signal as seen by
each instrument. The times at peak amplitude provide the three $\Delta t_I$'s,
while measurements of the phase and amplitude at the peak itself give the
$\Phi_I$'s and ${\cal A}_I$'s, respectively. As always, recovery of all these parameters
will be negatively affected by instrumental noise.
The three timing measurements alone suffice to recover the sky location of the
source, $\hat{\bf n}$. With this knowledge, it is then possible to compute the
values of all the corresponding antenna response functions, and thus obtain
predictions for the $(F_1^I + i \epsilon F_2^I)$ factors for any given
ellipticity. Ratios of amplitudes and phase differences between detectors may
then be used to infer measured values for these quantities, and then find the
best fitting polarization model. This may be achieved, for instance, via a
maximum-likelihood analysis, effectively minimizing the distance between
vectors like those of \eq{ap_vector} and a similar one inferred from the
data. (Note $|A|$, $\phi_0$, and $\epsilon$ are nuisance parameters, and can be
marginalized over.)
Although for this example we used a simple sine-Gaussian wavepacket to measure the
signal, at no point did we make use of the specific details of its phase evolution.
The only requirement is that the GW have a well-defined peak, in order to extract
meaningful information about the relative timing, phase and amplitude of this
peak as seen by different detectors. In particular, this analysis would work
precisely the same way if a CBC-like chirp waveform were used, as long as most of
the power in the actual signal is indeed captured by such a template.
This toy analysis makes the dependence on ${\cal A}_I$, $\Phi_I$, $\Delta t_I$
explicit. In reality, when studying actual data, one would ideally implement a
full Bayesian analysis, marginalizing over all parameters to compute evidences
for the different polarization hypotheses ($\hyp{t}$, $\hyp{v}$, $\hyp{s}$),
and to produce the Bayes factors (likelihood ratios) of interest. This can be
achieved using a code like \texttt{LALInference} \cite{Veitch2015}. The
polarization information extracted by this more rigorous analysis would still,
nonetheless, effectively come from the values of ${\cal A}_I$, $\Phi_I$, $\Delta t_I$.
As emphasized before, this is the case whether one uses GR templates or a
collection of sine-Gaussians to capture the signal power.
\section{Conclusion}
By extracting polarization information from the antenna patterns we may
directly probe the geometry of the GW metric perturbation (i.e.~the directions
along which space is stretched and squeezed by the passing wave) from its
projection onto our detector network. With transient signals, instruments at five
or more different orientations would be needed to break all degeneracies
between the five independent (as seen by differential-arm detectors)
polarizations allowed by generic metric theories of gravity. However, we may
already distinguish between some of the possibilities using the current
LIGO-Virgo network. How well we can do this will depend on the specific
properties of each transient event (mainly, sky location).
The kind of geometric observational statement discussed in this note is
independent of any theory or source model, and is only possible with the
addition of Virgo to the network. Although here we focused on the problem of
distinguishing between ``pure'' polarization states (tensor, vector or
scalar), the case of ``mixed'' polarizations will be addressed in future work.
More details and a demonstration of the analysis proposed here on simulated
signals will be provided soon in an expanded version of this document.
\begin{acknowledgments}
LIGO was constructed by the California Institute of Technology and
Massachusetts Institute of Technology with funding from the National Science
Foundation and operates under cooperative agreement PHY-0757058.
This paper carries LIGO Document Number LIGO-P1700276{}.
\end{acknowledgments}
\section{Zero-Width Attack}\label{sec.zwspa}
In this section, we present the Zero-Width attack (ZeW).
We first introduce in Section~\ref{sub.motivations} the motivations and the intuition that drives our investigation. In Section~\ref{sub.tp} we describe how ZeW can affect different NLP pipelines. We conclude with Section~\ref{sub.counter} by describing a countermeasure to our proposed attack.
\subsection{Motivations}\label{sub.motivations}
Three main motivations guide our investigation.
\begin{enumerate}
\item \textit{UNICODE representation}. Most NLP tools allow the use of UNICODE characters. This is essential, especially for the analysis of web text. For example, on Social Networks, non-ASCII characters are often used (e.g., emoji).
\item \textit{Readability Preservation}. The attack strategy should apply fewer modifications as possible to maintain the sentence readability.
\item \textit{Indexing stage vulnerabilities}. To the best of our knowledge, most of the attack strategies aim to leverage ML-models' weaknesses, while little attention has been put to the security of other stages of the text ML pipeline, such as the indexing stage (see Section~\ref{sub.textpipeline}).
\end{enumerate}
We asked ourselves whether there exists a technique that allows us to relax the constraint on the number of modifications to malicious sentences, allowing us to focus only on the disruption of the target models' performance.
We found the answer in the \textit{steganography} discipline, which is the ``art of hiding secret messages into plain sources''~\cite{Bennett04linguisticsteganography}. In the UNICODE representation, there are characters whose width is zero, i.e., when printed, they are invisible, and human beings cannot perceive them.
Some examples of these characters are \textit{zero-width space} (U+200B) and \textit{zero-width non-joiner} (U+200C). These allow us to insert an arbitrary number of ``invisible'' characters in a given sentence.
Thanks to this particular property, we no longer need to consider the problem of readability preservation, since the sentence semantics are intact.
The presence of zero-width characters allows an attacker to affect the decision of the indexing stage (see Section~\ref{sub.tp}).
We identify in total 24 malicious characters\footnote{https://github.com/pajola/ZeW/blob/main/ZeW.py}.
In cybersecurity, we can find the usage of zero-width characters in different ways. For example, in~\cite{7346813} the authors use zero-width characters in the communication protocol of a botnet, ELISA; here, the botmaster secretly communicates through public posts with the zombies over social networks such as Facebook. In late 2018, the security team AVANAN discovered a phishing method against Office 365, bypassing Microsoft's security mechanisms~\cite{avanan}; in this attack, hackers used zero-width characters in the middle of malicious URLs, evading Microsoft's detection mechanisms. While in security zero-width characters are a known threat, to the best of our knowledge, we are the first to explore their effect in the adversarial machine learning context.
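A minimal Python demonstration of the idea (our own sketch; the zero-width space U+200B is one of the malicious characters mentioned above):

```python
ZWSP = "\u200b"  # zero-width space: printed with zero width, hence invisible

def inject(word, zw=ZWSP):
    """Interleave a zero-width character between the letters of a word."""
    return zw.join(word)

clean = "I hate this album"
poisoned = "I " + inject("hate") + " this album"
# The two strings render identically on screen, but the machine sees
# three extra characters inside the word "hate".
```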
\subsection{Theoretical Perspective}\label{sub.tp}
Zero-width characters give us the power to break the intra-relationship between the characters of a given sentence. From now on, we represent zero-width characters with the symbol ``\$''. Recall the example reported in Section~\ref{sec.intro}, ``I hate this album''; the malicious version ``I h\$a\$t\$e this album'' appears identical to the original sentence from a human point of view, while different from a machine perspective. Figure~\ref{fig:g_translate} presents a real example of zero-width characters. We can notice that the malicious sentence appears legitimate.
In Section~\ref{sub.textpipeline} we described possible numerical representations of a given sentence (\textit{indexing} stage). We now explain how ZeW can affect those representations.
\begin{itemize}
\item \textit{Word-based representations}. In word-based representations, a sentence can be seen as a temporal vector $s = t_0 + t_1 + ... + t_n$, where $t_i$ is the token (i.e., word, punctuation symbol) at time $i$, and $n$ is the length of the tokenized sentence (see Figure~\ref{fig:text_pipeline}, \textit{preprocessing} stage). Here, it is unlikely that words containing ``\$'' are present in the vocabulary $V$.
Two possible scenarios can occur.
\begin{itemize}
\item Unrecognized words are mapped to special tokens (e.g., placeholders, ``UNK''). It is likely that unpoisoned words and ``UNK'' have different meanings and effects on target models, since they have different representations. For example, the sentence ``I h\$a\$t\$e this album'' is represented as ``[I, UNK, this, album]''.
\item Unrecognized words are discarded from the analysis, with a consequent loss of expressiveness of the malicious sentence. For example, the sentence ``I h\$a\$t\$e this album'' is represented as ``[I, this, album]'' (the word ``h\$a\$t\$e'' is discarded). In this case, the target model analyzes only the remaining sentence. Potentially, by adding one zero-width character per token, the resulting sentence will be empty.
\end{itemize}
\item \textit{Character-based representations}. In char-based representations, a sentence can be seen as a temporal vector $s = t_0 + t_1 + ... + t_n$, where $t_i$ is the character at position $i$, and $n$ is the total number of characters that compose the sentence. As in the previous case, two possible scenarios can occur.
\begin{itemize}
\item Unrecognized characters are mapped to the special tokens (e.g., placeholders, ``UNK''), resulting in an addition of noise in the vectorial representation. For example, the word ``h\$a\$t\$e'' is represented as ``[h, UNK, a, UNK, t, UNK, e]''.
\item Unrecognized characters are discarded from the analysis. In this case, the poisoned sentence coincides with the original sentence. The attack has no effect in this scenario. For example, the word ``h\$a\$t\$e'' is correctly represented as ``[h, a, t, e]''.
\end{itemize}
\end{itemize}
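The two word-level scenarios above can be sketched as follows (our own toy whitespace tokenizer and vocabulary):

```python
def word_index(sentence, vocab, mode="unk"):
    """Word-level indexing: map OOV tokens to 'UNK' or discard them."""
    tokens = sentence.split()
    if mode == "unk":
        return [t if t in vocab else "UNK" for t in tokens]
    return [t for t in tokens if t in vocab]  # mode == "discard"

vocab = {"I", "hate", "this", "album"}
poisoned = "I h\u200ba\u200bt\u200be this album"  # '$' = U+200B here
word_index(poisoned, vocab, "unk")      # ['I', 'UNK', 'this', 'album']
word_index(poisoned, vocab, "discard")  # ['I', 'this', 'album']
```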
In general, ZeW leads to an increase in noise or a reduction of information in the sentence representation. The attack can be seen as an \textit{injection} attack, where malicious characters are injected into target sentences. Potentially, an attacker can insert an arbitrary number of ``\$'' characters into malicious sentences, without any constraint. This gives us the capability of disregarding the perturbation measurement described in Section~\ref{sub.text_aml}.
Injection strategies using ZeW characters can be further optimized against target ML-models, resulting in the following adversarial attacks:
\begin{itemize}
\item \textit{Evasion}. ZeW characters can be optimally inserted in target sentences to affect ML models' decisions.
\item \textit{Poisoning}. If the adversary has access to the training data, the addition of malicious samples could lead to a noisy dataset, decreasing the overall performance.
\item \textit{Trojan}. If the adversary has access to the training data, he/she can inject a rare sequence of zero-width characters in a small portion of the dataset and let the model overfit over them. At test time, the trojan is triggered by samples containing that specific sequence.
\end{itemize}
The definition of such adversarial attacks is out of the scope of the paper.
\subsection{Countermeasure}\label{sub.counter}
Overall, ZeW is an injection attack that influences the indexing stage (see Section~\ref{sub.textpipeline}), with consequences in the following steps (i.e., machine learning algorithms).
ZeW leverages peculiar properties of UNICODE representation, which contains non-printable characters. In the security field, injection attacks are a standard and well-known problem~\cite{10.1145/1111037.1111070}.
A typical example is the SQL injection, where maliciously crafted input can damage the target database's structure and destroy its contents. Injections can be severe, especially when users are allowed to insert arbitrary input used for critical operations.
Similarly, MLaaS platforms let users interact with ML models through APIs. It is thus essential to have mechanisms that control any input feeding the models, placed at the \textit{preprocessing stage}; these are also called sanitization or input validation mechanisms. Regarding ZeW, a simple filter that rejects sentences containing non-printable characters is enough. Alternatively, the sanitizer can simply discard the malicious characters.
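A sanitizer along these lines can be sketched in Python; here we drop every code point in the Unicode format-control category (Cf), which contains the zero-width characters. (Note that Cf also includes some legitimate characters, e.g.\ joiners used in emoji sequences, so a deployment may prefer an explicit blocklist.)

```python
import unicodedata

def sanitize(text):
    """Remove format-control code points (Unicode category Cf)."""
    return "".join(ch for ch in text if unicodedata.category(ch) != "Cf")

sanitize("I h\u200ba\u200bt\u200be this album")  # 'I hate this album'
```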
\section{Background and Preliminaries}\label{sec.prel}
This section presents the preliminary concepts required for the rest of the paper. Section~\ref{sub.textpipeline} presents an overview of the standard pipeline in Natural Language Processing (NLP) applications, followed by an introduction to the adversarial machine learning theory in Section~\ref{sub.aml}, and, finally, Section~\ref{sub.text_aml} describes keys and challenges of adversarial machine learning in the text-domain.
\subsection{Text Pipeline}\label{sub.textpipeline}
ML-based applications on text-domain follow a common pipeline, as described in~\cite{10.1145/3374217, Korde}, and shown in Figure \ref{fig:text_pipeline}.
\begin{figure}[!ht]
\centering
\includegraphics[width=1.0\linewidth]{Figures/NLP_pipeline.pdf}
\caption{Machine Learning pipeline in Natural Language Processing.}
\label{fig:text_pipeline}
\end{figure}
The pipeline consists of the following components: original documents, preprocessing, indexing, and machine learning model.
\paragraph{\textit{Original Documents}}Collection of the corpus of documents to analyze. The origin of these documents can differ, such as text files, PDFs, or HTML web pages.
\paragraph{\textit{Preprocessing}}Set of mechanisms that prettify documents, with the removal of useless information (e.g., TAG, format controls). This stage can involve different techniques, such as \textit{tokenization}, where sentences are decomposed in lists of words, \textit{stopword removal}, where common words (and meaningless) are removed (e.g., articles), and \textit{stemming}, where words are converted in their root form (e.g., books $\xrightarrow{}$ book).
\paragraph{\textit{Indexing}}The mechanism that converts the symbolic representation of a document/sentence into a numerical vector. At training time, a vocabulary $V$ of the possible representations (word/character level) is defined. The vectorial representation is usually handled in three possible ways:
\begin{itemize}
\item \textit{Word-count encoding}. Each document is represented as a vector of word occurrences. For example, given the sentences $s_1=$ ``hello there'' and $s_2=$ ``hello hello'' and a vocabulary $V=[hello, there]$, the sentences are represented as
\begin{align*}
&s_1=[1, 1],\\&s_2=[2,0],
\end{align*}
where the numbers represent the number of occurrences of the corresponding index in the vocabulary (i.e., ``hello'' in position 0, ``there'' in position 1). A variant of the word count uses the Term Frequency-Inverse Document Frequency (TF-IDF); this encoding tries to capture the importance of a word in a document given a collection of documents.
\item \textit{One-hot encoding}. This encoding represents a document as a list of vectors (one per word/char in the document). Given the previous example, the sentences are represented as
\begin{align*}
&s_1=[[1, 0], [0,1]],\\&s_2=[[1, 0], [1,0]].
\end{align*}
\item \textit{Dense encoding}. In this category, we find word embeddings, powerful vectorial representations of words~\cite{mikolov2013efficient, 10.5555/2999792.2999959}. Here, each word is represented as a vector of real numbers (abstract representation). Dense representations can be pre-trained or trained end-to-end (e.g., using Language Models).
For example, given the words ``dog'', ``cat'', and ``hello'', a dense representation places ``dog'' spatially closer to ``cat'' than to ``hello''.
\end{itemize}
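The word-count and one-hot examples above can be reproduced in a few lines of plain Python:

```python
V = ["hello", "there"]  # vocabulary fixed at training time

def word_count(sentence: str) -> list:
    """Word-count encoding: one occurrence counter per vocabulary entry."""
    tokens = sentence.split()
    return [tokens.count(word) for word in V]

def one_hot(sentence: str) -> list:
    """One-hot encoding: one indicator vector per token in the document."""
    return [[1 if word == token else 0 for word in V]
            for token in sentence.split()]

assert word_count("hello there") == [1, 1]
assert word_count("hello hello") == [2, 0]
assert one_hot("hello there") == [[1, 0], [0, 1]]
assert one_hot("hello hello") == [[1, 0], [1, 0]]
```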
During the indexing phase, the pipeline needs to deal with unrecognized items (at word/character level), i.e., items out-of-vocabulary (OOV). There are two possible ways to handle them: i) discarding them from the analysis or ii) mapping them to a special token ``UNK''. In the latter case, the special token has its own representation, based on the indexing strategy (i.e., word-count encoding, one-hot encoding, dense encoding).
The problem of unrecognized items is well-established in NLP, where the frequency of items in a dataset usually follows long-tail distributions. To reduce the complexity of the problem, the standard approach is to maintain small vocabularies with the most frequent items~\cite{chen-etal-2019-large}. This can be a problem in Neural Machine Translation tasks, where all of the OOVs are mapped to a single token ``UNK''. For example, consider the translation task from English to a target language of the sentence ``Liam meets Noel''. Likely, both proper names are not present in the vocabulary, and thus they are mapped to the same token (i.e., ``UNK meets UNK''), losing the name information in the target language. A standard approach, proposed in~\cite{luong-etal-2015-addressing}, consists of using placeholders to map rare items to unique pointers (e.g., ``UNK1 meets UNK2'', where UNK1 = Liam and UNK2 = Noel), with a final name replacement in post-processing.
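The placeholder strategy of~\cite{luong-etal-2015-addressing} can be sketched as follows; the toy vocabulary and the copy-back table are illustrative assumptions.

```python
VOCAB = {"meets"}  # toy in-vocabulary set

def map_oov(tokens):
    """Replace each out-of-vocabulary token with a unique numbered
    placeholder, remembering the mapping for post-processing."""
    mapping, out = {}, []
    for tok in tokens:
        if tok in VOCAB:
            out.append(tok)
        else:
            placeholder = "UNK%d" % (len(mapping) + 1)
            mapping[placeholder] = tok
            out.append(placeholder)
    return out, mapping

masked, table = map_oov(["Liam", "meets", "Noel"])
# masked == ['UNK1', 'meets', 'UNK2']; table maps the names back afterwards
```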
\paragraph{\textit{Machine Learning Model}}The ML model used for the task. The models vary from simple architectures (e.g., Logistic Regression, Random Forest) to Neural Networks (NN) and Deep Neural Networks (DNN). In the latter case, we find variants of Recurrent Neural Networks (RNN) such as Long Short-Term Memory (LSTM)~\cite{doi:10.1162/neco.1997.9.8.1735} or Gated Recurrent Units (GRU)~\cite{chung2014empirical}.
ZeW aims to disrupt the pipeline by targeting the indexing stage, thereby degrading the ML model's performance.
\subsection{Adversarial Machine Learning \& Application Security}\label{sub.aml}
Adversarial Machine Learning (AML) is the discipline that studies the security of ML algorithms~\cite{Huang:2011:AML:2046684.2046692, Biggio_2013}.
In the literature, we can find several classes of attacks.
For example, in the model \textit{evasion attack}, the adversary crafts an \textit{adversarial example} with the aim of affecting the target ML predictions~\cite{Biggio_2013, goodfellow2014explaining}. Formally, given a target model $\mathcal{F}$ and an input sample $x$, the adversary aims to find a small \textit{perturbation} $\epsilon$ such that $\mathcal{F}(x) = y_i$ and $\mathcal{F}(x + \epsilon) = y_j$, where $y_i \neq y_j$.
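As a toy numerical illustration of this definition (the classifier and the perturbation values are invented for the example): a one-dimensional threshold model flips its prediction once the perturbation crosses the decision boundary.

```python
def F(x: float) -> str:
    """Toy classifier: a single decision threshold at 0.5."""
    return "positive" if x >= 0.5 else "negative"

x = 0.45     # original sample, classified as negative
eps = 0.1    # small adversarial perturbation

assert F(x) == "negative"
assert F(x + eps) == "positive"   # F(x) != F(x + eps): evasion succeeded
```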
Another popular attack is the \textit{poisoning attack}: if the attacker has access to the training data, he/she can inject malicious samples that degrade the model performance~\cite{10.5555/3042573.3042761}.
A variant of this attack is called \textit{trojan/backdoor attack}, where the attacker does not influence the model performance, but instead creates a backdoor in the model. At test time, the attacker triggers this backdoor, with results similar to the evasion \cite{gu2017badnets, 10.1145/3394486.3403064}.
Nevertheless, ML applications contain not only ML algorithms but also additional steps such as preprocessing and transformations; for example, in Section~\ref{sub.textpipeline} we described an overview of standard text pipelines.
Therefore, the concept of \textit{application security} must cover each component that defines such pipelines.
Despite its importance, not much attention has been given to this broader area of application security.
Examples of software security-related vulnerabilities are presented in~\cite{8424643}, where the authors disclose a set of attacks (e.g., denial of service) related to popular ML frameworks as Caffe and Tensorflow.
Similarly, in~\cite{236348} the authors introduce the \textit{Camouflage Attack}, where malicious images change their semantic meaning after being scaled. Such an attack can be exploited in computer vision (CV) applications, where image-scaling algorithms are usually employed upstream of CV pipelines.
We remark that adversarial machine learning techniques aim at machine learning models, while our proposed attack ZeW exploits vulnerabilities of \textit{preprocessing} and \textit{indexing engine} algorithms of text pipelines (see Section~\ref{sub.textpipeline}).
\subsection{Challenges of adversaries in Text-Domain}\label{sub.text_aml}
While AML gained popularity in Computer Vision (CV) from its early stages, only in recent years have researchers moved into the NLP domain. As identified by~\cite{10.1145/3374217}, three major aspects differentiate AML in NLP from CV.
\begin{itemize}
\item \textit{Input Domain}. While images are defined in a continuous space (e.g., an RGB matrix), sentences are discrete and represented as lists of symbols. This implies that the nature of the perturbation we want to add changes. For example, in CV the perturbation is defined as a matrix of values summed to the original image. This is not possible in NLP, since there is no meaning in adding an integer to a word (e.g., ``dog'' + 1).
\item \textit{Human Perception}. From a human point of view, perturbations in CV are difficult to perceive, since the modifications are at the pixel level. Vice versa, in text, small changes are easily detectable by both human beings and machines (e.g., spell checkers).
\item \textit{Semantic}. From the semantic point of view, the addition of a perturbation to an image rarely changes its meaning. In NLP, the modification/addition/removal of a character/word may lead to a completely different meaning of the sentence (e.g., ``I hate you'', ``I ate you'').
\end{itemize}
As a consequence, state-of-the-art attacks on NLP are either CV algorithms adapted to face NLP challenges, or novel solutions designed from scratch.
In this work, we are mainly interested in the \textit{evasion attack}. As previously introduced, our goal is to define a perturbation that influences the target model while preserving the semantics and readability of the sentence. A small amount of perturbation can preserve correct human perception; for example, in~\cite{leet}, the authors show human resistance to leet speak, e.g., ``R34D1NG W0RD5 W1TH NUMB3R5''.
The choice of the measurement is not as trivial as in CV, where spatial distance metrics between the original sample $x$ and the malicious $x'$ are used. As stated in~\cite{10.1145/3374217}, we can measure the perturbation in different ways, such as norm-based distances for dense representations, or edit-based measurements, which count the number of changes required to turn $x'$ into $x$.
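As an example of an edit-based measurement, the classic Levenshtein dynamic program can be used; note that under this metric a zero-width injection costs exactly one edit, even though the rendered sentences look identical.

```python
def levenshtein(a: str, b: str) -> int:
    """Number of character insertions, deletions, and substitutions
    needed to turn string a into string b (row-wise dynamic program)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

assert levenshtein("hate", "ha\u200bte") == 1  # one invisible insertion
```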
\section{Case Study: Hate Speech Manipulation}\label{sec.cs}
In this section we introduce the designated case study (Section~\ref{sub.cs_over}), followed by the description of the injector algorithm (Section~\ref{sub.alg}).
\subsection{Overview}\label{sub.cs_over}
We test and evaluate ZeW on text Machine-Learning-as-a-Service offerings provided by popular companies such as Amazon, Google, IBM, and Microsoft. These services vary from sentiment analyzers to language translators. Our idea is to test some of the most popular ML-based text services to understand how many applications can be affected by the attack.
As a case study, we analyze \textit{hate speech manipulation}, a topic that has raised the interest of a broad community of researchers in recent years~\cite{waseem-hovy-2016-hateful, schmidt-wiegand-2017-survey}. Our goal is to understand how zero-width characters affect the outcomes of different MLaaS. We consider the attack successful if the injection of zero-width characters affects the performance of a target model in some way. We are also interested in understanding the magnitude of the vulnerability.
In our opinion, this is a likely scenario in which a malicious user aims to offend a target victim without being detected, since the problem of malicious interaction between users and Artificial Intelligence systems is well known. A famous example is the Microsoft chatbot Tay, which became hateful after a poisoning attack by a group of users~\cite{WOLF20171}.
\subsection{Manipulation algorithm}\label{sub.alg}
In this work, we aim to define a simple yet effective strategy using the ZeW attack. Simple and non-optimal attacks have been shown to be effective in~\cite{8835391}, where cybercriminals evade sexually-explicit-content detection with simple image transformations (e.g., random noise addition).
In our attack, we assume that hateful sentences contain a negative part-of-speech, as shown in Figure~\ref{fig:vader} for the \textit{Real} corpus. We thus want to understand how the performance of the tested MLaaS is affected when ``deleting'' such negative parts. To do so, we designed a simple injection strategy that, given a sentence, identifies negative words (i.e., words with negative polarity scores) and injects zero-width characters into them. In the experiment, we inject zero-width characters in two possible fashions.
\begin{enumerate}
\item \textit{Mask1}. Only one Zero-Width Space character is injected at a random position in the middle of the target word (e.g., $hate \longrightarrow ha\$te$).
\item \textit{Mask2}. Multiple Zero-Width Space characters are injected, one between each character (e.g., $hate \longrightarrow \$h\$a\$t\$e\$$).
\end{enumerate}
The idea behind these two strategies is to measure the impact of ZeW with different levels of injection. Algorithm~\ref{Alg.1} shows the overall attack strategy.
To identify negative words we use \textit{VaderSentiment}, a free sentiment-analysis tool available for Python~\cite{vader}. The code of the injector is available on GitHub\footnote{https://github.com/pajola/ZeW}.
\begin{algorithm}
\SetAlgoLined
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\Input{An original sentence $s$ and the type of injection mask $m$}
\Output{A poisoned sentence $s_{pois}$}
$tokens=Tokenizer(s)$\\
$N_{tok} = length(tokens)$\\
$i = 0$\\
$s_{pois} = []$\\
\While{$i < N_{tok}$}{
$t = tokens[i]$\\
$t_{stem} = Stem(t)$\\
$t_{sent} = Sentiment(t_{stem})$\\
\eIf{$t_{sent}\;is\;negative$}{
$t_{pois} = Injector(t, m)$\\
}{
$t_{pois} = t$\\
}
$s_{pois}.add(t_{pois})$\\
$i = i + 1$\\
}
$s_{pois} = Join(s_{pois})$
\caption{HS-Manipulation}
\label{Alg.1}
\end{algorithm}
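Algorithm~\ref{Alg.1} can be sketched in Python as follows; the toy lexicon below is a stand-in for the VaderSentiment polarity lookup used in our implementation, and, for reproducibility, Mask1 injects at the middle of the word rather than at a random position.

```python
NEGATIVE_LEXICON = {"hate", "kill", "ugly"}  # stand-in for polarity scores
ZWSP = "\u200b"  # ZERO WIDTH SPACE: rendered with zero width

def injector(word: str, mask: int) -> str:
    """Mask1: one zero-width char in the middle of the word;
    Mask2: zero-width chars around and between every character."""
    if mask == 1:
        mid = len(word) // 2
        return word[:mid] + ZWSP + word[mid:]
    return ZWSP + ZWSP.join(word) + ZWSP

def hs_manipulation(sentence: str, mask: int) -> str:
    """Poison only the tokens with negative polarity, keep the rest."""
    poisoned = []
    for token in sentence.split():             # Tokenizer
        if token.lower() in NEGATIVE_LEXICON:  # Stem + Sentiment, simplified
            poisoned.append(injector(token, mask))
        else:
            poisoned.append(token)
    return " ".join(poisoned)                  # Join
```

For instance, `hs_manipulation("I hate you", 1)` renders exactly as ``I hate you'' on screen while carrying a hidden character inside ``hate''.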
\section{Conclusions}\label{sec.end}
The migration of machine learning applications from research to commercial and industrial purposes increases the need for security mechanisms that guarantee their correct usage.
In this work, we presented a novel injection algorithm: the Zero-Width attack (ZeW). This attack injects non-printable UNICODE characters into malicious sentences, potentially disrupting the indexing stage of the ML application pipeline while maintaining the full readability of the text. This allows us to disregard the readability constraint, one of the major obstacles in the text adversarial machine learning field.
Our goal was twofold: i) understand how different pipelines respond to the ZeW attack, and ii) determine whether commercial applications are vulnerable to the ZeW attack.
In Section~\ref{sec.res_indoor} we showed that different implementations are vulnerable to the attack to different degrees, while character-based models show promising ``\textit{security by design}'' patterns. We then demonstrated the severity of the attack on commercial solutions (Section~\ref{sec.results}):
of 12 services developed by top IT companies such as Amazon, Google, IBM, and Microsoft, 11 show vulnerabilities. Among these 11, only 3 present good resistance to the attack, while the remaining 8 are heavily affected.
The simplicity of the attack allows it to spread to a broad population of malicious users and activities since no prior knowledge of machine learning theory is required.
Potentially, we can find several use cases for our attack beyond hate-speech manipulation. For example, consider web data mining techniques used for counter-terrorism~\cite{Thuraisingham}, where NLP technologies can help identify malicious content. In this scenario, terrorists could use ZeW to obfuscate the contents of their web pages, degrading the performance of the analyzer. Because of this, our simple but effective countermeasure based on input validation should be integrated into every real-life NLP tool.
The security of machine learning applications is strictly related to their input domain. Computer Vision has different challenges compared to Natural Language Processing, which in turn has different challenges compared to the signal domain. Moreover, the state-of-the-art mainly focuses on the security of machine learning models, forgetting that a machine learning application is composed of several stages, of which the ML model is only one.
In conclusion, we believe that novel malicious opportunities can be derived by exploiting vulnerabilities in different components of the ML pipeline, and one of these directions is leveraging multiple representations of text, such as the combined usage of ASCII and UNICODE.
\section{Introduction}\label{sec.intro}
Without any doubt, machine learning applications had considerable success in the 2010s, finding space in different areas, from the automotive industry with autonomous vehicles~\cite{Sallab:2017:2470-1173:70} to the biomedical sector with brain tumor segmentation~\cite{Havaei_2017}. \blfootnote{This paper appears in the proceedings of the 6th IEEE European Symposium on Security and Privacy (EuroS\&P) 2021.}
The popularity of machine learning (ML) received a boost thanks to the increase in machines' computational power, making ML easily accessible to researchers and industrial developers; limited computational power was one of the significant obstacles of previous decades.
Even though ML nowadays is accessible to developers, we can find three main limitations in the deployment of ML solutions: the lack of i) the amount of data required to train a robust model, ii) sufficient computational resources, and iii) machine-learning engineers with suitable expertise. For instance, consider the task of sentence translation: in 2016, Google presented a translator based on Long Short-Term Memory networks (8 layers for both the encoder and the decoder), trained over a parallel corpus\footnote{A parallel corpus is a dataset used for sequence-to-sequence tasks, where each sample has a source and a target. The goal of the model is to translate the source into the target.} of 26 million sentences (English-French)~\cite{wu2016googles}. Besides the difficulty of implementing the model's architecture (i.e., the choice of hyperparameters), it requires an enormous amount of resources: the training procedure involved 96 NVIDIA K80 GPUs and six days of computation.\par
The aftermath of such complexity is that real-world tasks are unlikely to be modeled with ML by companies or users without enough resources (i.e., computational power, data, ML engineers).
To overcome this issue, the principal IT organizations (e.g., Amazon, IBM, Google, Microsoft) started developing solutions for common complex tasks (e.g., text analysis, optical character recognition), called Machine-Learning-as-a-Service (MLaaS), where users pay for a certain number of queries.
In this way, for example, companies that need to analyze documents can use advanced and well-performing techniques at an affordable price, without caring about the complex training process.
MLaaS achieved considerable success: in 2019 this market was valued at 1.0 billion USD, with an estimated value of 8.48 billion USD by 2025~\cite{mordor}. \par
The rapid growth of ML in real case applications also attracted the security community.
Researchers started asking whether users can maliciously affect ML applications' decisions: this area is called \textit{Adversarial Machine Learning}~\cite{10.1145/1014052.1014066}. In particular, several proposed attacks show the feasibility of affecting ML algorithms at test time by adding small and unnoticeable perturbations to the input data~\cite{Biggio_2013, goodfellow2014explaining, papernot2016transferability}.
In this paper, we focus on the text domain, where the addition of a malicious perturbation translates into the modification of text through various techniques (e.g., misspellings, typos, word additions).
The primary constraint of attacks in the text domain is ``readability preservation'', i.e., a human being must still understand the meaning of the modified sentences.
The reasons behind the above limitation are easy to see: consider the sentiment classification of music reviews (i.e., positive/negative reviews), where the adversary's goal is to obtain positive classifications for negative sentences. The original sentence could be ``I hate this album'', and its malicious counterpart ``I hA.XYztXaeX this album''.
While the original sentence is expected to be classified as negative, its counterpart might not be; however, from a human point of view, the malicious sentence is not readable, and thus the adversarial sample loses its semantic meaning.
So far, several works have proposed adversarial techniques in the text domain, combining complex algorithms to find a trade-off between the effectiveness of the attack and the readability of the produced malicious sentences. For instance, in~\cite{samanta2017crafting}, the authors identify the words of a given sentence that are important for the target model, and replace them using sophisticated linguistic strategies.
In~\cite{8424632}, the authors propose \textit{DeepWordBug}, a process that first uses a scoring function to identify critical tokens for the target model and then applies character transformations to minimize the number of modifications. For an in-depth overview of state-of-the-art adversarial machine learning in the text domain, we suggest~\cite{10.1145/3374217}.
\paragraph{\textit{Contributions}}
Motivated by the common assumption of ``readability preservation'', we investigate a novel evasion technique that guarantees full readability and attack effectiveness.
Our technique, called ``\textit{Zero-Width}'' (ZeW), injects malicious UNICODE characters often used in text steganography strategies.
These characters are called \textit{zero-width spaces}: when printed, they have zero width, making them invisible from a human being's perspective.
In one of our attack scenarios, we attack the popular web application Google Translate\footnote{\url{https://translate.google.com}.} on the English-Italian task. Figure~\ref{fig:g_translate} shows an example of a wrong translation, where the original sentence ``I wanna kill you'' is translated as ``ti voglio bene'', which means ``I love you''. It is curious to notice that the input section counts 31 characters (Figure~\ref{fig:g_translate}, left side), while the sentence should contain only 16.
In contrast to the state-of-the-art, ZeW does not require any assumption about the target model, and the readability constraint is relaxed.
Moreover, most of the attack strategies proposed so far aim to leverage weaknesses of the learning strategies (e.g., model architectures); however, an ML application is composed of several stages (a pipeline), where the ML model is only one of them. In this work, we aim to attack and disrupt the ``indexing stage'' (see Section~\ref{sub.textpipeline}), which is the step that converts a sentence from its textual representation into a numerical one. To the best of our knowledge, little attention has been paid to finding weaknesses across the entire text pipeline.
\begin{figure*}
\centering
\includegraphics[width=1.0\linewidth]{Figures/google.png}
\caption{Zero-Width (ZeW) on a real-life scenario: Google Translate. The translated sentence means ``I love you''.}
\label{fig:g_translate}
\end{figure*}
In this paper we aim to understand the following: i) the effect of the ZeW attack on different types of indexing strategies, and ii) whether commercial solutions are vulnerable to ZeW attacks.
We conduct our experiments through a case study of a possible ZeW attack application: hate speech manipulation. We designed a simple injection strategy that, given a hateful sentence, identifies negative words and injects malicious characters in two possible fashions: i) \textit{Mask1}, where only one malicious character is inserted in the middle of the word, and ii) \textit{Mask2}, where one malicious character is inserted between each character of the word. We tested this strategy on popular text MLaaS provided by Amazon, Google, IBM, and Microsoft, without any prior knowledge of the target models.
The analysis aims to understand which services can be affected by ZeW, and the magnitude of the vulnerability.
Our experiments show that 11 out of 12 MLaaS are vulnerable to the proposed attack. We further introduce a simple countermeasure that can prevent ZeW. The purpose of this work is to emphasize the importance of studying the security of machine learning pipelines at all of their stages. Due to the gravity of ZeW, at the time of submission, all of the companies involved (i.e., Amazon, Google, IBM, and Microsoft) were informed.\par
Our contributions can be summarized as follows.
\begin{itemize}
\item We propose a novel text evasion strategy called \textit{Zero-Width} (ZeW) that affects the indexing stage of text pipelines.
\item We show the effect of ZeW over Machine-Learning-as-a-Service developed by Amazon, Google, IBM, and Microsoft. Out of 12 tested services, 11 show vulnerabilities (8 strongly affected).
\item We propose a countermeasure to ZeW that can be easily integrated in every text ML-based pipeline.
\end{itemize}
\paragraph{\textit{Paper Organization}}
The manuscript is organized as follows. In Section~\ref{sec.prel} we briefly introduce the basic concepts required to fully understand the rest of the paper. The motivations, theoretical perspective, and countermeasure of ZeW are described in Section~\ref{sec.zwspa}. Section~\ref{sec.cs} presents the implementation of ZeW in a real case scenario, hate speech manipulation, followed by a discussion of the attack results, first in a controlled environment (Section~\ref{sec.res_indoor}) and then on MLaaS (Section~\ref{sec.results}).
In Section~\ref{sec.rel_work} we summarize state-of-the-art attacks targeting models of MLaaS.
We conclude with the limitations of the proposed attack in Section~\ref{sec.limitations}, followed by considerations and discussions of the possible implications of our results (Section~\ref{sec.end}).
\section{Limitations}\label{sec.limitations}
In this section we briefly discuss the limitations of the ZeW attack and of the proposed countermeasure.
\paragraph{\textit{Attack}}
The results presented in Section~\ref{sec.results} show how different commercial services can be affected by the proposed ZeW attack. However, the efficacy of ZeW is strictly related to the services' implementation choices. For example, as shown in Section~\ref{sec.res_indoor}, character-based models are more resilient than word-based ones. Moreover, when a model discards unrecognized characters, the attack is completely unsuccessful.
Another major drawback is the limited control over malicious samples and, as a consequence, over the effect of the attack. If we consider language translators, an attacker can affect the translation, but he/she has no control over the output. For example, in the attack reported in Figure~\ref{fig:g_translate} we did not target that particular translation.
Similarly, in classification tasks, an attacker can only reduce the likelihood of a sentence belonging to a specific class (e.g., in this work we reduce sentences' negativity), and cannot force the sample to be classified as a target class.
\paragraph{\textit{Defense}}
In Section~\ref{sub.counter}, we presented a simple yet effective countermeasure, consisting of the removal (sanitization) of zero-width characters from any given sentence. This choice is possible since normal English sentences should not contain such characters. Moreover, to understand whether a ZeW attack is occurring, model owners can feed their applications with both the original and the sanitized sentences and look for discrepancies in the results.
The proposed sanitization technique is, however, applicable only to ZeW attacks, making it a patch rather than a general solution.
A popular countermeasure adopted in the state-of-the-art is the \textit{adversarial training}, where, for example, the defender augments the training data with examples of adversarial samples to make the model more robust~\cite{goodfellow2014explaining}. Even though the adversarial training showed promising results, we believe that a strong and simple countermeasure consists of limiting applications' character vocabulary.
We recall that our attack uses characters that are not normally present in the written language, and thus a simple input control can raise alerts whenever unlikely characters are identified.
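Such an input control can be sketched as a simple character whitelist; the allowed set below is an assumption, and real applications would tailor it to the languages they expect.

```python
import string

# Assumed application whitelist: printable ASCII only.
ALLOWED = set(string.printable)

def input_alert(text: str) -> bool:
    """Raise an alert when the sentence contains characters outside the
    application's expected vocabulary (e.g., zero-width code points)."""
    return any(ch not in ALLOWED for ch in text)

assert input_alert("ha\u200bte")      # hidden character triggers the alert
assert not input_alert("hate")        # clean sentences pass silently
```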
Finally, as reported in Table~\ref{tab:results}, character-based models present an intrinsic resiliency to ZeW attack; future commercial implementation should consider this aspect.
\section{Related Work}\label{sec.rel_work}
In the state-of-the-art we can find several adversarial attacks targeting the machine learning algorithms of MLaaS.
We now briefly summarize attacks on the MLaaS considered in Section~\ref{sec.results}.
\paragraph{\textit{Hate Speech Detectors}}Several works use Google Perspective as a case study for their hate-speech evasion techniques. For example, in~\cite{hosseini2017deceiving, 10.1145/3270101.3270103} the authors show the ability to manipulate Perspective by adding small mistakes to the sentences (e.g., typos, leet speak, word addition, word removal). In~\cite{9071126}, the authors propose evasion techniques based on acoustic and visual similarities, with evasion powers of 33\% and 72.5\%, respectively.
\paragraph{\textit{Sentiment Analyzers}}In~\cite{gong2018adversarial}, to manipulate sentiment tools, the authors applied techniques from computer vision, i.e., the Fast Gradient Sign Attack~\cite{goodfellow2014explaining}. In DeepFool~\cite{moosavi2016deepfool}, the authors manipulate the sentiment analysis of a CNN model. This algorithm uses the Word Mover's Distance (WMD)~\cite{kusner2015word} to find suitable words whose embeddings allow them to influence the target classifier. Similarly, in~\cite{Alzantot_2018}, the authors propose a word-replacement algorithm based on semantic similarities. In~\cite{DBLP:conf/ndss/LiJDLW19}, the authors describe TextBugger, a black-box framework that achieves a high evasion success rate on different Machine-Learning-as-a-Service platforms.
\paragraph{\textit{Machine Translators}}Cheng et al. propose AdvGen, a gradient-based method for attacking Neural Machine Translation (NMT) models~\cite{Cheng_2019}. In~\cite{Cheng_2020}, the authors propose two techniques to evade Seq2Seq models (e.g., translators) using ad-hoc loss functions: the \textit{non-overlapping attack} and the \textit{keyword attack}. In the first, the goal is to generate completely novel adversarial sentences, while in the latter, the malicious translation must contain target keywords. For more details on adversarial machine learning in Seq2Seq models, we refer the interested reader to~\cite{Cheng_2020}.
\section{Results on Controlled Environments}\label{sec.res_indoor}
In this section, we evaluate the impact of the ZeW injection strategy presented in Section~\ref{sec.cs} on different machine learning models and indexing techniques. Section~\ref{sub.res_indoor_sett} presents the experimental settings, followed by a discussion of the results in Section~\ref{sub.res_indoor_res}.
\subsection{Experimental Settings}\label{sub.res_indoor_sett}
Algorithm~\ref{Alg.1} aims to reduce the negative part-of-speech of a given sentence. We thus evaluate the impact of the ZeW injection strategy on a binary classification task: sentiment classification. The task consists of predicting whether a sentence is positive or negative. For the experiments we use the \textit{Sentiment140 dataset}~\cite{Sentiment140}, which contains positive and negative tweets (800K per class), for a total of 1.6M labeled tweets. We randomly split the corpus into training (70\%), validation (10\%), and testing (20\%) partitions.
We evaluate two types of ML algorithms:
\begin{itemize}
\item \textit{SGDClassifier}. This is a linear classifier. We use Scikit-Learn~\cite{scikit-learn} implementation. The model is built on top of a \textit{TfidfVectorizer}, i.e., an engine that converts raw documents into TF-IDF representations.
\item \textit{Recurrent Neural Network Classifier}. We deploy standard RNN-based classifiers using an embedder, followed by a two-layer GRU and a final linear layer. The models are implemented in PyTorch~\cite{NEURIPS2019_9015}.
\end{itemize}
Each model is trained over different variants of text representation, i.e., character- and word-based. The SGDClassifier implements two different configurations: character n-grams in the range $[1, 5]$, and word n-grams in the range $[1, 3]$. For example, the range $[1, 2]$ means that the vectorizer considers unigrams and bigrams. The RNN classifier is defined over character- and word-unigram tokenizers; in addition, we consider RNN classifiers that either use or discard ``unknown'' tokens.
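The resilience gap between the two tokenizations can be previewed with a plain-Python n-gram extractor (a stand-in for the library vectorizers): a zero-width injection replaces the whole word token, but leaves many character n-grams intact.

```python
def char_ngrams(text, n_min, n_max):
    """All contiguous character n-grams with n in [n_min, n_max]."""
    return [text[i:i + n]
            for n in range(n_min, n_max + 1)
            for i in range(len(text) - n + 1)]

clean = set(char_ngrams("hate", 1, 5))
poisoned = set(char_ngrams("ha\u200bte", 1, 5))
shared = clean & poisoned  # e.g. 'h', 'a', 'ha', 'te' survive the injection
```

Under a word-level vocabulary, by contrast, the poisoned token matches nothing and becomes out-of-vocabulary as a whole.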
The models share a common and standard preprocessing step that removes hashtags, mentions, and URLs from tweets. Table~\ref{tab:results} summarizes the models' configurations.
We now briefly describe the hyperparameter selection and training strategy of the two model categories. The SGD classifier is tuned with a grid-search strategy over the following TfidfVectorizer hyperparameters: max document frequency $(0.5, 0.75, 1)$, max number of features $(1000, 5000)$, and use of IDF $(True, False)$. We use the validation set to find the best configuration. The RNN models use default hyperparameter configurations: the embedding dimension is 100, and the GRU's hidden size is 256.
The vocabulary size is set to 25K tokens for the word-based cases and 100 for the character-based ones; these vocabulary thresholds allow the models to learn the representation of ``unknown'' tokens.
The training process uses the Adam optimizer and BCEWithLogitsLoss as the loss function. The models are trained for a maximum of 100 epochs. Note that we use an early-stopping mechanism that interrupts the training if a model does not improve its validation performance for 5 epochs.
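The early-stopping mechanism can be sketched as a patience counter over the validation score (a minimal sketch; the class name and interface are our own):

```python
class EarlyStopper:
    """Stop training after `patience` epochs without validation improvement."""

    def __init__(self, patience=5):
        self.patience = patience
        self.best = float("-inf")
        self.bad_epochs = 0

    def should_stop(self, val_score):
        if val_score > self.best:      # improvement: reset the counter
            self.best = val_score
            self.bad_epochs = 0
        else:                          # no improvement this epoch
            self.bad_epochs += 1
        return self.bad_epochs >= self.patience
```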
\subsection{Results and Considerations} \label{sub.res_indoor_res}
In this section, we evaluate the performance of the six models presented above. The first evaluation is conducted with the \textit{accuracy score} (ACC), i.e., the percentage of correct predictions. Table~\ref{tab:results} summarizes the results. As expected, DNN-based models tend to outperform simple linear models; this gap can be linked to the limited vocabulary size adopted in the \textit{TfidfVectorizer} due to memory limitations. We also highlight that the usage of unknown tokens does not boost the models' performance.
The effect of ZeW is measured with the \textit{attack success percentage} (ASP), i.e., the percentage of sentences classified as positive. Note that this percentage also includes those samples that are misclassified under normal conditions. The evaluation uses three corpora: a set of original tweets called ``\textit{real}'', and two malicious counterparts (one per mask) named ``\textit{mask1}'' and ``\textit{mask2}'', respectively. The set ``\textit{real}'' corresponds to the negative test sentences (160K); we then discard those sentences that cannot be modified by Algorithm~\ref{Alg.1}, resulting in a final set of 75K tweets.
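The ASP computation itself is straightforward; a minimal sketch, assuming predictions are encoded with 1 for positive and 0 for negative:

```python
def attack_success_percentage(predictions):
    """Percentage of (originally negative) sentences predicted as positive."""
    return 100.0 * sum(1 for p in predictions if p == 1) / len(predictions)
```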
Table~\ref{tab:results} presents the ZeW success percentages. We can notice that the ASP is always under 40\%. This result can be explained by:
\begin{itemize}
\item a limitation of Algorithm~\ref{Alg.1}, whose injection strategy only modifies negative tokens. However, a sentence's polarity might be the effect of a sequence of tokens rather than the sum of the polarities of single tokens.
\item a limitation of ZeW, where the injection is limited to a strict set of operations (i.e., the insertion of a set of characters).
\end{itemize}
Nevertheless, we can find some insights from such results:
\begin{enumerate}
\item ZeW can affect the performance of different models that use different tokenization strategies. The combination of ZeW with state-of-the-art attacks targeting ML models can have dangerous effects.
\item In general, character-based models are more resilient to ZeW. In particular, we highlight that when unknown tokens are discarded, the ZeW attack fails.
\item In general, models that consider unknown tokens are more vulnerable. An attacker can thus leverage this factor.
\end{enumerate}
\begin{table*}[!ht]
\centering
\begin{tabular}{ccc|ccc|ccc} \toprule
\textbf{ML Model} & \textbf{Tokenization} & \textbf{UNK} & \textbf{Train (ACC)} & \textbf{Valid (ACC)} & \textbf{Test (ACC)} & \textbf{Real (ASP)} & \textbf{Mask1 (ASP)} & \textbf{Mask2 (ASP)}\\ \midrule
SGDClassifier & Char & No & 77.15 & 77.00 & 77.19 & 12.06 & 22.15 & 29.63\\
SGDClassifier & Word & No & 73.04 & 73.00 & 73.15 & 14.94 & 20.88 & 27.12 \\ \midrule
RNN & Char & No & 81.68 & 81.52 & 81.46 & 5.27 & \textbf{3.72} & \textbf{3.72} \\
RNN & Char & Yes & 82.60 & 82.41 & 82.39 & 7.57 & 12.53 & 21.34 \\
RNN & Word & No & 84.79 & 84.20 & 84.28 & 6.93 & 37.75 & 37.19\\
RNN & Word & Yes & 84.93 & 84.38 & 84.41 & 6.25 & 37.29 & 36.62\\ \bottomrule
\end{tabular}
\caption{Overview of the models' performance. The accuracy score (ACC) measures the quality of the model on the three splits. The attack success percentage (ASP) measures the misclassification percentage of a given classifier; in bold, the results of models resistant to ZeW.}
\label{tab:results}
\end{table*}
\section{Results on MLaaS}\label{sec.results}
In this section, we show how ZeW affects the performance of different MLaaS offered by the leading IT companies: Amazon, Google, IBM, and Microsoft. The considered companies provide similar services and, where possible, the results are grouped accordingly.
We identify the following macro-areas.
\begin{itemize}
\item \textit{Hate Speech Detection} (Section~\ref{sub.hs}). Tools that identify toxicity/hate speech in comments.
\item \textit{Insights Extractors} (Section~\ref{sub.oth}). Tools that extract insightful information from the text (e.g., tones, personalities).
\item \textit{Sentiment Analyzers} (Section~\ref{sub.sent}). Tools that measure sentence polarization.
\item \textit{Translators} (Section~\ref{sub.tr}). Tools that translate sentences from a source language to a target one.
\end{itemize}
In this work, we do not compare the performance of ZeW with state-of-the-art attacks, since their focus is to exploit ML algorithms' vulnerabilities, while we aim at the disruption of the indexing stage.
Since our attack model is free of restrictions on the number of modifications, an attacker can combine ZeW with attacks targeting ML algorithms.
\subsection{Dataset \& Evaluation on VaderSentiment}\label{sub.vadRes}
For the experiments, we use the hateful sentences available in~\cite{ICWSM1715665}, a well-known dataset for this task. The dataset contains three distinct classes: ``hateful'', ``offensive but not hateful'', and ``neither'' (neither hateful nor offensive). The dataset includes 1430 hateful sentences. From now on, we call the set of hateful sentences \textit{Real}. Sentences in which no negative word is detected are discarded from \textit{Real}. We then apply the injection algorithm with the two possible masks, generating two sets called \textit{Mask1} and \textit{Mask2}, respectively. The final corpora contain 1094 samples each. All of the analyses and tests on the different MLaaS use these corpora.
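The mask generation can be sketched as follows. The actual zero-width characters and per-word injection counts are those defined by the two masks introduced earlier; the choices below (a single U+200B inserted after the first letter of each detected negative word) are purely illustrative:

```python
ZWSP = "\u200b"  # zero-width space; illustrative choice of invisible character

def inject_zew(sentence, negative_words, per_word=1):
    """Insert invisible characters inside every detected negative word."""
    out = []
    for tok in sentence.split():
        if tok.lower() in negative_words:
            tok = tok[:1] + ZWSP * per_word + tok[1:]  # visually unchanged
        out.append(tok)
    return " ".join(out)
```

After injection, the sentence renders identically to the original for a human reader, but character- or byte-level tokenizers see a different input.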
\begin{figure}[!htb]
\centering
\includegraphics[width=.8\linewidth]{Figures/vader.pdf}
\caption{Negative sentiment densities of the different corpora measured by VaderSentiment, where +1.0 is extremely negative and 0.0 is the absence of negativity. The violin plot shows the distributions of the corpora's negative scores; the blue line represents the median value of the distribution. The service is vulnerable if the distributions under attack are not equal to the distribution of \textit{Real}.}
\label{fig:vader}
\end{figure}
We first analyze the impact of ZeW on VaderSentiment. As shown in Figure~\ref{fig:vader}, both injection strategies (\textit{Mask1} and \textit{Mask2}) entirely cancel the perceived negativity. The median values of negativity scores are 0.35 (\textit{Real}), and 0.0 for both \textit{Mask1} and \textit{Mask2}. ZeW is effective in both modalities against VaderSentiment. The injection of only one character per negative word is enough to disrupt this service. This might be a relevant problem since this tool is widely used in the scientific community.
Here, we also show the effectiveness of the defense strategy proposed in Section~\ref{sub.counter}, where the sanitization technique discards the malicious characters from any given sentence. Figure~\ref{fig:vader} shows that the sanitized sentences have the same distribution as the original, unpoisoned corpus. Given the simplicity and effectiveness of the proposed countermeasure, we do not report similar results in the rest of Section~\ref{sec.results}.
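The sanitization countermeasure amounts to filtering a blacklist of zero-width code points before any tokenization takes place. A minimal sketch, assuming an illustrative blacklist (the exact set of characters used by the defense may differ):

```python
# Illustrative blacklist: zero-width space/non-joiner/joiner, word joiner, BOM.
ZERO_WIDTH = {"\u200b", "\u200c", "\u200d", "\u2060", "\ufeff"}

def sanitize(text):
    """Drop zero-width characters so the indexing stage sees clean tokens."""
    return "".join(ch for ch in text if ch not in ZERO_WIDTH)
```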
\subsection{Hate Speech Detection}\label{sub.hs}
We start our analysis with hate-speech detection, the set of tools closest to our case study. These tools aim to identify and detect the toxicity of sentences. The goal of an adversary is to write hateful sentences without being detected.
This scenario is likely on social networks (e.g., Facebook) that use account suspension or banning when users post inappropriate content.
We analyzed the following services.
\begin{itemize}
\item Google Perspective\footnote{\url{https://www.perspectiveapi.com}}. Perspective is part of the Conversation AI project, whose aim is to improve the quality of online conversations with the supervision of ML. The tool identifies several aspects of online conversations that might be inappropriate, such as toxicity, profanity, and flirtation. In this experiment, we focus on toxicity manipulation, defined as disrespectful comments.
\item Microsoft Content Moderator\footnote{\url{https://azure.microsoft.com/en-us/services/cognitive-services/content-moderator}}. This ensemble of ML tools aims to identify potentially offensive content in different types of media, such as text, images, and videos. In the text domain, the model identifies three categories of malicious content: sexually explicit, sexually suggestive, and offensive. In this analysis, we focus on the latter. The tool offers an ``autocorrect'' option, which corrects grammatical mistakes before analyzing the contents; in our experiment, this parameter is set to TRUE.
\end{itemize}
Figure~\ref{fig:toxicity} shows the effect of ZeW on the toxicity detectors. Both services show a relatively high resistance to the attack.
On Google Perspective, the median confidence level of the detector is 0.95 on \textit{Real}, 0.84 on \textit{Mask1}, and 0.83 on \textit{Mask2}. Similarly, on Microsoft Moderator the median is 0.99 on \textit{Real}, 0.97 on \textit{Mask1}, and 0.86 on \textit{Mask2}.
The impact of the attack is not strong, and the models seem resistant. On the other hand, in Google Perspective the insertion of only one zero-width character per negative word appears sufficient to damage the model's confidence level. Similarly, the Microsoft tool can be affected by \textit{Mask2}. We also need to highlight that the purpose of these tools is to detect high toxicity levels rather than negativity in sentences; thus, Algorithm~\ref{Alg.1} might not be fully effective.
The combination of ZeW with other state-of-the-art adversarial techniques could seriously damage these services. We can thus state that both models are vulnerable to this attack.
\begin{figure*}[!ht]
\centering
\subfloat[Google Perspective.]{%
\includegraphics[width=0.4\linewidth]{Figures/google_perspective_score.pdf}%
\label{fig:persp}%
}%
\hfill%
\subfloat[Microsoft Moderator.]{%
\includegraphics[width=0.4\linewidth]{Figures/microsoft_moderator_score.pdf}%
\label{fig:mic_mod}%
}%
\caption{Toxicity score densities of different corpora measured by Google Perspective (left), and Microsoft Moderator (right), where +1.0 is high confidence of being classified as toxic. The violin plot shows the distributions of the corpora’s toxicity scores; the blue line represents the median value of the distribution. A service is vulnerable if the distributions under attack are not equal to the distribution of \textit{Real}.}
\label{fig:toxicity}
\end{figure*}
\subsection{Insights Extractors}\label{sub.oth}
Online Social Networks (OSNs) such as Facebook and Twitter are places where billions of users share their experiences, ideas, feelings, and opinions. These platforms are perfect for analyzing social behaviors and interactions. Several studies are conducted, from sentiment analysis and opinion mining~\cite{pak-paroubek-2010-twitter-corpus, 10.1007/978-3-642-35176-1_32}, to the prediction of when a security vulnerability will be exploited~\cite{10.1145/3292500.3330742}.
IBM offers two services that are helpful for analyzing OSN data.
A possible attacker's goal is to hide his/her own personality.
\begin{itemize}
\item IBM Watson Tone Analyzer\footnote{\url{https://www.ibm.com/cloud/watson-tone-analyzer}}. The tool detects and extracts emotional and language tones in a written text.
\item IBM Watson Personal Insight\footnote{\url{https://www.ibm.com/watson/services/personality-insights}}. The tool predicts the personality of a target user. For example, this tool allows us to analyze the tweets-history of a target Twitter account.
\end{itemize}
IBM Watson Tone Analyzer returns a list of emotions (strings) detected in a given sentence. Here, a possible adversary's goal is to hide/manipulate the emotions of his text. To understand the efficacy of ZeW, we measure the similarity between the sets of emotions of the unpoisoned sentences and their poisoned counterparts. In particular, given a sentence $x$, its adversarial counterpart $x'$, and a tone extractor function $f$, we obtain the sets $A=f(x)$ and $B=f(x')$. The similarity between $A$ and $B$ is given by the \textit{Jaccard similarity}, defined as follows~\cite{niwattanakul2013using}:
\begin{equation}
J(A, B) = \frac{dim(A\cap B)}{dim(A \cup B)},
\end{equation}
where $dim$ returns the number of items in the set.
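Computed over the detected emotion sets, this metric takes a few lines (with the convention, an assumption on our part, that two empty sets count as identical):

```python
def jaccard(a, b):
    """Jaccard similarity between two sets of detected emotions."""
    a, b = set(a), set(b)
    if not (a | b):          # both empty: treat as identical
        return 1.0
    return len(a & b) / len(a | b)
```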
The performance is measured by comparing the Jaccard similarities of \textit{Real} vs. \textit{Mask1} and \textit{Real} vs. \textit{Mask2}.
Ideally, the API is resistant if the Jaccard Similarity is equal to +1.0 (two identical sets).
In Figure~\ref{fig:ta}, we can notice a different trend: the median values are 0.5 and 0.33 for \textit{Mask1} and \textit{Mask2}, respectively.
A good portion of the sentences (40\%) is not affected; a possible explanation is that the negative words of those sentences are not essential for extracting the emotion.
Note that in this analysis we discard those sentences for which the tool detects no ``tone'' (322 sentences discarded).
\begin{figure}
\centering
\includegraphics[width=.8\linewidth]{Figures/ta.pdf}%
\caption{The similarity distributions between \textit{Real} vs. \textit{Mask1} and \textit{Real} vs. \textit{Mask2} of Watson Tone Analyzer, where +1.0 is an exact match between two sets. The violin plot shows the distributions of the corpora's Jaccard similarities; the blue line represents the median value of the distribution. The service is vulnerable if the distributions under attack are not close to one.}
\label{fig:ta}
\end{figure}
On IBM Watson Personal Insight, the adversary's goal is to hide/manipulate his personality. In our test, we extract the personalities from the three corpora. In this experiment, the analysis is at the corpus level rather than the sentence level, i.e., we obtain one personality per corpus. In Figure~\ref{fig:personalities}, we can notice that \textit{Real} and \textit{Mask1} differ in terms of ``Openness'' and ``Conscientiousness'', while \textit{Mask2} seems to push all of the dimensions close to zero. In conclusion, both IBM services are severely vulnerable to ZeW.
\begin{figure}
\centering
\includegraphics[width=1.\linewidth]{Figures/personality.pdf}
\caption{ Watson Personal Insight detects three distinct personalities (\textit{Real}, \textit{Mask1}, and \textit{Mask2}). The service is vulnerable if at least one of the five dimensions changes.}
\label{fig:personalities}
\end{figure}
\subsection{Sentiment Analyzers}\label{sub.sent}
Sentiment analysis is one of the most popular topics in NLP~\cite{7724305, liu2012sentiment, agarwal2011sentiment, maas2011learning} and can be used for several purposes, such as understanding opinions on restaurants, movies, or products. This importance is reflected by the fact that all four companies implement this service: Amazon Comprehend\footnote{\url{https://aws.amazon.com/comprehend}.}, Google Cloud Natural Language\footnote{\url{https://cloud.google.com/natural-language}.}, IBM Watson Natural Language Understanding\footnote{\url{https://www.ibm.com/cloud/watson-natural-language-understanding}.}, and Microsoft Text Analytics\footnote{\url{https://azure.microsoft.com/en-us/services/cognitive-services/text-analytics}.}.
In the hate speech scenario, as shown in Figure~\ref{fig:vader}, the sentences are likely to be perceived as negative. A possible adversary's goal is to minimize the detected negativity; this attack can be seen as a \textit{transferable attack}~\cite{papernot2016transferability} since our malicious sentences are first tested on a sentiment analyzer (i.e., VaderSentiment).
Figure~\ref{fig:sent} shows the effectiveness of ZeW on 3 out of 4 services. In particular, Amazon Comprehend is resistant to both modalities of injection, with a constant median value (0.86). Google Cloud Natural Language shows a similar vulnerability pattern for both masks, with an equal median value that moves from 0.5 to 0.2.
In this service, the addition of one character per negative word is sufficient to disrupt it. We conclude with the services provided by IBM and Microsoft, where we see a common decreasing pattern of the median values, which move from 0.92 / 0.95 on \textit{Real}, to 0.56 / 0.24 on \textit{Mask1}, and to 0.13 / 0.05 on \textit{Mask2}.
We can state that three out of four services are severely vulnerable to ZeW, while only one shows resistance.
\begin{figure*}[!t]
\centering
\subfloat[Amazon Comprehend.]{%
\includegraphics[width=0.4\linewidth]{Figures/amazon_sent.pdf}%
\label{fig:amazon_comprehend}%
}%
\hfill%
\subfloat[Google Cloud Natural Language.]{%
\includegraphics[width=0.4\linewidth]{Figures/google_sent.pdf}%
\label{fig:google_cnl}%
}
\newline
\subfloat[IBM Watson Natural Language Understanding.]{%
\includegraphics[width=0.4\linewidth]{Figures/ibm_sent.pdf}%
\label{fig:watson_nlu}%
}%
\hfill%
\subfloat[Microsoft Text Analytics.]{%
\includegraphics[width=0.4\linewidth]{Figures/microsoft_sent.png}%
\label{fig:microsoft_ta}%
}
%
\caption{Effect of Zero-Width Space Attack on different sentiment extractor services. The violin plot shows the distributions of the corpora’s negative scores; the blue line represents the median value of the distribution. A service is vulnerable if the distributions under attack are not equal to the distribution of \textit{Real}.}
\label{fig:sent}
\end{figure*}
\subsection{Translators}\label{sub.tr}
We conclude the results section with another well-known NLP task: language translation. All four companies implement this service: Amazon Translate\footnote{\url{https://aws.amazon.com/translate}.}, Google Translation\footnote{\url{https://cloud.google.com/translate?hl=en}.}, IBM Watson Language Translator\footnote{\url{https://www.ibm.com/cloud/watson-language-translator}.}, and Microsoft Translator\footnote{\url{https://azure.microsoft.com/en-us/services/cognitive-services/translator}.}.
In the hate speech scenario, we can imagine that the adversary writes a hateful message in a language unknown to the victim. The victim uses translators to understand the meaning of the message. For example, human moderators could use automatic translators to understand whether comments written in foreign languages are hateful. Another example is browsers like Chrome that automatically translate web content.
An example of this scenario is shown in Figure~\ref{fig:g_translate}, where the malicious sentence ``I wanna kill you'' is translated as ``I love you'' by Google Translate\footnote{\url{https://translate.google.com}.}. Note that since the target model is unknown, we do not have any control over the target output. We highlight that the attacker's aim is to degrade the general performance of the target model rather than to control the translation process.
ZeW is evaluated on the English-Italian translation task.
To understand the impact, we measure the similarity between the translations of the unpoisoned sentence and of its malicious counterpart. The similarity is measured with the Bilingual Evaluation Understudy score (BLEU score), in its 4-gram cumulative implementation. Formally, given a sentence $x$, its malicious counterpart $x'$, and a translation function $f$, the similarity is defined as
\begin{equation}
similarity = BLEU4(f(x), f(x')).
\end{equation}
Ideally, a service is not affected if the translations of the original sentence and of its malicious version are identical, resulting in a BLEU score equal to +1.0 (perfect match).
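A minimal single-reference sketch of the cumulative 4-gram BLEU used here (evaluations in practice typically rely on library implementations such as NLTK's; the brevity-penalty handling below is a simplification):

```python
import math
from collections import Counter

def ngram_counts(tokens, n):
    return Counter(tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1))

def bleu4(reference, candidate):
    """Cumulative 4-gram BLEU of `candidate` against a single `reference`."""
    precisions = []
    for n in range(1, 5):
        cand = ngram_counts(candidate, n)
        ref = ngram_counts(reference, n)
        # modified n-gram precision: clip counts by the reference counts
        overlap = sum(min(c, ref[g]) for g, c in cand.items())
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    brevity = 1.0 if len(candidate) >= len(reference) \
        else math.exp(1 - len(reference) / max(len(candidate), 1))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / 4)
```

Identical translations give a score of 1.0; disjoint translations give 0.0.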
In Figure~\ref{fig:transl}, we can see that all of the services are vulnerable to the attack. Amazon seems resistant to \textit{Mask1}, with a median value equal to 1.0, while vulnerable to \textit{Mask2}, with a median equal to 0.83. Similarly, IBM is resistant to \textit{Mask1} and vulnerable to \textit{Mask2}: the median value is 1.0 for \textit{Mask1} and 0.58 for \textit{Mask2}.
Google and Microsoft show vulnerabilities for both injection strategies, with median values moving from 0.63 / 0.47 on \textit{Mask1} to 0.40 / 0.34 on \textit{Mask2}.
\begin{figure*}[!t]
\centering
\subfloat[Amazon Translate.]{%
\includegraphics[width=0.4\linewidth]{Figures/amazon_tr.pdf}%
\label{fig:amazon_tr}%
}%
\hfill%
\subfloat[Google Translate.]{%
\includegraphics[width=0.4\linewidth]{Figures/google_tr.pdf}%
\label{fig:google_tr}%
}
\newline
\subfloat[IBM Watson Natural Language Translator.]{%
\includegraphics[width=0.4\linewidth]{Figures/ibm_tr.pdf}%
\label{fig:watson_tr}%
}%
\hfill%
\subfloat[Microsoft Translator.]{%
\includegraphics[width=0.4\linewidth]{Figures/microsoft_tr.pdf}%
\label{fig:microsoft_tr}%
}
%
\caption{Effect of Zero-Width Space Attack on different translator services. The violin plot shows the distributions of the corpora’s BLEU scores; the blue line represents the median value of the distribution. A service is vulnerable if the distributions under attack are not close to one.}
\label{fig:transl}
\end{figure*}
All of the models show more difficulties in handling \textit{Mask2}. These tools show different vulnerability patterns compared to the sentiment analysis task. A possible explanation is the nature of translators: they are Seq2Seq models (i.e., autoencoders), which likely use different placeholders to deal with OOV tokens, as introduced in Section~\ref{sub.textpipeline}.
We can state that all of the services are vulnerable to ZeW: three strongly, and only one (Amazon) weakly.
\subsection{Considerations}\label{sub.res_cons}
In this section, we analyzed how different MLaaS behave under the ZeW attack. We can notice different trends among types of services (e.g., sentiment analyzers) and within the same companies (e.g., Microsoft). We now try to understand why these models behave differently.
ZeW seems to fail on \textit{hate speech detectors}. This result suggests that both services use character-based tokenizers, which is a reasonable assumption since such services must deal with noisy text (e.g., grammatical errors, misspellings) gathered from blogs, forums, and social networks. Moreover, such services are resistant to the injected noise (unknown tokens); a possible explanation is that these services handle unrecognized words, e.g., by discarding them.
Amazon services show similar performance.
In general, IBM MLaaS are vulnerable to ZeW attack. Similar trends are shared among different services (e.g., Watson Personal Insight, sentiment extractor), where the attack is more effective when we inject more ZeW characters. These trends are similar to the RNN char-based with UNK performance, as shown in Table~\ref{tab:results}.
Finally, on translators, we find two patterns: i) resistant only to \textit{Mask1} (Amazon, IBM), and ii) vulnerable to both injection levels (Google, Microsoft). Since \textit{Mask2} has a stronger impact, the four models might be character-based. However, it is unclear why there is such a discrepancy, with two out of four models resistant to the \textit{Mask1} ZeW attack. More in-depth investigations should be conducted on neural machine translator architectures.
\section{Introduction}
\input{Sections/Introduction}
\input{Sections/Background}
\input{Sections/Attack}
\input{Sections/CaseStudy}
\input{Sections/Results}
\input{Sections/Related_Works}
\input{Sections/Limitations}
\input{Sections/Conclusions}
\balance
\bibliographystyle{IEEEtran}
\section{Introduction}
Extensive studies on the high-$T_c$ superconducting copper oxides have
revealed an intimate connection between the magnetism and the
superconductivity. \cite{kastner} A detailed study of the spin dynamics in the
superconducting La$_{2-x}$Sr$_x$CuO$_4$ system has been performed by Yamada
$et$ $al.$ \cite{yamada}
The phase diagram of La$_{2-x}$Sr$_x$CuO$_4$ has been explored and it has been
shown that the magnetic properties evolve dramatically with Sr doping.
The parent material La$_2$CuO$_4$ shows three-dimensional (3D) long-range
antiferromagnetic ordering below $\sim$325 K. \cite{lco,birgeneau} When Sr is
doped in the material, the 3D antiferromagnetic ordering quickly disappears and,
as originally
predicted by Aharony $et$ $al.$, \cite{aharony} the N\'{e}el state is replaced by
a spin-glass phase. In this phase elastic magnetic Bragg rods, originating from
two-dimensional (2D) spin correlations, develop gradually as shown by
Sternlieb $et$ $al.$ \cite{sternlieb} and Keimer $et$ $al.$ \cite{keimer}
In particular, Keimer $et$ $al.$ found that the magnetic peaks are almost elastic
and relatively sharp in $Q$ in La$_{1.96}$Sr$_{0.04}$CuO$_4$.
Very recently, Wakimoto $et$ $al.$ studied the magnetic properties in the spin-glass
phase (0.03$\le x\le$0.05) in detail elucidating the hole concentration dependence
of the transition temperature and the spin correlations. \cite{wakimoto}
Most importantly, they found incommensurate spin correlations in the spin-glass
La$_{1.95}$Sr$_{0.05}$CuO$_4$ in which the positions of the incommensurate
peaks were rotated by 45$^\circ$ in reciprocal space about ($\pi$,$\pi$) from those
observed in Sr-richer superconducting compounds.
On the other hand, static magnetic ordering has also been observed in superconducting
La$_{2-x}$Sr$_x$CuO$_4$. Suzuki $et$ $al.$ \cite{suzuki} and Kimura $et$
$al.$ \cite{kimura} showed that sharp incommensurate
magnetic Bragg peaks develop at low temperatures in superconducting
La$_{1.88}$Sr$_{0.12}$CuO$_4$, in close analogy with the putative stripe ordering
of holes and spins found in the La$_{2-y-x}$Nd$_y$Sr$_x$CuO$_4$
system. \cite{tranquada} Very recently, quasi-3D magnetic ordering has been observed
in superconducting La$_2$CuO$_{4+\delta}$. \cite{lee} Thus, magnetic ordering
takes place in the La$_{2-x}$Sr$_x$CuO$_4$ system with various levels of hole
doping. It is essential to clarify the nature of the static magnetic behavior
in order to understand the relationship between the spin-glass, the stripe, and the
low temperature magnetic phase in the superconducting samples.
In this paper, we report on high-resolution elastic and inelastic
neutron scattering studies in La$_{1.98}$Sr$_{0.02}$CuO$_4$. Previously,
inelastic neutron scattering measurements with medium-resolution were performed
on the same sample in order to study the magnetic excitations. \cite{matsuda} The
main purpose of the previous experiments was to compare the dynamic spin properties of
the $x$=0.02 sample with those in
La$_{1.96}$Sr$_{0.04}$CuO$_4$, in which the integrated susceptibility scales with
$E/T$. \cite{keimer} Since the time that these experiments were performed, it has
become evident that the static magnetic properties
are also important as mentioned above. Stimulated by those findings, we aimed
to clarify the static magnetic properties in the spin-glass phase of
La$_{1.98}$Sr$_{0.02}$CuO$_4$, in which the 3D antiferromagnetic long-range
ordering just disappears. The most interesting issue is how
the long-range 3D antiferromagnetic ordering disappears and the spin-glass behavior
appears with hole doping.
In this study we find that below $\sim$40 K quasi-elastic magnetic peaks develop at
the positions where magnetic Bragg peaks exist in La$_2$CuO$_4$. This means that
spin correlations develop perpendicular to the CuO$_2$ plane in addition to parallel
to it, suggesting that the 2D spin fluctuations, which exist at high temperatures,
at least in part freeze with freezing temperature enhanced due to the interplanar
interaction. Our most important findings
are that the spin correlations in the CuO$_2$ plane are anisotropic at low temperatures
and that the spin direction in the spin clusters differs from that in pure La$_2$CuO$_4$.
The spin cluster dimension along the $a\rm_{ortho}$ axis ($\sim$160 \AA) is $\sim$6 times
longer than that along the $b\rm_{ortho}$ axis ($\sim$25 \AA) at 1.6 K. From these results,
it is concluded that the spin-glass state can be described as a random freezing of
quasi-3D spin clusters with anisotropic spin correlations.
The format of this paper is as follows: The scattering geometry used in this study
is summarized in Sec. 2. Experimental details are described in Sec. 3.
The experimental results of the elastic neutron measurements are presented and the
spin structure of the short-range ordered state is summarized in Sec. 4. The
experimental results on the inelastic neutron measurements are presented in Sec. 5.
In Sec. 6 we discuss the magnetic properties of the spin-glass state.
\section{Scattering geometry}
Figure 1(a) shows the scattering geometry in the $(H,H,0)$ scattering plane in the
high-temperature tetragonal phase. La$_{1.98}$Sr$_{0.02}$CuO$_4$ undergoes
a structural phase transition from tetragonal ($I4/mmm$) to orthorhombic ($Bmab$)
structure at 485 K. Figure 1(b) shows the scattering geometry in the
$(H,K,0)$ scattering plane in the low-temperature orthorhombic phase. The structure
is slightly distorted with the $a\rm_{ortho}$ and $b\rm_{ortho}$ axes almost along the
diagonal directions of the $a\rm_{tetra}$ and $b\rm_{tetra}$ axes. Ideally, the
experiments should be performed with a single domain crystal. However, four
domains are expected to exist in real crystals since a twin structure is energetically
stable. For small rectangular samples
it is possible to prepare a single domain crystal by heating the sample above
the transition temperature and then cooling it down while applying pressure along
the $a\rm_{ortho}$ or $b\rm_{ortho}$ axis. However, the heat treatment is not efficient
for large crystals so that preparation of a single domain sample is extremely difficult
in practice. Consequently in this experiment, we were forced to use a four-domain crystal.
The scattering geometry for the four-domain crystal is shown in Fig. 1(c).
$(H,K,L)\rm_{ortho}$
and $(K,H,L)\rm_{ortho}$ nuclear Bragg peaks are observed at nearby positions. As a
result, four peaks are observed around (1,1,0)$\rm_{tetra}$ and three peaks are
observed around (2,0,0)$\rm_{tetra}$ in the $(H,K,0)$ scattering plane. Figure 2(a)
shows an elastic scan (scan A as indicated in Fig. 1(c)) at (2,0,0)$\rm_{tetra}$.
The two side peaks at
(2,0,0)$\rm_{tetra}$ originate from two of the domains and the central peak, which is a
factor of $\sim$2 larger than each side peak, originates from the other two domains.
From these results, we estimate that the four domains are equally distributed.
Due to the twin structure, the $(H,0,L)\rm_{ortho}$ and the $(0,K,L)\rm_{ortho}$ scattering
planes are superposed upon each other as shown in Fig. 1(d). Scan B in Fig. 1(d)
corresponds to scan C in Fig. 1(c). Since the vertical resolution is quite broad,
the (2,0,0)$\rm_{ortho}$ ((0,2,0)$\rm_{ortho}$) peak shown in Fig. 2(b) actually originates
from two separate (2,0,0)$\rm_{ortho}$ ((0,2,0)$\rm_{ortho}$) peaks from two domains above
and below the scattering plane.
There are two ways to express Miller indices; the high temperature tetragonal phase
notation $(H,K,L)\rm_{tetra}$ and the low temperature orthorhombic phase notation
$(H,K,L)\rm_{ortho}$. Since all of the results shown in this paper are observed in the
orthorhombic phase and also obtained in the $(H,0,L)\rm_{ortho}$ and
$(0,K,L)\rm_{ortho}$ scattering planes, $(H,K,L)\rm_{ortho}$ will be used to express
Miller indices.
\section{Experimental Details}
The single crystal of La$_{1.98}$Sr$_{0.02}$CuO$_4$ was grown from a
non-stoichiometric CuO-rich solution. \cite{hidaka} The dimensions of the
plate-like shaped crystal are about 20 $\times$ 20 $\times$
3 mm$^3$. The effective mosaic of the single crystal is less than
0.5$^\circ$ full-width-at-half-maximum. The Sr concentration was determined
directly from an electron probe microanalysis measurement and indirectly from neutron
scattering measurements of the structural phase transition temperature,
$T\rm_{st}$=485 K.
The lattice constants are $a\rm_{ortho}$=5.333 \AA, $b\rm_{ortho}$=5.414 \AA,
and $c$=13.098 \AA\ ($b/a$=1.015) at 1.6 K.
The La$_{1.98}$Sr$_{0.02}$CuO$_4$ crystal is the same one as used in
Ref. \onlinecite{matsuda}. We observe that weak but sharp magnetic Bragg peaks
gradually develop below $\sim$165 K at the positions where magnetic Bragg peaks
exist in La$_2$CuO$_4$. However, the volume fraction is estimated to be less than
10\% if 0.3$\mu_B$ is assumed for the Cu moment as in an oxygen-rich
La$_2$CuO$_4$ ($T_N$=185 K). We assume that the 3D magnetic peaks come from a small
fraction of the volume where the Sr and/or oxygen concentration is below the
critical value. Since the magnetic signals which are discussed in this paper originate
from diffuse scattering, the sharp peaks from the 3D N\'{e}el phase can easily be
separated from the 2D spin glass signals by avoiding the regions around
$(1,0,even)\rm_{ortho}$ and $(0,1,odd)\rm_{ortho}$.
The neutron scattering experiments were carried out on the three-axis spectrometers
H7 and H8 at the Brookhaven High-Flux Beam Reactor, on the three-axis
spectrometer TASP at the cold neutron guide at the PSI-SINQ Facility, and
on the three-axis spectrometer SPINS at the cold neutron guide at the NIST
Center for Neutron Research.
For most of the elastic measurements, the horizontal collimator sequences were
40$'$-40$'$-S-40$'$-80$'$ and 32$'$-40$'$-S-40$'$-220$'$ with a fixed incident
neutron energy of $E_i$=5 meV.
For the inelastic measurements, the horizontal collimator sequences were
20$'$-20$'$-S-20$'$-80$'$ with $E_i$=14.7 meV and 72$'$-80$'$-S-80$'$-80$'$
with $E_f$=8 meV for lower energies ($E\le$4 meV) and
40$'$-40$'$-S-40$'$-80$'$ with $E_f$=14.7 meV for $E$=10 meV. Pyrolytic
graphite (002) was used as both monochromator and analyzer. Contamination
from higher-order neutrons was effectively eliminated using Be filters for
$E_i$=5 meV and pyrolytic graphite filters for $E_i$=14.7 meV and $E_f$=14.7 meV.
No filter was used for measurements with $E_f$=8 meV at TASP since the population
of the higher-order beam is considerably reduced.
The single crystal was oriented in the $(H,0,L)\rm_{ortho}$
and $(0,K,L)\rm_{ortho}$ scattering planes. For the elastic measurements the sample
was mounted in a helium pumped cryostat while for the inelastic measurements the
sample was mounted in a closed cycle refrigerator.
\section{Static Magnetic Properties}
\subsection{Neutron Elastic Experiments}
Figure 3(a) shows an elastic neutron scan at $(H,0,-0.3)$ at 1.6 K. A sharp
peak is observed at $H$=1. If the magnetic correlations were isotropic in the $ab$
plane and the correlations were purely two dimensional, it would be expected that
two sharp equi-intense peaks would be observed at $H$=0.985 ($K$=1) and 1.
\cite{14}
The data in Fig. 3(a) indicate that there are spin correlations along the $c$ axis
and/or that spin correlations are anisotropic. The
temperature dependence of the intensity at (1,0,$-$0.3) is shown in Fig. 3(b). The
filled circles represent data measured with $E_i$=5 meV
($\Delta E$=0.25 meV); the intensity gradually develops below $\sim$40 K. The
open circles represent the data measured with $E_i$=14.7 meV ($\Delta E$=0.9
meV); the intensity starts to increase at $\sim$80 K and the development of the
intensity is much broader. These results are consistent with those observed by
Keimer $et$ $al.$ in La$_{1.96}$Sr$_{0.04}$CuO$_4$. \cite{keimer} Specifically,
this means that the magnetic signal is quasi-elastic rather than truly elastic so that
the temperature dependence of the intensity depends on the energy window.
\subsection{Spin Structure}
Figures 4(a) and 4(b) show the $L$ dependence of the magnetic elastic peaks at
(1,0,$L$) and (0,1,$L$) at 1.6 K, respectively.
The background estimated from the high temperature data (60 K) was subtracted
so that the remaining signal is purely magnetic.
Broad peaks are observed at $(1,0,even)$ where magnetic Bragg
peaks exist in La$_2$CuO$_4$. There are some characteristic features. Firstly,
the peaks are broad, indicating that the spin correlations along the $c$ axis are
short-ranged. Secondly, the magnetic intensity at $(1,0,even)$ initially increases
with increasing $L$, in contrast to the behavior found for the magnetic
Bragg intensities in pure La$_2$CuO$_4$. Lastly, the magnetic intensities at
$(1,0,L)$ are much larger than those at $(0,1,L)$. From these results, one can deduce
that spin clusters are formed in La$_{1.98}$Sr$_{0.02}$CuO$_4$.
However, the spin clusters have a different geometrical structure from that in pure
La$_2$CuO$_4$.
The magnetic Bragg intensity is proportional to
\begin{eqnarray}
\left[\mbox{\boldmath $\rm{S}$}_{\perp}f(Q)\sum_{j}
{\rm exp}(i\mbox{\boldmath $\rm{Q}$}\cdot\mbox{\boldmath $\rm{R}$}_{j})\right]^2
\label{cs}
\end{eqnarray}
where $\mbox{\boldmath $\rm{S}$}_{\perp}$, $f(Q)$, and
$\mbox{\boldmath $\rm{R}$}_{j}$
are the copper spin component perpendicular to $\mbox{\boldmath $\rm{Q}$}$,
the magnetic form factor, and the copper ion positions, respectively. In pure
La$_2$CuO$_4$, the magnetic intensities at $(1,0,even)$ are just proportional to
$f(Q)^2$, which is approximately constant for the range of $L$'s
considered here \cite{freltoft}, since the copper spins point along the $b$
axis perpendicular to the $(H0L)$ scattering plane and
$\sum{\rm exp}(i\mbox{\boldmath $\rm{Q}$}\cdot\mbox{\boldmath $\rm{R}$}_{j})$
at $(1,0,even)$ is calculated to be constant from the 3D spin structure
with the antiferromagnetic propagation vector
$\mbox{\boldmath $\hat{\tau}$}\parallel$
$\mbox{\boldmath $\hat{a}$}\rm_{ortho}$.
The simplest model to explain the increase of the intensity with increasing $L$ along
both (1,0,$L$) and (0,1,$L$) is that the cluster antiferromagnetic spin is randomly
directed within the $ab$ plane. In this case, the intensity would vary like
\begin{eqnarray}
\left[\mbox{\boldmath $\rm{S}$}_{\perp}f(Q)\right]^2=
\frac{1}{2}\left[1+{\rm sin}^2\theta(L)\right]S^2f(Q)^2
\label{int}
\end{eqnarray}
where $\theta(L)$ is the angle that the $Q$-vector of the (1,0,$L$) or (0,1,$L$) reflection
makes with the $ab$ plane.
We will justify this model after first discussing the spatial geometry of the frozen
clusters. We should note that a result equivalent to Eq. (2) is obtained by fixing
the spin direction along $(H,H,0)$ or by assuming equal admixtures of 3D correlated
phases where the spin vector $\mbox{\boldmath $\rm{S}$}$ is along or perpendicular
to $\mbox{\boldmath $\hat{\tau}$}$($\parallel$
$\mbox{\boldmath $\hat{a}$}\rm_{ortho}$).
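As a purely illustrative numerical check of Eq. (2) (our own sketch, not part of the original analysis), the geometric factor $\frac12\left[1+\sin^2\theta(L)\right]$ can be evaluated for the $(1,0,L)$ reflections using the 1.6 K lattice constants quoted in Sec. 3; it increases monotonically with $L$, consistent with the initial growth of the magnetic intensity with increasing $L$ noted above.

```python
import math

# Lattice constants at 1.6 K from the text (orthorhombic notation), in angstrom
a_ortho = 5.333
b_ortho = 5.414
c = 13.098

def geometric_factor(H, K, L, a=a_ortho, b=b_ortho, c=c):
    """Return (1/2)*(1 + sin^2 theta) of Eq. (2), where theta is the angle
    between Q = 2*pi*(H/a, K/b, L/c) and the ab plane."""
    q_ab = math.hypot(H / a, K / b)   # in-plane component of Q / (2*pi)
    q_c = abs(L) / c                  # out-of-plane component of Q / (2*pi)
    theta = math.atan2(q_c, q_ab)
    return 0.5 * (1.0 + math.sin(theta) ** 2)

# The intensity factor grows with L along (1,0,L), as in Fig. 4(a)
factors = [geometric_factor(1, 0, L) for L in (0, 2, 4, 6)]
print(factors)
```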
\subsection{Anisotropic Spin Correlations}
Figures 5(a) and 5(b) show elastic neutron scans at $(H,0,2.2)$ and $(H,0,3.2)$ at
1.6 K, respectively. The background estimated at $T$=60 K was subtracted so the remaining
scattering is entirely magnetic. A
sharp and intense peak is observed at (1,0,2.2), whereas a broad peak is observed
at (0,1,3.2) and a sharp peak with reduced integrated intensity is observed at (1,0,3.2).
These results strongly
indicate that the spin correlations are anisotropic in the $ab$ plane, that is, the
spin correlations are longer along the $a\rm_{ortho}$ axis than along the $b\rm_{ortho}$
axis. If the magnetic correlations were isotropic in the $ab$ plane, it would be expected
that two sharp peaks would be observed at $H$=0.985 ($K$=1) and 1.
The fact that the measured intensities at $(1,0,even)$ are larger than those at
$(0,1,odd)$ is a resolution effect arising from both the broadening along $b\rm_{ortho}$
and the fact that the
coarse vertical resolution effectively integrates the peaks at $(1,0,even)$ which are
elongated perpendicular to the scattering plane as shown in the inset of Fig. 4(b).
The solid lines in Figs. 4(a), 4(b), 5(a), and 5(b) are the calculated profiles using
as the intrinsic line shape 3D squared Lorentzians convoluted with the
instrumental resolution function:
\begin{eqnarray}
\lefteqn{L(H,K,L,E)=\sum_{even,\ odd}}\nonumber\\
& &\left[\left(\frac{1}{\xi{'}_a^2(H-1)^2+\xi{'}_b^2K^2+\xi{'}_c^2(L-even)^2
+1}\right)^2
+\left(\frac{1}{\xi{'}_a^2H^2+\xi{'}_b^2(K-1)^2+\xi{'}_c^2(L-odd)^2
+1}\right)^2\right]\nonumber\\
& &\times\frac{1}{E^2+\Gamma^2}
\label{loren}
\end{eqnarray}
where $\xi'_a$, $\xi'_b$, $\xi'_c$, and $\Gamma$ represent the elastic spin
correlation lengths or cluster sizes along the $a\rm_{ortho}$ axis, $b\rm_{ortho}$ axis,
and $c$ axis and the energy width, respectively.
In order to describe spin correlations which have finite lengths
three-dimensionally, one might instead have used 3D Lorentzians:
\begin{eqnarray}
\lefteqn{L(H,K,L,E)=\sum_{even,\ odd}}\nonumber\\
& &\left(\frac{1}{\xi{'}_a^2(H-1)^2+\xi{'}_b^2K^2+\xi{'}_c^2(L-even)^2+1}
+\frac{1}{\xi{'}_a^2H^2+\xi{'}_b^2(K-1)^2+\xi{'}_c^2(L-odd)^2+1}\right)\nonumber\\
& &\times\frac{1}{E^2+\Gamma^2}.
\label{4dloren}
\end{eqnarray}
However, this function has long tails in $Q$ space and does not decay as rapidly as
the observed data do. Equation (3) describes the observed data well so that
$\xi'_a$, $\xi'_b$, and $\xi'_c$ can be estimated quite reliably. We note that the
Lorentzian squared form is the expected profile for frozen 3D random clusters with sharp
boundaries.
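The difference in the tails can be made concrete with a one-dimensional sketch (ours, not from the analysis above): for the same correlation length, the squared Lorentzian of Eq. (3) is suppressed relative to the simple Lorentzian of Eq. (4) by an extra factor $1/(\xi^2q^2+1)$, and so decays far more rapidly away from the peak.

```python
# One-dimensional cuts through the two candidate line shapes:
#   lorentzian(q)    ~ Eq. (4): 1 / (xi^2 q^2 + 1)      (long tails)
#   lorentzian_sq(q) ~ Eq. (3): 1 / (xi^2 q^2 + 1)^2    (short tails)
xi = 160.0  # angstrom, the fitted xi'_a quoted below

def lorentzian(q, xi=xi):
    return 1.0 / (xi**2 * q**2 + 1.0)

def lorentzian_sq(q, xi=xi):
    return lorentzian(q, xi) ** 2

# Far from the peak the squared form is suppressed by 1/(xi^2 q^2 + 1):
q = 0.1  # inverse angstrom, i.e. xi * q = 16
ratio = lorentzian_sq(q) / lorentzian(q)
print(ratio)  # 1/257, about 4e-3
```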
In the calculation the spin structure models described in Sec. 4-B,
all of which give the same $L$-dependence of the intensity, are
assumed. The parameters used are $\xi'_a$=160 \AA, $\xi'_b$=25 \AA, and
$\xi'_c$=4.7 \AA. $\Gamma$ is fixed at 0.01 meV which is determined
experimentally.
The calculation describes the observed data at $(1,0,L)$,
$(0,1,L)$, $(H,0,2.2)$, and $(H,0,3.2)$ reasonably well.
The small peaks at (1,0,$odd$) and (0,1,$even$) in Figs. 4(a) and 4(b) originate
from the tails of the broad (0,1,$odd$) and (1,0,$even$) peaks, respectively.
Surprisingly, it is found that $\xi'_a$ is about 6 times longer than $\xi'_b$.
The average distance between doped holes in the CuO$_2$ plane is $\sim$30 \AA\
in La$_{1.98}$Sr$_{0.02}$CuO$_4$, which is similar to $\xi'_b$ but much smaller
than $\xi'_a$.
$\xi'_c$=4.7 \AA\ indicates that the cluster size perpendicular to the CuO$_2$
plane is similar to the distance between nearest-neighbor CuO$_2$ planes.
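The $\sim$30 \AA\ figure can be checked by elementary arithmetic (our estimate; the in-plane Cu--Cu spacing is taken here as $a\rm_{ortho}/\sqrt2\simeq3.8$ \AA, which is an assumption, not a number quoted in the text): with one hole per $1/x$ Cu sites, the mean hole separation in a CuO$_2$ plane is roughly $a_{\rm Cu}/\sqrt{x}$.

```python
import math

x = 0.02                        # Sr (hole) concentration per Cu
a_ortho = 5.333                 # angstrom, from the text
a_cu = a_ortho / math.sqrt(2)   # assumed in-plane Cu-Cu distance, ~3.8 angstrom

# One hole per 1/x plaquettes -> mean hole separation ~ a_cu / sqrt(x)
d_hole = a_cu / math.sqrt(x)
print(round(d_hole, 1))  # ~27 angstrom, i.e. of order 30 angstrom
```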
A schematic figure of the spin configuration in the $ab$ plane at 1.6 K is
shown in Fig. 6. As mentioned in Sec. 4-B the spin easy axis cannot be determined
uniquely from this study, but the data are consistent with a model in which the cluster
antiferromagnetic spin direction is random in the $ab$ plane.
\section{Magnetic Excitations}
Figure 7 shows neutron inelastic scans at the energies 2, 4, and 10 meV measured
at 35 K. The spectra at 2 and 4 meV show a peak at $H$=1 and a broad tail at the
left side ($H<$1). On the other hand, the spectrum at 10 meV shows a broad and symmetric
peak centered at $H\sim 0.99$. From the results shown in Sec. 4-C,
the peak profiles at 2 and 4 meV appear to reflect the static anisotropic spin
correlations. Therefore, the spectra can be described with one sharp peak at $H$=1
and one broad peak at $H$=0.985 ($K$=1). The symmetric peak profile at 10 meV, however,
indicates that the dynamical spin correlations become isotropic at high energies.
Figure 8 shows the temperature dependence of the inelastic neutron spectra at
1.5 meV. The spectrum at 35 K is consistent with that observed at 2 meV and 35 K.
The spectrum becomes more symmetric with increasing temperature. These results
indicate that the dynamic spin correlations in the $ab$ plane are anisotropic
at both low energies and low temperatures and become more isotropic at both high
energies and high temperatures.
\section{Discussion}
The spin correlations in the spin-glass phase of
La$_{1.98}$Sr$_{0.02}$CuO$_4$
have been clarified in this study. The
static spin correlations are not simply finite-size versions of the N\'{e}el state
spin structure in pure La$_2$CuO$_4$. The characteristic features are as follows:
(i) The spin
cluster dimensions in the $ab$ plane are highly anisotropic at low temperatures.
(ii) The anisotropic behavior has both temperature and energy dependences.
(iii) The cluster antiferromagnetic spin direction appears to be randomly oriented
within the $ab$ plane.
The anisotropic spin correlations in the $ab$ plane are suggestive of the stripe phase
found in La$_{2-y-x}$Nd$_y$Sr$_x$CuO$_4$. \cite{tranquada} However, the
spin correlations in La$_{1.98}$Sr$_{0.02}$CuO$_4$ are different from those in
La$_{2-y-x}$Nd$_y$Sr$_x$CuO$_4$. In La$_{2-y-x}$Nd$_y$Sr$_x$CuO$_4$
the hole/spin stripes run along the $b\rm_{tetra}$ ($a\rm_{tetra}$) axis
so that the magnetic domains should be elongated along the $a\rm_{tetra}$ ($b\rm_{tetra}$)
axis. These axes are rotated by 45$^\circ$ in the $ab$ plane from the $a\rm_{ortho}$ axis
along which spin correlations are longer in La$_{1.98}$Sr$_{0.02}$CuO$_4$.
Interestingly, the spin correlations in La$_{1.98}$Sr$_{0.02}$CuO$_4$ have the same
geometry as those predicted theoretically for the low concentration hole-doped system.
A Hubbard model calculation on a two-dimensional square lattice has been
performed by Schulz \cite{schulz} and by Kato $et$ $al.$ \cite{kato} They find
that a diagonally modulated spin density wave state (diagonal stripe state) is stable
when the electron density is close to half-filling. In the diagonal state, the
hole/spin stripes run along the $a\rm_{ortho}$ or $b\rm_{ortho}$ axis, which is rotated by
45$^\circ$ in the
$ab$ plane from the $a\rm_{tetra}$ ($b\rm_{tetra}$) axis along which the stripes run in
La$_{2-y-x}$Nd$_y$Sr$_x$CuO$_4$. Since the spin ordering observed in
La$_{1.98}$Sr$_{0.02}$CuO$_4$ is short-ranged, the long-range diagonal stripe
structure is not realized in this compound. However,
short-range spin ordering with anisotropic spin correlations elongated along the
$a\rm_{ortho}$ axis can be considered as a precursor phenomenon for the diagonal stripe
ordering. It also seems apparent that the anisotropic spin correlations in
La$_{1.98}$Sr$_{0.02}$CuO$_4$ are directly related to the incommensurate spin
correlations observed in La$_{1.95}$Sr$_{0.05}$CuO$_4$, in which the
incommensurate peaks are rotated by 45$^\circ$ in reciprocal space about
($\pi$,$\pi$) from those observed in Sr-rich superconducting
compounds with $x\ge$0.06. \cite{wakimoto} The
diagonal stripe state is evidently more stable in this compound.
The connection with the results in La$_{1.95}$Sr$_{0.05}$CuO$_4$ can be made quantitative.
In the diagonal stripe phase, Wakimoto $et$ $al.$ observe peaks displaced along
$b\rm_{ortho}$ by a distance $\pm\delta$ where $\delta\simeq 2\pi x/b\rm_{tetra}$ with
$x$ the Sr$^{2+}$ concentration. Very recently, Matsuda $et$ $al.$ have
found that $\delta\simeq 2\pi x/b\rm_{ortho}$ in the highly insulating
spin-glass sample La$_{1.976}$Sr$_{0.024}$CuO$_4$. \cite{matsudanew}
Therefore, for $x$=0.02, $\delta\simeq0.023$ \AA$^{-1}$. We tried to
fit the observed data in Figs. 4 and 5 with the diagonal stripe
model with $\delta$ fixed at 0.023 \AA$^{-1}$. The data can be reasonably
fitted with $\xi'_a=160$ \AA, $\xi'_b=50$ \AA, and $\xi'_c=4.7$ \AA.
It is noted that $\xi'_b$ is still short and the spin correlations are slightly anisotropic
in the CuO$_2$ plane.
Thus the measured line shape in La$_{1.98}$Sr$_{0.02}$CuO$_4$ is consistent with the
diffraction profile expected from an array of disordered stripes locally oriented along
$a\rm_{ortho}$.
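As a quick arithmetic check (ours): with $x$=0.02 and $b\rm_{ortho}$=5.414 \AA\ from Sec. 3, $\delta\simeq 2\pi x/b\rm_{ortho}$ indeed evaluates to about 0.023 \AA$^{-1}$, the value fixed in the fit above.

```python
import math

x = 0.02         # Sr concentration
b_ortho = 5.414  # angstrom, lattice constant from the text

delta = 2.0 * math.pi * x / b_ortho  # incommensurability, inverse angstrom
print(round(delta, 4))  # 0.0232
```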
As mentioned in Sec. 4-B the $L$-dependence of the magnetic intensities is well
described by a model in which the AF spin of a given cluster is randomly oriented
in the $ab$ plane. The equivalent result is obtained for the cluster staggered spin
along $a\rm_{tetra}$ or $b\rm_{tetra}$ or randomly along both $a\rm_{ortho}$ and
$b\rm_{ortho}$.
In the N\'{e}el state of pure La$_2$CuO$_4$ the spin is along $b\rm_{ortho}$ while just
above $T_N$=325 K, that is, at 328 K when the correlation length is about 800 \AA\
(Ref. \onlinecite{birgeneau})
the spin is randomly oriented in the $ab$ plane. Because of the latter result it seems
physically plausible that in the frozen spin clusters below 40 K in
La$_{1.98}$Sr$_{0.02}$CuO$_4$ of dimensions 160 \AA\ $\times$ 25 \AA\ the spin
direction would also be random. This is consistent with the fact that the net Ising
anisotropy favoring the $b\rm_{ortho}$ axis from the Dzyaloshinsky-Moriya interaction
is only about 0.1 K in energy. We should note that in all cases we assume that the
propagation vector of the AF order is along $a\rm_{ortho}$ in order to account for the
pronounced peaks at $(1,0,L)$ for $L$ $even$ alone.
One interesting and important question is why anisotropic but short-range
correlations are achieved at low temperatures in our sample of
La$_{1.98}$Sr$_{0.02}$CuO$_4$ whereas long-range
diagonal incommensurate order occurs in La$_{1.95}$Sr$_{0.05}$CuO$_4$. Here, we
speculate that the primary difference is not the hole concentration but rather the
method of crystal growth. The $x$=0.05 crystal studied by Wakimoto $et$ $al.$ was
grown with the traveling-solvent-floating-zone technique which is crucible-free
whereas the $x$=0.02 crystal studied here was grown in a platinum crucible. It is
known that in the latter case some platinum is incorporated into the crystal, that
is, Pt$^{2+}$ replaces Cu$^{2+}$. In that case, the Pt$^{2+}$ impurities would exert
an effective random field on the incipient hole stripes, thereby destroying the
long-range order and causing the system to break up into finite size clusters
\cite{birgeneau2} as we indeed observe experimentally.
We need further studies using a Pt-free crystal with a low hole
concentration in order to check whether the short-range correlations
originate from Pt impurities or not.
We now discuss the results of the inelastic measurements. In the previous study on
La$_{1.98}$Sr$_{0.02}$CuO$_4$, \cite{matsuda} the scaling behavior with $E/T$
of the integrated dynamical spin susceptibility was observed in the energy range
3$\le E \le$9 meV. A clear deviation from the scaling function was observed at low
temperatures in the energy range $E\le$2 meV. Similar behavior was also observed
in the crucible-grown La$_{1.96}$Sr$_{0.04}$CuO$_4$ crystal studied by Keimer $et$ $al.$
\cite{keimer} In Ref. \onlinecite{matsuda} it was argued that the
susceptibility is suppressed at low energies due to the out-of-plane anisotropy.
From our new results, it also appears to be possible that the observed susceptibility is
suppressed at low energies and low temperatures because the broad peak at
$(0,1,L)$, which originates from the short correlation length along the $b\rm_{ortho}$ axis,
was not properly integrated in the experiments of Ref. \onlinecite{matsuda}. Further
studies are needed in order to determine the
important features determining the dynamical spin susceptibility at low energies and
low temperatures.
\section*{Acknowledgments}
We would like to thank Y. Hidaka for providing us with the single crystal of
La$_{1.98}$Sr$_{0.02}$CuO$_4$. We would also like to thank V. J. Emery, K. Hirota,
K. Machida, and J. Tranquada for stimulating discussions. This
study was supported in part by the U.S.-Japan Cooperative
Program on Neutron Scattering operated by the United States
Department of Energy and the Japanese Ministry of Education,
Science, Sports and Culture and by a Grant-in-Aid for Scientific
Research from the Japanese Ministry of Education,
Science, Sports and Culture. Work at Brookhaven National
Laboratory was carried out under Contract No.
DE-AC02-98CH10886, Division of Material Science, U.S.
Department of Energy. The research at MIT was supported by the National Science
Foundation under Grant No. DMR97-04532 and by the MRSEC Program of the National
Science Foundation under Award No. DMR98-08941.
Let $(M, g)$ be a smooth compact Riemannian manifold without boundary.
Let $a$ be a $C^\infty$ {\em real valued} function on $M$.
We study the ``damped\footnote{The term ``damped'' applies to the case when $a\geq 0$, that is to say, when the energy is decreasing. In this paper the sign of $a$ will be of no importance.} wave equation'',
\begin{equation}
\label{e:DWE}
\left(\partial^2_t-\triangle+2a(x)\partial_t\right)v=0
\end{equation}
for $t\in \IR$ and $x\in M$. We shall be interested in the stationary solutions, that is, solutions of the form
$v(t, x)=e^{it\tau}u(x)$ for some $\tau\in \IC$. This means that $u$ must satisfy
\begin{equation}
\label{e:SDWE}(-\lap-\tau^2+2ia\tau)u=0.
\end{equation}
Equivalently, $\tau$ is an eigenvalue of the operator
$$\left( \begin{array}{cc}
0 & I \\
-\lap & 2ia \\
\end{array} \right)$$
acting on $H^1(M)\times L^2(M)$. For $a=0$ this operator is the wave operator, an anti-selfadjoint operator;
but for $a\not=0$ the operator is not normal anymore. It is known that its spectrum is discrete
and consists of a sequence $(\tau_n)$ with $\Im m(\tau_n)$ bounded and $|\Re e(\tau_n)|\To +\infty$ (see Section \ref{s:spec}). One sees easily that $\Im m(\tau_n)\in [2\min(\inf a, 0), 2\max(\sup a, 0)]$ if $\Re e(\tau_n)=0$, and $\Im m(\tau_n)\in [\inf a, \sup a]$ if $\Re e(\tau_n)\not=0$ \cite{Sj00}. One can also note, obviously, that $-\bar \tau_n$ is an eigenvalue if $\tau_n$ is~: the spectrum is symmetric with respect to the imaginary axis.
\subsection{Background}
Motivated by questions from control theory, Lebeau \cite{Leb93} was interested in the so-called ``stabilization problem''~: define
$$\rho=\sup\left\{\beta, \,\exists\, C,\, E(u_t)\leq Ce^{-\beta t}E(u_0)\mbox{ for every solution } u
\mbox{ of \eqref{e:DWE}}\right\}$$
where the energy functional is $E(u)=\int_M |\nabla u|^2$. The stabilization problem consists in finding
some damping function $a$ (necessarily nonnegative) such that $\rho>0$. Lebeau identified
$$\rho=2\min\left\{D(0), C(\infty)\right\}$$
where $D(0)$ is a sort of ``spectral gap''~:
$$D(0)=\inf \left\{\Im m(\tau_n), \tau_n\not= 0\right\}$$
and $C(\infty)$ is defined
in terms of the Birkhoff averages of $a$ along the geodesic flow
$$G^t: T^* M\To T^* M,$$
$$C(\infty)=\lim_{t\To +\infty}\inf_{\rho\in T^* M}\frac 1t\int_0^t a(G^s\rho)ds.$$
Lebeau also showed, on an example, that it is possible to have a spectral gap
($D(0)>0$) but no exponential damping ($\rho=0$)~: this surprising phenomenon is typical of non normal operators.
Markus and Matsaev \cite{MM82} proved an analogue of Weyl's law, also found independently later
on
by Sj\"ostrand \cite{Sj00}~:
\begin{equation}\label{e:Weyl}\sharp\left\{ n,0\leq \Re e (\tau_n)\leq \lambda
\right\}=
\left(\frac{\lambda}{2\pi}\right)^d\left(\int_{p^{-1}]0, 1[}dxd\xi +\cO(\lambda^{-1}) \right)
\end{equation}
where $d$ is the dimension of $M$, $p$ is the principal symbol of $-\lap$, namely the function
$p(x, \xi)=g_x(\xi, \xi)$ defined on the cotangent bundle $T^* M$, and $dxd\xi$ is the Liouville measure
on $T^* M$ (coming from its canonical symplectic structure).
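As a sanity check of the leading term of \eqref{e:Weyl} in the simplest possible setting (our toy example, not taken from \cite{MM82} or \cite{Sj00}): on the circle $S^1$ with $a=0$, the stationary solutions are $e^{inx}e^{i\tau t}$ with $\tau^2=n^2$, so the eigenvalue $\tau=n>0$ has multiplicity $2$, and the phase-space volume $\int_{p^{-1}]0,1[}dxd\xi=4\pi$ predicts a count of $2\lambda$.

```python
# Toy Weyl count for the undamped wave operator on S^1.
# p(x, xi) = xi^2 on T*S^1, so vol(p^{-1}]0,1[) = 2*pi * 2 = 4*pi and the
# predicted count of {0 <= Re(tau) <= lam} is (lam / 2*pi) * 4*pi = 2*lam.
import math

lam = 1000
count = 1 + 2 * lam  # tau = 0 once, tau = n in (0, lam] with multiplicity 2
predicted = (lam / (2.0 * math.pi)) * (4.0 * math.pi)
print(count / predicted)  # close to 1
```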
It is a natural question to ask about the distribution of the imaginary parts $\Im m(\tau_n)$ in the interval
$[\inf a, \sup a]$. For non normal operators, obtaining fine information on the distribution of the spectrum
is much harder than for normal operators -- one of the reasons being the absence of continuous (or even smooth) functional calculus. Another related difficulty is that there is no general relation between the norm of the resolvent and the distance to the spectrum. Some techniques have been developed to obtain upper bounds on the density of eigenvalues, but lower bounds are notoriously more difficult.
Assuming that the geodesic flow is ergodic with respect to the Liouville measure on an energy layer, Sj\"ostrand proved that most of the $\tau_n$ have asymptotically an imaginary part
$\Im m(\tau_n)\sim\bar a$, where $\bar a$ denotes the average of $a$ on the energy layer $S^* M=\left\{(x, \xi)\in T^* M,g_x(\xi, \xi)=1\right\} $
with respect to the Liouville measure~:
\begin{thm}\cite{Sj00} Assume that the geodesic flow is ergodic with respect to the Liouville measure on $S^*M$. For every $C>0$, for every $\eps >0$, we have
$$\sharp\left\{ n,\lambda \leq \Re e (\tau_n)\leq \lambda+C, \Im m(\tau_n)\not\in[\bar a-\eps, \bar a+\eps]\right\}=
o(\lambda^{d-1})$$
as $\lambda$ goes to infinity.
\end{thm}
\begin{rem} If $C$ is not too small, one sees from \eqref{e:Weyl} that there exists $\tilde C>0$
such that
$$\sharp\left\{ n,\lambda \leq \Re e (\tau_n)\leq \lambda+C\right\}\geq
\tilde C \lambda^{d-1}$$
for large $\lambda$.
Thus, the theorem does say that a majority of the $\tau_n$ have $\Im m(\tau_n)\in
[\bar a-\eps, \bar a+\eps]$.
\end{rem}
\begin{rem} More generally, without the ergodicity assumption, \cite{Sj00} proves the following results.
Introduce the functions on $T^* M$, $$\langle a\rangle_T=\frac{1}{T}\int_{-T/2}^{T/2}a\circ G^s ds,$$
$$\langle a\rangle_\infty=\lim_{T\To+\infty}\langle a\rangle_T,$$ and the real numbers
$$a_+=\lim_{T\To+\infty}\sup_{S^*M}\langle a\rangle_T,$$
$$a_-=\lim_{T\To+\infty}\inf_{S^*M}\langle a\rangle_T.$$
The function $\langle a\rangle_\infty$ is well defined Liouville--almost--everywhere, by the Birkhoff theorem.
Lebeau \cite{Leb93} proves that for any $\eps>0$, there are at most finitely many $n$ with $\Im m(\tau_n)\not\in [a_--\eps, a_++\eps]$, and Sj\"ostrand \cite{Sj00} proves that
$$\sharp\left\{ n,\lambda \leq \Re e (\tau_n)\leq \lambda+C, \Im m(\tau_n)\not\in[ess\inf \langle a\rangle_\infty -\eps,ess\sup \langle a\rangle_\infty+\eps]\right\}=
o(\lambda^{d-1}).$$
\end{rem}
\subsection{Semiclassical formulation\label{p:semiclass}}
As in \cite{Sj00} we reformulate the problem using semiclassical notations.
In doing so, we also generalize a little the problem. We
introduce a semiclassical parameter $0<\hbar\ll 1$ and will be interested in the eigenvalues $\tau$
such that $\hbar\tau\mathop{\longrightarrow}\limits_{\hbar\To 0}1$.
If we put $\tau=\frac{\lambda}\hbar$ with
$\lambda$ close to $1$, then equation \eqref{e:SDWE} can be rewritten as
\begin{equation}
\label{e:SDWEh}\left(-\frac{\hbar^2\lap}2-\frac{\lambda^2}2+i\hbar\lambda a\right)u=0,
\end{equation}
or \begin{equation}\label{e:spec}(\cP-z)u=0
\end{equation}
if we put $z=\frac{\lambda^2}2$,
and \begin{equation}\label{e:genop}\cP=\cP(z)= P+i\hbar Q(z),\qquad P=-\frac{\hbar^2\lap}2, \qquad Q(z)=a\sqrt z.\end{equation}The parameter $z$ is close to $E=\frac12$.
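The passage from \eqref{e:SDWE} to \eqref{e:SDWEh} is elementary algebra; the following numerical spot check (ours, with arbitrary stand-in values for $\hbar$, $\lambda$, $a$, $u$ and $\lap u$ at a point) confirms that substituting $\tau=\lambda/\hbar$ and multiplying by $\hbar^2/2$ reproduces the semiclassical form term by term.

```python
# Spot check that (hbar^2/2) * (-Lap - tau^2 + 2*i*a*tau) u equals
# (-hbar^2 Lap / 2 - lambda^2 / 2 + i hbar lambda a) u when tau = lambda / hbar.
hbar, lam, a = 0.01, 1.03, 0.37          # arbitrary stand-in values
Du, u = 2.1 - 0.4j, 0.8 + 0.3j           # stand-ins for (Lap u) and u at a point

tau = lam / hbar
eq_stationary = -Du - tau**2 * u + 2j * a * tau * u
eq_semiclassical = -hbar**2 * Du / 2 - lam**2 * u / 2 + 1j * hbar * lam * a * u

print(abs(eq_stationary * hbar**2 / 2 - eq_semiclassical))  # 0 up to rounding
```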
More generally, we will consider a spectral problem of the form
\eqref{e:spec}
where $$\cP=\cP(z)= P+i\hbar Q(z),\qquad P=-\frac{\hbar^2\lap}2,$$
$z\in \Omega=e^{i]-s_0, s_0[}]E_{\min}, E_{\max}[$, with $0<E_{\min}<\frac12<E_{\max}<+\infty$, $0<s_0<\frac\pi{4}$. We will assume that
$ Q(z)\in \Psi DO^1 $ is a pseudodifferential operator that depends holomorphically on $z\in \Omega$, and that $ Q$ is formally self-adjoint for $z$ real (the definition of our operator classes is given in Section \ref{s:symbols}).
We denote $$\Sigma=\Sigma_\hbar=\left\{z\in\Omega, \,\exists u, (\cP(z)-z)u=0\right\}.$$
The operator $ P$ has principal symbol $p_o(x, \xi)=\frac{g_x(\xi, \xi)}2$, and $ Q(z)$ has principal symbol $q_z(x, \xi)$, taking real values for $z$ real. In these notations, the previous results read as follows~: for any $E_{\min}<E_1\leq E_2<E_{\max}$,
\begin{equation}\label{e:Weyl2}\sharp\left\{ z\in\Sigma, E_1\leq \Re e(z)\leq E_2\right\}=\frac{1}{(2\pi\hbar)^d}\left[\int_{p_o^{-1}[E_1, E_2]}dxd\xi+\cO({\hbar})\right]
\end{equation}
(uniformly for all $c>0$ and for all $E_1, E_2$ such that $|E_2-E_1|\geq 2c\hbar$, $E_1$ and $E_2$
staying away from $E_{\min}, E_{\max}$).
One can show that $\frac{\Im m(z)}{\hbar}$ (for $z\in\Sigma$) has to stay bounded if $E_1, E_2$ stay in a bounded interval.
Fix some $c>0$, and take $E_1=E-c\hbar$ and $E_2=E+c\hbar$.
Let us denote
$$q^-_E=\lim_{T\To+\infty}\inf_{p_o^{-1}\left\{E\right\}}\langle q_E\rangle_T,$$
$$q^+_E=\lim_{T\To+\infty}\sup_{p_o^{-1}\left\{E\right\}}\langle q_E\rangle_T,$$
then we have \cite{Sj00}
\begin{equation}\label{e:infsup}q^-_E+o(1)\leq\frac{\Im m(z)}{\hbar}\leq q^+_E+o(1)
\end{equation}
for all $z\in\Sigma$ such that $E-c\hbar\leq \Re e(z)\leq E+c\hbar$.
Finally, denote $\bar q_E$ the average of $q_E$ on the energy layer
$p_o^{-1}\left\{E\right\}$. Assuming that the geodesic flow is ergodic on $p_o^{-1}\left\{E\right\}$, then for any $\eps>0$, any $c>0$,
\begin{equation}\label{e:LGN}\sharp\left\{ z\in\Sigma, E-c\hbar\leq \Re e(z)\leq
E +c\hbar, \frac{\Im m(z)}{\hbar}\not\in [\bar q_E-\eps, \bar q_E+\eps]\right\}=o(\hbar^{1-d}).
\end{equation}
The method of \cite{Sj00} makes it possible to treat the case of a more general $ P$ (and thus a more general
Hamiltonian flow than the geodesic flow), and also to deal with the case when the flow is not ergodic. However, in this paper we will stick to the case of an ergodic geodesic flow; we will add even stronger assumptions in the next paragraph.
\subsection{Questions, and a few results}
We try to give (partial) answers to the three natural questions~:
{\bf(Q1) (Fractal Weyl law)} Let $I$ be an interval that does not contain $\bar q_E$. Can we describe in a finer way the asymptotic behaviour for $$\sharp\left\{ z, E-c\hbar\leq \Re e(z)\leq E +c\hbar, \frac{\Im m(z)}\hbar\in I\right\} \;?$$
For instance, can we find
$$\lim_{\hbar\To 0} \frac{\log \sharp\left\{ z\in\Sigma, E-c\hbar\leq \Re e(z)\leq E +c\hbar, \frac{\Im m(z)}\hbar\in I\right\} }{\log\hbar} \;?$$
{\bf(Q2) (Inverse problem)} Suppose we know everything about $ P$
and about the dynamics of the geodesic flow, but that $ Q$ is unknown. To what extent does the knowledge of the eigenfrequencies $\left\{ z\in\Sigma, E-c\hbar\leq \Re e(z)\leq E +c\hbar\right\}$ determine
$q_E$ ?
Replacing $\cP$ by $ B^{-1}\cP B$, where $ B$
is an elliptic pseudodifferential operator with positive principal symbol $b$, amounts to replacing $q$
by $q-\left\{p_o, \log b\right\}$, where $\left\{., .\right\}$ stands for the Poisson bracket on $T^* M$.
Two smooth functions $f$ and $g$ on $T^*M$ are said to be cohomologous (with respect to the geodesic flow) if there exists a third smooth function $h$ such that $f=g+\left\{p_o, h\right\}$. This defines an equivalence relation; we write
$f\sim_{p_o} g$.
It is thus more natural to ask~:
{\bf(Q2')} Does the knowledge of the eigenfrequencies $\left\{ z\in\Sigma, E-c\hbar\leq \Re e(z)\leq E +c\hbar\right\}$ determine the {\em cohomology class} of
$q_E$ ?
If the length spectrum of $M$ is simple, then one can most probably use a trace formula and recover from the knowledge of $\Sigma$
all the integrals of $q_E$ along closed geodesics. And this is enough to determine the cohomology class of $q_E$ if $M$ has negative sectional curvature~: the Livshitz theorem \cite{Liv71, GK80} says that if two functions have the same integrals along all closed geodesics, then they are cohomologous. Thus,
the answer to (Q2) is probably {\em ``yes''} if $M$ has negative sectional curvature and
the length spectrum is simple (this last assumption is satisfied generically, but unfortunately not for surfaces of constant negative curvature).
In fact, it also makes sense to ask whether {\em some} knowledge of the imaginary parts alone $\left\{ \Im m(z), z\in\Sigma, E-c\hbar\leq \Re e(z)\leq E +c\hbar\right\}$ allows
to recover {\em some} information about $q_E$. For instance, one can ask the apparently simple question~:\\
{\bf(Q3) (Very weak inverse problem)} Let $C$ denote a constant function.
As follows from \eqref{e:infsup}, we have the implication
$$q_E\sim_{p_o} C \mbox{ on }p_o^{-1}\left\{E\right\}\Longrightarrow \frac{\Im m(z)}\hbar\mathop{\longrightarrow}\limits_{\hbar\To 0,\, z\in\Sigma,\, E-c\hbar\leq \Re e(z)\leq E+c\hbar} C.$$Does the converse hold ?
\vspace{.8cm}
{\bf Main assumption on the manifold $M$~:} From now on, we assume that $M$ has constant sectional curvature $-1$. This implies the ergodicity of the geodesic
flow on any energy layer (with respect to the Liouville measure), and in fact a very chaotic behaviour of trajectories (see Section \ref{s:chaos}). We will indicate how to generalize our results in the case of {\em surfaces} of variable negative curvature; however, it is not clear what to do in higher dimension and variable negative curvature.
\vspace{.8cm}
In the following theorem, $h_{KS}$ stands for the Kolmogorov--Sinai entropy, or metric entropy.
It is an affine functional, taking nonnegative values, defined on $\cM$, the set of $G$--invariant probability measures on $T^*M$~:
see for instance \cite{KH} for the definition of $h_{KS}$. We will also denote $\cM_E\subset\cM$ the set of invariant probability measures carried by $p_o^{-1}\left\{E\right\}$. Since $p_o$ is homogeneous it is enough to consider, for instance,
the case $E=\frac12$, and $p_o^{-1}\left\{E\right\}=S^*M$. For $\mu\in\cM_{\frac12}$, we have $h_{KS}(\mu)\leq d-1$, with equality if and only if $\mu$ is the Liouville measure. We now fix $E=\frac12$ and we denote $q=q_{\frac12}$,
$\bar q=\bar q_{\frac12}$, $q^+=q^+_{\frac12}$, $q^-=q^-_{\frac12}$.
We fix some $c>0$ and denote
$$\Sigma_{\frac12}=\left\{ z\in\Sigma, \frac12-c\hbar\leq \Re e(z)\leq \frac12 +c\hbar\right\}.$$
\begin{thm} \label{t:upperWeyl} Assume $M$ has constant sectional curvature $-1$. Define
$$H(\alpha)=\sup\left\{ h_{KS}(\mu), \mu\in \cM_{\frac12}, \int q\,d\mu=\alpha\right\}.$$
If $\alpha\geq \bar q$, then for any $c>0$
$$\limsup_{\hbar\To 0} \frac{\log\sharp\left\{ z \in \Sigma_{\frac12}, \frac{\Im m(z)}\hbar\geq \alpha\right\} }{\abs{\log\hbar}} \leq H(\alpha).$$
If $\alpha\leq \bar q$, then for any $c>0$
$$\limsup_{\hbar\To 0} \frac{\log\sharp\left\{ z \in \Sigma_{\frac12}, \frac{\Im m(z)}\hbar\leq \alpha\right\} }{\abs{\log\hbar}} \leq H(\alpha).$$
\end{thm}
\begin{rem} (see \cite{Lal89-II}, \S 4, or \cite{BabLed98}, \S 3 for the argument) The function $H$ is concave and is identically $ -\infty$ outside $[q^-, q^+]$.
It is continuous and strictly concave in $[q^-, q^+]$, and real analytic in $]q^-, q^+[$
(note that $q^-=q^+$ if and only if $q$ is cohomologous to a constant on $S^*M$).
The function $H$ reaches its maximum $d-1$ at the point $\bar q$, and has finite (nonnegative) limits at the endpoints $q^-, q^+$. In particular $H$ is positive in the open interval $]q^-, q^+[$.
\end{rem}
\begin{rem} The key fact in the proof of Theorem \ref{t:upperWeyl}
is the large deviation estimate from \cite{Kif90},
$$\lim_{T\To +\infty}\frac{\log \, L_{\frac12}\left\{\rho\in S^* M, \la q\ra_T(\rho)\in I\right\}}{ T}=\sup\left\{H(\alpha), \alpha\in I\right\}-(d-1)$$
for any interval $I\subset \IR$ -- where $L_{\frac12}$ is the Liouville measure on $p_o^{-1}\left\{\frac12\right\}=S^*M$. This result gives the volume of the set of trajectories deviating from the Birkhoff ergodic theorem.
See Section \ref{s:chaos}.
\end{rem}
On a surface of variable negative curvature, we can generalize the result to~:
\begin{thm} \label{t:vari} Assume $M$ is a surface of variable negative curvature. Denote $\varphi$ the infinitesimal
unstable Jacobian (see Section \ref{s:chaos}). Define
$$H(\alpha)=\sup\left\{ \frac{h_{KS}(\mu)}{\int\varphi\,\,d\mu}, \mu\in \cM_{\frac12}, \int q\,d\mu=\alpha\right\}.$$
Then the same conclusion as in Theorem \ref{t:upperWeyl} holds.
\end{thm}
\begin{rem} For a manifold of variable negative curvature and dimension $d$ we would expect
the same to hold with
$H(\alpha)=(d-1)\sup\left\{ \frac{h_{KS}(\mu)}{\int\varphi\,\,d\mu}, \mu\in \cM_{\frac12}, \int q\,d\mu=\alpha\right\}.$ However, our proof does not work in this case.
\end{rem}
One may wonder whether the $\limsup$ in Theorem \ref{t:upperWeyl} is also a $\liminf$. This question is reminiscent of the fractal Weyl law conjecture of Sj\"ostrand and Zworski, but in our situation we can say with certainty that the answer is negative: one cannot expect the lower bound to hold for a ``generic'' $q$. In a paper in preparation, Emmanuel Schenck\footnote{Private communication.} proves that there exists $\delta>0$
such that $\frac{\Im m(z)}\hbar\leq q^+-\delta$ for all $z\in\Sigma_{\frac12}$, provided a certain criterion
on $q$ is satisfied. The criterion reads ${\rm Pr}(q-\frac{d-1}2)<q^+$, where the pressure functional ${\rm Pr}$ is defined on the space of continuous functions on $S^*M$ and is the Legendre transform of $-h_{KS}$ (see Section \ref{s:chaos}). The functional ${\rm Pr}$ is Lipschitz with respect to the $C^0$ norm, and the condition ${\rm Pr}\left(q-\frac{d-1}2\right)<q^+$ defines a non-empty open
set in the $C^0$ topology.
For such a $q$ we cannot have
$$\liminf_{\hbar\To 0} \frac{\log\sharp\left\{ z\in \Sigma_{\frac12}, \frac{\Im m(z)}\hbar > q^+-\delta\right\} }{\abs{\log\hbar}} \geq H( q^+-\delta),$$
since $H$ is positive in $]q^-, q^+[$ but the $\liminf $ on the left-hand side is $-\infty$.
In Sections \ref{s:q3} and \ref{s:arithm}, we investigate Question 3 in some special cases.
We work on compact hyperbolic surfaces ($d=2$), and we study operators of the form
$$\lap_\omega f=\lap f-2\la \omega, df\ra+\norm{\omega}_x^2f,$$
called ``twisted laplacians'' -- here $\omega$ is a harmonic real-valued $1$-form on $M$.
Studying the large eigenvalues of $\lap_\omega$ amounts to studying a fixed spectral window for the semiclassical twisted laplacian
$$-\hbar^2\frac{\lap_\omega}2=-\hbar^2\frac{\lap}2+\hbar^2 \la \omega, d.\ra-\hbar^2\frac{\norm{\omega}_x^2}2, \qquad \hbar\To 0.$$
This question falls exactly into the setting of \S \ref{p:semiclass}, with $q(x, \xi)=\la \omega_x,\xi\ra$.
Since $q(x, -\xi)=-q(x, \xi)$, we have $\bar q=0$. We will denote ${\rm Pr}(\omega)={\rm Pr}(q)$
the pressure of the function $q$, defined in Section \ref{s:pressureetal}. It will also be interesting to note that $q^+=-q^-$ coincides with the stable norm $\norm{\omega}_s$ defined in Section \ref{p:proof3},
that $\norm{\omega}_s+1\geq {\rm Pr}(\omega)\geq \norm{\omega}_s$ and that
${\rm Pr}(t\omega)-|t|\norm{\omega}_s\mathop{\longrightarrow}\limits_{t\To \infty}0$ in the case of surfaces.
\begin{thm}\label{t:q3arithm} Assume $M$ is a compact arithmetic surface coming from a quaternion algebra. Take $\omega\not=0$. Fix $\beta\in(0, 1]$, and $0<\eps<\beta$. Then, for every $\hbar $ small enough, there exists
$z\in Sp (-\hbar^2\frac{\lap_\omega}2)$ with $|\Re e(z)-\frac12|\leq\hbar^{1-\beta}$, such that
$$ \frac{\Im m(z)}{\hbar} \geq {\rm Pr}(\omega)-\frac12 -\frac{1+\eps}{2\beta}.$$
Equivalently, given $\beta\in(0, 1]$, and $0<\eps<\beta$, for all $T$ large enough,
there exists $r_n$ such that $|\Re e(r_n)-T|\leq T^\beta$ and
$$|\Im m(r_n)|\geq {\rm Pr}(\omega)-\frac12 -\frac{1+\eps}{2\beta},$$
where $r_n$ is the ``spectral parameter'' defined by $\lambda_n=-(\frac14+r_n^2)$.
\end{thm}
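Let us sketch, keeping only leading orders as $\hbar\To 0$, why the two formulations are equivalent. The eigenvalue $\lambda_n$ of $\lap_\omega$ corresponds to the eigenvalue $z=-\hbar^2\frac{\lambda_n}2=\frac{\hbar^2}2\left(\frac14+r_n^2\right)$ of $-\hbar^2\frac{\lap_\omega}2$, so that
$$\Re e(z)=\frac{\hbar^2}2\left(\frac14+(\Re e\, r_n)^2-(\Im m\, r_n)^2\right),\qquad \frac{\Im m(z)}{\hbar}=\hbar\,\Re e(r_n)\,\Im m(r_n).$$
For $r_n$ with $\Re e(r_n)>0$, the condition $|\Re e(z)-\frac12|\leq\hbar^{1-\beta}$ amounts to $\hbar\,\Re e(r_n)=1+\cO(\hbar^{1-\beta})$, that is (up to constants) $|\Re e(r_n)-T|\leq T^\beta$ with $T=\hbar^{-1}$; and then $\frac{\Im m(z)}{\hbar}=\Im m(r_n)\left(1+o(1)\right)$.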
Of course, we immediately deduce the following corollary~:
\begin{coro}
$\sharp\{n, |\Re e(r_n)|\leq T, |\Im m(r_n)|\geq {\rm Pr}(\omega)-\frac12 -\frac{1+\eps}{2\beta}\}\geq T^{1-\beta},$
asymptotically as $T\To +\infty.$
\end{coro}
Since ${\rm Pr}(\omega)\To 1$ as $\omega\To 0$ (but ${\rm Pr}(\omega)\To +\infty$ as $\norm{\omega}_s\To +\infty$)
this result is only interesting if $\norm{\omega}_s$ is large enough (depending on $\beta$).
Note also that if $\norm{\omega}_s$ is large enough we have ${\rm Pr}(\omega)< \norm{\omega}_s+\frac12$,
so that the work of Emmanuel Schenck mentioned above will show that there is a strip $\{\Im m(z)\geq \norm{\omega}_s-\delta\}$ ($\delta>0$) that contains no $r_n$.
The arithmetic case is special in that the lengths of closed geodesics are well separated~: we can write a trace formula, find a lower bound on its geometric part (despite its oscillatory nature) and deduce a lower bound on the imaginary parts of eigenvalues. The ideas are borrowed from \cite{Hej}.
\begin{rem} Another way of defining the twisted laplacian is to write $M=\Gamma\backslash \IH$, where
$\IH$ is the universal cover of $M$ and $\Gamma$ is a discrete group of isometries of $\IH$; to fix an origin $o\in \IH$, and to write
$$\lap_\omega f(x) =e^{\int_o^x\omega}\, \lap\!\left( e^{-\int_o^x\omega}f\right)(x);$$
this operator preserves $\Gamma$-periodic functions, hence descends to $M$. When $\omega$ takes purely imaginary values, the operator is self-adjoint, and its study is analogous to that of ``Bloch waves''.
When $\omega$ takes real values, it is no longer self-adjoint.
\end{rem}
\begin{rem} Our motivation for working with twisted laplacians on hyperbolic manifolds was
that it is a case where the trace formula is exact. There was, {\em a priori}, hope of proving finer results than in cases where the trace formula is not exact (in the latter case the space of test functions to which the formula can be applied is more limited). {\em A posteriori}, we never use any test function that would not be allowed in the general case. One can therefore expect the same result to hold
for our general operator \eqref{e:genop} -- provided one proves a semiclassical trace formula first.
\end{rem}
If we do not assume arithmeticity, we can only treat the following operator~:
$$-\hbar^2\frac{\lap_{\theta(\hbar)\omega}}2=-\hbar^2\frac{\lap}2+\hbar^2\theta(\hbar) \la \omega, d.\ra-\hbar^2\theta(\hbar)^2\frac{\norm{\omega}_x^2}2, \qquad \hbar\To 0,$$
where $\theta(\hbar)\geq |\log\hbar|$ and $\hbar\theta(\hbar)\To 0$. In other words the non-selfadjoint
perturbation is stronger than in the previous cases.
\begin{thm}\label{t:q3} Assume $M$ is a compact hyperbolic surface. Take $\omega\not=0$.
Take any function $f(\hbar)\gg \theta(\hbar)^{3/2}\log|\log\hbar|^{1/2}.$ Then there is a sequence
$\hbar_n\To 0$ such that
$$ \sup\left\{\frac{\Im m(z)}{\hbar_n\theta(\hbar_n)}, z\in Sp\left(-\hbar_n^2\frac{\lap_{\theta(\hbar_n)\omega}}2\right) ,
|\Re e z-\frac12|\leq \hbar_n f(\hbar_n)\right\} \mathop{\longrightarrow}\limits_{n\To +\infty}\norm{\omega}_s,$$
the stable norm of $\omega$.
\end{thm}
See Section \ref{p:proof3} for the definition of the stable norm. Note that with our previous notations, $\norm{\omega}_s=q^+=-q^-$, and ${\rm Pr}(t\omega)-|t|\norm{\omega}_s\mathop{\longrightarrow}\limits_{t\To \infty}0 $ on a hyperbolic surface.
\vspace{.5cm}
{\bf Acknowledgements~:} This work was partially supported by the grant ANR-05-JCJC-0107-01.
I am grateful to UC Berkeley and the Miller Institute for their hospitality in the spring of 2009.
I thank Dima Jakobson for suggesting that some of the ideas contained in Hejhal's book \cite{Hej} could be used to find lower bounds on the density of eigenvalues in arithmetic situations. In fact, at the same time as this paper was written, Jakobson and Naud managed
to apply these ideas to study the resonances of certain arithmetic convex cocompact surfaces \cite{JN09}.
\section{Note on the definition of the spectrum and its multiplicity\label{s:spec}}
By the term ``spectrum'', we mean the set $\Sigma$ of $z\in\Omega$ such that $\cP(z)-z$ is not
bijective $H^2(M)\To H^0(M)$. If it is bijective, then the inverse must be continuous, by the closed graph theorem.
If $z$ is restricted to a compact subset of $\Omega$, it is easy to see that $G(z)=I+\lambda^{-1}(\cP(z)-z)$ is invertible for $\lambda>0$ large enough. The inverse $G(z)^{-1}$ is then a compact operator on $H^0(M)$. Besides, one sees that $\cP(z)-z$ is not
bijective $H^2(M)\To H^0(M)$ if and only if $1$ is in the spectrum of $G(z)^{-1}$, if and only if there
exists $u\in H^2(M)$ such that $(\cP(z)-z)u=0$. This shows, in particular, that the ``spectrum'' is discrete
and corresponds to the existence of ``eigenfunctions''.
To define the multiplicity of $z_0\in\Sigma$, we proceed the same way as in \cite{Sj00}. By the density
of finite rank operators in the space of compact operators \cite{RS}, one shows that in a neighbourhood of $z_0$ there exists
a finite rank operator $K(z)$, depending holomorphically on $z$, such that $\cP(z_0)-z_0+K(z_0)$
is invertible. The multiplicity of $z_0$ is then defined as the order of $z_0$ as a zero of the
holomorphic function
$$z\mapsto \det[(\cP(z)-z+K(z))^{-1}(\cP(z)-z)]=\det[I- (\cP(z)-z+K(z))^{-1}K(z)].$$
It is shown in \cite{Sj00} that this definition does not depend on the choice of $K(z)$, and coincides
with other usual definitions of the multiplicity. Besides, the argument can be extended to the case where $K(z)$ is trace class.
\section{A few facts on the geodesic flow on a negatively curved manifold \label{s:chaos}}
\subsection{Anosov property}
If $M$ has negative sectional curvature, then the geodesic flow on $S^*M$ has the Anosov property \cite{An67}.
This means there are $C, \lambda >0$ such that for each
$\rho\in S^*M$, the tangent space $T_\rho(S^*M)$ splits into $$
T_\rho(S^*M)=E^u(\rho)\oplus E^s(\rho) \oplus \IR\,X(\rho)\,
$$
where\\
-- the vector field $X$ generates the geodesic flow $G^t$;\\
-- $E^s$ is the stable subspace~: for all $v\in E^s(\rho)$, and for $t\geq 0$, $\norm{DG^t_\rho.v}\leq Ce^{-\lambda t}\norm{v}$;\\
-- $E^u$ is the unstable subspace~: for all $v\in E^u(\rho)$, and for $t\leq 0$, $\norm{DG^t_\rho.v}\leq Ce^{\lambda t}\norm{v}$.
If $M$ has constant negative curvature $-1$, any $\lambda<1$ will do. We take $\lambda=1-\eps$,
with $\eps$ arbitrarily small.
One also has an upper bound $\norm{DG^t_\rho.v}\leq Ce^{(1+\eps) |t|}\norm{v}$ for any $\eps>0$ and any $t\in \IR$.
\subsection{Pressure, entropy, and large deviation.\label{s:pressureetal}}
The pressure is defined on $C^0(S^* M)$, as the Legendre transform of the entropy~:
$${\rm Pr}(f)=\sup\left\{h_{KS}(\mu)+\int f\,d\mu, \mu\in \cM_{\frac12}\right\}.$$
If $f$ is H\"older, then the supremum is attained for a unique $\mu$, called the equilibrium measure
of $f$. The functional ${\rm Pr}$ is analytic on any Banach space of sufficiently regular functions --
for instance, a space of H\"older functions \cite{BR75, Ru}. Besides, the restriction of ${\rm Pr}$ to any line $\left\{f+tg, t\in\IR\right\}$ is strictly convex, unless $g$ is cohomologous to a constant \cite{Ra73}. If $g$ is sufficiently smooth, we recall that this means that $g=\left\{p_o, h\right\}+c$ for some smooth function $h$ and a constant $c$. If $g$ if H\"older, it is better to use the integral version of the notion. If $\gamma(t)$ is a periodic trajectory of the geodesic flow on $S^*M$ (equivalently, a closed geodesic),
we denote $l_\gamma$ its period (equivalently, the length of the closed geodesic). We denote $\,d\gamma$ the measure $\int g\,d\gamma=\int_0^{l_\gamma}g(\gamma(t))dt$ on $S^*M$, and $\,\,d\mu_\gamma$ the probability measure $\int g\,\,d\mu_\gamma=l_\gamma^{-1}\int g\,d\gamma$. One says that $g$ is cohomologous to the constant function $c$ if $\int g\,d\gamma=c\,l_\gamma$ for all periodic trajectories of the geodesic flow (the Livshitz theorem says that both notions are equivalent for smooth functions).
Let us now fix a smooth function $q$ on $S^*M$, not cohomologous to a constant. For $\alpha\in\IR$, define
$$H(\alpha)=\sup\left\{h_{KS}(\mu), \mu\in\cM_{\frac12}, \int q\,d\mu=\alpha\right\},$$
$$P(\beta)={\rm Pr}(\beta q)=\sup\left\{h_{KS}(\mu)+\beta\int q\,d\mu, \mu\in \cM_{\frac12}\right\}=\sup_\alpha \alpha\beta+H(\alpha) .$$
The function $H$ is concave, continuous on the interval $[q^-, q^+]$ defined earlier~:
$$q^-=\lim_{T\To+\infty}\inf_{p_o^{-1}\left\{\frac12\right\}}\langle q\rangle_T,$$
$$q^+=\lim_{T\To+\infty}\sup_{p_o^{-1}\left\{\frac12\right\}}\langle q\rangle_T.$$
In the case of a negatively curved manifold, this definition coincides with
$$q^-=\inf\left\{\int q\,d\mu, \mu\in \cM_{\frac12}\right\},$$
$$q^+=\sup\left\{\int q\,d\mu, \mu\in \cM_{\frac12}\right\}.$$
The function $H$ is real analytic and strictly concave in $]q^-, q^+[$. The function $P$ is real analytic and strictly convex on $\IR$.
Clearly, $P(\beta)\geq \beta q^+$ for $\beta\geq 0$, and it is not very difficult to show that the limit $\lim_{\beta\To +\infty }P(\beta)-\beta q^+$ exists and is nonnegative\footnote{This limit is equal to $H(q^+)$.}. Similarly,
$P(\beta)\geq \beta q^-$ for $\beta\leq 0$, and the limit $\lim_{\beta\To -\infty }P(\beta)-\beta q^-$ exists and is nonnegative.
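Let us recall the (standard) argument for the first limit, which uses only the relation $P(\beta)=\sup_\alpha\left\{\alpha\beta+H(\alpha)\right\}$ and the fact that $H$ is identically $-\infty$ outside $[q^-, q^+]$~:
$$P(\beta)-\beta q^+=\sup_{\alpha\in[q^-, q^+]}\left\{H(\alpha)-\beta(q^+-\alpha)\right\}.$$
The right-hand side is nonincreasing in $\beta\geq 0$, and bounded below by $H(q^+)$ (take $\alpha=q^+$). For any fixed $\alpha<q^+$ the bracket tends to $-\infty$ as $\beta\To+\infty$, so by continuity of $H$ at $q^+$ the limit equals $H(q^+)$. The case $\beta\To-\infty$ is symmetric.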
The pressure and the entropy appear naturally when studying large deviations for the Birkhoff averages of the function $q$. Denote $J_t(\rho)$ the Jacobian of $DG^t$ going from $E^u(\rho)$ to $E^u(G^t\rho)$. Define $\varphi$ (the infinitesimal unstable Jacobian) by
$$\varphi(\rho)={\frac{dJ_t}{dt}}_{|t=0}(\rho).$$
On a manifold of dimension $d$ and constant negative curvature $-1$, the function $\varphi$ is constant,
equal to $d-1$. In general one can only say that it is a H\"older function. The function
$\varphi$ is not necessarily positive, but it is cohomologous to a positive function, for instance
$\la \varphi\ra_T$ for $T$ large enough. In what follows, we will assume without loss of generality that
$\varphi>0$.
The two following large deviation results are due to Kifer \cite{Kif90}.
\begin{thm}
\cite{Kif90}, Prop 3.2.
Let $M$ be a compact manifold of negative sectional curvature. Let $q$ be a smooth function on $S^* M$. For $T>0$, define the function $\la q\ra_T=\frac{1}T\int_{-T/2}^{T/2}q\circ G^s ds$ on $S^*M$. Denote $L_{\frac12}$ the Liouville measure
on $S^*M$.
Then $$\lim_{T\To+\infty}\frac{\log \int_{S^* M} e^{T\la q\ra_T(\rho)}dL_{\frac12}(\rho)}{ T} =
{\rm Pr}(q-\varphi).$$
\end{thm}
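As a sanity check of the normalizations, take $q\equiv 0$~: the left-hand side vanishes, and indeed
$${\rm Pr}(-\varphi)=\sup\left\{h_{KS}(\mu)-\int\varphi\,d\mu,\ \mu\in\cM_{\frac12}\right\}=0,$$
since the Ruelle inequality gives $h_{KS}(\mu)\leq\int\varphi\,d\mu$, with equality exactly for the Liouville measure (in constant curvature $-1$ this is the bound $h_{KS}(\mu)\leq d-1$ recalled above).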
As a consequence, one also has~:
\begin{thm}\label{p:LDP} \cite{Kif90}, Thm 3.4 (i)
Let $M$ be a compact manifold of negative sectional curvature. Let $q$ be a smooth function on $S^* M$. For $T>0$, define the function $\la q\ra_T=\frac{1}T\int_{-T/2}^{T/2}q\circ G^s ds$ on $S^*M$. Denote $L_{\frac12}$ the Liouville measure
on $S^*M$.
Then, for any closed interval $I\subset \IR$, we have
$$\limsup_{T\To+\infty}\frac{\log \, L_{\frac12}\left\{\rho\in S^* M, \la q\ra_T(\rho)\in I\right\}}{ T}\leq \sup\left\{h_{KS}(\mu)-\int \varphi \,d\mu, \mu\in\cM_{\frac12}, \int q\,d\mu \in I \right\}.$$
For any open interval $I\subset \IR$, we have
$$\liminf_{T\To+\infty}\frac{\log \, L_{\frac12}\left\{\rho\in S^* M, \la q\ra_T(\rho)\in I\right\}}{ T}\geq \sup\left\{h_{KS}(\mu)-\int \varphi \,d\mu, \mu\in\cM_{\frac12}, \int q\,d\mu \in I\right\}.$$
(ii) Let $M$ be a compact manifold of dimension $d$, with constant sectional curvature $-1$. Then (i) can be rephrased as follows. Let $q$ be a smooth function on $S^* M$. For $T>0$, define the function $\la q\ra_T=\frac{1}T\int_{-T/2}^{T/2}q\circ G^s ds$ on $S^*M$. Denote $L_{\frac12}$ the Liouville measure
on $S^*M$.
Then, for any closed interval $I\subset \IR$, we have
$$\limsup_{T\To+\infty}\frac{\log \, L_{\frac12}\left\{\rho\in S^* M, \la q\ra_T(\rho)\in I\right\}}{ T}\leq \sup\left\{H(\alpha), \alpha\in I\right\}-(d-1)$$
where $H$ is the function $H(\alpha)=\sup\left\{h_{KS}(\mu), \mu\in\cM_{\frac12}, \int q\,d\mu=\alpha\right\}$.
For any open interval $I\subset \IR$, we have
$$\liminf_{T\To+\infty}\frac{\log \, L_{\frac12}\left\{\rho\in S^* M, \la q\ra_T(\rho)\in I\right\}}{ T}\geq \sup\left\{H(\alpha), \alpha\in I\right\}-(d-1).$$
\end{thm}
The pressure and entropy functions also appear when counting closed geodesics $\gamma$ with a given $q$-average~:
$${\rm Pr}(q)=\lim_{t\To+\infty }\frac1t\log\sum_{\gamma,\, l_\gamma\leq t}e^{\int q\,d\gamma},$$
and as consequence
$$\limsup_{t\To +\infty}\frac1t\log\sharp\left\{ \gamma, \,l_\gamma\leq t, \int q\,\,d\mu_\gamma\in I\right\}\leq
\sup\left\{H(\alpha), \alpha\in I\right\},$$
(for a closed interval $I$),
$$\liminf_{t\To +\infty}\frac1t\log\sharp\left\{ \gamma, \,l_\gamma\leq t, \int q\,\,d\mu_\gamma\in I\right\}\geq
\sup\left\{H(\alpha), \alpha\in I\right\},$$
(for an open interval $I$). See \cite{Kif94}.
In variable negative curvature, we will also need the following variant of Theorem \ref{p:LDP}~:
\begin{thm} \label{t:LDP2}
Let $M$ be a compact manifold of negative sectional curvature. Let $q$ be a smooth function on $S^* M$. Let $\phi$ be a smooth positive function. Denote $L_{\frac12}$ the Liouville measure
on $S^*M$. For $\rho\in S^* M$ and $t\in\IR$, define $\cT_\rho(t)$ by $\int_0^{\cT_\rho(t)}\phi(G^s\rho)ds=(d-1)t$. For $t>0$, define the function $$\la q\ra_{\cT(-t/2), \cT(t/2)}=\frac{1}{\cT_\rho(t/2)-\cT_\rho(-t/2)}\int_{\cT_\rho(-t/2)}^{\cT_\rho(t/2)}q\circ G^s (\rho)ds$$ on $S^*M$.
Then, for any closed interval $I\subset \IR$, we have
\begin{multline*}\limsup_{t\To+\infty}\frac{\log \, L_{\frac12}\left\{\rho\in S^* M, \la q\ra_{\cT(-t/2), \cT(t/2)}(\rho)\in I\right\}}{ t}\\ \leq (d-1) \sup\left\{\frac{h_{KS}(\mu)}{\int \phi \,d\mu}- \frac{\int \varphi \,d\mu}{\int \phi \,d\mu}, \mu\in\cM_{\frac12}, \int q\,d\mu \in I \right\}.\end{multline*}
For any open interval $I\subset \IR$, we have
\begin{multline*}\liminf_{t\To+\infty}\frac{\log \, L_{\frac12}\left\{\rho\in S^* M, \la q\ra_{\cT(-t/2), \cT(t/2)}(\rho)\in I\right\}}{ t}\\ \geq (d-1) \sup\left\{\frac{h_{KS}(\mu)}{\int \phi \,d\mu}- \frac{\int \varphi \,d\mu}{\int \phi \,d\mu}, \mu\in\cM_{\frac12}, \int q\,d\mu \in I \right\}.\end{multline*}
\end{thm}
\begin{proof} Define a flow $\bar G$ on $S^*M$ that has the same trajectories as $G$ but different speed~:
$\bar G^t(\rho)=G^{\cT_\rho(t)}(\rho)$.
For the new flow, the infinitesimal unstable Jacobian is equal to $(d-1)\frac{\varphi}{\phi}$. If $\mu$ is an invariant probability measure of $G$, then
$d\bar\mu=\frac{\phi\,\,d\mu}{\int \phi\,\,d\mu}$ is an invariant probability measure of $\bar G$.
Besides, their entropies are related by the Abramov formula~:
$$h_{KS}(\bar\mu)=(d-1)\frac{h_{KS}(\mu)}{\int \phi\,\,d\mu},$$
where the entropies of $\bar\mu$ and $\mu$ are computed with respect to $\bar G$ and $G$ respectively.
The theorem is then again an application of Theorem 3.4 in \cite{Kif90} for the Anosov flow $\bar G$.
\end{proof}
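To identify the terms appearing in the statement, note that differentiating $\int_0^{\cT_\rho(t)}\phi(G^s\rho)ds=(d-1)t$ gives $\cT_\rho'(0)=\frac{d-1}{\phi(\rho)}$, so the infinitesimal unstable Jacobian of $\bar G$ is $\bar\varphi=(d-1)\frac{\varphi}{\phi}$, and, with $\bar\mu$ as in the proof above,
$$\int\bar\varphi\,d\bar\mu=(d-1)\int\frac{\varphi}{\phi}\cdot\frac{\phi\,\,d\mu}{\int\phi\,\,d\mu}=(d-1)\frac{\int\varphi\,\,d\mu}{\int\phi\,\,d\mu},$$
which, together with the Abramov formula, yields the right-hand sides in the statement of Theorem \ref{t:LDP2}.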
\section{Averaging\label{s:ave}}
We are now ready to start the proof of Theorem \ref{t:upperWeyl}. We will work in dimension $d$ and constant negative curvature $-1$. The changes to make in order to get Theorem \ref{t:vari} are indicated in Remarks \ref{r:change} and \ref{r:change2}.
The following proposition is proved in \cite{Sj00}, \S 2~:
\begin{prop} Let $T>0$, there exists an invertible selfadjoint pseudodifferential operator $A_T\in \Psi DO^0$
such that
$$A_T^{-1}( P+i\hbar Q(z))A_T= P+i\hbar\Op_\hbar(q^T(z))+\hbar^2 R_T(z)$$
for $z\in \Omega$;
with $R_T\in\Psi DO^0$ depending holomorphically on $z\in \Omega$, and with $q^T(z)\in S^{1 }$ depending holomorphically on $z\in \Omega$, equal to $\la q\ra_T-q+q_z$ in a neighbourhood
of $p_o^{-1}\left(\frac12\right)$.
\end{prop}
The definition of our symbol classes $S^m $ and operator classes $\Psi DO^m$ is given in Section \ref{s:symbols}.
We recall that the operator $A_T$ constructed by Sj\"ostrand is $A_T=\Op_\hbar(e^{g_T})$,
where
$$g_T= \frac12\int_0^{T/2}\left(\frac{2s}T-1\right)q\circ G^{s}ds +\frac12\int_{-T/2}^0\left(\frac{2s}T+1\right) q\circ G^{s}
ds$$
on $p_o^{-1}\left(\frac12\right)$. The function $g_T$ solves $\{p_o, g_T\}=q-\la q\ra_T.$
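Indeed, since $\{p_o, q\circ G^s\}=\frac{d}{ds}\left(q\circ G^s\right)$, an integration by parts in each term gives
$$\{p_o, g_T\}=\frac12\left[\left(\frac{2s}T-1\right)q\circ G^s\right]_0^{T/2}+\frac12\left[\left(\frac{2s}T+1\right)q\circ G^s\right]_{-T/2}^0-\frac1T\int_{-T/2}^{T/2}q\circ G^s\,ds=q-\la q\ra_T.$$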
Exactly the same proof yields~:
\begin{prop}\label{p:ave} Assume $M$ has constant curvature $-1$. Let $\eps>0$ and $T=(1-4\eps)|\log\hbar|$. Define $\delta=\frac{1-\eps}2$.
There exists an invertible selfadjoint pseudodifferential operator $A_T \in \Psi DO^0_\delta$
such that
$$A_T^{-1}( P+i\hbar Q(z))A_T= P+i\hbar\Op_\hbar(q^T(z))+\hbar^2 R_T(z)$$for $z\in \Omega$;
with $R_T\in \hbar^{-2\delta}\Psi DO_\delta^0$ depending holomorphically on $z$, and with $q^T(z)\in S_\delta^{1}$ depending holomorphically on $z$, equal to $\la q\ra_T-q+q_z$ in a neighbourhood
of $p_o^{-1}\left(\frac12\right)$.\end{prop}
In what follows, we will restrict our attention to a region where $|z-\frac12|=\cO(\hbar)$. As a consequence, we can write
\begin{equation} P+i\hbar\Op_\hbar(q^T(z))+\hbar^2 R_T(z)= P+i\hbar\Op_\hbar(q^T)+\hbar \tR_T(z)
\label{e:theope}\end{equation}
with $q^T=q^T\left(\frac12\right)=\la q\ra_T$ in a neighbourhood
of $p_o^{-1}\left(\frac12\right)$, and $\tR_T(z)$ is a pseudodifferential operator depending holomorphically on $z\in \Omega$, tending to zero when $\hbar\To 0$ and $|z-\frac12|=\cO(\hbar)$. More precisely,
$$ \tR_T(z)=\left(z-\frac12\right) Q'(z)+ \hbar R_T(z),$$
$R_T\in \hbar^{-2\delta}\Psi DO_\delta^0$ depending holomorphically on $z$, and
$Q'(z)\in \Psi DO_\delta^1$ depending holomorphically on $z$.
\begin{rem}\label{r:change} To treat the case of variable curvature, we should modify Proposition \ref{p:ave} as follows. Fix $\phi$ a smooth function such that $ \phi\geq \varphi $. Define $\cT_{\rho}( \frac{T}2)$, $\cT_{\rho}(-\frac{T}2)$ as in Theorem \ref{t:LDP2}, in a neighbourhood of $p_o^{-1}\left(\frac12\right)$.
We have to choose $\phi$ smooth because we want $\cT_\rho$ to depend smoothly on $\rho$.
In dimension $d=2$, we have $\cT_{\rho}( \frac{T}2)\in S^0_\delta$ with $\delta=\frac{1-\eps}2$ (which may not be true for $d>2$ since the unstable Jacobian no longer controls the derivatives of the geodesic flow).
In Proposition \ref{p:ave}, we now define $A_T =\Op_\hbar(e^{g_T})$ where
$$g_T(\rho)= \frac12\int_0^{\cT_\rho(T/2)}\left(\frac{s}{\cT_\rho(T/2)}-1\right)q\circ G^{s}ds +\frac12\int_{-\cT_\rho(T/2)}^0\left(\frac{s}{\cT_\rho(T/2)}+1\right) q\circ G^{s}
ds$$
on $p_o^{-1}\left(\frac12\right)$. In the last sentence of Proposition \ref{p:ave}, we replace $\la q\ra_T$ by $\la q\ra_{\cT(-\frac{ T}2), \cT( \frac{T}2)}+r_T$ where $r_T=q-\{p_o, g_T\}-\la q\ra_{\cT(-\frac{ T}2), \cT( \frac{T}2)}$ satisfies $r_T\in |\log\hbar|^{-1}S^0_\delta$ and $\{p_o, r_T\}\in |\log\hbar |^{-1}S^0_\delta$. For $d=2$, all the operators $A_T$, $R_T$ {\em etc} stay in the same class as stated in Proposition \ref{p:ave}.
\end{rem}
\section{Perturbations with controlled trace norm.\label{s:pert}}
In the following sections, we let $z$ vary in a disc of radius $\cO(\hbar)$ around $\frac12$. We will write $2z=1+\zeta$, $\zeta=\cO(\hbar)$. We consider the operator \eqref{e:theope}, which we write
\begin{equation}\label{e:theope2}
\cP_T=\cP_T(z)= P+i\hbar Q_T+\hbar \tR_T(z),\qquad Q_T= \Op_\hbar(q^T).
\end{equation}
Note that we have
$$\left\{p_o, q^T\right\}=\cO\left(\frac1T\right)$$
in a neighbourhood of $p_o^{-1}\left(\frac12\right)$. By Proposition \ref{p:CV},
this implies
\begin{equation}\label{e:comm}\norm{[P, Q_T]u}\leq
C \left(\frac{\hbar}T +\cO(\hbar^{2-2\delta})\right)\norm{u}
+\cO(\hbar)\norm{(P-\frac12)u}.\end{equation}
We now want to make a small perturbation $\tilde\cP_T$ of $\cP_T$ with a good control over the resolvent $(\tilde\cP_T(z)-z)^{-1}$, and over the trace class norm $\norm{.}_1$ of $\tilde\cP_T-\cP_T$.
We construct a pseudodifferential operator $\tQ_T=\Op_\hbar(\tilde q^T)\in \Psi DO^1_\delta$ such that
$\tilde q^T\leq q^T$ on $p_o^{-1}\left(\frac12\right)$ and $\left\{p_o, \tilde q^T\right\}=\cO\left(\frac1T\right)$. In addition, we fix some $\eps>0$ and introduce an arbitrarily small $\theta>0$, and we want $q^T=\tilde q^T$ on $p_o^{-1}(]\frac12-\eps, \frac12+\eps[)\cap\left\{q^T\leq \alpha-3\theta\right\}$, and $\tilde q^T\leq\alpha-2\theta$ everywhere on $p_o^{-1}(]\frac12-\eps, \frac12+\eps[)$. For instance we can take $\tilde q^T=a(q^T)$ on $p_o^{-1}(]\frac12-\eps, \frac12+\eps[)$ where $a$ is real and smooth, $a(E)\leq E$, $|a'|\leq 1$; $a(E)=E$ if $E\leq \alpha-3\theta$, and $a\leq \alpha-2\theta$ everywhere.
\begin{rem}At this stage it is convenient to choose a positive quantization scheme $\Op_\hbar$, in order to have $\Op_\hbar(q^T)\geq \Op_\hbar(\tilde q^T)$.
\end{rem}
Let $0\leq f\in \cS(\IR)$, with $\hat f\in C_0^\infty$, where $\hat f(t)=\int e^{itE}f(E)dE$ is the Fourier transform. Put
$$\tilde\cP_T=P+i\hbar\hat Q_T+\hbar\tR_T(z),$$
with
$$\hat Q_T=Q_T+f\left(\frac{2P-1}\hbar\right)(\tQ_T-Q_T)f\left(\frac{2P-1}\hbar\right).$$
The following proposition is proved in \cite{Sj00} for fixed $T$ (and $\delta=0$, that is, with standard symbol classes). One can follow the proof of \cite{Sj00} line by line and check that it is still valid for $T=(1-4\eps)|\log\hbar|$, $\eps>0$ very small~:
\begin{prop}\label{p:plagiat} Let $P=-\hbar^2\frac{\lap}2$. Let $Q=Q(z)\in \Psi DO^1$ have principal symbol $q(z)$ depending holomorphically on $z\in\Omega$, and be formally self-adjoint when $z$ is real. Let
$$\cP_T=P+i\hbar Q_T+\hbar \tR_T(z),\qquad Q_T=Q_T\left(\frac12\right), \qquad z=\frac{1+\zeta}2, \qquad
\zeta=\cO(\hbar),$$
be the operator defined in \eqref{e:theope2}, with $Q_T=\Op_\hbar(q^T)\in \Psi DO_\delta^1$, and $\tR_T(z)\in \hbar^{1-2\delta}\Psi DO^0_\delta
+(z-\frac12)\Psi DO_\delta^1$. Let $\tQ_T=\Op_\hbar(\tilde q^T)\in \Psi DO^1_\delta$, with $\tilde q^T=a(q^T)$ on $p_o^{-1}(]\frac12-\eps, \frac12+\eps[)$,
where $a$ is real and smooth, $a(E)\leq E$, $|a'|\leq 1$; $a(E)=E$ if $E\leq \alpha-3\theta$, and $a\leq \alpha-2\theta$.
Put
$$\tilde\cP_T=P+i\hbar\hat Q_T+\hbar \tR_T(z),$$
with
$$\hat Q_T=Q_T+f\left(\frac{2P-1}\hbar\right)(\tQ_T-Q_T)f\left(\frac{2P-1}\hbar\right).$$
Then
$$\norm{\tilde\cP_T-\cP_T}\leq \hbar\left( \norm{f}_\infty^2\sup_{p_o^{-1}\left(\frac12\right)}(q^T-\tilde q^T)+\cO(\hbar^{1-2\delta})\right)$$
and
\begin{multline*}\norm{\tilde\cP_T-\cP_T}_1
\leq C_d\hbar^{2-d}\Big[\hat{f^2}(0)\int_{p_o^{-1}\left(\frac12\right)}(q^T-\tilde q^T)L_{\frac12}(d\rho)\\
+\sum_{k=1}^{N-1}\hbar^k \abs{D^{2k}_t\hat{f^2}(0)}\int_{p_o^{-1}\left(\frac12\right)}\abs{D^{2k}_\rho(q^T-\tilde q^T)}L_{\frac12}(d\rho)+\cO(\hbar^{N(1-2\delta)})\Big],
\end{multline*}
where $D^{2k}_t$ and $D^{2k}_\rho$ are differential operators of degree $\leq 2k$, respectively on
$\IR$ and $T^*M$.
If we restrict $z$ by assuming that for some continuous function $\alpha(E)>0$, defined on some bounded interval $J$ containing $0$, we have
$$\frac{\Im m(\zeta)}{2\hbar}-q^T+f\left(\frac{\Re e(\zeta)}\hbar\right)^2(q^T-\tilde q^T)\geq \alpha\left(\frac{\Re e(\zeta)}\hbar\right)$$ on $p_o^{-1}\left(\frac12\right)$, for $\frac{\Re e(\zeta)}\hbar\in J$,
then for $\hbar$ small enough, $(z-\tilde \cP_T)^{-1}$ exists, and we have
$$\norm{\left(\frac1\hbar(z-\tilde \cP_T)\right)^{-1}}\leq
\frac{2+12\sup_{p_o^{-1}\left(\frac12\right)}(q^T-\tilde q^T)\norm{f'}_\infty\norm{f}_\infty}{\alpha\left(\frac{\Re e(\zeta)}\hbar\right)}.$$
\end{prop}
The proof is identical to the proof in \cite{Sj00}. In Appendix \ref{a:plagiat} we will give some details, for the reader's convenience.
\begin{coro} Define
$$\tilde\Omega_\hbar=\left\{\frac12-2c\hbar\leq \Re e(z)\leq\frac12 +2c\hbar\right\}\cap\left\{
(\alpha-\theta)\hbar\leq \Im m(z)\leq 4\norm{q}_\infty\hbar\right\}\subset\IC.$$ For $z\in\tilde\Omega_\hbar$, the operator $z-\tilde \cP_T$ is invertible, and
$$\norm{(\tilde \cP_T-z)^{-1}}\leq \frac{C_{f, q}}{\theta\hbar}.$$
\end{coro}
\begin{coro}\label{c:maincoro}For $\hbar$ small enough, we have
\begin{eqnarray*}\norm{\tilde\cP_T-\cP_T}_1&\leq &C_{d, f, q} \hbar^{2-d} L_{\frac12}\left(\left\{\tilde q^T\not=q^T\right\}\cap p_o^{-1}\left(\frac12\right)\right)\\
&\leq & C_{d, f, q} \hbar^{2-d} L_{\frac12}\left(\left\{ q^T\geq \alpha -3\theta\right\}\cap p_o^{-1}\left(\frac12\right)\right)\\
&\leq & C_{d, f, q} \hbar^{2-d} e^{T[H(\alpha-3\theta)-(d-1)+\eps]}\\
&\leq & C_{d, f, q} \hbar^{2-d} \hbar^{ [(d-1)-H(\alpha-3\theta)-\eps](1-4\eps)}
\end{eqnarray*}
\end{coro}
\section{Jensen's inequality\label{s:Jensen}}
We already defined $$\tilde\Omega_\hbar=\left\{\frac12-2c\hbar\leq \Re e(z)\leq\frac12 +2c\hbar\right\}\cap\left\{
(\alpha-\theta)\hbar\leq \Im m(z)\leq 4\norm{q}_\infty\hbar\right\}\subset\IC.$$
To finish the proof of Theorem \ref{t:upperWeyl}, we also introduce the set $$\Omega_\hbar=
\left\{\frac12-c\hbar\leq \Re e(z)\leq\frac12 +c\hbar\right\}\cap \left\{
\alpha\hbar\leq \Im m(z)\leq 3\norm{q}_\infty\hbar\right\}\subset\tilde\Omega_\hbar.$$
For $z\in\tilde\Omega_\hbar$, we can write
$$ \cP_T-z=(\tilde \cP_T-z)(1+K(z))$$
where $K(z)$ is the trace class operator $(\tilde \cP_T-z)^{-1}(\cP_T-\tilde \cP_T)$.
We can bound the number of eigenvalues of $\cP_T$ in $\Omega_\hbar$ by the number
of zeros of the holomorphic function $g(z)=\det(1+K(z))$ in $\Omega_\hbar$. Let us call $N(g, \Omega_\hbar)$ this number of zeros.
Introduce $z_0=\frac12+2i\hbar \norm{q}_\infty$.
By the Jensen inequality \cite{Rud},
\begin{equation}N(g, \Omega_\hbar)\leq C \left(\log\norm{g}_{\infty, \tilde\Omega_\hbar}-\log|g(z_0)|\right),\label{e:jensen}\end{equation}
where the constant $C$ does not depend on $\hbar$ (because the rectangles $\tilde\Omega_\hbar$ and $\Omega_\hbar$ can be transported, by translations and homotheties, to the fixed rectangles
$\tilde\Omega_1$ and $\Omega_1$).
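As a purely numerical illustration (outside the formal argument) of how a Jensen-type inequality such as \eqref{e:jensen} counts zeros, the following sketch checks the classical bound $n(r)\leq (\log M(R)-\log|g(0)|)/\log(R/r)$ on a toy polynomial. The zeros, radii and sampling resolution are arbitrary demo choices, unrelated to $K(z)$.

```python
import cmath
import math

# Toy holomorphic function with known zeros (hypothetical data for the demo).
zeros = [0.3 + 0.1j, -0.2 + 0.25j, 0.1 - 0.3j, 0.45 + 0.0j]

def g(z):
    """Polynomial vanishing exactly at `zeros`."""
    out = 1.0 + 0.0j
    for a in zeros:
        out *= (z - a)
    return out

r, R = 0.5, 1.0                      # count zeros in |z| <= r, control |g| on |z| = R
n_r = sum(1 for a in zeros if abs(a) <= r)

# Sampled sup of |g| on the outer circle |z| = R.
M_R = max(abs(g(R * cmath.exp(2j * math.pi * k / 2000))) for k in range(2000))

# Jensen-type bound on the number of zeros in |z| <= r.
jensen_bound = (math.log(M_R) - math.log(abs(g(0)))) / math.log(R / r)
```

As in the text, the bound is useful precisely when $|g|$ is controlled from above on the large region and from below at one interior point.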
On the one hand, for all $z\in \tilde\Omega_\hbar$,
\begin{eqnarray*}
|\det(1+K(z))|&\leq& \exp\norm{K(z)}_1
\\ &\leq& \exp\left( \norm{(\tilde \cP_T-z)^{-1}}\norm{\cP_T-\tilde \cP_T}_1 \right)\\
&\leq& \exp\left( \frac{C_{d,f, q}}{\theta\hbar } \hbar^{2-d} \hbar^{ [(d-1)-H(\alpha-3\theta)-\eps](1-4\eps)}
\right)\\
&\leq& \exp\left( C_{f, q, \theta, d} \hbar^{1-d}\hbar^{ [(d-1)-H(\alpha-3\theta)-\eps](1-4\eps)}
\right).
\end{eqnarray*}
On the other hand, we also know that $ \norm{(1+K(z_0))^{-1}}\leq C\hbar^{-1}$~: since $z_0$ has `large'
imaginary part, $\cP_T-z_0$ is invertible, and it is easy to get a bound $\norm{(\cP_T-z_0)^{-1}}=\cO(\hbar^{-1})$. We use the same calculation as in \cite{Sj00} and get
\begin{eqnarray*}
|\det(1+K(z_0))^{-1}|&=& |\det (1-K(z_0)(1+K(z_0))^{-1})|
\\ &\leq& \exp\norm{K(z_0)(1+K(z_0))^{-1}}_1
\\ &\leq&\exp \norm{K(z_0)}_1 \norm{(1+K(z_0))^{-1}}
\\&\leq& \exp\left( \tilde C_{f, q, \theta, d} \hbar^{1-d}\hbar^{ [(d-1)-H(\alpha-3\theta)-\eps](1-4\eps)}\right)
\end{eqnarray*}
so that
$$|\det(1+K(z_0))|\geq \exp\left( -\tilde C_{f, q, \theta, d} \hbar^{1-d}\hbar^{ [(d-1)-H(\alpha-3\theta)-\eps](1-4\eps)}\right).$$
This, combined with \eqref{e:jensen}, yields
$$N(g, \Omega_\hbar)\leq C \hbar^{1-d}\hbar^{ [(d-1)-H(\alpha-3\theta)-\eps](1-4\eps)}.$$
Since $\theta$ and $\eps>0$ are arbitrary, we have proved Theorem \ref{t:upperWeyl}.
\begin{rem}\label{r:change2} Starting from Remark \ref{r:change}, the proof of Theorem \ref{t:vari} goes exactly along the same lines. We find
$$\limsup_{\hbar\To 0} \frac{\log\sharp\left\{ z \in \Sigma_{\frac12}, \frac{\Im m(z)}\hbar\geq \alpha\right\} }{\abs{\log\hbar}} \leq \tilde H(\alpha),$$
where $\tilde H(\alpha)=(d-1) \sup\left\{\frac{h_{KS}(\mu)}{\int \phi \,d\mu}- \frac{\int \varphi \,d\mu}{\int \phi \,d\mu} +1, \mu\in\cM_{\frac12}, \int q\,d\mu =\alpha\right\}$ and $\phi$ is as in Remark \ref{r:change}. Letting $\phi$ converge to $\varphi$ uniformly, we obtain Theorem \ref{t:vari}.
\end{rem}
\section{About Question 3\label{s:q3}}
In this section, we consider a particular case of the problem \eqref{e:spec} in which the trace formula is exact.
We then try to investigate Question 3 on this example.
Let $M$ be a compact hyperbolic surface~: $M$ can be written as $M=\Gamma\backslash \IH$,
where $\IH$ is the hyperbolic disc and $\Gamma$ is a discrete subgroup of the group of hyperbolic isometries.
Let $[\omega]\in H^1(M, \IC)$ be represented by the harmonic complex valued $1$--form $\omega$. Introduce the twisted laplacian
$$\lap_\omega f=\lap f-2\la \omega, df\ra+\norm{\omega}_x^2f.$$
Studying the large eigenvalues of $\lap_\omega$ amounts to studying a fixed spectral window for the semiclassical twisted laplacian
$$-\hbar^2\frac{\lap_\omega}2=-\hbar^2\frac{\lap}2+\hbar^2 \la \omega, d.\ra-\hbar^2\frac{\norm{\omega}_x^2}2, \qquad \hbar\To 0.$$
The ``usual'' selfadjoint case is when $\omega$ has coefficients in $i\IR$. We shall instead be interested
in the case when $\omega$ has coefficients in $\IR$.
The operator falls exactly into the case studied in \S \ref{p:semiclass}, with $q(x, \xi)=\la \omega_x, \xi\ra$.
The geodesic flow is ergodic, and Sj\"ostrand's result tells us that ``most'' eigenvalues of $-\hbar^2\frac{\lap_\omega}2$ such that $\Re e(z)\in [\frac12-C\hbar, \frac12+C\hbar]$ have imaginary part
$\Im m(z)=o(\hbar)$. Equivalently, ``most'' eigenvalues of $-\lap_\omega$ such that $\Re e(z)\in
[\lambda-C\sqrt{\lambda}, \lambda+C\sqrt{\lambda}]$ have imaginary part $\Im m(z)=o(\sqrt\lambda)$.
We rephrase Question 3 as\\
{\bf (Q3')} If $\omega\not=0$, is it possible to have $\frac{\Im m(z)}\hbar\To 0$ as $\hbar\To 0$ and $\Re e(z)\in [\frac12-C\hbar, \frac12+C\hbar]$, $z\in Sp(-\hbar^2\frac{\lap_\omega}2)$ ?
\vspace{.4cm}
\\
{\bf Conjecture~:} I conjecture the opposite~:
if $\omega\not=0$, then there is a sequence $\hbar_n\To 0$, $z_n\in Sp(-\hbar_n^2\frac{\lap_\omega}2)$ with $\Re e(z_n)\in [\frac12-C\hbar_n, \frac12+C\hbar_n]$ and $\frac{\Im m(z_n)}{\hbar_n}\not\To 0$.
\vspace{.4cm}
As is usual in hyperbolic spectral theory, we introduce the spectral parameter $r$~: if $\lambda_j$ is an eigenvalue of $-\lap_\omega$, we denote $\lambda_j=\frac14+r_j^2$. Yet another way to phrase Question 3 is to ask whether it is possible to have $\Im m(r_j)\To 0$ as $\Re e(r_j)\To \infty$.
Sj\"ostrand's results say that $\Im m(r_j)$ is bounded and that $\Im m(r_j)\To 0$ for a subsequence of density one. But I naturally believe that it is impossible to have $\Im m(r_j)\To 0$ for the whole sequence,
unless $\omega=0$.
Recall the Selberg trace formula \cite{Sel56}, valid for $\omega\in H^1(M, i\IR)$~:
\begin{equation}
\sum_{\lambda_j} \hat f(r_j)=\frac{Area(M)}{4\pi}\int_{-\infty}^{+\infty}\hat f(r)r\tanh(\pi r) dr+\sum_{\gamma}\frac{e^{\int_\gamma \omega}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}f(l_\gamma),
\end{equation}
for any function $f$ on $\IR$, even, smooth enough, and decaying faster than any exponential.
On the right hand side, the sum runs over the set of closed geodesics (equivalently, the set of conjugacy classes in $\Gamma$). If $\gamma$ is a periodic geodesic, we denote by $l_\gamma>0$ its
length, or period, and by $l_{\gamma_o}$ its shortest period.
\begin{prop} The trace formula holds, under the same assumptions on $f$,
if $\omega\in H^1(M, \IR)$.
\end{prop}
Recall that the Fourier transform is defined by $\hat f(r)=\int e^{iru}f(u)du.$
\begin{proof}
Take $\omega\in H^1(M, \IR)$.
We consider the operator $\lap_{z\omega}$ for $z\in \IC$. The argument of Section \ref{s:spec}
shows that this operator has discrete spectrum (eigenvalues).
The right hand side of the trace formula reads
\begin{equation}\label{e:tracez}\frac{Area(M)}{4\pi}\int_{-\infty}^{+\infty}\hat f(r)r\tanh(\pi r) dr+\sum_{\gamma}\frac{e^{z\int_\gamma \omega}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}f(l_\gamma)\end{equation}
and clearly is an entire function of $z$.
The left hand side is
$\sum_j \hat f(r_j(z))$. To check that it is an entire function of $z$, we first note that $\hat f(r)$, being
an even entire function, can be written as $g(\frac14+r^2)$ where $g$ is entire. Thus
$\hat f(r_j(z))=g(\lambda_j(z))$, where the $\lambda_j(z)$ are the eigenvalues of $-\lap_{z\omega}$.
If $z$ is restricted to a bounded subset $\Omega$ of $\IC$,
we note that the $\Im m (r_j(z))$ are uniformly bounded. To that end, we write $\lambda_j=\frac14+r_j^2$ with $r_j=x+iy$. The eigenvalue equation
$$-\lap_{z\omega}f=\lambda_j f$$
with $f$ normalized in $L^2$
implies both equations
$$\frac14+x^2-y^2=\int |\nabla f|^2+2\beta i\int\la\omega, df\ra\bar f+(\beta^2-\alpha^2)\int
\norm{\omega}_x^2\abs{f}^2$$
and
$$2xy=-2\alpha i\int\la\omega, df\ra\bar f-2\alpha\beta\int
\norm{\omega}_x^2\abs{f}^2$$
if we decompose $z=\alpha+i\beta\in \IR+i\IR$ and $r_j=x+iy\in \IR+i\IR$. If $\alpha$ and $\beta$ stay bounded, it also follows that $y$ must stay bounded.
Besides
$$\lambda^{-2}\sharp\left\{n , 0\leq \Re e(r_n(z))<\lambda\right\} $$
is bounded uniformly for $\lambda>1$ and $z$ staying in a compact set of $\IC$ (the arguments of \cite{Sj00}, \S 4, or of our Sections \ref{s:ave}, \ref{s:pert}, \ref{s:Jensen} are locally uniform in $z$). Since $\hat f$ is rapidly decreasing in each horizontal strip, it follows that
the sum
$\sum_j \hat f(r_j(z))$ is the uniform limit of the partial sums
$\sum_{|\Re e(r_j(z))|<\lambda}\hat f(r_j(z))$. But for a given $\lambda$, this is a holomorphic function of $z$ (in the open set $\{z,\Re e(r_j(z))\not=\lambda \mbox{ for all }j\}$), since $\lap_{z\omega}$ depends holomorphically on $z$ and $\sum_{|\Re e(r_j(z))|<\lambda}\hat f(r_j(z))$ can be defined by holomorphic functional calculus.
This shows that $\sum_j \hat f(r_j(z))$ is also an entire function. Both sides of \eqref{e:tracez}
coincide for $z\in i\IR$ (by the usual trace formula), thus they must coincide everywhere.
\end{proof}
Introduce some parameters $\sigma, R, T>0$, and take
$$f(u)=\frac{1}{\sqrt{2\pi}\sigma}\left[e^{-\frac{(u-T)^2}{2\sigma^2}}e^{iuR}+e^{-\frac{(u+T)^2}{2\sigma^2}}e^{-iuR}\right]$$
so that
$$\hat f(r)=e^{-\frac{\sigma^2}2(r-R)^2}e^{-iTr}+e^{-\frac{\sigma^2}2(r+R)^2}e^{iTr}.$$
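As a quick numerical sanity check of this Fourier pair (with the convention $\hat f(r)=\int e^{iru}f(u)du$, comparing moduli so that a constant phase depending only on $T$ and $R$ drops out), one may run the following sketch; the values of $\sigma, T, R$ are arbitrary demo choices.

```python
import numpy as np

sigma, T, R = 0.8, 3.0, 5.0          # arbitrary demo values

u = np.linspace(-40.0, 40.0, 200001)
du = u[1] - u[0]
f = (np.exp(-(u - T)**2 / (2*sigma**2)) * np.exp(1j*u*R)
     + np.exp(-(u + T)**2 / (2*sigma**2)) * np.exp(-1j*u*R)) / (np.sqrt(2*np.pi)*sigma)

def fhat_numeric(r):
    """Direct quadrature of the integral of e^{iru} f(u) du
    (f is negligible outside [-40, 40] for these parameters)."""
    return np.sum(np.exp(1j*r*u) * f) * du

def fhat_stated(r):
    """The closed form used in the text."""
    return (np.exp(-sigma**2*(r - R)**2/2) * np.exp(-1j*T*r)
            + np.exp(-sigma**2*(r + R)**2/2) * np.exp(1j*T*r))
```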
This yields~:
\begin{multline}\label{e:tracegauss}
\sum_j e^{-\frac{\sigma^2}2(r_j-R)^2}e^{-iTr_j}+e^{-\frac{\sigma^2}2(r_j+R)^2}e^{iTr_j}\\
=\frac{Area(M)}{4\pi}\int_{-\infty}^{+\infty}r\tanh\pi r \left[e^{-\frac{\sigma^2}2(r-R)^2}e^{-iTr}+e^{-\frac{\sigma^2}2(r+R)^2}e^{iTr}\right]dr\\ +
\sum_\gamma \frac{e^{\int_\gamma \omega}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}
\frac{1}{\sqrt{2\pi}\sigma}\left[e^{-\frac{(l_\gamma-T)^2}{2\sigma^2}}e^{il_\gamma R}+e^{-\frac{(l_\gamma +T)^2}{2\sigma^2}}e^{-il_\gamma R}\right]
\end{multline}
We want to bound from below the right hand side. We hope that this bound will tell us that $\exp(\pm iTr_j)$ cannot be too small on the left hand side, giving some information
on the imaginary part of $r_j$, for $\Re e(r_j)\sim R$. On the right,
the idea is that
the main contribution should come from the geodesics with $l_\gamma \sim T$. We want to choose $R$
so as not to be bothered by the oscillatory terms $e^{\pm il_\gamma R}$.
For that purpose,
$R$ and $T$ will be related in the following manner~:
\begin{lem}[\cite{PR94}, Lemma 3.3; \cite{JP07}]\label{l:JPTtrick}
For any $M>1$, there exists $R\in \left[M, M\exp(\exp( 5T))\right]$ such that $\cos(Rl_\gamma)\geq \frac12$ for every closed geodesic $\gamma$ with $l_\gamma \leq 5T$.
\end{lem}
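The lemma is a pigeonhole statement; for a finite list of lengths it can be checked by brute force. The sketch below scans for such an $R$ over a small window. The lengths, window and step size are hypothetical demo choices (on a genuine surface one would use the finitely many $l_\gamma\leq 5T$).

```python
import math

# Hypothetical, rationally independent "lengths" standing in for the l_gamma <= 5T.
lengths = [1.0, math.sqrt(2.0), math.pi]

def find_R(lengths, M, R_max, step=1e-3):
    """Scan [M, R_max] for an R with cos(R*l) >= 1/2 for every length l."""
    R = M
    while R <= R_max:
        if all(math.cos(R * l) >= 0.5 for l in lengths):
            return R
        R += step
    return None

# For rationally independent lengths, equidistribution guarantees such an R exists.
R = find_R(lengths, M=10.0, R_max=5000.0)
```

The lemma asserts much more, namely a quantitative window $[M, M\exp(\exp(5T))]$ in which such an $R$ can always be found.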
In the sequel we take $M=\exp(\exp(cT))$, ($c>0$ arbitrary) to ensure that $T$ is of order $\log\log R$.
We note that this relation between $R$ and $T$ is independent of $\omega$.
This will allow us to modify slightly our initial problem by extending it to the case where $\omega$ can depend on $R$ (or $T$).
More precisely, we want to consider the case when $\omega=\Theta(R)\omega_o$, where $\omega_o$ is fixed and $\Theta(R)\geq 1$ is allowed to
go to infinity with $R$ at a reasonable rate.
This means that we consider a slight generalization of
\eqref{e:spec}~ (the motivation should become clearer in \S \ref{p:proof3}):
\subsection{A more general problem}
We consider a spectral problem of the form
\begin{equation}\label{e:spec2}(\cP-z)u=0
\end{equation}
where $$\cP=\cP(z)=P+i\hbar\theta(\hbar)Q(z),\qquad P=-\frac{\hbar^2\lap}2$$
where $z\in \Omega=e^{i]-s_0, s_0[}]E_{\min}, E_{\max}[$, with $0<E_{\min}<\frac12<E_{\max}<+\infty$, $0<s_0<\frac\pi{4}$. We will assume that
$ Q(z)\in \Psi DO^1 $ depends holomorphically on $z\in \Omega$, and that $\theta(\hbar)$ is some real valued function such that $\theta(\hbar)\geq 1$ and $\hbar\theta(\hbar)\mathop{\longrightarrow}\limits_{\hbar\To 0}0$. We have in mind $\theta(\hbar)=|\log(\hbar)|$. Finally, we assume that $ Q$ is formally self-adjoint for $z$ real. Again, we call the ``spectrum'', denoted by $\Sigma$, the set of $z$ for which the
equation $(\cP(z)-z)u=0$ has a solution.
The results of \S \ref{p:semiclass} can be generalized as follows~: for any $E_{\min}<E_1\leq E_2<E_{\max}$,
\begin{equation}\label{e:Weyl3}\sharp\left\{ z\in\Sigma, E_1\leq \Re e(z)\leq E_2\right\}=\frac{1}{(2\pi\hbar)^d}\left[\int_{p_o^{-1}[E_1, E_2]}dxd\xi+\cO({\hbar\theta(\hbar)})\right].
\end{equation}
One can show that $\frac{\Im m(z)}{\hbar\theta(\hbar)}$ has to stay bounded for $z\in\Sigma$.
Taking $\theta(\hbar)=|\log(\hbar)|$, $E_1=E-c\hbar\theta(\hbar)$ and $E_2=E+c\hbar\theta(\hbar)$, one has
$$q^-_E+o(1)\leq\frac{\Im m(z)}{\hbar\theta(\hbar)}\leq q^+_E+o(1)$$
for $z\in\Sigma$ such that $E_1\leq \Re e(z)\leq E_2$.
Assuming that the geodesic flow is ergodic on $p_o^{-1}\left\{E\right\}$, {\em and for $\theta(\hbar)=|\log(\hbar)|$}, then for any $\eps>0$, any $c>0$,
\begin{equation}\label{e:LGN2}\sharp\left\{ z\in\Sigma, E-c\hbar\theta(\hbar)\leq \Re e(z)\leq
E +c\hbar\theta(\hbar), \frac{\Im m(z)}{\hbar\theta(\hbar)}\not\in [\bar q_E-\eps, \bar q_E+\eps]\right\}=\theta(\hbar)o(\hbar^{1-d}).
\end{equation}
\begin{rem}
The paper \cite{Sj00} only treats the case $\theta(\hbar)=1$. But the method
of \cite{Sj00}, \S5 can be adapted in a straightforward way to show the following~: consider the spectral problem
\begin{equation}\label{e:spec3}(\cP-z)u=0
\end{equation}
where $$\cP=\cP(z)=P+i\hbar\theta Q(z),\qquad P=-\frac{\hbar^2\lap}2$$
where $z\in \Omega=e^{i]-s_0, s_0[}]E_{\min}, E_{\max}[$, with $0<E_{\min}<\frac12<E_{\max}<+\infty$, $0<s_0<\frac\pi{4}$. Fix some $\eps>0$. Then
\begin{equation}\label{e:Weyl4}\sharp\left\{ z\in\Sigma, E_1\leq \Re e(z)\leq E_2\right\}=\frac{1}{(2\pi\hbar)^d}\left[\int_{p_o^{-1}[E_1, E_2]}dxd\xi+\cO(\hbar\theta)\right].
\end{equation}
uniformly in all $\theta\geq 1$ such that $\theta\hbar\leq \eps$ and $E_1, E_2$ such that $E_{\min}<E_1-2\eps,
E_2+2\eps<E_{\max}$, $|E_2-E_1|\geq\hbar\theta$.
For \eqref{e:LGN2} (and $\theta(\hbar)=|\log\hbar|$), the generalization of the proof in \cite{Sj00} is without surprise, but requires some rather technical changes~: we will not prove it here, but still feel allowed to ask
about Question 3 in this generalized setting. We note that to extend \eqref{e:LGN2} to more general $\theta(\hbar)$, some analyticity assumptions would be required, as in \cite{HSjV07}.
\end{rem}
We will focus our attention on the operator
$$-\hbar^2\frac{\lap_{\theta(\hbar)\omega}}2=-\hbar^2\frac{\lap}2+\hbar^2\theta(\hbar) \la \omega, d.\ra-\hbar^2\theta(\hbar)^2\frac{\norm{\omega}_x^2}2, \qquad \hbar\To 0,$$
when $\omega$ has coefficients in $\IR$.
We generalize Question 3 as\\
{\bf (Q3'')} If $\omega\not=0$, prove that there exists
a sequence $\hbar_n\To 0$, and $z_n\in Sp(-\hbar_n^2\frac{\lap_{\theta(\hbar_n)\omega}}2)$
with $\Re e(z_n)\in [\frac12-C\hbar_n \theta(\hbar_n), \frac12+C\hbar_n\theta(\hbar_n)]$,
such that $\frac{\Im m(z_n)}{\hbar_n
\theta(\hbar_n)}\not\To 0$.
\subsection{Heuristic discussion of the parameters $\sigma, R, T$\label{p:proof3}}
We start again from the trace formula
\eqref{e:tracegauss}, considering the case where
$\omega=\Theta(R)\omega_o$, $\Theta(R)=\theta(R^{-1})\geq 1$. Here $R^{-1}$ is going to play the role of the small parameter $\hbar$. The form $\omega_o$ is fixed, and we normalize it to have stable norm $\norm{\omega_o}_s=1$.
\begin{eqnarray}\norm{\omega}_s &=&\sup\left\{\int_{S^*M} \omega \,d\mu, \mu\in\cM_{\frac12}\right\}\\
&=&\sup_{\gamma} \frac{\int_\gamma \omega}{l_\gamma}.
\end{eqnarray}
The first line can be considered as a definition of the stable norm (valid for
a general compact Riemannian manifold $M$), whereas
the second line holds on negatively curved manifolds because of the density of the closed geodesics. Using the definition
${\rm Pr}(\omega)=\sup\left\{h_{KS}(\mu)+\int_{S^*M} \omega \,d\mu, \mu\in\cM_{\frac12}\right\}$, it is not difficult to show that $$\lim_{t\To \infty}{\rm Pr}(t\omega)-\abs{t}\norm{\omega}_s=\sup\left\{h_{KS}(\mu),
\mu\in\cM_{\frac12}, \int_{S^*M} \omega \,d\mu=\norm{\omega}_s\right\}.$$ On a surface, the right-hand side vanishes \cite{A03}.
Besides, for any $T$ one can find a closed geodesic $\gamma$ with $l_\gamma\in [T-1, T]$, and such that
\begin{equation}\label{e:sn}\frac{\int_\gamma \omega_o}{l_{\gamma}}\geq \norm{\omega_o}_s(1+o_T(1))= (1+o_T(1)),\end{equation}
where $o_T(1)$ goes to $0$ as $T$ approaches $+\infty$.
Simply recall that for any $0<\delta<1$,
\begin{equation}\label{e:H}\lim \frac{\log \sharp\left\{\gamma, l_\gamma\in [T-1, T],
\frac{\int_\gamma \omega_o}{l_{\gamma_o}}\geq (1-\delta)\right\}}{T}=
H(1-\delta)>0,\end{equation}
where
$$H(\alpha)=\sup\left\{h_{KS}(\mu), \mu\in\cM_{\frac12}, \int_{S^*M} \omega_o\,d\mu=\alpha\right\}.$$
The function $H$ is continuous, concave on $[-1, 1]$, real-analytic
on $]-1, 1[$ \cite{BabLed98}. And again, $H(-1)=H(1)=0$ on a compact surface \cite{A03}.
In \eqref{e:tracegauss}, we have not yet said how $\sigma$ will depend on $R$ and $T$. For the moment, let us decide {\em a priori} that $\sigma$ should be such that the term
$$\frac{Area(M)}{4\pi}\int_{-\infty}^{+\infty}r\tanh\pi r \left[e^{-\frac{\sigma^2}2(r-R)^2}e^{-iTr}+e^{-\frac{\sigma^2}2(r+R)^2}e^{iTr}\right]dr$$
is negligible compared to the sum $\sum_\gamma$.
Remember that $R$ and $T$ are chosen so as to satisfy Lemma \ref{l:JPTtrick}.
Then, fixing $1>\delta>0$, the right hand side of \eqref{e:tracegauss} should be bounded from below by
\begin{equation}\label{e:below}\sigma^{-1} e^{TH(1-\delta)-T/2}e^{\Theta(R)(1-\delta) T}e^{-\frac{1}{2\sigma^2}}.\end{equation}
We want to use the fact that this grows quite fast with $T$.
On the other hand, looking at the left hand side of \eqref{e:tracegauss},
we cannot hope to do better than to bound it from above by
\begin{equation}\label{e:above}\sharp\left\{j, |\Re e(r_j)-R|\leq \sigma^{-1} \right\} e^{\frac{\sigma^2}2 \sup_j \Im m(r_j)^2} e^{T\sup_j |\Im m(r_j)|},\end{equation}
where each time the $\sup_j$ should be restricted to the indices $j$ such that $|\Re e(r_j)-R|\leq \sigma^{-1}$.
This heuristic argument would give an inequality
\begin{equation}\label{e:comparison}\sharp\left\{j, |\Re e(r_j)-R|\leq \sigma^{-1} \right\} e^{\frac{\sigma^2}2 \sup_j \Im m(r_j)^2} e^{T\sup_j |\Im m(r_j)|}\geq \sigma^{-1} e^{TH(1-\delta)-T/2}e^{\Theta(R)(1-\delta) T}e^{-\frac{1}{2\sigma^2}},\end{equation}
obtained by comparing the lower bound \eqref{e:below}
and the upper bound \eqref{e:above}. Again, the hope is to compare the powers of $e^T$ on both sides to prove that $\sup_j |\Im m(r_j)|$ cannot be arbitrarily small.
Consider the case $\Theta(R)=1$, which is the case we were originally interested in. If we take $\sigma$ to be a constant, then
by Weyl's law we have
$\sharp\left\{j, |\Re e(r_j)-R|\leq \sigma^{-1} \right\}\sim R\geq \exp(\exp(c T))$. In this case \eqref{e:comparison}
cannot bring any useful information. On the other hand, if we want to choose $\sigma$ such that
$\sharp\left\{j, |\Re e(r_j)-R|\leq \sigma^{-1} \right\}$ is bounded, we are led to take $\sigma\sim R$; in this case the term $e^{\frac{\sigma^2}2 \sup_j \Im m(r_j)^2} $ will be too large to yield any interesting information.
We see that the method only has a chance to work if $T\Theta(R)\gg\log R$. From now on
we take $\Theta(R)\geq \log R$, and always such that $R^{-1}\Theta(R)\To 0$. We also take
$\sigma^{-2}=C\Theta(R)$ with $C$ large. We must note that the parameters $r_j$ correspond
to the eigenvalues of $-\lap_{\Theta(R)\omega}$, and thus they also depend on $R$.
\subsection{Proof of Theorem \ref{t:q3}}The right hand side of \eqref{e:tracegauss} is easy to understand. The term
\begin{equation}\label{e:int}\int_{-\infty}^{+\infty}r\tanh\pi r \left[e^{-\frac{\sigma^2}2(r-R)^2}e^{-iTr}+e^{-\frac{\sigma^2}2(r+R)^2}e^{iTr}\right]dr\end{equation} is $\cO(\sigma^{-1}R)$, whereas the $\sum_\gamma$
has modulus greater than
\begin{multline}
\frac12\sum_{l_\gamma\leq 5 T} \frac{e^{\int_\gamma \omega}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}
\frac{1}{\sqrt{2\pi}\sigma}\left[e^{-\frac{(l_\gamma-T)^2}{2\sigma^2}}+e^{-\frac{(l_\gamma +T)^2}{2\sigma^2}}\right]
+ \sum_{l_\gamma\geq 5T} \frac{e^{\int_\gamma \omega}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}
\frac{1}{\sqrt{2\pi}\sigma}\left[e^{-\frac{(l_\gamma-T)^2}{2\sigma^2}}+e^{-\frac{(l_\gamma +T)^2}{2\sigma^2}}\right]\cos(l_\gamma R)
\\ \geq
\frac12\sum_{T-1\leq l_\gamma\leq T} \frac{e^{\int_\gamma \omega}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}
\frac{1}{\sqrt{2\pi}\sigma}\left[e^{-\frac{(l_\gamma-T)^2}{2\sigma^2}}+e^{-\frac{(l_\gamma +T)^2}{2\sigma^2}}\right]
+ \sum_{l_\gamma\geq 5T} \frac{e^{\int_\gamma \omega}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}
\frac{1}{\sqrt{2\pi}\sigma}\left[e^{-\frac{(l_\gamma-T)^2}{2\sigma^2}}+e^{-\frac{(l_\gamma +T)^2}{2\sigma^2}}\right]\cos(l_\gamma R)
\\ \geq
\frac{1}{\sqrt{8\pi}\sigma} \left[e^{TH(1-\delta)-T/2}e^{\Theta(R)(1-\delta) T}
e^{-\frac{1}{2\sigma^2}} +\cO(1)\right],\label{e:modulus}
\end{multline}
(using \eqref{e:H}), and thus is much greater than the integral \eqref{e:int}. To get the last $\cO(1)$ we have used the exponential growth of the number of closed geodesics. The left hand side of \eqref{e:tracegauss}
is more complicated to bound from above, since the $r_j$ now depend on $R$.
\begin{prop} \label{p:f}Take $\Theta(R)\geq \log R$, and such that $R^{-1}\Theta(R)\To 0$.
Take $\sigma^{-2}=C\Theta(R)$. Let $f(R)$ be such that $\sigma^2f(R)^2\gg T\Theta(R)$.
Then
$$|\sum_j e^{-\frac{\sigma^2}2(r_j-R)^2}e^{-iTr_j}|
\leq \sharp\left\{j, |\Re e(r_j)-R|\leq f(R) \right\} e^{\frac{\sigma^2}2 \sup_j \Im m(r_j)^2} e^{T\sup_j |\Im m(r_j)|}+\cO(1),$$
where the $\sup_j$ are taken over the set of indices $j$ such that $|\Re e(r_j)-R|\leq f(R) $.
\end{prop}
\begin{proof}
\begin{lem} We have an a priori bound $|\Im m(r_j)|\leq c\,\Theta(R)$, where $c$ depends only on $\omega_o$.
\end{lem}
\vspace{.3cm}
Indeed, let $r_j=x+iy$ and $f\in L^2(M)$ be such that $\norm{f}_{L^2}=1$ and
$$-\lap f+2\Theta(R)\la \omega_o, df\ra-\Theta(R)^2 \norm{\omega_o}^2_x f=\left(\frac14+r_j^2\right)f.$$
Taking the scalar product with $f$, we get
\begin{equation}\label{e:1}\frac14+x^2-y^2=\int |\nabla f|^2 -\Theta(R)^2\int
\norm{\omega_o}_x^2\abs{f}^2\end{equation}
and
\begin{equation}\label{e:2}2xy=-2\Theta(R) i\int\la\omega_o, df\ra\bar f . \end{equation}
Equation \eqref{e:2} yields $\abs{xy}\leq c\, \Theta(R) \sqrt{\int |\nabla f|^2}.$ Equation \eqref{e:1}
implies that $x^2\geq \int |\nabla f|^2 -c^2\Theta(R)^2.$ If $\abs{x}\geq \frac12\sqrt{\int |\nabla f|^2}$ then we are done, by \eqref{e:2}. If $\abs{x}\leq \frac12\sqrt{\int |\nabla f|^2}$, then \eqref{e:1} implies that $\int |\nabla f|^2 \leq 2c^2 \Theta(R)^2$ and that $y^2\leq \frac14 +5c^2 \Theta(R)^2.$
The lemma follows.
\vspace{.4cm}
We now break the sum
$\sum_j e^{-\frac{\sigma^2}2(r_j-R)^2}e^{-iTr_j}$ into three parts~: $I=\sum_{j,
\Re e(r_j)\leq R- f(R)}$, $II=\sum_{j,|\Re e(r_j)-R|\leq f(R)}$ and
$III=\sum_{j,
\Re e(r_j)\geq R+ f(R)}$.
The last sum $III$ is bounded by
$$e^{\frac{\sigma^2}2 c^2\Theta(R)^2} e^{cT\Theta(R)}\sum_{j,
\Re e(r_j)\geq R+ f(R)}e^{-\frac{\sigma^2}2(\Re e(r_j)-R)^2} .$$
We decompose this sum into $\sum_{n\geq 0} \sum_{R+ f(R)+n\leq\Re e(r_j)\leq R+ f(R)+n+1}$, and by Weyl's law in the form \eqref{e:Weyl4}, this is dominated by
$$
e^{cT\Theta(R)}\sum_{n\geq 0} (R+n+f(R))\Theta(R) e^{-\frac{\sigma^2}2(n+f(R))^2} \leq
e^{cT\Theta(R)}\Theta(R)\int_{f(R)-1}^{+\infty}(R+x)e^{-\frac{\sigma^2 x^2}2}dx
$$
and with our relations between $T, R, f(R)$ and $\sigma$, this last quantity is $\cO(1)$.
Concerning the first sum $I$, we bound it by
\begin{equation}\label{e:firstsum}e^{\frac{\sigma^2}2 c^2\Theta(R)^2} e^{cT\Theta(R)}\sum_{j,
\Re e(r_j)\leq R- f(R)}e^{-\frac{\sigma^2}2(\Re e(r_j)-R)^2}.\end{equation} The subsum $\sum_{j,
\Re e(r_j)\leq -R+ f(R)}$ can be treated as above and shown to be $\cO(1)$ (using Weyl's law in the form \eqref{e:Weyl4}), and we only need to concentrate on
$\sum_{j,
|\Re e(r_j)|\leq R- f(R)}$. To bound this sum, we first need to control the number of terms.
\begin{lem}$\sharp\left\{j, |\Re e(r_j)|\leq R\right\}=\cO(R^2\Theta(R)).$\label{l:Weyl5}
\end{lem}
To that end, we use again the trace formula and write~:
\begin{equation*}
\sum_j e^{-\frac{r_j^2}{2R^2}}
=\frac{Area(M)}{4\pi}\int_{-\infty}^{+\infty}r\tanh(\pi r)\, e^{-\frac{r^2}{2R^2}} dr +
\sum_\gamma \frac{e^{\Theta(R)\int_\gamma \omega_o}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}
\frac{R}{\sqrt{2\pi}}e^{-R^2 l_\gamma^2/2}.
\end{equation*}
On the right, the term $\sum_\gamma$ is clearly $o(1)$ whereas the $\int$ term is of order $ R^2$.
On the left, we break the sum into $\sum_{j, |\Re e (r_j)|\leq R}$ and $\sum_{j, |\Re e (r_j)|\geq R}$. As above, we can use Weyl's law in the form \eqref{e:Weyl4} to show that the $\sum_{j, |\Re e (r_j)|\geq R}e^{-\frac{r_j^2}{2R^2}} $
is $\cO\left(\sum_{n\geq 0}(R+n+1)\Theta(R)e^{-\frac{(R+n)^2}{2R^2}}\right)=\cO(R^2\Theta(R))$. Thus we have
$$\sum_{j, |\Re e( r_j)|\leq R}e^{-\frac{\Re e (r_j)^2-\Im m(r_j)^2}{2R^2}}e^{-\frac{i \Re e(r_j)\Im m(r_j)}{R^2}}
=\cO(R^2\Theta(R)).$$
Remember that $|\Im m(r_j)|\leq c\,\Theta(R)$ and that $R^{-1}\Theta(R)\To 0$. Thus, for $|\Re e (r_j)|\leq R$ we can write $e^{-\frac{i \Re e(r_j)\Im m(r_j)}{R^2}}=1+\cO(R^{-1}\Theta(R))$. This yields
\begin{eqnarray*} e^{-1/2}
\sharp\left\{j, |\Re e(r_j)|\leq R\right\}\left(1+\cO(R^{-1}\Theta(R))\right)&\leq& \sum_{j, |\Re e r_j|\leq R}e^{-\frac{\Re e (r_j)^2-\Im m(r_j)^2}{2R^2}}\left(1+\cO(R^{-1}\Theta(R))\right)
\\ &=&\cO(R^2\Theta(R))
\end{eqnarray*}
This finishes the proof of Lemma \ref{l:Weyl5}.
Now, we go back to the sum $\sum_{j,
|\Re e(r_j)|\leq R- f(R)}$ in \eqref{e:firstsum}, and we see that it is bounded by
$$ R^2\Theta(R) e^{\frac{\sigma^2}2 c^2\Theta(R)^2} e^{cT\Theta(R)}e^{-\sigma^2 f(R)^2/2}=\cO(1).$$
This ends the proof of Proposition \ref{p:f}.
\end{proof}
We can now come back to \eqref{e:tracegauss}. Noting that $\sigma^2 \sup_j \Im m(r_j)^2=\cO(1)$,
Proposition \ref{p:f} shows that the left-hand side of the trace formula \eqref{e:tracegauss}
is bounded from above by
$$C \sharp\left\{j, |\Re e(r_j)-R|\leq f(R) \right\} e^{T\sup_j |\Im m(r_j)|}+\cO(1)
\leq CRf(R)\Theta(R)e^{T\sup_j |\Im m(r_j)|}$$
where the $\sup$ is taken over all $j$ such that $|\Re e(r_j)-R|\leq f(R)$, and where we have used again \eqref{e:Weyl4}. We can of course assume, without loss of generality, that $f(R)=\cO(R)$.
On the right hand side of \eqref{e:tracegauss}, the $\int$ is $\cO(R\Theta(R)^{1/2})$, and the $\sum_\gamma$ is bounded from below by
$$\frac{1}{\sqrt{8\pi}\sigma} e^{TH(1-\delta)-T/2}e^{\Theta(R)(1-\delta) T}
e^{-\frac{1}{2\sigma^2}} ,$$
as in \eqref{e:modulus}. Writing
$$CRf(R)\Theta(R) e^{T\sup_j |\Im m(r_j)|}\geq \sigma^{-1} e^{TH(1-\delta)-T/2}e^{\Theta(R)(1-\delta) T}e^{-\frac{1}{2\sigma^2}},$$
remembering that $\Theta(R)\geq \log(R)$, $\Theta(R)=o(R)$, $\sigma^{-2}=c\,\Theta(R)$
and $T\asymp\log\log R$, we see that necessarily
$$\sup_j |\Im m(r_j)|\geq (1-2\delta)\Theta(R).$$
This finishes the proof of Theorem \ref{t:q3}.
\section{The arithmetic case.\label{s:arithm}}
Let $p\geq 3$ be a prime, $p\equiv 1\,({\rm mod } \;4)$, and $A\geq 1$ be a quadratic non-residue modulo $p$. We set
$$\Gamma=\Gamma(A, p)=\left\{\left( \begin{array}{cc}
y_0+y_1\sqrt{A} & y_2\sqrt{p}+y_3\sqrt{Ap} \\
y_2\sqrt{p}-y_3\sqrt{Ap} & y_0-y_1\sqrt{A} \\
\end{array} \right), y_0, y_1, y_2, y_3\in\IZ\right\}.$$
It is a discrete cocompact subgroup of $SL(2, \IR)$ which contains only hyperbolic transformations \cite{Hej}.
We consider the hyperbolic surface $M=\Gamma\backslash \IH$.
In $M$, the lengths of the closed geodesics are the $\log x_m$, where
$$x_m=2m^2-1+2m\sqrt{m^2-1},\qquad m\in\IN.$$ We define
$$\mu(m)=\sum_{\gamma,\, l_\gamma=\log x_m}e^{\int_\gamma \omega}l_{\gamma_o}.$$
We now follow very closely the approach of \cite{Hej}, pp. 304--314. We introduce an even function $k$ on $\IR$, whose Fourier transform is nonnegative and compactly supported in $[-1, 1]$; we also assume
that $\hat k\geq 1$ on $[-\frac12, \frac12]$. We define
$$K_{\alpha}(r)=k(r)[e^{i\alpha r}+e^{-i\alpha r}].$$
We write again the trace formula~: for all $t>0$,
\begin{multline}\label{e:traceshift}
\sum_j K_\alpha(r_j-t)+K_\alpha(r_j+t)=\frac{Area(M)}{4\pi}\int r \tanh(\pi r) [K_\alpha(r-t)+K_\alpha(r+t)]dr\\
+2\sum_\gamma \frac{e^{\int_\gamma \omega}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}\hat K_\alpha(l_\gamma)\cos(tl_\gamma).
\end{multline}
We will bound from below the right-hand side (averaged in $t$) to obtain information on the left-hand side. Denote
$$S_\alpha(t)=\sum_\gamma \frac{e^{\int_\gamma \omega}l_{\gamma_o}}{\sinh\frac{l_\gamma}2}\hat K_\alpha(l_\gamma)\cos(tl_\gamma)=
2\sum_{e^{\alpha-1}\leq x_m\leq e^{\alpha+1}}\frac{\mu(m)}{x_m^{1/2}+x_m^{-1/2}}\hat K_\alpha(\log x_m)\cos(t\log x_m).$$
\begin{prop}Let $\alpha=2\beta\log T-C$, $0<\beta\leq 1$ and $C$ large enough. Then,
$$\int_{2T-T^\beta}^{2T+T^\beta}\left(1-\frac{\abs{t-2T}}{T^\beta}\right)|S_\alpha(t)|^2 dt\geq \tilde C T^{\beta(4{\rm Pr}(\omega)-1)}.$$
\end{prop}
Although we are really interested in the quantity $\int_{2T-T^\beta}^{2T+T^\beta}|S_\alpha(t)|^2 dt$,
the reason for introducing the regularizing factor $\left(1-\frac{\abs{t-2T}}{T^\beta} \right)$ is exactly the same
as in \cite{Hej}, p. 315.
\begin{proof}
Introduce the notations
$$\eta(m)=\frac{\mu(m)}{x_m^{1/2}+x_m^{-1/2}}\hat K_\alpha(\log x_m),$$
$$\nu(m)=\mu(m)\hat K_\alpha(\log x_m).$$
Divide the integral $I=\int_{2T-T^\beta}^{2T+T^\beta}\left(1-\frac{\abs{t-2T}}{T^\beta}\right)|S_\alpha(t)|^2 dt$ into $I=I_1+I_2$,
where
$$I_1=\sum_{e^{\alpha-1}\leq x_m\leq e^{\alpha+1}}\eta(m)^2\int_{2T-T^\beta}^{2T+T^\beta}\left(1-\frac{\abs{t-2T}}{T^\beta}\right)\cos^2(t\log x_m)dt$$
and
$$I_2=2\sum_{e^{\alpha-1}\leq x_k< x_m\leq e^{\alpha+1}}\eta(m)\eta(k)\int_{2T-T^\beta}^{2T+T^\beta}\left(1-\frac{\abs{t-2T}}{T^\beta}\right)\cos(t\log x_m)\cos(t\log x_k)dt.$$
The idea is that the $x_m, x_k$, with $x_m\not= x_k$, are well-spaced, implying that the oscillatory integral $I_2$ is small compared to $I_1$.
\begin{lem} For $T\geq 1$ and $\lambda\in\IR$,
$$\int_{2T-T^\beta}^{2T+T^\beta}\left(1-\frac{\abs{t-2T}}{T^\beta}\right)e^{i\lambda t}dt=\cO\left[\min\left(T^\beta, \frac{1}{\lambda^2T^\beta}\right)\right].$$
\end{lem}
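The lemma in fact admits a closed form: for a triangular window of half-width $a=T^\beta$ centred at $2T$, the integral equals $e^{2i\lambda T}a\left(\frac{\sin(\lambda a/2)}{\lambda a/2}\right)^2$, which is bounded by $\min(a, 4/(\lambda^2 a))$. A numerical sketch, with arbitrary demo values of $T$, $\beta$, $\lambda$:

```python
import numpy as np

T, beta = 50.0, 0.5                  # arbitrary demo values
a = T**beta                          # half-width T^beta of the window around 2T

def I_numeric(lam, n=400001):
    """Quadrature of the triangular-window oscillatory integral of the lemma."""
    t = np.linspace(2*T - a, 2*T + a, n)
    w = 1.0 - np.abs(t - 2*T)/a      # triangular weight, vanishing at the endpoints
    return np.sum(w * np.exp(1j*lam*t)) * (t[1] - t[0])

def I_closed(lam):
    """Closed form e^{2 i lam T} * a * (sin(lam a/2)/(lam a/2))^2."""
    x = lam*a/2
    return np.exp(2j*lam*T) * a * (np.sin(x)/x)**2
```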
Let us first consider $I_1$.
\begin{eqnarray*}\int_{2T-T^\beta}^{2T+T^\beta}\left(1-\frac{\abs{t-2T}}{T^\beta}\right)\cos^2(t\log x_m)dt&=&\int_{2T-T^\beta}^{2T+T^\beta}\left(1-\frac{\abs{t-2T}}{T^\beta}\right)\frac{1+\cos(2t\log x_m)}2dt\\
&=& \frac{T^\beta}2+\cO(T^{-\beta}(\log x_m)^{-2}).
\end{eqnarray*}
Hence,
\begin{eqnarray}I_1&\geq& \left(\frac{T^\beta}2+o(1)\right)\sum_{e^{\alpha-1}\leq x_m\leq e^{\alpha+1}}\eta(m)^2
\\& \geq&\left(\frac{T^\beta}2+o(1)\right)\sum_{e^{\alpha-1}\leq x_m\leq e^{\alpha+1}}\frac{\nu(m)^2}{(x_m^{1/2}+x_m^{-1/2})^2}
\\& \geq& c_1T^\beta e^{-\alpha} \sum_{e^{\alpha-1}\leq x_m\leq e^{\alpha+1}}\nu(m)^2.\label{e:I_1}
\end{eqnarray}
The right-hand side of \eqref{e:I_1} will be estimated later.
We now turn to $I_2$, and want to show that it is much smaller than $I_1$. The integral
$$\int_{2T-T^\beta}^{2T+T^\beta}\left(1-\frac{\abs{t-2T}}{T^\beta}\right)\cos(t\log x_m)\cos(t\log x_k)dt$$
is $\cO\left(\frac{1}{T^\beta(\log x_m-\log x_k)^2}\right)$. We can ensure that
$$\frac{1}{\abs{\log x_m-\log x_k}}\leq cT^\beta$$
for $e^{\alpha-1}<x_k<x_m<e^{\alpha+1}$, by choosing $\alpha$ in an appropriate range~: writing
$$\log x_m-\log x_{m-1}\sim \frac{x_m-x_{m-1}}{x_m}\sim \frac2m\sim \frac4{\sqrt{x_m}},$$
we see we have to take $\alpha\leq 2\beta\log T-C$ ($C$ large). More generally, we have by the intermediate value theorem
$$|\log x_m-\log x_{k}|\geq\tilde C e^{-\alpha/2}|m-k| .$$
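These spacing estimates are elementary enough to check numerically. The sketch below verifies that $\log x_m-\log x_{m-1}\sim 4/\sqrt{x_m}$ and tests the lower bound for consecutive indices on a sample window; the value of $\alpha$ and the constant $\tilde C=1/2$ are demo choices.

```python
import math

def x(m):
    """x_m = 2m^2 - 1 + 2m*sqrt(m^2 - 1); the closed geodesics have length log x_m."""
    return 2*m*m - 1 + 2*m*math.sqrt(m*m - 1)

def log_gap(m):
    return math.log(x(m)) - math.log(x(m - 1))

# Consecutive gaps behave like 4/sqrt(x_m) for large m.
ratios = [log_gap(m) * math.sqrt(x(m)) / 4 for m in (10, 100, 1000)]

# Spacing lower bound on a window e^(alpha-1) <= x_m <= e^(alpha+1):
# the indices in the window form an interval of integers, so it suffices
# to check consecutive gaps.
alpha = 12.0
ms = [m for m in range(2, 10**4)
      if math.exp(alpha - 1) <= x(m) <= math.exp(alpha + 1)]
min_gap = min(log_gap(m) for m in ms[1:])
```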
The analysis in \cite{Hej}, pp. 310--311, can be applied {\em verbatim} to show that
\begin{equation}\label{e:I_2}\abs{I_2}\leq\frac{\tilde C}{T^\beta}\sum_{e^{\alpha-1}\leq x_m\leq e^{\alpha+1}}\nu(m)^2.\end{equation}
Comparing \eqref{e:I_1} and \eqref{e:I_2}, we see that
$$\abs{I_2}\leq \frac{I_1}{100}$$
provided $\alpha\leq 2\beta\log T-C$ with $C$ sufficiently large. Thus,
$$I\geq \frac{99}{100}I_1.$$
To complete our estimate for $I$, we must return to equation \eqref{e:I_1}. Clearly,
$$\sum_{e^{\alpha-1}\leq x_m\leq e^{\alpha+1}}\nu(m)^2\geq
\sum_{e^{\alpha-1/2}\leq x_m\leq e^{\alpha+1/2}}\mu(m)^2.$$
We write the Cauchy-Schwarz inequality,
$$\left[\sum_{e^{\alpha-1/2}\leq x_m\leq e^{\alpha+1/2}}\mu(m)\right]^2\leq
\left(\sum_{e^{\alpha-1/2}\leq x_m\leq e^{\alpha+1/2}} 1\right)
\left(\sum_{e^{\alpha-1/2}\leq x_m\leq e^{\alpha+1/2}}\mu(m)^2\right).$$
But
$$\sum_{e^{\alpha-1/2}\leq x_m\leq e^{\alpha+1/2}}\mu(m)= \sum_{\gamma, \alpha-1/2\leq l_\gamma\leq\alpha+1/2}e^{\int_\gamma \omega}l_{\gamma_o}\geq \tilde C\,e^{\alpha {\rm Pr}(\omega)},$$
see \cite{PP}, p. 117.
On the other hand,
$$\sum_{e^{\alpha-1/2}\leq x_m\leq e^{\alpha+1/2}} 1=\cO(e^{\alpha/2}).$$
We obtain this way
$$\sum_{e^{\alpha-1/2}\leq x_m\leq e^{\alpha+1/2}}\mu(m)^2\geq C e^{2\alpha {\rm Pr}(\omega)-\alpha/2}.$$
We have proved
$$\int_{2T-T^\beta}^{2T+T^\beta}\left(1-\frac{\abs{t-2T}}{T^\beta}\right)|S_\alpha(t)|^2 dt\geq \tilde C T^{\beta(4{\rm Pr}(\omega)-1)}.$$
\end{proof}
This implies
$$|S_\alpha(t)|\geq C t^{\beta(2{\rm Pr}(\omega)-1)}$$
for some $t\in [2T-T^\beta, 2T+T^\beta]$.
Now consider the integrals $\int r \tanh(\pi r) K_\alpha(r-t)dr$ and $\int r \tanh(\pi r) K_\alpha(r+t)dr$
in \eqref{e:traceshift}. We write
$$\int r \tanh(\pi r) K_\alpha(r-t)dr =\int r \tanh(\pi r)k(r-t) [e^{i\alpha (r-t)}+e^{-i\alpha (r-t)}]dr.$$
To evaluate
$\int (r+t) \tanh(\pi (r+t))k(r) e^{i\alpha r} dr$, we shift the integral over $\IR$ to an integral over
$(\frac12-\eps)i +\IR$, and we find that the integral is $\cO(te^{-|\alpha|(\frac12-\eps)})$ for any $\eps>0$.
\begin{lem}
Both $\int r \tanh(\pi r) K_\alpha(r-t)dr$ and $\int r \tanh(\pi r) K_\alpha(r+t)dr$ are $\cO(te^{-|\alpha|(\frac12-\eps)})=\cO(t^{1+\eps-\beta})$
for any $\eps>0$.
\end{lem}
We finally turn to
$\sum_j K_\alpha(r_j-t)+K_\alpha(r_j+t)$. Fixing a small $\eps>0$, one sees using Weyl's law, the fact that $k$ is rapidly decreasing in any horizontal strip -- and the fact that the $\Im m(r_j)$ are bounded -- that
$$\sum_j K_\alpha(r_j-t)=\sum_{j, |\Re e(r_j)-t|\leq t^\eps}K_\alpha(r_j-t)+\cO(t^{-\infty}).$$
Similarly,
$$\sum_j K_\alpha(r_j+t)=\sum_{j, |\Re e(r_j)+t|\leq t^\eps}K_\alpha(r_j+t)+\cO(t^{-\infty}).$$
We see that
$$|\sum_j K_\alpha(r_j-t)+K_\alpha(r_j+t)|\leq C e^{\alpha\sup_j|\Im m(r_j)|}t^{1+\eps}\leq C t^{2\beta\sup_j|\Im m(r_j)|}t^{1+\eps}$$
where the $\sup_j$ is taken over the $j$ such that $|\Re e(r_j)\pm t|\leq t^\eps.$
We have proved that there exists $t\in [T-T^\beta, T+T^\beta]$, and $r_j$ with $|\Re e(r_j)- t|\leq t^\eps,$
such that
$$t^{2\beta\sup_j|\Im m(r_j)|}t^{1+\eps}\geq \tilde C t^{\beta(2{\rm Pr}(\omega)-1)}.$$
In particular, if $T$ is large enough, this implies
$$\sup\left\{|\Im m(r_j)|, |\Re e(r_j)\pm T|\leq T^\beta\right\}\geq {\rm Pr}(\omega)-\frac12 -\frac{1+\eps}{2\beta},$$
and this proves Theorem \ref{t:q3arithm}.
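For the reader's convenience, let us record the elementary exponent bookkeeping behind this last step (it is just a rearrangement of the previous inequality): taking logarithms in $t^{2\beta\sup_j|\Im m(r_j)|}t^{1+\eps}\geq \tilde C t^{\beta(2{\rm Pr}(\omega)-1)}$ gives
$$2\beta\sup_j|\Im m(r_j)|+1+\eps\geq \beta(2{\rm Pr}(\omega)-1)+\frac{\log\tilde C}{\log t},$$
and dividing by $2\beta$, the term $\frac{\log\tilde C}{\log t}$ being negligible for $T$ (hence $t$) large, yields exactly the stated lower bound on $\sup_j|\Im m(r_j)|$.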
\section{Symbol classes\label{s:symbols}}
Following \cite{DS}, for any $0\leq \delta <1/2$ we introduce the symbol class
\bequ\label{e:symbol-eps}
S_\delta^{m}\stackrel{\rm{def}}{=}\set{a\in C^\infty(T^*M),\ |\partial_x^\alpha \partial_{\xi}^\beta a|\leq
C_{\alpha,\beta}\, \hbar^{-\delta|\alpha+\beta|}\,\la\xi\ra^{m-|\beta|}}\,.
\eequ
If $\delta=0$ we will simply write $S^{m}$.
We will denote $\Op_\hbar(a(x, \xi))=\Op(a(x, \hbar\xi))$, where $\Op$ is a quantization procedure on $M$.
The quantization of any $a\in S_\delta^{0}$ leads
to a bounded operator on $L^2(M)$ (the norm being bounded uniformly in $\hbar$), see \cite{DS}.
We will denote $\Psi DO_\delta^m=\Op_\hbar(S_\delta^{m}).$
We use~:
\begin{prop}\label{p:PDO}Let $a\in S_\delta^{m}, b\in S_{\delta'}^{n}$ with $0\leq\delta'\leq\delta<1/2$.
Then\\
(i) $\Op_\hbar(a)\Op_\hbar(b)-\Op_\hbar(ab)\in \hbar^{1-\delta-\delta'}\Op_\hbar(S_\delta^{m+n-1})$.\\
(ii) $[\Op_\hbar(a),\Op_\hbar(b)]-\frac\hbar{i}\Op_\hbar(\left\{a, b\right\})\in \hbar^{2(1-\delta-\delta')}\Op_\hbar(S_\delta^{m+n-2})$.\\
\end{prop}
We also use a local form of the Calderon-Vaillancourt estimate \cite{DS}~:
\begin{prop}\label{p:CV} There exists $K\in\IN$ depending only on the dimension of $M$, such that the following holds. Take $A=\Op_\hbar(a)$ where $a\in\Psi DO_\delta^2$. Let $I$ be an open interval of $\IR^+$ and let $\lambda$ belong to $I$. Then, there exists $C>0$, and $C(a,\lambda)$ depending on a finite number of seminorms of $a$ (uniform in $\lambda$ if it stays inside a compact subset of $I$), such that, for all $u\in L^2(M)$,
$$\norm{Au}_{L^2}\leq C \left( \sup_{p_o^{-1}(I)}|a|+\sum_{k=1}^K\hbar^k\sup_{p_o^{-1}(I)}|D^{2k} a| \right)\norm{u}_{L^2}+C(a, \lambda) \norm{(P-\lambda)u}_{L^2}.$$
In fact $C(a, \lambda)$ is controlled by the supremum norm of $\frac{a}{p_o-\lambda}$ and a finite number of its derivatives outside $p_o^{-1}(I)$.
Similarly we have
$$\abs{\la u,Au\ra}\leq C \left( \sup_{p_o^{-1}(I)}|a|+\sum_{k=1}^K\hbar^k\sup_{p_o^{-1}(I)}|D^{2k} a| \right)\norm{u}_{L^2}+C(a, \lambda) \norm{(P-\lambda)u}^2_{L^2}.$$
\end{prop}
\section{Appendix~: Sj\"ostrand's proof of Proposition \ref{p:plagiat}\label{a:plagiat}}
In order to prove Proposition \ref{p:plagiat}, we first want to bound the norm and trace norm of
$$f\left(\frac{2P-1}\hbar\right)(\tQ_T-Q_T)f\left(\frac{2P-1}\hbar\right).$$
We write a Calderon-Vaillancourt type estimate,
$$\norm{(\tQ_T-Q_T)u}\leq \left(\sup_{p_o^{-1}]\frac12-\eps, \frac12+\eps[}(\tilde q^T-q^T)+\cO(\hbar^{1-2\delta})\right)\norm{u}+\cO(1)\norm{(2P-1)u},$$
where $\delta$ is as in Proposition \ref{p:ave}.
Besides, $\norm{(2P-1)f\left(\frac{2P-1}\hbar\right)}=\cO(\hbar)$. It follows that
$$\left\Vert f\left(\frac{2P-1}\hbar\right)(\tQ_T-Q_T)f\left(\frac{2P-1}\hbar\right)\right\Vert\leq
\norm{f}^2_\infty \left(\sup_{p_o^{-1}]\frac12-\eps, \frac12+\eps[}(\tilde q^T-q^T)+\cO(\hbar^{1-2\delta})\right).$$
For the trace class norm, we need to be even more careful than in \cite{Sj00}. Instead of using the G\aa rding inequality, we use the existence of a positive quantization. If we choose such, we have directly that
$f\left(\frac{2P-1}\hbar\right)(\tQ_T-Q_T)f\left(\frac{2P-1}\hbar\right)\geq 0$ in the operator sense. Thus,
\begin{eqnarray*}\left\Vert f\left(\frac{2P-1}\hbar\right)(\tQ_T-Q_T)f\left(\frac{2P-1}\hbar\right)\right\Vert_{1}&=&
{\rm Tr}\, f\left(\frac{2P-1}\hbar\right)(\tQ_T-Q_T)f\left(\frac{2P-1}\hbar\right)\\
&=& {\rm Tr}\, f\left(\frac{2P-1}\hbar\right)^2(\tQ_T-Q_T) \\
&=&{\rm Tr}\, \frac1{2\pi}\int \hat{f^2}(t)e^{it\frac{2P-1}\hbar}(\tQ_T-Q_T)dt.
\end{eqnarray*}
Writing the expansion of $e^{it\frac{2P-1}\hbar}$ as a Fourier Integral Operator, writing the trace as the integral of the kernel, and applying the stationary phase method in the time-energy variables,
we obtain an asymptotic expansion
\begin{multline*}{\rm Tr}\, \frac1{2\pi}\int \hat{f^2}(t)e^{it\frac{2P-1}\hbar}(\tQ_T-Q_T)dt\\
= C_d\hbar^{2-d}\left[\hat{f^2}(0)\int_{p_o^{-1}\left(\frac12\right)}(q^T-\tilde q^T)L_{\frac12}(d\rho)+\sum_{k=1}^{N-1}\hbar^k D^{2k}_t\hat{f^2}(0)\int_{p_o^{-1}\left(\frac12\right)}D^{2k}_\rho(q^T-\tilde q^T)L_{\frac12}(d\rho)+\cO(\hbar^{N(1-2\delta)})\right],
\end{multline*}
where $D^{2k}_t$ and $D^{2k}_\rho$ are differential operators of degree $\leq 2k$, respectively on
$\IR$ and $T^*M$. Note that the term $\hbar^k D^{2k}_t\hat{f^2}(0)\int_{p_o^{-1}\left(\frac12\right)}D^{2k}_\rho(q^T-\tilde q^T)L_{\frac12}(d\rho)$ is a $\cO(\hbar^{k(1-2\delta)})L_{\frac12}(\tilde q^T\not=q^T)=o(1)L_{\frac12}(\tilde q^T\not=q^T).$ This proves, in particular, Corollary \ref{c:maincoro}.
To finish the proof of Proposition \ref{p:plagiat}, it remains to study the invertibility of $z-\tilde\cP_T$. Recall the identity
$$\norm{(A+iB)u}^2=\norm{Au}^2+\norm{Bu}^2+i\la u, [A, B] u\ra,$$
if $A, B$ are bounded self-adjoint operators. Thus,
\begin{multline*}
2\norm{(\tilde\cP_T-z)u}^2\geq\norm{(P+i\hbar\hat Q_T-z)u}^2-\cO(\hbar^{4(1-\delta)})(\norm{(2P-1)u}^2+\norm{u}^2)\\
\geq\norm{(P-\Re e(z))u}^2+\hbar^2\norm{\left(\frac{\Im m(z)}\hbar-\hat Q_T \right)u}^2
+i\hbar\la u, [P, \hat Q_T]u\ra \\-
\cO(\hbar^{4(1-\delta)})(\norm{(2P-1)u}^2+\norm{u}^2)\\
=\norm{(P-\Re e(z))u}^2+\hbar^2\norm{\left(\frac{\Im m(z)}\hbar-\hat Q_T \right)u}^2\\
+\left(\cO(1)\frac{\hbar^2}T(1+\norm{f}^2_\infty)+\cO(\hbar^{3-2\delta})\right)\norm{u}^2
+\cO(\hbar^2)\norm{(P-\Re e(z))u}^2
\end{multline*}
We have used \eqref{e:comm} (or Proposition \ref{p:CV}), and the same for $\tilde Q_T$.
We find that
\begin{equation}\label{e:real}
\sqrt{3}\norm{(\tilde\cP_T-z)u}\geq \norm{(P-\Re e(z))u}-(\cO(\frac\hbar{\sqrt T})+\cO(\hbar^{\frac32-\delta}))\norm{u}.
\end{equation}
On the other hand, we have
\begin{multline}
\Im m \left\la \frac1{\hbar}(z-\tilde\cP_T)u, u\right\ra\\
= \left\la \left(\frac{\Im m(z)}{\hbar}-\hat Q_T\right)u, u\right\ra +\cO(\hbar^{1-2\delta})(\norm{u}+\norm{(\Re e(z)-P)u})\norm{u}\\
= \left\la\left( \frac{\Im m(z)}{\hbar}- Q_T+f\left(\frac{2P-1}\hbar\right)(\tQ_T-Q_T)f\left(\frac{2P-1}\hbar\right)\right)u, u\right\ra \\+\cO(\hbar^{1-2\delta})(\norm{u}+\norm{(\Re e(z)-P)u})\norm{u}\\
= \left\la\left( \frac{\Im m(z)}{\hbar}- Q_T+f\left(\frac{2\Re e(z)-1}\hbar\right)^2(\tQ_T-Q_T)\right)u, u\right\ra \\+\cO(\hbar^{1-2\delta})(\norm{u}+\norm{(\Re e(z)-P)u})\norm{u}
\\-\left(2\sup_{p_o^{-1}]\frac12-\eps, \frac12+\eps[}(q^T-\tilde q^T)\norm{f}_\infty\norm{f'}_\infty
+\cO(\hbar^{1-2\delta})\right)\norm{u}\left\Vert \frac{P-\Re e(z)}\hbar u \right\Vert
\label{e:im}
\end{multline}
by the same trick as in \cite{Sj00}, (3.19). Recall that we are interested in a region where
$z-\frac12=\cO(\hbar).$
\vspace{.4cm}
Let $\alpha(E)>0$ be a continuous function defined on a bounded interval $J$ containing $0$, and restrict $z$ by assuming that
$$\frac{\Im m(\zeta)}{2\hbar}-q^T+f\left(\frac{\Re e(\zeta)}\hbar\right)^2(q^T-\tilde q^T)\geq \alpha\left(\frac{\Re e(\zeta)}\hbar\right), $$ near $p_o^{-1}\left(\frac12\right), \frac{\Re e(\zeta)}\hbar\in J$
(where $\zeta=2z-1$). It follows from the G\aa rding inequality that for such $z$
\begin{multline*}\left\la\left( \frac{\Im m(z)}{\hbar}- Q_T+f\left(\frac{2\Re e(z)-1}\hbar\right)^2(\tQ_T-Q_T)\right)u, u\right\ra\\
\geq \left( \alpha\left(\frac{\Re e(\zeta)}\hbar\right)-\cO(\hbar^{1-2\delta})\right)\norm{u}^2-\cO(1)
\norm{u}\norm{(P-\Re e(z))u}.
\end{multline*}
Using this in \eqref{e:im}, we get
\begin{multline}
\Im m \left\la \frac1{\hbar}(z-\tilde\cP_T)u, u\right\ra \geq \left( \alpha\left(\frac{\Re e(\zeta)}\hbar\right)-\cO(\hbar^{1-2\delta})\right)\norm{u}^2\\
-\left(2\sup_{p_o^{-1}]\frac12-\eps, \frac12+\eps[}(q^T-\tilde q^T)\norm{f}_\infty\norm{f'}_\infty
+\cO(\hbar^{1-2\delta})\right)\norm{u}\left\Vert \frac{P-\Re e(z)}\hbar u \right\Vert.
\end{multline}
Reasoning as in \cite{Sj00}, (3.25) and (3.26), we find finally
\begin{multline*}
\Im m \left\la \frac1{\hbar}(z-\tilde\cP_T)u, u\right\ra \geq \left( \alpha\left(\frac{\Re e(\zeta)}\hbar\right)-\cO(\hbar^{1-2\delta})\right)\norm{u}^2\\
-\sqrt 3\left(2\sup_{p_o^{-1}]\frac12-\eps, \frac12+\eps[}(q^T-\tilde q^T)\norm{f}_\infty\norm{f'}_\infty
+\cO(\hbar^{1-2\delta})\right)\norm{u}\left\Vert \frac{\tilde\cP_T-z}\hbar u \right\Vert
\\-(\cO(\frac1{\sqrt T})+\cO(\hbar^{\frac12-\delta}))\norm{u}^2 ,
\end{multline*}
and
\begin{multline*}
\left( \alpha\left(\frac{\Re e(\zeta)}\hbar\right)+\cO(\frac1{\sqrt T})+\cO(\hbar^{\frac12-\delta})\right)\norm{u}
\\\leq \left[1+2\sqrt 3 \sup_{p_o^{-1}]\frac12-\eps, \frac12+\eps[}(q^T-\tilde q^T)\norm{f}_\infty\norm{f'}_\infty
+\cO(\hbar^{1-2\delta})\right]\left\Vert \frac{\tilde\cP_T-z}\hbar u \right\Vert
\end{multline*}
which finishes the proof of Proposition \ref{p:plagiat}.
Despite all its success, quantum mechanics, after more than one hundred years, is still under debate. The problems with its interpretation all originate from the choice to formulate the theory using the Hilbert space formalism. Here we will discuss why this choice is unavoidable and suggest an interesting motivation for this unavoidability.\newline
In general, there are two main basic observations that can be made experimentally when one deals with non-relativistic quantum systems
\begin{enumerate}
\item[\textbf{O1}] the outcome of an experiment on a quantum system is probabilistic;
\item[\textbf{O2}] there are physical quantities that can be measured \emph{simultaneously} and others that cannot.
\end{enumerate}
The first observation is a common feature of all physical systems, once one takes seriously into account the fact that even the best experimental physicist in the world can only perform measurements with finite resolution. The second observation is the distinctive feature of a quantum system and, as we shall see, it is the origin of all the differences between classical and quantum systems. Starting from these two observations, one can derive (as a logical consequence and with a few further assumptions) almost all the postulates of quantum mechanics. In what follows we will present two approaches to such a derivation, pointing out where we need to make an assumption in order to continue the reconstruction. The first approach is based on (quantum) logical arguments, while the second makes use of operational considerations. Finally we will present a toy model that motivates the second observation (deriving it from the first, in some sense) for the description of a point-like particle moving over a random space.
\section{Propositions about a quantum system: QM from QL}\label{sec2}
In this section, the quantum logic derivation of the Hilbert space structure of quantum mechanics will be briefly reviewed. The main idea behind this approach is to find the theoretical foundations of the postulates of a quantum theory starting from the propositions that one can formulate about a quantum system. This is the so-called quantum logic (QL) approach to the foundations of quantum mechanics (QM). For a more detailed treatment, we refer to \cite{gBjvN}, \cite{eBgC}, \cite{mpS} for quantum logic and \cite{vM} for the mathematical formulation of the postulates of quantum mechanics. \newline
\subsection{Propositions for quantum systems}
In everyday life, it is common to formulate \emph{propositions} to describe something, and this of course also holds in science. Propositions are the basic outcomes of any experiment, and so it is reasonable to expect that some basic features of the physical system under study can be deduced from the propositions we may formulate from the experiments. If a quantity $A$ can be measured (i.e.\ assigned an objective numerical value), it is common to formulate propositions like
\begin{center}
\emph{\textgravedbl$A$ takes the value $v_A$\textacutedbl}
\end{center}
or better, taking into account the finite resolution of any measurement device (hence \textbf{O1})
\begin{center}
\emph{\textgravedbl$A$ takes value in $[a,b]$\textacutedbl}
\end{center}
These propositions can be considered the simplest possible ones. Introducing the very natural logical connectives AND/OR
and considering a second measurable quantity $B$, one may also formulate composite propositions. Two basic examples are
\begin{center}
\emph{\textgravedbl$A$ takes value $v_A$ OR $B$ takes value $v_B$\textacutedbl}\\
\emph{\textgravedbl$A$ takes value $v_A$ AND $B$ takes value $v_B$\textacutedbl}
\end{center}
In everyday life, both propositions make sense. Nevertheless, for a quantum system, the second proposition cannot be formulated in general: if $A$ and $B$ cannot be measured simultaneously, the measurement needed to formulate this proposition cannot in general be performed, because of \textbf{O2}. We can see that the effect of \textbf{O2} is to reduce the number of propositions that we may formulate using AND. Let us also observe the following fact: the proposition
\begin{center}
\emph{\textgravedbl $A$ takes value $v_A$ IMPLIES THAT $B$ takes value $v_B$\textacutedbl}
\end{center}
makes sense only for quantities that can be measured at the same time, which is again a consequence of \textbf{O2}. In order to explore the consequences of this further, we will adopt the following conventions: a proposition like \emph{\textgravedbl $A$ takes value $v_A$\textacutedbl} or \emph{\textgravedbl $A$ takes value in $[a,b]$\textacutedbl} will be labeled simply by $a$, the logical connector AND by $\wedge$, OR by $\vee$, the implication by $\Rightarrow$ and finally the logical negation by $\neg$. We say that two propositions are equal, $a = b$, when $a \Rightarrow b$ and $b \Rightarrow a$. With this notation, the observation \textbf{O2} restricts the number of propositions about a physical system of the form $a \wedge b$ which make sense. Before going on, suppose $X$ may take only two values, $v_X$ and $u_X$. Then consider the following proposition
\begin{center}
\emph{\textgravedbl $Y$ takes value $v_Y$ AND, $X$ takes value $v_X$ OR $X$ takes value $u_X$\textacutedbl}
\end{center}
At first sight, this seems equivalent to
\begin{center}
\emph{\textgravedbl $Y$ takes value $v_Y$ AND $X$ takes value $v_X$, OR $Y$ takes value $v_Y$ AND $X$ takes value $u_X$\textacutedbl}
\end{center}
Using the conventions introduced above, $a \wedge (b \vee c) = (a \wedge b) \vee (a \wedge c)$. It is not difficult to understand that if $X$ and $Y$ cannot be measured at the same time, the second proposition does not make sense: this means that for the possible propositions that we can formulate about a quantum system, in general
\begin{equation}\label{dist}
a \wedge (b \vee c) \neq (a \wedge b) \vee (a \wedge c)
\end{equation}
where $\neq$ simply means that it is not true that they are equivalent. Consider now a different situation. Let $X,Y,Z$ be three measurable quantities. If
\begin{center}
\emph{\textgravedbl $X$ takes value $v_X$ IMPLIES THAT $Y$ takes value $v_Y$\textacutedbl}
\end{center}
is true, namely the value assumed by $X$ determines the value of $Y$, then
\begin{center}
\emph{\textgravedbl$X$ takes value $v_X$, OR $Y$ takes value $u_Y$ AND $Z$ takes value $w_Z$ IMPLIES THAT $Y$ takes value $u_Y$, AND $X$ takes value $v_X$ OR
$Z$ takes value $w_Z$\textacutedbl}
\end{center}
which in symbols can be written as: if $a \Rightarrow b$ then $a \vee (b \wedge c) \Rightarrow b \wedge (a \vee c)$. Again we can see that, if $a$ and $c$ cannot be measured at the same time (and so $b$ cannot be measured at the same time as $c$ either), the first part of this proposition in general doesn't make sense. This means that for the possible propositions we can formulate about a quantum system, in general
\begin{equation}\label{modu}
\mbox{if } a \Rightarrow b \mbox{ then } a \vee (b \wedge c) \nRightarrow b \wedge (a \vee c)
\end{equation}
This is again a consequence of \textbf{O2}. Finally we observe that in general when
\begin{center}
\emph{\textgravedbl $X$ takes value $v_X$ IMPLIES THAT $Y$ takes value $v_Y$\textacutedbl}
\end{center}
is true, for $u_Y$ arbitrary, the proposition
\begin{center}
\emph{\textgravedbl$X$ takes value $v_X$, OR $X$ DOES NOT take value $v_X$ AND $Y$ takes value $u_Y$ IS EQUIVALENT TO $Y$ takes value $u_Y$\textacutedbl}
\end{center}
always makes sense for a quantum system, since the two physical quantities can always be measured at the same time by assumption. In symbols we can write that, for the set of propositions about a quantum system,
\begin{equation}\label{othomodu}
\mbox{if } a \Rightarrow b \mbox{ then } a \vee (\neg a \wedge b) = b
\end{equation}
holds. From this discussion we can understand that the logical connectors OR, AND, NOT, IS EQUIVALENT TO and IMPLIES THAT cannot be used in a straightforward manner for a quantum system: thus the usual logic is not suitable in this case. As we will see, (\ref{dist}), (\ref{modu}) and (\ref{othomodu}) will help us to select the right structure to describe mathematically the set of all the propositions we may formulate about a quantum system, namely to implement \textbf{O2}.
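The failure of distributivity expressed by (\ref{dist}) can be checked concretely in the prototypical quantum example: the lattice of subspaces of a (here real, two-dimensional) Hilbert space, where the meet is the intersection and the join is the span. The following sketch is an illustration of ours, not part of the original argument; it uses NumPy and represents each subspace by its orthogonal projector.

```python
import numpy as np

def proj(*vecs):
    """Orthogonal projector onto the span of the given vectors."""
    A = np.column_stack(vecs).astype(float)
    U, s, _ = np.linalg.svd(A)
    r = int(np.sum(s > 1e-10))
    return U[:, :r] @ U[:, :r].T

def join(P, Q):
    # smallest subspace containing both ranges: span of all the columns
    return proj(*(list(P.T) + list(Q.T)))

def meet(P, Q):
    # intersection of the ranges: orthocomplement of the join of orthocomplements
    I = np.eye(P.shape[0])
    return I - join(I - P, I - Q)

# three distinct lines through the origin in R^2
a, b, c = proj([1, 0]), proj([0, 1]), proj([1, 1])

lhs = meet(a, join(b, c))            # a AND (b OR c): b OR c is the whole plane, so this is a
rhs = join(meet(a, b), meet(a, c))   # (a AND b) OR (a AND c): both meets are the zero subspace
```

Here `lhs` equals the projector `a` while `rhs` is the zero projector, so $a\wedge(b\vee c)\neq(a\wedge b)\vee(a\wedge c)$, exactly the failure recorded in (\ref{dist}); one can also check that the orthomodular law survives in this lattice.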
\subsection{From propositions to lattice}
Let us now try to formalise mathematically the discussion above. In order to do that, we need to state some technical definitions.
\begin{definition}
Let $X$ be a set. A relation $\preccurlyeq$ on $X$ is said to be a \emph{partial order} if it is reflexive ($x\preccurlyeq x$, $\forall x \in X$), transitive ($x \preccurlyeq y$ and $y \preccurlyeq z$ implies $x \preccurlyeq z$, $\forall x,y,z \in X$) and antisymmetric ($x \preccurlyeq y \preccurlyeq x$ implies $x = y$, $\forall x,y \in X$). The couple
$(X,\preccurlyeq)$ is called a \emph{poset}.
\end{definition}
Using the ordering relation of the poset, one may define the following
\begin{definition}
Let $(X,\preccurlyeq)$ be a poset and consider a subset $Y \subset X$. A \emph{lower bound of $Y$} (\emph{upper bound of $Y$}) is an element $a \in X$ such that $a \preccurlyeq x$ ($x \preccurlyeq a$) for any $x \in Y$. The \emph{greatest lower bound, GLB} (\emph{least upper bound, LUB}) of $Y$ is a lower bound (upper bound) $b$ of $Y$ such that $a \preccurlyeq b$ ($b \preccurlyeq a$) for every lower bound (upper bound) $a$ of $Y$.
\end{definition}
It is not difficult to see that if the GLB (LUB) exists it is unique. Now we are ready to introduce the central mathematical concept of this paragraph.
\begin{definition}
A poset $(X,\preccurlyeq)$ is a \emph{lattice} if for any $x,y \in X$ the GLB and LUB of $\{x,y\}$ always exist (denoted $x \wedge y$ and $x \vee y$, respectively).
\end{definition}
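As a concrete (classical) example, not taken from the original text: the divisors of $12$ ordered by divisibility form a lattice, with GLB and LUB given by the gcd and the lcm. A brute-force check in Python:

```python
from math import gcd

divs = [1, 2, 3, 4, 6, 12]            # divisors of 12
leq = lambda x, y: y % x == 0         # partial order: x divides y

def glb(x, y):
    # lower bounds of {x, y}, then the one that dominates all the others
    lb = [z for z in divs if leq(z, x) and leq(z, y)]
    return next(g for g in lb if all(leq(z, g) for z in lb))

def lub(x, y):
    # upper bounds of {x, y}, then the one dominated by all the others
    ub = [z for z in divs if leq(x, z) and leq(y, z)]
    return next(s for s in ub if all(leq(s, z) for z in ub))
```

One checks for instance that $\mathrm{glb}(4,6)=\gcd(4,6)=2$ and $\mathrm{lub}(4,6)=\mathrm{lcm}(4,6)=12$, and that $x\wedge y=x$ exactly when $x\preccurlyeq y$.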
Not all posets are lattices. The symbols $\wedge$ and $\vee$ used in the definition above can be defined as the following maps
\begin{enumerate}
\item[a)] $\wedge: X \times X \rightarrow X$, such that for any $x,y,z \in X$, then $x \wedge y \preccurlyeq x$, $x \wedge y \preccurlyeq y$ and, if $z \preccurlyeq x$ and $z \preccurlyeq y$ then $z \preccurlyeq x \wedge y$.
\item[b)] $\vee: X \times X \rightarrow X$, such that for any $x,y,z \in X$, then $x \preccurlyeq x \vee y$, $y \preccurlyeq x \vee y$ and, if $x \preccurlyeq z$ and $y \preccurlyeq z$ then $x \vee y \preccurlyeq z$.
\end{enumerate}
and it is not difficult to see that the relations $x \wedge y = x$, $x \vee y = y$ and $x \preccurlyeq y$ are equivalent. Lattices are classified according to the following
\begin{definition}
A lattice $(X,\preccurlyeq)$ is said
\begin{enumerate}
\item[a)] \emph{distributive} if $x \wedge (y \vee z) = (x \wedge y) \vee (x \wedge z)$, $\forall x,y,z \in X$;
\item[b)] \emph{modular} if $x \preccurlyeq y$ implies $x \vee (y \wedge z) = y \wedge (x \vee z)$, $\forall x,y,z \in X$;
\item[c)] \emph{bounded} if there exist two elements $\mathbf{0} \in X$ and $\mathbf{1} \in X$ such that $\mathbf{0} \preccurlyeq x \preccurlyeq \mathbf{1}$, $\forall
x \in X$;
\item[d)] \emph{orthocomplemented} if it is bounded and equipped with an operation $x \mapsto \neg x$ (called \emph{orthocomplementation}) such that
\begin{enumerate}
\item[i)] $x \vee \neg x = \mathbf{1}$, $\forall x \in X$;
\item[ii)] $x \wedge \neg x = \mathbf{0}$, $\forall x \in X$;
\item[iii)] $\neg (\neg x) = x$, $\forall x \in X$;
\item[iv)] $x \preccurlyeq y$ implies $\neg y \preccurlyeq \neg x$, $\forall x,y \in X$;
\end{enumerate}
\item[e)] \emph{orthomodular} if orthocomplemented and $x \preccurlyeq y$ implies that $x \vee (\neg x \wedge y) = y$, $\forall x,y \in X$;
\end{enumerate}
\end{definition}
One can prove that distributivity implies modularity and that, for orthocomplemented lattices, modularity implies orthomodularity; the converse implications are not true. The last notions we need to reach the goal of this paragraph concern the elements of a lattice
\begin{definition}
Let $(X,\preccurlyeq)$ be a bounded lattice then
\begin{enumerate}
\item[a)] an element $x \in X$ \emph{covers} $y \in X$ if $y \prec x$ (namely, $y \preccurlyeq x$ but $x \neq y$) and there exists no $z \in X$ such that $y \prec z \prec x$;
\item[b)] an element $x \in X$ is said \emph{atom} if it covers $\mathbf{0}$;
\item[c)] two elements $x,y \in X$ are said \emph{orthogonal}, written $x \perp y$, if $x \preccurlyeq \neg y$
\end{enumerate}
A bounded lattice $(X,\preccurlyeq)$ is said to be \emph{atomic} if for any $y \in X\setminus\{\mathbf{0}\}$ there exists an atom $x \in X$ such that $x \preccurlyeq y$. A bounded lattice is said to be \emph{atomistic} if any element of the lattice can be written as a join of atoms. An atomic lattice $(X,\preccurlyeq)$ is said to have the \emph{covering property} if for any $x \in X$ and every atom $a \in X$ such that $a \wedge x = \mathbf{0}$, the element $a \vee x$ covers $x$.
\end{definition}
For an orthomodular lattice, one can prove that if it is atomic, it is also atomistic. The discussion in the previous paragraph suggests the following: if the set of all the propositions about a quantum system with the operations $\Rightarrow, \wedge$ and $\vee$ is a lattice, then it must be an orthomodular lattice because of \textbf{O2}. Nevertheless, in order to say this we need to find a way to define the partial ordering, namely the $\Rightarrow$ that in the previous paragraph played the role of logical implication. To define this ordering relation, the observation \textbf{O1} suggests that the following mathematical definition is physically reasonable
\begin{definition}
Let $(X,\preccurlyeq)$ be an orthomodular lattice, a \emph{probability-like measure} on $X$ is a function $p: X \rightarrow [0,1]$ such that
\begin{enumerate}
\item[a)] $p(\mathbf{1}) = 1$ and $p(\mathbf{0}) = 0$;
\item[b)] for every sequence $\{x_i\}_{i \in I}$ of orthogonal elements of $X$, $p\left( \bigvee_i x_i\right) = \sum_i p(x_i)$
\end{enumerate}
\end{definition}
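A concrete example of such a measure, on the lattice of closed subspaces of $\mathbb{C}^3$, is given by the Born rule $p(P)={\rm Tr}(\rho P)$ for a density matrix $\rho$; by Gleason's theorem this is in fact the general form of such measures in dimension $\geq 3$. The following sketch is an illustration of ours (using NumPy), not part of the original text:

```python
import numpy as np

rng = np.random.default_rng(0)

# a density matrix: positive, unit trace; it plays the role of a quantum state
A = rng.normal(size=(3, 3)) + 1j * rng.normal(size=(3, 3))
rho = A @ A.conj().T
rho = rho / np.trace(rho).real

def p(P):
    """Candidate probability-like measure on projectors (Born rule)."""
    return float(np.trace(rho @ P).real)

# a family of mutually orthogonal rank-one projectors (any orthonormal basis works)
_, vecs = np.linalg.eigh(rho)
P = [np.outer(vecs[:, k], vecs[:, k].conj()) for k in range(3)]
```

One verifies $p(\mathbf{1})=1$, $p(\mathbf{0})=0$, and additivity on orthogonal elements, the join of orthogonal projectors being simply their sum.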
This probability-like measure induces an ordering relation on $X$: in particular, $x \preccurlyeq y$ if and only if $p(x) \leqslant p(y)$ for every possible $p$; but notice that the ordering relation exists independently of the existence of $p$. In any case, the observation \textbf{O1} tells us that when we study a quantum system (and in general any physical system) this notion is at our disposal: the measure $p$ can be interpreted as a \textgravedbl degree of belief\textacutedbl (or \textgravedbl truth value\textacutedbl) of a certain proposition, namely, if $a$ is a proposition, $p(a)$ tells us how confident we are that $a$ happens in the real world. But one must be careful about \textbf{O2}: the degree of belief of a proposition can be tested and compared with that of another proposition only if these propositions are associated to observables that are measurable at the same time. Keeping this fact in mind, we can say that, if we agree on \textbf{O1}, we have an ordering relation at our disposal over the set of all the physical propositions. This partial order allows us to define the meet and join between all the propositions (actually this is an assumption, although a reasonable one), thus we can conclude that it is a lattice. In what follows, the lattice of the physical propositions about a quantum system $Q$ will be denoted by $\mathcal{L}_Q(\Rightarrow)$, where $\Rightarrow$ denotes the partial ordering relation described before. The observation \textbf{O2} suggests that it is an orthomodular lattice, because we expect (\ref{othomodu}) to hold. Nevertheless, we should also check that it is bounded and define an orthocomplementation on it.
We need to define $\mathbf{1}$ and $\mathbf{0}$. The first can be though as the proposition
\begin{center}
\emph{\textgravedbl The measurement of some quantity is a real number \textacutedbl}
\end{center}
which is clearly always true for a physical system. $\mathbf{0}$ can be thought as the proposition
\begin{center}
\emph{\textgravedbl We are not measuring anything \textacutedbl}
\end{center}
which is always false if we assume that we formulate propositions only after at least one experiment has been performed. It is not difficult to see that any other proposition lies between these two; more formally, $\mathbf{0} \Rightarrow a \Rightarrow \mathbf{1}$. Thus $\mathbf{0}$ and $\mathbf{1}$ belong to the lattice of all the propositions about the physical system, $\mathcal{L}_Q(\Rightarrow)$, and this lattice is bounded. Once we have this, the orthocomplementation of a proposition $a \in \mathcal{L}_Q(\Rightarrow)$ is the unique proposition $\neg a$ with truth value $p(\neg a) = 1-p(a)$, and it is not difficult to understand that it is the negation (in the common language sense) of the initial proposition. Hence it is reasonable to think of $\mathcal{L}_Q(\Rightarrow)$ as an \emph{orthomodular lattice}.
Now, we will try to motivate two other properties that $\mathcal{L}_Q(\Rightarrow)$ should have: atomicity and the covering property. When we deal with a physical system we have at our disposal a set of elementary propositions, like
\begin{center}
\emph{\textgravedbl The physical quantity $A$ takes exactly the value $v_A \in \mathbb{R}$\textacutedbl}
\end{center}
and from them we may construct more complex propositions (like \textgravedbl$A$ takes value in $[a,b]$\textacutedbl). In principle, for a physical system, we have at our disposal an infinite number of propositions of this kind. Moreover, if $A$ is an elementary physical quantity, in the sense that it cannot be expressed in terms of other physical quantities, then these propositions may cover only $\mathbf{0}$. This means that propositions of this kind correspond to the atoms of $\mathcal{L}_Q(\Rightarrow)$. It is also physically reasonable to say that any proposition about a physical system either is an atom or covers an atom (excluding the trivial case of the $\mathbf{0}$ proposition). Hence this suggests that $\mathcal{L}_Q(\Rightarrow)$ is an \emph{atomistic lattice}. The covering property is more subtle but still reasonable. Suppose we have two elementary propositions $a$ and $b$ (hence two atoms) and consider also a third proposition $c$ (which in general may not be an atom). Now, suppose that we know $a \Rightarrow b\vee c$. This means that $p(a) \leqslant p(b \vee c)$, and so that $a$, $b$ and $c$ should hold at the same time. Because of this simultaneous truth of these propositions, and because $a$ cannot imply $b$ and vice versa (they are atoms), it is also reasonable to assume that $p(b) \leqslant p(a \vee c)$ (which means $b \Rightarrow a\vee c$), simply because, if it is not so, for a physical system the previous requirement ($a \Rightarrow b\vee c$) doesn't make sense anymore. Thus, more rigorously, we can write: if $a,b$ are atoms, then $a \Rightarrow b\vee c$ implies $b \Rightarrow a\vee c$ for any $c$. This is another possible characterisation of the covering property in the case of orthomodular lattices. Hence $\mathcal{L}_Q(\Rightarrow)$ can also be considered as a \emph{lattice with the covering property}.
Finally we conclude this paragraph with the last lattice-theoretical concept which, by the way, can always be assumed: \emph{irreducibility}.
\begin{definition}
Let $(X,\preccurlyeq)$ be an orthocomplemented lattice. Consider two elements $a,b \in X$; we say that $a$ \emph{commutes with} $b$ if
\begin{equation*}
a = (a \wedge b) \vee (a \wedge \neg b)
\end{equation*}
The set of all the elements of the lattice commuting with any other element of the lattice is called \emph{center}. A lattice is said \emph{irreducible} if its center is just $\{\mathbf{0},\mathbf{1}\}$.
\end{definition}
We can easily see that two propositions commute if and only if they are testable at the same time (hence they are associated to two simultaneously measurable quantities or to the same quantity). Thus commutativity can be interpreted as simultaneous testability. This means that \textbf{O2} implies the loss of commutativity between propositions, in lattice-theoretical terms, and so it makes it impossible to use the usual interpretation of the logical connectors, as discussed in the beginning. It can be proved that any reducible (i.e.\ not irreducible) lattice can be seen as the direct sum (in the set-theoretical sense) of lattices that are irreducible. Hence, even if the set of propositions about a quantum system is not irreducible, we may always recast the problem in terms of lattices that are irreducible. For this reason we will always consider irreducible lattices.
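In the lattice of subspaces of $\mathbb{R}^2$, this lattice-theoretic notion of commutation coincides with commutation of the corresponding orthogonal projectors as matrices. A quick numerical check of ours (using NumPy; the meet and join of subspaces are computed via projectors):

```python
import numpy as np

def proj(*vecs):
    """Orthogonal projector onto the span of the given vectors."""
    A = np.column_stack(vecs).astype(float)
    U, s, _ = np.linalg.svd(A)
    r = int(np.sum(s > 1e-10))
    return U[:, :r] @ U[:, :r].T

def join(P, Q):
    return proj(*(list(P.T) + list(Q.T)))

def meet(P, Q):
    I = np.eye(P.shape[0])
    return I - join(I - P, I - Q)

def commutes(P, Q):
    """Lattice commutation: P = (P meet Q) join (P meet not-Q)."""
    I = np.eye(P.shape[0])
    return np.allclose(P, join(meet(P, Q), meet(P, I - Q)))

a, b, c = proj([1, 0]), proj([0, 1]), proj([1, 1])
```

Here `a` and `b` commute (orthogonal lines, simultaneously testable) while `a` and `c` do not, in agreement with the matrix commutators: $ab=ba=0$ but $ac\neq ca$.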
\newline
Thus we may conclude the following: \emph{the lattice of propositions we can formulate about a quantum system $Q$, ${\mathcal{L}_Q(\Rightarrow)}$, is an orthomodular, irreducible, atomistic lattice with the covering property, whose ordering relation is represented by the truth value of a proposition}.
\subparagraph{Remark.}
The arguments presented here do not prove rigorously that a quantum system, and the set of propositions about it, are described by the lattice $\mathcal{L}_Q(\Rightarrow)$ with the properties mentioned above. The aim of this paragraph is to convince the reader that it is physically well motivated to assume this structure as a starting point, and that the motivations lie at the heart of all the experimental observations about a quantum system.
\subsection{From lattice to Hilbert spaces}
We have seen that an orthomodular, irreducible, atomistic lattice with the covering property can be used to model the set of propositions we can formulate about a quantum system. In this paragraph we will see how it is possible to map this rather abstract mathematical structure to the usual Hilbert spaces in which quantum mechanics is typically formulated. There are two main results that we need, but before stating them, we need to introduce some technical definitions.
\begin{definition}
Let $\mathbb{K}$ be a division ring and consider a vector space $\mathcal{H}$ on it. Then
\begin{enumerate}
\item[a)] an \emph{involution} is a map $^* : \mathbb{K}\rightarrow \mathbb{K}$ such that $(a+b)^* = a^* + b^*$, $(ab)^* = b^* a^*$ and $(a^*)^* = a$, $\forall a,b \in \mathbb{K}$;
\item[b)] a \emph{hermitian form} is a map $\langle \cdot , \cdot \rangle : \mathcal{H} \times \mathcal{H} \rightarrow \mathbb{K}$ such that
\begin{enumerate}
\item[i)] $\langle x , y \rangle = (\langle y, x \rangle)^*$, $\forall x,y \in \mathcal{H}$;
\item[ii)] $\langle x, ay + bz \rangle = a \langle x,y \rangle + b \langle x,z \rangle$, $\forall x,y,z \in \mathcal{H}$ and $\forall a,b \in \mathbb{K}$;
\item[iii)] $\langle x, x\rangle = 0$ if and only if $x =0$, $\forall x \in \mathcal{H}$.
\end{enumerate}
\end{enumerate}
The pair $(\mathcal{H},\langle \cdot ,\cdot \rangle)$ is called \emph{hermitian inner space}.
\end{definition}
As usual, a subset where the vector space operations of $\mathcal{H}$ are preserved is called a \emph{subspace}, and two elements $x,y \in \mathcal{H}$ are said to be orthogonal if $\langle x , y \rangle = 0$. This allows us to define the orthogonal complement of a subspace $N \subset \mathcal{H}$, which is the set
\begin{equation*}
N^\perp := \{ y \in \mathcal{H} \quad | \quad \langle x , y \rangle = 0 , \forall x \in N\}
\end{equation*}
We can then give the following definition.
\begin{definition}
Given a hermitian inner space $(\mathcal{H}, \langle \cdot , \cdot \rangle )$, if for any closed subspace $N \subset \mathcal{H}$ one can write $\mathcal{H} = N \oplus N^\perp$, then $(\mathcal{H},\langle \cdot , \cdot \rangle)$ is said to be a \emph{generalised Hilbert space} (or \emph{orthomodular space}).
\end{definition}
We observe that the class of Hilbert spaces is a particular class of generalised Hilbert spaces; in fact, in this definition the involution and the division ring $\mathbb{K}$ are arbitrary.
Now we are ready to state the first important theorem, due to Piron, which allows us to recover the Hilbert space formulation.
\begin{theorem}[Piron theorem]
Any (complete) irreducible, atomistic orthomodular lattice $(X,\preccurlyeq)$ with the covering property having at least four orthogonal atoms, is isomorphic to the set of closed subspaces of some generalised Hilbert space $(\mathcal{H},\langle \cdot , \cdot \rangle)$.
\end{theorem}
$\mathcal{L}_Q(\Rightarrow)$ fulfils all the requirements of this theorem (the \emph{completeness} of the lattice was assumed when we declared that the meet and join always exist) except for the number of orthogonal atoms. Orthogonal atoms are elementary propositions that are mutually exclusive: if the first proposition is found to be true, then we know that the second proposition is false. Having at least four of them does not seem to be an unphysical requirement, since it seems reasonable that the number of mutually exclusive propositions is infinite (because in general a physical quantity assumes values over $\mathbb{R}$). So, accepting that we have at least four orthogonal atoms, the Piron theorem guarantees that our lattice can be represented using some generalised Hilbert space $\mathcal{H}$; in particular, the propositions are in one-to-one correspondence with the closed subspaces of $\mathcal{H}$ (i.e. in one-to-one correspondence with the projectors onto these subspaces).
We have not yet reached our goal of motivating the Hilbert space structure of quantum mechanics with the quantum logic approach. In order to do so we need a second theorem, due to Sol\`{e}r, which selects, among all the generalised Hilbert spaces, exactly the three classes of Hilbert spaces.
\begin{theorem}[Sol\`{e}r theorem]
If a generalised Hilbert space $(\mathcal{H}, \langle \cdot , \cdot \rangle)$ over a division ring $\mathbb{K}$ admits a sequence of elements $\{e_i\}_{i \in \mathbb{N}}$ such that
\begin{equation*}
\langle e_i , e_j \rangle = \delta_{i,j} \lambda
\end{equation*}
for some $\lambda \in \mathbb{K}$, then $\mathbb{K}$ must be the real numbers, the complex numbers or the quaternions, and $(\mathcal{H}, \langle \cdot , \cdot \rangle)$ is an infinite dimensional Hilbert space over one of them.
\end{theorem}
Thus we only need to find a sequence $\{e_i\}_{i \in \mathbb{N}}$ of pairwise orthogonal elements of $\mathcal{H}$. The existence of such a sequence can be motivated by the physical assumption of an infinite number of orthogonal atoms associated to the same physical quantity: atoms are elementary propositions about this quantity, which are in one-to-one correspondence with the closed subspaces of $\mathcal{H}$. Orthogonality between atoms translates into orthogonality between subspaces, thus we have an infinite number of orthogonal subspaces on which a single physical quantity takes different values. This suggests that the dimension of $\mathcal{H}$ should be infinite for a reasonable physical theory. In this way one can motivate heuristically the application of the Sol\`{e}r theorem. Nevertheless, we have still not selected a particular Hilbert space. The Hilbert space over the field of real numbers can be excluded by considerations about Galilean invariance, for a non-relativistic quantum theory. Hilbert spaces over the quaternions are still under study, but they can be seen as the direct sum of two complex Hilbert spaces. So it seems that the field of complex numbers is in some sense special, and it is actually the arena where standard quantum mechanics is formulated.\newline
Although we are not able to select uniquely the usual Hilbert space of quantum mechanics, the quantum logic approach gives a strong argument supporting the usual formulation of quantum mechanics in a complex Hilbert space. One can see that, as a consequence of a very basic physical consideration (the observation \textbf{O2}), this structure cannot be avoided.
\subsection{From Hilbert space to quantum mechanics}
In this last paragraph we briefly explain how it is possible to obtain all the remaining postulates of quantum mechanics as logical consequences of the structure explained before, plus some assumptions.
Let us recap the results obtained in the previous paragraph. Assuming that the propositions about a quantum system fulfil \textbf{O1} and \textbf{O2}, plus some other reasonable hypotheses, we are led to the following conclusion:
\begin{center}
\emph{To each quantum system we may associate a complex Hilbert space $(\mathcal{H},\langle \cdot ,\cdot \rangle)$. Propositions about a quantum system are represented by closed subspaces of $\mathcal{H}$.}
\end{center}
Over this Hilbert space we may then introduce a probability-like measure which tells us the truth value of each proposition. Such a probability-like measure, by the \emph{Gleason theorem} (assuming $\dim \mathcal{H} > 2$), is uniquely determined by a positive trace-class operator $\hat{\rho}$. Such an operator can always be normalised, and it allows us to say that the truth value of a proposition $A \subset \mathcal{H}$ is given by
\begin{equation*}
P(A) = \Tr{\hat{\rho}\hat{P}_A}
\end{equation*}
where $\hat{P}_A$ is the projector associated to the closed subspace $A$. This means that
\begin{center}
\emph{To each quantum system we may associate a positive, normalised, trace-class operator $\hat{\rho}$, which allows us to compute the probability to find a given proposition true as
$ \Tr{\hat{\rho}\hat{P}_A}$}
\end{center}
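As an illustration, the trace formula above can be checked numerically in the simplest finite-dimensional case. The following sketch (Python with NumPy; the qubit state and projector are arbitrary illustrative choices, not taken from the text) verifies that $\Tr{\hat{\rho}\hat{P}_A}$ behaves as a probability:

```python
import numpy as np

# Density operator for a qubit: a diagonal mixed state (illustrative choice).
rho = np.array([[0.75, 0.0],
                [0.0, 0.25]], dtype=complex)

# Proposition A represented by the projector onto the first basis vector.
P_A = np.array([[1.0, 0.0],
                [0.0, 0.0]], dtype=complex)

# Truth value (probability) of the proposition: P(A) = Tr(rho P_A).
prob = np.trace(rho @ P_A).real

# Sanity checks: rho is a state (self-adjoint, unit trace), P_A a projector.
assert np.allclose(rho, rho.conj().T) and np.isclose(np.trace(rho).real, 1.0)
assert np.allclose(P_A @ P_A, P_A)
```

For this choice `prob` equals $0.75$, the weight of $\hat{\rho}$ on the subspace associated to $A$.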
Typically, such a positive, normalised, trace-class operator is called a \emph{state}. At this point we need to define the notion of observable. Among all the propositions about a physical system, the propositions regarding the same physical quantity should have the following features:
\begin{enumerate}
\item[1.] they are all simultaneously testable, since they are all associated to the same physical quantity. This implies that all the propositions commute, and so in this case the usual interpretation of the logical connectives can be applied;
\item[2.] calling $m(A)$ the outcome of a measurement of $A$, if $m(A)\in B$ is true and $m(A) \in B'$ is also true, then clearly $m(A) \in B \cap B'$ is true;
\item[3.] the proposition $m(A) \in \mathbb{R}$ is always true, thus it corresponds to $\mathbf{1}$;
\item[4.] if $m(A) \in B$ and $B$ can be written as $B = \cup_{i \in I} C_i$, then the proposition $m(A) \in B$ is equivalent to the proposition $\vee_{i \in I}\{m(A) \in C_i \}$, where $\vee$ is interpreted as OR, since from point 1 we know that the usual interpretation of the logical connectives can be applied.
\end{enumerate}
Considering the set of propositions fulfilling all these physically reasonable features, one can prove that to each physical quantity it is possible to associate a projection-valued measure (PVM). Then, by the \emph{spectral theorem}, one can associate to each PVM, and hence to each physical quantity, a self-adjoint operator. Moreover, one can always associate to a PVM a probability measure which can be used to compute expectations (the so-called \emph{spectral measure}). More precisely, one can prove the following
\begin{center}
\emph{Physical quantities are represented by self-adjoint operators over $\mathcal{H}$. For a quantum system with state $\hat{\rho}$, the expectation value of a physical quantity represented by a self-adjoint operator $\hat{A}$ is given by $\Tr{\hat{\rho}\hat{{A}}}$}
\end{center}
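A minimal numerical sketch of this statement for a matrix observable: diagonalising a self-adjoint matrix yields its PVM, the spectral measure $\mu_k = \Tr{\hat{\rho}\hat{P}_k}$ is a probability distribution over the spectrum, and the expectation computed from it coincides with the trace formula. The observable and state below are arbitrary illustrative choices:

```python
import numpy as np

# A self-adjoint "observable" A and a state rho on C^2 (illustrative choices).
A = np.array([[0.0, 1.0], [1.0, 0.0]], dtype=complex)      # sigma_x
rho = np.array([[0.9, 0.3], [0.3, 0.1]], dtype=complex)    # pure state, Tr = 1

# Spectral theorem: A = sum_k lam_k P_k with rank-one spectral projectors P_k.
lam, V = np.linalg.eigh(A)
P = [np.outer(V[:, k], V[:, k].conj()) for k in range(len(lam))]

# Spectral measure mu_k = Tr(rho P_k): a probability distribution on sigma(A).
mu = [np.trace(rho @ Pk).real for Pk in P]

# Expectation from the spectral measure agrees with the trace formula.
expect_pvm = sum(l * m for l, m in zip(lam, mu))
expect_tr = np.trace(rho @ A).real
```

Here both expectations equal $0.6$, and the weights `mu` sum to one, as a spectral measure must.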
At this point, one may be interested in selecting, among all the self-adjoint operators, the ones that represent reasonable physical quantities. This can be done using group theory. In particular, the symmetry group of a non-relativistic physical theory is the Galilean group. Hence, to obtain interesting physical quantities, one has to represent this group over this mathematical structure. In order to do that, one has to specify on which Hilbert space we want to represent the group. The usual choice is the following
\begin{center}
\emph{The Hilbert space describing a single quantum particle in $\mathbb{R}^n$ with $m$ internal degrees of freedom is $\mathcal{H}_{1p} = L_2(\mathbb{R}^n,dx)\otimes \mathbb{C}^m$.}
\end{center}
When one tries to represent the Galilean group over the Hilbert space selected above, demanding that the transition probabilities are preserved (i.e. looking for a unitary representation), one is led to consider the central extension of this group. Then, by the \emph{Stone-von Neumann-Mackey theorem}, one can prove that the position and momentum operators take their usual form; in addition, from the \emph{Stone theorem} one can obtain the usual (free) hamiltonian operator. All these facts are contained in the following request
\begin{center}
\emph{The symmetry group of a non-relativistic quantum particle is the (central extension) of the Galilean group.}
\end{center}
Finally, one needs to explain how to deal with composite systems, namely systems made of more than one particle. Requiring that the structure of single-particle quantum systems is preserved, that a measurement on one subsystem does not disturb the others, and that the maximal information content is constant irrespective of the way in which we gain the information, one is led to the notion of tensor product of Hilbert spaces.
\begin{center}
\emph{The Hilbert space for a quantum system composed by $N$ quantum particles is given by $\mathcal{H}_{Np} = \bigotimes_{i=1}^N \mathcal{H}_{i, 1p}$.}
\end{center}
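The tensor-product rule can be sketched numerically. With two qubits as an illustrative choice, the composite dimension is the product of the subsystem dimensions, a proposition on the first subsystem acts as $P \otimes \mathsf{I}$, and on a product state it reproduces the subsystem probability:

```python
import numpy as np

# Two single-particle spaces (qubits): the composite space is C^2 (x) C^2.
rho1 = np.diag([0.6, 0.4]).astype(complex)
rho2 = np.diag([0.5, 0.5]).astype(complex)
rho = np.kron(rho1, rho2)                  # product state on the composite system
assert rho.shape == (4, 4)                 # dimensions multiply: 2 * 2 = 4

# A proposition on subsystem 1 alone acts as P (x) I on the composite space.
P = np.diag([1.0, 0.0]).astype(complex)
P_comp = np.kron(P, np.eye(2, dtype=complex))

# Its probability on the composite state reproduces the subsystem probability,
# so measuring subsystem 1 does not involve subsystem 2 at all.
p_comp = np.trace(rho @ P_comp).real
p_sub = np.trace(rho1 @ P).real
```

Both probabilities equal $0.6$ here, illustrating the "no disturbance" requirement for product states.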
Once we have this method to treat composite systems, it is very reasonable to think that the only propositions that make sense are the ones that do not depend on the ordering of the subsystems, namely the ones that are invariant under permutations of the single subsystems. Thus we are led to
\begin{center}
\emph{For a quantum system which consists of $n$ subsystems, the physically admissible propositions are all the propositions that are invariant under the action of the permutation group of $\{1,\cdots,n\}$.}
\end{center}
Up to now, all the rules sketched to describe a quantum system are consequences of the assumption that a Hilbert space has to be used to describe a quantum system, and of the requirement that invariance under a symmetry group is a reasonable physical demand. Nevertheless, to conclude this discussion one needs to introduce a last rule: the so-called \emph{measurement postulate}.
\begin{center}
\emph{If at the time $t$ we find that a particular proposition $A$ holds, the state right after the measurement is given by
\begin{equation*}
\hat{\rho}' = \frac{\hat{P}_A\hat{\rho}\hat{P}_A}{\Tr{\hat{P}_A\hat{\rho}}}
\end{equation*}
where $\hat{P}_A$ is the projector associated to the proposition $A$.}
\end{center}
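A quick numerical check of the update rule, with a qubit in the pure state $|+\rangle\langle+|$ and the proposition "first basis outcome" as illustrative choices: after the update the state is again normalised and the measured proposition holds with certainty.

```python
import numpy as np

# Pure state |+><+| of a qubit and the proposition "first basis outcome".
rho = np.array([[0.5, 0.5], [0.5, 0.5]], dtype=complex)
P_A = np.diag([1.0, 0.0]).astype(complex)

# State update: rho' = P_A rho P_A / Tr(P_A rho)
rho_post = (P_A @ rho @ P_A) / np.trace(P_A @ rho)

# The post-measurement state is normalised and makes the proposition certain.
assert np.isclose(np.trace(rho_post).real, 1.0)
assert np.isclose(np.trace(rho_post @ P_A).real, 1.0)
```

In this example the state collapses onto the projector itself, $\hat{\rho}' = \hat{P}_A$, since $\hat{P}_A$ is rank one.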
Different proposals have been made to derive this postulate from the others, such as decoherence, dynamical collapse models and quantum Bayesianism.
\section{Modelling a lab: operational reconstruction of QM}\label{sec3}
In this section we will briefly review the algebraic formulation of quantum mechanics, derived from operational considerations. The operational approach to the construction of a physical theory can be summarised as the attempt to formulate a physical theory by defining each abstract mathematical operation as a procedure that can be executed, at least in principle, in a laboratory. This approach will lead to a formulation of quantum mechanics that is (almost) equivalent to the one described in the previous section, but starting from different assumptions. We will not speak anymore about propositions, but we will focus our attention on the physical quantities we use to describe a system. The main references for this section are \cite{fS} for the operational arguments, \cite{vM} for the mathematical theorems and \cite{lA} for the probabilistic notions.
\subsection{A bit of math: some notions of algebra}
In this paragraph we will concentrate mostly on the mathematical notions and simple results we will use in what follows. The key concept is
\begin{definition}
An \emph{associative algebra} $\mathcal{A}$ over a field $\mathbb{K}$, is a $\mathbb{K}$-vector space equipped with a product $\star:\mathcal{A} \times \mathcal{A}\rightarrow\mathcal{A}$ such that
\begin{enumerate}
\item[1.] $a \star (b \star c) = (a \star b) \star c$, $\forall a,b,c \in \mathcal{A}$;
\item[2.] $a \star ( b + c) = a\star b + a\star c $, $\forall a,b,c \in \mathcal{A}$;
\item[3.] $(a+b) \star c = a\star c + b\star c $, $\forall a,b,c \in \mathcal{A}$;
\item[4.] $\alpha (a \star b) = (\alpha a)\star b = a \star (\alpha b)$, $\forall \alpha \in \mathbb{K}$ and $a,b \in \mathcal{A}$.
\end{enumerate}
\end{definition}
Briefly, an algebra is a vector space equipped with a product operation which is associative and distributive with respect to the vector space operations. Algebras can be classified according to the following definitions
\begin{definition}
Given an associative algebra $\mathcal{A}$, then
\begin{enumerate}
\item[1.] it is said to be a \emph{normed algebra} if equipped with a norm $\| \cdot \|$ such that $\|a \star b\| \leqslant \| a\| \|b\|$, $\forall a,b \in \mathcal{A}$;
\item[2.] it is said to be a \emph{Banach algebra} if it is normed and at the same time $\mathcal{A}$ is a Banach space under the norm;
\item[3.] it is said to be a \emph{$^*$-algebra} if equipped with an involution $^*: a\mapsto a^*$, $\forall a \in \mathcal{A}$;
\item[4.] it is said to be a \emph{$C^*$-algebra} if it is a Banach $^*$-algebra and $\|a^*\star a\| = \|a\|^2$ (the \emph{$C^*$-property});
\item[5.] it is said to be an \emph{algebra with unit} if there exists an element $\mathsf{I} \in \mathcal{A}$ such that $a = a\star\mathsf{I} =\mathsf{I} \star a$;
\item[6.] it is said to be an \emph{abelian (or commutative) algebra} if $[a,b] = a \star b - b\star a = 0$, $\forall a,b \in \mathcal{A}$.
\end{enumerate}
\end{definition}
The symbol $\star$ for the algebraic product will be omitted if there is no ambiguity with the usual product, and we also set $\mathbb{K} = \mathbb{C}$. Another important algebra, which is in general not associative, is the \emph{Jordan algebra}
\begin{definition}
A \emph{Jordan algebra} $\mathcal{A}$ is a vector space equipped with a bilinear product $\circ: \mathcal{A} \times \mathcal{A} \rightarrow \mathcal{A}$ such that $a \circ b = b \circ a$ and $a \circ (b \circ (a \circ a)) = (a \circ b) \circ (a \circ a)$ (the \emph{Jordan identity}).
\end{definition}
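A standard example of a Jordan algebra (given here purely as an illustration) is obtained from any associative matrix algebra with the symmetrised product $a \circ b = (ab+ba)/2$. The defining properties can be checked numerically with random symmetric matrices:

```python
import numpy as np

def jordan(a, b):
    """Symmetrised product a o b = (ab + ba) / 2."""
    return (a @ b + b @ a) / 2

# Two random real symmetric matrices (self-adjoint elements of M_3(R)).
rng = np.random.default_rng(0)
a = rng.standard_normal((3, 3)); a = a + a.T
b = rng.standard_normal((3, 3)); b = b + b.T

# Commutativity: a o b = b o a.
assert np.allclose(jordan(a, b), jordan(b, a))

# Jordan identity: a o (b o (a o a)) = (a o b) o (a o a).
lhs = jordan(a, jordan(b, jordan(a, a)))
rhs = jordan(jordan(a, b), jordan(a, a))
assert np.allclose(lhs, rhs)
```

The same symmetrised product reappears later in the operational construction of $\mathcal{A}_a$.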
In what follows we will consider an associative algebra, in particular a $^*$-algebra. Its elements may be classified in a manner similar to what is typically done for operators: $a$ is said to be \emph{normal} if $a a^* = a^* a$, and \emph{self-adjoint} if $a = a^*$. Again, following the analogy with operators (which in many cases form an algebra), one can also define a notion of \emph{spectrum} in the algebraic context.
\begin{definition}
Let $\mathcal{A}$ be a Banach algebra with unit and consider $a \in \mathcal{A}$. The \emph{spectrum} of $a \in \mathcal{A}$ is the set defined as
\begin{equation*}
\sigma(a) := \{ \xi \in \mathbb{C} | \nexists(a - \xi \mathsf{I})^{-1} \in \mathcal{A} \}
\end{equation*}
\end{definition}
The following result about spectra in $C^*$-algebras is interesting for our discussion
\begin{proposition}
Let $a \in \mathcal{A}$ be a self-adjoint element of a $C^*$-algebra with unit, then $\sigma(a) \subset [-\|a\|,\|a\|] \subset \mathbb{R}$.
\end{proposition}
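In the concrete $C^*$-algebra of complex matrices, the spectrum of an element is its set of eigenvalues and $\|a\|$ is the operator norm, so the proposition can be verified directly (the matrix below is an arbitrary illustrative choice):

```python
import numpy as np

# In the C*-algebra of n x n matrices, the spectrum of an element is its set
# of eigenvalues and ||a|| is the operator norm (largest singular value).
a = np.array([[2.0, 1.0], [1.0, -1.0]])    # an arbitrary self-adjoint element
assert np.allclose(a, a.T)

spec = np.linalg.eigvalsh(a)               # real, since a is self-adjoint
norm = np.linalg.norm(a, 2)

# sigma(a) is contained in [-||a||, ||a||], as stated in the proposition.
assert np.all(np.abs(spec) <= norm + 1e-12)
```

For a self-adjoint matrix the bound is saturated: the norm equals the largest eigenvalue in modulus.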
In addition, the notion of spectrum allows us to introduce a further classification among the elements of a Banach algebra with unit:
\begin{definition}
Given a Banach algebra with unit $\mathcal{A}$, an element $a \in \mathcal{A}$ is said to be \emph{positive} if it is self-adjoint and its spectrum is positive, $\sigma(a) \subset \mathbb{R}^+$. The set of all the positive elements will be denoted by $\mathcal{A}^+$.
\end{definition}
The positivity of elements allows us to define the positivity of linear functionals over the algebra. Such functionals play a special role in the algebraic formulation of quantum mechanics, since they are interpreted as \emph{states}.
\begin{definition}
Let $\mathcal{A}$ be a Banach algebra with unit, a functional $\omega: \mathcal{A} \rightarrow \mathbb{C}$ such that
\begin{enumerate}
\item[1.] it is \emph{positive}, namely $\omega(a)\geqslant 0$, $\forall a \in \mathcal{A}^+$;
\item[2.] it is \emph{normalised}, namely $\omega(\mathsf{I}) = 1$;
\end{enumerate}
is called \emph{state}.
\end{definition}
It can be proved that, over a $C^*$-algebra, a linear functional is positive if and only if it is bounded (in particular $\|\omega\| = \omega(\mathsf{I})$). This implies that over a $C^*$-algebra a state is a continuous (since bounded) functional. Finally, the following result about states is interesting for our discussion
\begin{proposition}
Let $\mathcal{A}$ be a $C^*$-algebra, take $a \in \mathcal{A}$ and consider $\alpha \in \sigma(a)$. Then there exists a state $\omega_\alpha : \mathcal{A} \rightarrow \mathbb{C}$ such that $\omega_\alpha(a) = \alpha$.
\end{proposition}
\subsection{Operational approach for a physical theory: description of a single physical quantity}
Any experimental science is based on the reproducibility of experiments. One prepares the system in a certain \emph{configuration} and then performs a series of measurements of some \emph{physical quantity}, from which one can prove or disprove a fact. Physics was the first science where this procedure was applied. The theoretical models we use to describe a physical system should be based, as much as possible, on the way one has access to the information learned in a measurement. This is the heart of the operational approach to the construction of a physical theory.
Let $\mathcal{C}$ be the set of all the possible configurations in which a system can be prepared, and $\mathcal{A}$ be the set of all the physical quantities of the system we can measure. Consider a physical quantity $a \in \mathcal{A}$: the outcome of a measurement for a system prepared in the configuration $\omega \in \mathcal{C}$ will be labeled by $m_\omega (a)$, and can be operationally defined as the value that the pointer of the measuring device assumes when we measure the quantity $a$. The \emph{result of a measurement} of the physical quantity $a \in \mathcal{A}$ for a system prepared in the configuration $\omega \in \mathcal{C}$, labeled $\omega(a)$, is defined to be the average of all the measured values $m_\omega(a)$, repeating the experiment (ideally) an infinite number of times, namely
\begin{equation*}
\omega(a) := \lim_{n \rightarrow +\infty} \frac{1}{n} \sum_{i=1}^{n} m_{\omega}^{(i)}(a)
\end{equation*}
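The operational definition of $\omega(a)$ as an infinite-repetition average can be mimicked by a finite simulation. The two-outcome example below (the outcomes and probabilities are arbitrary illustrative choices) shows the empirical mean of the pointer readings approaching the ideal result:

```python
import numpy as np

# Finite-sample version of the operational definition of omega(a): repeat the
# measurement n times and average the pointer readings m_omega(a).
rng = np.random.default_rng(42)

# Illustrative two-outcome quantity: pointer readings +1/-1 with p(+1) = 0.7.
outcomes = np.array([+1.0, -1.0])
probs = np.array([0.7, 0.3])
exact = outcomes @ probs                   # ideal result omega(a) = 0.4

n = 200_000
samples = rng.choice(outcomes, size=n, p=probs)
estimate = samples.mean()                  # tends to omega(a) as n grows
```

The statistical error of the mean shrinks as $1/\sqrt{n}$, so for $n = 2\times 10^5$ the estimate agrees with the ideal value to a few parts in a thousand.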
Two configurations $\omega_1, \omega_2 \in \mathcal{C}$ are \emph{operationally indistinguishable} if the result of the measurement is the same for all the physical quantities. Mathematically speaking, on $\mathcal{C}$ we can say that $\omega_1 = \omega_2$ if and only if $\omega_1(a) =\omega_2(a)$, $\forall a \in \mathcal{A}$. Similarly, two physical quantities are operationally indistinguishable if, preparing the system in all the possible configurations, the results of the measurements are always identical. Hence on $\mathcal{A}$ we can say that $a_1 = a_2$ if and only if $\omega(a_1) = \omega(a_2)$, $\forall \omega \in \mathcal{C}$. Using these equivalence relations, we can define operationally the usual mathematical operations on $\mathcal{C}$ and on $\mathcal{A}$. For example, consider $a \in \mathcal{A}$ and $\lambda \in \mathbb{R}$: one can define the physical quantity $\lambda a$ as the physical quantity measured by a measuring device whose pointer scale is dilated by a factor $\lambda$ with respect to the original scale. More formally, one can write $m_\omega(\lambda a) := \lambda m_\omega (a)$ $\forall \omega \in \mathcal{C}$, which implies that $\omega(\lambda a) = \lambda \omega(a)$. In a similar way, given $a,b \in \mathcal{A}$, one can operationally define the sum of two physical quantities $a+b$ simply by setting $m_\omega (a + b) := m_\omega (a) + m_\omega (b)$ $\forall \omega \in \mathcal{C}$, and so $\omega(a+b) = \omega(a) + \omega(b)$. We can see that, by definition, $\omega$ is \emph{linear}.
One can also operationally define $a^n$, $n \in \mathbb{N}$, by setting $m_\omega(a^n):=[m_\omega(a)]^n$ $\forall \omega \in \mathcal{C}$. Because of this definition, one can see that $m_\omega(a^0) = 1$ for any $\omega$. Thus we can define an \emph{identity} $\mathsf{I} := a^0$. We can also see that it is quite reasonable to assume $a^{n+m} = a^na^m$, since $m_\omega(a^{n+m}) = m_\omega(a)^n m_\omega(a)^m$. Finally, it is not difficult to see that even for $\lambda \in \mathbb{C}$, $\lambda a$ is well defined if we measure separately its real and imaginary parts. Hence, all the complex polynomials of $a \in \mathcal{A}$, like $\alpha_n a^n + \alpha_{n-1} a^{n-1} + \cdots + \alpha_0 \mathsf{I}$, are well defined from the operational point of view. Let $\mathcal{A}_a$ denote the set of all the possible complex polynomials of $a$. Clearly $\mathsf{I} \in \mathcal{A}_a$, and on $\mathcal{A}_a$ one can naturally define an \emph{involution} by setting $(\lambda a)^* := \overline{\lambda} a$, with $\overline{\lambda} $ the complex conjugate of $\lambda $, and $(ab)^* = b^*a^*$. Thus we can conclude that \emph{$\mathcal{A}_a$ is an abelian $^*$-algebra with unit}. We cannot say the same about $\mathcal{A}$, because we may have trouble in the definition of $m_\omega(ab)$ and $m_\omega(ba)$, because of the property \textbf{O2}: for this reason we concentrate on $\mathcal{A}_a$ only.
From the discussion done till now, we may conclude that the following properties hold for any $b,c \in \mathcal{A}_a$ (hence polynomials of $a$)
\begin{enumerate}
\item[1.] $m_\omega(\alpha b + c) = \alpha m_\omega(b) + m_\omega(c)$;
\item[2.] $m_\omega (bc) = m_\omega(b) m_\omega(c)$;
\item[3.] $m_\omega (\mathsf{I}) = 1$;
\item[4.] $m_\omega(b^*) = \overline{[m_\omega(b)]}$;
\item[5.] $m_\omega (b^*b) \geqslant 0$.
\end{enumerate}
We observe that point 2, since we are dealing with polynomials of $a$ only, is a consequence of 1 and of the properties of powers of the elements of $\mathcal{A}_a$, and does not violate \textbf{O2}. These properties have consequences on the configurations
\begin{enumerate}
\item[1.] $\omega(\alpha a + b) = \alpha \omega(a) + \omega(b)$;
\item[2.] $\omega(\mathsf{I}) = 1$;
\item[3.] $\omega(a^*) = \overline{[\omega(a)]}$;
\item[4.] $\omega(a^*a) \geqslant 0$
\end{enumerate}
This means that a configuration $\omega: \mathcal{A}_a \rightarrow \mathbb{C}$ is a positive, normalised linear functional over $\mathcal{A}_a$: for this reason $\omega$ is a state on $\mathcal{A}_a$. At this point it is useful to introduce the following map, for any $a \in \mathcal{A}_a$
\begin{equation*}
\| \cdot \|: \mathcal{A}_a \rightarrow \mathbb{R} \qquad \qquad \|a\|:= \sup_{\omega \in \mathcal{C}}|\omega(a)|
\end{equation*}
This number is the maximum value that a physical quantity may assume. Because any real instrument has a finite scale, $\|a\|$ is a \emph{finite}, positive real number. Without proof, we state the following proposition containing all the key properties of $\|\cdot\|$.
\begin{proposition}
Take $a \in \mathcal{A}$, $\lambda \in \mathbb{C}$ and $b,c \in \mathcal{A}_a$. The map $\|\cdot\|$ on $\mathcal{A}_a$ defined as above fulfils
\begin{enumerate}
\item[1.] $\|b+c\|\leqslant\|b\| + \|c\|$ and $\|\lambda b\| = |\lambda| \|b\|$;
\item[2.] $\|b^*\| = \|b\|$;
\item[3.] $\| bb^*\| \leqslant \|b\|^2$ with equality if and only if $b =b^*$;
\item[4.] $\|bc\| \leqslant \|b\| \|c\|$ when $b=b^*$ and $c=c^*$.
\end{enumerate}
\end{proposition}
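For a concrete self-adjoint matrix, the supremum over states in the definition of $\|a\|$ is attained at an eigenvector and equals the largest eigenvalue in modulus. A numerical sketch (the matrix is an arbitrary illustrative choice; vector states are sampled at random):

```python
import numpy as np

# For a self-adjoint matrix, sup over vector states of |<psi| a psi>| equals
# the largest eigenvalue in modulus, i.e. the operator norm ||a||.
a = np.array([[1.0, 2.0], [2.0, -3.0]])    # arbitrary self-adjoint element
eigmax = np.max(np.abs(np.linalg.eigvalsh(a)))

rng = np.random.default_rng(1)
best = 0.0
for _ in range(5000):
    # Random unit vector in C^2, defining the state omega(a) = <psi| a psi>.
    psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    psi /= np.linalg.norm(psi)
    best = max(best, abs(np.vdot(psi, a @ psi).real))

assert best <= eigmax + 1e-9               # every state is bounded by ||a||
```

The sampled supremum `best` approaches `eigmax` from below, never exceeding it, consistently with the proposition above.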
Now, define the following symmetric product on $\mathcal{A}_a$
\begin{equation*}
b \circ c := \frac{bc + cb}{2} \qquad\qquad b,c \in \mathcal{A}_a
\end{equation*}
It is not difficult to see that this product is commutative, and that the Jordan identity is also fulfilled. Thus $\mathcal{A}_a$ equipped with this product is a \emph{Jordan algebra}. Now, because $|\omega(bc)| \leqslant |\omega(b)||\omega(c)|$ on $\mathcal{A}_a$, one can conclude that $\|b \circ c\|\leqslant\|b\| \|c\|$ (and so $\|\cdot\|$ defines a norm); hence $\mathcal{A}_a$ is also normed and, after completion, $\mathcal{A}_a$ can be considered as an \emph{abelian Jordan-Banach algebra} (with unit). From the definition given for the $\circ$ product, one can conclude that $\mathcal{A}_a$ is a sub-algebra of a larger abelian $C^*$-algebra with unit \cite{fS}. Thus, using operational arguments, we arrived at the following conclusion: each physical quantity can be described using an abelian $C^*$-algebra with unit. In the next paragraphs, we will see how it is possible to describe algebraically the whole physical system at once, and not only single physical quantities.
\subsection{Another bit of math: represent an algebra}
In the previous paragraph we have seen that, when we want to describe a physical quantity, it is reasonable to use a $C^*$-algebra. Nevertheless, to explicitly perform any calculation it is useful to have simpler (less abstract) mathematical objects. In this section, we will briefly review how to represent an abstract $C^*$-algebra using concrete mathematical objects, like functions or operators.
The first result in this sense, for the particular case of a commutative $C^*$-algebra, is the so-called \emph{commutative Gelfand-Naimark theorem}. To formulate it we need the following definitions
\begin{definition}
Let $\mathcal{A}$ be a commutative $C^*$-algebra with unit. A \emph{character of $\mathcal{A}$} is a non-zero homomorphism between $\mathcal{A}$ and $\mathbb{C}$, namely a map $\phi:\mathcal{A}\rightarrow \mathbb{C}$ such that $\phi(ab) = \phi(a)\phi(b)$, $\forall a,b \in \mathcal{A}$. The set of all characters of $\mathcal{A}$ is called the \emph{structure space} (or \emph{spectrum of the algebra}), and is labeled by $\Delta(\mathcal{A})$.
\end{definition}
Now, for $a \in \mathcal{A}$, define a map $\tilde{a}:\Delta(\mathcal{A}) \rightarrow \mathbb{C}$ as $\tilde{a}(\phi) := \phi(a)$. This map is the so-called \emph{Gelfand transform}, and it can be proved that
\begin{equation*}
\sigma(a) = \{ \tilde{a}(\phi) \mspace{5mu} | \mspace{5mu} \phi \in \Delta(\mathcal{A}) \}
\end{equation*}
which is the link between the spectrum of an element of the algebra $\sigma(a)$ and the structure space $\Delta(\mathcal{A})$. At this point we can state the aforementioned theorem
\begin{theorem}[commutative Gelfand-Naimark theorem]
Let $\mathcal{A}$ be a commutative $C^*$-algebra with unit, and consider the algebra of continuous functions on the structure space, $C(\Delta(\mathcal{A}))$ (a $C^*$-algebra with respect to the norm $\|\cdot \|_{\infty}$). Then the Gelfand transform is an isomorphism preserving the involution and the norm (an isometric $^*$-isomorphism) between $\mathcal{A}$ and $C(\Delta(\mathcal{A}))$.
\end{theorem}
Thus this theorem guarantees that we can always ``represent'' a commutative $C^*$-algebra with an algebra of continuous functions. As we will see, this is no longer true if we drop the commutativity of the algebra: in this case we have to represent the algebra in a different manner.
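A toy finite-dimensional illustration of the same mechanism (the theorem itself is stated for continuous functions, so this is only an analogy): for the commutative $C^*$-algebra of diagonal matrices, the characters are the coordinate evaluations, and the range of the Gelfand transform of an element reproduces its spectrum.

```python
import numpy as np

# Commutative C*-algebra: 3x3 diagonal complex matrices. Its characters are
# the coordinate evaluations phi_k(a) = a[k, k]; the structure space Delta(A)
# has three points, and the Gelfand transform of a is the function
# k |-> a[k, k] on Delta(A).
a = np.diag([2.0 + 0j, -1.0, 0.5])
b = np.diag([1.0 + 0j, 3.0, -2.0])

characters = [lambda x, k=k: x[k, k] for k in range(3)]

# Characters are multiplicative homomorphisms: phi(ab) = phi(a) phi(b).
for phi in characters:
    assert np.isclose(phi(a @ b), phi(a) * phi(b))

# The range of the Gelfand transform of a equals sigma(a) (the eigenvalues).
gelfand_a = np.array([phi(a) for phi in characters])
spectrum = np.linalg.eigvals(a)
assert np.allclose(np.sort_complex(gelfand_a), np.sort_complex(spectrum))
```

This mirrors the identity $\sigma(a) = \{\tilde{a}(\phi) \mid \phi \in \Delta(\mathcal{A})\}$ stated above.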
Let us now consider the general case of a $C^*$-algebra, without assuming commutativity. It turns out that Hilbert spaces are the suitable concrete mathematical objects on which we can represent such an algebra. This motivates the following
\begin{definition}
Consider a $C^*$-algebra with unit $\mathcal{A}$ and a Hilbert space $\mathcal{H}$. A homomorphism preserving the involution and the unit, $\pi: \mathcal{A} \rightarrow \mathcal{B}(\mathcal{H})$, is called a \emph{representation of $\mathcal{A}$ on $\mathcal{H}$}. The representation $\pi$ is said to be \emph{faithful} if it is one-to-one. Finally, a vector $\psi \in \mathcal{H}$ is said to be \emph{cyclic} for $\pi$ if $\overline{\{ \pi(a)\psi \mspace{5mu} | \mspace{5mu} a \in \mathcal{A} \}} = \mathcal{H}$.
\end{definition}
At this point, take the $C^*$-algebra with unit $\mathcal{A}$ and a state $\omega: \mathcal{A} \rightarrow \mathbb{C}$ on it. It is not difficult to see that $\langle a, b \rangle_\omega := \omega(a^*b)$ defines a pre-inner product (in general it is degenerate). In order to have a well defined inner product, one can define the set
\begin{equation*}
N_\omega := \{ a \in \mathcal{A} \mspace{5mu} | \mspace{5mu} \omega(a^*b) = 0, \forall b \in \mathcal{A} \}
\end{equation*}
Then on $\mathcal{A}/N_\omega$ the product $\langle \cdot , \cdot \rangle_\omega$ is a well defined inner product. Thus one can complete $\mathcal{A}/N_\omega$ to a Hilbert space, which we will label by $\mathcal{H}_\omega$. One may define for each $a \in \mathcal{A}$
\begin{equation*}
[\pi_\omega (a)] ([b]) := [a][b] \qquad\qquad \forall [b] \in \mathcal{H}_\omega
\end{equation*}
This is by construction a homomorphism preserving the involution. Computing the norm, one can prove that $\pi_\omega(a)$ is bounded, thus $\pi_\omega$ is a representation of $\mathcal{A}$ on $\mathcal{H}_\omega$. Finally, we can see that the unit of the algebra defines a special vector in $\mathcal{H}_\omega$, which is $\Psi_\omega = [\mathsf{I}]$. Such a vector is cyclic for $\pi_\omega$. Thus we can see that, given a $C^*$-algebra $\mathcal{A}$ and a state $\omega$, we can always construct a triple $(\mathcal{H}_\omega, \pi_\omega, \Psi _\omega)$, called the \emph{GNS triple}. The discussion above sketched part of the proof of the so-called \emph{Gelfand-Naimark-Segal theorem}.
\begin{theorem}[GNS theorem]
Let $\mathcal{A}$ be a $C^*$-algebra with unit and $\omega:\mathcal{A}\rightarrow\mathbb{C}$ a state. Then
\begin{enumerate}
\item[i)] there exists a triple $(\mathcal{H}_\omega,\pi_\omega,\Psi_\omega)$ where $\mathcal{H}_\omega$ is a Hilbert space, $\pi_\omega : \mathcal{A} \rightarrow
\mathcal{B}(\mathcal{H}_{\omega})$ is a $^*$-representation of $\mathcal{A}$ on the $C^*$-algebra of bounded operators on $\mathcal{H}_\omega$, and
$\Psi_\omega \in \mathcal{H}_\omega$ is a vector, such that
\begin{enumerate}
\item[a)] $\Psi_\omega$ is unit vector, cyclic for $\pi_\omega$.
\item[b)] $\langle \Psi_\omega | \pi_\omega(a) \Psi_\omega \rangle = \omega(a)$ for any $a \in \mathcal{A}$.
\end{enumerate}
\item[ii)] If $(\mathcal{H},\pi,\Psi)$ is a triple such that
\begin{enumerate}
\item[a)] $\mathcal{H}$ is a Hilbert space, $\pi: \mathcal{A} \rightarrow \mathcal{B}(\mathcal{H})$ is a $^*$-representation and
$\Psi \in \mathcal{H}$ is a unit vector cyclic for $\pi$;
\item[b)] $\omega(a) = \langle \Psi | \pi(a) \Psi \rangle$;
\end{enumerate}
then there exists a unitary operator $\hat{U}:\mathcal{H}\rightarrow\mathcal{H}_\omega$ such that $\Psi = \hat{U}\Psi_\omega$ and
$\pi_\omega(a) = \hat{U}\pi(a)\hat{U}^{-1}$ for any $a \in \mathcal{A}$.
\end{enumerate}
\end{theorem}
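As an illustration, the GNS construction can be carried out explicitly in a finite-dimensional example. The following numerical sketch is a toy computation, not part of the general theory: the choice of the algebra $M_2(\mathbb{C})$ and of the faithful state $\omega(a) = \mathrm{Tr}(\rho a)$ is an assumption made for concreteness. It builds the Gram matrix of the pre-inner product, quotients out its null space, represents the algebra by left multiplication, and checks property b) of the theorem.

```python
import numpy as np

rng = np.random.default_rng(0)

# A faithful state on M_2(C): omega(a) = Tr(rho a), with rho strictly positive.
rho = np.diag([0.7, 0.3])
omega = lambda a: np.trace(rho @ a)

# Basis e_k of the algebra M_2(C): the matrix units E_ij.
basis = [np.zeros((2, 2), complex) for _ in range(4)]
for k, (i, j) in enumerate([(0, 0), (0, 1), (1, 0), (1, 1)]):
    basis[k][i, j] = 1.0

def as_matrix(coeffs):
    """Algebra element with the given coefficients in the e_k basis."""
    return sum(c * e for c, e in zip(coeffs, basis))

# Gram matrix of the pre-inner product <a, b> = omega(a* b).
G = np.array([[omega(ei.conj().T @ ej) for ej in basis] for ei in basis])

# Quotient by N_omega: keep only eigendirections with nonzero eigenvalue.
lam, U = np.linalg.eigh(G)
keep = lam > 1e-12
W = U[:, keep] / np.sqrt(lam[keep])  # orthonormal basis of H_omega (coefficient vectors)
dim = W.shape[1]                     # faithful state => nothing is quotiented out

def inner(c1, c2):
    return c1.conj() @ G @ c2

def pi(a):
    """Matrix of pi_omega(a): left multiplication by a on the quotient."""
    M = np.zeros((dim, dim), complex)
    for j in range(dim):
        aw = a @ as_matrix(W[:, j])
        aw_coeffs = np.array([aw[0, 0], aw[0, 1], aw[1, 0], aw[1, 1]])
        for i in range(dim):
            M[i, j] = inner(W[:, i], aw_coeffs)
    return M

# The cyclic vector Psi_omega = [I], expanded in the orthonormal basis.
id_coeffs = np.array([1.0, 0.0, 0.0, 1.0])
psi = np.array([inner(W[:, j], id_coeffs) for j in range(dim)])

# GNS property b): <Psi | pi_omega(a) Psi> = omega(a), here for a random a.
a = rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))
lhs = psi.conj() @ pi(a) @ psi
print(dim, np.allclose(lhs, omega(a)))  # 4 True
```

Since the state chosen here is faithful, $N_\omega = \{0\}$ and the GNS space has the full dimension of the algebra; replacing $\rho$ by a pure state would produce a genuine quotient.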
Thus we have found a way to \textgravedbl represent\textacutedbl a generic $C^*$-algebra over a Hilbert space. Nevertheless, such a representation depends on the state: changing the state, we also change the operators. In addition, in general it is not surjective: we may find elements of $\mathcal{B}(\mathcal{H}_{\omega})$ which do not represent any element of $\mathcal{A}$. The way out of this problem is the \emph{Gelfand-Naimark theorem}.
\begin{theorem}
Any $C^*$-algebra with unit $\mathcal{A}$ is isomorphic (there exists an isomorphism preserving the involution) to a sub-algebra of $\mathcal{B}(\mathcal{H})$ for some Hilbert space $\mathcal{H}$.
\end{theorem}
In particular, one can prove that the Hilbert space considered in this theorem is $\mathcal{H} = \bigoplus_{\omega \in \mathcal{C}} \mathcal{H}_\omega$ and the isomorphism is ${\Pi(a):= \bigoplus_{\omega \in \mathcal{C}}\pi_\omega(a)}$.
Summarising: any $C^*$-algebra with unit may be represented over a Hilbert space using suitable bounded operators; if the algebra is also abelian, we can always find an algebra of functions isomorphic to the original algebra.
\subsection{Operational approach for a physical theory: description of a physical system}
We have seen that, when we try to describe a single physical quantity, we are naturally led to introduce an abelian $C^*$-algebra with unit. Commutativity allows us to say that we can represent the elements of the algebra using functions, while the \emph{Riesz representation theorem} allows us to understand how to represent algebraic states over this space of functions
\begin{theorem}[Riesz representation theorem]
Let $\mathsf{X}$ be a locally compact Hausdorff space and $\omega: C(\mathsf{X}) \rightarrow \mathbb{C}$ a continuous positive functional. Then there exists a unique Borel measure $\mu_\omega$
such that
\begin{equation*}
\omega(f) = \int_{\mathsf{X}} f(x) \mu_{\omega}(dx)
\end{equation*}
\end{theorem}
This theorem allows us to conclude that, given the couple $(\mathcal{A},\omega)$, where $\mathcal{A}$ is a commutative $C^*$-algebra and $\omega$ is a state, there exists a probability space $(\mathsf{X},\mathscr{B}(\mathsf{X}), \mu_\omega)$ such that the elements of the algebra are continuous functions on $\mathsf{X}$ (which means that they are random variables). Thus, using this different approach, one obtains that each physical quantity is always described by a random variable over a probability space. This is part of the content of observation \textbf{O1} about a quantum system: we can see that it is a general feature of any physical system, since no quantum assumption was made in its derivation.
From the discussion above, one can see that $(\mathcal{A},\omega)$ and the random variables over $(\mathsf{X},\mathscr{B}(\mathsf{X}),\mu_\omega)$ describe the same thing: one can thus choose whether to deal with probability using measure-theoretic notions or algebraic notions. More formally \cite{lA}, the couple $(\mathcal{A},\omega)$, where $\mathcal{A}$ is a $^*$-algebra and $\omega$ is a positive normalised functional on $\mathcal{A}$, is called an \emph{algebraic probability space}. If the algebra is $C^*$, $(\mathcal{A},\omega)$ is called a \emph{$C^*$-probability space}. Finally, it is possible to prove that any commutative algebraic probability space is equivalent, up to zero-measure sets, to the usual measure-theoretic probability space \cite{hM}.
In general a physical system is described by more than one physical quantity. The mathematical object that we need to describe the whole system should contain all the abelian $C^*$-algebras of all the physical quantities. Thus, in order to be as conservative as possible, we may associate to each physical system a $C^*$-algebra $\mathcal{A}$
defined to be the smallest $C^*$-algebra which contains all the abelian $C^*$-algebras with unit $\mathcal{A}_a$ associated to the single physical quantities (clearly $\mathcal{A}$ will have a unit). In addition we may extend by continuity all the states $\omega: \mathcal{A}_a \rightarrow \mathbb{C}$ to $\mathcal{A}$. Thus we can summarise as follows: \emph{to each physical system we may associate a $C^*$-probability space $(\mathcal{A},\omega)$}. In this way, one can motivate operationally the observation \textbf{O1} seen in the introduction.
As we will see, this $C^*$-probability space is not abelian in general. Given a $C^*$-algebra $\mathcal{A}$, for each $a \in \mathcal{A}$ and $\alpha \in \sigma(a)$, we may find a state $\omega_\alpha$ such that $\omega_\alpha(a) = \alpha$. Thus we can see that the spectrum of an element of the algebra can be interpreted as the set of all the possible values that the physical quantity may assume (which is not the set of all the possible outcomes of an experiment). Because the measurable quantities are always expressed using real numbers, we have to conclude that the spectrum is a subset of $\mathbb{R}$. This happens if and only if the physical quantities are represented by self-adjoint elements of $\mathcal{A}$. Thus \emph{physical quantities are the self-adjoint elements of the algebra}.
At this point we want to analyse in more detail the structure of the $C^*$-algebra $\mathcal{A}$. In particular we want to discuss under which conditions it is abelian. The main tool we will use is the notion of \emph{entropy of a physical quantity}. Let $a \in \mathcal{A}$ be a physical quantity and $\omega$ a state. By the GNS theorem, we may always find a triple $(\mathcal{H}_\omega,\pi_\omega, \Psi_\omega)$, so the physical quantity $a \in \mathcal{A}$ can be represented on $\mathcal{H}_\omega$ as $\hat{\pi}_\omega(a)$. Using the spectral measure of this operator, one can compute the probability to observe the value of $a$ in the set $A$ as
\begin{equation*}
P_\omega(a \in A) = \int_{\sigma(a)} \chi_A(x) \mu^{(\hat{\pi}_\omega(a))}_\omega(dx) = \int_{[-\|a\|,\|a\|]} \chi_A(x) \mu^{(\hat{\pi}_\omega(a))}_\omega(dx)
\end{equation*}
since $\sigma(a) \subset [-\|a\|,\|a\|]$ and the spectral measure has support on $\sigma(a)$ only. Now, if we partition the spectrum using sets of diameter $\epsilon$, namely $\cup_{i \in I} U_i^{(\epsilon)} = [-\|a\|,\|a\|]$, for a given $\epsilon$ we obtain a set of probabilities $\{ p_\epsilon (i;\omega)\}_{i \in I}$, where $p_\epsilon (i;\omega):= P_\omega(a \in U_i^{(\epsilon)})$, which describes the probabilistic behaviour of the physical quantity in the state $\omega$. The diameter $\epsilon$ of the above sets can be thought of as the width of the bins of the measuring device of the given physical quantity. At this point one can define consistently the \emph{$\epsilon$-entropy of $a$ in the state $\omega$} as
\begin{equation*}
H_\omega^{(\epsilon)}(a) := - \sum_{i \in I} p_\epsilon(i;\omega) \log p_\epsilon(i;\omega)
\end{equation*}
This is the entropy of the physical quantity $a \in \mathcal{A}$ for a system prepared in the state (configuration) $\omega$, when it is measured with a measurement apparatus of resolution $\epsilon$. Given a second physical quantity $b \in \mathcal{A}$, one can define $H_\omega^{(\delta)}(b)$ in the same way. Then, excluding the trivial partitions, the following theorem holds
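As a small numerical illustration of the definition (the particular spectral measure below, a discretised Gaussian profile on $[-1,1]$, is an arbitrary choice made only for the example), the $\epsilon$-entropy can be computed by coarse-graining the measure into bins of diameter $\epsilon$; as expected, a coarser measuring device yields a smaller entropy.

```python
import numpy as np

def eps_entropy(values, probs, eps, norm=1.0):
    """H^(eps): entropy of the spectral measure coarse-grained
    into bins of diameter eps covering [-norm, norm]."""
    edges = np.arange(-norm, norm + eps, eps)
    p, _ = np.histogram(values, bins=edges, weights=probs)
    p = p[p > 0]
    return -np.sum(p * np.log(p))

# Toy spectral measure: a truncated Gaussian profile on sigma(a) in [-1, 1].
x = np.linspace(-1, 1, 2001)
w = np.exp(-8 * x**2)
w /= w.sum()

h_fine = eps_entropy(x, w, eps=0.01)    # fine-resolution apparatus
h_coarse = eps_entropy(x, w, eps=0.25)  # coarse-resolution apparatus
print(h_fine > h_coarse)  # True
```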
\begin{theorem}\label{myteo}
Let $\mathcal{A}$ be a $C^*$-algebra with unit and consider $a,b \in \mathcal{A}$. If for any state $\omega$, and any $\epsilon,\delta > 0$
\begin{equation*}
H_\omega ^{(\epsilon)}(a) + H_\omega^{(\delta)}(b) \geqslant D(\epsilon, \delta)
\end{equation*}
with $ D(\epsilon, \delta)$ positive and fixed for any $\epsilon$ and $\delta$ positive, then $[a,b] \neq 0$.
\end{theorem}
Also the converse holds \cite{MU}, thus the above relation is another way to demand non-commutativity between $a$ and $b$. Operationally, the meaning is simple: even if we know $a$ with certainty ($H_\omega ^{(\epsilon)}(a) = 0$), we cannot know $b$ with arbitrary precision, and vice versa. This is the operational analogue of observation \textbf{O2}. The above relation involves only probabilities, and so it can be considered as an operational criterion to establish whether or not two physical quantities are represented by commuting elements of $\mathcal{A}$. Hence, as claimed above, in general $\mathcal{A}$ may not be commutative. A final observation: the partition of the spectrum is necessary if we admit the possibility of observables whose $\sigma(a)$ is not discrete (algebraically this cannot be established in advance, because the usual classification of the spectrum depends on the state), in order to have a well defined notion of (Shannon) entropy. If one deals with physical quantities having discrete $\sigma(a)$ (for instance by construction), the theorem can be stated using the usual Shannon entropy, without considering any partition of $\sigma(a)$. Relations like the one described above are called \emph{entropic uncertainty relations}. Hence we can see that, \emph{if we find an entropic uncertainty relation, then the algebra cannot be abelian}. In this case, such an algebra can be represented only using a subset of the bounded operators over a Hilbert space $\mathcal{H}$, and not with functions.
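A concrete instance of such an entropic uncertainty relation is the Maassen--Uffink bound: for two observables with eigenbases $\{|a_i\rangle\}$, $\{|b_j\rangle\}$ on a finite-dimensional Hilbert space, $H(a)+H(b) \geqslant -2\log\max_{i,j}|\langle a_i|b_j\rangle|$. The following sketch checks it numerically for the qubit observables $\sigma_z$ and $\sigma_x$, for which the bound $D=\log 2$ is state-independent and strictly positive, exactly the situation of the theorem above.

```python
import numpy as np

rng = np.random.default_rng(1)

def shannon(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

# Eigenbases (as columns) of the qubit observables sigma_z and sigma_x.
z_basis = np.eye(2)
x_basis = np.array([[1, 1], [1, -1]]) / np.sqrt(2)

# Maassen-Uffink bound D = -2 log max_ij |<z_i|x_j>| = log 2 for this pair.
D = -2 * np.log(np.max(np.abs(z_basis.conj().T @ x_basis)))

ok = True
for _ in range(1000):
    psi = rng.standard_normal(2) + 1j * rng.standard_normal(2)
    psi /= np.linalg.norm(psi)
    pz = np.abs(z_basis.conj().T @ psi) ** 2  # Born probabilities, z measurement
    px = np.abs(x_basis.conj().T @ psi) ** 2  # Born probabilities, x measurement
    ok &= shannon(pz) + shannon(px) >= D - 1e-9
print(np.isclose(D, np.log(2)), bool(ok))  # True True
```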
\subsection{Algebraic formulation of quantum mechanics}
Now we are able to reconstruct quantum mechanics from the operational point of view. Below we list the set of axioms needed to reconstruct quantum mechanics, justified by the discussion done so far.
\begin{center}
\emph{Each physical system is described by a $C^*$-probability space $(\mathcal{A},\omega)$. The physical quantities are the self-adjoint elements of $\mathcal{A}$ and the possible ways one can prepare the system are represented by states $\omega$.}
\end{center}
This is a general feature of a physical system and no quantum assumption has been made up to this point. The probabilistic interpretation allows us to conclude that
\begin{center}
\emph{For a composite system, the algebraic probability space is constructed using the $C^*$-tensor product of the algebras associated to each subsystem.}
\end{center}
The $C^*$-tensor product is the only possible tensor product between algebras which preserves the $C^*$-property. This rule is just the generalisation, in the $C^*$-algebraic context, of what is usually done in measure-theoretic probability spaces: in fact, for commutative $C^*$-algebras, it reduces exactly to the cartesian product of the sample spaces. One then needs to specify whether the algebra is abelian or not.
\begin{center}
\emph{For a quantum system, some entropic uncertainty relation holds, which means that $\mathcal{A}$ is not abelian.}
\end{center}
This requirement forces us to remain over a Hilbert space when we represent the $C^*$-algebra. Positive normalised linear functionals are states, and the GNS theorem shows that the expectation is computed as in the previous section. Representing the algebra $\mathcal{A}$ over a Hilbert space, one is forced to represent physical quantities only by some class of bounded operators. Thus, we have no room for the position and momentum operators, because they are unbounded. Nevertheless there is a way out: one can always consider sequences of bounded operators converging to the unbounded position and momentum operators. The commutation relations between them can be obtained from the Weyl relations, defining a \emph{Weyl $C^*$-algebra}. Time evolution can be obtained using the additive group $(\mathbb{R},+)$, which is the same subgroup of the Galilean group generating the evolution in the previous section, to describe time translations.
\begin{center}
\emph{Time evolution is described by the group of time translations $(\mathbb{R},+)$.}
\end{center}
On the GNS Hilbert space, such a group is represented via the Stone theorem, obtaining the usual Schr\"odinger evolution. Finally one needs to introduce the measurement postulate.
Using algebraic probabilistic considerations, the measurement postulate can be obtained using conditional expectations (this is the quantum Bayesianism solution of the measurement problem); nevertheless this solution is not universally accepted. Thus one also needs
\begin{center}
\emph{If the observed value of $a \in \mathcal{A}$ in the state $\omega$ is in the Borel set $F$, then the state after the measurement $\omega_F$ is}
\begin{equation*}
\omega_F : b \in \mathcal{A} \mapsto \frac{\langle \Psi_\omega | \hat{P}_F \pi_\omega(b) \hat{P}_F \Psi_\omega \rangle}{\langle \hat{P}_F\Psi_\omega | \hat{P}_F \Psi_\omega \rangle}
\end{equation*}
\end{center}
\section{What is the meaning of a Hilbert space? A toy-model proposal}\label{sec4}
In the two previous sections, we have seen, in a rather informal way, how quantum mechanics can be formulated starting from basic principles. In both cases there are two basic assumptions: one about the intrinsic probabilistic nature of a physical system (a very general feature), and one about the limitations in this description due to the presence of quantities that cannot be known at the same time. We have also seen that it is this feature that renders the Hilbert space formulation unavoidable. In the quantum logic approach, it is this feature (\textbf{O2}) which forces us to abandon the usual (boolean) logic, and so the possibility to use ordinary probability theory. In the operational-algebraic approach, it is the entropic uncertainty relations that make the algebra non-abelian, so that we cannot map it completely into an ordinary probability space.
Quantum mechanics is usually formulated using the Hilbert space language, and this gives rise to all the famous problems in the interpretation (like state superposition or entanglement). We cannot abandon such a formulation, because of non-commutativity, so the proposal of this section is to treat non-commutativity as the reason why one is forced to use a Hilbert space, and not to consider it as a consequence, as is typically done. Once one accepts this view, one can recognise that entropic uncertainty relations and algebraic probability spaces are very useful tools to derive the limitations in the simultaneous description of physical quantities from the intrinsic probabilistic nature of a physical system. In the following paragraphs we will describe a simple toy model, where the origin of the uncertainty is clear, which is able to reproduce, at the algebraic level, the non-commutativity of the analogues of position and momentum for a particle. This model is very simple, and we do not claim that it reproduces completely its quantum analogue; nevertheless we think that it suggests an interesting motivation for the reason why nature, at the fundamental level, should be described using Hilbert spaces.
\subsection{The toy model: random jumps over a random space}
In the model proposed here, there are two main actors: the space and the particle. The space is assumed discrete and finite: it can be thought of as a random distribution of, say, $N$ points over the real line (we will consider the 1-dimensional case for simplicity). In addition the space is not assumed static but stochastic: the initial distribution of space points is assumed to be known and it evolves in time, each space point performing a discrete-time random walk. The particle is assumed to be a point-like object whose dynamics can be thought of as a succession of jumps from one space point to another. Now, we formalise these ideas from the mathematical point of view.
Let us start by briefly reviewing the mathematical formulation of the basic stochastic process describing the space: \emph{the random walk on a line}. Let $\{X_i\}$ be a collection of random variables such that $P[X_i = +l] = p$ and $P[X_i = - l] = 1 - p =: q$, where $l$ is a fixed number. Such processes are called \emph{Bernoulli random variables} and a typical example is coin tossing. We may describe the random walk in the following way. Suppose we have a person in the initial position $x_0$ at the initial (discrete) time $0$. At each instant of time this person tosses a coin: if he gets heads he moves to the right by $l$, otherwise he moves to the left by the same amount. Thus the position of the person at time $1$ will be $S_1 = x_0 + l$ if he gets heads, $S_1 = x_0 - l$ otherwise. Repeating this procedure for each instant of time, we can say that the movement of this person is a random walk. Hence the random walk starting at the point $x_0$, $S_N$, is defined as
\begin{equation*}
S_N := x_0 + \sum_{i=1}^N X_i
\end{equation*}
A central quantity we want to compute is the probability to have the displacement $S_N - x_0 = d$, for some $d$ integer multiple of $l$, at time $N$ starting from the point $x_0$. It can be proved that
\begin{equation*}
P[S_N - x_0 = d] =
\begin{cases}
\frac{N!}{\left[\frac{1}{2}\left(\frac{d}{l} + N\right)\right]!\left[\frac{1}{2}\left(N - \frac{d}{l}\right)\right]!} p ^{\frac{1}{2}\left(\frac{d}{l} + N\right)}q^{\frac{1}{2}\left(N - \frac{d}{l}\right)}&
\mbox{ if }d \in [-Nl,+Nl] \\
0 &\mbox{ otherwise}
\end{cases}
\end{equation*}
Suppose that the initial position $x_0$ is a random variable with distribution $\pi(x_0)$ over the real line (possibly discrete). Then we can write that
\begin{equation*}
P[S_N = d] = \int_\mathbb{R} dx_0 P[S_N - x_0 = d]\pi(x_0)
\end{equation*}
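The displacement distribution above can be checked by direct simulation. The following sketch (with the parameters $N=20$, $p=1/2$, $l=1$ chosen purely for illustration) compares the empirical frequency of a given displacement $d$ with the binomial formula.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)

N, p, l, x0 = 20, 0.5, 1, 0
n_samples = 200_000

# Simulate the walk: N Bernoulli steps of size +/- l.
steps = rng.choice([l, -l], size=(n_samples, N), p=[p, 1 - p])
S = x0 + steps.sum(axis=1)

def walk_prob(d, N, p, l=1):
    """P[S_N - x_0 = d]: binomial count of right steps r = (d/l + N)/2."""
    if abs(d) > N * l or (d // l + N) % 2 != 0:
        return 0.0   # unreachable displacement (range or parity)
    r = (d // l + N) // 2
    return comb(N, r) * p**r * (1 - p)**(N - r)

d = 4
empirical = np.mean(S - x0 == d)
print(abs(empirical - walk_prob(d, N, p)) < 0.005)  # True
```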
Consider now a collection of random walks on the line, $\{S_N^{(i)}\}_{i \in I}$, starting at different points $x_{0}^{(i)}$, where $I$ is a finite set. At each instant of time, this collection of random walks will select at most $|I|$ points over the real line (two or more random walks may overlap, because they are assumed independent: the presence of a random walk in a given point does not influence the probability for another random walk to be found in the same point). Such a collection of points, randomly distributed over the real line, is our model of space.
The particle in this model can be described essentially by two quantities: the first is the position of the particle, $X_N$, the second its velocity, $V_N$, suitably defined over a space of this kind. Let us consider the position first. Let $\Pi_N$ be the finite set of points selected by the collection of random walks $\{S_N^{(i)}\}_{i \in I}$ at time $N$. Clearly $\Pi_N \subset \mathbb{R}$. The random variable describing the position of a point-like particle at time $N$ will be a map $X_N: \Omega_{X_N} \rightarrow \Pi_N$. At this level it seems problematic to define the possible outcomes of $X_N$, because the set $\Pi_N$ changes at each instant of time. To avoid this problem, we may think that the random variable $X_N$ selects a single random walk in the collection $\{S_N^{(i)}\}_{i \in I}$. Select means that at the time step $N$ we have $X_N = S_N^{(i)}$, in probability. Hence, in general we may relate the probability to observe the particle in a given point of space with the probability to find a point of space where we observe the particle, as
\begin{equation}\label{PXN}
P[X_N = c] = \sum_{i \in I} P[X_N = S_N^{(i)}| S_N^{(i)} = c]P[S_N^{(i)}=c]
\end{equation}
Let us explain better the quantities involved in this equation. We have
\begin{enumerate}
\item[a)]$P[X_N = S_N^{(i)}| S_N^{(i)} = c] =: \gamma(N,i,c)$ represents the probability that the particle selects the $i$-th random walk in the collection, given that this
random walk at time $N$ is in $x = c$. This is the probability that can be changed if we act on the particle only. More precisely, by changing this term we can prepare the system (in our case the particle) in a configuration such that the probability to observe it in a given point is higher with respect to another configuration. For example, if we want to increase the probability to observe the particle in $x = a$, we may select (hence choose the index $i$ of) the random walks in the collection with a higher probability to be found in $x = a$. Summarising, when we prepare the particle in a given configuration we act on this object.
\item[b)] $P[S_N^{(i)}=c]$ is the probability to observe a point of space in $x = c$. This quantity is given by the model and cannot be changed if we act on the particle
only.
\end{enumerate}
The jumps of the particle between two different points of space will be modelled with a simple discrete-time Markov chain with transition probabilities
\begin{equation}\label{forwardprobX}
P[X_{N+1} = b | X_N = c] =: \alpha(b,c)
\end{equation}
where $b$ and $c$ are integer multiples of $l$, the step of the random walk (in what follows we will assume $l = 1$ for simplicity, so that $b,c \in \mathbb{Z}$). These transition probabilities are assumed to fulfil some equation describing the physical system's dynamics.
As we will see, the random space described before puts some constraints on the possible values these transition probabilities may assume. Let us now consider the second random variable we are interested in: the velocity. The space described above is discrete and, in addition, even the time is assumed discrete, hence it seems reasonable to define the velocity of a particle over this space as
\begin{equation*}
V_N := \frac{X_{N+1} - X_N}{N+1 - N} = X_{N+1} - X_N
\end{equation*}
Clearly, if it is a random variable, it should depend both on $X_N$ and $X_{N+1}$. This fact renders problematic the computation of the various probability generating functions (like the characteristic functions) as functions of the random variables $X_N$ and $X_{N+1}$. Nevertheless, this problem may be bypassed, in a certain sense, following this intuitive idea. Suppose we know that the particle at time $N$ is in the position $X_N = c$; then we know that the event $C:=\{X_N = c\}$ is true, namely $P[X_N = c] = 1$ (which means $P[X_N = d] = \delta_{d,c}$). Then, in this case, we can write that $V_N = X_{N+1} - c$ when $C$ is true. This suggests that the probability to have $V_N = a$, given that $C$ is true, is equal to the probability that $X_{N+1} = a+c$ (where $a$ can vary and $c$ is fixed). Then, using (\ref{forwardprobX}) we can conclude that $P[X_{N+1} = a+c|C] = \alpha(a+c,c)$ and so $P[V_N = a|C] = \alpha(a+c,c)$. Finally we can write that $P[V_N = a]$ is given by
\begin{equation*}
\begin{split}
P[V_N = a] &= \sum_c P[V_N = a | X_N = c]P[X_N = c] \\
&= \sum_c \alpha(a+c,c)P[X_N = c]
\end{split}
\end{equation*}
This completes the probabilistic description of the particle in this model.
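The relation $P[V_N = a] = \sum_c \alpha(a+c,c)\,P[X_N = c]$ can be checked by a small Monte Carlo simulation. In the sketch below the set of space points, the transition kernel $\alpha$ and the law of $X_N$ are all generated at random; these are pure toy choices with no dynamical meaning, made only to test the formula.

```python
import numpy as np

rng = np.random.default_rng(3)

n_sites = 5
sites = np.arange(n_sites)                 # toy space points 0..4 (l = 1)

# Toy kernel alpha(b, c) = P[X_{N+1} = b | X_N = c]: columns are distributions.
A = rng.random((n_sites, n_sites))
A /= A.sum(axis=0, keepdims=True)

pX = rng.random(n_sites); pX /= pX.sum()   # toy law of X_N

def v_prob(a):
    """P[V_N = a] = sum_c alpha(a + c, c) P[X_N = c]."""
    return sum(A[a + c, c] * pX[c] for c in sites if 0 <= a + c < n_sites)

# Monte Carlo: sample X_N, then X_{N+1} from the kernel, then V_N.
n = 300_000
X = rng.choice(sites, size=n, p=pX)
u = rng.random(n)
cumA = A.cumsum(axis=0)
Xnext = (u[None, :] > cumA[:, X]).sum(axis=0)  # inverse-CDF sampling per column
V = Xnext - X

a = 1
print(abs(np.mean(V == a) - v_prob(a)) < 0.01)  # True
```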
\paragraph*{Remark.} The computation of $P[V_N = a | C]$ can be done in a rigorous way using characteristic functions. Since $V_N = X_{N+1} - X_N$ and, when $C$ is true ($P[C] =1$, hence $P[X_N=d|C] = \delta_{d,c}$), $X_N$ and $X_{N+1}$ are independent, from the properties of the characteristic functions one can write $\varphi_{V_N}(\lambda)|_C = [\varphi_{X_{N+1}}(\lambda)|_C ][ \varphi_{ - X_N}(\lambda)|_C]$. This means that
\begin{equation*}
\begin{split}
\varphi_{V_{N}} (\lambda)|_C &= \left(\sum_{b} P[X_{N+1}=b|C]e^{i\lambda b}\right)\left( \sum_{d} P[X_N = d|C] e^{i\lambda(-d)}\right) \\
&= \sum_{b} P[X_{N+1}=b|C]e^{i\lambda (b-c)} \\
&= \sum_{b} \alpha(b,c)e^{i\lambda(b-c)}
\end{split}
\end{equation*}
In the case of a discrete random variable, one can recover the probability measure from the characteristic function using the formula
\begin{equation*}
P[X=a] = \lim_{T\rightarrow+\infty} \frac{1}{2T}\int^{+T}_{-T} e^{-ita} \varphi_X(t)dt
\end{equation*}
Hence we have that
\begin{equation}\label{VNC}
\begin{split}
P[V_N = a|C] &= \lim_{T\rightarrow+\infty} \frac{1}{2T} \int_{-T}^{+T} e^{-i\lambda a}\varphi_{V_N}(\lambda)|_Cd\lambda \\
&= \lim_{T\rightarrow+\infty} \frac{1}{2T} \int_{-T}^{+T} e^{-i\lambda a}\sum_b \alpha(b,c)e^{i \lambda(b-c)}d\lambda \\
&= \sum_b \alpha(b,c)\lim_{T\rightarrow+\infty} \frac{1}{2T} \int_{-T}^{+T} e^{i\lambda(b-c-a)}d\lambda \\
&= \sum_b \alpha(b,c) \delta_{0,b-c-a} \\
&= \alpha(a+c,c)
\end{split}
\end{equation}
This confirms the result obtained above.
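The limit formula can also be checked numerically: truncating at a large but finite $T$, the spurious terms decay like $1/T$. In the sketch below the kernel $\alpha$ is again a randomly generated toy kernel, and the integral is approximated by a Riemann sum on a fine grid.

```python
import numpy as np

rng = np.random.default_rng(4)

n_sites = 5
A = rng.random((n_sites, n_sites))     # toy kernel alpha(b, c)
A /= A.sum(axis=0, keepdims=True)      # columns are probability distributions

c, a = 2, 1                            # condition X_N = c, ask for V_N = a

# Conditional characteristic function: phi(lam)|_C = sum_b alpha(b, c) e^{i lam (b - c)}
T = 500.0
lam = np.linspace(-T, T, 200_001)
phi = sum(A[b, c] * np.exp(1j * lam * (b - c)) for b in range(n_sites))

# Inversion: (1/2T) int_{-T}^{T} e^{-i lam a} phi(lam) d lam  ->  alpha(a + c, c)
dl = lam[1] - lam[0]
p = (np.exp(-1j * lam * a) * phi).real.sum() * dl / (2 * T)

print(abs(p - A[a + c, c]) < 0.02)  # True
```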
\subsection{Entropy for $V_N$ and entropy for $X_N$}
In this paragraph, we will compute the entropy of the two random variables described before. Consider the following situation: suppose we know that at time $N$ the particle is in the position $x=c$. Thus we can conclude that $P[X_N = d|C] = \delta_{d,c}$. In addition, using (\ref{VNC}), we also know that $P[V_N = a|C] = \alpha(a+c,c)$. Now, if $\alpha(a+c,c)$ could be changed continuously to a delta, then we would obtain $P[V_N = a|C] = \delta_{a+c,c}$. The computation of the two entropies with these probabilities would give us $H(X_N|C) = 0$ and $H(V_N|C) = 0$. Nevertheless we should observe the following fact. Recalling (\ref{PXN}), we can write
\begin{equation}\label{transprob}
\alpha(a+c,c) = \sum_{j \in I} P[X_{N+1} = S^{(j)}_{N+1}|S^{(j)}_{N+1} = a+c, C] P[S^{(j)}_{N+1} = a+c]
\end{equation}
The quantity $P[X_{N+1} = S^{(j)}_{N+1}|S^{(j)}_{N+1} = a+c, C] =: \gamma(N+1,j,a+c|C)$ has the following property
\begin{equation}\label{propG}
\sum_{j \in I}\gamma(N+1,j,a+c|C) = 1
\end{equation}
Recalling that $ \gamma(N+1,j,a+c|C) \in [0,1]$, because it is a transition probability, equation (\ref{transprob}) can be considered as the average, with respect to the probability distribution $\{ \gamma(N+1,j,a+c|C)\}_{j \in I}$, of the probability to find a point of space in $x = a+c$. This observation is very important: it allows us to find a bound for the transition probabilities. In fact, by the properties of the average, for all the $\alpha(a+c,c) \neq 0$ (which give a non-zero contribution to the entropy) we can write that
\begin{equation}
\alpha(a+c,c) \leqslant \max_{j \in I} P[S^{(j)}_{N+1} = a+c]
\end{equation}
If we assume that the random space is a \emph{purely random process}, namely that $P[S^{(j)}_N = b] < 1$ for any $N$, $b$ and $j \in I$, then we can see that $\alpha(a+c,c)$ cannot be $1$ for any value of $a$ (from now on this will always be assumed).
This implies that we cannot have a delta-like probability distribution for both the observables $V_N$ and $X_N$ if the space is a purely random process. The bound found above cannot be changed if we act on the particle only: it is linked to the process describing the random space. This observation suggests that we can find a bound for the information that we can have on these two random variables at the same time.\newline
Let us find a bound for the sum of the two entropies. It is a known fact that the entropy is a concave function of the probability distribution. Thus, the Jensen inequality holds, namely if $g$ is a concave function
\begin{equation*}
g\left( \frac{\sum_{i=1}^{n} a_i x_i}{\sum_{i=1}^{n} a_i} \right) \geqslant \frac{\sum_{i=1}^{n}a_i g(x_i)}{\sum_{i=1}^{n} a_i}
\end{equation*}
where $a_i \geqslant 0$ are arbitrary non-negative weights (not all zero). Then, we can write
\begin{equation*}\begin{split}
H(V_N|C) & = -\sum_{a} \alpha(a+c,c) \log \alpha(a+c,c) \\
& = - \sum_a \left( \sum_{j \in I} \gamma(N+1,j,a+c|C) P[S^{(j)}_{N+1} = a+c]\right) \log \left( \sum_{j \in I} \gamma(N+1,j,a+c|C)P[S^{(j)}_{N+1} = a+c] \right) \\
& \geqslant - \sum_a \sum_{j \in I} \gamma(N+1,j,a+c|C) \left( P[S^{(j)}_{N+1} = a+c] \log P[S^{(j)}_{N+1} = a+c] \right) \\
& \geqslant - \sum_a \max_{j \in I}|_a \left( P[S^{(j)}_{N+1} = a+c] \log P[S^{(j)}_{N+1} = a+c] \right)
\end{split}\end{equation*}
where we used the Jensen inequality, (\ref{propG}), and the fact that an average of the non-negative quantities $-P\log P$ is bounded from below by their minimum; $\max_{j \in I}|_a$ denotes the maximum for $a$ fixed. This is already a bound on $H(V_N|C)$ which does not involve processes related to the particle; nevertheless we can further simplify this result if we add some reasonable assumptions on the random walks describing the space process.
We may require that
\begin{enumerate}
\item[a)] The initial positions $\{x^{(i)}_0\}_{i \in I}$ are i.i.d. random variables, namely $x^{(i)}_0 \sim \pi^{(i)} = \pi$ for any $i \in I$;
\item[b)] The left and right probabilities (i.e. $p^{(i)}$ and $q^{(i)}$) of the random walks are all equal, namely $p^{(i)} = p$ for any $i \in I$.
\end{enumerate}
Under these two assumptions, we can say that all the random walks are \emph{statistically equivalent}, in the sense that $P[S_N^{(i)} = a] = P[S_N^{(j)} = a]$ for any $i,j \in I$ (\textgravedbl observing\textacutedbl the space process only at time-step $N$). This implies that
\begin{equation*}
- \sum_a \max_{j \in I}|_a \left( P[S^{(j)}_{N+1} = a+c] \log P[S^{(j)}_{N+1} = a+c] \right) = - \sum_a \left( P[S^{(j)}_{N+1} = a+c] \log P[S^{(j)}_{N+1} = a+c] \right)
\end{equation*}
namely
\begin{equation*}
H(V_N|C) \geqslant H(S_{N+1})
\end{equation*}
Finally, it is not difficult to see that $H(X_N|C) = 0$ since $P[X_N = d |C] = \delta_{d,c}$. Hence we can write that
\begin{equation*}
H(X_N|C) + H(V_N|C) \geqslant H(S_{N+1})
\end{equation*}
and because $H(X|Y) = \sum _i P[Y = i]H(X|\{Y=i\})$ and $H(X|Y) \leqslant H(X)$ (conditioning reduces entropy), the above inequality implies that
\begin{equation}\label{EUR}\begin{split}
H(X_N) + H(V_N) &\geqslant \sum_c P[X_N = c]\left[H(X_N|C) + H(V_N|C) \right] \\
&\geqslant \sum_c P[X_N = c]H(S_{N+1}) = H(S_{N+1})
\end{split}\end{equation}
The RHS has the following features
\begin{enumerate}
\item[a)] $H(S_{N+1})$ is a positive quantity which cannot be changed if we act on the particle only: $H(S_{N+1})$ is fixed once the model of the space is given;
\item[b)] $H(S_{N+1})$ is zero only for a deterministic space. This case is excluded if we assume that the random space is a purely random process (the space is not
deterministic). As expected from the initial discussion on the transition probabilities, we can see that the bound on the entropies is related to the random nature
of the space;
\item[c)] It is not guaranteed that this bound is optimal.
\end{enumerate}
The above bound can be explained in the following manner: we may change the system configuration (namely the $\{\gamma(N,i,c)\}$) in order to know completely the position of the particle; nevertheless the velocity of the particle remains at least as uncertain as the future position of a space point. Such uncertainty cannot be reduced by acting on the particle only. The discussion done so far should prove that the position and velocity of a point-like particle which jumps at random between different points of a random space satisfy an entropic uncertainty relation.
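The bound (\ref{EUR}) can be illustrated numerically in the statistically equivalent case, where $\alpha(b,c) = P[S_{N+1}=b]$ for every $c$ (since $\sum_j \gamma = 1$), so that $V_N = S_{N+1} - X_N$ with independent terms. The sketch below uses a binomial space process and an arbitrary particle law; all the parameters are toy choices made for the example.

```python
import numpy as np
from math import comb

def H(p):
    """Shannon entropy (natural log) of a discrete distribution."""
    p = p[p > 1e-15]
    return -np.sum(p * np.log(p))

# Space process at time N+1: symmetric walk of 10 unit steps started in 0.
M = 10
supp = np.arange(-M, M + 1)
pS = np.array([comb(M, (d + M) // 2) * 0.5**M if (d + M) % 2 == 0 else 0.0
               for d in supp])

# Particle: arbitrary toy law for X_N over the same set of points.
rng = np.random.default_rng(5)
pX = rng.random(len(supp)); pX /= pX.sum()

# Statistically equivalent walks => alpha(b, c) = pS(b) for every c, so
# V_N = S_{N+1} - X_N with independent terms: its law is a convolution.
pV = np.convolve(pS, pX[::-1])

print(H(pX) + H(pV) >= H(pS))  # the bound of the text: True
```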
\subsection{Algebraic description of a point-like particle}
The final result of the previous paragraph suggests that position and velocity do not commute. In order to point out this non-commutativity explicitly, we will describe the particle (not the space) using algebras. The position is a random variable, namely a measurable map between a probability space $(\Omega_{X},\mathcal{E}(\Omega_{X}), P_{X})$ and a measure space, say $(\mathbb{R},\mathscr{B}(\mathbb{R}))$. Thus we can write $X_N:\Omega_X \rightarrow \mathbb{R}$. The same can be said for the velocity: it is defined over a probability space $(\Omega_V, \mathcal{E}(\Omega_V),P_V)$, takes values in the measure space $(\mathbb{R},\mathscr{B}(\mathbb{R}))$ and is a measurable map, thus $V_N: \Omega_V \rightarrow \mathbb{R}$. This description is equivalent, up to null sets, to the algebraic probability spaces $(\mathcal{A}_X, \varrho)$ and $(\mathcal{A}_V, \varsigma)$, for $X_N$ and $V_N$ respectively. In this description, $X_N$ and $V_N$ are algebraic random variables, namely (involution preserving) homomorphisms between two algebras: $X_N$ is defined as the homomorphism $X_N:\mathcal{A}_X \rightarrow L_\infty(\mathbb{R},\mathscr{B}(\mathbb{R}))_X$ and similarly for $V_N$ ($L_\infty(\mathbb{R},\mathscr{B}(\mathbb{R}))$ is the canonical $C^*$-algebra associated to the measurable space $(\mathbb{R},\mathscr{B}(\mathbb{R}))$).
At this point, using the language of algebraic probability, we can define the $C^*$-probability space describing the point-like particle of the model as the $C^*$-probability space $(\mathcal{A},\omega)$, where $\mathcal{A}$ is the smallest $C^*$-algebra which contains both $L_\infty(\mathbb{R},\mathscr{B}(\mathbb{R}))_X$ and $L_\infty(\mathbb{R},\mathscr{B}(\mathbb{R}))_V$, and $\omega$ is a state. Thus $X_N$ and $V_N$ are elements of $\mathcal{A}$, and the discussion of the previous paragraph tells us that they fulfil an entropic uncertainty relation; by theorem \ref{myteo} we have that $[X_N,V_N] \neq 0$. Thus, recalling the previous discussion about non-abelian algebras, we can see that the description of our point-like particle must be done over a Hilbert space, and cannot be mapped to an ordinary probability space.
It is worth remarking that this apparently strange result is possible because we eliminated the space from the description. In fact, we started from an ordinary probability space, and we found a relation between the entropies of the three main objects of the model (the space, the position and the velocity). Then, eliminating the space from the model, we found an entropic uncertainty relation. It is interesting to observe that in ordinary quantum mechanics the space does not play any role, exactly as in the final model of the particle.
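As a minimal numerical illustration of the kind of non-commutativity at stake (these operators are standard textbook examples, not objects defined in the model), one can exhibit a finite-dimensional "position-like" diagonal operator and a "shift-like" operator whose commutator is non-zero, mirroring the conclusion $[X_N,V_N]\neq 0$:

```python
import numpy as np

# Illustration only: the "clock" matrix X (diagonal, position-like) and the
# cyclic shift S on C^d are a classic pair of non-commuting operators.
# They are NOT the X_N, V_N of the model; they only show that such pairs
# force a Hilbert-space (non-abelian) description.
d = 5
omega = np.exp(2j * np.pi / d)
X = np.diag([omega ** k for k in range(d)])   # clock (position-like)
S = np.roll(np.eye(d), 1, axis=0)             # cyclic shift (velocity-like)

commutator = X @ S - S @ X
print(np.linalg.norm(commutator) > 0)         # the two operators do not commute
```

Any faithful representation of the algebra generated by such a pair is necessarily non-commutative, which is exactly the situation forced upon $X_N$ and $V_N$ by the entropic uncertainty relation.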
\subsection{Conclusions, weak points of the model and possible further development}
The model presented in the previous sections has the following interesting feature: the space and the particle are treated in the same way. Considering that any observation we can perform in a laboratory gives us only a probabilistic outcome (it is nonsense to say ``the quantity $A$ has the value $b$''), this feature can be justified using an operational approach to the construction of a physical theory. In the model presented, the evolution of the particle is the only one possible, in the sense that if the particle changes position, this must happen through a jump. In this sense we derived \textbf{O1} from \textbf{O2}, in the particular model considered. Of course, from the discussion carried out so far we cannot conclude that quantum mechanics is equivalent to something similar to the model presented here. In particular, upon reflection, the following aspects may look strange
\begin{enumerate}
\item[a)] \emph{Time is discrete}. The discreteness of time allows us to define $V_N$ as we did, and non-commutativity seems to be a consequence of it. Nevertheless, in
the proof of the entropic uncertainty relation between $X_N$ and $V_N$, time does not play any role. So the proof seems quite robust to possible changes in the definition of $V_N$ induced by changes in the assumptions on time.
\item[b)] \emph{Space is discrete}. The structure of space is the core of the proof. Nevertheless, discreteness limits only the places where a point of space can be found.
This problem could be solved by switching the description of space from random walks to Wiener(-like) processes, something that at the moment is not available. Thus this weak point remains; later we discuss an interesting mathematical object that can be used to treat this problem.
\item[c)] \emph{The constant in the entropic uncertainty relation changes with time}. Looking back at the entropic uncertainty relation (\ref{EUR}), one can easily see that
the constant depends on time. This seems rather strange, although it does not affect the result: the constant remains positive for each time and it is independent of the (particle) state. Nevertheless, one should observe the following fact: the constant is related to the stochastic process describing the space. In particular, in (\ref{EUR}) only the RHS depends on the space, so if we change the model of space, only this side changes. For example, one can replace the usual random walk (where the possible positions of the walker range over all the integers, which causes the value of the constant to increase) with a random walk with reflecting boundaries. In this case the number of possible positions of a point of space is finite, and so the constant can be fixed. This suggests that this problem is not so fundamental. In addition, the bound is not assumed to be optimal, as already observed.
\end{enumerate}
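The remark in point c) can be probed numerically. The sketch below (our own toy computation, with arbitrarily chosen parameters) evolves the exact position distribution of a simple symmetric walk on a wide segment of $\mathbb{Z}$ and of a walk confined by reflecting boundaries, and compares their Shannon entropies: the former grows with time, while the latter stays bounded by $\log_2 L$, consistent with the claim that a reflecting-boundary space model yields a time-independent constant in (\ref{EUR}):

```python
import numpy as np

def step(p, reflecting=False):
    # One step of a symmetric nearest-neighbour walk on a segment; with
    # reflecting=True the mass that would leave the segment stays at the wall.
    q = np.zeros_like(p)
    q[1:] += 0.5 * p[:-1]
    q[:-1] += 0.5 * p[1:]
    if reflecting:
        q[0] += 0.5 * p[0]
        q[-1] += 0.5 * p[-1]
    return q

def entropy(p):
    nz = p[p > 0]
    return float(-np.sum(nz * np.log2(nz)))

T = 400
free = np.zeros(2 * T + 1); free[T] = 1.0   # segment wide enough to look like Z
L = 11
refl = np.zeros(L); refl[L // 2] = 1.0      # reflecting boundaries on {0,...,L-1}

H_free, H_refl = [], []
for t in range(T):
    free = step(free)
    refl = step(refl, reflecting=True)
    H_free.append(entropy(free))
    H_refl.append(entropy(refl))

print(round(H_free[49], 3), round(H_free[-1], 3), round(max(H_refl), 3))
```

The entropy of the free walk keeps growing (roughly like $\tfrac12\log_2 t$), whereas the entropy of the reflecting walk saturates below $\log_2 11 \approx 3.46$.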
From the above arguments, one can see that a better model for the space would be desirable. Keeping this in mind, we list a series of facts suggesting that further studies in this direction can be interesting
\begin{enumerate}
\item[a)] \emph{Determinantal random point fields.} A determinantal random point field is a stochastic process which describes the random distribution of $N$ (possibly
infinitely many) points over $\mathbb{R}^n$ (or, more generally, over a Polish space). This process is called determinantal because every point correlation function can be expressed as a Fredholm determinant of a locally trace class operator $\hat{\sigma}$ over a Hilbert space (in particular $\hat{\sigma}:L_2(\mathbb{R}^n)\rightarrow \mathcal{K}$, with $\mathcal{K}\subset L_2(\mathbb{R}^n)$ and $\dim \mathcal{K} = N$) \cite{aS}. Such a property is called the \emph{determinantal property}. Locally trace class operators are operators whose trace may be infinite but becomes finite when they are restricted to suitable subspaces. Clearly any trace class operator can be considered as a locally trace class operator. Thus, given a quantum system described by a Hilbert space $\mathcal{H}$ and a trace class operator $\hat{\rho}$, we may associate to it a determinantal random point field. Another similarity between these mathematical objects and quantum mechanics is that they too can be described using the language of second quantisation. Finally, the unitary evolution of quantum mechanics preserves the determinantal property.
\item[b)]\emph{$L_2(\mathbb{R}^3)$ for a particle.} If we assume that the space where the particles live is a random distribution of points over $\mathbb{R}^3$ described by a determinantal random point field, then we may justify the postulate of quantum mechanics which tells us that $L_2(\mathbb{R}^3)$ is the Hilbert space for a single particle.
\item[c)]\emph{Position and momentum operator and Galilean group.} From the Stone-von Neumann theorem we may justify that the position and momentum operators in
quantum mechanics are defined as
\begin{equation*}
\hat{X}_i \psi(x) = x_i \psi(x) \mspace{50mu} \hat{P}_i\psi(x) = - i\frac{\partial}{\partial x_i}\psi(x)
\end{equation*}
for $\psi(x) \in \mathcal{S}(\mathbb{R}^n)$. In quantum mechanics, the classical relation between position and momentum for a point-like particle ($p = m\dot{x}$) does not seem a priori valid. Nevertheless, from the Galilean group one can prove that~\cite{vM}
\begin{equation*}
\langle P_t \rangle = m \frac{d}{dt} \langle X_t \rangle
\end{equation*}
Assuming, because of the limited accuracy of any clock, that time is discrete (hence the limit is replaced by an infimum), and using the Wigner quasi-probability distribution, we can write
\begin{equation*}
P_{t} = m \inf_{\delta t} \frac{X_{t + \delta t} -X_{t}}{\delta t} \mspace{50mu} dW\mbox{-a.s.}
\end{equation*}
Setting $\delta t =1$ ($\delta t$ is our unit of time) and $t = N\delta t$, we can write $P_N = m (X_{N+1} - X_N) = mV_N$. This consideration justifies, at a qualitative level, the interest in the two random variables analysed in these notes.
\end{enumerate}
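The determinantal property mentioned in point a) can be checked directly on a finite ground set. The sketch below (a self-contained numerical check, not part of the model; all names are ours) builds a rank-$N$ projection kernel $K$ on $M$ points, assigns to each $N$-subset $S$ the probability $\det K_S$, and verifies two standard facts: these probabilities sum to one, and the one-point correlation equals the diagonal of $K$:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(0)

# A projection kernel K of rank N on a ground set of M points defines a
# determinantal point process with exactly N points, where the probability
# of the configuration S (|S| = N) is det(K_S), the principal minor on S.
M, N = 6, 3
A = rng.standard_normal((M, N))
Q, _ = np.linalg.qr(A)        # orthonormal columns spanning a rank-N subspace
K = Q @ Q.T                   # projection kernel

subsets = list(combinations(range(M), N))
probs = {S: np.linalg.det(K[np.ix_(S, S)]) for S in subsets}

total = sum(probs.values())   # should be 1 (Cauchy-Binet identity)
# One-point correlation: P(point i belongs to the sample) should equal K[i, i].
rho = [sum(p for S, p in probs.items() if i in S) for i in range(M)]
print(round(total, 6))        # 1.0
```

The same determinant structure is what connects these fields to trace class operators on a Hilbert space, as discussed in point a).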
We conclude these notes with a quotation which, in the author's view, captures the spirit of what was written in this last section
\begin{center}
\textgravedbl \emph{Probability is the most important concept in modern science, especially as nobody has the slightest notion what it means} \textacutedbl
\end{center}
\begin{flushright}
(Bertrand Russell in a lecture, 1929)
\end{flushright}
\bibliographystyle{amsalpha}
\section{Introduction}
Reinforcement Learning (RL) is a class of algorithms that, without supervision, can train an agent to interact with an unknown environment. The dynamics of the interaction are often assumed to be a Markov Decision Process (MDP). The key feature of RL is its sole dependence on a set of experiences, which are gathered by interacting with, i.e., \emph{exploring}, the environment.
This makes RL inherently different than classical dynamic programming methods~\cite{puterman} and automatic control approaches, in the sense that it can optimally solve the decision-making problem without any prior knowledge about the MDP model~\cite{sutton}.
This very practical feature has paved the way for applications of RL in economics, engineering, and biology \emph{inter alia}, to solve sequential decision-making problems when no model is available~\cite{ng,ifac,chemistry,lcnfq,deepql,cdc,silver}.
While the existing RL methods deliver good training outcomes, by and large they lack guarantees on what happens \emph{during training}. Existing results rely either on ``soft safety'' or on ``ergodicity'' assumptions. The essence of soft safety is that unsafe states, which are to be avoided, may still be visited regardless of the consequent catastrophic outcome. The ergodicity assumption means that any state is eventually reachable from any other state if a proper policy is followed -- this second assumption allows a (non-episodic) RL agent to explore by simply favouring states that have rarely been visited, even if they are unsafe in practice.
These assumptions might be reasonable for certain applications, such as gaming or virtual environments, but are not affordable for many safety-critical physical systems, which may break before exploration completes. Thus, unsurprisingly, most of the existing exploration methods are impractical in safety-critical scenarios where the aforementioned assumptions do not hold.
\emph{Safe RL} is an active area of research focusing on the training of agents with guarantees on safety~\cite{garcia}.
However, most of these methods minimize the risk that the \emph{trained agent} violates a safety specification, but do not ensure safety of exploration \emph{during training}~\cite{risk1,risk2,risk3,risk4,arxiv,shield,shield2,lyap,lcrl_j}. Recent approaches on this problem~\cite{entropy,barrier_1,barrier_2,knownD,knownD2} are either computationally expensive or require explicit, strong assumptions about the model of agent-environment interactions.
In this work we take a step back from currently used exploration methods and recall that RL is originally inspired by cognitive and behavioural psychology~\cite{psy}. When humans learn to control, they naturally account for what \emph{they expect} to be safe, i.e., (1)~they use their own a-priori prediction of the environment when choosing which behaviours to explore, and (2)~they continuously update their knowledge and expectations using local observations. For example, when novice pilots learn to control a helicopter, they slowly pull the lever until the helicopter slightly lifts off the ground, then quickly land it back down. They will repeat this a few times, gradually increasing the time the helicopter hovers off the ground. At all times, they aim to minimise the likelihood of a disaster by ensuring that a safe landing is possible. In other words, they try to restrict exploration to a locally safe state-action region, which in this work we shall name \emph{safe padding}. As their knowledge of the dynamics of the helicopter in its environment improves, they perform increasingly more sophisticated maneuvers, namely exploring new sequences of state-action pairs. It is interesting to notice that the maneuvers that a fully trained pilot will ultimately perform are initially incompatible with the safety of the learning agent, and as such might be located outside of the \emph{safe padding} initially in use while learning.
Inspired by the cognitive approach to learning outlined above, we propose to equip the RL agent with a limited knowledge of its own dynamics, and with its local perception of safety. \emph{Uncertain dynamics} characterise how the environment reacts to the actions of the agent. Much like a trainee pilot, the agent starts by performing exploratory cautious actions, and gradually, in line with the growing confidence about the environment obtained from observations, the range of acceptably safe actions grows, and the uncertain component of the dynamics becomes known.
Beyond the issue of safe exploration, in the RL literature task satisfaction is usually achieved by hand-engineering appropriate rewards~\cite{garcia}. In this context the difficulty is mapping complex, possibly long-term, sequential tasks to an appropriate reward structure~\cite{reduction}. If done incorrectly,
the outcome of learning might be unexpectedly sub-optimal.
As an extension of ongoing research \cite{lcrl_j,cdc},
in this work we employ temporal logic, and more specifically Linear Temporal Logic (LTL)~\cite{pnueli} as a formal reward shaping technique to specify task-related goals~\cite{clarke}.
We convert a given LTL formula into an automaton that expresses the property~\cite{bible}, then translate the automaton into a state-adaptive reward structure.
Using any off-the-shelf RL algorithm with the obtained reward structure results in policies that maximise the probability of satisfying the given LTL formula.
This framework, which we call Logically-Constrained RL (LCRL \cite{lcrl_j,cdc}), is enhanced here by safe learning, with an architecture named \emph{cautious RL}. Cautious RL is applicable to any standard reward-based RL.
While safety could be as well one of the tasks expressed in the LTL formula, meaning that the given LTL property will hold when deploying the trained agent, safety in the context of this work will be separately accounted for during training by means of safe padding.
Technically, we propose a safe padding in combination with the state-adaptive reward function based on the task automaton over the state-action pairs of the MDP. Using this automatic reward shaping procedure, RL is able to generate a policy that satisfies the given task, expressed as an LTL property, with maximal probability, while the safe padding prevents safety violations during learning. Thus, the method we propose inherits aspects of reward engineering that are standard in RL, and at the same time infuses notions from formal methods that allow guiding the exploration safely, while also certifying the learning outcomes in terms of the probability of satisfying the task expressed as an LTL formula.
The proposed framework is related to, but cannot be reduced to, work on Constrained MDP (CMDP)~\cite{cmdp}, due to its generality and to its inherent structural differences: in this work LTL satisfaction is encoded into the expected return itself, while in CMDP algorithms the original objective is separate from the constraint.
Focusing exclusively on the safety fragment of LTL, the concept of shielding is proposed in~\cite{shield}: the proposed shield is a reactive machine that ensures that the agent remains safe during learning. To express the specification, \cite{shield} uses a DFA and then translates the problem into a safety game. This work has been extended to probabilistic CTL properties in~\cite{shield2}, where a probabilistic model checking technique is used to construct the shield. Unlike this paper, in both \cite{shield,shield2}, the agent has to observe the entire MDP (and opponents) to construct a model of the safety game.
\cite{fulton2,fulton} address safety-critical settings in the context of cyber-physical systems, where the agent has to select a correct model within a heterogeneous set of models in model-based RL: \cite{fulton} first generates a set of feasible models given an initial model and data on runs of the system. With such a set of feasible models, the agent has to learn how to safely identify which model is the most accurate; \cite{fulton3} further employs differential dynamic logic~\cite{ddl}, a first-order logic for specifying and proving properties of hybrid models.
In summary, in this paper we contribute the following:
\begin{enumerate}
\item Cautious RL: a safe exploration scheme for model-free RL. This is applicable to standard reward-based RL, however, we tailor it to the next goals:
\item The use of LTL as task specification for policy synthesis in RL. Automatic reward shaping and task decomposition when the task is highly complex. Bringing (1) and (2) together, we obtain:
\item Prediction of unsafe state-action pairs (safe padding) while learning and consequent limitation of exploration and of policy learning for LTL task satisfaction. The method guarantees asymptotic results.
\end{enumerate}
\section{Background}\label{background}
\subsection{Problem Setup}
\begin{definition} [Markov Decision Process, MDP]\label{def_mdp}
A finite MDP $\mathfrak{M}$ is a six-tuple $(\allowbreak
\mathcal{S},\allowbreak\mathcal{A},\allowbreak s_0,\allowbreak
P,\allowbreak\mathcal{AP},\allowbreak L)$, where $\mathcal{S}$ is a finite set called the state space, $\mathcal{A}$ is a finite set of actions, and $s_0$
is the initial state. $P(\cdot|s,a)\in\mathcal{P}(\mathcal{S})$ is the probability distribution over the next states given that action $a$ has been taken in state $s$, where $\mathcal{P}(\mathcal{S})$ is the set of probability distributions on subsets of $\mathcal{S}$.
$\mathcal{AP}$ is a finite set of atomic propositions, and the labelling
function $L: \mathcal{S} \rightarrow 2^{\mathcal{AP}}$ assigns to each state
$s \in \mathcal{S}$ a set of atomic propositions $L(s) \subseteq
\mathcal{AP}$. $\hfill\square$
\end{definition}
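For concreteness, the MDP of Definition~\ref{def_mdp} can be transcribed as a small data structure. The field names below are ours, and the instance mirrors the two-state MDP used later in the product-MDP example (labels $\{a\}$ and $\{b\}$):

```python
from dataclasses import dataclass
from typing import Callable

State = str
Action = str

@dataclass
class MDP:
    # Finite MDP (S, A, s0, P, AP, L); a minimal sketch, not the paper's code.
    states: set
    actions: set
    s0: State
    P: dict                            # P[(s, a)] -> {s': probability}
    AP: set
    L: Callable[[State], frozenset]    # labelling function, L(s) subset of AP

mdp = MDP(
    states={"s0", "s1"},
    actions={"a1", "a2"},
    s0="s0",
    P={("s0", "a1"): {"s0": 0.1, "s1": 0.9},
       ("s1", "a2"): {"s1": 1.0}},
    AP={"a", "b"},
    L=lambda s: frozenset({"a"}) if s == "s0" else frozenset({"b"}),
)
```

Each entry of `P` is a distribution over successor states, so its values sum to one, matching $P(\cdot|s,a)\in\mathcal{P}(\mathcal{S})$.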
A random variable $R(s,a)\sim\rho(\cdot|s,a)\in\mathcal{P}(\mathbb{R^+})$ can be defined over the MDP $\mathfrak{M}$, representing the immediate reward obtained when action $a$ is taken in a given state $s$ where $\mathcal{P}(\mathbb{R^+})$ is the set of probability distributions on subsets of $\mathbb{R^+}$, and $\rho$ is the reward distribution. One possible realization of the immediate reward is denoted by $r(s,a)$.
\begin{definition}[Stationary Deterministic Policy]
A policy is a rule according to which the agent chooses its action at a given state. More formally, a policy $\pi$ is a mapping from the state space to a distribution in $\mathcal{P}(\mathcal{A})$, where $\mathcal{P}(\mathcal{A})$ is the set of probability distributions on subsets of $\mathcal{A}$. A policy is stationary if $\pi(\cdot|s)\in\mathcal{P}(\mathcal{A})$ does not change over time and it is called a deterministic policy if $\pi(\cdot|s)$ is a degenerate distribution. $\hfill\square$
\end{definition}
An MDP controlled by a policy $\pi$ induces a Markov chain $\mathfrak{M}^\pi$ with transition kernel $P^\pi(\cdot|s)=P(\cdot|s,\pi(s))$, and with reward distribution $\rho^\pi(\cdot|s)=\rho(\cdot|s,\pi(s))$ such that $R^\pi(s)\sim\rho^\pi(\cdot|s)$.
\begin{definition}
[Expected Discounted Return \cite{sutton}] \label{expectedreturn} For any policy~$\pi$ on an MDP $\mathfrak{M}$, the expected discounted return in state $s$ is
\begin{equation}
{V}^\pi_\mathfrak{M}(s)=\mathds{E}^\pi[\sum\limits_{n=0}^{\infty} \gamma^n~ r(s_n,a_n)|s_0=s],
\end{equation}
where $\mathds{E}^\pi[\cdot]$ denotes the expected value by following policy $\pi$, $\gamma$~is the discount factor, and $s_0,a_0,s_1,a_1...$ is the sequence of state-action pairs generated by policy $\pi$. The expected discounted return is often referred to as \emph{value function}. Note that the discount factor~$\gamma$ is a hyper-parameter that has to be tuned. In particular, there is standard work in RL on state-dependent discount factors \cite{discount,discount2,discount3}, which is shown to preserve convergence and optimality guarantees. Similarly, the action-value function is defined as:
\begin{equation}\label{return}
{Q}^\pi_\mathfrak{M}(s,a)=\mathds{E}^\pi[\sum\limits_{n=0}^{\infty} \gamma^n~ r(s_n,a_n)|s_0=s,a_0=a].
\end{equation}
We drop the subscript $\mathfrak{M}$ when it is clear from the context. $\hfill\square$
\end{definition}
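When the induced chain $\mathfrak{M}^\pi$ is known, the value function of Definition~\ref{expectedreturn} can be computed by iterating the fixed-point equation $V = r^\pi + \gamma P^\pi V$. The sketch below (transition and reward numbers are arbitrary toy values, not from the paper) evaluates $V^\pi$ for a two-state chain:

```python
import numpy as np

def policy_evaluation(P_pi, r_pi, gamma=0.9, tol=1e-10):
    # Iterative policy evaluation: V <- r^pi + gamma * P^pi V, a gamma-contraction,
    # so the iterates converge to the unique fixed point V^pi.
    V = np.zeros(len(r_pi))
    while True:
        V_new = r_pi + gamma * P_pi @ V
        if np.max(np.abs(V_new - V)) < tol:
            return V_new
        V = V_new

# Toy chain induced by a fixed policy: state 1 is absorbing with reward 1.
P_pi = np.array([[0.1, 0.9],
                 [0.0, 1.0]])
r_pi = np.array([0.0, 1.0])
V = policy_evaluation(P_pi, r_pi)
print(np.round(V, 4))
```

For the absorbing state, $V^\pi = 1/(1-\gamma) = 10$; the other value follows from $V_0 = 0.9\,(0.1\,V_0 + 0.9\,V_1)$.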
\begin{definition}[Optimal Policy]\label{optimal_pol}
An optimal policy $\pi^*$ is defined as follows:
$$
\pi^*(s)=\arg\sup\limits_{\pi \in \varpi}~ {V}^\pi_\mathfrak{M}(s),
$$
where $\varpi$ is the set of stationary deterministic policies over $\mathcal{S}$. $\hfill\square$
\end{definition}
\begin{theorem}[\cite{puterman,puterman2}]\label{optimal_policy_thm}
In an MDP $\mathfrak{M}$ with a bounded reward function and a finite action space optimal policies are stationary and deterministic.
\end{theorem}
By Theorem \ref{optimal_policy_thm}, as long as the reward function is bounded and the action space is finite, the optimal policy in Definition \ref{optimal_pol} exists. This is the case throughout this work.
\subsection{Linear Temporal Logic}
LTL formulae over a given set of atomic propositions $\mathcal{AP}$ are syntactically defined as \cite{pnueli}
\begin{equation}\label{ltlsyntax}
\varphi::= \mathit{true} ~|~ \alpha ~|~ \varphi \land \varphi ~|~ \neg \varphi ~|~ \bigcirc \varphi ~|~ \varphi ~\mathrm{U}~ \varphi,
\end{equation}
where $\alpha\in \mathcal{AP}$, and the operators $ \bigcirc $ and $ \mathrm{U} $ are called \emph{next} and \emph{until}, respectively. Using the until operator we define two further temporal modalities:
(1)~eventually, $\lozenge \varphi = \mathit{true} ~\mathrm{U}~ \varphi$; and
(2)~always, $\square \varphi = \neg \lozenge \neg \varphi$. An infinite \emph{word} $w$ over the alphabet $2^{\mathcal{AP}}$ in MDP $\mathfrak{M}$ is defined as an infinite sequence $w=l_0 ~l_1 ~l_2 ~l_3 ...\in (2^{\mathcal{AP}})^{\omega}$, where $\omega$ denotes infinite repetition and $l_i\in2^{\mathcal{AP}}$, $\forall i\in\mathbb{N}$. The language $\mathit{Words}(\varphi)=\{w \in (2^{\mathcal{AP}})^\omega ~\mbox{s.t.}~ w \models \varphi\}$ is defined as the set of words that satisfy the LTL formula $\varphi$, where $\models\,\subseteq (2^{\mathcal{AP}})^{\omega}\times\Phi$ is the satisfaction relation and $\Phi$ denotes the set of LTL formulae.
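The grammar \eqref{ltlsyntax} and the two derived modalities translate directly into a small abstract syntax tree (a sketch; the class names are ours). We build the formula used later in the product-MDP example, with $\vee$ encoded through $\neg$ and $\wedge$:

```python
from dataclasses import dataclass

# LTL syntax as a tiny AST; eventually/always are derived exactly as in the text.
@dataclass(frozen=True)
class AP: name: str
@dataclass(frozen=True)
class Not: f: object
@dataclass(frozen=True)
class And:
    f: object
    g: object
@dataclass(frozen=True)
class Next: f: object
@dataclass(frozen=True)
class Until:
    f: object
    g: object

TRUE = AP("true")

def eventually(f):      # <>f  =  true U f
    return Until(TRUE, f)

def always(f):          # []f  =  ~<>~f
    return Not(eventually(Not(f)))

# phi = a and Next( <>[]a  or  <>[]b ), with "or" as ~(~p and ~q)
a, b = AP("a"), AP("b")
disj = Not(And(Not(eventually(always(a))), Not(eventually(always(b)))))
phi = And(a, Next(disj))
```

Because the dataclasses are frozen, structural equality holds, so derived operators expand to the expected core-syntax terms.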
\begin{definition}[Probability of Satisfying an LTL Formula] \label{ltlprobab}
Starting from any state $s$ and following a stationary deterministic policy $\pi$, we denote the probability of satisfying formula $\varphi$ as
$
\mathds{P}(s..^{\mathit{\pi}} \models \varphi),
$
where $s..^{\pi}$ is the collection of all state sequences starting from $ s $, generated under policy $\pi$. $\hfill\square$
\end{definition}
For any LTL property $\varphi$ the set $\mathit{Words}(\varphi)$ can be expressed by an LDBA. An LDBA is a special form of a Generalized B\"uchi Automaton (GBA) \cite{sickert}, defined as follows:
\begin{definition}[Generalized B\"uchi Automaton]
A GBA $\mathfrak{A}=(\allowbreak\mathcal{Q},\allowbreak q_0, \allowbreak\Delta,\allowbreak\Sigma, \allowbreak\mathcal{F})$ is a structure where $\mathcal{Q}$ is its finite set of states, $q_0 \in \mathcal{Q}$ is the initial state, $\Delta: \mathcal{Q} \times \Sigma \rightarrow 2^\mathcal{Q}$ is a transition relation, $\Sigma=2^{\mathcal{AP}}$ is a finite alphabet, and $\mathcal{F}=\{\mathcal{F}_1,...,\mathcal{F}_f\}$ is the set of accepting conditions, where $\mathcal{F}_j \subseteq \mathcal{Q}, 1\leq j\leq f$.~$\hfill\square$
\end{definition}
Let $\Sigma^\omega$ be the set of all infinite words over $\Sigma$. An infinite word $w \in \Sigma^\omega$ is accepted by a GBA $\mathfrak{A}$ if there exists an infinite run $\theta \in\mathcal{Q}^\omega$ starting from $q_0$ where $\theta[i+1] \in\Delta(\theta[i],w[i]),~i \geq 0$ and for each $\mathcal{F}_j \in \mathcal{F}$
\begin{equation} \label{acc}
\mathit{inf}(\theta) \cap \mathcal{F}_j \neq \emptyset,
\end{equation}
where $\mathit{inf}(\theta)$ is the set of states that are visited infinitely often by the run $\theta$.
\begin{definition}[LDBA]\label{ldbadef}
A GBA $\mathfrak{A}=(\allowbreak\mathcal{Q},\allowbreak q_0, \allowbreak\Delta,\allowbreak\Sigma, \allowbreak\mathcal{F})$ is limit-determ\-inistic if $\mathcal{Q}$ can be partitioned into two disjoint sets $\mathcal{Q}=\mathcal{Q}_N \cup \mathcal{Q}_D$ such that~\cite{sickert}:
\begin{itemize}
\item $\Delta(q,\alpha) \subset \mathcal{Q}_D$ and $|\Delta(q,\alpha)|=1$ for every state $q\in\mathcal{Q}_D$ and for every $\alpha \in \Sigma$;
\item for every $\mathcal{F}_j \in \mathcal{F}$, $\mathcal{F}_j \subseteq \mathcal{Q}_D$; and
\item $q_0 \in \mathcal{Q}_N$, and all the transitions from $\mathcal{Q}_N$ to $\mathcal{Q}_D$ are non-deterministic $\varepsilon$-transitions\footnote{An $\varepsilon$-transition allows an automaton to change its state without reading any label.}.~$\hfill\square$
\end{itemize}
\end{definition}
\begin{remark}\label{ldba_remark}
An LDBA is a GBA that has two partitions: one initial ($\mathcal{Q}_N$) and one accepting ($\mathcal{Q}_D$). The accepting part includes all the accepting states and has deterministic transitions. The LTL-to-LDBA construction used in this paper \cite{sickert} results in an automaton with deterministic initial and accepting parts. According to Definition~\ref{ldbadef}, the discussed structure is still an LDBA, though the determinism in the initial part is stronger than that required in the LDBA definition. We explain later why this matters for the proposed algorithm.
\end{remark}
\begin{remark}\label{nonblocking}
At any state $q$ of an LDBA $\mathfrak{A}$, the output of the transition relation $\Delta$ is non-empty, namely all the states of $\mathfrak{A}$ are non-blocking. Further, any letter of $\Sigma$ (i.e., any subset of $\mathcal{AP}$) can be read at any state.~$\hfill\square$
\end{remark}
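As a concrete instance of the LDBA of Definition~\ref{ldbadef}, the automaton used later in the product example (for $\varphi=a\wedge\bigcirc(\lozenge\square a\vee\lozenge\square b)$) can be written out as plain data. The encoding below is our own simplification: a guard is an atomic proposition that must appear in the read label (or `"true"` for any label), and $\varepsilon$-transitions out of the initial part $\mathcal{Q}_N$ are kept separate:

```python
# LDBA of the running example: Q_N = {q0, q1}, Q_D = {q2, q3},
# accepting states q2 and q3, epsilon-jumps from q1 into the accepting part.
Q_N = {"q0", "q1"}
Q_D = {"q2", "q3"}
accepting = {"q2", "q3"}

delta = {
    ("q0", "a"): "q1",      # read a label containing a
    ("q1", "true"): "q1",   # any label keeps us in q1
    ("q2", "a"): "q2",
    ("q3", "b"): "q3",
}
eps = {"q1": {"q2", "q3"}}  # non-deterministic epsilon-moves into Q_D

def step(q, label):
    # One move on a read label (epsilon-moves are chosen separately);
    # returns None when no transition is drawn for this label.
    for (src, guard), dst in delta.items():
        if src == q and (guard == "true" or guard in label):
            return dst
    return None
```

Note that, in line with the LDBA definition, reading a label from the accepting part is deterministic, while the $\varepsilon$-jumps from `q1` are the only non-deterministic choices.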
\section{Cautious Reinforcement Learning with Logical Constraints}\label{lcsec}
\subsection{Logically-guided Reinforcement Learning}
In order to relate the structure of an MDP to that of an LDBA, for now we assume that the MDP graph and its transition probabilities are fully known. This allows us to formally define a synchronised structure that will be key for policy synthesis. This assumption is dropped entirely later, and we stress that policy synthesis can indeed be implemented on-the-fly over unknown MDPs via model-free RL.
\begin{definition} [Product MDP]\label{product_mdp_def}
Given an MDP $\mathfrak{M}=(\allowbreak
\mathcal{S},\allowbreak\mathcal{A},\allowbreak s_0,\allowbreak
P,\allowbreak\mathcal{AP},\allowbreak L)$ and an LDBA $\mathfrak{A}=(\allowbreak\mathcal{Q},\allowbreak q_0, \allowbreak\Delta,\allowbreak\Sigma, \allowbreak\mathcal{F})$ with $\Sigma=2^{\mathcal{AP}}$, the product MDP is defined as $\mathfrak{M}\otimes\mathfrak{A}= \mathfrak{P}=(\mathcal{S}^\otimes,\allowbreak \mathcal{A}^\otimes,\allowbreak s^\otimes_0,P^\otimes,\allowbreak \mathcal{AP}^\otimes,\allowbreak L^\otimes,\allowbreak \mathcal{F}^\otimes)$, where $\mathcal{S}^\otimes = \mathcal{S}\times\mathcal{Q}$, $s^\otimes_0=(s_0,q_0)$, $\mathcal{AP}^\otimes = \mathcal{Q}$, $L^\otimes : \mathcal{S}^\otimes\rightarrow 2^\mathcal{Q}$ such that $L^\otimes(s,q)=\{q\}$ and $\mathcal{F}^\otimes \subseteq {\mathcal{S}^\otimes}$ is the set of accepting states $\mathcal{F}^\otimes=\{\mathcal{F}^\otimes_1,...,\mathcal{F}^\otimes_f\}$, where $\mathcal{F}^\otimes_j=\mathcal{S}\times \mathcal{F}_j$. The transition kernel $P^\otimes(\cdot|s_i^\otimes,a)\in\mathcal{P}(\mathcal{S}^\otimes)$ is such that given the current state $(s_i,q_i)$ and action $a$, the new state $(s_j,q_j)$ is obtained such that $s_j\sim P(\cdot|s_i,a)$ and $q_j\in\Delta(q_i,L(s_j))$. In order to handle $\varepsilon$-transitions in $\mathfrak{A}$ we furthermore need to add the following transitions:
\begin{itemize}
\item for every potential $\varepsilon$-transition to some state $q \in \mathcal{Q}$ we add a corresponding action $\varepsilon_q$ in the product:
$$
\mathcal{A}^\otimes=\mathcal{A}\cup \{\varepsilon_q, q \in \mathcal{Q}\}.
$$
\item The transition probabilities corresponding to $\varepsilon$-transitions are given by
\[P^\otimes((s_j,q_j)|(s_i,q_i),\varepsilon_q) = \left\{
\begin{array}{lr}
1 & s_i=s_j, q_i\xrightarrow{\varepsilon_q} q_j=q,\\
0 & \text{otherwise}.
\end{array}
\right.
\]
\end{itemize}
\end{definition}
\vspace{-4mm}
$\hfill\square$\\*
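The product of Definition~\ref{product_mdp_def} can be executed on-the-fly, without ever building $\mathfrak{P}$ explicitly. The sketch below (toy dictionaries mirroring the example MDP and LDBA of Fig.~\ref{fig:product_mdp_ex}; all names are ours) performs one product transition, treating $\varepsilon$-actions as automaton-only moves taken with probability one:

```python
import random

# Toy encodings of the example: MDP with labels {a}, {b}, and the LDBA of phi.
mdp_P = {("s0", "a1"): {"s0": 0.1, "s1": 0.9},
         ("s1", "a2"): {"s1": 1.0}}
mdp_L = {"s0": {"a"}, "s1": {"b"}}
delta = {("q0", "a"): "q1", ("q1", "true"): "q1",
         ("q2", "a"): "q2", ("q3", "b"): "q3"}
eps = {"q1": {"q2", "q3"}}

def delta_step(q, label):
    # Deterministic automaton move on a read label.
    for (src, guard), dst in delta.items():
        if src == q and (guard == "true" or guard in label):
            return dst
    return None

def product_step(s, q, action, rng=random.Random(0)):
    # One on-the-fly transition of the product MDP: epsilon-actions change
    # only the automaton component; ordinary actions sample s' ~ P(.|s, a)
    # and then advance q on the label L(s').
    if action.startswith("eps_"):
        q_target = action[len("eps_"):]
        assert q_target in eps.get(q, set()), "epsilon-move not available here"
        return s, q_target
    dist = mdp_P[(s, action)]
    s_next = rng.choices(list(dist), weights=list(dist.values()))[0]
    return s_next, delta_step(q, mdp_L[s_next])

print(product_step("s1", "q1", "eps_q3"))   # -> ('s1', 'q3')
print(product_step("s1", "q3", "a2"))       # -> ('s1', 'q3')
```

The two printed transitions reproduce the $\varepsilon$-jump $(s_1,q_1)\rightarrow(s_1,q_3)$ and the accepting self-loop on $(s_1,q_3)$ of Fig.~\ref{fig:product_mdp_ex}(c).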
\setlength{\fboxrule}{0pt}
\begin{figure}[!t]\centering
\subfloat[]{{
\scalebox{.78}{
\begin{tikzpicture}[shorten >=1pt,node distance=1.5cm,on grid,auto]
\node[state,initial] (q_0) {$q_0$};
\node[state] (q_1) [right=of q_0] {$q_1$};
\node[state,accepting] (q_2) [above right=of q_1] {$q_2$};
\node[state,accepting] (q_3) [below right=of q_1] {$q_3$};
\path[->]
(q_0) edge node {$a$} (q_1)
(q_1) edge node {$\varepsilon$} (q_2)
(q_1) edge [loop above] node {$\mathit{true}$} (q_1)
(q_1) edge [below] node {$\varepsilon~~~$} (q_3)
(q_2) edge [loop right] node {$a$} (q_2)
(q_3) edge [loop right] node {$b$} (q_3);
\end{tikzpicture}
}
}}
\subfloat[]{{
\scalebox{.78}{
\begin{tikzpicture}[shorten >=1pt,node distance=2cm,on grid,auto] \node[state,initial,label=below:$\{a\}$] (s_0) {$s_0$};
\node[state,label=below:$\{b\}$] (s_1) [right=of s_0]{$s_1$};
\draw[->] (s_0) -- node [below] {$0.9$} (s_1);
\draw[->] (s_0) [out=30,in=80,loop] to coordinate[pos=0.2](aa) node [above] {$\fbox{0.1}$} (s_0);
\draw[->] (s_1) [out=30,in=80,loop] to coordinate[pos=0.2] node [above] {\fbox{$\textcolor{orange}{a_2}:1$}} (s_1);
\path pic[draw, angle radius=8mm,"$\textcolor{orange}{a_1}$",angle eccentricity=1.4] {angle = s_1--s_0--aa};
\end{tikzpicture}
}
}}\\
\subfloat[]{
\scalebox{.65}{
\begin{tikzpicture}[shorten >=1pt,node distance=2.7cm,on grid,auto]
\node[state,initial] (q_0) {$(s_0,q_1)$};
\node[state] (q_1) [right=of q_0] {$(s_1,q_1)$};
\node[state] (q_2) [right=of q_1] {$(s_1,q_2)$};
\node[state,accepting] (q_3) [below right=of q_1] {$(s_1,q_3)$};
\draw[->] (q_0) -- node [below] {$0.9$} (q_1);
\draw[->] (q_0) [out=30,in=80,loop] to coordinate[pos=0.2](aa) node [above] {$\fbox{0.1}$} (q_0);
\draw[->] (q_1) [out=30,in=80,loop] to coordinate[pos=0.2] node [above] {\fbox{$\textcolor{orange}{a_2}:1$}} (q_1);
\draw[->] (q_1) -- node [below] {$\textcolor{orange}{\varepsilon_{q_2}}$} (q_2);
\draw[->] (q_1) -- node [below] {$\textcolor{orange}{\varepsilon_{q_3}}~~$} (q_3);
\path pic[draw, angle radius=12mm,"$\textcolor{orange}{a_1}$",angle eccentricity=1.4] {angle = q_1--q_0--aa};
\draw[->] (q_3) [out=335,in=25,loop] to coordinate[pos=0.2] node [right] {$\textcolor{orange}{a_2}:1$} (q_3);
\end{tikzpicture}}
}\vspace*{-0.25cm}
\caption{Example of product MDP: (a) the LDBA for $\varphi=a\wedge\bigcirc(\lozenge\square a\vee\lozenge\square b)$, (b) an instance MDP, and (c) the product according to Def. \ref{product_mdp_def}.}
\label{fig:product_mdp_ex}
\end{figure}
An example of a product MDP,
as per Definition~\ref{product_mdp_def},
is given in Fig.~\ref{fig:product_mdp_ex} for an instance of an MDP and for an LDBA generated from the LTL formula
$$
\varphi=a\wedge\bigcirc(\lozenge\square a\vee\lozenge\square b).
$$
Next, we propose a state-adaptive reward function based on the accepting condition of the automaton, so that maximisation of the expected cumulative reward (to be attained via RL) implies the maximisation of the satisfaction probability for the LTL formula (Definition \ref{ltlprobab}).
\subsection{State-adaptive Reward}
Before introducing the state-adaptive reward shaping scheme
we need to provide a few definitions.
\begin{definition}[Non-accepting Sink Component]
A non-accepting sink component of an automaton, in this case an LDBA, $\mathfrak{A}=(\allowbreak\mathcal{Q},\allowbreak q_0, \allowbreak\Delta,\allowbreak\Sigma, \allowbreak\mathcal{F})$ is a directed graph induced by a set of states $ \mathcal{Q}_\mathit{sink} \subset\mathcal{Q}$ such that (1) the graph is strongly connected; (2) it does not include all accepting sets $ \mathcal{F}_k,~k=1,...,f $; and (3) there exists no other strongly connected set $ \mathcal{Q}_\mathit{sink}' \subset \mathcal{Q},~\mathcal{Q}_\mathit{sink}'\neq \mathcal{Q}_\mathit{sink} $ such that $ \mathcal{Q}_\mathit{sink} \subset \mathcal{Q}_\mathit{sink}' $. We~denote the union of all non-accepting sink components as $\mathcal{Q}_\mathit{sinks}$.~$\hfill\square$
\end{definition}
\begin{remark}\label{unsafeness}
Note that after taking a transition in the automaton that takes us to $\mathcal{Q}_\mathit{sinks}$, it is not possible to satisfy the associated LTL property anymore, namely the probability of LTL satisfaction becomes zero under any policy. Identifying $\mathcal{Q}_\mathit{sinks}$ allows the agent to predict immediate labels that lead to a violation of the property. Thus, transitions $q \xrightarrow{\alpha\in\Sigma} q'$ to $\mathcal{Q}_\mathit{sinks}$ in the automaton are denoted by $\Delta_\mathit{sinks}$.~\hfill $\square$
\end{remark}
\begin{definition}
[Accepting Frontier Function]\label{frontier} Given an LDBA $\mathfrak{A} \allowbreak =(\allowbreak\mathcal{Q},\allowbreak q_0, \allowbreak\Delta,\allowbreak\Sigma, \allowbreak\mathcal{F})$, we define the accepting frontier function $ AF:\mathcal{Q}\times 2^{\mathcal{Q}}\rightarrow2^\mathcal{Q} $, which executes the following operation over any given set $ \mathds{F}\in 2^{\mathcal{Q}}$:
\begin{equation}\label{acc_function}
AF(q,\mathds{F})=\left\{
\begin{array}{l@{\hspace{0.2cm}:\hspace{0.2cm}}l}
\mathds{F}_{\setminus \mathcal{F}_j}~~~ & (q \in \mathcal{F}_j) \wedge (\mathds{F}\neq \mathcal{F}_j)\\
\bigcup\limits_{k=1}^{f}{\mathcal{F}_k} _{\setminus \mathcal{F}_j} & (q \in \mathcal{F}_j) \wedge (\mathds{F}=\mathcal{F}_j).
\end{array}
\right.
\end{equation}
\end{definition}
\vspace{-4mm}
$\hfill\square$\\*
Now assume that the agent is at state $ s^\otimes=(s,q) $, takes action $ a $ and observes the subsequent state $ {s^\otimes}'=(s',q') $. Note that since both the initial and accepting parts of the LDBA are deterministic, $q'$ can be obtained on-the-fly\footnote{There is no need to \emph{explicitly build} the product MDP and to store all its states in memory. The automaton transitions can be executed on-the-fly, as the agent reads the labels of the MDP states.}. The immediate reward is a scalar value, determined according to the following rule:
\begin{equation}\label{thereward}
\begin{aligned}
R(s^\otimes,a) = \left\{
\begin{array}{lr}
r_p & \text{if } q' \in \mathds{A},~{s^\otimes}'=(s',q'),\\
r_n & \text{otherwise}.
\end{array}
\right.
\end{aligned}
\end{equation}
Here, $r_p>0$ is a positive reward and $r_n=0$ is a neutral reward. The set $ \mathds{A} \in 2^{\mathcal{Q}}$ is called the \emph{accepting frontier set}, and it is initialized as the family of all accepting sets, i.e.,
\begin{equation}\label{eq:initA}
\mathds{A}=\{\mathcal{F}_k\}_{k=1}^{f}.
\end{equation}
The accepting frontier set is updated on-the-fly every time a set $\mathcal{F}_j$ is visited as $\mathds{A}\leftarrow AF(q',\mathds{A})$
where $AF(q',\mathds{A})$ is the accepting frontier function defined before.
In short, after the initialisation $\mathds{A}=\{\mathcal{F}_k\}_{k=1}^{f}$, the accepting frontier function $AF$ always excludes from $\mathds{A}$ those accepting sets that have been visited or are being visited, unless only one accepting set remains, in which case the accepting frontier $ \mathds{A} $ is reset, as per the second condition of \eqref{acc_function}. Thus, intuitively, the set $ \mathds{A} $ always contains those accepting states that ought to be visited at any given time: in this sense the reward function is adapted to the accepting condition of the LDBA. The above reward assignment guides the agent to visit the accepting sets infinitely often and, consequently, to satisfy the given LTL property~$\varphi$, as per \eqref{acc}.
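For illustration, the accepting frontier update of Definition~\ref{frontier} and the reward rule of \eqref{thereward} can be sketched in Python as follows. This is an illustrative sketch only, not the implementation used in the experiments; here the frontier $\mathds{A}$ is represented as a plain set of automaton states, i.e., the union of the remaining accepting sets:

```python
def accepting_frontier(q, frontier, acc_sets):
    """AF of Eq. (acc_function): remove from the frontier the accepting
    set F_j containing q; if the frontier coincides with F_j (it is the
    last remaining set), reset it to the union of all accepting sets
    minus F_j."""
    for F_j in acc_sets:
        if q in F_j:
            if frontier != F_j:
                return frontier - F_j
            return set().union(*acc_sets) - F_j
    return frontier  # q is not an accepting state: frontier unchanged

def reward(q_next, frontier, r_p=1.0, r_n=0.0):
    """State-adaptive reward of Eq. (thereward): positive reward r_p
    whenever the automaton moves into the accepting frontier set."""
    return r_p if q_next in frontier else r_n
```

A run that keeps alternating between the accepting sets keeps collecting $r_p$, which is what steers the agent towards visiting all accepting sets infinitely often.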
As in~\cite{lcrl_j,cdc}, we will argue that we can learn an optimal policy generating traces that satisfy the given property~$\varphi$ with maximum probability.
\begin{remark}
As in Definition \ref{ldbadef} and Remark \ref{ldba_remark}, the automaton transitions can be executed by reading the labels \emph{only}, which makes the agent aware of the automaton state without explicitly constructing the product MDP. The transitions in the automaton can be executed on-the-fly as the agent reads the labels of the MDP states, without knowledge of the model structure or the transition probabilities (or their product). As such, our algorithm is implementable in a fully model free manner. ~\hfill $\square$
\end{remark}
\subsection{Safe Padding for Exploration}
Ensuring safe exploration is critical when RL is employed to generate control policies in situations when learning from failure is unacceptable or extremely expensive, as in safety-critical autonomy for instance. We call this problem \emph{safe policy synthesis}.
We propose a \emph{safe padding} for the agent by leveraging the agent's limited knowledge of its own dynamics and its local perception of safety. The agent thus avoids violating the safety requirement (up to some probability level) while learning the optimal policy for task satisfaction.
\begin{problem}[Safe Policy Synthesis]\label{problem_def}
Given an unknown black-box MDP $\mathfrak{M}$ and an LTL property~$\varphi$, a learning agent attains the following: (1) it synthesises an optimal policy $\pi^*$ via RL such that the induced Markov chain $\mathfrak{M}^{\pi^*}$ satisfies the LTL property with maximum possible probability; and (2) it does not violate a safety requirement during learning. We assume that the transition probability function $P$ of the MDP is only partly known. Technically, we assume that (i) the agent acquires prior knowledge about its own transition kernel (dynamics) $P_a:\mathcal{S}\times\mathcal{A}\times\mathcal{S}\rightarrow [0,1]$, which might not be accurate in the environment $\mathfrak{M}$ in which the agent operates. We also assume that (ii) the agent has limited observability of the labelling function $L$ in Definition~\ref{def_mdp} \emph{only}: without knowing the full structure of the MDP, the agent is able to observe the labels of the surrounding states up to some distance from its current position.~$\hfill\square$
\end{problem}
Let us assume a starting belief about the agent transition kernel $P_a$, to be encoded as Dirichlet distributions
\cite{pac_littman,mle} via two functions $ \Psi: \mathcal{S} \times \mathcal{A} \rightarrow \mathds{N}$ and $ \psi: \mathcal{S} \times \mathcal{A} \times \mathcal{S} \rightarrow \mathds{N} $. Function $ \psi(s,a,{s'}) $ represents the number of times the agent executes action $ a $ in state~$s$, thereafter moving to state $ {s}' $, and $ \Psi(s,a)=\sum_{{s}' \in \mathcal{S}} \psi(s,a,{s}')$. The function $ \Psi(s,a) $ is initialised to be one for every state-action pair, reflecting the fact that at any given state it is possible to take any action, and also avoiding division by zero; the function $ \psi(s,a,{s'}) $ is initialised to zero. Once the transition $(s,a,{s'})$ is taken for the first time, $\Psi(s,a)~\leftarrow~2$, so $\psi(s,a,{s'})$ has to be incremented to $2$ to reflect the correct belief ${P}_a(s,a,{s'})=1$ (Algorithm~\ref{alg:sparl}, lines 17--23).
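The count-based belief just described can be sketched as follows. This is a minimal illustration under the stated initialisation; the class name is ours:

```python
from collections import defaultdict

class TransitionBelief:
    """Count-based belief over the agent's own dynamics P_a.
    Psi(s,a) starts at 1 (any action deemed possible, and division by
    zero is avoided); psi(s,a,s') starts at 0. On the first observed
    transition (s,a,s'), Psi becomes 2 and psi jumps to 2 so that the
    belief P_a(s,a,s') = 1 is reflected correctly."""
    def __init__(self):
        self.Psi = defaultdict(lambda: 1)
        self.psi = defaultdict(int)

    def update(self, s, a, s_next):
        self.Psi[(s, a)] += 1
        if self.Psi[(s, a)] == 2:      # first time (s,a) is executed
            self.psi[(s, a, s_next)] = 2
        else:
            self.psi[(s, a, s_next)] += 1

    def P(self, s, a, s_next):
        """MLE of the mean transition probability, Eq. (uk3)."""
        return self.psi[(s, a, s_next)] / self.Psi[(s, a)]
```

After the first observation the belief is deterministic, $P_a(s,a,s')=1$, and subsequent observations smooth it towards the empirical frequencies.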
\medskip
The \emph{safe padding} is a subset of the state-action space of the MDP that the agent considers safe to explore.
As the agent explores and learns, the safe padding slowly expands, much as, in the flight control example, a pilot gradually expands their comfort zone while learning how to control the helicopter. The expansion rate of the safe padding depends on how many times a particular state-action pair has been visited and on the corresponding update of the transition kernel~$P_a$. The expansion mechanism and kernel update are explained shortly.
Let us recall that we have assumed that the agent has a limited observation range over the labels of the surrounding states (as per Problem \ref{problem_def}). Assume that the observation radius is $r_o\geq1$, meaning that the agent can \emph{only} see the labels of states that have distance at most $r_o$ over the MDP graph, i.e.
$$
O(s)=\{s' \in \mathcal{S}:~d(s,s')\leq r_o\},
$$
where $O(s)\subset\mathcal{S}$ is the set of states whose labels are visible at the agent's current state $s$, and $d(s,s')$ is the length of the shortest directed path from $s$ to $s'$. Note that the agent can only see state labels,
and has to rely on its current knowledge of the dynamics, namely $P_a$. Let the current state of the agent in the product MDP be $s^\otimes=(s,q)$. Define
\begin{equation}
O_\mathit{safe}(s)=\{x \in O(s),~q\xrightarrow{L(x)}q'\notin\Delta_\mathit{sinks}\},
\end{equation}
as the set of safe states. Given the observation radius and the current belief of the agent about its own transition dynamics $P_a$, the agent performs a local, finite-horizon Bellman update over the observation area $O(s)$. For each state $x\in O_\mathit{safe}(s)$ and a horizon $H$ the Bellman update is performed $H$ times as follows~\cite{NDP}:
\begin{align}\label{uk}
\begin{aligned}
& u_k(x)=\min\limits_{a\in\mathcal{A}}\sum_{{x}'} P_a(x,a,{x}') u_{k+1}({x}'),~k=H-1,...,0\\
& u_H(x)=\mathds{1}_{O_\mathit{safe}}(x),
\end{aligned}
\end{align}
where $x'$ is the subsequent state after taking action $a$ in $x$, $u_k:\mathcal{S}\rightarrow[0,1]$ is a local value function at time step $k$, and $H$ is initialized as $H=r_o$. The value $u_H(x)$ is initialised as $1$ if $x \in O_\mathit{safe}(s)$, and as $0$ otherwise.
With this initialisation, $u_0$ represents the agent's estimate of the minimum probability of staying in $O_\mathit{safe}$ within $H$ steps, i.e.\ $u_0=\mathds{P}_{\min}(\square^{\leq H}~ O_\mathit{safe})$. Notice that this estimate is indeed pessimistic and conservative.
Hence, with a fixed horizon $H$ the agent is able to calculate the maximum probability of violating the LTL property by picking action $a$ at state $s^\otimes$ in $H$ steps:
\begin{equation}\label{uk2}
U_H(s^\otimes,a)=1-\sum_{{s}'} P_a(s,a,{s}') u_{0}({s}'),
\end{equation}
where $s^\otimes=(s,q)$. As assumed in Problem~\ref{problem_def}, the agent dynamics $P_a$ is then updated considering the Maximum Likelihood Estimation (MLE) for the mean ${P}_a(s,a,{s'})$~\cite{pac_littman,mle} as:
\begin{equation}\label{uk3}
{P}_a(s,a,{s'})\leftarrow\dfrac{\psi(s,a,{s}')}{\Psi(s,a)}.
\end{equation}
Here, $ \psi(s,a,{s'}) $ represents the number of times the agent executes action $ a $ in state $ s $, thereafter moving to state $ {s}' $, whereas $ \Psi(s,a)=\sum_{{s}' \in \mathcal{S}} \psi(s,a,{s}')$. Note that $\psi$ and $\Psi$ (and consequently $P_a$) are functions of the MDP state and action spaces, not of the product MDP, since they capture the agent dynamics over the original MDP, which remains the same regardless of the current automaton state $q$. Hence, the RHS of \eqref{uk2} only depends on state $s$ and action~$a$, and not the automaton state~$q$.
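The local backup \eqref{uk} and the violation bound \eqref{uk2} can be sketched as follows. This is an illustrative sketch, not the implementation used in the experiments; \texttt{P} is assumed to be a callable returning the current belief $P_a(x,a,x')$, and states outside $O_\mathit{safe}$ keep value zero throughout:

```python
def local_safety_values(O_safe, states, actions, P, H):
    """Finite-horizon Bellman backup of Eq. (uk) over the observed area:
    u_0(x) is a pessimistic estimate of the probability of remaining in
    O_safe for the next H steps (minimum over actions at every step)."""
    u = {x: (1.0 if x in O_safe else 0.0) for x in states}  # u_H
    for _ in range(H):                                      # k = H-1, ..., 0
        u_prev, u = u, {}
        for x in states:
            if x not in O_safe:
                u[x] = 0.0
                continue
            u[x] = min(sum(P(x, a, y) * u_prev[y] for y in states)
                       for a in actions)
    return u                                                # u_0

def violation_bound(s, a, states, P, u0):
    """U_H(s, a) of Eq. (uk2): bound on the probability of violating the
    property within H steps when taking action a at the MDP part of s."""
    return 1.0 - sum(P(s, a, y) * u0[y] for y in states)
```

Since the backup runs only over the observation area and the minimum is taken over actions, the resulting bound errs on the side of caution, as noted above.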
\begin{remark}
Note that for each state $s$, the Bellman updates in \eqref{uk} are performed only over $O_\mathit{safe}(s)$. Recall that the set $O_\mathit{safe}(s)$ is the safe subset within the bounded observation area and thus, the computational burden of \eqref{uk} is limited.~\hfill $\square$
\end{remark}
\subsection{Safe Policy Synthesis with Logical Constraints}
At this point we bring together the use of the safe padding with the constrained learning architecture generating policies that satisfy the given LTL formula.
In order to pick, at each state, an action that is both safe and as close to optimal as possible, we propose a \emph{double learner} architecture, as explained in the following.
The first part is an \emph{optimistic} learner that employs Q-learning (QL) \cite{watkins} to maximize the expected cumulative return as in \eqref{return}. For each state $s^\otimes \in \mathcal{S}^\otimes$ and for any action $a \in \mathcal{A}^\otimes$, QL assigns a quantitative value $Q:\mathcal{S}^\otimes\times\mathcal{A}^\otimes\rightarrow \mathbb{R}^+$, which is initialized with an arbitrary and finite value over all state-action pairs. The Q-function is updated by the following rule when the agent takes action $ a $ at state $ s^\otimes $:
\begin{equation}\label{ql_update_rule}
\scalebox{0.95}{$
Q(s^\otimes,a) \leftarrow Q(s^\otimes,a)+\mu[R(s^\otimes,a)+\gamma \max\limits_{a' \in \mathcal{A}^\otimes}(Q({s^\otimes}',a'))-Q(s^\otimes,a)],$}
\end{equation}
where $ Q(s^\otimes,a) $ is the Q-value corresponding to state-action $ (s^\otimes,a) $, $ 0<\mu\leq 1 $ is called learning rate or step size, $ R(s^\otimes,a) $ is the reward obtained for performing action $a$ in state $s^\otimes$, $0\leq\gamma\leq 1$ is the discount factor, and ${s^\otimes}'$ is the state obtained after performing action~$a$. The Q-function for the rest of the state-action pairs remains unchanged.
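A single backup of rule \eqref{ql_update_rule} takes the familiar form below. This is a sketch only; \texttt{Q} is assumed to be a dictionary over state-action pairs of the product MDP:

```python
def q_update(Q, s, a, r, s_next, actions, mu=0.85, gamma=0.9):
    """One Q-learning backup:
    Q(s,a) <- Q(s,a) + mu * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += mu * (r + gamma * best_next - Q[(s, a)])
```

All other entries of \texttt{Q} are left unchanged, exactly as stated for the rule above.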
\begin{algorithm2e}[!t]
\DontPrintSemicolon
\SetKw{return}{return}
\SetKwRepeat{Do}{do}{while}
\SetKwFunction{terminate}{episode$\_$terminate}
\SetKwFor{terminatedef}{episode$\_$terminate()}{}{}
\SetKwData{conflict}{conflict}
\SetKwData{safe}{safe}
\SetKwData{sat}{sat}
\SetKwData{unsafe}{unsafe}
\SetKwData{unknown}{unknown}
\SetKwData{true}{true}
\SetKwData{false}{false}
\SetKwInOut{Input}{input}
\SetKwInOut{Output}{output}
\SetKwFor{Loop}{Loop}{}{}
\SetKw{KwNot}{not}
\begin{small}
\Input{LTL specification, $ \textit{it\_threshold} $, $ \gamma $, $ \mu $, $r_o$, $p_\mathit{critical}$, $P_a$}
\Output{$\pi^*$}
convert the desired LTL property to an equivalent LDBA $\mathfrak{A}$\;
initialize $\mathds{A}=\{\mathcal{F}_k\}_{k=1}^{f}$\;
initialize $\kappa=1,~\forall s \in\mathcal{S}$\;
initialize horizon $H=r_o,~\forall s \in\mathcal{S}$\;
initialize $Q: \mathcal{S}^\otimes \times \mathcal{A}^\otimes \rightarrow \mathbb{R}^+$\;
initialize $\textit{episode-number}:=0$\;
initialize $\textit{iteration-number}:=0$\;
\While{$Q$ is not converged}
{
$\textit{episode-number}\leftarrow\textit{episode-number}+1$\;
$s^\otimes=(s_0,q_0)$\;
\While{$ (q \notin \mathcal{Q}_\mathit{sinks}:~s^\otimes=(s,q))~ \wedge ~ (\textit{iteration-number}<\text{it\_threshold})$}
{
$\textit{iteration-number}\leftarrow\textit{iteration-number}+1$\;
$\#$ \textbf{pessimistic learner}\;
~~~calculate $U_H(s^\otimes,a)$ using $P_a$ as in \eqref{uk2}\;
~~~generate $\mathcal{A}^H_p(s^\otimes)$ as in \eqref{eq:a_h} \;
~~~choose $a_*=\argmax_{a \in\mathcal{A}_p^H[1:\kappa]}~Q(s^\otimes,a)- r_p U_H(s^\otimes,a)$ \;
~~~$ \Psi(s^\otimes,a_*)\leftarrow \Psi(s^\otimes,a_*)+1$\;
~~~execute action $a_*$ and observe the next state $s^\otimes_*$\;
~~~if $\Psi(s^\otimes,a_*)=2$ then\;
~~~\vrule~~~{ $\psi(s^\otimes,a_*,{s^\otimes_*})=2 $\;
}
~~~else\;
~~~\vrule~~~{ $\psi(s^\otimes,a_*,{s^\otimes_*})\leftarrow\psi(s^\otimes,a_*,{s^\otimes_*})+1$\;
}
~~~end\;
~~~update $P_a(s,a_*,{s_*})$ as in \eqref{uk3}\;
~~~update $H(s)$\;
~~~update $\kappa(s)$\;
$\#$ \textbf{optimistic learner}\;
~~~receive the reward $R({s^\otimes},a_*)$\;
~~~$\mathds{A}\leftarrow AF(q_*,\mathds{A})$, where $s^\otimes_*=(s_*,q_*)$\;
~~~$Q({s^\otimes},a_*)\leftarrow Q({s^\otimes},a_*)+\mu[R({s^\otimes},a_*)-Q({s^\otimes},a_*)+\gamma \max_{a'}(Q(s^\otimes_*,a'))]$\;
~~~$s^\otimes=s^\otimes_*$\;
}
}
\end{small}
\caption{Cautious RL}
\label{alg:sparl}
\end{algorithm2e}
Under mild assumptions over the learning rate, for finite-state and -action spaces QL converges to a unique limit, call it $Q^*$ \cite{watkins}. Once QL converges, the optimal policy $\pi^*: \mathcal{S}^\otimes \rightarrow \mathcal{A}^\otimes$ for $\mathfrak{P}$ can be generated by selecting the action that yields the highest $Q^*$, i.e.,
$$
\pi^*(s^\otimes)=\arg\max\limits_{a \in \mathcal{A}^\otimes}~Q^*(s^\otimes,a).
$$
It has been shown~\cite{lcrl_j,cdc} that the optimal policy $\pi^*$ generates traces that satisfy the given property~$\varphi$ with maximum probability.
Of course, adhering to the optimistic learner's policy by no means guarantees that the agent stays safe \emph{during} exploration. This is where the second part of the architecture, the \emph{pessimistic} learner, is needed: we exploit the concept of cautious learning and create a safe padding within which the agent can explore safely. The pessimistic learner locally calculates $U_H(s^\otimes,a),\forall a \in \mathcal{A}^\otimes$ for a selected horizon $H$ at the current state $s^\otimes$, and then outputs a set of permissive (safe) actions for the optimistic learner. We define a hyper-parameter $p_\mathit{critical}$, called the \emph{critical probability}, to select actions $a\in\mathcal{A}^\otimes$. This is the probability that is considered critically risky (unsafe) and is fixed prior to learning: any action $a$ at state $s^\otimes$ with $U_H(s^\otimes,a)\geq p_\mathit{critical}$ is considered critical and has to be avoided. Accordingly, we introduce
\begin{equation}\label{eq:a_h}
\mathcal{A}_p^H(s^\otimes)=\{a\in\mathcal{A}^\otimes:U_H(s^\otimes,a)<p_\mathit{critical}\}.
\end{equation}
This set is sorted over actions such that the first element has the lowest $U_H(s^\otimes,a)$ -- with a slight abuse of notation we write $\mathcal{A}_p^H[k]$ for the $k$-th element.
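The construction of the sorted permissive set \eqref{eq:a_h} is straightforward; the sketch below assumes \texttt{U\_H} is a callable returning the $H$-step violation bound:

```python
def permissive_actions(s_prod, actions, U_H, p_critical):
    """A_p^H(s) of Eq. (eq:a_h): actions whose H-step violation bound is
    below the critical probability, sorted by increasing U_H."""
    safe = [a for a in actions if U_H(s_prod, a) < p_critical]
    return sorted(safe, key=lambda a: U_H(s_prod, a))
```

The first element is the least risky permissive action, matching the indexing convention $\mathcal{A}_p^H[k]$ used below.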
\begin{figure}[!t]
\centering
\vspace{-6mm}
\scalebox{0.88}[1]{
\hspace{-5.5mm}\subfloat[]{{\includegraphics[width=0.50\linewidth]{bridge_a.png}}}}%
\subfloat[]{{\includegraphics[width=0.50\linewidth]{value.jpg}}}%
\qquad
\subfloat[]{{\includegraphics[width=0.40\linewidth]{safe_padding_1.png}}}\hspace{7mm}
\subfloat[]{{\includegraphics[width=0.40\linewidth]{safe_padding_2.png}}}%
\qquad
\subfloat[]{{\includegraphics[width=0.55\linewidth]{safe_vs.png}}}%
\caption{(a) slippery grid world and agent, represented by an arrow surrounded by an observation area. Labelling is yellow: $\mathit{target}$, red: $\mathit{unsafe}$, blue: $\mathit{safe}$, and green is the initial state $s_0$; (b)~final value function $V(s)$; (c)--(d) state visitation number $v(s)=\sum_{a}\Psi(s,a)$ vs.~time, where the safe padding gradually grows and repels the agent from the unsafe region; (e) number of times the agent reaches the unsafe area (red) until RL converges, with safe padding on vs.~off.}%
\label{fig_bridge}
\end{figure}
At the beginning, the pessimistic learner is conservative and only allows those actions in $\mathcal{A}_p^H$ whose index is less than $\kappa$, i.e., $\mathcal{A}_p^H[1:\kappa]$, where $\kappa$ is a monotonically increasing function of the number of state visitations $v(s)=\sum_{a}\Psi(s,a)$, such that
$$\kappa(v)|_{v=1}=1 \hspace{2mm}\mbox{and}\hspace{2mm} \lim_{v\rightarrow\infty}\kappa(v)=|\mathcal{A}_p^H|.$$
The horizon $H$ follows the opposite rule, namely it is a monotonically decreasing function of $v(s)$ such that initially
$$H(v)|_{v=1}=r_o \hspace{2mm}\mbox{and}\hspace{2mm} \lim_{v\rightarrow\infty} H(v)=1.$$
In other words, when the uncertainty around a state is high, the agent looks ahead as much as possible, i.e.~$H=r_o$. Once the confidence level around that particular state increases then the agent considers riskier decisions by just looking one step ahead, i.e.~$H=1$. This essentially means that the safe padding grows as the uncertainty diminishes (or learning grows). Note that in practice, $\kappa(v)$ and $H(v)$ can be step-wise functions of~$v$, and thus the agent is not necessarily required to visit a state an infinite number of times to get to $H=1$ and $\kappa=|\mathcal{A}_p^H|$.
Nevertheless, an infinite number of state (and action) visits is one of the theoretical assumptions for the asymptotic convergence of QL~\cite{watkins}, which aligns with the proposed rate of change of~$\kappa$ and~$H$. Owing to the time-varying $\kappa$ and $H$, when the agent synthesizes its policy only a subset of $\mathcal{A}^\otimes$ is available, e.g., in the greedy case:
$$
a^*=\argmax_{a \in\mathcal{A}_p^H[1:\kappa]}~Q(s^\otimes,a)-r_p U_H(s^\otimes,a),
$$
where the role of $r_p$ is to balance $Q$ against $U_H$. Note that since QL is an off-policy RL method, the choice of $a^*$ during learning does not affect the convergence of the Q-function \cite{watkins}. As the agent explores, the estimates of $P_a$, and thus of $U_H$, become more and more accurate, and the choice of actions becomes closer to optimal. Starting from its initial state $s_0$, the agent gradually expands the safe padding, i.e., the set of state-action pairs that it considers safe. The expansion occurs by diminishing the effect of the pessimistic learner, i.e., by decreasing the horizon $H$ of $U_H(s^\otimes,a)$ and by increasing $\kappa$ in $\mathcal{A}_p^H(s^\otimes)$, until the effect of the pessimistic learner on decision making is minimal. Essentially, in the limit the role of the pessimistic learner is just to block actions that are critically unsafe according to $p_\mathit{critical}$ (actions that an optimal policy learned without the safe padding would never take anyway, since taking them would be suboptimal). However, the user-defined critical threshold $p_\mathit{critical}$ might affect the final policy of the agent in situations where acting safely is at odds with acting optimally (Fig.~\ref{fig_bridge_2}).
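The schedules for $\kappa$ and $H$ and the resulting greedy selection can be sketched as follows. This is an illustration only: the step-wise schedules below are one admissible choice satisfying the stated boundary conditions, not the ones used in the experiments:

```python
def kappa(v, n_safe):
    """Monotonically increasing schedule: kappa(1) = 1 and it saturates
    at n_safe = |A_p^H| as the visitation count v grows."""
    return min(n_safe, 1 + (v - 1) // 10)

def horizon(v, r_o):
    """Monotonically decreasing schedule: horizon(1) = r_o and it
    saturates at 1 as the visitation count v grows."""
    return max(1, r_o - (v - 1) // 10)

def cautious_greedy(s_prod, A_pH, Q, U_H, r_p, k):
    """Greedy choice restricted to the k lowest-risk permissive actions,
    maximising Q(s,a) - r_p * U_H(s,a)."""
    return max(A_pH[:k], key=lambda a: Q[(s_prod, a)] - r_p * U_H(s_prod, a))
```

As $v$ grows the restriction fades ($\kappa\to|\mathcal{A}_p^H|$, $H\to 1$) and the choice reduces to the usual greedy action, modulo the critically unsafe actions that remain blocked.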
\begin{figure}[!t]
\centering
\vspace{-4mm}
\subfloat[]{\includegraphics[width=0.8\linewidth]{pacman.png}}
\qquad
\subfloat[]{
\scalebox{0.8}{
\begin{tikzpicture}[shorten >=1pt,node distance=2.3cm,on grid,auto]
\node[state,initial] (q_1) {$q_0$};
\node[state] (q_2) [above right=of q_1] {$q_1$};
\node[state] (q_3) [below right=of q_1] {$q_2$};
\node[state] (q_5) [right=of q_1] {$q_4$};
\node[state,accepting] (q_4) [right=of q_5] {$q_3$};
\path[->]
(q_1) edge [loop below] node {$n$} ()
(q_1) edge [bend left=15] node {$f_1$} (q_2)
(q_1) edge [bend right=15] node {$f_2$} (q_3)
(q_2) edge [loop above] node[left,xshift=-0.1cm,yshift=-0.2cm] {$n \vee f_1$} ()
(q_2) edge [bend right=-15] node {$f_2$} (q_4)
(q_3) edge [loop below] node[right,xshift=0.1cm,yshift=0.2cm] {~~$n \vee f_2$} ()
(q_3) edge [bend right=15] node {$f_1$} (q_4)
(q_2) edge node {$g$} (q_5)
(q_3) edge node {$g$} (q_5)
(q_1) edge node {$g$} (q_5)
(q_5) edge [loop right] node {$\mathit{true}$} ()
(q_4) edge [loop above] node {$\mathit{true}$} ();
\end{tikzpicture}}
}\\*
\subfloat[]{\hspace{-3mm}\includegraphics[width=0.55\linewidth]{padding_on.png}}
\scalebox{0.95}[1]{\subfloat[]{\hspace{-3mm}\includegraphics[width=0.593\linewidth]{padding_off.png}}}
\caption{(a) Pacman environment with $|\mathcal{S}| > 80000$. The observation area is the large square around the initial position. The square on the left is labelled as food~1 ($ f_1 $) and the one on the right as food~2 ($ f_2 $); the state of being caught by a ghost is labelled as $ g $, and the rest of the state space is neutral ($ n $); (b) LDBA for the specification (\ref{pacman_p}); (c) number of steps to complete the game with safe padding on (cautious RL), (d) and with safe padding off.}
\label{pacmaninit}
\end{figure}
\section{Experiments}\label{case study}
We consider numerical experiments concerning LTL-constrained safe control policy synthesis problems for a robot in a slippery grid-world and for the classical Pacman game. In both experiments the agent has to experience risky situations in order to achieve its goal. This allows us to evaluate the performance of the proposed safe padding architecture in protecting the agent from entering unsafe states.
For the robot example, let the grid be a $ 20 \times 20 $ square over which the robot moves. In this setup, the robot location is the MDP state $s \in \mathcal{S} $. At each state $s \in \mathcal{S}$ the robot has a set of actions $ \mathcal{A}=\{\mathit{left},\mathit{right},\mathit{up},\mathit{down},\mathit{stay}\}$ by which it can either move to a neighbour state $s' \in \mathcal{S}$ with probability $P(s,a,s'),~a \in \mathcal{A}$, or stay at the state $s$. In this example, we assume that for each action the robot chooses, there is a probability of $85\%$ that the action takes the robot to the intended state and of $15\%$ that it takes the robot to a random state in its neighbourhood, including its current state.
To get to the target state the agent has to cross a bridge (Fig.~\ref{fig_bridge}a) surrounded by unsafe states. The grid is slippery, namely from the agent's perspective, when it takes an action it usually moves to the intended cell, but there is an \emph{unknown} probability that the agent is moved to a random neighbour cell. However, the agent prior belief $P_a$ is that it can always move to the correct state and this is the dynamics known to the agent. The initial state of the agent is bottom left, $\gamma=0.9$, $\mu=0.85$, $p_\mathit{critical}=0.82$, and the observation radius is $r_o=2$. Note that for the sake of exposition, we intentionally picked $p_\mathit{critical}=0.82$ close to the grid-world slipperiness probability of $0.85$ and $r_o$ close to the bridge gap, while in practice when the environment is unknown, $p_\mathit{critical}$ and $r_o$ should be set conservatively.
Similar to the pilot-helicopter example, the final goal of reaching the target is initially conflicting with the agent being safe since crossing the bridge has a high risk of slipping into an unsafe state. Thus, the agent has to slowly try different states while remaining safe, until it realises that there is no other way than crossing the high-risk bridge to achieve its goal. The LTL property associated with this task is as follows:
\vspace{-1mm}
\begin{equation}
\label{slippery_task}
\lozenge \mathit{target} \wedge \square \neg \mathit{unsafe}.
\end{equation}
Notice that in this example the safety requirements we uphold while learning are embedded directly within the LTL formula for the task. In general the two requirements can be distinct.
To ensure the agent's safety, we create a safe padding based on the agent's knowledge of its own dynamics. This safe padding gradually grows, allowing the agent to safely explore the MDP (Fig.~\ref{fig_bridge}c--d) while repelling the agent from getting too close to unsafe areas. Thanks to this guarding effect of the safe padding, once the goal is reached, the agent can safely back-propagate the reward and shape the value function (Fig.~\ref{fig_bridge}b), according to which the safe policy is later generated. Furthermore, note that with Cautious RL the agent is focused on those parts of the state space that are most relevant to the satisfaction of the given LTL property.
There was not a single incident of entering an unsafe state in this experiment, even with such a limited observation radius (Table~\ref{success_stats}). With the safe padding on, training took 170 episodes for RL to converge; with the safe padding off, it took 500 episodes.
\begin{table}
\centering
\scalebox{0.9}{
\begin{tabular}{|c|c|c|c|}
\hline
Case Study & Safe Padding & Fail Rate & Success Rate \\
\hline \hline
\multirow{2}{*}{Slippery Grid-world} & Off & 36.48\% & 63.52\% \\ \cline{2-4}
& On & 0\% & 100\%
\\ \hline \hline
\multirow{2}{*}{Pacman} & Off & 52.69\% & 47.31\% \\ \cline{2-4}
& On & 10.77\% & 89.23\%
\\ \hline
\end{tabular}}
\caption{Proportion of number of times that the agent ended in unsafe (fail) states, and proportion of number of times in which the agent finds a path satisfying the LTL specification during learning. Statistics are taken over 500 learning episodes in the slippery grid-world and over 20000 episodes in the Pacman experiment.}
\label{success_stats}
\end{table}
The second experiment is the classic game Pacman, which is initialised in a tricky configuration likely to lead the agent to be caught by the roaming ghosts (Fig.~\ref{pacmaninit}a). In order to win the agent has to collect all tokens without being caught by ghosts:
\begin{equation}
\label{pacman_p}
\lozenge [ (f_1 \wedge \lozenge f_2) \vee (f_2 \wedge \lozenge f_1)] \wedge \square \neg g,
\end{equation}
where the token on the left is labelled as $ f_1 $, the one on the right as $ f_2 $, and the state of being caught by a ghost is labelled as $ g $. The constructed LDBA is shown in Fig.~\ref{pacmaninit}b. The ghost dynamics are stochastic: with probability $ p_g=0.9 $ each ghost chases Pacman (\emph{chase mode}); otherwise it executes a random move (\emph{scatter mode}). Note that each combination of (Pacman, ghost1, ghost2, ghost3, ghost4) positions represents a state in the experiment, resulting in a state-space cardinality of over $80000$. As in the previous case study, the safety requirements we uphold while learning are embedded directly within the LTL formula for the task.
Fig.~\ref{pacmaninit}c gives the results of learning with the safe padding on, and Fig.~\ref{pacmaninit}d with it off. Note that with the safe padding on the agent was able to successfully escape the ghosts from the very beginning, at the cost of a longer path to win, whereas without the safe padding it took $80000$ episodes to score the very first win. In the Pacman experiment, the safe padding significantly reduced the number of times the agent got caught by the ghosts (Table~\ref{success_stats}).
\begin{figure}[!t]
\centering
\vspace{-4mm}
\scalebox{0.88}[1]{
\hspace{-5.5mm}\subfloat[]{{\includegraphics[width=0.5\linewidth]{bridge_c.png}}}}\\
\subfloat[]{{\includegraphics[width=0.50\linewidth]{value_1.png}}}%
\subfloat[]{{\includegraphics[width=0.50\linewidth]{value_2.png}}}%
\caption{Safety and performance trade-off: (a) slippery grid world with two options to satisfy formula (\ref{slippery_task}), where labelling is yellow: $\mathit{target}$, red: $\mathit{unsafe}$, blue: $\mathit{safe}$, and green is the initial state $s_0$; (b)~value function $V(s)$ without safe padding; and (c)~value function with safe padding (cautious RL).}%
\label{fig_bridge_2}
\end{figure}
\section{Conclusions}
In this paper, we have proposed \emph{Cautious Reinforcement Learning}, a~general method for safe exploration in RL usable on black-box MDPs, which ensures agent safety both during the learning process and for the final, trained agent.
The proposed safe learning approach is in principle applicable to any standard reward-based RL. We have employed Linear Temporal Logic (LTL) to express an overall task (or goal), and to shape the reward for the agent in a provably-correct and safe scheme.
We have proposed a double-agent RL architecture: one agent is pessimistic and limits the selection of the actions of the other agent, i.e., the optimistic one, which learns a policy that satisfies the LTL requirement.
The pessimistic agent creates a continuously growing ``safe padding'' for the optimistic agent, which can learn the optimal task-satisfying policy, while staying safe during learning.
The algorithm automatically manages the trade-off between the need for safety during the training and the need to explore the environment to achieve the LTL objective.
\subsubsection*{Acknowledgments}
{\footnotesize This work is in part supported by the HiClass project (113213), a partnership between the Aerospace Technology Institute (ATI), Department for Business, Energy \& Industrial Strategy (BEIS) and Innovate UK.}
\bibliographystyle{ACM-Ref}
\section{Introduction}
The magnetic order can be excited by magnetic fields, spin~\cite{STT,TmIG0}
and heat~\cite{SpinCaloritronics,OtaniSpinConv} currents, mechanical rotations
and sound waves~\cite{MechanicalWaves1,MechanicalWaves2}, optical fields in
cavities~\cite{Cavitronics1,Cavitronics2}, and electric
fields~\cite{Suzuki2011,Nozaki2012,ParametricVCMA,GeneralReferenceEField,GeneralReferenceEField2,VCMARecentReview}. A mechanism of the latter is
\textit{voltage-control of magnetic anisotropy} (VCMA), which avoids electric
currents and thereby Joule heating. A time-dependent applied electric field
can assist or fully actuate magnetization switching~\cite{GeneralReferenceEField,GeneralReferenceEField2,VCMARecentReview,Shiota2009, Assisted2,Assisted,Kanai2012,SwitchingNp1,SwitchingNp2}, and excite the
ferromagnetic resonance~\cite{Nozaki2012,GeneralReferenceEField,GeneralReferenceEField2,VCMARecentReview,Zhu2012}. However, in order to become useful, the VCMA should be enhanced. This can be realized by, for example, improving
interface properties~\cite{Interfaces2,VCMARecentReview}, thermal
stability~\cite{ThermalStability}, employing higher-order magnetic
anisotropies~\cite{VCMAHigher1}, and reducing temperature
dependences~\cite{VCMAHigher2}. The control of magnetic properties by electric
fields has also been demonstrated or proposed in magnetoelectric
materials~\cite{Gerhard2010,Yamada2011,Sekine2016}, by proximity
effects~\cite{Platinum,PtProximityMag,ChibaTanaka}, by nuclear spin resonance in
single-molecule magnets~\cite{SMMWernsdorfer}, and by the tuning of exchange
interactions~\cite{You2005,Haney2009a,Tang2009,New2017,RKKYMultilayers1,RKKYImpurities}.
The electrostatic environment of a local moment affects its magnetic energy
via the spin-orbit interaction (SOI)~\cite{BookSkomski1,BookSkomski2}. In
transition-metal atoms such as Fe, Co, and Ni with partially filled 3d
subshells, the electrostatic interaction with neighboring atoms, $E_{CF}\sim
1$~eV is much larger than the SOI $E_{SOI}\sim0.05$~eV~\cite{BookSkomski1},
which implies that the orbital momentum of transition-metal ions is easily
quenched, while the relatively large 3d orbital radius favors band formation and
itinerant magnetism. The opposite occurs for the lanthanide series, i.e.,
atoms from lanthanum (La with atomic number 57) to lutetium (Lu with atomic
number 71). The \textit{rare earths} (RE) also include non-magnetic
scandium (Sc) and yttrium (Y). The ground states of the lanthanide La$^{3+}$, Eu$^{3+}$, and Lu$^{3+}$ ions are also not magnetic. The half-filled 4f subshell of the magnetic ion Gd$^{3+}$
lacks orbital angular momentum and, therefore, SOI. Except for La$^{3+}$, Eu$^{3+}$, Gd$^{3+}$, and Lu$^{3+}$,
the 4f SOI energy of the lanthanide series $E_{SOI}\sim0.2$ eV is much stronger than crystal-field energies
$E_{CF}\sim0.01$ eV~\cite{BookSkomski1}, so their orbital momenta are
atomic-like and not quenched. The magnetism of lanthanide-containing compounds
can be understood by models that proceed from an atomic picture. Nevertheless,
since the crystal fields lock to their spin-orbit induced anisotropic charge
distributions, large magnetocrystalline anisotropies can be achieved.
The mechanism for the VCMA of RE moments is the electric
field-induced torque on an anisotropic 4f charge distribution with rigidly
coupled magnetic moment by the electric quadrupolar coupling~\cite{OurVoltage}.
This torque is communicated to the magnetic order via the exchange interaction.
Here, we predict that an interfacial RE dusting can enhance the VCMA efficiency of transition-metal magnetic tunnel junctions. We study the temperature dependence of the VCMA of RE moments, as well as the role of higher-order anisotropy constants. The latter issue has been addressed in transition-metal systems~\cite{VCMAHigher1,VCMAHigher2}, where the first- and second-order contributions partially cancel in the total VCMA. We calculate the magnetic anisotropy constants (MACs) of a rare-earth ion in the presence of an electric field, assuming a strong exchange coupling with the system magnetization. The effect is strongest for a RE at an interface between a magnetic metal and a non-magnetic insulator, such as Co$|$MgO. The Hamiltonian of the local moment in an angular momentum basis leads to so-called Stevens operators that can be easily diagonalized. We extract the intrinsic and field-induced MACs from the corresponding temperature-dependent free energy.
\section{Single-ion magnetic anisotropy}
The 4f atomic radius is small compared to that of other filled atomic shells,
which isolates the 4f electrons from other atoms in
compounds~\cite{BookJensen}. Consequently, the crystal fields that would
quench the orbital momentum of 3d transition metals only slightly affect 4f
electron ground-state configurations. The 4f subshell is characterized by a
spin ($\mathbf{S}$), an orbital momentum ($\mathbf{L}$), and a total angular
momentum ($\mathbf{J}=\mathbf{L}+\mathbf{S}$). In the basis $|S,L,J,J_{z}%
\rangle$,
\begin{eqnarray*}
\mathbf{S}^{2}|S,L,J,J_{z}\rangle & =\hbar^{2}S\left( S+1\right)
|S,L,J,J_{z}\rangle,\\
\mathbf{L}^{2}|S,L,J,J_{z}\rangle & =\hbar^{2}L\left( L+1\right)
|S,L,J,J_{z}\rangle,\\
\mathbf{J}^{2}|S,L,J,J_{z}\rangle & =\hbar^{2}J\left( J+1\right)
|S,L,J,J_{z}\rangle,\\
\hat{J}_{z}|S,L,J,J_{z}\rangle & =\hbar J_{z}|S,L,J,J_{z}\rangle,
\end{eqnarray*}
where $S$ and $L$ are governed by Hund's first and second rules, respectively.
The third rule determines the multiplet $J=L\pm S$, where $-$ and $+$ apply to
the light (i.e., less than half-filled 4f shell with atomic number less
than 64) and heavy REs, respectively. We list $S$, $L$, and $J$ for the
whole 4f series in Table~\ref{TableHundRules}. In the following, we focus on
the ground-state manifold with constant $S$, $L$, and $J$ numbers. This multiplet of
$J=L\pm S$ has $2J+1$ states that are degenerate in the absence of
electromagnetic fields. Also,
\begin{eqnarray*}
\mathbf{S} & =(g_{J}-1)\mathbf{J},\\
\mathbf{L} & =(2-g_{J})\mathbf{J},\\
\mathbf{L}+2\mathbf{S} & =g_{J}\mathbf{J},
\end{eqnarray*}
where $g_{J}=3/2+[S(S+1)-L(L+1)]/[2J(J+1)]$ is the Land\'{e} $g-$factor. The
projections of $\mathbf{S}$, $\mathbf{L}$, and $\mathbf{L}+2\mathbf{S}$ on
$\mathbf{J}$ for lanthanide atoms manifest themselves also in the crystal-field
Hamiltonian, as shown in the next subsection.
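The Hund's-rules bookkeeping above is easy to verify numerically. The short sketch below (our own illustration, using exact rational arithmetic) evaluates $g_{J}=3/2+[S(S+1)-L(L+1)]/[2J(J+1)]$ for a few ions and reproduces the values of Table~\ref{TableHundRules}:

```python
from fractions import Fraction as Fr

def lande_g(S, L, J):
    """g_J = 3/2 + [S(S+1) - L(L+1)] / [2 J(J+1)], exact in rationals."""
    S, L, J = Fr(S), Fr(L), Fr(J)
    return Fr(3, 2) + (S * (S + 1) - L * (L + 1)) / (2 * J * (J + 1))

# Hund's-rules (S, L, J) of a few trivalent lanthanides (cf. the table)
ions = {
    "Ce3+": (Fr(1, 2), 3, Fr(5, 2)),
    "Nd3+": (Fr(3, 2), 6, Fr(9, 2)),
    "Tb3+": (3, 3, 6),
    "Dy3+": (Fr(5, 2), 5, Fr(15, 2)),
}
for name, slj in ions.items():
    print(name, lande_g(*slj))  # 6/7, 8/11, 3/2, 4/3
```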
\subsection{Stevens operators}
Let us consider a crystal site with a potential that is invariant to rotations around the $z$-axis, which can be expanded as
\begin{eqnarray*}
-eV\left( \mathbf{r}\right) & =A_{2}^{(0)}\left( 3z^{2}-r^{2}\right)
+A_{4}^{(0)}\left( 35z^{4}-30r^{2}z^{2}+3r^{4}\right) \nonumber\\
& +A_{6}^{(0)}\left( 231z^{6}-315z^{4}r^{2}+105z^{2}r^{4}-5r^{6}\right) ,
\end{eqnarray*}
where $A_{l}^{(0)}$ is a \textit{uniaxial crystal-field parameter} associated
with the $Y_{l}^{0}$ spherical harmonic function (see \ref{SecAppendixA}),
usually expressed in units of temperature divided by $a_{0}^{l}$, where
$a_{0}\approx 0.53$ \AA \ is the Bohr radius. For example, for the 4f states of
Nd$_{2}$Fe$_{14}$B~\cite{CoeysBook} $A_{2}^{(0)}=304$\thinspace\textrm{K}%
$/a_{0}^{2}$, $A_{4}^{(0)}=-15\,\mathrm{K}/a_{0}^{4}$, and $A_{6}%
^{(0)}=-2\,\mathrm{K}/a_{0}^{6}$. The crystal-field parameters of the 4f and
4g states of other members of the (RE)$_{2}$Fe$_{14}$B family can be found
in Ref.~\cite{CoeysBook}. \begin{table}[t]
\begin{center}%
\begin{tabular}
[c]{|c|c|c|c|c|c|}\hline
Ion & $4f^{n}$ & S & L & J & g$_{J}$\\\hline
Ce$^{3+}$ & $4f^{1}$ & 1/2 & 3 & 5/2 & 6/7\\\hline
Pr$^{3+}$ & $4f^{2}$ & 1 & 5 & 4 & 4/5\\\hline
Nd$^{3+}$ & $4f^{3}$ & 3/2 & 6 & 9/2 & 8/11\\\hline
Pm$^{3+}$ & $4f^{4}$ & 2 & 6 & 4 & 3/5\\\hline
Sm$^{3+}$ & $4f^{5}$ & 5/2 & 5 & 5/2 & 2/7\\\hline
Eu$^{3+}$ & $4f^{6}$ & 3 & 3 & 0 & -\\\hline
Gd$^{3+}$ & $4f^{7}$ & 7/2 & 0 & 7/2 & 2\\\hline
Tb$^{3+}$ & $4f^{8}$ & 3 & 3 & 6 & 3/2\\\hline
Dy$^{3+}$ & $4f^{9}$ & 5/2 & 5 & 15/2 & 4/3\\\hline
Ho$^{3+}$ & $4f^{10}$ & 2 & 6 & 8 & 5/4\\\hline
Er$^{3+}$ & $4f^{11}$ & 3/2 & 6 & 15/2 & 6/5\\\hline
Tm$^{3+}$ & $4f^{12}$ & 1 & 5 & 6 & 7/6\\\hline
Yb$^{3+}$ & $4f^{13}$ & 1/2 & 3 & 7/2 & 8/7\\\hline
\end{tabular}
\end{center}
\caption{Ground-state manifold of the tri-positive 4f ions. $S$, $L$, and $J$
are the quantum numbers associated with $\mathbf{S}^{2}$, $\mathbf{L}^{2}$,
and $\mathbf{J}^{2}$, respectively. $g_{J}$ is the Land\'{e} g-factor.}%
\label{TableHundRules}%
\end{table}
The electrostatic Hamiltonian of $N_{4f}$ electrons in the subshell Hilbert
space can be expanded into
\begin{eqnarray}
\sum_{j=1}^{N_{4f}}\left( 3\hat{z}_{j}^{2}-\hat{r}_{j}^{2}\right)
=\vartheta_{2}\left\langle r^{2}\right\rangle \hat{O}_{2}^{(0)},\nonumber\\
\sum_{j=1}^{N_{4f}}\left( 35\hat{z}_{j}^{4}-30\hat{r}_{j}^{2}\hat{z}_{j}%
^{2}+3\hat{r}_{j}^{4}\right) =\vartheta_{4}\left\langle r^{4}\right\rangle
\hat{O}_{4}^{(0)},\nonumber\\
\sum_{j=1}^{N_{4f}}h\left(\hat{r}_{j},\hat{z}_{j} \right)
=\vartheta_{6}\left\langle r^{6}\right\rangle \hat{O}_{6}^{(0)},
\label{Stevens}%
\end{eqnarray}
where $h\left(\hat{r}_{j},\hat{z}_{j} \right)=231\hat{z}_{j}^{6}-315\hat{z}_{j}^{4}\hat{r}%
_{j}^{2}+105\hat{z}_{j}^{2}\hat{r}_{j}^{4}-5\hat{r}_{j}^{6}$ and $\hat{z}_{j}$ and $\hat{r}_{j}$ are the operators of the $z$ and the
radial coordinates of the $j$-th electron, respectively. $\langle r^{l}%
\rangle$ is the mean value of $r^{l}$ calculated for a 4f (atomic) radial wave
function. The projection constants $\vartheta_{l}$ are listed in
Table~\ref{TableThetas}, while \textit{Stevens equivalent operators} are
\begin{eqnarray}
\hbar^{2}\hat{O}_{2}^{(0)} & =3\hat{J}_{z}^{2}-\mathbf{J}^{2},\\
\hbar^{4}\hat{O}_{4}^{(0)} & =35\hat{J}_{z}^{4}-30\mathbf{J}^{2}\hat{J}%
_{z}^{2}+25\hbar^{2}\hat{J}_{z}^{2}-6\hbar^{2}\mathbf{J}^{2}+3\mathbf{J}%
^{4},\\
\hbar^{6}\hat{O}_{6}^{(0)} & =231\hat{J}_{z}^{6}-315\mathbf{J}^{2}\hat{J}%
_{z}^{4}+735\hbar^{2}\hat{J}_{z}^{4}+105\mathbf{J}^{4}\hat{J}_{z}^{2}\\
& -525\hbar^{2}\mathbf{J}^{2}\hat{J}_{z}^{2}+294\hbar^{4}\hat{J}_{z}%
^{2}-5\mathbf{J}^{6}+40\hbar^{2}\mathbf{J}^{4}-60\hbar^{4}\mathbf{J}%
^{2}.\nonumber
\end{eqnarray}
Stevens operators for other symmetries are listed
in~\cite{StevensOp1,StevensOp2,StevensOp3}. The total crystal-field
Hamiltonian reads
\begin{equation}
H_{CF}=-e\sum_{j=1}^{N_{4f}}V\left( \mathbf{r}_{j}\right) =\sum
_{l=2,4,6}\vartheta_{l}\left\langle r^{l}\right\rangle A_{l}^{(0)}\hat{O}%
_{l}^{(0)}.
\end{equation}
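Because the $\hat{O}_{l}^{(0)}$ contain only $\hat{J}_{z}$ and $\mathbf{J}^{2}$, they are diagonal in the $|J,J_{z}\rangle$ basis and simple to tabulate. The sketch below (our own code, with $\hbar=1$) encodes the three expressions above; a useful sanity check is that Stevens operators are traceless within a $J$ multiplet:

```python
import numpy as np

def stevens_diag(J):
    """Diagonal matrices of O_2^0, O_4^0, O_6^0 in the |J, Jz> basis (hbar = 1)."""
    m = np.arange(-J, J + 1)          # Jz eigenvalues, -J ... J
    X = J * (J + 1)                   # eigenvalue of J^2
    O2 = 3 * m**2 - X
    O4 = 35 * m**4 - 30 * X * m**2 + 25 * m**2 - 6 * X + 3 * X**2
    O6 = (231 * m**6 - 315 * X * m**4 + 735 * m**4 + 105 * X**2 * m**2
          - 525 * X * m**2 + 294 * m**2 - 5 * X**3 + 40 * X**2 - 60 * X)
    return np.diag(O2), np.diag(O4), np.diag(O6)

O2, O4, O6 = stevens_diag(6.0)        # J = 6, e.g. Tb3+
```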
\begin{table}[t]
\begin{center}%
\begin{tabular}
[c]{|c|c|c|c|c|}\hline
Ion & $4f^{n}$ & $10^2\vartheta_2$ & $10^3\vartheta_4$& $10^4\vartheta_6$\\\hline
Ce$^{3+}$ & $4f^{1}$ & -5.71 & 6.35&0\\\hline
Pr$^{3+}$ & $4f^{2}$ & -2.10 & -0.73&0.61\\\hline
Nd$^{3+}$ & $4f^{3}$ & -0.64 &-0.29 &-0.38\\\hline
Pm$^{3+}$ & $4f^{4}$ & 0.77&0.41 &6.69\\\hline
Sm$^{3+}$ & $4f^{5}$ & 4.13 &2.50 &0\\\hline
Eu$^{3+}$ & $4f^{6}$ & - & -&-\\\hline
Gd$^{3+}$ & $4f^{7}$ & - & -&-\\\hline
Tb$^{3+}$ & $4f^{8}$ & -1.01 &0.12&-0.01\\\hline
Dy$^{3+}$ & $4f^{9}$ & -0.63 & -0.06 &0.01\\\hline
Ho$^{3+}$ & $4f^{10}$ & -0.22& -0.03&-0.01\\\hline
Er$^{3+}$ & $4f^{11}$ &0.25 &0.04&0.02\\\hline
Tm$^{3+}$ & $4f^{12}$ & 1.01 & 0.16&-0.06\\\hline
Yb$^{3+}$ & $4f^{13}$ & 3.17 &-1.73& 1.48\\\hline
\end{tabular}
\end{center}
\caption{Projection constants of the Stevens operators in Eqs.~(\ref{Stevens})
\cite{StevensOp2}. The nearly ellipsoidal 4f electron density causes a
hierarchy of projection constants, i.e., most ions obey the scaling
$|\vartheta_{2}|\sim10^{-3}-10^{-2}$, $|\vartheta_{4}|\sim10^{-5}-10^{-3}$, and $|\vartheta
_{6}|\sim10^{-6}-10^{-4}$, so that the quadrupole contribution dominates. Some references use the
notation $\alpha_{J}=\vartheta_{2}$, $\beta_{J}=\vartheta_{4}$, and
$\gamma_{J}=\vartheta_{6}$.}%
\label{TableThetas}%
\end{table}
\subsection{Magnetic anisotropy constants}
In several magnets, the exchange interaction strongly couples the 4f local
moments to the magnetization $\mathbf{m}=\sin\theta\left( \cos\phi
\mathbf{e_{x}}+\sin\phi\mathbf{e_{y}}\right) +\cos\theta\mathbf{e_{z}}$,
where $\mathbf{e_{j}}$ is the unit vector along the Cartesian axis $j$. Then,
the Hamiltonian $H$ of a single RE atom reads
\begin{equation}
H=H_{CF}+\frac{J_{ex}\left( g_{J}-1\right) f(T)}{\hbar}\mathbf{J}%
\cdot\mathbf{m},\label{EqHamiltonianGeneral}%
\end{equation}
where $J_{ex}>0$ is the exchange constant with units of energy. The exchange
coupling favors the parallel alignment between the magnetization $\mathbf{m}$
and the spin contribution to the 4f moment $-\gamma_{e}\left( g_{J}-1\right)
\mathbf{J}$, with $-\gamma_{e}$ being the electron gyromagnetic ratio. Note
that the 4f spin $\mathbf{S}$ is antiparallel (parallel) to $\mathbf{J}$ for
the light (heavy) lanthanides because $g_{J}<1$ ($g_{J}>1$). $f(T)$
parameterizes the temperature dependence of the system
magnetization~\cite{KuzMin1,KuzMin2}
\begin{equation}
f(T)=\left[ 1-s\left( \frac{T}{T_{C}}\right) ^{3/2}-\left( 1-s\right)
\left( \frac{T}{T_{C}}\right) ^{p}\right] ^{1/3},\label{EqKuzMin}%
\end{equation}
where $T_{C}$ is the Curie temperature, and $s$ and $p$ (with $p>s$) are
material-dependent parameters. For example, for Co~\cite{KuzMin1},
$T_{C}=1385$ K, $s=0.11$, and $p=5/2$; for Fe, $T_{C}=1044$ K, $s=0.35$, and
$p=4$. The empirical expression~(\ref{EqKuzMin}) interpolates between Bloch's law $1-(s/3)\left( T/T_{C}\right) ^{3/2}$ for
$T\rightarrow0$ and the critical scaling $\left( 1-T/T_{C}\right) ^{1/3}$
for $T\rightarrow T_{C}$. Equation~(\ref{EqKuzMin}) is all we need to know
about the magnetic host.
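Equation~(\ref{EqKuzMin}) is straightforward to code up. A minimal sketch (our own; function and parameter names are arbitrary) that also checks the limits $f(0)=1$ and $f(T_{C})=0$:

```python
import numpy as np

def f_kuzmin(T, Tc, s, p):
    """Reduced magnetization m(T)/m(0) of the Kuz'min-type fit."""
    t = np.asarray(T, dtype=float) / Tc
    return (1.0 - s * t**1.5 - (1.0 - s) * t**p) ** (1.0 / 3.0)

# Co and Fe parameters quoted in the text
f_Co = f_kuzmin(300.0, 1385.0, s=0.11, p=2.5)
f_Fe = f_kuzmin(300.0, 1044.0, s=0.35, p=4.0)
print(f_Co, f_Fe)
```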
The Helmholtz free energy reads~\cite{DefK}
\begin{equation}
F=-\frac{1}{\beta}\ln\left[ \sum_{n=1}^{2J+1}e^{-\beta E_{n}}\right]
,\label{EqFHelmholtz}%
\end{equation}
where $\beta=1/(k_{B}T)$, $k_{B}=8.617\times10^{-5}$ eV/K is Boltzmann's
constant, $T$ is temperature, and $E_{n}$ is the $n$-th eigenvalue of
Eq.~(\ref{EqHamiltonianGeneral}). The uniaxial anisotropy energy density can be expanded in the magnetization direction \(\theta\) as~\cite{BookSkomski1,BookSkomski2}
\begin{equation}
n_{RE}F=K_{1}\sin^{2}\theta+K_{2}\sin^{4}\theta+K_{3}\sin^{6}\theta,
\end{equation}
where $n_{RE}$ is the (surface or volume) density of RE moments and
\begin{eqnarray}
K_{1} & =\frac{n_{RE}}{2}\lim_{\theta\rightarrow0}\left[ \left(
\partial_{\theta}\right) ^{2}F\right] ,\label{EqDefK1}\\
K_{2} & =\frac{n_{RE}}{4!}\lim_{\theta\rightarrow0}\left[ \left(
\partial_{\theta}\right) ^{4}F\right] +\frac{K_{1}}{3},\label{EqDefK2}\\
K_{3} & =\frac{n_{RE}}{6!}\lim_{\theta\rightarrow0}\left[ \left(
\partial_{\theta}\right) ^{6}F\right] -\frac{2K_{1}}{45}+\frac{2K_{2}}%
{3},\label{EqDefK3}%
\end{eqnarray}
are the magnetic anisotropy constants (MACs) and $\left( \partial_{\theta
}\right) ^{m}$ is the $m$-th order partial derivative with respect to $\theta$.
$F$ and the MACs depend on the eigenenergies $E_{n}$ of the 4f
Hamiltonian, Eq.~(\ref{EqHamiltonianGeneral}), by
Eqs.~(\ref{EqFHelmholtz}) and (\ref{EqDefK1}-\ref{EqDefK3}). For example, when
$J=1$ and in the limit of large exchange ($\left\vert J_{ex}\right\vert
\gg|\vartheta_{l}\left\langle r^{l}\right\rangle A_{l}^{(0)}|$, for $l=2,4,6$)
and low temperatures ($k_{B}T\rightarrow0$),\footnote{Some denote the
magnetic anisotropy energy as $\kappa_{1}\cos^{2}\theta$ instead of $K_{1}%
\sin^{2}\theta$ with $\kappa_{1}=-K_{1}$.}
\begin{equation}
K_{1}=-\frac{3}{2}n_{RE}\vartheta_{2}\left\langle r^{2}\right\rangle A_{2}%
^{(0)}.\label{EqK1AResult}%
\end{equation}
$K_{1}$ does not depend on the exchange constant~\cite{DefK}.
Equation~(\ref{EqK1AResult}) is consistent with Ref.~\cite{BookSkomski2} and
can be written as $K_{1}/n_{RE}=-(3/2)Q_{2}A_{2}^{(0)}$, where $Q_{2}%
=\vartheta_{2}\left\langle r^{2}\right\rangle $ is the quadrupolar moment for
$J_{z}=J=1$, a measure of the asphericity of the 4f subshell charge density.
The calculation of general MACs requires the diagonalization of a $(2J+1)\times(2J+1)$ matrix. An analytic calculation of $K_{1}$, $K_{2}$, and $K_{3}$
for arbitrary temperature and exchange constants is tedious, but easily
carried out numerically.
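To illustrate this numerical procedure, the following minimal sketch (entirely our own, in dimensionless units with $\hbar=1$, a pure $\hat{O}_{2}^{(0)}$ crystal field of strength $B_{2}=\vartheta_{2}\langle r^{2}\rangle A_{2}^{(0)}$, and an exchange scale $\lambda$ that plays the role of $J_{ex}(g_{J}-1)f(T)$) diagonalizes the Hamiltonian, evaluates the free energy of Eq.~(\ref{EqFHelmholtz}), and extracts $K_{1}/n_{RE}$ by central finite differences. In the strong-exchange, low-temperature limit it reproduces the $J=1$ result of Eq.~(\ref{EqK1AResult}):

```python
import numpy as np

def jz_jx(J):
    """Matrices of Jz and Jx in the |J, Jz> basis (hbar = 1)."""
    m = np.arange(J, -J - 1, -1)                     # Jz = J, ..., -J
    Jz = np.diag(m)
    mp = m[1:]                                       # <m+1|J+|m> = sqrt(J(J+1) - m(m+1))
    Jp = np.diag(np.sqrt(J * (J + 1) - mp * (mp + 1)), k=1)
    return Jz, (Jp + Jp.T) / 2.0

def free_energy(theta, J, B2, lam, kT):
    """F(theta) for H = B2 * O_2^0 + lam * J.m, with m in the x-z plane."""
    Jz, Jx = jz_jx(J)
    O20 = 3 * Jz @ Jz - J * (J + 1) * np.eye(Jz.shape[0])
    H = B2 * O20 + lam * (np.cos(theta) * Jz + np.sin(theta) * Jx)
    E = np.linalg.eigvalsh(H)
    E0 = E.min()                                     # regularize the exponentials
    return E0 - kT * np.log(np.exp(-(E - E0) / kT).sum())

def k1_per_ion(J, B2, lam, kT, h=1e-3):
    """K1 / n_RE = (1/2) d^2F/dtheta^2 at theta = 0, by central differences."""
    F = lambda t: free_energy(t, J, B2, lam, kT)
    return 0.5 * (F(h) - 2.0 * F(0.0) + F(-h)) / h**2
```

For $J=1$, $B_{2}\ll\lambda$, and $k_{B}T\rightarrow0$, the routine returns $K_{1}/n_{RE}\approx-(3/2)B_{2}$, as quoted in the text.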
In the following, we numerically compute the temperature-dependent MACs
induced by electric fields at an \textit{insulator}$|$\textit{metal}
interface, also considering that crystal fields at interfaces may
substantially differ from that in bulk crystals. We assume uniaxial symmetry
and denote the interface crystal field parameters by $\bar{A}_{l}^{(0)}$. An
applied voltage can give rise to locally large electric fields $E_{0}$
normal to a metal$|$insulator interface (along the $z$-axis), which
contributes as $\Delta A_{l}^{(0)}$ with total $\tilde{A}_{l}^{(0)}=\bar
{A}_{l}^{(0)}+\Delta A_{l}^{(0)}$. \begin{figure}[b]
\begin{center}
\includegraphics[width=8.5cm]{Spectrum.pdf}
\end{center}
\caption{Energy spectra of lanthanide local moments at a Co surface at room
temperature, in units of the thermal energy $k_{B}\times300$~K, for $J_{ex}=0.1$~eV, $E_{0}=10$~mV/nm, and magnetization along $z$ (i.e.,
$\theta=0$). Crystal-electric field effects are not included. The dots are the calculated eigenvalues
$E_{n}$ of the Hamiltonian~(\ref{EqHamiltonianGeneral}).}%
\label{Fig0}%
\end{figure}
\section{Electric field-dependent magnetic anisotropy}
The applied electric field $E_{0}$ is screened on the scales of the
Thomas-Fermi length $d_{TF}\sim1$~\AA \ on the metal side, so $\mathbf{E}%
=E_{0}e^{-z/d_{TF}}\mathbf{e_{z}}$ for $z>0$, with $z=0$ being the interface
position. Close to $z=0$ and using the expressions from~\ref{SecAppendixA},
\begin{eqnarray}
\Delta A_{2}^{(0)} & =-\frac{eE_{0}}{6d_{TF}},\\
\Delta A_{4}^{(0)} & =-\frac{eE_{0}}{840d_{TF}^{3}},\\
\Delta A_{6}^{(0)} & =-\frac{eE_{0}}{166320d_{TF}^{5}}.
\end{eqnarray}
Therefore, the electric field modifies not only the second-order uniaxial
anisotropy but also higher-order terms. With this set of crystal-field
parameters, we can diagonalize the atomic
Hamiltonian~(\ref{EqHamiltonianGeneral}), evaluate the free
energy~(\ref{EqFHelmholtz}), and compute the MACs~(\ref{EqDefK1}-\ref{EqDefK3}%
), see~\ref{AppNum}. We plot the
spectra $E_{n}$ with $n \in \{1,\ldots,2J+1\}$ at room temperature in Fig.~\ref{Fig0} in units of
the thermal energy for $J_{ex}=0.1$~eV, $E_{0}=10$~mV/nm, and $\theta=0$. The
exchange interaction dominates the term splittings, while electric-field
effects are small.
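For orientation, the field-induced parameters $\Delta A_{l}^{(0)}$ can be converted to the customary units of K$/a_{0}^{l}$ with a few lines (our own snippet; it only encodes the three expressions above for $E_{0}=10$~mV/nm and $d_{TF}=1$~\AA , with the electron charge cancelling when energies are expressed in eV):

```python
# Field-induced crystal-field parameters Delta A_l^(0), converted to K / a0^l
kB = 8.617e-5          # Boltzmann constant, eV/K
a0 = 0.529e-10         # Bohr radius, m
d_TF = 1.0e-10         # Thomas-Fermi screening length, m
E0 = 1.0e7             # applied field, V/m (= 10 mV/nm)

# -e E0 / (6 d_TF) etc.; in eV units the factor e drops out numerically
dA2 = -E0 / (6.0 * d_TF)            # eV/m^2
dA4 = -E0 / (840.0 * d_TF**3)       # eV/m^4
dA6 = -E0 / (166320.0 * d_TF**5)    # eV/m^6

dA2_K = dA2 * a0**2 / kB            # K / a0^2
dA4_K = dA4 * a0**4 / kB            # K / a0^4
dA6_K = dA6 * a0**6 / kB            # K / a0^6
print(dA2_K, dA4_K, dA6_K)
```

The resulting $\Delta A_{2}^{(0)}\approx-0.5$~K$/a_{0}^{2}$ is indeed small compared with both the exchange and typical bulk crystal-field parameters.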
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{MACs.pdf}
\end{center}
\caption{Voltage-controlled magnetic anisotropy constants per unit of electric field $E_0$: $\Delta K_{1}/E_0$ (solid line),
$\Delta K_{2}/E_0$ (square-dashed line), and $\Delta K_{3}/E_0$
(dashed line) of rare-earth moments at the surface of Co
at low temperatures ($T=0.01$ mK). Here we use the density $n_{RE}=1$/nm$^{2}$ and exchange constant $J_{ex}=0.1$ eV.
For better visibility, $\Delta K_{2}$ and $\Delta K_{3}$ are enlarged
by factors of 10 and 100, respectively. For an electric field
$E_{0}=10$ mV/nm $=100$ kV/cm, $\Delta K_{1}$ is of the order of $\mu$J/m$^2$ for most lanthanides.}%
\label{Fig1}%
\end{figure}
The MACs $\Delta K_{l}$ from $\Delta A_{l}^{(0)}$ are proportional
to the applied electric field $E_{0}$. $\Delta K_{1}$ has a negative slope for the oblate
(pancake-shaped) ions Ce$^{3+}$, Pr$^{3+}$, Nd$^{3+}$, Tb$^{3+}$, Dy$^{3+}$,
and Ho$^{3+}$, and a positive slope for the prolate (cigar-shaped) ions
Pm$^{3+}$, Sm$^{3+}$, Er$^{3+}$, Tm$^{3+}$, and Yb$^{3+}$, consistent with
previous results~\cite{OurVoltage}. Figure~\ref{Fig1} shows the VCMA
contributions of a set of RE atoms at an interface at low temperatures with
$n_{RE}=1$\thinspace nm$^{-2}$, and $J_{ex}=0.1$ eV. We use $d_{TF}=1$~\AA \
and the Co parameters for the magnetization, $\{T_{C},s,p\}=\{1385$%
~K$,0.11,5/2\}$ in Eq. (\ref{EqKuzMin}), assuming that they are not affected
much by the interface. The MACs in units of energy density result from
dividing the surface MACs by the thickness of the magnetic film. For example,
dusting the interface with one Tm$^{3+}$ ion per nm$^{2}$ with a field of $E_{0}%
\sim1$\thinspace$\mathrm{V/nm}=10^{4}\,\mathrm{kV/cm}$ creates an energy volume density of 1
MJ/m$^{3}$ in a 1 nm-thick Co film. Figure~\ref{Fig1} illustrates that the
VCMA of rare earths is governed only by $K_{1}$, while $K_{2}$ and $K_{3}$ are
negligibly small. This hierarchy differs from that of transition metals, where
$K_{1}$ and $K_{2}$ are of the same order of magnitude and partially
compensate each other~\cite{VCMAHigher1,VCMAHigher2}. This difference can be
understood as follows. The $j$-th order MAC divided by the characteristic
electrostatic energy, $eE_{0}d_{TF}$, scales as $\Delta K_{j}/(eE_{0}%
d_{TF})\propto\vartheta_{2j}\langle r^{2j}\rangle/d_{TF}^{2j}$. The 4f
subshell envelope is nearly ellipsoidal, which is accounted for by the
hierarchy of the projection constants $\left\vert \vartheta_{2}\right\vert
\gg\left\vert \vartheta_{4}\right\vert \gg\left\vert \vartheta_{6}\right\vert
$. The transition-metal 3d shells are more polarizable and can be more easily
deformed by the crystal fields than the lanthanides. A consequence is that the
quadrupole contribution of the voltage-controlled anisotropy $\Delta K_{1}$ of
rare earths is much larger than $\Delta K_{2}$ and $\Delta K_{3}$.
The temperature dependence of rare-earth magnetic anisotropies in bulk
materials has been extensively
studied~\cite{BookSkomski1,BookSkomski2,KuzMin2}. Here we calculate the
temperature dependence of the VCMA-efficiency for rare-earth atoms at an interface
between a non-magnetic insulator (such as MgO) and a magnetic metal, such as
Fe or Co. Figure~\ref{Fig2} illustrates the temperature dependence of $\Delta K_{1}/E_0$
for all lanthanides with a finite orbital momentum in the temperature range
0~K~$\leq T\leq$~1400~K for $n_{RE}=1$~nm$^{-2}$. $\Delta K_{1}$ at room temperature,
$T=300$~K, is specified inside each graph. The room-temperature efficiency is
largest for Tb$^{3+}$ and Dy$^{3+}$, with $\Delta K_{1}/E_0=-960$~fJ/Vm and $\Delta K_{1}/E_0=-910$~fJ/Vm, respectively. For an applied field of $E_0=10$~mV/nm, the corresponding VCMA values of Tb$^{3+}$ and Dy$^{3+}$ are $\Delta K_{1}=-9.6\,\mathrm{\mu}$J/m$^{2}$ and $\Delta K_{1}=-9.1\,\mathrm{\mu}$J/m$^{2}$.
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{Temperature.pdf}
\end{center}
\caption{Magnetic anisotropy constant per unit of electric field, $\Delta K_{1}/E_0$, as a function of
temperature for a rare-earth density $n_{RE}=1$/nm$^{2}$
at a Co surface.}%
\label{Fig2}%
\end{figure}
In the absence of exchange coupling between the 4f angular momentum
($\mathbf{J}$) and the magnetization ($\mathbf{m}$), REs do not contribute to
the anisotropy, so the VCMA strength vanishes for $J_{ex}\rightarrow0$. This
tendency is shown in Fig.~\ref{Fig3} for $0.01$~$\mathrm{eV}\leq J_{ex}\leq10$
eV at $T=300$~K. Results are not very sensitive to the value of typical
exchange constants, $0.1$~$\mathrm{eV}\leq J_{ex}<1$ eV, as long as they are
larger than the anisotropy induced by the crystal fields or applied voltages
($\sim0.01$~eV~\cite{BookSkomski1}).
\begin{figure}[t]
\begin{center}
\includegraphics[width=8.5cm]{MACsAndExchange.pdf}
\end{center}
\caption{Magnetic anisotropy constant per unit of electric field, $\Delta K_{1}/E_0$, for exchange constants $J_{ex}=10$ eV (solid
line), $J_{ex}=1$ eV (crosses), $J_{ex}=0.1$ eV (open circles), and
$J_{ex}=0.01$ eV (full circles). This graph uses $T=300$ K
and a density $n_{RE}=1$/nm$^{2}$ at a Co surface. The thin horizontal line
$\Delta K_{1}/E_0=0$ is a guide for the eye.}%
\label{Fig3}%
\end{figure}
\section{Intrinsic interface magnetic anisotropy}
\label{Intrinsic} The intrinsic (i.e., zero applied electric field) magnetic anisotropy at the interface is difficult to calculate accurately. Simple approaches, such as the
point-charge model, are not adequate for metals due to the efficient screening
by conduction electrons~\cite{SkomskiScreenedPointCharge}. The screened-charge
model of metals~\cite{SkomskiScreenedPointCharge} can
characterize interfacial anisotropies in metallic
multilayers~\cite{SkomskiScreenedPMultilayer}. However, this model is not
valid for metal$|$insulator interfaces with a nearly discontinuous conduction electron density.
Here we estimate the order of magnitude of the intrinsic interfacial RE
magnetic anisotropy by the model of a local moment in a metal at the origin
surrounded by four oxygen atoms with Cartesian coordinates $(\pm
d_{ox},0,-d_{ox})/\sqrt{2}$ and $(0,\pm d_{ox},-d_{ox})/\sqrt{2}$ and five
transition-metal atoms (such as Co or Fe) at positions $(\pm d_{TM},0,0)$,
$(0,\pm d_{TM},0)$ and $(0,0,d_{TM})$, as shown in Fig.~\ref{Fig4}. The
uniaxial crystal-field
parameter~\cite{BookSkomski1,BookSkomski2,SkomskiScreenedPointCharge} reads
\begin{equation*}
\bar{A}_{2}^{(0)}=\sum_{j}\frac{A_{j}^{\prime}}{2}\left( 3\cos^{2}\theta
_{j}-1\right) ,
\end{equation*}
where $j$ labels the ligand, $\cos\theta_{j}$ is the $z$-component of the
$j$-th site position ($\mathbf{r}_{j}$), and $A_{j}^{\prime}$ depends on the
distance $d_{j}=|\mathbf{r}_{j}|$
\begin{equation*}
A_{j}^{\prime}\left( d_{j}\right) =-\frac{eQ_{j}}{4\pi\varepsilon_{0}}%
\frac{e^{-d_{j}/d_{TF}}}{2d_{j}^{3}}\left[ 1+\frac{d_{j}}{d_{TF}}+\frac{1}%
{3}\left( \frac{d_{j}}{d_{TF}}\right) ^{2}\right] ,\label{EqIntrinsicCFP}%
\end{equation*}
where $\varepsilon_{0}$ is the vacuum permittivity. We adopt the
screened-charge approach~\cite{SkomskiScreenedPointCharge} for $Q_{j}=Q_{j}\left(d_{TF}\right)$. For $d_{ox}=6$~\AA , $d_{TM}=5$~\AA , and $d_{TF}=1$~\AA , with $A_{j}^{\prime}/k_{B}$
for iron and oxygen of the order of magnitude of
$100\,\mathrm{K}a_{0}^{-2}$ and $200\,\mathrm{K}a_{0}^{-2}$, respectively, we obtain $\bar{A}_{2}^{(0)}\sim3\times10^{18}%
$~eV/m$^{2}$, which is of the same order as that produced by an electric field of
$E_{0}=-1.8$\thinspace V/nm.
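This order-of-magnitude estimate can be reproduced with a few lines (our own snippet; the ligand geometry and the representative $A_{j}^{\prime}$ values are those quoted in the text):

```python
kB = 8.617e-5                 # Boltzmann constant, eV/K
a0 = 0.529e-10                # Bohr radius, m

# Ligand geometry of the sketch: cos(theta_j) = z_j / |r_j|
cos2_ox = [0.5] * 4                      # four O at (+-d,0,-d)/sqrt(2), (0,+-d,-d)/sqrt(2)
cos2_tm = [0.0] * 4 + [1.0]              # four in-plane TM atoms plus one on the z-axis

# Order-of-magnitude A'_j from the text, in K / a0^2, converted to eV/m^2
Ap_ox = 200.0 * kB / a0**2
Ap_tm = 100.0 * kB / a0**2

# A2_bar = sum_j (A'_j / 2) (3 cos^2(theta_j) - 1)
A2_bar = (sum(0.5 * Ap_ox * (3 * c2 - 1) for c2 in cos2_ox)
          + sum(0.5 * Ap_tm * (3 * c2 - 1) for c2 in cos2_tm))

# Equivalent field from Delta A2 = -e E0 / (6 d_TF), with d_TF = 1 Angstrom
E0_equiv = -6.0 * 1.0e-10 * A2_bar       # V/m
print(A2_bar, E0_equiv)
```

The geometric sum collapses to $A_{\mathrm{ox}}^{\prime}-A_{\mathrm{TM}}^{\prime}\approx100$~K$/a_{0}^{2}\approx3\times10^{18}$~eV/m$^{2}$, and the equivalent field comes out near $-1.8$~V/nm, as stated above.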
\begin{figure}[b]
\begin{center}
\includegraphics[width=4.5cm]{Sketch.pdf}
\end{center}
\caption{Sketch of the ligands of a RE atom at a metal$|$insulator
interface.}%
\label{Fig4}%
\end{figure}
For the present interface model, oblate (prolate) ions with
$\vartheta_{2}<0$ ($\vartheta_{2}>0$) favor a perpendicular (in-plane)
magnetization. Doping a transition-metal layer with oblate rare-earth ions
enhances the perpendicular interface anisotropy, which is important for
spin-transfer-torque magnetic random-access memories (STT-MRAM), but also implies the need for higher voltages to achieve VCMA-induced
magnetization switching. In
transition metals, on the other hand, the intrinsic magnetic anisotropies are small, and electric-field
effects easily dominate. A quantitative description of the intrinsic
interface rare-earth anisotropy as a function of interface structure and
morphology requires first-principles calculations.
\section{Conclusions and discussion}
We studied the temperature-dependent \textit{voltage-controlled magnetic anisotropy} of
rare-earth atoms at a magnetic metal$|$non-magnetic insulator interface. Our findings differ from the conventional wisdom based on
transition metals. In rare earths, only the lowest-order uniaxial constant can
be efficiently modulated by a voltage because of the small 4f radius and rigid
ellipsoidal shape of the 4f shell electron density. To leading order, the magnetic
anisotropy constants change linearly with the applied electric field, with a
negative slope for the oblate (pancake-like) Ce$^{3+}$, Pr$^{3+}$, Nd$^{3+}$,
Tb$^{3+}$, Dy$^{3+}$, and Ho$^{3+}$, and a positive one for the prolate
Pm$^{3+}$, Sm$^{3+}$, Er$^{3+}$, Tm$^{3+}$, and Yb$^{3+}$ moments. Rare
earths at an interface also contribute to the intrinsic (i.e., independent of the applied electric field) magnetic anisotropy,
the oblate (prolate) ones favoring a perpendicular (in-plane) equilibrium magnetization.
Our model assumes metallic screening, i.e., a drop of the electric field over
atomic distances at the interface that hosts the rare-earth moments. This
assumption might break down at non-ideal interfaces, so it should be confirmed
by experimental or \textit{ab initio} methods.
Nevertheless, we are confident about substantial effects at room temperature
for even low densities of RE atoms ($\sim1$/nm$^{2}$). Since the electric
field is strongly enhanced at metal$|$insulator interfaces, bulk doping of a
magnet with rare earths is not efficient. Still, the dusting of the
interface between a tunnel barrier and a transition metal thin film can
significantly enhance the switching efficiency of voltage-controlled tunnel junctions.
\subsection*{Acknowledgments}
This research was supported by JSPS KAKENHI Grant No. 19H006450, Postdoctorado
FONDECYT 2019 Folio 3190030, and Financiamento Basal para Centros Cientificos
de Excelencia FB0807.
\section{Introduction}
Precise measurements of neutrino mixing parameters are
crucial to searches for {\it CP}-symmetry violation among neutral leptons and tests of neutrino oscillation theory. In particular, precise knowledge of the neutrino mixing angle $\theta_{13}$ is of key significance in constraining the leptonic {\it CP} phase $\delta$~\cite{T2K_CPvsT13, NOvA, MINOS, LBNE}.
Prior to 2012, many experimental efforts had been made to determine $\theta_{13}$~\cite{DC2012, T2K2011, MINOS2011, Chooz, PaloVerde, K2K}.
The first measurement of $\theta_{13}$ with a significance greater than five standard deviations was reported by the Daya Bay Reactor Neutrino Experiment in 2012~\cite{DYB1}.
The most recent determinations of $\theta_{13}$ from reactor and accelerator experiments~\cite{nGd8AD, RENO, DC_nH16, DYB_nH, DC, T2K, MINOS_t13} are consistent.
The three reactor antineutrino experiments, Double Chooz~\cite{DC_TDR}, RENO~\cite{RENO_TDR}, and Daya Bay~\cite{DYB_TDR}, currently provide the most precise measurements of the mixing angle. They use gadolinium-doped liquid scintillator to identify electron antineutrinos through inverse $\beta$-decay (IBD) reactions ($\overline{\nu}_{e}$$\ +\ p \rightarrow n+e^{+}$) with the neutron capturing on gadolinium ($n$Gd). A surrounding volume of undoped liquid scintillator improves the efficiency of detecting $\gamma$'s that escape from the doped volume, and has been used (in conjunction with the doped volume) by each of the three reactor experiments
to determine $\sin^{2}$2$\theta_{13}$ independently through IBD reactions with the neutron captured by hydrogen ($n$H)~\cite{DC_nH, DYB_nH, RENO_nH, DC_nH16}. The KamLAND experiment has used $n$H IBDs to measure the disappearance of reactor $\overline{\nu}_{e}$~\cite{KamLAND} and the flux of geo-$\overline{\nu}_{e}$~\cite{KamLANDgeo}. The Super-Kamiokande experiment has used $n$H IBDs to search for relic supernova $\overline{\nu}_{e}$~\cite{SuperK}.
Future projects, including the medium-baseline reactor experiments JUNO~\cite{JUNO} and RENO-50~\cite{RENO50}, and LENA~\cite{LENA}, will also make use of $n$H IBDs.
Techniques developed for this analysis may be useful for these future experiments.
The previous analysis of $n$H IBDs from Daya Bay~\cite{DYB_nH} is improved in this article with 3.6 times the number of detected IBDs and with reduced uncertainties of backgrounds and the neutron-capture energy selection efficiency.
This statistically-independent measurement is also largely systematically independent from the $n$Gd-IBD analysis, and improves the overall uncertainty of $\sin^{2}$2$\theta_{13}$ from Daya Bay.
This article is organized as follows. Section~\ref{sec:Exp} describes the Daya Bay experiment. The calculation of reactor antineutrino flux is described in Section~\ref{sec:reactor}. Analysis of the data, including event reconstruction and IBD selection, is described in Section~\ref{sec:Selection}. Section~\ref{sec:ACC} describes the accidental background, and Section~\ref{sec:CorrBkg} describes correlated backgrounds. The IBD selection efficiency is discussed in Section~\ref{sec:DetEff}. The fit for $\sin^{2}$2$\theta_{13}$ and its combination with the $n$Gd-IBD result are presented in Section~\ref{sec:Results}. Section~\ref{sec:Future} briefly discusses the impact of the results and improvements expected in the future.
\section{Experiment}
\label{sec:Exp}
Located in Guangdong province, China, the Daya Bay experiment measures electron antineutrinos emitted from three pairs of nuclear reactors, each reactor nominally producing 2.9~GW of thermal power. Inside the adjacent mountains, two $near$ experimental halls (EH1 and EH2) are located roughly 360--470~m from their nearest reactor, and one $far$ experimental hall (EH3) is located 1.52--1.93~km from all six reactors.
Each far (near) experimental hall contains 4 (2) antineutrino detectors (ADs) submerged in a two-zone water Cherenkov detector~\cite{muon}.
An inner and outer zone together provide each AD with $>$ 2.5~m of shielding against ambient radiation and spallation products of nearby cosmogenic muons.
These inner and outer water shields (IWS and OWS) are independent cosmogenic muon detectors with 160 (121) and 224 (167) 20-cm photomultiplier tubes (PMTs), respectively, in the far (near) hall(s). Detecting muons enables estimates of muon-induced backgrounds, particularly $^9$Li/$^8$He decay products and spallation neutrons.
The ADs were identically designed and consist of three nested, coaxial cylindrical vessels: an inner and outer acrylic vessel (IAV and OAV)~\cite{AVs} and an outermost stainless steel vessel (SSV), as shown in Fig.~\ref{fig:AD}.
\begin{figure}[b]
\includegraphics[trim=220 130 450 60,clip,width=\columnwidth]{AD0}
\caption{Schematic of an antineutrino detector. See the text for definitions.}
\label{fig:AD}
\end{figure}
For future reference, the $z$ coordinate is defined by the central axis of the cylinders and the $r$ coordinate is measured radially from the central axis.
The IAV is about 3~m in both height and diameter, and holds 20~tons of gadolinium-doped (0.1\% by mass) liquid scintillator (GdLS)~\cite{GdLS}. The surrounding OAV is about 4~m in both height and diameter, and holds 22~tons of undoped liquid scintillator (LS) to improve the efficiency of detecting $\gamma$'s that escape from the GdLS. The surrounding SSV is about 5~m in both height and diameter, and holds 36~tons of mineral oil (MO) to shield against radiation from the PMTs and the SSV.
Each AD contains 192 20-cm PMTs arranged in 24 columns and 8 rings at a fixed radius ($r \approx$ 2.19~m) in the MO. Reflectors were installed above and below the OAV to improve light collection.
Three automated calibration units (ACUs) are affixed atop each AD and house LEDs and various radioactive sources for calibrating the energy scale and position reconstruction of events in the ADs~\cite{ACU}. The ACUs deploy vertically at three radial positions: ACU-A at the center ($r$ = 0), ACU-B near the wall of the IAV ($r$ = 1.35~m), and ACU-C near the wall of the OAV ($r$ = 1.77~m).
ADs were triggered, and recorded the time and charge information of each PMT channel, when the number of PMTs with pulses above threshold ($N_\mathrm{PMT}$) was $\geq45$ or when the integrated sum of PMT pulses from all 192 PMTs ($Q_\mathrm{sum}$) was $\gtrsim$ 65~photoelectrons.
Both trigger thresholds corresponded to approximately 0.4~MeV and accepted 100\% of IBD positrons with $>$~0.7 MeV of deposited energy~\cite{DYB_NIM}.
Water shields triggered independently under analogous conditions~\cite{muon}.
The trigger criteria were tested within each cycle of an 80-MHz clock, and if satisfied, the subsequent 1~$\mu$s (and preceding 200~ns) of data from all channels were recorded.
The physical interactions that caused a single trigger in a given detector are referred to as an ``event''.
The time of an event is defined as the time of the trigger.
More detailed descriptions of the detector hardware are given in Ref.~\cite{DYB_Det}.
The analysis presented in this article determines $\sin^{2}$2$\theta_{13}$ by counting interactions of reactor antineutrinos in each AD in the one far and two near experimental halls.
Antineutrinos were identified in both the GdLS and LS volumes via IBD reactions ($\overline{\nu}_{e}$$\ +\ p \rightarrow n+e^{+}$) in which the positron carried away 99.4\% of the kinetic energy of the final state on average. The positron deposited energy within $O$(1)~ns and then annihilated with an electron, usually producing two back-to-back 0.511-MeV $\gamma$'s (several percent of the positrons annihilated in flight such that the sum of $\gamma$ energies was greater than $2~\times$ 0.511 MeV). The neutron thermalized and was captured primarily by Gd or H, releasing an approximately 8-MeV $\gamma$ cascade or a single 2.22-MeV $\gamma$, respectively. The time from production to capture was typically tens to hundreds of microseconds. The temporal coincidence of the prompt positron and delayed neutron-capture clearly distinguishes antineutrinos from single-event backgrounds.
\section{Reactor antineutrino flux}
\label{sec:reactor}
The expected number of IBDs in an AD was calculated as the product of the number of IBDs per target proton $\Phi$ and the efficiency-weighted number of target protons $N_\varepsilon$:
\begin{equation}
\label{eq:predIBD}
\overline{N}_\mathrm{IBD}=\Phi N_\varepsilon.
\end{equation}
The latter is discussed in Section~\ref{sec:DetEff} and the former is defined for the $d$-th AD as
\begin{equation}
\label{eq:Phi}
\Phi_d \equiv \sum_{r=1}^6 \frac{1}{4\pi L_{dr}^2} \iint_{\{t_d\}}
\!\! \! \sigma_\nu(E) \ P_{\mathrm{\nu}}\left(\tfrac{L_{dr}}{E}\right) \
\frac{d^{2}N_r(E, t)}{dE dt} dE dt,
\end{equation}
where $L_{dr}$ is the baseline distance between the $d$-th AD and the $r$-th reactor core,
$\sigma_\nu(E)$ is the IBD reaction cross section of an antineutrino with energy $E$,
$P_\nu(L_{dr}/E)$ is the neutrino survival probability,
and
$d^{2}N_r(E, t)/dE dt$ is the number of antineutrinos emitted from the $r$-th reactor at time $t$ with energy $E$, which is integrated over the periods of data acquisition for the $d$-th AD $\{t_d\}$.
The baselines $L_{dr}$~\cite{supp} were measured with negligible uncertainty~\cite{DYB_Det}. The cross section $\sigma_\nu$ was evaluated according to Ref.~\cite{IBDcs} using physical parameters from Ref.~\cite{PDG2014}.
In the three-neutrino-oscillation framework, the survival probability of electron (anti)neutrinos is expressed as
\begin{equation}
\label{eq:Psur}
\begin{aligned}
P_\nu = 1
& \left. -\cos^{4}\theta_{13}\sin^{2}2\theta_{12}\sin^{2}\Delta_{21}\right.\\
& \left. -\sin^{2}2\theta_{13}\cos^{2}\theta_{12}\sin^{2}\Delta_{31} \right.\\
& \left. -\sin^{2}2\theta_{13}\sin^{2}\theta_{12}\sin^{2}\Delta_{32} \right.,
\end{aligned}
\end{equation}
where $\Delta_{ij} \equiv 1.267\Delta m_{ij}^{2}L/E$, $E$ [MeV] is the energy of the neutrino at production, $L$ [m] is the distance between the points of production and interaction of the neutrino, and $\Delta m_{ij}^{2}$ [eV$^2$] is the difference between the squared masses of mass eigenstates $\nu_i$ and $\nu_j$.
The values of $\sin^{2}$2$\theta_{12} =$ 0.846 $\pm$ 0.021, $\Delta m_{21}^{2} =$ (7.53 $\pm$ 0.18)$\times$10$^{-5}$ eV$^2$, and $\Delta m_{32}^{2} =$ (2.44 $\pm$ 0.06)$\times$10$^{-3}$ eV$^2$ (for the normal hierarchy) [$\Delta m_{32}^{2} =$ (2.52 $\pm$ 0.07)$\times$10$^{-3}$ eV$^2$ (for the inverted hierarchy)] were taken from Ref.~\cite{PDG2014}. These uncertainties were found to have negligible impact on the fit of $\sin^2$2$\theta_\mathrm{13}$ and its uncertainty.
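The survival probability above is straightforward to evaluate numerically. The following sketch (not the analysis code, only a minimal implementation of the formula) assumes the normal-hierarchy relation $\Delta m^2_{31} = \Delta m^2_{32} + \Delta m^2_{21}$ and uses the oscillation parameters quoted above as defaults:

```python
import math

def survival_probability(L_over_E,
                         sin2_2th13,
                         sin2_2th12=0.846,
                         dm2_21=7.53e-5,
                         dm2_32=2.44e-3):
    """Three-flavor electron-antineutrino survival probability.

    L_over_E : baseline over energy in m/MeV.
    Normal hierarchy assumed: dm2_31 = dm2_32 + dm2_21.
    """
    dm2_31 = dm2_32 + dm2_21
    # sin^2(theta) recovered from sin^2(2 theta)
    s2_12 = 0.5 * (1.0 - math.sqrt(1.0 - sin2_2th12))
    s2_13 = 0.5 * (1.0 - math.sqrt(1.0 - sin2_2th13))
    c4_13 = (1.0 - s2_13) ** 2          # cos^4(theta_13)
    c2_12 = 1.0 - s2_12                 # cos^2(theta_12)

    def s2D(dm2):
        # sin^2(Delta_ij) with Delta_ij = 1.267 dm2_ij L / E
        return math.sin(1.267 * dm2 * L_over_E) ** 2

    return (1.0
            - c4_13 * sin2_2th12 * s2D(dm2_21)
            - sin2_2th13 * c2_12 * s2D(dm2_31)
            - sin2_2th13 * s2_12 * s2D(dm2_32))
```

For a 4-MeV antineutrino, the far-hall baselines (around 1.6~km) sit near the first oscillation maximum of the $\theta_{13}$ terms, while the near-hall suppression is at the percent level.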
The reactor antineutrino emission rate was calculated as
\begin{equation}
\label{eq:flux}
\resizebox{\hsize}{!}{$\displaystyle\frac{d^{2}N(E, t)}{dE \ dt} = \frac{W_\mathrm{th}(t)}{\sum_{i}f_{i}(t)e_{i}}\sum_{i}f_{i}(t)S_{i}(E)c^\mathrm{ne}_{i}(E, t) + S_\mathrm{snf}(E, t),$}
\end{equation}
where the sum is over the four primary fissile isotopes: $^{235}$U, $^{239}$Pu, $^{238}$U, $^{241}$Pu. The thermal power of the reactor $W_\mathrm{th}(t)$ and fraction of fissions due to the $i$-th isotope $f_i(t)$ were supplied by the nuclear power plant, the average thermal energies released per fission $e_i$ were from Ref.~\cite{fissionEnergy},
the antineutrino yields per fission $S_{i}(E)$ from $^{238}$U, and from $^{235}$U, $^{239}$Pu, and $^{241}$Pu, were from Ref.~\cite{Mueller} and Ref.~\cite{Huber}, respectively. The correction to the energy spectrum due to nonequilibrium effects of long-lived fission fragments $c^\mathrm{ne}_i(E,t)$ followed Ref.~\cite{Mueller}. The contribution from spent nuclear fuel $S_{\mathrm{snf}}(E,t)$ was estimated following Refs.~\cite{FengpengSNF,ZhouSNF}. Combining the uncertainties of these components gave a 0.9\% reactor-uncorrelated uncertainty of predicted IBD rate associated with a single reactor~\cite{DYB_reactor}. Additional information is given in Refs.~\cite{DYB_CPC,DYB_reactor}.
These quantities were estimated on a daily basis, weighted by the fractional data acquisition time of each day for each experimental hall, and then summed for each week.
The accumulated predicted spectra $dN_r(E)/dE$ are provided~\cite{supp}.
\section{Data analysis}
\label{sec:Selection}
The data used in this analysis were recorded beginning on December 24, 2011, with two ADs in EH1, one in EH2, and three in EH3. Recording was paused on July 28, 2012, to install the final two ADs in EH2 and EH3. On October 19, 2012, recording resumed with the full-design configuration of eight ADs. The first measurement with $n$H IBDs at Daya Bay~\cite{DYB_nH} used the 217 days of data recorded in the six-AD configuration while this study uses an additional 404 days of data recorded in the full eight-AD configuration until November 27, 2013.
Data acquisition maintained an operational efficiency of $>$ 97\% with occasional pauses for maintenance. Excluding weekly calibrations, special calibrations, and problematic data, the data acquisition (DAQ) time $T_\mathrm{DAQ}$ of each AD is listed in Table~\ref{tab:IBDsummary}. With the $n$H selection criteria described in the following sections, about 780000 IBDs were observed.
\subsection{Calibration and reconstruction}
\label{sec:Reconstruction}
The gain [analog-to-digital converter channel/photoelectron] of each PMT channel was calibrated $in~situ$ by fitting the single photoelectron peak in the PMT dark noise spectrum. The peak was fit with a Poisson-Gaussian convolution~\cite{DYB_Det}.
This gain calibration was validated by an independent method using low-intensity LED pulses.
The energy scale [MeV/photoelectron] of each AD was calibrated $in~situ$ with muon-induced spallation neutrons that captured on Gd throughout the GdLS volume.
The $\gamma$-cascade peaks from neutron capture on $^{157}$Gd and $^{155}$Gd, at 7.94 and 8.54~MeV, respectively, were fit with two Crystal Ball functions~\cite{CrystalBall} as described in Ref.~\cite{DYB_NIM}.
This energy scale calibration was validated by an independent method using weekly deployments of the $^{60}$Co $\gamma$ source of ACU-A at the center of each AD.
The energy scale of an AD increased by 10-15\% from the center of the detector to the wall of the OAV, and changed by 2-6\% between the bottom and the top of the OAV, depending on the radial position.
Position-dependent energy-scale corrections were applied using two-dimensional maps ($z \ vs.\ r$) derived from spallation neutron-captures on Gd in each AD. The maps were extrapolated to the LS volume using spallation neutron-captures on H throughout the GdLS and LS volumes.
The energy after correction is referred to as the ``reconstructed'' energy $E_\mathrm{rec}$.
Using $n$H $\gamma$'s, the standard deviation of $E_\mathrm{rec}$ across an AD was observed to be less than 1.0\% for all ADs.
The energy resolution was measured to be roughly $9\%/\sqrt{E_\mathrm{rec}\mathrm{[MeV]}}$ at the center of an AD. It improved by around 20\% (relative) from the center to the wall of the OAV.
A single position associated with each event in an AD was ``reconstructed'' using charge-pattern templates derived from Monte Carlo simulation~\cite{DYB_NIM}.
From a simulation of positrons, the average distribution of charge among the 192 PMT channels, or charge pattern, was determined for each of 9600 voxels within the OAV, corresponding to 20, 20, and 24 divisions in $r^2$, $z$, and $\phi$ (where azimuthal symmetry was assumed to reduce statistical uncertainty).
For each event, a $\chi^2$ was calculated for each voxel using the expected (from the templates) and observed charges from each PMT channel.
The voxel with the smallest $\chi^2$ was selected and, with its nearest-neighbor voxels, interpolated to obtain the reconstructed position. The reconstructed positions of prompt events (see Section~\ref{sec:EventSelect}) are shown in Figs.~\ref{Fig:NF}(e) and \ref{Fig:NF}(f), where a residual voxel grid is apparent.
The resolution for a 2.2-MeV $\gamma$ was about 12~cm in the $r$-$\phi$ plane and 13~cm along the $z$ axis, in the LS volume. The position resolution improved by more than 40\% from the center of a detector to the wall of an OAV, and varied within a few percent vertically.
Using the $^{60}$Co $\gamma$ sources of the ACUs, the bias of the reconstruction was found to be about four times smaller than the resolution, near the wall of an OAV.
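The template-matching step can be sketched as follows. This is a hypothetical toy: the `templates` array stands in for the simulation-averaged charge patterns of the 9600 voxels, and the simple Poisson-like $\chi^2$ and the omitted nearest-neighbor interpolation follow Ref.~\cite{DYB_NIM} only schematically.

```python
import numpy as np

# Stand-in templates: one normalized 192-channel charge pattern per voxel.
rng = np.random.default_rng(0)
n_voxels, n_pmts = 9600, 192
templates = rng.random((n_voxels, n_pmts))
templates /= templates.sum(axis=1, keepdims=True)

def best_voxel(observed_charges):
    """Index of the voxel whose template minimizes a simple
    Poisson-like chi^2 against the observed charge pattern."""
    q = observed_charges / observed_charges.sum()
    chi2 = ((templates - q) ** 2 / (templates + 1e-9)).sum(axis=1)
    return int(np.argmin(chi2))

# An event whose charge pattern matches voxel 1234 exactly
# is reconstructed in that voxel.
event = templates[1234] * 500.0  # roughly 500 photoelectrons total
```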
\subsection{IBD Candidate Selection}
\label{sec:EventSelect}
IBD candidates were selected from pairs of successive events in an AD, excluding those within predefined time ranges of detected muons to suppress muon-induced backgrounds.
The IBD selection criteria for the $n$Gd-~\cite{nGd8AD} and $n$H-IBD analyses are listed in Table~\ref{tab:criteria}.
First, AD events caused by spontaneous light emission from PMTs (PMT flashes) were removed as described in Section~\ref{sec:PMTflash}.
Then, for the $n$H-IBD analysis, AD events were required to have $E_\mathrm{rec} >$ 1.5 MeV to exclude low-energy backgrounds (see Section~\ref{sec:lowE}).
The AD events remaining after muon-event vetoes (see Section~\ref{sec:muonVetoes}) were grouped within a time window to identify double coincidences (see Section~\ref{sec:DCselection}).
The resulting prompt and delayed events were required to have $E_\mathrm{rec} <$ 12~MeV and $E_\mathrm{rec}$ within three standard deviations of the fitted $n$H $\gamma$ energy in each AD, respectively.
Finally, the distance between the reconstructed positions of the prompt and delayed events was required to be within 50~cm to suppress uncorrelated double coincidences (accidentals), which dominated the set of double coincidences (see Section~\ref{sec:distanceCut}). The resulting number of $n$H-IBD candidates ($N_\mathrm{DC}$) is listed in Table~\ref{tab:IBDsummary} for each AD. Details of the selection criteria are described below.
\begin{table}[t]
\begin{center}
\begin{tabular}[c]{l | c c} \hline\hline
& $n$H & $n$Gd \\ \hline
AD trigger & \multicolumn{2}{c}{$N_\mathrm{PMT} \geq$ 45 \textsc{or} $Q_\mathrm{sum} \gtrsim$ 65 p.e.} \\
20-cm PMT flash & \multicolumn{2}{c}{$Ellipse <$ 1} \\
5-cm PMT flash & \multicolumn{2}{c}{$Q < 100$\ p.e.} \\
Low energy & $>$ 1.5 MeV & $>$ 0.7 MeV \\
Detector latency & \multicolumn{2}{c}{\ $< 2\ \mu s$} \\
WS muon ($\mu_\mathrm{WS}$) \textsc{[iws/ows]} & $N_\mathrm{PMT} > 12/15$ & $N_\mathrm{PMT} > 12/12$ \\
AD muon ($\mu_\mathrm{AD}$) & \multicolumn{2}{c}{$> 20$\ MeV} \\
Showering AD muon ($\mu_\mathrm{sh}$) & \multicolumn{2}{c}{$> 2.5$\ GeV} \\
WS muon veto & (0, 400)\ $\mu s$ & (-2, 600)\ $\mu s$ \\
AD muon veto & (0, 800)\ $\mu s$ & (-2, 1000)\ $\mu s$ \\
Showering AD muon veto & (0 $\mu s$, 1 $s$) & (-2 $\mu s$, 1 $s$) \\
Coincidence time ($t_c$) & [1, 400]\ $\mu s$ & [1, 200]\ $\mu s$ \\
Prompt energy ($E_p$) & \multicolumn{2}{c}{$<$ 12\ MeV} \\
Delayed energy ($E_d$) & peak $\pm$ 3$\sigma$ & [6, 12] MeV \\
Coincidence distance ($d_c$) & $<$ 50\ cm & NA \\ \hline \hline
\end{tabular}
\caption{IBD selection criteria for the $n$H and $n$Gd~\cite{nGd8AD} analyses. See text for details. }
\label{tab:criteria}
\end{center}
\end{table}
\begin{table*}[t]
\begin{tabular}[c]{l | c c | c c | c c c c}\hline\hline
& EH1-AD1& EH1-AD2& EH2-AD1& EH2-AD2 & EH3-AD1 & EH3-AD2 & EH3-AD3 & EH3-AD4\\ \hline
$T_\mathrm{DAQ}$ [d] & 565.436 & 565.436 & 568.019 & 378.407 & 562.414 & 562.414 & 562.414 & 372.685 \\
$\varepsilon_{\mu}$ & 0.7949 & 0.7920 & 0.8334 & 0.8333 & 0.9814 & 0.9814 & 0.9812 & 0.9814 \\
$\varepsilon_m$ & 0.9844 & 0.9845 & 0.9846 & 0.9846 & 0.9844 & 0.9841 & 0.9839 & 0.9845 \\
$R_{\mu}$ [Hz] & 200.32 & 200.32 & 150.08 & 149.80 & 15.748 & 15.748 & 15.748 & 15.757 \\
$R_{s}$ [Hz] & 20.111 & 19.979 & 19.699 & 19.702 & 19.651 & 20.020 & 20.182 & 19.649 \\
$N_\mathrm{DC}$ & 217613 & 219721 & 208606 & 136718 & 56880 & 56106 & 59230 & 38037 \\
$N_\mathrm{Acc}$ & 26240$\pm$49 & 25721$\pm$49 & 25422$\pm$43 & 16365$\pm$29 & 29920$\pm$19 & 30065$\pm$20 & 32179$\pm$21 & 20427$\pm$15 \\
$N_\mathrm{Cor}$ & 191373$\pm$473 & 194000$\pm$475 & 183184$\pm$465 & 120353$\pm$449 & 26960$\pm$246 & 26041$\pm$244 & 27051$\pm$251 & 17610$\pm$196 \\
$R_\mathrm{Acc}$ [d$^{-1}$]& $59.31\pm0.11$ & $58.34\pm0.11$ & $54.54\pm0.09$ & $52.71\pm0.09$ & $55.07\pm0.04$ & $55.35\pm0.04$ & $59.27\pm0.04$ & $56.73\pm0.04$ \\
$R_\mathrm{Li9}$ [d$^{-1}$] & \multicolumn{2}{c |}{$2.36\pm1.02$} & \multicolumn{2}{c |}{$1.73\pm0.75$} & \multicolumn{4}{c}{$0.19\pm0.09$} \\
$R_\mathrm{FastN}$ [d$^{-1}$] & \multicolumn{2}{c |}{$2.11\pm0.18$} & \multicolumn{2}{c |}{$1.81\pm0.17$} & \multicolumn{4}{c}{$0.16\pm0.03$} \\
$R_\mathrm{AmC}$ [d$^{-1}$] & $0.07\pm0.04$ & $0.07\pm0.04$ & $0.07\pm0.03$ & $0.07\pm0.03$ & $0.03\pm0.02$ & $0.03\pm0.02$ & $0.03\pm0.02$ & $0.02\pm0.01$ \\
$R_\mathrm{IBD}$ [d$^{-1}$] & $428.01\pm1.48$ & $435.49\pm1.49$ & $389.41\pm1.25$ & $384.03\pm1.42$ & $49.24\pm0.45$ & $47.56\pm0.45$ & $49.44\pm0.46$ & $48.54\pm0.55$ \\
$n$H/$n$Gd & $0.993\pm0.007$ & $0.993\pm0.007$ & $0.995\pm0.007$ & $0.995\pm0.008$ & $1.015\pm0.012$ & $0.981\pm0.012$ & $1.019\pm0.012$ & $0.987\pm0.014$ \\ \hline \hline
\end{tabular}
\caption{Data summary for each AD. All per-day rates are corrected with $\varepsilon_{\mu}\varepsilon_m$. $T_\mathrm{DAQ}$ is the DAQ time, $\varepsilon_{\mu}$ is the muon-veto efficiency, $\varepsilon_m$ is the multiplicity selection efficiency, $R_{\mu}$ is the muon rate, $R_{s}$ is the rate of uncorrelated single events, $N_\mathrm{DC}$ is the number of double-coincidence (DC) events satisfying all IBD selection criteria, $N_\mathrm{Acc}$ is the number of accidental DCs, $N_\mathrm{Cor}$ is the number of correlated DCs, $R_\mathrm{Acc}$, $R_\mathrm{Li9}$, $R_\mathrm{FastN}$, $R_\mathrm{AmC}$, and $R_\mathrm{IBD}$ are the rates of accidental, $^9$Li/$^8$He, fast neutron, Am-C, and IBD (with all the backgrounds subtracted) DCs, and $n$H/$n$Gd is the ratio of the efficiency- and target proton-corrected $R_\mathrm{IBD}$ for the $n$H- and $n$Gd-IBD analyses. The differences in $R_\mathrm{IBD}$ among ADs in the same near hall are due primarily to differences in baselines to the reactors, and secondarily to differences in target mass.
}
\label{tab:IBDsummary}
\end{table*}
\subsubsection{PMT Flashes}
\label{sec:PMTflash}
PMT flashes are spontaneous emissions of light from the voltage divider of a PMT.
AD events caused by a flash from any one of the 192 20-cm PMTs were removed by requiring $Ellipse \equiv \sqrt{Quadrant^2+(q_\mathrm{max}/0.45)^2} < 1$, where $q_\mathrm{max}$ is the largest fraction of an AD event's total charge in a single PMT and $Quadrant$ is defined as $Q_3/(Q_2+Q_4)$ in which $Q_i$ is the total charge in AD azimuthal quadrant $i$ and quadrant 1 is approximately centered on the PMT with $q_\mathrm{max}$.
The efficiency of this criterion to select IBDs in the combined GdLS plus LS volume was estimated with Monte Carlo simulation~\cite{DYB_CPC} to be $>$ 99.99\%.
Flashes from six 5-cm calibration PMTs~\cite{DYB_Det} near the top and bottom reflectors were simply removed by requiring the charge output from each 5-cm PMT to be $<$ 100 photoelectrons.
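The $Ellipse$ criterion can be written compactly; the sketch below is a minimal implementation of the definition above, assuming the per-quadrant charges and the largest single-PMT charge fraction have already been computed for the event.

```python
import math

def is_flash(quadrant_charges, q_max_fraction):
    """Flag a 20-cm PMT flash via the Ellipse criterion.

    quadrant_charges : [Q1, Q2, Q3, Q4], total charge in each azimuthal
        quadrant, with quadrant 1 approximately centered on the PMT
        carrying the largest charge.
    q_max_fraction : largest single-PMT fraction of the event's charge.
    Returns True if Ellipse >= 1, i.e., the event is removed.
    """
    _, q2, q3, q4 = quadrant_charges
    quadrant = q3 / (q2 + q4)
    ellipse = math.sqrt(quadrant ** 2 + (q_max_fraction / 0.45) ** 2)
    return ellipse >= 1.0
```

A flash concentrates charge in one PMT and its surroundings, driving $q_\mathrm{max}/0.45$ (and often $Quadrant$) above unity, whereas physics events illuminate the PMTs much more uniformly.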
\subsubsection{Low-energy Criterion}
\label{sec:lowE}
AD events were required to have $E_\mathrm{rec} >$ 1.5~MeV to exclude events caused by correlated $\beta$-$\alpha$ decays from the ${}^{214}$Bi-${}^{214}$Po-${}^{210}$Pb and ${}^{212}$Bi-${}^{212}$Po-${}^{208}$Pb decay chains, which originate from naturally-occurring ${}^{238}$U and ${}^{232}$Th, respectively.
Due to the stronger scintillation quenching of $\alpha$'s relative to electrons, the 8.78-MeV $\alpha$ from the latter chain resulted in an apparent energy of $E_\mathrm{rec} =$ 1.26~MeV and the 7.68-MeV $\alpha$ from the former chain resulted in $E_\mathrm{rec} =$ 1.00~MeV.
Excluding these decays reduced the uncertainty of the total rate of accidentals by an order of magnitude.
This criterion rejected about 10\% of IBD prompt events.
\subsubsection{Muon-event Vetoes}
\label{sec:muonVetoes}
To suppress backgrounds from muon-induced spallation neutrons (Section~\ref{sec:FastN}) and long-lived spallation products such as $^9$Li and $^8$He (Section~\ref{sec:Li9}), an AD event was excluded from the analysis if it occurred within predefined veto time windows after cosmogenic muon events identified by the water shields or ADs.
Muon events from the ADs, IWS, and OWS that occurred within the 2-$\mu$s detector latency were grouped together for the accounting of all events associated with cosmogenic muons. The muon event with the earliest time in the group defined the start of the muon-veto time window.
A muon event in a water shield, referred to as a $\mu_{\mathrm{WS}}$, was defined by requiring $N_\mathrm{PMT} >$ 12 (15) in the IWS (OWS). The muon-detection efficiency of these selections was essentially 100\%, as determined relative to the ADs~\cite{muon}.
The higher threshold of the OWS in the $n$H-IBD analysis (see Table~\ref{tab:criteria}) removed correlated triggers that sometimes occurred $O$(100)~$\mu s$ after an OWS event, due to electronics noise.
These triggers were handled in the $n$Gd-IBD analysis by slightly modifying the multiple-coincidence criteria (see Section~\ref{sec:DCselection}) to have no overlap with a muon-veto time window.
An AD event that was grouped with a $\mu_{\mathrm{WS}}$ and with 20~MeV $< E_\mathrm{rec} <$ 2.5~GeV was defined as an AD muon event $\mu_{\mathrm{AD}}$. If instead, $E_\mathrm{rec} >$ 2.5~GeV, the event was defined as a showering AD muon event $\mu_{\mathrm{sh}}$. The total rate of muon events measured by each AD ($R_{\mu}$) is listed in Table~\ref{tab:IBDsummary}.
An AD event was excluded if it occurred within a veto time window of 400~$\mu$s, 800~$\mu$s, or 1~s after a $\mu_{\mathrm{WS}}$, $\mu_{\mathrm{AD}}$, or $\mu_{\mathrm{sh}}$, respectively.
The fraction of DAQ time remaining for IBD analysis after implementing these offline muon-vetoes is reported as $\varepsilon_\mu$ in Table~\ref{tab:IBDsummary}, with typical values of 79\%, 83\% and 98\% in EH1, EH2, and EH3, respectively.
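The veto logic above reduces to a simple window check per muon class. The sketch below is a minimal illustration using the $n$H-analysis window lengths; the grouping of WS/AD triggers within the 2-$\mu$s latency is omitted.

```python
def is_vetoed(t_event, muons):
    """True if an AD event at t_event (seconds) falls inside the
    offline veto window following any muon. `muons` holds
    (t_muon, kind) pairs with kind in {'WS', 'AD', 'SH'} for
    water-shield, AD, and showering AD muons, respectively.
    """
    window = {'WS': 400e-6, 'AD': 800e-6, 'SH': 1.0}  # veto lengths (s)
    return any(0.0 < t_event - t_mu < window[kind]
               for t_mu, kind in muons)
```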
\begin{figure*}[t]
\includegraphics[angle=0,width=\textwidth]{Ed_vs_Ep_6panel}
\caption{(a) Distribution of prompt {\it vs.}\ delayed reconstructed energy for all double coincidences with a maximum 50-cm separation in all near-hall ADs,
(b) total (621-day) accidental background sample (ABS) for all ADs in the near halls, (c) and (d) are the distributions of prompt {\it vs.} delayed reconstructed energy after subtracting the total ABS for the far and near halls, respectively, (e) and (f) are the reconstructed positions of all prompt events after subtracting the total ABS for the far and near halls, respectively. The sparser distribution of events at the bottoms of the ADs is due to the presence of acrylic supports below the IAV.}
\label{Fig:NF}
\end{figure*}
\subsubsection{Coincidence Time}
\label{sec:DCselection}
Correlated AD events were selected using a coincidence time window of [1, 400]~$\mu$s, which is about two times longer than the mean capture time of an IBD neutron on hydrogen in LS and about 14 times longer than that in GdLS.
Given the data recording window of 1~$\mu$s, coincidence windows were initiated 1~$\mu$s after an event to ensure distinction of prompt and delayed events. Lone events are denoted as ``singles'' and were used to construct accidental background samples (see Section~\ref{sec:ACC}). Only pairs of events, denoted as double coincidences (DCs), were used to select IBD candidates.
If more than two events occurred within [1, 400]~$\mu$s, they were excluded from further analysis.
In addition, if the first, or prompt, event of a DC occurred within [1, 400]~$\mu$s of a preceding event or muon-veto time window, the DC was excluded (this requirement was also applied to singles).
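The clustering described above can be sketched as follows (a simplified illustration: muon-veto bookkeeping is omitted, and the 1-$\mu$s lower edge of the window is guaranteed by the 1-$\mu$s readout window, so only the 400-$\mu$s upper edge is applied):

```python
def classify_coincidences(times, t_max=400e-6):
    """Cluster time-ordered AD event times (seconds, after muon vetoes)
    into singles and double coincidences. Successive events separated
    by <= t_max are grouped; clusters of more than two events are
    discarded from further analysis.
    """
    clusters, current = [], [times[0]]
    for t_prev, t in zip(times, times[1:]):
        if t - t_prev <= t_max:
            current.append(t)          # still within the coincidence window
        else:
            clusters.append(current)   # gap > 400 us closes the cluster
            current = [t]
    clusters.append(current)
    singles = [c[0] for c in clusters if len(c) == 1]
    doubles = [c for c in clusters if len(c) == 2]  # IBD candidates
    return singles, doubles

# Toy timestamps: one DC, two singles, and one discarded triple.
singles, doubles = classify_coincidences(
    [0.0, 1e-4, 0.01, 0.02, 0.0201, 0.0202, 0.1])
```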
The fraction of DAQ time remaining for IBD analysis after implementing these multiple-coincidence criteria was about 98.4\% for each AD, and is reported as $\varepsilon_m$ in Table~\ref{tab:IBDsummary}.
This multiplicity selection efficiency was derived as described in Ref.~\cite{acc}, and calculated using the duration of the coincidence time window $T_c =$~399~$\mu$s and the rate of uncorrelated single events $R_s$ ($R_s$ counts all uncorrelated events satisfying the criteria of Sections~\ref{sec:PMTflash}-\ref{sec:muonVetoes}, unlike singles, which exclude events involved in coincidences):
\begin{equation}
\label{eq:mult}
\begin{aligned}
\varepsilon_m = ~ e^{-R_sT_c}
& \left\{ e^{-(R_{s}+R_{\mu})T_{c}} \right.\\
& \left.+ \frac{R_{\mu}}{R_{s}+R_{\mu}}[1-e^{-(R_{s}+R_{\mu})T_{c}}] \right.\\
& \left.+ \frac{R_s}{R_s+R_{\mu}}e^{-R_{\mu}T_c}[1-e^{-(R_s+R_{\mu})T_c}] \right.\\
& \left.- \frac{R_s}{2R_s+R_{\mu}}e^{-R_{\mu}T_c}[1-e^{-(2R_s+R_{\mu})T_c}] \right\} .
\end{aligned}
\end{equation}
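Evaluated directly, this expression reproduces the $\varepsilon_m$ values near 0.984 listed in Table~\ref{tab:IBDsummary}. The following sketch is a term-by-term transcription of the formula, with EH1-like rates as an example:

```python
import math

def multiplicity_efficiency(R_s, R_mu, T_c=399e-6):
    """Multiplicity selection efficiency eps_m.

    R_s  : rate of uncorrelated single events (Hz)
    R_mu : muon rate (Hz)
    T_c  : coincidence window duration (s)
    """
    Rt = (R_s + R_mu) * T_c
    return math.exp(-R_s * T_c) * (
        math.exp(-Rt)
        + R_mu / (R_s + R_mu) * (1.0 - math.exp(-Rt))
        + R_s / (R_s + R_mu) * math.exp(-R_mu * T_c)
          * (1.0 - math.exp(-Rt))
        - R_s / (2.0 * R_s + R_mu) * math.exp(-R_mu * T_c)
          * (1.0 - math.exp(-(2.0 * R_s + R_mu) * T_c)))

# EH1-like inputs: R_s ~ 20 Hz, R_mu ~ 200 Hz give eps_m ~ 0.9845.
eps_m_eh1 = multiplicity_efficiency(20.0, 200.0)
```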
\subsubsection{Coincidence Distance}
\label{sec:distanceCut}
The set of DCs consisted largely of accidental coincidences (whose positions are uncorrelated throughout the detector); therefore, the spatial separation of the reconstructed positions of the prompt and delayed events $d_c$ was required to be within 50~cm.
This rejected 98\% of the accidental coincidences at a loss of 25\% of the IBDs.
Figure~\ref{Fig:NF}(a) shows the distribution of prompt energy {\it vs.}\ delayed energy for all DCs in all near-hall ADs after applying the coincidence-distance criterion.
Bands for both the 2.22-MeV $n$H and 8-MeV $n$Gd delayed events are apparent, with a large background of low-energy DCs around the $n$H band.
The clusters around 1.5 and 2.7 MeV are due to $\gamma$'s from ${}^{40}$K and ${}^{208}$Tl decays, respectively.
The bands between these clusters are dominated by the decay products of ${}^{238}$U.
The measured $n$H $\gamma$ energy was around 2.33~MeV, which is offset from the true value of 2.22~MeV because of nonlinear detector response and the calibration of the energy scale with $n$Gd events.
The $n$H delayed events were fit as described in Section~\ref{sec:Ecuts}, providing a mean and a standard deviation $\sigma$ for each AD. Delayed events were required to have $E_\mathrm{rec}$ within 3$\sigma$ ($\approx$0.42~MeV) of the mean for each AD, which excludes $\gamma$'s from ${}^{40}$K.
The accidental background from the remaining decays was effectively removed by the subtraction described in Section~\ref{sec:ACC}.
Backgrounds from correlated events are described in Section~\ref{sec:CorrBkg}.
Efficiencies and uncertainties of the IBD selection criteria are described in Section~\ref{sec:DetEff}.
\section{Accidental Background}
\label{sec:ACC}
Accidental backgrounds were caused by two uncorrelated AD events that satisfied the IBD selection criteria, and were almost entirely due to natural radioactivity in the materials around and within the detectors. The energy spectra of this background are visible below 3~MeV in Fig.~\ref{Fig:NF}(a). Because the delayed event of an $n$H IBD is from a 2.22-MeV $\gamma$, which overlaps with this background spectrum, the accidental background rate relative to the IBD rate was typically $>$ 50 times that of the $n$Gd-IBD analysis for the ADs in EH3 after applying all IBD selection criteria.
The background was estimated for each AD within each run (about 2-3 days) by constructing accidental background samples (ABSs) from the singles in a run.
An ABS was constructed by sequentially pairing singles from the first half of the run with singles from the second half of the run.
The resulting ABS consisted of $N_\mathrm{ABS-tot}$ accidentals, and after applying the remaining IBD selection criteria (distance and energy), the ABS consisted of $N_\mathrm{ABS-cut}$ accidentals.
To avoid dependence on any particular pairing, the calculation of $\varepsilon_\mathrm{ABS} \equiv N_\mathrm{ABS-cut}/N_\mathrm{ABS-tot}$ was repeated for several hundred different pairing sequences of the singles, and the Gaussian mean of the resulting distribution was taken as $\varepsilon_\mathrm{ABS}$.
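The pairing procedure can be sketched as below. This is a toy illustration: `passes_cuts` stands in for the remaining distance and energy criteria applied to an accidental pair, and the arithmetic mean is used here in place of the Gaussian fit to the distribution of pass fractions.

```python
import random
import statistics

def abs_efficiency(first_half, second_half, passes_cuts,
                   n_sequences=300, seed=1):
    """Estimate eps_ABS = N_ABS-cut / N_ABS-tot by pairing singles from
    the first half of a run with singles from the second half in many
    random orders and averaging the resulting pass fractions.
    """
    rng = random.Random(seed)
    second = list(second_half)
    fractions = []
    for _ in range(n_sequences):
        rng.shuffle(second)                      # a new pairing sequence
        pairs = list(zip(first_half, second))
        n_pass = sum(passes_cuts(a, b) for a, b in pairs)
        fractions.append(n_pass / len(pairs))
    return statistics.mean(fractions)
```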
Figure~\ref{Fig:NF}(a) shows the energy distribution of all DCs (621 days) of all near-hall ADs without applying the delayed-energy criterion, and Fig.~\ref{Fig:NF}(b) shows the energy distribution of the total ABS (621 days) of all near-hall ADs after applying the coincidence-distance criterion.
Each ABS was scaled to a calculated number of accidentals ($N_\mathrm{Acc}$) and subtracted from its corresponding number of DCs ($N_\mathrm{DC}$) to obtain the energy distribution of correlated DCs ($N_\mathrm{Cor}$), which are dominantly due to IBDs:
\begin{equation}
\label{eq:sub}
\begin{aligned}
& \left. N_\mathrm{Cor} = N_\mathrm{DC} - N_\mathrm{Acc}, \right. \\
& \left. N_\mathrm{Acc} \equiv R_\mathrm{Acc} \cdot T_\mathrm{DAQ}\cdot\varepsilon_{\mu}\cdot\varepsilon_\mathrm{ABS}
, \right.
\end{aligned}
\end{equation}
where $T_\mathrm{DAQ}$ is the DAQ time, $\varepsilon_{\mu}$ is the muon-veto efficiency, and $R_\mathrm{Acc}$ is the rate of coincidence of uncorrelated single events, which is expressed as~\cite{acc}
\begin{equation}
\label{eq:Acc}
\begin{aligned}
R_\mathrm{Acc}
& \left. = R_s^2 \cdot T_c \cdot \varepsilon_m \right. \\
& \left. \approx R_s \cdot e^{-R_s T_c} \cdot R_sT_c e^{-R_sT_c}, \right.
\end{aligned}
\end{equation}
where $R_s$ is the rate of uncorrelated single events and $\varepsilon_m$ is the multiplicity selection efficiency, both defined in Eq.~(\ref{eq:mult}).
The approximation of Eq.~(\ref{eq:mult}) used in the second line ($\varepsilon_m \approx e^{-R_s T_c} \cdot e^{-R_sT_c}$) results from the condition $(R_s + R_{\mu})T_c \ll 1$ and is valid to within 0.1\% for $T_c =$ 399~$\mu$s, $R_s =$ 20~Hz, and the $R_\mu$ in Table~\ref{tab:IBDsummary}.
This approximation is not used in this analysis, but is shown here to illustrate the basic components of the calculation: $e^{-R_sT_c}$ is the probability of no prior event within $T_c$ and
$R_sT_c e^{-R_sT_c}$ is the probability of a subsequent event within $T_c$.
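The sub-0.1\% validity of this approximation can be checked directly. The sketch below evaluates the exact rate $R_s^2 T_c \varepsilon_m$ (with the full multiplicity-efficiency expression) against the approximate form for EH1-like inputs:

```python
import math

T_c, R_s, R_mu = 399e-6, 20.0, 200.0

# Full multiplicity-efficiency expression (as given above).
Rt = (R_s + R_mu) * T_c
eps_m = math.exp(-R_s * T_c) * (
    math.exp(-Rt)
    + R_mu / (R_s + R_mu) * (1.0 - math.exp(-Rt))
    + R_s / (R_s + R_mu) * math.exp(-R_mu * T_c) * (1.0 - math.exp(-Rt))
    - R_s / (2.0 * R_s + R_mu) * math.exp(-R_mu * T_c)
      * (1.0 - math.exp(-(2.0 * R_s + R_mu) * T_c)))

R_acc_exact = R_s ** 2 * T_c * eps_m               # R_Acc = R_s^2 T_c eps_m
R_acc_approx = (R_s * math.exp(-R_s * T_c)         # no prior event in T_c
                * R_s * T_c * math.exp(-R_s * T_c))  # one subsequent event
rel_diff = abs(R_acc_exact - R_acc_approx) / R_acc_exact
```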
$N_\mathrm{DC}$, $N_\mathrm{Acc}$, and $N_\mathrm{Cor}$ are listed for each AD in Table~\ref{tab:IBDsummary}.
Figure~\ref{Fig:NF}(d) shows the energy distribution of $N_\mathrm{Cor}$ for all near-hall ADs [Fig.~\ref{Fig:NF}(c) shows $N_\mathrm{Cor}$ for the far-hall ADs], where the $n$H $\gamma$ peak is cleanly isolated from the accidental-dominated DCs shown in Fig.~\ref{Fig:NF}(a).
The effectiveness of the subtraction is also illustrated in Fig.~\ref{Fig:Ed}, which shows the energy spectrum of the delayed events after subtracting the accidental background for all near-hall ADs and all far-hall ADs.
\begin{figure}[!b]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{Ed_nearfar}
\caption{Reconstructed delayed-energy distribution after subtracting the accidental background for all four ADs in EH3 (black) and all four ADs in EH1 and EH2 (red), where the far-hall spectrum has been normalized to the area of the near-halls spectrum. (621 days of data.)}
\label{Fig:Ed}
\end{center}
\end{figure}
Both the $n$H and $n$Gd peaks are very similar between the two groups of ADs.
Figures~\ref{Fig:NF}(e) and \ref{Fig:NF}(f) show the reconstructed positions of $N_\mathrm{Cor}$ prompt events after subtracting the accidental background for all ADs in the far and near halls, respectively. The positions are generally uniform throughout the GdLS and LS volumes. The smaller concentration of events in the GdLS volume ($r^2 < 2.40$~m$^2$ and $|z| < 1.50$~m) is due to the greater fraction of neutron-captures on Gd.
The uncertainty of $N_\mathrm{Cor}$ is composed of the statistical uncertainties of $N_\mathrm{DC}$ and $N_\mathrm{ABS-cut}$,
and the systematic uncertainty of $R_\mathrm{Acc}$, which is determined by the uncertainty of $R_s$. The uncertainty from $\varepsilon_m$ was negligible: using Eq.~(\ref{eq:mult}) and $R_s=40$~Hz, $R_{\mu}=200$~Hz, and $T_c=399~\mu$s (which are conditions similar to those in EH1), $d\varepsilon_m = 3\times10^{-6} d{R_{\mu}} - 6\times10^{-3} d{R_s}$.
By taking the average over a run, the induced systematic uncertainty from variations in $R_s$ or $R_{\mu}$ was negligible.
$R_s$ was estimated as the average of an upper and lower limit. The upper limit was derived from the total number of AD events after applying muon-event vetoes. These events were dominantly singles but included DCs and multiple coincidences. The lower limit was derived from the number of singles plus DCs that did not satisfy the coincidence-distance criterion. These DCs were dominantly accidentals.
Time-averaged values of $R_{s}$ are listed in Table~\ref{tab:IBDsummary} for each AD.
The difference between the two limits was assigned as the systematic uncertainty of $R_{s}$ and propagated to $R_\mathrm{Acc}$, resulting in 0.18\%, 0.16\% and 0.05\% uncertainties of the accidental rate in EH1, EH2, and EH3, respectively. The larger uncertainties for the near halls are due to the higher rates of IBD reactions from reactor antineutrinos, which enlarged the upper limits.
Figure~\ref{Fig:SingleRate} shows $R_{s}$ as a function of time for each AD, where a downward trend began after the water shields were filled.
\begin{figure}[!b]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{singlesRate}
\caption{Rate of uncorrelated single events {\it vs.} time for each AD. Rates stabilized several months after water shields were filled (EH3 was filled less than a month before data-recording began).}
\label{Fig:SingleRate}
\end{center}
\end{figure}
During the first few weeks, $R_s$ decreased by $<$~0.05 Hz per day for near-hall ADs and by $<$~0.08 Hz per day for far-hall ADs. The near-hall water shields were filled earlier, so the AD rates stabilized earlier. Considering that $R_s$ was calculated every 2-3~days, the uncertainty introduced to $R_\mathrm{Acc}$ by these trends was estimated to be $< 2\times10^{-5}$, which is more than an order of magnitude smaller than the uncertainty in EH3.
There were also instantaneous increases of $R_s$, which were caused by muon-generated spallation products such as $^9$Li and $^8$He (Section~\ref{sec:Li9}), and spallation neutrons (Section~\ref{sec:FastN}). From a study of $R_s$ {\it vs.} time after muon-event vetoes, the impact of these products was estimated to be negligible.
Two methods were used to validate the subtraction of the accidental background.
The first method used the distribution of distance between the prompt and delayed events, which was dominated by accidental coincidences at large separations. After subtracting the accidental background, the resulting number of correlated DCs with large separations is expected to be zero.
Figure~\ref{Fig:DistIBD} shows the distribution of distance between the prompt and delayed events for DCs, accidentals, and correlated DCs.
\begin{figure}[!b]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{distanceDistrAccValid}
\caption{Distributions of the distance between the prompt and delayed events of all measured double coincidences and of the predicted accidental backgrounds (black points) in the far hall (top panel) and near halls (middle panel). The bottom panel shows the distance distributions after subtracting the accidental backgrounds for the near halls (blue) and the far hall (red). See the text for details.
}
\label{Fig:DistIBD}
\end{center}
\end{figure}
The two upper panels of Fig.~\ref{Fig:DistIBD} contain calculations of the relative difference between the measured number of double coincidences ($N_\mathrm{DC}$) and the predicted number of accidentals ($N_\mathrm{Acc}$), beyond 200~cm. These differences are consistent with zero with respect to their statistical uncertainties.
A constant fit in the bottom panel also shows that the distribution of selected $n$H IBD candidates ($N_\mathrm{Cor}$) beyond 200~cm is consistent with an expected fraction of about 0.05\%, which was determined from Monte Carlo simulation. This fraction corresponds to an expected fitted constant of about 0 (3) entries/2~cm for the far (near) hall(s).
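The subtraction and tail check can be sketched as follows, with synthetic placeholder histograms (the bin contents are illustrative only; for equal-weight bins, the least-squares constant fit is simply the mean):

```python
import numpy as np

# Toy sketch of the validation described above: subtract the predicted
# accidentals from the measured double coincidences beyond 200 cm and fit
# a constant to the residual. Histogram contents are synthetic placeholders.
rng = np.random.default_rng(1)
edges = np.arange(200, 500, 2)                 # cm, 2-cm bins beyond 200 cm
n_dc = rng.poisson(50.0, size=len(edges))      # measured DCs (toy)
n_acc = np.full(len(edges), 50.0)              # predicted accidentals (toy)
n_cor = n_dc - n_acc                           # residual after subtraction
const = n_cor.mean()                           # constant fit = mean of bins
err = n_cor.std(ddof=1) / np.sqrt(len(n_cor))  # its statistical uncertainty
consistent_with_zero = abs(const) < 3.0 * err
```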
The subtraction of the accidental background was also validated by the distribution of time between prompt and delayed events.
Figure~\ref{Fig:Time} shows the distribution of time between prompt and delayed events for DCs, accidentals, and correlated DCs.
\begin{figure}[!b]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{timeDistrAccValid}
\caption{Distributions of the time between the prompt and delayed events of all measured double coincidences and of the predicted accidental backgrounds (black points) in the far hall (top panel) and near halls (middle panel). The bottom panel shows the time distributions after subtracting the accidental backgrounds for the near halls (blue) and the far hall (red). See the text for details. }
\label{Fig:Time}
\end{center}
\end{figure}
The two upper panels of Fig.~\ref{Fig:Time} contain calculations of the relative difference between the measured number of double coincidences ($N_\mathrm{DC}$) and the predicted number of accidentals ($N_\mathrm{Acc}$), beyond 1000~$\mu$s. These differences are consistent with zero with respect to their statistical uncertainties.
A constant fit in the bottom panel also shows that the distribution of selected $n$H IBD candidates ($N_\mathrm{Cor}$) beyond 1000~$\mu$s is consistent with an expected fraction of 0.7\%, which was determined from Monte Carlo simulation. This fraction corresponds to an expected fitted constant of about 16 (110) entries/10~$\mu$s for the far (near) hall(s).
\section{Correlated Backgrounds}
\label{sec:CorrBkg}
After the accidental background was subtracted to obtain $N_\mathrm{Cor}$, correlated backgrounds were subtracted to obtain the number of measured $n$H IBDs ($N_\mathrm{IBD}$). In EH3 (EH1), $N_\mathrm{IBD}/N_\mathrm{Cor} =$~99.2\% (99.0\%). Correlated backgrounds consist of prompt and delayed events that originate from a single source and satisfy the IBD selection criteria. These backgrounds are primarily from cosmogenic muon-induced $^9$Li/$^8$He isotopes and spallation neutrons, and neutrons from $^{241}$Am-$^{13}$C calibration sources interacting with the SSV and its appendages.
The $^{13}$C($\alpha$,n)$^{16}$O background is less significant for the $n$H-IBD analysis than for the $n$Gd-IBD analysis and is briefly discussed.
\subsection{ $^9$Li/$^8$He Background}
\label{sec:Li9}
Cosmogenic muons and their spallation products interact with the $^{12}$C in organic liquid scintillators, producing neutrons and isotopes via hadronic or electromagnetic processes. Among the muon-induced isotopes, $^9$Li and $^8$He $\beta^-$-decay to neutron-unstable excited states, immediately followed by the ejection of a neutron. These $\beta^-$-neutron decays mimic the prompt and delayed events of IBD reactions.
The lifetimes of $^9$Li and $^8$He (257.2 and 171.7~ms, respectively) are longer than the muon-veto windows for a $\mu_{\mathrm{WS}}$ or $\mu_{\mathrm{AD}}$ (see Section~\ref{sec:EventSelect}), leading to a contamination of the IBD candidate sample. The temporal relation between $^9$Li/$^8$He decays and prior detected muons was used to estimate the collective yield of the $^9$Li and $^8$He background $N_\mathrm{Li/He}$ in each hall. The distribution of the time between the prompt event of a DC and its preceding muon was described by a formula following Ref.~\cite{Li}:
\begin{equation}
\label{eq:Equ_Li9He8}
\begin{aligned}
N(t)~=~ & N_\mathrm{Li/He} \left[ r\cdot\lambda_\mathrm{Li}\cdot e^{-\lambda_\mathrm{Li}t}+(1-r)\cdot\lambda_\mathrm{He}\cdot e^{-\lambda_\mathrm{He}t} \right] \\
& + N_\mathrm{BB}\cdot \lambda_\mathrm{BB}\cdot e^{-\lambda_\mathrm{BB}t} \\
& + N_{\mathrm{DC}\cancel{\mu}}\cdot R_{\mu}\cdot e^{-R_{\mu}t},
\end{aligned}
\end{equation}
where $\lambda_\mathrm{isotope} \equiv R_{\mu}+1/\tau_\mathrm{isotope}$ and $\tau_\mathrm{isotope}$ is the lifetime of the specific isotope ($^9$Li or $^8$He), $R_{\mu}$ is the muon rate (which depends on the muon selection criteria), $r$ is the fraction of $^9$Li decays among the $^9$Li and $^8$He decays, $\lambda_\mathrm{BB} \equiv R_{\mu}+2/\tau_\mathrm{B}$, and $N_\mathrm{BB}$ and $N_{\mathrm{DC}\cancel{\mu}}$ are the numbers of $^{12}$B-$^{12}$B coincidences and all other double coincidences (excluding those from cosmogenically-produced isotopes), respectively.
The beta-decaying isotope $^{12}$B was produced with a yield about one order of magnitude greater than the combined yield of $^9$Li and $^8$He. With its lifetime of $\tau_\mathrm{B}~\approx~$29~ms, double coincidences of $^{12}$B-$^{12}$B originating from a single muon contributed mainly within the first $\approx$50~ms of the time-since-preceding-muon distribution.
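For illustration, the model of Eq.~(\ref{eq:Equ_Li9He8}) can be written compactly; the lifetimes are those quoted in the text, while all other parameter values passed to the function are hypothetical:

```python
import numpy as np

# Sketch of the fit model for the time-since-preceding-muon distribution.
# Lifetimes (s) are from the text; fit-parameter values are hypothetical.
TAU_LI, TAU_HE, TAU_B = 0.2572, 0.1717, 0.029

def n_of_t(t, n_lihe, r, n_bb, n_dc, r_mu):
    """Sum of the 9Li/8He, 12B-12B, and uncorrelated-DC terms; t in s."""
    lam_li = r_mu + 1.0 / TAU_LI
    lam_he = r_mu + 1.0 / TAU_HE
    lam_bb = r_mu + 2.0 / TAU_B
    return (n_lihe * (r * lam_li * np.exp(-lam_li * t)
                      + (1.0 - r) * lam_he * np.exp(-lam_he * t))
            + n_bb * lam_bb * np.exp(-lam_bb * t)
            + n_dc * r_mu * np.exp(-r_mu * t))
```

A binned fit of this function to the observed distribution yields $N_\mathrm{Li/He}$.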
The fitted value of $N_\mathrm{Li/He}$ changed by up to 10\% when including and excluding the $^{12}$B term.
The $^9$Li fraction $r$ could not be reliably determined because of the similar lifetimes of $^9$Li and $^8$He. Measurements of $^9$Li and $^8$He yields from Ref.~\cite{KamLANDisotopes} indicate that $r$ should be between roughly 85\% and 100\% at Daya Bay. Varying $r$ in this range resulted in a 4\% variation in the fitted value of $N_\mathrm{Li/He}$ in all halls.
To obtain a better estimate of $N_\mathrm{Li/He}$, $N_{\mathrm{DC}\cancel{\mu}}$ was reduced by suppressing accidentals among the double coincidences. This was done by augmenting the prompt-energy criterion from 1.5 $<E_{p}<$ 12.0 MeV to 3.5 $<E_{p}<$ 12.0 MeV. The measured number of $^9$Li/$^8$He was corrected with the efficiency of the augmented criterion with respect to the nominal criterion.
This ratio was determined to be 74\% by averaging measurements from all three halls with visible muon energy $E^\mathrm{vis}_{\mu}>$ 1~GeV ($E^\mathrm{vis}_{\mu}$ is the detected energy that was deposited by a muon traversing the detector).
The weighted average of the three measurements had a statistical uncertainty of 5\%.
The systematic uncertainty was estimated as the difference between the average and a Monte Carlo simulation, and therefore accounted for backgrounds in the measurements.
The simulation used $\beta$ spectra of $^9$Li/$^8$He decays calculated as those in Ref.~\cite{DYB3}. The resulting prompt-energy spectrum from the simulation is shown in Fig.~\ref{Fig:DCbkgSpectra}, where it has been normalized to $N_\mathrm{Li/He}$.
The difference in efficiency between the measurement and simulation was 6\%, giving a total uncertainty of 8\% for the efficiency of the augmented $E_p$ criterion.
The $^9$Li/$^8$He background was determined for three ranges of $E^\mathrm{vis}_{\mu}$: 0.02-1.0~GeV, 1.0-2.5~GeV, and $>$~2.5~GeV.
The highest energy range was defined as such because it identically defines a $\mu_\mathrm{sh}$, which was vetoed for 1~s (see Table~\ref{tab:criteria}) and therefore contributed only $O$(1)\% of the total $^9$Li/$^8$He background.
The lowest energy range was defined as such because it could not provide a reliable fit of $^9$Li/$^8$He due to its
higher $R_{\mu}$ and lower signal-to-background ratio: relative to the middle energy range, $R_{\mu}$ was 14 (11) times greater and $N_\mathrm{Li/He} / N_{\mathrm{DC}\cancel{\mu}}$ was about 5 (10) times lower, in EH1 (EH3).
To obtain a more reliable estimate of the $^9$Li/$^8$He background of the lowest energy range, $R_{\mu}$ was reduced and the signal-to-background ratio was increased, by isolating the muons that produced $^9$Li/$^8$He. Under the assumption that the isotopes were produced along with neutrons, every $\mu_\mathrm{AD}$ without a subsequent neutron (defined as a 1.8-12~MeV event within 20-200 $\mu$s) was excluded.
The measured number of $^9$Li/$^8$He was corrected with the efficiency of this altered $\mu_\mathrm{AD}$ definition with respect to the nominal definition.
Since this ratio could not be determined for the lowest energy range, the ratio for the middle energy range was used as a proxy.
This ratio was determined to be about 69\% (66\%) in the far (near) hall(s).
A 100\% uncertainty was assigned to the background for the lowest energy range, corresponding to a 1$\sigma$ lower bound of 35\% (33\%) for the efficiency of the altered $\mu_\mathrm{AD}$ definition in the far (near) hall(s).
The number of $^9$Li/$^8$He for both the middle and lowest energy ranges in EH1 and EH2 were determined with the combined data samples of EH1 and EH2.
The energy spectra of muons in EH1 and EH2 are similar~\cite{muon} such that their yields of $^9$Li/$^8$He per muon are expected to agree to $O$(1)\%~\cite{KhalchukovPowerLaw,HagnerPowerLaw}.
The $E^\mathrm{vis}_{\mu}$ spectra of the two near halls were observed to differ in scale by about 7\%.
This was due to a 7\% lower average gain of the high-charge range~\cite{FEE2} of the EH2 electronics.
After scaling the $E^\mathrm{vis}_{\mu}$ spectrum of EH2 by 7\%, the difference between the near-hall spectra was $O$(1)\% across the two energy ranges.
This scaling introduced a negligible uncertainty to the fitted number of $^9$Li/$^8$He.
The muon rate $R_{\mu}$ of the combined fit was fixed to the DC-weighted average of the measured muon rates in the two near halls.
Combining the uncertainties of the measured muon rates (0.3\%) and numbers of DCs (1\%), the weighted average had a 0.2\% uncertainty.
This 0.2\% uncertainty of $R_{\mu}$ corresponded to a 27\% change in the number of $^9$Li/$^8$He via Eq.~(\ref{eq:Equ_Li9He8}) for the middle energy range.
The 0.2\% uncertainty had a negligible impact on the lowest energy range because its muon rate was reduced as described above.
The fitted number of $^9$Li/$^8$He was divided among the near halls according to their measured muon rates (after scaling EH2) multiplied by their DAQ times.
Examples of fits to the time since the preceding muon without the $^{12}$B term for $E^\mathrm{vis}_{\mu}>$ 1.0~GeV are shown in Fig.~\ref{fig:Li9fit}.
The green areas represent the noncosmogenic DCs and the red areas represent the $^9$Li/$^8$He DCs. For presentation purposes, the plots use wider bins than the actual fits.
\begin{figure}[!h]
\centering
\includegraphics[width=\columnwidth]{Li9_fit}
\caption{Examples of fits of the time since the preceding muon in EH1+EH2 (top) and EH3 (bottom) for $E^\mathrm{vis}_{\mu}>$ 1.0~GeV. The green area is the noncosmogenic double-coincidence component and the red area is the $^9$Li/$^8$He component.}
\label{fig:Li9fit}
\end{figure}
Uncertainties were from statistics, the $^9$Li fraction $r$, the contribution of $^{12}$B, the augmented $E_p$ selection criterion, the altered $\mu_\mathrm{AD}$ definition for the lowest energy range, and binning effects.
The total uncertainty of the $^9$Li/$^8$He background was determined from the combination of all components of uncertainty, and was dominated by statistical uncertainty.
Table~\ref{tab:IBDsummary} lists the determined rate of background DCs due to $^9$Li/$^8$He in each hall. The rate was calculated by dividing the estimated $N_\mathrm{Li/He}$ by $T_\mathrm{DAQ}\varepsilon_{\mu}\varepsilon_m$ and correcting for the efficiencies of the altered definitions of the $E_{p}$ and $\mu_\mathrm{AD}$ criteria.
Since the $n$H- and $n$Gd-IBD analyses used different data samples, and the efficiencies were determined with distinct methods, there was no correlation of the $^9$Li/$^8$He background determinations between the $n$H- and $n$Gd-IBD analyses.
\subsection{Fast-neutron Background}
\label{sec:FastN}
In addition to producing radioactive isotopes such as $^9$Li and $^8$He, cosmogenic muon interactions can generate energetic neutrons via spallation.
Upon reaching an AD, a neutron may scatter off a proton and then capture on hydrogen, creating a prompt-delayed coincidence.
Given the high efficiency with which $\mu_{\mathrm{WS}}$'s are detected, the neutrons that contribute to this background predominantly originate from the rock surrounding an OWS.
Because the LS volume is more accessible than the GdLS volume to the externally-produced neutrons, this background is significantly higher than for the $n$Gd-IBD analysis.
A Monte Carlo simulation of neutrons induced from muons in the water shields was performed.
An empirical parameterization for neutron production from cosmogenic muons~\cite{Spallation_Neutron_Production} and the estimated average muon energy in an experimental hall~\cite{muon} were used to generate the initial kinetic energy and zenith angle distributions of the neutrons.
The resulting prompt-energy spectra of the simulated neutrons are shown in Fig.~\ref{Fig:fastN_MC}.
\begin{figure}[!b]
\begin{center}
\includegraphics[width=\columnwidth]{spalln_sim}
\caption{Simulated prompt-recoil-energy spectra of spallation neutrons produced in the IWS or OWS by cosmogenic muons. See text for details.}
\label{Fig:fastN_MC}
\end{center}
\end{figure}
The increase in the number of events with decreasing energy in the LS volume reflects the poorer containment of the recoil protons: protons that recoil from fast neutrons capturing in the LS volume lie closer to the boundary of the scintillating region than those associated with captures in the GdLS volume, and are therefore more likely to deposit less energy in scintillator.
To determine the fast neutron background spectrum, a sample of spallation neutrons was obtained by slightly modifying the nominal IBD selection criteria: the upper prompt-energy criterion was removed and the OWS muon-event veto was excluded.
Muons identified with the IWS were still vetoed to avoid confusing a spallation neutron with a muon event in an AD.
In addition, the prompt event was required to occur within 300~ns after an OWS-identified muon and the delayed event at least 15~$\mu$s after the muon to exclude muon decays. The OWS-identified muon events were required to occur at least 1200~$\mu$s after any muon events in an AD or the IWS.
The prompt recoil-energy spectrum of the OWS-identified spallation neutrons from EH1 is shown in Fig.~\ref{Fig:fastNdata}.
Figure~\ref{Fig:fastNdata} also shows the prompt-energy distribution of IBD candidates without the upper $E_{p}$ criterion and the spectrum obtained from the simulation. The OWS-identified and simulated spectra were normalized to the IBD candidates above 12~MeV, revealing consistent shapes.
\begin{figure}[!b]
\begin{center}
\includegraphics[width=\columnwidth]{spalln_fit}
\caption{Reconstructed prompt recoil-energy spectra of fast spallation neutrons from IBD candidates in EH1 with the upper $E_p$ limit removed (black line), OWS-identified muons (blue points), and simulation (red points).
The latter two spectra were normalized to the area of the extended IBD spectrum.
The green curve is the fit of the extended IBD spectrum using a first-order power law (see the text).
The inset is a log-log scaling of the plot. }
\label{Fig:fastNdata}
\end{center}
\end{figure}
Plotting the prompt recoil-energy spectrum in a log-log scale (see the inset of Fig.~\ref{Fig:fastNdata}) shows that the low-energy portion of the spectrum up to several tens of MeV is consistent with a power law [$N(E)=N_0E^{-a}$], while there is a distinct energy-dependence at higher energies. The entire spectrum could be fit after adding one degree of freedom to the power law; namely, extending the exponent to have a first-order dependence on energy:
\begin{equation}
\label{eq:powerLaw}
N(E) = N_0 \left(\frac{E}{E_0}\right)^{-a-\frac{E}{E_0}}.
\end{equation}
The fit of Eq.~(\ref{eq:powerLaw}) resulted in a $\chi^2$ per degree of freedom close to 1 for each hall. Bin widths of 2~MeV were selected for the near halls based on the stability of the fit parameters and the $\chi^2$ per degree of freedom. Due to the lower statistics of EH3, the corresponding bin width was 3~MeV. The value of $a$ was consistent among the three halls, yielding an average of 0.690~$\pm$~0.023. The value of $E_0$ averaged to (101.7~$\pm$~2.1)~MeV for the near halls and was (110~$\pm$~10)~MeV for the far hall.
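As an illustration of the fit of Eq.~(\ref{eq:powerLaw}), the following sketch fits synthetic, noiseless data generated from parameter values near the fitted ones; all numbers are illustrative, not the measured spectra:

```python
import numpy as np
from scipy.optimize import curve_fit

# Sketch of the fast-neutron spectrum fit of Eq. (powerLaw): a power law
# whose exponent depends linearly on energy. Data are synthetic.
def spectrum(e, n0, a, e0):
    return n0 * (e / e0) ** (-a - e / e0)

e = np.arange(13.0, 300.0, 2.0)            # MeV, above the IBD prompt window
y = spectrum(e, 1.0e4, 0.69, 101.7)        # noiseless synthetic spectrum
popt, pcov = curve_fit(spectrum, e, y, p0=[1.0e4, 0.7, 100.0])
n0_fit, a_fit, e0_fit = popt
```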
The fast neutron background and its uncertainty were both estimated as in Ref.~\cite{nGd8AD}. The background was estimated as the number of events within the nominal prompt-energy selection window (1.5 $< E_\mathrm{rec} <$ 12~MeV) in the normalized OWS-identified spectrum of each hall. The spectrum was normalized to the extended IBD spectrum from all the ADs in a hall, between 12 and 300~MeV.
The systematic uncertainty was estimated using both the OWS-identified and extended IBD spectra. First, the extended IBD spectrum of each hall was fit between 12 and 300~MeV with the power law given in Eq.~(\ref{eq:powerLaw}). Then, the difference was taken between the integral of the function and the number of events in the normalized OWS-identified spectrum, with $E_\mathrm{rec}$ between 1.5 and 12~MeV. The largest relative difference among the three halls (6\% in EH3) was assigned to be the systematic uncertainty for each hall. In addition, each hall had a distinct fit uncertainty, which included the statistical uncertainty and was about 6\%, 7\%, and 18\% for EH1, EH2, and EH3, respectively.
The results are listed for each experimental hall in Table~\ref{tab:IBDsummary}.
There was no significant correlation between the $n$H- and $n$Gd-IBD fast neutron analyses because of their different selection criteria and independent event samples.
\subsection{Am-C Calibration Source Background}
One of the calibration sources deployed from each of the three ACUs atop an AD was an $^{241}$Am-$^{13}$C neutron source with a detected rate of 0.7~Hz~\cite{AmC}. Neutrons from these sources could inelastically scatter with the nuclei in the surrounding steel (SSV, ACU enclosures, {\it etc.}) and then capture on Fe, Cr, Ni, or Mn within the steel, producing $\gamma$'s that could enter the scintillating regions and satisfy the IBD selection criteria.
During the pause to install the final two ADs in the summer of 2012, two of the three Am-C sources were removed (from ACU-B and -C) from each AD in EH3, reducing this background in EH3 by about 40\% relative to the previous analysis~\cite{DYB_nH}.
This background was estimated using a special Am-C source~\cite{AmCbkgd} whose neutron emission rate was approximately 80 times that of the Am-C calibration sources.
The special source was positioned on the top of EH3-AD2 near ACU-B for about 10 days during the summer of 2012.
Figure~\ref{Fig:LongPaper_AmC_1} shows the resulting distribution of the reconstructed vertical position of delayed events, which exhibits an excess at positive $z$ (the top half of the AD).
For comparison, the distribution from the adjacent EH3-AD1 (which had only an Am-C calibration source in ACU-A) is shown over the same period, exhibiting no apparent asymmetry.
The distributions of the vertical position of prompt events are similar.
\begin{figure}[!t]
\begin{center}
\includegraphics[width=\columnwidth]{AmC}
\caption{Distribution of vertical position of delayed events for EH3-AD2 with both its Am-C calibration source and the special Am-C source (solid blue line), and EH3-AD1 with only its Am-C calibration source (dashed red line). All sources were located at the tops of the detectors: $z \approx$~2.5~m. }
\label{Fig:LongPaper_AmC_1}
\end{center}
\end{figure}
The number of background DCs from the special Am-C source $N_{\mathrm{Special}}$ was estimated by subtracting $N_\mathrm{DC}$ of EH3-AD1 from $N_\mathrm{DC}$ of EH3-AD2 during the same period, resulting in $N_{\mathrm{Special}}$ = 137~$\pm$~41.6. The vertical positions of both the prompt and delayed events were required to be in the top half of each AD ($z_p>0$ and $z_d>0$).
The intensity of the special Am-C source was scaled to the intensities of the Am-C calibration sources of each AD using ``delayed-type'' events, which are defined as singles that satisfy the delayed-energy criterion.
The relatively low energy of the $n$H $\gamma$ selection admitted significant radioactive contamination into this sample of events. To avoid this contamination, the higher-energy $n$Gd delayed-type events were used.
In Ref.~\cite{AmCbkgd}, the number of $n$Gd delayed-type events due to an Am-C source $[N_{\mathrm{AmC-dtype}}]_{n\mathrm{Gd}}$ was estimated by the asymmetry of the vertical position distribution, which was similar to that in Fig.~\ref{Fig:LongPaper_AmC_1}.
The number of background DCs from each Am-C calibration source $N_{\mathrm{AmC}}$ was estimated as
\begin{equation}
\label{eq:AmC_corr_cal_edit}
N_{\mathrm{AmC}} = N_{\mathrm{Special}}\left[\frac{N_{\mathrm{AmC-dtype}}}{N_{\mathrm{Special-dtype}}}\right]_{n\mathrm{Gd}}
\end{equation}
where $N_{\mathrm{AmC-dtype}}$ is counted over the entire 621-day data period.
The $n$Gd ratio in Eq.~(\ref{eq:AmC_corr_cal_edit}) was about 0.12 for the far hall and 0.23 for the near halls.
The uncertainty of $N_{\mathrm{AmC}}$ comprised the 30\% statistical uncertainty of $N_{\mathrm{Special}}$ and an approximate 40\% systematic uncertainty shared with the $n$Gd-IBD analysis from a difference in delayed-type event rates among the near- and far-hall ADs.
This gives a total uncertainty of 50\% for the Am-C background.
Table~\ref{tab:IBDsummary} lists the rate of Am-C background DCs, which is $N_\mathrm{AmC}$ divided by $T_\mathrm{DAQ}\varepsilon_{\mu}\varepsilon_m$, for each AD. The prompt-energy spectrum of the Am-C background was modeled with an exponential, which was determined from both the simulation and the data with the special Am-C source. The spectrum is shown in Fig.~\ref{Fig:DCbkgSpectra}.
For the $n$Gd-IBD analysis, this background had a 45\% total uncertainty. Considering the common 40\% systematic uncertainty, the Am-C background determination was found to have a correlation coefficient of about 0.7 between the $n$H- and $n$Gd-IBD analyses:
\begin{equation}
\label{eq:AmCcorrelation}
\frac{40\% \cdot 40\%}{50\% \cdot 45\%} = 0.7.
\end{equation}
\subsection{$^{13}$C($\alpha$,n)$^{16}$O Background}
\label{sec:alphan}
The $^{13}$C($\alpha$,n)$^{16}$O background is from four dominant sources of alpha decays in the liquid scintillator: the $^{227}$Ac (in the GdLS), $^{238}$U, and $^{232}$Th decay chains and $^{210}$Po, which is produced in the decay of $^{222}$Rn. The ($\alpha$, n) background rate was roughly estimated using the rates from the $n$Gd-IBD analysis~\cite{nGd8AD} and the ratio of the $n$H/$n$Gd IBD selection efficiencies.
The estimate in EH3 was approximately 0.02~$\pm$~0.01 DCs per AD per day.
This estimate is expected to be conservative because of the lower activity of the LS relative to the GdLS: using the selection criteria outlined in Ref.~\cite{DYB_CPC},
the concentration of $^{232}$Th was determined to be a few hundred times greater in the GdLS while that of $^{238}$U was estimated to be similar.
The uncertainty of the $^{13}$C($\alpha$,n)$^{16}$O background contributed negligibly to the total uncertainty of $\sin^2$2$\theta_{13}$ (see Table~\ref{tab:errorBudget}) and therefore, this background was neglected in this analysis.
\subsection{Summary of Correlated Backgrounds}
The rates of the correlated backgrounds are summarized in Table~\ref{tab:IBDsummary} and their prompt-energy distributions are illustrated in Fig.~\ref{Fig:DCbkgSpectra} for EH3.
\begin{figure}[!t]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{candidates_EH3}
\caption{Reconstructed prompt-energy distributions of the measured double coincidences after IBD selection (black points) and estimated backgrounds, for the sum of all ADs in EH3. }
\label{Fig:DCbkgSpectra}
\end{center}
\end{figure}
The rates of $n$H IBDs after subtracting all the backgrounds are listed for each AD in Table~\ref{tab:IBDsummary}.
With respect to the previous $n$H-IBD analysis~\cite{DYB_nH}, the absolute uncertainty of the dominant $^9$Li/$^8$He background was reduced by about 30\% because of increased statistics and various improvements in the method.
Reductions in the uncertainties of the fast neutron and Am-C backgrounds resulted primarily from an improved method of estimation and a fit of the full spectrum, and the removal of some Am-C sources, respectively. The overall uncertainty of backgrounds was reduced by 30\%.
Compared with the $n$Gd-IBD analysis, the fast neutron background was about four to five times larger relative to the IBD rate in EH3, while the $^9$Li/$^8$He and $^{241}$Am-$^{13}$C backgrounds were equal within uncertainties, and the $^{13}$C($\alpha$,n)$^{16}$O background was about half as large.
The absolute uncertainty of the fast neutron background was about four to five times larger relative to the IBD rate in EH3, while the uncertainties of the $^9$Li/$^8$He and $^{241}$Am-$^{13}$C backgrounds were similar,
and the uncertainty of the $^{13}$C($\alpha$,n)$^{16}$O background was about half that of the $n$Gd-IBD analysis.
The impact of the uncertainties of the background estimations on the uncertainty of $\sin^2$2$\theta_{13}$ is described at the end of Section~\ref{sec:Fit}.
Due to the sharing of uncertainty components between the $n$Gd- and $n$H-IBD analyses, the Am-C background determinations had a correlation coefficient of about 0.7, while the $^9$Li/$^8$He
and fast neutron background determinations were uncorrelated, and the $^{13}$C($\alpha$,n)$^{16}$O background was neglected in this analysis.
\section{Detection Efficiency}
\label{sec:DetEff}
The expected number of selected IBDs from one AD was determined according to Eq.~(\ref{eq:predIBD}), in which the efficiency-weighted number of target protons was calculated considering antineutrino interactions in the GdLS, LS, and acrylic volumes $v$:
\begin{equation}
\label{eq:eff}
N_{\varepsilon}= \varepsilon_{\mu}\varepsilon_m \left[\sum^\mathrm{GdLS, LS, acry.}_{v}N_{p,v}\varepsilon_{E_p,v}\varepsilon_{T,v}\varepsilon_{E_d,v}\right]\varepsilon_{D},
\end{equation}
where
$\varepsilon_{\mu}$ and $\varepsilon_m$ are the muon-veto and multiplicity selection efficiencies of the AD, $N_p$ is the number of target protons of the AD,
$\varepsilon_{E_p}$ and $\varepsilon_{E_d}$ are the prompt- and delayed-energy selection efficiencies, and $\varepsilon_{T}$ and $\varepsilon_{D}$ are the coincidence-time and -distance selection efficiencies, respectively. The PMT flash selection efficiency (Section~\ref{sec:PMTflash}) is not included due to its negligible inefficiency.
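The combination in Eq.~(\ref{eq:eff}) can be sketched as follows; all per-volume proton numbers and efficiencies below are hypothetical placeholders, not the Daya Bay values:

```python
# Sketch of the efficiency-weighted proton count of Eq. (eff).
# Each volume contributes N_p * eps_Ep * eps_T * eps_Ed; the sum is then
# scaled by the AD-wide efficiencies. All numbers are hypothetical.
def n_eff(eps_mu, eps_m, volumes, eps_d):
    """volumes: iterable of (N_p, eps_Ep, eps_T, eps_Ed) per region."""
    total = sum(n_p * e_p * e_t * e_d for (n_p, e_p, e_t, e_d) in volumes)
    return eps_mu * eps_m * total * eps_d

volumes = [(1.4e30, 0.97, 0.8, 0.5),   # GdLS (hypothetical)
           (1.5e30, 0.90, 0.9, 0.7),   # LS (hypothetical)
           (0.2e30, 0.60, 0.9, 0.6)]   # acrylic (hypothetical)
n = n_eff(0.8, 0.97, volumes, 0.75)
```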
The number of target protons was determined for each AD from measurements made prior to AD deployment.
The muon-veto, multiplicity, and distance selection efficiencies were determined with data. The prompt- and delayed-energy, and time selection efficiencies were determined with a simulation using a predicted antineutrino spectrum such as described in Section~\ref{sec:reactor}.
The simulation framework of Daya Bay is based on \textsc{Geant4}~\cite{GEANT4} and has been validated with comparisons to data~\cite{DYB_CPC}.
In comparing the IBD rates among the far hall and near halls, efficiencies and uncertainties common to all the ADs are irrelevant. The AD-uncorrelated uncertainties of the efficiencies, which reflect the uniformity of the ADs, were determined by comparing data among all eight ADs.
The uncertainties of $\varepsilon_{\mu}$ and $\varepsilon_m$ were negligible (see Section~\ref{sec:EventSelect}). The remaining quantities in Eq.~(\ref{eq:eff}) and their uncertainties, are discussed in this section.
The contribution from IBDs in the MO is described in Section~\ref{sec:spill}.
\subsection{Prompt-Energy Selection}
\label{sec:Eprompt}
The first selection criterion applied to AD events (after rejecting PMT flashes) was $E_\mathrm{rec} >$ 1.5~MeV. Ultimately, this selection affected only prompt events because of the more stringent requirement applied to delayed events. The prompt-energy selection efficiency and its uncertainty were determined with simulation in which the energy scale was aligned to that of the data (see Section~\ref{sec:Reconstruction}). The efficiency was defined as the number of IBD reactions $N$ that satisfied the prompt-energy criterion divided by the total number of IBD reactions:
\begin{equation}
\varepsilon_{E_p} = \frac{N(E_p >\ 1.5\ \mathrm{MeV})}{N_\mathrm{IBD}}.
\end{equation}
The higher-energy requirement of $E_p <$ 12~MeV was estimated to contribute negligibly to the inefficiency and uncertainty, as suggested by Fig.~\ref{Fig:DCbkgSpectra}.
The efficiency in the LS volume was lower than that in the GdLS volume because a larger fraction of the annihilation $\gamma$'s deposited energy outside the scintillating volumes. This fraction was largest for IBDs occurring in the acrylic elements. The net efficiency of all volumes was about 90\%.
The AD-uncorrelated uncertainty of the efficiency was estimated as the change in efficiency after shifting the energy scale by 0.5\%. The relative change in efficiency was about 0.1\%.
The 0.5\% shift is an estimate of the AD-uncorrelated uncertainty of the energy scale that was determined by comparing the fitted means of the $n$H-IBD $\gamma$ and $^{212}$Bi $\alpha$ peaks of all eight ADs.
For reference, the estimated uncertainty of the energy scale in the GdLS volume was 0.2\%~\cite{nGd8AD}.
\subsubsection{Variation with Baseline}
\label{sec:promptVar}
The $L/E$-dependence of neutrino oscillation [see Eq.~(\ref{eq:Psur})] implies that the shape of the neutrino energy spectrum changes with baseline $L$. Therefore, the efficiency of the prompt-energy criterion varies with baseline. The impact of this dependence on the multiple reactor-detector pairs at Daya Bay was estimated by applying oscillation to a predicted reactor antineutrino spectrum as a function of baseline. At each baseline~\cite{supp}, the IBD selection efficiency was determined with simulation samples for the GdLS, LS, and acrylic volumes.
The simulation accounted for energy deposited outside the scintillator volumes, and the nonlinearity~\cite{nGd8AD}, nonuniformity, and resolution of the detector energy-response. The oscillation parameter values were the same as those in Section~\ref{sec:reactor}. The resulting variation in the IBD selection efficiency as a function of baseline is illustrated for the LS region in Fig.~\ref{Fig:EpVSbaseline}.
The shape of the curve reflects the span of the data in the $L/E$ domain. For the near halls (smaller $L$), more oscillation occurred for lower-energy antineutrinos, which decreased the number of IBD reactions with prompt energy below threshold and thus increased the efficiency.
For illustration, the mean energy of a prompt event without oscillation was 3.626~MeV while the corresponding energy in EH1 (EH2) due to antineutrinos from the two (four) nearby reactors with oscillation was 3.630 (3.632)~MeV.
These numbers are representative of the first 4 (8) points in Fig.~\ref{Fig:EpVSbaseline}.
For the far hall (larger $L$), more oscillation occurred at median antineutrino energies and about equally at higher and lower energies, resulting in a net decrease in efficiency.
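The baseline dependence described above can be sketched numerically. The following toy illustration assumes a Gaussian-shaped antineutrino spectrum peaking near 4~MeV, a two-flavor survival probability, and illustrative oscillation parameters; it reproduces only the qualitative behavior (efficiency rising at near-hall-like baselines and falling at far-hall-like baselines), not the quantitative correction of Fig.~\ref{Fig:EpVSbaseline}.

```python
import numpy as np

def survival(E, L, s22t13=0.085, dm2=2.5e-3):
    """Two-flavor survival probability (solar term neglected); E in MeV, L in m."""
    return 1.0 - s22t13 * np.sin(1.267 * dm2 * L / E) ** 2

def prompt_eff(L):
    """Fraction of IBDs passing E_p > 1.5 MeV (roughly E_nu > 2.28 MeV) for a
    toy Gaussian antineutrino spectrum peaking near 4 MeV, after oscillation."""
    E = np.linspace(1.81, 10.0, 4000)
    w = np.exp(-0.5 * ((E - 4.0) / 1.2) ** 2) * survival(E, L)
    return w[E > 2.28].sum() / w.sum()

eps_no_osc = prompt_eff(0.0)     # L = 0: no oscillation
eps_near = prompt_eff(500.0)     # near-hall-like baseline
eps_far = prompt_eff(1650.0)     # far-hall-like baseline
```

At the near baseline the oscillation minimum falls below the prompt-energy threshold, so the sub-threshold part of the spectrum is suppressed more than the rest; at the far baseline the minimum sits mid-spectrum, reversing the sign of the effect.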
\begin{figure}[!h]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{eff_vs_baseline}
\caption{An example of the relative variation of the IBD selection efficiency with baseline using the value of $\sin^{2}$2$\theta_{13}$ presented in this article. This correction curve is for the LS region. The red circles denote the 48 reactor-detector pairs. Their error bars and the error band are identically defined by the uncertainty of $\sin^{2}$2$\theta_{13}$.}
\label{Fig:EpVSbaseline}
\end{center}
\end{figure}
In the fit for $\sin^{2}$2$\theta_{13}$ (Section~\ref{sec:Fit}), the IBD selection efficiencies in the GdLS, LS, and acrylic volumes of each AD were multiplied by a correction factor for each reactor baseline (6 reactors $\times$ 8 ADs = 48 baselines)~\cite{supp}.
The fit was first performed without correction factors. The resulting value of $\sin^{2}$2$\theta_{13}$ was then used to generate a set of correction factors, and the fit was repeated.
This iterative approach was tested using Asimov data samples generated according to Eq.~(\ref{eq:predIBD}) with known values of $\sin^{2}$2$\theta_{13}$. Several values of $\sin^{2}$2$\theta_{13}$ were tested and all fits converged consistently with negligible bias. No additional uncertainty was assigned.
Although several iterations were performed, the value of $\sin^{2}$2$\theta_{13}$ converged within the precision reported in this article after only one iteration.
Without corrections, the fitted values were about 4\% larger than the true values (for the Asimov data samples) and than the converged value (for the measured data).
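The alternation between fitting and regenerating the correction factors amounts to a fixed-point iteration. The following scalar toy model illustrates the convergence; the rate model, coefficients, and pseudo-data are invented and are not the Daya Bay fit.

```python
def iterative_fit(R_meas, k=0.8, a=0.05, n_iter=5):
    """Sketch of the iterative correction procedure: the correction factor
    c(s) = 1 + a*s depends on the fitted s = sin^2(2*theta_13), so the fit
    and the correction are alternated until s converges.  The model
    R = c(s) * (1 - k*s) and all coefficients are illustrative."""
    s, history = 0.0, []
    for _ in range(n_iter):
        c = 1.0 + a * s              # correction factors from the current s
        s = (1.0 - R_meas / c) / k   # closed-form "fit" of the toy model
        history.append(s)
    return history

# Pseudo-data with a known truth, then recover it by iterating:
s_true = 0.085
R = (1.0 + 0.05 * s_true) * (1.0 - 0.8 * s_true)
fits = iterative_fit(R)
```

Because the correction is a small perturbation, the iteration contracts rapidly and the first corrected fit is already close to the converged value, consistent with the one-iteration convergence noted in the text.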
This variation of the IBD selection efficiency was estimated to be an order of magnitude smaller for the $n$Gd-IBD analysis, which required $E_p >$ 0.7~MeV.
\subsection{Coincidence-Time Selection}
\label{sec:capTime}
The efficiency of the coincidence-time selection was different for each detector volume $v$ due to the different densities and neutron-capture cross sections of the materials. The efficiency was defined as
\begin{equation}
\label{eq:EffLifetime}
\varepsilon_{T} = \frac{N(1 < t_c < 400\ \mu s; E_p >\ 1.5\ \mathrm{MeV})}{N(E_p >\ 1.5\ \mathrm{MeV})},
\end{equation}
and was determined with simulation. The efficiency in the LS volume was about 85\% and that in the GdLS volume was about 99\% due to the shorter neutron-capture time of $n$Gd.
These values were validated with data.
The neutron-capture time was studied in the GdLS and LS volumes by fitting for the mean neutron-capture time with the following formulas:
\begin{equation}
\label{eq:EffTime_TimeFit}
\begin{split}
N_\mathrm{Gd}(t)& = N_{0,\mathrm{Gd}} \cdot [(1+\alpha){\textstyle\frac{1}{\tau_\mathrm{Gd}}} e^{-t/\tau_\mathrm{Gd}} - \alpha {\textstyle\frac{1}{\tau_{0}}} e^{-t/\tau_{0}} ]+C_1 \\
N_\mathrm{LS}(t) & = N_{0,\mathrm{LS}} \cdot {\textstyle\frac{1}{\tau_\mathrm{LS}}} e^{-t/\tau_\mathrm{LS}} +C_2 ,
\end{split}
\end{equation}
where $\alpha$ balances two terms, the first corresponding to the capture of a neutron at thermal energies [$O$(0.025)~eV] with time constant $\tau_\mathrm{Gd}$, and the second representing the difference in capture cross section between thermal and IBD neutron energies [$O$(0.015)~MeV], with effective time constant $\tau_{0}$.
The capture-time spectrum in LS is due almost solely to $n$H which can be represented by a single exponential. This is because the number of captures per volume per time, which is proportional to the product of capture cross section and neutron velocity, is essentially independent of energy below IBD neutron energies. For $n$Gd, this product is much larger at thermal energies than at IBD energies (see, {\it e.g.} Ref.~\cite{ENDF}), effectively yielding two distinct time constants with $\tau_{0} < \tau_\mathrm{Gd}$.
The capture-time constant in LS is denoted by $\tau_\mathrm{LS}$, and $C_1$ and $C_2$ account for accidentals.
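For illustration, the fit models of Eq.~(\ref{eq:EffTime_TimeFit}) can be written out directly, and the single-exponential LS case checked on toy data. The sample size, binning, and use of an accidental-free log-linear fit below are illustrative choices, not the analysis procedure.

```python
import numpy as np

def n_gd(t, N0, alpha, tau_gd, tau0, C):
    """Eq. (EffTime_TimeFit), GdLS: thermal-capture term minus slow-down term."""
    return N0 * ((1 + alpha) * np.exp(-t / tau_gd) / tau_gd
                 - alpha * np.exp(-t / tau0) / tau0) + C

def n_ls(t, N0, tau_ls, C):
    """Eq. (EffTime_TimeFit), LS: single exponential plus accidentals."""
    return N0 * np.exp(-t / tau_ls) / tau_ls + C

# Toy check of the LS model: generate capture times with a known constant,
# histogram them, and recover tau from a log-linear fit (no accidentals here).
rng = np.random.default_rng(0)
tau_true = 216.0                                   # us, as fitted in the text
t = rng.exponential(tau_true, size=500_000)
counts, edges = np.histogram(t, bins=80, range=(1.0, 401.0))
centers = 0.5 * (edges[:-1] + edges[1:])
slope, _ = np.polyfit(centers, np.log(counts), 1)  # slope = -1/tau
tau_fit = -1.0 / slope
```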
The neutron-capture times for the GdLS and LS regions were studied using $n$Gd- and $n$H-IBDs, respectively. The selection criteria were slightly modified from the nominal IBD criteria: the $n$Gd delayed events were selected between 6 and 10~MeV, while the $n$H prompt-energy lower limit was increased to 3.5~MeV to minimize the accidental background, and the $n$H delayed-energy criterion was fixed to 1.8--2.8~MeV. When fitting the $n$Gd-IBD spectrum, the reconstructed positions of the prompt events were required to satisfy $|z| <$ 1~m and $r <$ 1~m to minimize the fraction of neutrons that originated from, or had any interactions, outside GdLS. Similarly, when fitting the $n$H-IBD spectrum, a constraint of $r >$ 1.7~m was applied to minimize the fraction of neutrons that originated from GdLS.
The fit results using the data from all ADs are shown in Figs.~\ref{Fig:nGdTimeFit} and \ref{Fig:nHTimeFit}. Good agreement in the slopes is observed between the data and simulation. The fitted capture-time constants were about 28.1 and 216~$\mu s$ for the GdLS and LS volumes, respectively.
\begin{figure}[!b]
\includegraphics[width=\columnwidth]{nGdTimeFit.pdf}
\caption{Time separation for double coincidences selected with $n$Gd-IBD criteria in the GdLS volume from the data of all ADs (black points) and from simulation (red histogram). The spectra are normalized by the number of coincidences between 6 and 150~$\mu$s. The fit to data (blue curve) and fitted capture-time constant $\tau_\mathrm{Gd}$ are shown.}
\label{Fig:nGdTimeFit}
\end{figure}
\begin{figure}[!b]
\includegraphics[angle=0,width=\columnwidth]{nHTimeFit.pdf}
\caption{Time separation for double coincidences selected with $n$H-IBD criteria in the LS volume from the data of all ADs (black points) and from simulation (red histogram). The spectra are normalized by the number of coincidences between 30 and 300~$\mu$s. The fit to data (blue curve) and fitted capture-time constant $\tau_\mathrm{LS}$ are shown.}
\label{Fig:nHTimeFit}
\end{figure}
For reference, Fig.~\ref{Fig:Time} shows the total capture-time spectra of the far- and near-hall ADs for the nominal $n$H-IBD selection criteria before and after subtracting the accidental background.
The AD-uncorrelated uncertainty of the 400~$\mu s$ criterion in the combined GdLS and LS volume was partly estimated using $\beta$-$\alpha$ events from the $^{214}$Bi-$^{214}$Po-$^{210}$Pb decay chain. These events provided greater statistics than $n$H events and were used to determine the variation of the time measurements of the electronics.
The lifetime of $^{214}$Po is 237~$\mu s$, which is close to the mean $n$H capture time in LS.
The efficiency of the selection was determined relative to the number of double coincidences with a coincidence time window of [1, 1500]~$\mu$s.
The relative differences of the efficiencies of all eight ADs are shown in Fig.~\ref{Fig:canv_ggh_Bi_diff}, and are within 0.1\% at the selection criterion of 400~$\mu s$.
\begin{figure}[!b]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{timeEffDiff}
\caption{Efficiency (top panel) and relative difference to the average (bottom panel) {\it vs.} event time separation for $^{214}$Bi $\beta$-$\alpha$ events in each AD. The data of the far-hall ADs were combined in the bottom panel to increase statistics. The differences are within $\pm$0.1\% at the criterion of 400~$\mu$s. }
\label{Fig:canv_ggh_Bi_diff}
\end{center}
\end{figure}
Similarly, the uncertainty associated with the 1~$\mu$s criterion was determined to be 0.1\% by comparing the relative number of events between 1 and 2~$\mu$s.
Because the estimates of the uncertainties were performed using a source other than neutrons, additional uncertainties related to neutron-capture time were added. The uncertainties considered were identified from an expression of the mean neutron-capture time:
\begin{equation}
\label{eq:capTime}
\frac{1}{\tau} = \frac{v_n}{\lambda} = v_n \sum_i n_i \sigma_i(v_n),
\end{equation}
where $v_n$ is the velocity of the neutron, $\lambda$ is the mean free-path of the neutron, $n_i$ is the number-density of nucleus $i$, and $\sigma_i$ is the neutron-nucleus cross section. Isotopes other than Gd and H contributed less than 1\% of captures and were not considered.
For the LS volume, the measured density differed by $<$~0.1\% among the ADs.
In addition, the fluctuation in density caused by temperature changes uncorrelated among experimental halls was within 0.045\% during the data-recording period. These effects introduced a $<$~0.11\% uncertainty to the neutron-capture time $\tau$. This uncertainty was propagated through Eq.~(\ref{eq:EffTime_TimeFit}) to obtain an approximate 0.02\% AD-uncorrelated uncertainty.
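The size of this propagation can be checked with a simple single-exponential estimate for the LS volume. The window boundaries, $\tau_\mathrm{LS}$, and the 0.11\% density variation are from the text; the single-exponential approximation is a simplification, and the estimate below yields a relative shift at the few$\times$0.01\% level, the same order as the quoted 0.02\%.

```python
import math

def window_eff(tau, t_lo=1.0, t_hi=400.0):
    """Efficiency of the [1, 400] us window for an exponential capture-time
    distribution with time constant tau (in us)."""
    return math.exp(-t_lo / tau) - math.exp(-t_hi / tau)

tau_ls = 216.0         # us, from the capture-time fit
dn_over_n = 0.0011     # ~0.11% density-driven variation
# From Eq. (capTime), 1/tau is proportional to n, so dtau/tau = dn/n:
dtau = tau_ls * dn_over_n

eff0 = window_eff(tau_ls)
rel = abs(window_eff(tau_ls + dtau) - eff0) / eff0
```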
The uncertainties from the $^{214}$Bi $\beta$-$\alpha$ event comparisons and neutron-capture time-related quantities were combined to give a total AD-uncorrelated uncertainty of 0.14\% for the efficiency of the coincidence-time criterion.
\subsection{Delayed-Energy Selection}
\label{sec:Ecuts}
The efficiency of the delayed-energy selection was determined with simulation and defined as
\begin{equation}
\label{eq:delayedEff}
\varepsilon_{E_d} = \frac{N(E_d \pm 3\sigma; 1 < t_c < 400\ \mu s; E_p >\ 1.5\ \mathrm{MeV})}{N(1 < t_c < 400\ \mu s; E_p >\ 1.5\ \mathrm{MeV})}.
\end{equation}
This definition does not preclude IBDs with the neutron captured by nuclei other than hydrogen; for example, $n$Gd IBDs comprised approximately 0.7\% of the IBDs after applying the delayed-energy criterion.
For both simulation and data, the $\mu~\pm$~3$\sigma$ selection was applied using the mean $\mu$ and standard deviation $\sigma$ from a fit of the delayed-energy spectrum with the Crystal Ball function~\cite{CrystalBall}.
The selection efficiency in the GdLS volume was about 15\% primarily because of neutron-capture by gadolinium.
The efficiency in the LS volume was about 65\% primarily because of the outward escape of the $n$H $\gamma$'s.
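A minimal implementation of the Crystal Ball shape used for the $\mu \pm 3\sigma$ fits may be helpful: a Gaussian core matched continuously to a power-law tail on the low side. The parameter values below (peak position, resolution, tail parameters) are assumed for illustration only.

```python
import numpy as np

def crystal_ball(x, mu, sigma, alpha, n):
    """Unnormalized Crystal Ball shape: Gaussian core with a power-law tail
    on the low side, continuous at z = -alpha."""
    z = (np.asarray(x, dtype=float) - mu) / sigma
    a_abs = abs(alpha)
    A = (n / a_abs) ** n * np.exp(-0.5 * a_abs ** 2)
    B = n / a_abs - a_abs
    out = np.empty_like(z)
    core = z > -a_abs
    out[core] = np.exp(-0.5 * z[core] ** 2)
    out[~core] = A * (B - z[~core]) ** (-n)
    return out

# Illustrative nH-like parameters: peak near 2.32 MeV, ~0.14 MeV resolution.
mu, sigma, alpha, n = 2.32, 0.14, 1.5, 4.0
E = np.linspace(1.0, 3.0, 1001)
shape = crystal_ball(E, mu, sigma, alpha, n)
```

The constants $A$ and $B$ are fixed by requiring the function and its first derivative to be continuous at the matching point $z = -\alpha$.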
Two methods were used to estimate the AD-uncorrelated uncertainty of the delayed-energy selection efficiency.
One method is a relative comparison of the delayed-energy spectra of the ADs.
The comparison was made after applying all the $n$H selection criteria and subtracting the accidental backgrounds (errors from accidental subtractions were included in the energy spectra).
The method uses the number of events within two energy ranges:
the first is the nominal selection of $\mu~\pm$~3$\sigma$, which is approximately [1.90, 2.74]~MeV, and the second is [1.50, 2.80]~MeV.
These two ranges are visible for each AD in Fig.~\ref{fig:nHspectra}.
The upper value of the latter range was chosen to include most of the $n$H IBDs with $E_d >$ 2.74~MeV (0.1\% of $n$H IBDs) while the lower value corresponds to the low-energy criterion (Section~\ref{sec:lowE}) and includes more of the tail of the spectrum (12\% more $n$H IBDs).
The latter range contains both peak and tail portions of the spectrum and therefore is assumed to be sensitive to all factors that might influence the shape of the spectrum.
\begin{figure}[!t]
\includegraphics[width=\columnwidth]{Ed_peakScale_02}
\caption{Delayed energy spectra of $n$H-IBDs in all ADs. The entries of each histogram are normalized to the average number of IBDs in the far-hall ADs. The fitted means are scaled to the average of the mean of the far-hall ADs. The two pairs of vertical lines correspond to the largest and smallest 3$\sigma$ selections.}
\label{fig:nHspectra}
\end{figure}
For each AD $i$, the number of events in the nominal range $A$ ($N_{A,i}$) was plotted {\it vs.} the number of events in the extended range $B$ ($N_{B,i}$) and a linear relation was fit:
\begin{equation}
\label{eq:lineFit}
\overline{N}_A(N_{B,i}) = a+bN_{B,i}.
\end{equation}
This line represents the average behavior of all ADs, including differences in their spectral shape and backgrounds. Here, the efficiency of the delayed-energy selection $\varepsilon$ is defined as $N_A/N_\mathrm{Total}$, where $N_\mathrm{Total}$ is the number of events without the delayed-energy selection. The fitted line was used to determine the relative variation of $\varepsilon$ for each AD:
\begin{equation}
\label{eq:deltaEff}
\frac{\delta\varepsilon_i}{\varepsilon_i} = \frac{\delta N_{A,i}}{N_{A,i}} = \frac{N_{A,i}-\overline{N}_A}{N_{A,i}}=1-\frac{a+bN_{B,i}}{N_{A,i}}.
\end{equation}
This determination assumes that there is no variation in $N_\mathrm{Total}$. From studies with simulation, it was found that $N_A$ and $N_\mathrm{Total}$ are highly correlated under various scenarios that could modify the shape of the spectrum, including differences in OAV dimensions~\cite{DYB_Det} and the residual nonuniformity of $E_\mathrm{rec}$, making this assumption conservative. This determination also assumes that variations in the spectrum outside range $B$ are not systematically different from those within. Using simulation, differences in OAV dimensions or in the mean free path of the $\gamma$'s were found to have a greater impact on the shape of the spectrum at the low-energy end, but to contribute negligibly to $\delta\varepsilon_i/\varepsilon_i$. In addition, comparing the high-statistics spectra of the near-hall ADs did not reveal any systematic trends in the differences among spectra above 1.5~MeV, suggesting that there may not be any such trends below 1.5~MeV.
The statistical uncertainties of the data from the far-hall ADs were large, so they were excluded from the determination, though they were conservatively included in the linear fit. Comparing the four near-hall ADs, the half-range of $\delta\varepsilon_i/\varepsilon_i$ was 0.33\%.
This estimation directly includes AD-to-AD variations in the 3$\sigma$ selection, energy scale, and factors that may influence the shape of the spectrum; however, it does not include variations in the fraction of neutrons that capture on hydrogen (53\%) relative to other isotopes, such as Gd (46\%) and C (0.5\%), because such variations have an equivalent impact on $N_B$ and $N_A$.
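The comparison of Eqs.~(\ref{eq:lineFit}) and (\ref{eq:deltaEff}) can be sketched with invented counts for four near-hall-like ADs; the numbers below are illustrative, not measured yields.

```python
import numpy as np

# Counts in the nominal window A and the extended window B for four "ADs"
# (invented for illustration):
N_B = np.array([101000.0, 99500.0, 100800.0, 100200.0])
N_A = np.array([90150.0, 88700.0, 89900.0, 89600.0])

b, a = np.polyfit(N_B, N_A, 1)        # average behavior: N_A = a + b*N_B
d_eff = 1.0 - (a + b * N_B) / N_A     # Eq. (deltaEff), per AD
half_range = 0.5 * (d_eff.max() - d_eff.min())
```

The residual of each AD from the fitted line, normalized by its own $N_{A,i}$, is the per-AD relative efficiency variation; the half-range of these values is the quantity quoted in the text.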
The fraction of neutrons that capture on isotope $x$ is expressed similarly to the mean capture time in Eq.~(\ref{eq:capTime}):
\begin{equation}
\label{eq:capFrac}
f_x = \frac{n_x\sigma_x(v_n)}{\sum_i n_i \sigma_i(v_n)}.
\end{equation}
Performing error propagation on Eq.~(\ref{eq:capTime}) and Eq.~(\ref{eq:capFrac}), and combining the results, the variation of $f_x$ among the ADs was expressed in terms of the variation of $\tau$ and one of the $n_i$. In this way, the variation of the measured capture time in the GdLS was used to constrain the variation of $n_\mathrm{Gd}$.
The variation of $n_\mathrm{H}$ was taken to be negligible
because of the mixing of all production batches of scintillator~\cite{GdLS} and the filling procedures applied to the ADs~\cite{FillingSystem}.
As a result, the AD-to-AD variation of the fraction of $n$H captures was estimated to be $<$ 0.01\% and 0.16\% in the LS and GdLS volumes, respectively.
These two values correspond to approximately 0.03\% for the full volume.
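The propagation through Eq.~(\ref{eq:capFrac}) can be illustrated numerically. The relative $n\sigma$ weights below are chosen only to reproduce fractions near the quoted 53\% (H) and 46\% (Gd), and the assumed 0.35\% shift of $n_\mathrm{Gd}$ is illustrative, not a measured value.

```python
# Capture fractions f_x = n_x*sigma_x / sum_i(n_i*sigma_i), Eq. (capFrac),
# with illustrative relative n*sigma weights:
w = {"H": 0.53, "Gd": 0.46, "C": 0.01}
S = sum(w.values())
f = {x: wx / S for x, wx in w.items()}

d = 0.0035                             # assumed relative shift of n_Gd
w_shift = dict(w, Gd=w["Gd"] * (1.0 + d))
f_H_shift = w_shift["H"] / sum(w_shift.values())
rel_change = f_H_shift / f["H"] - 1.0  # exact change of the H fraction
approx = -f["Gd"] * d                  # first-order propagation of Eq. (capFrac)
```

To first order, a relative shift $\delta$ in $n_\mathrm{Gd}$ changes the hydrogen capture fraction by $\delta f_\mathrm{H}/f_\mathrm{H} = -f_\mathrm{Gd}\,\delta$, which the exact evaluation confirms.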
Combining the variations estimated from the spectral comparison and the $n$H capture-fraction calculation yields a total AD-uncorrelated uncertainty of 0.33\% for the delayed-energy selection efficiency.
The second method used to evaluate the uncertainty of the delayed-energy selection efficiency is the ratio of the numbers of spallation $n$H to spallation $n$Gd ($N_{n\mathrm{H}}/N_{n\mathrm{Gd}}$), which utilizes the smaller variation of the $n$Gd delayed-energy selection efficiency and the larger sample of spallation neutrons.
The energy spectrum of spallation-$n$H and -$n$Gd $\gamma$'s from each AD was obtained by subtracting a background spectrum recorded in a background time window from the spectrum recorded in a signal time window.
Spallation neutrons generated by cosmogenic muons were identified as delayed-type events that followed WS- or AD-identified muons.
These muons were identified with greater purity by augmenting the definitions of a $\mu_\mathrm{WS}$ and $\mu_\mathrm{AD}$: for both the IWS and OWS, $N_\mathrm{PMT}$ was required to be $>$ 20, and a $\mu_\mathrm{AD}$ was required to have $E_\mathrm{rec} >$ 50~MeV. A 20-$\mu s$ muon-event veto-time was applied to avoid the ``ringing'' of PMT signals that followed high-energy events~\cite{ringing}.
The signal time window was between 20 and 700~$\mu s$ after a muon event. The background time window was of similar length; however, given the different distributions of muon energy and trajectory among halls~\cite{muon} (which affected the characteristics of the spallation products), the background window was tuned slightly differently in each hall. By matching both the shape and population of the tail portions of the signal and background energy spectra, the background window was set to 700--1480, 700--1453, and 700--1384~$\mu s$ in EH1, EH2, and EH3, respectively.
Both $n$Gd and $n$H delayed-energy criteria were nominal (see Table~\ref{tab:criteria}).
The energy spectra of the spallation-neutron-capture $\gamma$'s were fitted with signal and background components.
The background component accounted for residual spallation products that were not subtracted with the background window.
For the $n$H spectrum, the signal component was a Crystal Ball function and the background function was a second-order polynomial.
For $n$Gd, the signal component was two Crystal Ball functions as mentioned in Section~\ref{sec:Reconstruction}, and the background function was a first-order polynomial. Fit results are shown in Figs.~\ref{fig:nHpol2Fit} and \ref{fig:nGdFit}, where the number of signal events, defined as spallation neutrons, is labeled ``Nsig''.
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{spallnH_EH1_pol2a}
\caption{Fit of the spallation-$n$H reconstructed energy spectrum (black points) with a Crystal Ball function (black line) and a second-order polynomial (blue line) in EH1-AD1. The red line is the sum of the black and blue lines. The vertical dashed lines represent the delayed-energy selection criteria (Mean $\pm$ 3Sigma) within which Nsig and Nbkg were counted. }
\label{fig:nHpol2Fit}
\end{figure}
\begin{figure}[!b]
\centering
\includegraphics[width=\columnwidth]{spallnGd_EH1}
\caption{Fit of the spallation-$n$Gd reconstructed energy spectrum (black points) with two Crystal Ball functions (black line) and a first-order polynomial (blue line) in EH1-AD1. The red line is the sum of the black and blue lines. }
\label{fig:nGdFit}
\end{figure}
Compared with the previous analysis~\cite{DYB_nH}, the spallation neutron ratio is updated in this article by normalizing the number of neutrons to the number of target protons $N_p$ (Section~\ref{sec:TargetProton}):
\begin{equation}
\label{eq:spallationRatio}
\frac{N_{n\mathrm{H}}/(N_{p,\mathrm{LS}} + r_\varepsilon N_{p,\mathrm{GdLS}})}{N_{n\mathrm{Gd}}/N_{p,\mathrm{GdLS}}},
\end{equation}
where $r_\varepsilon$ is the ratio of efficiencies of selecting spallation $n$H in GdLS {\it vs.} LS:
$r_\varepsilon \equiv \varepsilon_\mathrm{GdLS}/\varepsilon_\mathrm{LS}$.
Due to the non-uniform distribution of spallation neutrons, $r_\varepsilon$ is not precisely known; therefore, two extreme cases were considered: (a) the distribution is entirely within the LS ($r_\varepsilon$~=~0); (b) the distribution is uniform ($r_\varepsilon$~=~0.22 from simulated IBDs).
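Eq.~(\ref{eq:spallationRatio}) can be evaluated for the two extreme cases directly. The spallation-neutron counts below are invented for illustration; the target-proton numbers are the average values quoted in Section~\ref{sec:TargetProton}.

```python
# Evaluating Eq. (spallationRatio) for the two extreme r_eps cases.
N_nH, N_nGd = 52_000.0, 98_000.0     # invented spallation nH / nGd counts
Np_LS, Np_GdLS = 1.54e30, 1.43e30    # average target protons (LS, GdLS)

def ratio(r_eps):
    return (N_nH / (Np_LS + r_eps * Np_GdLS)) / (N_nGd / Np_GdLS)

r_a = ratio(0.0)    # case (a): spallation neutrons entirely within the LS
r_b = ratio(0.22)   # case (b): uniform distribution
```

The two cases rescale the ratio by a common factor for every AD, which is why the choice of $r_\varepsilon$ has little impact on the AD-to-AD variation.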
Figure~\ref{fig:nHnGdRD} shows the difference in the ratio defined by Eq.~(\ref{eq:spallationRatio}) for each near-hall AD relative to the mean of the four ADs.
The far-hall ADs were not used due to their lack of statistics.
The choice of $r_\varepsilon$ is found to have little impact on the variation of the ratio.
The half-range of the ratios of the near-hall ADs is approximately 0.35\%. Due to the use of a ratio with $n$Gd events, this estimation inherently includes the variation of the $n$Gd delayed-energy criterion, which was estimated to be 0.12\% for IBDs~\cite{DYB3}. This estimate also inherently includes the variation of the fraction of neutrons that capture on hydrogen.
\begin{figure}[!hb]
\includegraphics[width=\columnwidth]{spallnRatio_diff}
\caption{Difference in the ratio of the number of spallation-$n$H/-$n$Gd events in the fitted energy peaks of each near-hall AD relative to the mean of all near-hall ADs. See the text for details.}
\label{fig:nHnGdRD}
\end{figure}
Given the 0.33\% and 0.35\% relative uncertainties from the two independent methods, 0.35\% was assigned for the total AD-uncorrelated uncertainty of the delayed-energy selection efficiency.
To determine the correlation of the delayed-energy selection efficiency between the $n$H and $n$Gd analyses, the uncertainty was decomposed into three components: 3$\sigma$ variation, energy scale variation, and others. The contributions of the first two components were estimated with simulation by applying the largest and smallest 3$\sigma$ ranges (see Fig.~\ref{fig:nHspectra}) and shifting the energy scale (see Section~\ref{sec:Eprompt}), respectively.
The first component, which was dominant, does not exist for the $n$Gd-IBD analysis and thus is uncorrelated. The correlation of energy scale variations between the $n$H- and $n$Gd-IBD analyses was estimated to be 0.8 with a linear fit of the measured $n$H-IBD {\it vs.} $n$Gd-IBD delayed-energy peaks. The third component, ``others'', accounts for any contributions not directly evaluated, such as differences in OAV dimensions or the residual nonuniformity of $E_\mathrm{rec}$, and was assumed to be fully correlated. The hydrogen capture fraction of the $n$H analysis was determined to be anticorrelated with the gadolinium capture fraction of the $n$Gd analysis: in the GdLS volume, if the fraction of captures on Gd increases, then naturally the fraction on H decreases.
In the LS volume, the same anticorrelated relation exists via neutrons that are produced in GdLS or LS but capture in the other of the two volumes.
Combining the correlation constants and corresponding component uncertainties from both the $n$H and $n$Gd analyses yields an overall correlation coefficient of 0.07 for the efficiency of the delayed-energy selection.
\subsection{Coincidence-Distance Selection}
The efficiency of the coincidence-distance selection was measured with data and defined as
\begin{equation}
\label{eq:EffDistance}
\varepsilon_{D} = \frac{N(d_c<50\ \mathrm{cm}; E_d \pm 3\sigma;~...~; E_p>1.5\ \mathrm{MeV})}{N(E_d \pm 3\sigma; 1<t_c<400\ \mu s; E_p>1.5\ \mathrm{MeV})}.
\end{equation}
The efficiency was determined relative to the number of DCs with $d_c <$ 200~cm using the data of all 8 ADs with accidental backgrounds subtracted as shown in Fig.~\ref{Fig:DistIBD}.
The efficiency curves and relative differences with respect to the average are shown in Fig.~\ref{Fig:DistanceEffDiff}.
\begin{figure}[!t]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{distanceEffDiff}
\caption{Efficiency (top panel) and relative difference to the average (bottom panel) {\it vs.} coincidence-distance for correlated double coincidences ($N_\mathrm{Cor}$) in each AD. The data of the far-hall ADs were combined in the bottom panel to increase statistics. The differences are within $\pm$0.4\% at the criterion of 50~cm. }
\label{Fig:DistanceEffDiff}
\end{center}
\end{figure}
The efficiency for $d_c <$ 50~cm was about 75\%.
Because the total statistics of the far-hall ADs was only about half that of a single near-hall AD, the data of the four far-hall ADs were merged together when calculating the relative difference. All the differences were within $\pm$0.4\% at the 50-cm selection criterion.
Therefore, the AD-uncorrelated uncertainty of the efficiency of the coincidence-distance criterion was assigned to be 0.4\%.
\subsection{IBDs in Acrylic and Mineral Oil}
\label{sec:spill}
The target materials were primarily liquid scintillator; however, the IAV, OAV, and acrylic-encased reflectors were in direct contact with, or in close proximity to, the scintillators, such that an IBD positron originating in these elements could enter the scintillators and deposit sufficient energy to trigger an AD. Such IBDs contributed an estimated 1.0\% of the $n$H-IBDs after selection.
IBD positrons originating in the MO rarely reached the scintillator and generally produced an insufficient amount of light to trigger an AD.
However, a few percent of the IBD positrons annihilated in-flight, producing a higher-energy $\gamma$ that was sometimes directed toward the scintillator with enough energy to pass the low-energy criterion. Some fraction of the corresponding IBD neutrons propagated to the LS and captured on H.
From simulation, it was estimated that approximately 0.06\% of the IBDs in the MO survived the selection criteria. This ``spill-in'' effect from the MO was found to have a negligible impact on the determination of $\sin^{2}$2$\theta_{13}$ and was not included in this analysis.
The impact of neutrons or $\gamma$'s (and their secondaries) that spill-out into the MO, or spill-in/out between the GdLS and LS, is naturally included in the prompt- and delayed-energy selection efficiencies and their uncertainties.
\subsection{Target Proton Number}
\label{sec:TargetProton}
The number of target protons $N_p$ was determined for each AD from the measured target masses $M$ and hydrogen mass-fractions $w_\mathrm{H}$ of the GdLS, LS, and acrylic volumes $v$:
\begin{equation}
\label{eq:TargetProton}
N_{p,v} = M_v\ w_{\mathrm{H},v}\ N_\mathrm{A}\ /\ m_\mathrm{H}\ ,
\end{equation}
where $N_\mathrm{A}$ is Avogadro's number and $m_\mathrm{H}$ is the molar mass of hydrogen.
The mass-fractions of hydrogen were determined by combustion analysis to be about 12.0\% for both GdLS and LS (with uncertainties at the level of 0.1\%)~\cite{DYB_Det}.
For acrylic (C$_5$H$_8$O$_2$), $w_\mathrm{H}$~=~8.05\%.
The AD-uncorrelated uncertainties of these quantities were taken to be negligible as described for $n_\mathrm{H}$ in Section~\ref{sec:Ecuts}.
The total masses of GdLS and LS were measured when filling each AD, using a load cell and Coriolis flow meter, respectively~\cite{FillingSystem}. The masses of acrylic components were measured with an industrial scale before filling~\cite{AVs}. The relevant masses are given for each AD~\cite{supp}.
Only the uncertainties of target mass were propagated to the final uncertainty of target proton number.
The average numbers of target protons in the GdLS, LS, and acrylic volumes are $1.43\times10^{30}$, $1.54\times10^{30}$, and $0.18\times10^{30}$, respectively. Values for each AD are provided~\cite{supp}.
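As a numerical cross-check of Eq.~(\ref{eq:TargetProton}), assuming a round 20-t GdLS mass (the actual per-AD masses are given in the supplement) and the 12.0\% hydrogen mass fraction from the text, one recovers a value close to the quoted $1.43\times10^{30}$:

```python
# Cross-check of Eq. (TargetProton) for the GdLS volume.  The 20-t mass is
# an assumed round number; the hydrogen mass fraction is from the text.
N_A_const = 6.02214e23    # Avogadro's number, 1/mol
m_H = 1.008               # molar mass of hydrogen, g/mol
M_GdLS = 20.0e6           # g (assumed ~20 t of GdLS)
w_H = 0.120               # hydrogen mass fraction

N_p = M_GdLS * w_H * N_A_const / m_H   # number of target protons
```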
The AD-uncorrelated uncertainties are listed in Table~\ref{tab:Eff}.
\subsection{Detector Leak}
Around the end of July 2012, when data-recording was paused to install the final two ADs, a leak began between the LS and MO volumes of EH3-AD1. The levels of GdLS and LS in the overflow tanks~\cite{TargetMass} (see Fig.~\ref{fig:AD}) of EH3-AD1 slowly decreased while the level of MO slowly increased, suggesting that the LS was leaking into the MO region. This hypothesis was supported by measurements using the MO clarity system~\cite{DYB_Det} which showed significant decreases in the transmission of shorter-wavelength light through the MO and an increase of MO light yield over time, consistent with a gradual addition of scintillator into the MO. The hypothesis was further supported by the observation of an increased (decreased) rate of higher-energy (lower-energy) muons reconstructed in the MO volume.
These observed trends stabilized after about two years with an estimated leakage of about 20~kg. This loss of mass lowered the height of the LS level in the overflow tank and did not directly impact the number of target protons in the LS volume.
No impact on the detector response is expected in the LS volume due to the direction of the leak; however, in the MO volume, there is potential for an increase in trigger rate. Given a 20-kg leakage into the 36-ton volume, and assuming the light yield of the LS is two orders of magnitude greater than that of the MO, one may naively estimate an average increase of the light yield in the MO volume on the order of 1\%. In simulation, this increase was modeled as an increase in the energy scale, and was applied to prompt and delayed events of IBDs generated in the MO, resulting in a $O$(0.001)\% increase of the $n$H-IBD selection efficiency.
Indeed, no impact of the leak to the $n$H-IBD analysis has been observed in comparisons of various quantities before and after the start of the leak. These quantities included various event rates, neutron-capture energy peak and resolution, and IBD prompt and delayed event-position distributions.
Given the observed stabilization of the leak, no impact is expected in the future.
\subsection{Summary}
\label{sec:EffSum}
The efficiencies of the PMT flash rejection, prompt- and delayed-energy selection, and coincidence-time selection criteria were determined with simulation, while the number of target protons and the muon-veto, multiplicity, and coincidence-distance selection efficiencies were determined with data. The AD-uncorrelated uncertainties of these quantities were determined by comparing data among the eight ADs.
The efficiency of the PMT flash rejection criterion was $>$ 99.99\% (see Section~\ref{sec:PMTflash}) and had a negligible uncertainty. Muon-veto and multiplicity selection efficiencies ($\varepsilon_{\mu}$ and $\varepsilon_m$) are listed in Table~\ref{tab:IBDsummary} and had negligible AD-uncorrelated uncertainties.
The product of the efficiencies of the prompt-energy, delayed-energy, and coincidence-time selection criteria was about 14\%, 50\%, and 5\% in the GdLS, LS, and acrylic volumes, respectively. The efficiency of the coincidence-distance criterion was determined as an average for all volumes: 75\%. The AD-uncorrelated uncertainties of these efficiencies are listed for each detector volume $v$ in Table~\ref{tab:Eff}. The uncertainty of the delayed-energy selection efficiency was reduced from 0.5\%~\cite{DYB_nH} to 0.35\% because of a new estimation and an update of the original estimation, which now scales the number of spallation neutrons by the number of target protons. This reduced the uncertainty of the $n$H-IBD selection efficiency by 15\%.
\begin{table}[!htb]
\begin{tabular}{ l c c }\hline\hline
& Uncertainty (\%) & Correlation \\ \hline
Target protons ($N_{p,\mathrm{GdLS}}$) & 0.03 & 1 \\
Target protons ($N_{p,\mathrm{LS}}$) & 0.13 & 0 \\
Target protons ($N_{p,\mathrm{acrylic}}$) & 0.50 & - \\
Prompt energy ($\varepsilon_{E_p}$) & 0.10 & 1 \\
Coincidence time ($\varepsilon_{T}$) & 0.14 & 1 \\
Delayed energy ($\varepsilon_{E_d}$) & 0.35 & 0.07 \\
Coincidence distance ($\varepsilon_{D}$) & 0.40 & 0 \\ \hline
Combined ($N_{\varepsilon}$) & 0.57 & 0.07 \\ \hline \hline
\end{tabular}
\caption{The relative per-detector uncorrelated uncertainties for each detector-related quantity.
The uncertainties of the $N_p$ are weighted when determining the combined uncertainty of $N_{\varepsilon}$ in the bottom row.
The last column contains the estimated correlation coefficients between the $n$H- and $n$Gd-IBD analyses.}
\label{tab:Eff}
\end{table}
Table~\ref{tab:Eff} also gives the estimated correlation coefficients between the detector efficiencies of the $n$H- and $n$Gd-IBD analyses. The number of target protons was fully correlated in the GdLS and uncorrelated in the LS, due to the identical and independent methods of mass measurement, respectively.
The efficiency of the prompt-energy criterion was correlated through a common dependence on energy scale, and was conservatively treated as fully correlated. The coincidence-time criterion was also treated as fully correlated.
The delayed-energy criterion was largely independent because the primary contribution to the uncertainty in the $n$H analysis was the variation of the 3$\sigma$ selection, which does not exist in the $n$Gd analysis.
The coincidence-distance criterion was uncorrelated because there was no such selection in the $n$Gd-IBD analysis.
The overall correlation between the IBD detection efficiencies of the $n$H- and $n$Gd-IBD analyses was about 0.07.
The last row of Table~\ref{tab:IBDsummary} shows the ratio of the efficiency- and target proton-corrected rates of IBDs for the $n$H- and $n$Gd-IBD analyses, for each AD. The errors are the statistical, background, and AD-uncorrelated systematic uncertainties of both analyses. The consistency of the eight values with one another reflects the consistency of the selected number of IBDs, background estimates, and per-AD target proton and efficiency corrections, between the two analyses.
The consistency of the eight values with 1 reflects the accuracy of these values for both analyses.
\section{Results}
\label{sec:Results}
The measured and predicted IBD rates of each hall are shown over time in Fig.~\ref{Fig:IBDRealTime}. The measured rates are background-subtracted and efficiency-corrected ($\varepsilon_{\mu}\varepsilon_m$). The predictions are from Eq.~(\ref{eq:predIBD}) [{\it i.e.}, Eqs.~(\ref{eq:Phi}) and (\ref{eq:eff})], and are adjusted with the best-fit normalization factor $\epsilon$ from Eq.~(\ref{eq:Chi2Definition}).
The six reactors are seen to have operated continually at their nominal power output. The two reactors near EH1 were refueled every 16 months and the four reactors near EH2 were refueled every 8--12 months, each with 1--2 months of downtime.
\begin{figure}[!b]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{IBD_vs_time}
\caption{Measured IBD rate {\it vs.} time for each experimental hall (blue points). Each point spans one week and the error bars are purely statistical. The dashed red lines are the expected IBD rates assuming no oscillation. The solid red lines are the expected IBD rates with the best-fit value of $\sin^2$2$\theta_{13}$. The final two of eight ADs were installed during the $\approx$12-week gap in all halls.}
\label{Fig:IBDRealTime}
\end{center}
\end{figure}
\subsection{Antineutrino Disappearance}
\label{sec:RatioDeficit}
The disappearance of $\overline{\nu}_{e}$~is quantified without invoking a model of neutrino oscillation and with minimal impact from models of reactor antineutrino spectra, by directly comparing the measured IBD rate at the far hall with the rate expected based on the measurements at the near halls.
The expected number of IBDs in the far hall was expressed as a combination of the two near-hall measurements:
\begin{equation}
\label{eq:FarNoOsc}
\overline{N}_\mathrm{EH3} \equiv \alpha N_\mathrm{EH1} + \beta N_\mathrm{EH2},
\end{equation}
where $N_\mathrm{EH1}$ and $N_\mathrm{EH2}$ are the measured numbers of IBDs after subtracting all the backgrounds and correcting for the muon-veto and multiplicity selection efficiencies ($\varepsilon_{\mu}$ and $\varepsilon_m$) in EH1 and EH2.
Expressions for the weights $\alpha$ and $\beta$ were determined using Eq.~(\ref{eq:FarNoOsc}) with the number of measured IBDs replaced by the number of predicted IBDs assuming no oscillation.
This number was calculated for experimental hall $i$ using Eq.~(\ref{eq:predIBD}) without oscillation:
\begin{equation}
\label{eq:pred}
\overline{N}_i=
\sum_{r=1}^6 \overline{N}_{ir} \equiv \sum_{r=1}^6 \sum_{d_i} \frac{N_{\varepsilon,d_i}}{4\pi L_{d_ir}^2} \iint_{\{t_{d_i}\}} \! \sigma_\nu \frac{d^{2}N_r}{dE dt} dE dt,
\end{equation}
where $d_i$ denotes the $d$-th AD in experimental hall $i$ and the $N_{\varepsilon}$ do not include $\varepsilon_m$ and $\varepsilon_\mu$.
The modified Eq.~(\ref{eq:FarNoOsc}) directly yields $\beta = (\overline{N}_\mathrm{3}-\alpha \overline{N}_\mathrm{1}) / \overline{N}_\mathrm{2}$.
The weight $\alpha$ was obtained by considering the difference between the direct prediction for EH3 and its near-hall combination: $\Delta\overline{N} = \overline{N}_\mathrm{3}-\alpha \overline{N}_\mathrm{1} - \beta \overline{N}_\mathrm{2}$. The variance of $\Delta\overline{N}$ ($\sigma_\Delta^2$) was obtained via error propagation with respect to the reactor-uncorrelated relative uncertainty (taken to be identical for all reactors), and its minimum was then found with respect to $\alpha$, yielding
\begin{equation}
\label{eq:alpha}
\begin{aligned}
\alpha = \frac{\sum_r{(\overline{N}_{3r}-\frac{\overline{N}_3}{\overline{N}_2}\overline{N}_{2r})(\overline{N}_{1r}-\frac{\overline{N}_1}{\overline{N}_2}\overline{N}_{2r})}}{\sum_r{(\overline{N}_{1r}-\frac{\overline{N}_1}{\overline{N}_2}\overline{N}_{2r})^2}}.
\end{aligned}
\end{equation}
This expression minimizes the impact of the reactor-uncorrelated uncertainty.
For the 621-day data set used in this analysis, $\alpha =$~0.054 and $\beta =$~0.216.
These values are dominated by the baselines $L_{dr}$, and only slightly influenced by the integrated emission rates $d^{2}N_r(E, t)/dE dt$.
Thus, $\beta$, which is associated with EH2, is four times larger than $\alpha$ primarily because of the shorter baselines between EH3 and the four reactors near EH2.
The reactor-uncorrelated uncertainty is suppressed by a factor of about 20, which can be seen by evaluating the expression for $\sigma_\Delta^2$.
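Evaluating Eq.~(\ref{eq:alpha}) is a short computation once the per-reactor predictions $\overline{N}_{ir}$ are in hand. A minimal Python sketch, using hypothetical per-reactor counts in place of the actual flux-model predictions of Eq.~(\ref{eq:pred}):

```python
import numpy as np

# Hypothetical per-reactor predicted IBD counts without oscillation,
# standing in for the N_bar_ir of Eq. (pred); one entry per reactor r.
N1r = np.array([210., 205., 95., 90., 92., 94.])    # EH1
N2r = np.array([80., 82., 230., 228., 225., 231.])  # EH2
N3r = np.array([30., 31., 34., 33., 35., 32.])      # EH3
N1, N2, N3 = N1r.sum(), N2r.sum(), N3r.sum()

# Eq. (alpha): minimize the variance of Delta-N_bar with respect to alpha
u = N3r - (N3 / N2) * N2r
v = N1r - (N1 / N2) * N2r
alpha = np.dot(u, v) / np.dot(v, v)

# The modified Eq. (FarNoOsc) then fixes beta
beta = (N3 - alpha * N1) / N2
```

By construction, $\alpha \overline{N}_1 + \beta \overline{N}_2$ reproduces $\overline{N}_3$ exactly for the no-oscillation predictions; only the split between the two near halls is optimized against the reactor-uncorrelated uncertainty.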
Using Eq.~(\ref{eq:FarNoOsc}) and the values of $\alpha$ and $\beta$, the ratio of the observed to the expected number of IBDs at the far hall was
\begin{equation}
\label{eq:RatioDeficit}
\begin{aligned}
R \equiv \frac{N_{\mathrm{EH3}}}{\overline{N}_{\mathrm{EH3}}} = 0.950 \pm 0.005.
\end{aligned}
\end{equation}
Figure~\ref{Fig:canv_sig_near} shows the measured prompt-energy spectrum at the far hall and that predicted with the near-hall measurements via Eq.~(\ref{eq:FarNoOsc}).
The ratios $R$ of each energy bin are shown in the bottom panel and demonstrate the effect of $\overline{\nu}_{e}$~disappearance as a function of energy. The best-fit curve is the ratio of far-hall and normalized near-hall predictions using Eq.~(\ref{eq:predIBD}) and the result for $\sin^2$2$\theta_{13}$ presented in the next section.
\begin{figure}[ht]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{measpred_ratio}
\caption{Top: Reconstructed prompt-energy spectrum of the far hall (solid blue points) and the expectation based on the measurements of the two near halls (empty black points).
Spectra are background-subtracted. Error bars are purely statistical.
Bottom: Ratio of the Far/Near halls and the curve representing the best-fit value of $\sin^{2}$2$\theta_{13}=$~\nHresult. }
\label{Fig:canv_sig_near}
\end{center}
\end{figure}
\subsection{Fit for \boldmath$\sin^{2}\!2\theta_{13}$\unboldmath}
\label{sec:Fit}
To determine $\sin^{2}$2$\theta_{13}$, a $\chi^2$ was constructed with pull terms for the background uncertainties and the AD- and reactor-uncorrelated uncertainties:
\begin{equation}
\label{eq:Chi2Definition}
\begin{aligned}
\resizebox{\hsize}{!}{$\displaystyle\chi^2 = \sum_{d=1}^{8}\frac{[N_{\mathrm{DC},d}-\overline{N}_{\mathrm{IBD},d}(1 +\epsilon +\sum_{r=1}^{6} \omega^{d}_{r}\alpha_r +\epsilon_d)-(1+\eta_d)B_d]^2}{(\sigma_{\mathrm{DC},d})^2}$}\\
\resizebox{\hsize}{!}{$\displaystyle \hspace{0.9cm} +\sum_{r=1}^{6}\frac{\alpha_r^2}{\sigma_R^2} + \sum_{d=1}^{8}\left(\frac{\epsilon_d^2}{\sigma_D^2} +\frac{\eta_d^2}{(\sigma_{B,d})^2}\right), \hspace{4.2cm}$}
\end{aligned}
\end{equation}
where $N_{\mathrm{DC},d}$ is the number of measured double coincidences from the $d$-th AD given in Table~\ref{tab:IBDsummary},
$B_d$ is the sum of the accidental and correlated backgrounds derivable from Table~\ref{tab:IBDsummary},
$\sigma_{\mathrm{DC},d}$ is the statistical uncertainty of $N_\mathrm{DC}$,
and $\overline{N}_\mathrm{IBD}$ is the expected number of IBDs from Eq.~(\ref{eq:predIBD}), which contains the oscillation parameter $\sin^{2}$2$\theta_{13}$.
The $\omega_r^d$~\cite{supp} are the fractions of IBDs in the $d$-th AD due to the $r$-th reactor, which were calculated using Eq.~(\ref{eq:predIBD}) without oscillation (including oscillation decreased the best-fit value of $\sin^{2}$2$\theta_{13}$ by less than 0.03\%).
The reactor-uncorrelated uncertainty (0.9\%) is denoted as $\sigma_R$.
The parameter $\sigma_D$ is the AD-uncorrelated uncertainty of IBD detection efficiency from Table~\ref{tab:Eff}.
The parameter $\sigma_{B,d}$ is the combination of all background uncertainties, which are given in Table~\ref{tab:IBDsummary}.
There are twenty-two corresponding pull parameters denoted as $\alpha_r$, $\epsilon_d$, and $\eta_d$.
The normalization factor $\epsilon$ was fit and accounted for any biases in
the backgrounds $B_d$ that were common to all halls or detectors,
and any biases in the predicted number of IBDs $\overline{N}_{\mathrm{IBD},d}$ that were common to all detectors; {\it i.e.}, in reactor-related models/quantities, the IBD cross section model, or IBD selection efficiencies.
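The structure of this $\chi^2$ can be summarized in a few lines of code. A simplified sketch, with all array inputs and uncertainty values treated as placeholders rather than the experiment's own:

```python
import numpy as np

def chi2(Ndc, Nibd, B, sig_dc, omega, sig_R, sig_D, sig_B,
         eps, alpha_r, eps_d, eta_d):
    """Simplified form of Eq. (Chi2Definition): a statistical term over
    the 8 ADs plus pull terms for the 6 reactors and 8 detectors.

    Ndc, Nibd, B, sig_dc, sig_B, eps_d, eta_d : length-8 arrays over ADs
    omega : (8, 6) fractions of IBDs in AD d due to reactor r
    alpha_r : length-6 reactor pulls; eps : global normalization factor
    """
    pred = Nibd * (1 + eps + omega @ alpha_r + eps_d) + (1 + eta_d) * B
    stat = np.sum((Ndc - pred) ** 2 / sig_dc ** 2)
    pulls = (np.sum(alpha_r ** 2) / sig_R ** 2
             + np.sum(eps_d ** 2) / sig_D ** 2
             + np.sum(eta_d ** 2 / sig_B ** 2))
    return stat + pulls
```

Minimizing this function over $\epsilon$, the pulls, and $\sin^{2}2\theta_{13}$ (which enters through $\overline{N}_{\mathrm{IBD},d}$) corresponds to the fit described above.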
Iterating over $\sin^{2}$2$\theta_{13}$ with the efficiency correction factors as described in Section~\ref{sec:promptVar}, the best-fit value for both the normal and inverted neutrino-mass hierarchies was
\begin{equation}
\begin{aligned}
\sin^{2}2\theta_{13}=0.071 \pm 0.011,
\end{aligned}
\end{equation}
with a $\chi_\mathrm{min}^2$ per degree of freedom of 6.3/6.
Figure~\ref{Fig:canv_th13_pred_measu_edit} shows the ratio of the measured rate to the predicted rate assuming no oscillation, for each detector.
\begin{figure}[!b]
\begin{center}
\includegraphics[angle=0,width=\columnwidth]{measpred_ratio_vs_baseline}
\caption{Ratio of measured to predicted IBD rate in each detector assuming no oscillation {\it vs.} flux-weighted baseline. Each detector is represented with a green square (blue circle) for the $n$H ($n$Gd) analysis. Error bars include statistical, detector-related, and background uncertainties. The dashed green (blue) curve represents the neutrino oscillation probability using the $n$H ($n$Gd) result for $\sin^{2}$2$\theta_{13}$ and the global fit value of $\Delta m_{32}^{2}$ (the $n$Gd result for $\Delta m^2_{ee}$~\cite{nGd8AD}). The solid red curve represents the oscillation probability using the $n$H-$n$Gd combined result and $\Delta m_{32}^{2}$, and its magenta error band is from the uncertainty of $\Delta m_{32}^{2}$.
The baselines of EH1-AD2 and EH2-AD2 are shifted by +20~m, and those of EH3-AD1, 2, 3, and 4 are shifted by -30, -10, +10, and +30~m, respectively, for visual clarity.}
\label{Fig:canv_th13_pred_measu_edit}
\end{center}
\end{figure}
The most recent $n$Gd result from Daya Bay~\cite{nGd8AD} is included for comparison.
The 5.0\% deficit of EH3 relative to the near halls given in Eq.~(\ref{eq:RatioDeficit}) is apparent. For the $n$Gd-IBD analysis, this deficit was about 5.2\%, and the best-fit value was $\sin^2$2$\theta_{13}$ = 0.084.
The red curve is the oscillation survival probability $P_\nu$ of Eq.~(\ref{eq:Psur}) with a value of $\sin^2$2$\theta_{13}$ = 0.082 from the combination of the $n$H- and $n$Gd-IBD analyses, which is described in the next section.
The contributions of various quantities to the total uncertainty of $\sin^2$2$\theta_{13}$ ($\sigma_\mathrm{total}$) are listed in Table~\ref{tab:errorBudget}, where they are presented as fractions of $\sigma^2_\mathrm{total}$.
The variance of a quantity was estimated as $\sigma^2_\mathrm{total}$ minus the square of the fit error when fixing the nuisance parameter of said quantity to its best-fit value.
The sum of the fractions is not equal to 1 due to correlations.
The statistical uncertainty is the largest individual component. The second- and third-largest uncertainties are those of the coincidence-distance criterion and the delayed-energy criterion (see Table~\ref{tab:Eff} for the components of the detector contribution). The reactor-uncorrelated uncertainty is reduced by a factor of 20, as in the relative expression of Eq.~(\ref{eq:RatioDeficit}).
\begin{table}[!h]
\begin{tabular}{ l c c }\hline\hline
& Uncertainty Fraction (\%) & Correlation \\ \hline
Statistical & 51.8 & 0 \\
Detector & 39.2 & 0.07 \\
Reactor & 4.2 & 1 \\
$^9$Li/$^8$He & 4.4 & 0 \\
Accidental & 0.4 & 0 \\
Fast neutron & 0.3 & 0 \\
Am-C & 0.1 & 0.7 \\ \hline
Combined & 100.4 & 0.02 \\ \hline \hline
\end{tabular}
\caption{Contributions of individual uncertainties to the total uncertainty of $\sin^2$2$\theta_{13}$. See the text for details. Detector uncertainties are characterized in Table~\ref{tab:Eff}. The last column contains the estimated correlation coefficients between the $n$H- and $n$Gd-IBD analyses.}
\label{tab:errorBudget}
\end{table}
\subsection{\boldmath$n$\unboldmath H-\boldmath$n$\unboldmath Gd Combined Result}
\label{sec:nGdnHcombined}
The result for $\sin^{2}$2$\theta_{13}$ from the current analysis was combined with that from the most recent $n$Gd-IBD spectral analysis from Daya Bay~\cite{nGd8AD}. The combination was performed both analytically and via a simultaneous fit of the $n$Gd-IBD and $n$H-IBD data sets. Correlations between the two analyses were estimated for efficiencies, backgrounds, and reactor-related quantities.
The correlation coefficients of the various uncertainty components are listed in Tables~\ref{tab:Eff} and \ref{tab:errorBudget}. Reactor-related uncertainties are fully correlated and statistical uncertainties are uncorrelated. The correlation of quantities with negligible uncertainty, such as DAQ time and muon-veto efficiency, had negligible impact.
The correlation coefficients of the detector-related quantities are described in Section~\ref{sec:EffSum} and listed in Table~\ref{tab:Eff}.
The accidental backgrounds were treated as uncorrelated because of the distinct methods and event samples used in the $n$H- and $n$Gd-IBD analyses.
The Am-C background was estimated to have a correlation coefficient of 0.7,
while the other backgrounds were uncorrelated (see Section~\ref{sec:CorrBkg}).
The procedure to analytically combine the analyses is the same as that used for the previous combination~\cite{DYB_nH}. Updated values for backgrounds, efficiencies, and the fraction of uncertainty due to statistics were taken from Ref.~\cite{nGd8AD}, for the $n$Gd-IBD analysis. For the $n$H-IBD analysis, these values are listed in Tables~\ref{tab:IBDsummary}, \ref{tab:Eff}, and \ref{tab:errorBudget}, respectively.
Using the correlation coefficients presented in this article, these values give an overall correlation coefficient of 0.02 between the two analyses, indicating essentially independent determinations of $\sin^{2}$2$\theta_{13}$. Though the correlation will increase as the fraction of statistical uncertainty decreases, this value is smaller than the previous correlation coefficient of 0.05~\cite{DYB_nH} primarily because of the distinct estimation of the $n$H-$^{9}$Li background and the significant reductions in the systematic uncertainties of the Am-C backgrounds for both analyses.
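The analytical combination of two correlated measurements follows the standard minimum-variance (BLUE) formulas. A sketch using the central values quoted in the text; the 0.005 uncertainty assigned to the $n$Gd result here is an assumed, illustrative value (the actual combination uses the published uncertainty):

```python
import numpy as np

def combine(x1, s1, x2, s2, rho):
    """Minimum-variance combination of two measurements (x1, s1) and
    (x2, s2) with correlation coefficient rho between their errors."""
    var = s1 ** 2 + s2 ** 2 - 2 * rho * s1 * s2
    w1 = (s2 ** 2 - rho * s1 * s2) / var
    w2 = 1.0 - w1
    x = w1 * x1 + w2 * x2
    s = np.sqrt(w1 ** 2 * s1 ** 2 + w2 ** 2 * s2 ** 2
                + 2 * rho * w1 * w2 * s1 * s2)
    return x, s

# nH result (this analysis) and nGd result with an assumed uncertainty
x, s = combine(0.071, 0.011, 0.084, 0.005, rho=0.02)
```

With these inputs the combined central value is about 0.082 and the combined uncertainty is smaller than either input, consistent with the quoted improvement in precision.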
With the $n$Gd-IBD result of $\sin^{2}$2$\theta_{13}$ = \nGdResult~and the $n$H-IBD result of \nHresult, both the analytical calculation and simultaneous fit resulted in
\begin{equation}
\begin{aligned}
\sin^{2}2\theta_{13}=\combin ,
\end{aligned}
\end{equation}
which is an 8\% improvement in precision.
\subsection{Independent Analysis}
The present $n$H-IBD analysis was cross-checked with an independent analysis based on a different analysis framework~\cite{LAF}.
IBD candidates were independently selected using the same criteria (see Table~\ref{tab:criteria}) and the backgrounds and muon-veto efficiencies were independently evaluated.
Using the $\chi^{2}$ in Eq.~(\ref{eq:Chi2Definition}),
the best-fit value was $\sin^{2}$2$\theta_{13}=0.071 \pm 0.011$,
with a $\chi_\mathrm{min}^{2}$ per degree of freedom of 6.4/6.
\section{Discussion}
\label{sec:Future}
The precision to which $\theta_{13}$ is determined is crucial to constraining the leptonic {\it CP} phase $\delta$~\cite{T2K_CPvsT13, NOvA, MINOS, LBNE}.
The $n$H-IBD analysis in this article provides an independent determination of $\sin^2$2$\theta_{13}$ and improves the overall precision of $\theta_{13}$.
Given that the uncertainty of the $n$H-IBD result is dominated by the systematic uncertainties of the delayed-energy and coincidence-distance criteria, improved precision is foreseen by reducing the uncertainties of the distance criterion with increased statistics, and the delayed-energy criterion with an optimization of the selection.
In addition, improved precision will be achieved with a spectral analysis of the prompt-energy spectrum, which is underway. This will also provide a new determination of the mass-squared difference $\Delta m^2_{32}$.
The analysis of $n$H-IBDs has helped to maximize the fiducial volume of the ADs to supernova neutrinos~\cite{DYB_SN}.
It should also provide an opportunity to reduce the dominant uncertainty of detection efficiency in the measurement of reactor antineutrino flux~\cite{DYB_reactor}, given the lesser sensitivity of the $n$H-IBD analysis to neutron spill-in/out effects.
Furthermore, the data-driven techniques developed to study the accidental background and the IBD selection criteria may be useful for other experiments that use or plan to use $n$H-IBDs, such as JUNO~\cite{JUNO}, RENO-50~\cite{RENO50}, and LENA~\cite{LENA}.
\section{Conclusion}
A sample of about 780000 $n$H-IBDs was obtained with the 6-AD and full 8-AD configurations of the Daya Bay experiment and was used to compare the number of reactor antineutrinos at far and near halls, yielding a new independent determination of sin$^{2}2\theta_{13}$ = \nHresult. The uncertainty is reduced by 40\% compared with the previous $n$H-IBD result primarily because of the factor of 3.6 increase in statistics, but also because of the 15\% and 30\% reductions in the uncertainties of the IBD selection efficiency and backgrounds, respectively.
The new result is consistent with that from the $n$Gd-IBD analysis from Daya Bay, providing a valuable confirmation of the $n$Gd-IBD result. Combining the $n$H- and $n$Gd-IBD results provides a new improved determination of
sin$^{2}2\theta_{13}$~= $\combin$.
\section{Acknowledgments}
The Daya Bay Experiment is supported in part by
the Ministry of Science and Technology of China,
the United States Department of Energy,
the Chinese Academy of Sciences,
the CAS Center for Excellence in Particle Physics,
the National Natural Science Foundation of China,
the Guangdong provincial government,
the Shenzhen municipal government,
the China General Nuclear Power Group,
the Key Laboratory of Particle \& Radiation Imaging (Tsinghua University), Ministry of Education,
the Key Laboratory of Particle Physics and Particle Irradiation (Shandong University), Ministry of Education,
the Research Grants Council of the Hong Kong Special Administrative Region of China,
the MOST fund support from Taiwan,
the U.S. National Science Foundation,
the Ministry of Education, Youth and Sports of the Czech Republic,
the Joint Institute of Nuclear Research in Dubna, Russia,
the NSFC-RFBR joint research program,
the National Commission for Scientific and Technological Research of Chile, and
the Tsinghua University Initiative Scientific Research Program.
We acknowledge Yellow River Engineering Consulting Co., Ltd.\ and China Railway 15th Bureau Group Co., Ltd.\ for building the underground laboratory.
We are grateful for the ongoing cooperation from the China Guangdong Nuclear Power Group and China Light~\&~Power Company.
\section{Introduction}
\IEEEPARstart{P}{oint} set registration is a challenging but meaningful task, which has wide application in many fields \cite{myronenko2009image,ma2016non,wu2012online,klaus2006segment,maintz1998survey,raguram2008comparative,yuille1988computational,sonka2014image}. For example, point set registration algorithm can be used to align a pool of local frames to a global one for large-scale 3D reconstruction or 3D mapping \cite{Ding_2019_CVPR}.
\begin{figure*}
\begin{center}
\includegraphics[width=14cm]{figg1.png}
\end{center}
\caption{Comparison of the pipeline between previous learning methods, direct optimization methods, and our Deep-3DAligner for point set registration. Our method starts by optimizing a randomly initialized latent spatial-correlation representation (SCR) feature, which is then decoded into the desired geometric transformation to align the source and target point clouds. This avoids the explicit design of a feature encoder and correlation module, which is often challenging for point cloud input, while leveraging the structure of deep neural networks to increase model capacity in comparison to direct optimization methods.}
\label{first}
\end{figure*}
Most existing non-learning methods solve the registration problem through an iterative optimization process to search the optimal geometric transformation to minimize a pre-defined alignment loss between transformed source point set and target point set \cite{myronenko2007non,ma2013robust,ma2014robust,ling2005deformation}. The geometric transformation can be modeled by a specific type of parametric transformation (e.g. rotation, translation, thin-plate spline and so on) \cite{besl1992method}. For example, one of the most commonly applied methods, iterative closest point (ICP) \cite{besl1992method}, estimates the rigid transformation based on a set of corresponding points. The ICP model, however, strongly depends on the initialization and has limited performance in choosing corresponding points. Moreover, iterative methods usually treat registration as an independent optimization process for each given pair of source and target point sets, which cannot transfer knowledge from registering one pair to another.
In recent years, as deep-learning-based algorithms have been implemented in various industries and achieved great success, researchers have become increasingly interested in bringing deep-learning-based solutions to the field of point set registration. As shown in Figure \ref{first}, instead of directly optimizing the transformation matrix towards a minimization of alignment loss as in non-learning-based methods, learning-based methods usually leverage modern feature extraction technologies for feature learning and then regress the transformation matrix based on the mutual information and correlation defined on the extracted features of source and target shapes. The most recent model, deep closest point (DCP) \cite{wang2019deep}, leverages DGCNN \cite{wang2019dynamic} for feature learning and a pointer network to perform soft matching. To refine the soft matching results to predict the final rigid transformation, the DCP model further proposes a singular value decomposition layer for fine-tuning. However, it is still challenging to design an explicit module for learning both the features from unstructured point clouds and their ``geometric relationship'' \cite{Wang_2018_CVPR}. Existing works developed various models to compute the spatial correlation feature. For example, FlowNet3D \cite{liu2019flownet3d} tried to concatenate two global descriptors of source and target point sets; \cite{balakrishnan2018unsupervised} used a U-Net-based structure to mix the source and target volumetric shapes; \cite{rocco2017convolutional} proposed a correlation tensor calculated from source and target feature maps; and so on. In contrast, this paper proposes Deep-3DAligner, a novel unsupervised registration framework, as shown in Figure \ref{first}, which relies on a directly optimizable SCR feature instead of requiring the design of a feature encoder and correlation module.
Besides, Deep-3DAligner is trained in an unsupervised manner, which is different from the DCP that uses the ground-truth transformation parameters (i.e. rotation and translation matrix) for training.
With the SCR feature in place, our proposed Deep-3DAligner framework, illustrated in Figure \ref{main}, contains three main components. The first component is an SCR optimizer where the deep SCR feature is optimized from a randomly initialized feature. The second component is a transformation decoder which decodes the SCR feature to regress the transformation parameters for the point set alignment. The third component is an alignment loss that measures the similarity between the transformed source point set and the target one. In the pipeline, there are two communication routes, indicated by black and red dashed lines. The communication route in black is the data flow of the Deep-3DAligner paradigm, where the source and target point sets are used as input. The communication route in red is the back-propagation route, with which the alignment loss is back-propagated to update the SCR and the transformation decoder. Our contributions are as follows:
\begin{itemize}
\item We introduce a novel unsupervised learning approach for the point set registration task.
\item We introduce a spatial correlation representation (SCR) feature which can eliminate the design challenges for encoding the spatial correlation between source and target point sets in comparison to learning-based methods.
\item Experimental results demonstrate the effectiveness of the proposed method for point set registration, and even without ground truth transformation for training, our proposed approach achieved comparative performance compared to most recent supervised state-of-the-art approaches.
\end{itemize}
\section{Related Works}
\subsection{Iterative registration methods}
The development of optimization algorithms to estimate rigid and non-rigid geometric transformations in an iterative routine has attracted extensive research attention in past decades. Assuming that a pair of point sets are related by a rigid transformation, the standard approach is to estimate the best translation and rotation parameters in the iterative search routine, therein aiming to minimize a distance metric between two sets of points. The iterative closest point (ICP) algorithm \cite{besl1992method} is one successful solution for rigid registration. It initializes an estimation of a rigid function and then iteratively chooses corresponding points to refine the transformation. However, the ICP algorithm is reported to be vulnerable to the selection of corresponding points for initial transformation estimation. Go-ICP \cite{yang2015go} was further proposed by Yang et al. to leverage the BnB scheme for searching the entire 3D motion space to solve the local initialization problem brought by ICP. Zhou et al. proposed fast global registration \cite{zhou2016fast} for the registration of partially overlapping 3D surfaces. The TPS-RSM algorithm was proposed by Chui and Rangarajan \cite{chui2000new} to estimate parameters of non-rigid transformations with a penalty on second-order derivatives. As a classical non-parametric method, coherent point drift (CPD) was proposed by Myronenko et al. \cite{myronenko2007non}, which successfully introduced a process of fitting the Gaussian mixture likelihood to align the source point set with the target point set. Existing classical algorithms have achieved great success on the registration task. However, the independent iterative optimization process limits their efficiency when registering a large number of pairs, which inspires us to design a learning-based system for this task.
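For reference, the ICP loop described above fits in a few lines: alternate nearest-neighbour matching with the closed-form (SVD-based) rigid fit. A minimal sketch with brute-force matching, not a production implementation:

```python
import numpy as np

def best_rigid(src, dst):
    """Least-squares rotation R and translation t mapping the paired
    points src onto dst (Kabsch/SVD closed-form solution)."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def icp(src, dst, iters=20):
    """Minimal ICP: alternate nearest-neighbour correspondence
    selection and closed-form rigid refitting."""
    R, t = np.eye(3), np.zeros(3)
    for _ in range(iters):
        cur = src @ R.T + t
        # brute-force nearest neighbours (fine for small point sets)
        idx = np.argmin(((cur[:, None] - dst[None]) ** 2).sum(-1), axis=1)
        R, t = best_rigid(src, dst[idx])
    return R, t
```

As noted above, the result depends on the initialization: with a poor starting pose the nearest-neighbour correspondences can lock onto a local minimum.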
\subsection{Learning-based registration methods} In recent years, learning-based methods have achieved great success in many fields of computer vision \cite{su2015multi,sharma2016vconv,maturana2015voxnet,qi2017pointnet,verma2018feastnet,masci2015geodesic,zeng20173dmatch,wang20173densinet,wang2018unsupervised,wang2020few}. In particular, recent works have started a trend of directly learning geometric features from point clouds (especially 3D points), which motivates us to approach the point set registration problem using deep neural networks \cite{rocco2017convolutional,balakrishnan2018unsupervised,zeng20173dmatch,qi2017pointnet,verma2018feastnet,masci2015geodesic,chen2019arbicon,wang2019non,wang2019coherent,li2019pc}. PointNetLK \cite{aoki2019pointnetlk} was proposed by Aoki et al. to leverage the newly proposed PointNet algorithm for directly extracting features from the point cloud with the classical Lucas $\&$ Kanade algorithm for the rigid registration of 3D point sets. Liu et al. proposed FlowNet3D \cite{liu2019flownet3d} to treat 3D point cloud registration as a motion process between points. Wang et al. proposed a deep closest point \cite{wang2019deep} model, which first leverages the DGCNN structure to extract the features from point sets and then regresses the desired transformation based on them. Balakrishnan et al. \cite{balakrishnan2018unsupervised} proposed a voxelMorph CNN architecture to learn the registration field to align two volumetric medical images. For the registration of 2D images, an outstanding registration model was proposed by Rocco et al. \cite{rocco2017convolutional}. For the learning-based registration solutions listed above, the main challenge concerns how to effectively model the ``geometric relationship'' between source and target objects in a learning-based approach. For example, \cite{rocco2017convolutional} proposed a correlation tensor between the feature maps of source and target images, \cite{balakrishnan2018unsupervised} leveraged a U-Net-based structure to concatenate features of source and target voxels, \cite{liu2019flownet3d} and \cite{aoki2019pointnetlk} used a PointNet-based structure, and \cite{wang2019deep} used a DGCNN structure to learn the features from a point set for further registration decoding.
In contrast, we first propose a model-free structure to skip the encoding step. Instead, we initialize an SCR feature without pre-defining a model, which is to be optimized with the weights of the network from the alignment loss back-propagation process.
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{ulmain1.png}
\end{center}
\caption{Our pipeline. For a pair of input source and target point sets, our method starts with the SCR optimization process to generate a spatial-correlation representation feature, and a transformation regression process further decodes the SCR feature into the desired geometric transformation. The alignment loss is back-propagated to update the weights of the transformation decoder and the SCR feature during the training process. For testing, the weights of the transformation decoder remain constant without updating.}
\label{main}
\end{figure*}
\section{Approach}
We introduce our approach in the following sections. First, we define the learning-based registration problem in section \ref{sc_problem}. In section \ref{sc_scr}, we introduce our spatial-correlation representation. The transformation decoder is illustrated in section \ref{sc_trans_reg}. In section \ref{sc_loss}, we provide the definition of the loss function. Section \ref{sc_optim} illustrates the newly defined optimization strategy.
\subsection{Problem statement}\label{sc_problem}
Given a training dataset $\bold{D}=\{(\bold{S_i}, \bold{G_j}), \text{ where } \bold{S_i}, \bold{G_j} \subset \mathbb{R}^N (N=2 \text{ or } N=3) \}$, the optimization task of a deep-learning-based method for the registration problem can be generally defined in the following way. We assume the existence of a function $g_{\theta}(\bold{S_i},\bold{G_j}) = \phi$ realized by a neural network, where $\theta$ represents the parameters of the network and $\phi$ the predicted transformation parameters. The rigid point set registration is represented by a homogeneous transformation matrix, which is composed of a rotation matrix $\bold{R} \in SO(3)$ and a translation vector $\bold{t} \in \mathbb{R}^3$. Given a pair of input source and target point sets $(\bold{S_i},\bold{G_j})$, a trained model is able to predict the parameters $\phi$ based on the optimized weights $\bold{\theta^{optimal}}$ of the neural network. A pre-defined alignment metric between the transformed source and target point sets serves as the objective loss function to update the weights $\theta$. For a given dataset $\bold{D}$, a stochastic gradient-descent-based algorithm can usually be utilized to optimize the weights $\theta$ by minimizing the loss function:
\begin{equation}
\begin{split}
\bold{\theta^{optimal}} =\argmin_{\theta}[\mathbb{E}_{(\bold{S_i},\bold{G_j})\sim \bold{D}}[\mathcal{L}(\bold{S_i},\bold{G_j}, g_{\theta}(\bold{S_i},\bold{G_j}))]],
\end{split}
\end{equation}
where $\mathcal{L}$ represents a similarity metric.
\subsection{Spatial-Correlation Representation}\label{sc_scr}
In this paper, we define the spatial-correlation representation as the latent feature that characterizes the essence of the spatial correlation between a given pair of source and target point sets. As shown in Figure \ref{first}, to compute the SCR feature, previous works usually feed the source and target point sets to a feature encoder (e.g. PointNet \cite{qi2017pointnet}) for deep spatial feature extraction, followed by a pre-defined correlation module (e.g. \cite{rocco2017convolutional}). However, designing an appropriate feature encoder for unstructured point clouds is challenging because, unlike standard discrete convolutions, one cannot assume a grid-structured input (e.g. a 2D image). Furthermore, the design of a correlation module for a pair of input spatial features has a significant impact on the transformation decoder. The limitations of hand-crafted modules for extracting individual spatial features and the spatial correlation feature motivate us to design a model-free SCR as described below.
To eliminate the side effects of hand-crafted designs of the feature encoder and correlation module, as shown in Figure \ref{main}, we define a trainable latent SCR (Spatial-Correlation Representation) feature for each pair of point sets. As shown in Figure \ref{first}, our SCR optimizer, which takes a randomly initialized latent vector as input and reasons about the optimal SCR for the point set registration task, replaces the previously hand-crafted feature encoder and correlation module. More specifically, as shown in Figure \ref{main}, for a pair of source and target point sets $(\mathbf{S}, \mathbf{G})$, we draw a latent vector $\bold{z}$ from a Gaussian distribution as the initial SCR. The initial SCR is optimized during the training process together with the transformation decoder. In this way, in comparison with previous methods, we avoid the challenging problem of explicitly defining the spatial correlation between two point sets and simply optimize the SCR feature from a randomly initialized state during model training. The implicit design of the SCR gives Deep-3DAligner more flexibility in spatial-correlation feature learning, which makes it more adaptive for the alignment of unseen point sets.
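In implementation terms, initializing the SCR amounts to registering one Gaussian latent vector per training pair as a free, trainable parameter; no encoder network is involved. A minimal NumPy sketch under the paper's reported settings (latent dimension 2048, batch size 128) follows; function and variable names are ours, not from a released implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

def init_scr(num_pairs, dim=2048):
    # One trainable latent SCR vector per (source, target) pair,
    # drawn from N(0, 1); dimension 2048 follows the paper's settings.
    return rng.standard_normal((num_pairs, dim)).astype(np.float32)

Z = init_scr(num_pairs=128)  # batch size 128 as in the paper
```

In a deep-learning framework these vectors would simply be declared as trainable parameters alongside the decoder weights.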
\subsection{Transformation Decoder}\label{sc_trans_reg}
Given the above spatial-correlation representation (SCR) feature, we then design a decoding network to regress the desired transformation parameters, as illustrated in Figure \ref{main}. More specifically, we first formulate the input by stacking the coordinates of each point $x$ in the source point set $S$ with its corresponding SCR feature $\bold{z}$. We denote this input as $[x, \mathbf{z}]$. Then we define a multi-layer perceptron (MLP) architecture for learning the parameters of the rigid transformation that moves the source point set toward the target point set. This architecture includes successive MLP layers with the ReLU activation function, $\{g_i\}_{i=1,2,...,s}$, such that $g_i : \mathbb{R}^{v_{i}}\to \mathbb{R}^{v_{i+1}}$, where $v_{i}$ and $v_{i+1}$ are the dimensions of the layer inputs and outputs respectively. Then, we use a max pool layer to extract the global feature $\mathbf{L}$, calculated as:
\begin{equation}
\begin{split}
\bold{L}=Maxpool\{g_sg_{s-1}...g_1([\bold{x_i},\bold{z}])\}_{\bold{x_i}\in \bold{S}}
\end{split}
\end{equation}
where the notation [*,*] represents the concatenation of vectors in the same domain.
We further decode the global feature $\bold{L}$ to the transformation parameters by a second network, which includes $t$ successive MLP layers with a ReLU activation function $\{\gamma_i\}_{i=1,2,...,t}$ such that $\gamma_i : \mathbb{R}^{w_{i}}\to \mathbb{R}^{w_{i+1}}$, where $w_{i}$ and $w_{i+1}$ are the dimensions of the layer inputs and outputs respectively.
\begin{equation}
\begin{split}
\mathbf{\phi}=\gamma_t\gamma_{t-1}...\gamma_1(\mathbf{L})
\end{split}
\end{equation}
Now we obtain the transformed source point set $\bold{S'}$ by
\begin{equation}
\begin{split}
\bold{S'} =\mathbf{\bold{T}_{\phi}}(\bold{S})
\end{split}
\end{equation}
where $\bold{T}_{\phi}$ denotes the transformation function defined by the predicted transformation parameters $\phi$. Based on the transformed source point set and the target point set, we can further define the alignment loss function in the next section.
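The full decoder pass can be sketched in NumPy. The layer widths, random weights, and the Euler-angle rotation parametrization below are illustrative assumptions (the paper specifies MLP dimensions such as (256, 128) and (128, 64, 3) but not the exact rotation parametrization); in the actual model the weights are trained with Adam:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_weights(dims):
    # Hypothetical random weights; in the real model these are trained.
    return [(0.1 * rng.standard_normal((i, o)), np.zeros(o))
            for i, o in zip(dims[:-1], dims[1:])]

def mlp(x, weights, final_linear=True):
    for k, (W, b) in enumerate(weights):
        x = x @ W + b
        if not (final_linear and k == len(weights) - 1):
            x = np.maximum(x, 0.0)  # ReLU
    return x

def euler_to_rot(r):
    cx, sx = np.cos(r[0]), np.sin(r[0])
    cy, sy = np.cos(r[1]), np.sin(r[1])
    cz, sz = np.cos(r[2]), np.sin(r[2])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def decode(S, z, enc_w, rot_w, trans_w):
    # Stack each point x_i with the shared SCR feature z -> [x_i, z].
    per_point = np.concatenate([S, np.tile(z, (S.shape[0], 1))], axis=1)
    feats = mlp(per_point, enc_w, final_linear=False)
    L = feats.max(axis=0)               # max pool -> global feature L
    R = euler_to_rot(mlp(L, rot_w))     # 3 rotation parameters -> R
    t = mlp(L, trans_w)                 # 3 translation parameters -> t
    return S @ R.T + t                  # transformed source S'

z_dim = 16                              # 2048 in the paper; small here
S = rng.standard_normal((32, 3))
z = rng.standard_normal(z_dim)
enc_w = make_weights([3 + z_dim, 64, 128])
rot_w = make_weights([128, 64, 3])
trans_w = make_weights([128, 64, 3])
S_prime = decode(S, z, enc_w, rot_w, trans_w)
```

Note that the max pool makes the global feature invariant to point ordering, which is why the decoder can consume unstructured point sets.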
\subsection{Loss function}\label{sc_loss}
In our unsupervised setting, we do not have the ground-truth transformation for supervision and we do not assume a direct correspondence between the two point sets. Therefore, a distance metric between two point sets, instead of a point/pixel-wise loss, is desired. In addition, a suitable metric should be differentiable and efficient to compute. In this paper, we adopt the Chamfer distance proposed in \cite{fan2017point} as our loss function. The Chamfer loss is a simple and effective alignment metric defined on two non-corresponding point sets. We formulate the Chamfer loss between our transformed source point set $T_{\phi}(\mathbf{S})$ and target point set $\mathbf{G}$ as:
\begin{equation}
\begin{split}L_{\text{Chamfer}}(T_{\phi}(\mathbf{S}),\mathbf{G})
&= \sum_{x\in T_{\phi}(\mathbf{S})}\min_{y \in \mathbf{G}}||x-y||^2_2\\
&+ \sum_{y\in \mathbf{G}}\min_{x \in T_{\phi}(\mathbf{S})}||x-y||^2_2
\end{split}
\end{equation}
where $\phi$ represents all the parameters defining the transformation; $\phi$ is predicted by our model based on the optimized SCR feature and the trained decoder.
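A direct NumPy implementation of this loss is a few lines; this is the $O(nm)$ pairwise form, whereas production code would typically use a KD-tree or batched GPU kernels for the nearest-neighbor search:

```python
import numpy as np

def chamfer(A, B):
    """Symmetric Chamfer distance between point sets A (n, d) and B (m, d)."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)  # (n, m) squared dists
    return d2.min(axis=1).sum() + d2.min(axis=0).sum()

A = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
B = np.array([[0.0, 0.0, 0.0], [1.0, 1.0, 0.0]])
```

The loss is zero only when each point finds an exact match in the other set, and it is symmetric in the two point sets.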
\subsection{Optimization Strategy}\label{sc_optim}
In section \ref{sc_scr}, we define a set of trainable latent vectors $\bold{z}$, one for each pair of point sets, as the SCR features. During the training process, these latent vectors are optimized along with the weights of the decoder network using a stochastic gradient-descent-based algorithm. For a given training dataset $\bold{D}$, our training process can be expressed as:
\begin{equation}
\begin{split}
\bold{\theta^{optimal}, z^{optimal}} =\argmin_{\theta, \mathbf{z}}[\mathbb{E}_{(\bold{S_i},\bold{G_j})\sim \bold{D}}[\mathcal{L}(\bold{S_i},\bold{G_j}, g_{\theta}(\bold{S_i},\mathbf{z}))]],
\end{split}
\end{equation}
where $\mathcal{L}$ represents the pre-defined loss function.
For a given testing dataset $\bold{W}$, we fix the network parameters $\tilde{\theta}= \bold{\theta^{optimal}}$ and only optimize the SCR features:
\begin{equation}
\begin{split}
\bold{z^{optimal}} =\argmin_{\mathbf{z}}[\mathbb{E}_{(\bold{S_i},\bold{G_j})\sim \bold{W}}[\mathcal{L}(\bold{S_i},\bold{G_j}, g_{\tilde{\theta}}(\bold{S_i},\mathbf{z}))]].
\end{split}
\end{equation}
The learned decoder network parameters $\tilde{\theta}$ provide prior knowledge for the optimization of the SCR. After this optimization process, the desired transformation can be determined by $T_{\phi}=T_{g_{\tilde{\theta}}(\bold{S_i},\mathbf{z_i^{optimal}})}$ and the transformed source shape can be generated by $\mathbf{S_i'}=T_{\phi}(\mathbf{S_i}), \forall \bold{S_i} \in \bold{W}$.
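The test-time procedure can be illustrated with a deliberately simplified toy in which the "trained decoder" is frozen and only $\mathbf{z}$ is updated. Here the decoder just reads $\mathbf{z}$ as a translation (so the optimum is known) and the gradient is taken numerically to keep the sketch dependency-free; the real model instead back-propagates the Chamfer loss through the trained MLP decoder, whose weights stay fixed:

```python
import numpy as np

rng = np.random.default_rng(0)

def chamfer(A, B):
    # Mean-normalized variant, so step sizes are independent of set size.
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return d2.min(axis=1).mean() + d2.min(axis=0).mean()

def decode(S, z):
    # Toy stand-in for a trained decoder: z is read directly as a translation.
    return S + z

def optimize_scr(S, G, steps=200, lr=0.05, eps=1e-4):
    z = 0.1 * rng.standard_normal(3)        # randomly initialized SCR
    for _ in range(steps):
        grad = np.zeros_like(z)
        for i in range(z.size):             # numerical gradient w.r.t. z only;
            e = np.zeros_like(z)            # the decoder stays fixed
            e[i] = eps
            grad[i] = (chamfer(decode(S, z + e), G)
                       - chamfer(decode(S, z - e), G)) / (2 * eps)
        z -= lr * grad
    return z

S = rng.standard_normal((64, 3))
G = S + np.array([0.2, -0.1, 0.1])          # target = rigidly shifted source
z_opt = optimize_scr(S, G)
```

Because the decoder here is a known translation, the optimized SCR should recover the ground-truth offset, mirroring how the learned decoder constrains the SCR optimization at test time.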
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{ul_exp_3.png}
\end{center}
\caption{Randomly selected qualitative results of our model for registration of unseen samples. Left columns: inputs. Right columns: outputs. The red points represent source point sets, and the blue points represent the target point sets.}
\label{allres}
\end{figure*}
\begin{table*}
\small
\begin{center}
\resizebox{\textwidth}{!}{\begin{tabular}{ccccccc}
\hline
Model& MSE(R) &RMSE(R)& MAE(R) &MSE(t)& RMSE(t)& MAE(t)
\\
\hline
ICP \cite{besl1992method}& 894.897339& 29.914835& 23.544817& 0.084643 &0.290935& 0.248755\\
Go-ICP \cite{yang2015go}& 140.477325& 11.852313& 2.588463& 0.000659& 0.025665& 0.007092\\
FGR \cite{zhou2016fast}& 87.661491& 9.362772& 1.999290& 0.000194& 0.013939 &0.002839\\
PointNetLK \cite{aoki2019pointnetlk}& 227.870331& 15.095374& 4.225304& 0.000487& 0.022065& 0.005404\\
DCPv1+MLP(Supervised)\cite{wang2019deep} & 21.115917& 4.595206& 3.291298& 0.000861 &0.029343&0.022501\\
\hline
DCPv2+MLP(Supervised)\cite{wang2019deep} & 9.923701& 3.150191& 2.007210&0.000025& 0.005039& 0.003703\\
DCPv1+SVD(Supervised)\cite{wang2019deep} & 6.480572& 2.545697& 1.505548& 0.000003 &0.001763& 0.001451\\
DCPv2+SVD(Supervised)\cite{wang2019deep} & 1.307329& 1.143385& \textbf{0.770573}& \textbf{0.000003}& \textbf{0.001786}& \textbf{0.001195}\\
\hline
Deep-3DAligner (MLP-based, Unsupervised) & \textbf{1.154405} &\textbf{1.074432}&0.830864&0.000444 &0.020904& 0.014533\\
\hline
\end{tabular}}
\end{center}
\caption{ModelNet40: Test on unseen point clouds. Our model is trained in an unsupervised manner without any ground-truth labels and requires neither an attention mechanism nor SVD-based fine-tuning processes.}
\label{ttt2}
\end{table*}
\section{Experiments}\label{sc_exp}
We describe the experimental dataset and settings in sections \ref{sc_dataset} and \ref{sc_setting} respectively. In sections \ref{sc_exp1} and \ref{sc_exp2}, we test our model's performance under different settings and compare it with state-of-the-art methods. In sections \ref{sc_pd}, \ref{sc_di}, and \ref{sc_do}, we further demonstrate the robustness of our model in the presence of P.D., D.I., and D.O. noise, respectively. In section \ref{sc_direct}, we compare our model's performance with a direct optimization version.
\subsection{Dataset preparation}\label{sc_dataset}
We test the performance of our model for 3D point set registration on the ModelNet40 dataset. This dataset contains 12,311 pre-processed CAD models from 40 categories. For each 3D object, we uniformly sample 1024 points from its surface. Following the settings of previous work, points are centered and re-scaled to fit in the unit sphere. To demonstrate the robustness of our model in the presence of various noise types, we add noise to the target point sets for model evaluation. To prepare the position drift (P.D.) noise, zero-mean Gaussian noise is applied to each point in the target point set; the level of P.D. noise is defined as the standard deviation of the Gaussian. To prepare the data incompleteness (D.I.) noise, we randomly remove a certain amount of points from the entire point set; the level of D.I. noise is defined as the ratio of the eliminated points to the entire set. To prepare the data outlier (D.O.) noise, we randomly add a certain amount of points generated by a zero-mean Gaussian to the point set; the level of D.O. noise is defined as the ratio of the added points to the entire point set.
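These three corruption procedures are straightforward to reproduce. The NumPy sketch below follows the description above; the unit variance of the outlier Gaussian is our assumption, as the text specifies only that it is zero-mean:

```python
import numpy as np

rng = np.random.default_rng(0)

def add_pd_noise(P, level=0.01):
    """Position drift: zero-mean Gaussian jitter; `level` is the std."""
    return P + rng.normal(0.0, level, P.shape)

def add_di_noise(P, level=0.1):
    """Data incompleteness: drop a `level` fraction of the points."""
    keep = rng.permutation(len(P))[int(level * len(P)):]
    return P[keep]

def add_do_noise(P, level=0.1):
    """Data outliers: append a `level` fraction of Gaussian outlier points."""
    outliers = rng.normal(0.0, 1.0, (int(level * len(P)), P.shape[1]))
    return np.concatenate([P, outliers], axis=0)

P = rng.standard_normal((1024, 3))  # a 1024-point cloud as in the paper
```

Each function corrupts only the target point set, matching the evaluation protocol described above.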
\begin{table*}
\small
\begin{center}
\resizebox{\textwidth}{!}{\begin{tabular}{ccccccc}
\hline
Model& MSE(R) &RMSE(R)& MAE(R) &MSE(t)& RMSE(t)& MAE(t)
\\
\hline
ICP\cite{besl1992method}& 892.601135&29.876431&23.626110&0.086005&0.293266&0.251916\\
Go-ICP \cite{yang2015go}& 192.258636&13.865736&2.914169&0.000491&0.022154&0.006219\\
FGR \cite{zhou2016fast}&97.002747&9.848997&1.445460&0.000182&0.013503&0.002231\\
PointNetLK \cite{aoki2019pointnetlk}& 306.323975&17.502113&5.280545&0.000784&0.028007&0.007203\\
DCPv1+SVD (Supervised) \cite{wang2019deep} & 19.201385&4.381938&2.680408&0.000025&0.004950&0.003597\\
DCPv2+SVD (Supervised) \cite{wang2019deep} & 9.923701& 3.150191& 2.007210& \textbf{0.000025} &\textbf{0.005039}&\textbf{0.003703}\\
Deep-3DAligner (MLP-based, Unsupervised) & \textbf{3.715267} &\textbf{1.485832}&\textbf{1.040233}&0.000822 &0.026767& 0.022763\\
\hline
\end{tabular}}
\end{center}
\caption{ModelNet40: Test on unseen categories. Our model is trained in an unsupervised manner without ground-truth labels. Our model does not require SVD-based fine-tuning processes.}
\label{ttt5}
\end{table*}
\subsection{Experimental Settings}\label{sc_setting}
We train our network using batch data from the training data set $\{(\mathbf{S_i},\mathbf{G_i}) | \mathbf{S_i}, \mathbf{G_i} \in \mathbf{D} \}_{i=1,2,...,b}$. We set the batch size $b$ to 128. The latent vectors are initialized from a Gaussian distribution $\mathcal{N}(0,1)$ with a dimension of 2048. For the decoding network, the first part includes 2 MLP layers with dimensions (256,128) and a max pool layer. Then, we use 3 additional MLP layers with dimensions (128, 64, 3) for decoding the rotation parameters and 3 MLP layers with dimensions (128, 64, 3) for decoding the translation vector. We use the leaky-ReLU \cite{xu2015empirical} activation function and implement batch normalization \cite{ioffe2015batch} for every layer except the output layer. Our model is optimized with the Adam optimizer. The learning rate is set to 0.001 with an exponential decay of 0.995 at each epoch. For the outlier and missing-point cases, we clip the Chamfer distance at a fixed value of 0.1.
We use the mean squared error (MSE), root mean squared error (RMSE), and mean absolute error (MAE) to measure the performance of our model and all comparing methods. Lower values indicate better alignment performance. All angular measurements in our results are in units of degrees. The ground-truth labels are only used for the performance evaluation and are not used during the training/testing process.
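Concretely, the rotation metrics reduce to elementwise statistics over the Euler-angle errors in degrees; the translation metrics are computed the same way over the translation vectors. A minimal sketch:

```python
import numpy as np

def angle_errors(pred_deg, gt_deg):
    """MSE, RMSE and MAE between predicted and ground-truth Euler
    angles in degrees; ground truth is used for evaluation only."""
    err = np.asarray(pred_deg, float) - np.asarray(gt_deg, float)
    mse = float((err ** 2).mean())
    return mse, float(np.sqrt(mse)), float(np.abs(err).mean())

mse, rmse, mae = angle_errors([[10.0, 20.0, 30.0]], [[11.0, 18.0, 30.0]])
```

Lower values on all three statistics indicate better alignment, as noted above.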
For performance evaluation, we compare our method with both supervised and unsupervised methods. The current state-of-the-art results are achieved by DCP, a supervised approach with four different versions. The version DCPv1+MLP uses deep neural networks (DNNs) to model the transformation. The version DCPv2+MLP improves its performance by integrating a supervised attention mechanism. DCPv1+SVD further improves the performance by integrating an additional SVD-based fine-tuning process, and DCPv2+SVD integrates both attention and SVD to further boost the performance. Since our Deep-3DAligner is an unsupervised approach and only uses DNNs as an auxiliary function to model the transformation, we use DCPv1+MLP as the baseline model for comparison.
\subsection{Full Dataset Training \& Testing}\label{sc_exp1}
In this experiment, we follow previous works, DCP \cite{wang2019deep} and PointNetLK \cite{aoki2019pointnetlk}, to test our model for 3D point set registration on unseen point sets.\\
\noindent{\textbf{Experiment Setting:}}
For the 12,311 CAD models from the ModelNet40, following exactly DCP's setting, we split the dataset into 9,843 models for training and 2,468 models for testing. As in DCP, we treat each 3D shape as our source point set, and randomly apply a rigid transformation on it along each axis to generate our target point set. The rotation is uniformly sampled from 0 to 45 degrees, and the translations are uniformly sampled in $[-0.5, 0.5]$. Note that we follow exactly the same experimental setting as the previous work in DCP for synthetic data simulation, where both source and target point sets are simulated with the same sampling. We train our Deep-3DAligner, DCP and PointNetLK on the divided training dataset and then evaluate the performance on the testing set. ICP, Go-ICP, and FGR are tested directly on the testing dataset. Note that our model is trained without using any ground-truth information, and our model does not require the SVD-based fine-tuning processes as used in DCP.\\
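The pair-simulation protocol above can be written down directly; the Euler-angle composition order below is our assumption, since DCP samples each axis independently:

```python
import numpy as np

rng = np.random.default_rng(0)

def euler_to_rot(a, b, c):
    Rx = np.array([[1, 0, 0],
                   [0, np.cos(a), -np.sin(a)],
                   [0, np.sin(a), np.cos(a)]])
    Ry = np.array([[np.cos(b), 0, np.sin(b)],
                   [0, 1, 0],
                   [-np.sin(b), 0, np.cos(b)]])
    Rz = np.array([[np.cos(c), -np.sin(c), 0],
                   [np.sin(c), np.cos(c), 0],
                   [0, 0, 1]])
    return Rz @ Ry @ Rx

def random_rigid_pair(S):
    """DCP-style pair simulation: per-axis rotations uniform in [0, 45]
    degrees, translations uniform in [-0.5, 0.5], same point sampling."""
    a, b, c = np.radians(rng.uniform(0.0, 45.0, 3))
    t = rng.uniform(-0.5, 0.5, 3)
    R = euler_to_rot(a, b, c)
    return S, S @ R.T + t, R, t

src = rng.standard_normal((1024, 3))
_, tgt, R, t = random_rigid_pair(src)
```

Both source and target are produced from the same sampled points, mirroring the simulation setting used in DCP.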
\noindent{\textbf{Results:}}
We list the quantitative experimental results in Table \ref{ttt2}. In this table, we evaluate the performance based on the prediction errors of rotation angles and translation vectors. The first three columns illustrate the comparison results for rotation angle prediction. As we can see from the results, our method achieves significantly better performance than the baseline DCPv1+MLP model and even slightly better or comparable performance against the state-of-the-art approach (DCPv2+SVD). For example, our method achieves 1.074 RMSE(R) in rotation prediction, compared to 1.143 RMSE(R) achieved by DCPv2+SVD. The last three columns illustrate the comparison results for translation prediction. As we can see from the results, our method achieves slightly better performance than DCPv1+MLP. One may note that the translation prediction performance of our model is inferior to that of DCPv2+MLP, DCPv1+SVD, and DCPv2+SVD. The reason for this gap is that DCPv2+MLP/SVD adopts an additional attention mechanism in its network for enhancement, and DCPv1/DCPv2+SVD leverage SVD as an additional fine-tuning process to refine their results. SVD incurs additional computational complexity, whereas our method uses only MLP-based networks and is trained end to end. Moreover, DCP assumes the same sampling of points, and we observed that DCP experiences a severe performance degradation for randomly sampled points of the source and target shapes, whereas our model with the Chamfer distance is robust to the way points are sampled. As an unsupervised learning paradigm, we do not use ground-truth labels or any correspondence relationship between source and target points for training. The Chamfer distance-based loss is less sensitive to translation, which possibly contributes to the deficit in translation prediction performance; we will explore other loss functions in a future study to address this problem. We note that our translation prediction results are still better than those of PointNetLK.
As shown in Figure \ref{allres}, we randomly select the qualitative results from the testing dataset. For shapes in various poses, the registration results indicate our model achieves remarkable performance in aligning the source point sets with the target point sets.
\subsection{Category Split}\label{sc_exp2}
In this experiment, we follow previous works, DCP and PointNetLK, to test our model for 3D point set registration on objects from unseen categories. \\
\begin{table*}
\centering
\small
\begin{tabular}{ccccccc}
\hline
Model & MSE(R)& MAE(R) & MSE(t) & MAE(t)\\
\hline
Ours & 2.9240& 0.8541 & 0.0002 & 0.012 \\
DCP&12.040397&2.190995&0.000642&0.015552\\
\hline
\end{tabular}
\caption{Quantitative result for 3D point set registration in presence of P.D. noise.}
\label{tab777}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{l2.png}
\end{center}
\caption{Randomly selected qualitative results in presence of P.D. noise. Left columns: inputs. Right columns: outputs. The red points represent source point sets, and the blue points represent the target point sets.}
\label{fig5}
\end{figure*}
\noindent{\textbf{Experiment Setting:}}
To test the generalizability of our model, we split ModelNet40 evenly by category into training and testing sets in the same way as DCP. Our Deep-3DAligner, DCP, and PointNetLK are trained on the first 20 categories and tested on the remaining categories. ICP, Go-ICP, and FGR are also tested on the held-out categories. Note that we follow exactly the same experimental setting as the previous work in DCP for synthetic data simulation, where both source and target point sets are simulated with the same sampling. Our model is trained without using any ground-truth information, and our model does not require any SVD-based fine-tuning processes.\\
\noindent{\textbf{Results:}} As shown in Table \ref{ttt5}, the quantitative results indicate that our model, as an unsupervised method, achieves superior generalization ability on unseen categories. In comparison, all the supervised learning methods experience a dramatic performance drop compared to the results in Table \ref{ttt2}. For example, PointNetLK and DCPv2+SVD obtain an MSE(R) of 227.87 and 1.31 in ``the training/testing split test'' described in section \ref{sc_exp1} (see Table \ref{ttt2}), but the corresponding values in the ``seen/unseen categories test'' described in this section increase to 306.32 and 9.92 respectively (see Table \ref{ttt5}). Unsupervised algorithms, such as ICP and FGR, achieve similar accuracy for unseen categories and unseen point clouds. Our method has only a small performance drop on unseen categories compared to the results on unseen point clouds. In particular, for the prediction of the rotation matrix on unseen categories, our method outperforms the state-of-the-art DCPv2+SVD by a large margin (6.21) in MSE(R).
\subsection{Resistance to Point Drifts (P.D.) Noise}\label{sc_pd}
In this experiment, we further verify our model's performance for 3D rigid point set registration in the presence of P.D. noise.\\
\noindent{\textbf{Experiment Setting:}} To test our model's performance for 3D rigid point set registration in the presence of P.D. noise, we first split ModelNet40 as explained in section 4.3. Then we add the P.D. noise to the target shape as introduced in section 4.1. In this section, we choose DCP as the baseline model for comparison. We train DCP and our model using our prepared training dataset and then test them using the testing dataset. The quantitative results for a P.D. noise level of 0.01 are demonstrated in Table \ref{tab777}, and additional randomly selected qualitative results for P.D. noise levels from 0.01 to 0.1 are shown in Figure \ref{fig5}. Note that we follow exactly the source code provided by DCP and the default settings in DCP for training and testing their model. Our model is trained without using any ground-truth information, and our model does not require any SVD-based fine-tuning processes.\\
\begin{table*}
\centering
\small
\begin{tabular}{ccccccc}
\hline
Model & MSE(R)& MAE(R) & MSE(t) & MAE(t)\\
\hline
Ours & 7.3354&1.4702 & 0.0008 &0.0222 \\
DCP & 34.624447&3.720148 & 0.002301 &0.032245 \\
\hline
\end{tabular}
\caption{Quantitative result for 3D point set registration in presence of D.I. noise.}
\label{tab888}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{l3.png}
\end{center}
\caption{Randomly selected qualitative results on dataset in presence of D.I. noise. Left columns: inputs. Right columns: outputs. The red points represent source point sets, and the blue points represent the target point sets.}
\label{fig6}
\end{figure*}
\noindent{\textbf{Results:}} As shown in Table \ref{tab777}, our method is more robust to P.D. noise. For all the shown alignment metrics, our method achieves much better results than DCP. Especially for the rotation matrix estimation, our method achieves an MSE of 2.92 in comparison to 12.04 achieved by DCP. In addition, from the qualitative results shown in Figure \ref{fig5}, we notice that even when the P.D. noise level increases from 0.01 to 0.1, the alignment result remains nearly perfect for most cases.
\subsection{Resistance to Data Incompleteness (D.I.) Noise}\label{sc_di}
In this experiment, we further verify our model's performance for 3D rigid point set registration in the presence of D.I. noise.\\
\noindent{\textbf{Experiment Setting:}} To test our model's performance for 3D rigid point set registration in the presence of D.I. noise, we first split ModelNet40 as explained in section 4.3. Then we add the D.I. noise to the target shape as introduced in section 4.1. We compare our model with the state-of-the-art DCP model. Both models are trained using the training set and then tested on the test set. The quantitative results for a D.I. noise level of 0.1 are demonstrated in Table \ref{tab888}. Figure \ref{fig6} gives some randomly selected registration results for D.I. noise levels from 0.1 to 0.3. One should note that our model is trained without using any ground-truth information, and our model does not require any SVD-based fine-tuning processes. We adjust the Chamfer distance loss by clipping values which exceed 0.1.\\
\noindent{\textbf{Results:}} As shown in Table \ref{tab888}, for data in the presence of D.I. noise, our model shows superior performance in comparison to DCP and achieves better results on all evaluation metrics. For the precision of rotation matrix estimation, our method achieves an MSE(R) of 7.33 in comparison to 34.62 achieved by DCP. However, as can be seen in Figure \ref{fig6}, our model performs well when the D.I. level is lower than 0.2 but experiences a performance drop when the D.I. level increases to 0.3. At a D.I. noise level of 0.3, our model still performs well on objects with missing parts in the middle of the body: for cases such as the screen in the first column and the chair in the third column, the optimal alignment solution driven by the Chamfer distance loss is correct. However, for the case in the last column (the entire lower part of the target point set is missing), our method mistakenly aligns the two objects by shifting the source point set higher. For these cases, without additional adjustment for the missing points, minimization of the pre-defined unsupervised alignment loss eventually leads to alignment failures.
\subsection{Resistance to Data Outliers (D.O.) Noise}\label{sc_do}
In this experiment, we further verify our model's performance for 3D rigid point set registration in the presence of D.O. noise.\\
\noindent{\textbf{Experiment Setting:}} To test our model's performance for 3D rigid point set registration in the presence of D.O. noise, we first split ModelNet40 as explained in section 4.3. Then we add the D.O. noise to the target shape as introduced in section 4.1. We compare the performance of our model with the DCP model. Both models are trained on the training set and then evaluated on the test set. The quantitative results for a D.O. noise level of 0.1 are demonstrated in Table \ref{tab999}, and some randomly selected qualitative results for D.O. noise levels from 0.1 to 0.6 are shown in Figure \ref{fig4}. We adjust the Chamfer distance loss by clipping values which exceed 0.1. \\
\noindent{\textbf{Results:}} As shown in Table \ref{tab999}, for data in the presence of D.O. noise, the quantitative results achieved by our model are better than those of DCP. For the precision of rotation matrix estimation, our method achieves 16.58 MSE(R) in comparison to 46.99 achieved by DCP. For the precision of translation vector estimation, our method achieves 0.0060 MSE(t) and 0.0255 MAE(t) in comparison to 0.0029 MSE(t) and 0.0371 MAE(t) achieved by DCP. Even though our model is trained in an unsupervised way, it achieves comparable performance for translation prediction in the presence of D.O. noise. As can be seen in Figure \ref{fig4}, our model performs well when the D.O. level is as low as 0.1, but experiences a clear performance drop when the D.O. level increases to 0.6. At a D.O. noise level of 0.6, our model can still align well for approximately half of the cases. Some failure cases may occur due to the initialization problem when our model faces heavy outliers. Cases with D.O. noise need more attention in future work; here we only show our model's performance without any special handling of the outliers.
\begin{table*}
\centering
\small
\begin{tabular}{ccccccc}
\hline
Model & MSE(R)& MAE(R) & MSE(t) & MAE(t)\\
\hline
Ours & 16.5751 & 1.3631 & 0.0060 & 0.0255 \\
DCP & 46.992622 & 4.586546 & 0.002941 & 0.037136 \\
\hline
\end{tabular}
\caption{Quantitative result for 3D point set registration in presence of D.O. noise.}
\label{tab999}
\end{table*}
\begin{figure*}
\begin{center}
\includegraphics[width=17cm]{l1.png}
\end{center}
\caption{Randomly selected qualitative results in presence of D.O. noise. Left columns: inputs. Right columns: outputs. The red points represent source point sets, and the blue points represent the target point sets.}
\label{fig4}
\end{figure*}
\begin{table*}
\centering
\small
\begin{tabular}{ccccccccc}
\hline
Methods& MSE(R)& RMSE(R)& MAE(R) & MSE(t) &RMSE(t)& MAE(t)\\
\hline
Ours &1.154405 &1.074432&0.830864&0.000444 &0.020904& 0.014533 \\
Direct Optimiz. & 406.131713&16.454065&13.932246&0.087263&0.295404&0.253658\\
\hline
\end{tabular}
\caption{Quantitative comparison for 3D point set registration between our model and the direct optimization algorithm.}
\label{tabdirect}
\end{table*}
\subsection{Ablation study: comparison with the direct optimization method}\label{sc_direct}
In this experiment, we conduct further experiments to verify the design of our spatial-correlation representation (SCR) and decoder network. \\
\noindent{\textbf{Experiment Setting:}}
To test the effectiveness of our designed decoder with the learned prior knowledge, we compare our model with a direct optimization algorithm. For the direct optimization version, we optimize the same alignment loss by passing the gradients directly to the R, t matrices. For preparing the dataset, we follow exactly the settings of section 4.3. The direct optimization algorithm is tested directly on the testing set. Our model is pre-trained on the training dataset without using any label information and then tested on the testing set for comparison. We use the same evaluation metrics as in section 4.3, and the quantitative results are demonstrated in Table \ref{tabdirect}. \\
\noindent{\textbf{Results and Discussion:}} As shown in Table \ref{tabdirect}, directly optimizing R, t with respect to the Chamfer loss cannot lead to a feasible solution of the point set registration problem. This method only achieves an MSE(R) of 406.13 for rotation prediction, which is unacceptable in real-world scenarios. With the same experimental settings, our proposed method achieves significantly better performance, which demonstrates the effectiveness of the proposed SCR representation and decoder network.\\
Given source and target point sets with a size of $N = 2048$, we can first define a relative position tensor of size $N\times 3$ which characterizes the relative positions between the two point sets, with alignment measured using the Chamfer loss. Our approach essentially leverages a multi-layer neural network to learn the nonlinear mapping function from the relative position tensor to the geometric transformation representation (R, t) (dimension of 6). Technically, it is possible to directly optimize the Chamfer distance over the parameters (R, t) by setting up a single-layer network (i.e. the geometric transformation representation layer). However, it is practically not feasible to train a single-layer neural network capable of mapping the high-dimensional relative position tensor (e.g. dimension of $3,000$ for point sets of $1,000$ points) to the low-dimensional geometric transformation representation (R, t) (dimension of 6). Therefore, our method uses a multi-layer neural network to model this nonlinear mapping/dimension reduction problem. In addition, to better formulate the concept of the relative position tensor, we propose the spatial-correlation representation (SCR) feature. During training, we jointly optimize the SCR and decoder to complete the nonlinear mapping process. In the testing phase, we fix the trained decoder and only optimize the SCR for a newly given pair of source and target point sets.
\section{Conclusion}
This paper introduces a novel unsupervised learning-based approach for point set registration. In contrast to recent learning-based methods (e.g. Deep Closest Point), our Deep-3DAligner gains competitive advantages by eliminating the side effects of hand-crafted designs of the feature encoder and correlation module. We conducted experiments on the ModelNet40 dataset to validate the performance of our unsupervised Deep-3DAligner for point set registration. The results demonstrate that our proposed approach achieves performance comparable to the most recent supervised state-of-the-art approaches. Our Deep-3DAligner also achieves reliable performance for 3D point sets in the presence of Gaussian noise, outliers, and missing points.
\bibliographystyle{IEEEtran}
\section{Introduction}\label{sec: introduction}
\NEW{
Millimeter wave (mmWave) wireless communications are one of the most promising candidates to support extremely high data rates in future wireless networks~\cite{rappaport2014mmWaveBook,Niu2015Survey,Nitsche2014IEEE}.
MmWave communications are attractive for many applications such as ultra short range communications, augmented reality, data centers, vehicular networks, mobile offloading, mobile fronthauling, and in-band backhauling. Due to their great commercial potential, several international activities have emerged to standardize mmWave communications in wireless personal and local area networks (WPANs and WLANs). Examples include IEEE~802.15.3c, ECMA~387~\cite{rappaport2014mmWaveBook}, IEEE~802.11ad~\cite{Nitsche2014IEEE}, WirelessHD, WiGig, and recently the IEEE~802.11ay study group on next generation 60~GHz.\footnote{Detailed information about these projects can be found at the following addresses: \url{http://www.wirelesshd.org} (WirelessHD), \url{http://wirelessgigabitalliance.org} (WiGig), and \url{http://www.ieee802.org/11/Reports/ng60_update.htm} (IEEE~802.11ay), respectively.}
}
Special propagation features and hardware constraints of mmWave systems introduce many new challenges in the design of efficient physical, medium access control (MAC), and routing layers.
The severe channel attenuation, vulnerability to obstacles, directionality of mmWave communications, the reduced interference footprint, and high signaling overhead demand a thorough reconsideration of traditional protocols and design principles, especially at the MAC layer.
\NEW{In this paper, we focus on short range mmWave networks. Compared to~\cite{rappaport2014mmWaveBook,Nitsche2014IEEE,Niu2015Survey} that survey either the existing standards or the research literature, we deliver original contributions based on the features specific to mmWave networks that are mostly ignored in the design of the existing mmWave standards.
To distinguish this paper from~\cite{shokri2015mmWavecellular}, which discusses MAC layer design for mmWave cellular networks, we note two important differences between short range and cellular networks that are most relevant to our study: (i) short range networks may rely on carrier sensing among terminals, and (ii) they may use multihop communications, which also affects traffic patterns.
In this paper, we show that, contrary to mainstream belief, a mmWave network may exhibit both noise-limited and interference-limited regimes. We highlight the significant mismatch between transmission rates of control and data messages, which challenges the MAC layer efficacy of the existing mmWave standards in dense deployment scenarios. We also identify the prolonged backoff time problem and discuss the beam training overhead and its consequences, such as the alignment-throughput tradeoff. To address these new problems, we discuss the necessity of new collision-aware hybrid resource allocation protocols that facilitate concurrent transmissions with QoS guarantees, and also the need for a more efficient retransmission policy. We argue for the benefits of a hybrid reactive/proactive control plane to minimize the signaling overhead and propose, for this purpose, a new MAC layer message, which is also able to alleviate the prolonged backoff time. Finally, we discuss the potential of multihop communication techniques to compensate for the error-prone mmWave physical layer, provide reliable mmWave connections, and extend mmWave communication range.}
\NEW{Throughout this paper, we identify critical MAC layer aspects of the existing mmWave standards that may limit the efficacy and use cases of future mmWave networks. The detailed discussions and proposed solution approaches presented in this paper provide useful insights for future and emerging mmWave network technologies, such as IEEE~802.11ay.}
The rest of this paper is organized as follows. In Section~\ref{sec: fundamentals}, we describe the essential aspects of mmWave networks. In Section~\ref{sec: standardization}, existing mmWave standards are briefly reviewed. \NEW{Section~\ref{sec: Gap-Analysis} presents new fundamental aspects that are missing in the current standards, followed by MAC design guidelines in Section~\ref{sec: MAC-Design-Aspects}.} Concluding remarks are presented in Section~\ref{sec: concluding-remarks}.
\section{Fundamentals}\label{sec: fundamentals}
\subsection{The Directed mmWave Wireless Channel}\label{subsec: mmWave-channel}
MmWave communications use frequencies in the range 30--300~GHz, though the frequencies 6--30~GHz are also often referred to as mmWave\cite{shokri2015mmWavecellular}. The main characteristics of mmWave systems are high path-loss, large bandwidth, short wavelength/high frequency, and high penetration loss. Very small wavelengths allow the implementation of massive numbers of antenna elements in the current size of radio chips, which boosts the achievable antenna gain at the cost of extra signal processing. Such a gain can largely or even completely compensate for the higher path-loss of mmWave systems without any extra transmission power. Moreover, directional communications introduce the concept of directional spatial channel, i.e., a channel can be established in a specific direction with a range that varies according to the directionality level.
\subsection{Beam Training}\label{sec: beamsearching}
The use of low-complexity and low-power mmWave devices, along with the massive number of antennas, make traditional digital beamforming based on instantaneous channel state information very expensive. Instead, the existing standards establish a mmWave link using analog beamforming (also called beam-searching) based on pre-defined beam steering vectors (beam training codebook), each covering a certain direction with a certain beamwidth~\cite{rappaport2014mmWaveBook,Niu2015Survey,Nitsche2014IEEE}.
Current standards suggest a three-stage beam-searching technique to reduce alignment overhead. After a quasi-omnidirectional (low resolution pattern) sweep, a coarse grained sector-level sweep (second level resolution pattern) is performed, followed by a beam-level refinement phase (the highest resolution pattern specified in the codebook). An exhaustive search over all possible transmission and reception directions is applied in each level through a sequence of pilot transmissions. The combination of vectors that maximizes the signal-to-noise ratio (SNR) is then selected for the beamforming.
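The exhaustive sweep performed at each level of the hierarchy can be sketched as follows. The SNR measurement routine is a placeholder for a pilot transmission and a measurement at the receiver, not a standardized API; beam identifiers stand in for the codebook steering vectors:

```python
def exhaustive_beam_search(tx_codebook, rx_codebook, measure_snr):
    """Try every (tx, rx) beam pair and keep the pair with the highest SNR.

    tx_codebook, rx_codebook: iterables of beam identifiers (one per
    codebook direction at the current resolution level).
    measure_snr(tx_beam, rx_beam): placeholder for sending a pilot on
    tx_beam and measuring the received SNR on rx_beam.
    """
    best_pair, best_snr = None, float("-inf")
    for tx_beam in tx_codebook:
        for rx_beam in rx_codebook:
            snr = measure_snr(tx_beam, rx_beam)
            if snr > best_snr:
                best_pair, best_snr = (tx_beam, rx_beam), snr
    return best_pair, best_snr
```

The quadratic number of pilot transmissions in this inner loop is precisely why the standards split the search into quasi-omnidirectional, sector-level, and beam-level stages, each restricted to the winner of the previous stage.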
\NEW{IEEE~802.11ad allows investigation of multiple beamforming vectors within a message, as opposed to sending separate training on each beamforming vector, which is the approach adopted in IEEE~802.15.3c. This
modification makes it possible to explore multiple beam patterns with lower overall overhead in IEEE~802.11ad. One of the main drawbacks of analog beamforming is the lack of multiplexing gain, which is addressed by the hybrid digital/analog beamforming architecture, see~\cite[Section~II-C]{shokri2015mmWavecellular} for more details.
}
\subsection{Deafness and Blockage}
Directional communications and vulnerability to obstacles in mmWave networks have two main consequences~\cite{rappaport2014mmWaveBook}: (1) deafness and (2) blockage.
\emph{Deafness} refers to the situation in which the main beams of the transmitter and the receiver do not point to each other, preventing the establishment of a directional communication link. \NEW{Deafness introduces a time consuming procedure for beam searching or \emph{alignment}; an operation in which two beams are pointing to each other, so that the link budget between the transmitter and the receiver is maximized.} The alignment procedure complicates the link establishment phase; however, it substantially reduces multiuser interference~\cite{Singh2011Interference}, as the receiver listens only to a specific directed channel.
In the extreme case, multiuser interference is almost completely suppressed and no longer limits the throughput, so that a mmWave network may operate in a noise-limited regime, unlike conventional interference-limited networks.\footnote{\NEW{Not being in an interference-limited regime does not necessarily imply that a network operates in a noise-limited regime, rather it only implies that the throughput per channel use is limited by the noise power. The network throughput performance, however, can be limited by other factors such as the signaling overhead, as will be argued in Section~\ref{sec: rate-mismatch}.}} This unique feature makes mmWave suitable for very dense deployments of infrastructure nodes and terminals.
\emph{Blockage} instead refers to very high attenuation due to obstacles (e.g., 35~dB due to the human body~\cite{shokri2015mmWavecellular}) that cannot be solved by just increasing the transmission power or increasing the antenna gain using narrower beams. \NEW{Overcoming blockage requires a search for alternative directed mmWave channels that are not blocked, which however entails a new alignment procedure and the consequent overhead.}
\subsection{Control Channel}
Many operations such as establishing a communication channel, discovering neighbors, exchanging routing information, and coordinating channel access rely on the exchange of signaling messages on a control channel. The characteristics of mmWave communications introduce fall-back and directionality tradeoffs, which also appear in mmWave cellular networks~\cite{shokri2015mmWavecellular}. The \emph{fall-back} tradeoff is the tradeoff between sending control messages through a mmWave or a microwave channel. The mmWave channel is subject to blockage, reducing the reliability of the control channel. A dedicated microwave control channel facilitates network synchronization and broadcasting at the expense of higher hardware complexity and energy consumption, since an extra transceiver should be tuned on the microwave control channel~\cite{nitsche2015steering}. \NEW{Moreover, a microwave control channel cannot be used to estimate the mmWave channel and adopt proper beamforming. This is a serious drawback that may hinder the use of hybrid beamforming in future mmWave networks.}
The \emph{directionality} tradeoff refers to the option of establishing a control channel in omnidirectional or directional operation modes. An omnidirectional control channel alleviates the deafness problem at the expense of being subject to a very short range, whereas a directional one increases the coverage with extra alignment overhead.
Altogether, we may have two justifiable control channels: (1) omnidirectional-microwave, which is employed in ECMA~387, and (2) directional-mmWave,\footnote{Note that realizing a control channel in the mmWave band with omnidirectional transmission and/or reception while having antenna gains for data transmission introduces a mismatch between the ranges at which a link with reasonable data rate can be established and the range at which control messages can be exchanged. Such a mismatch may substantially degrade the system performance, see~\cite{shokri2015mmWavecellular} and references therein.} which is employed in IEEE~802.15.3c and IEEE~802.11ad. The delay and coverage performance of these control channels are evaluated for a cellular context in~\cite{shokri2015mmWavecellular}, and evaluating their performance in short range networks is an interesting subject of future studies.
\section{Standardization in mmWave Communications}\label{sec: standardization}
In this section, we shortly review the recent IEEE standards for personal and local area networks at 60~GHz.
Broadly speaking, the standards define a network with one coordinator and several mmWave devices.\footnote{ECMA~387 supports distributed network architectures as well~\cite{rappaport2014mmWaveBook}.} The coordinator, which can be a device itself, is responsible for broadcasting synchronization beacons and managing radio resources. Fig.~\ref{fig: NetworkArchitecture} shows a mmWave network with four directional links.
\begin{figure}[!t]
\centering
\includegraphics[width=0.8\columnwidth]{NetworkArchitecture}\\
\caption{Network architecture of existing mmWave WPAN and WLAN. The coordinator broadcasts synchronization commands and manages available resources.}
\label{fig: NetworkArchitecture}
\end{figure}
\subsection{Personal Area Networks: IEEE~802.15.3.c}
The IEEE~802.15.3c standard~\cite{rappaport2014mmWaveBook} has been considered as one of the prominent MAC candidates to support mmWave wireless personal area networks, known as piconets. Supporting up to 5.78~Gbps data rate,
it enables several applications such as high speed Internet access, streaming content, video on demand, and high definition TV.
Among a group of devices, one will be selected as piconet coordinator (PNC), broadcasting beacon messages. Time is divided into successive super-frames, each consisting of three portions: beacon, contention access period (CAP), and channel time allocation period (CTAP), as shown in Fig.~\ref{subfig: TimingStructure-802.15.3c}.
\NEW{In the beacon period, the coordinator transmits an omnidirectional beacon or multiple quasi-omnidirectional beacons to facilitate the discovery procedure.
In the CAP, devices contend to register their channel access requests at the PNC, based on carrier sense multiple access with collision avoidance (CSMA/CA). Although some devices with low QoS requirements may use this period for data transmission, PNC serves requests with high QoS demands, registered in CAP, during CTAP. Resource allocation in CTAP is based on time division multiple access (TDMA). CTAP is comprised of channel time allocations (CTAs), serving data traffic that requires QoS guarantees.}
\subsection{Local Area Networks: IEEE~802.11ad}
IEEE~802.11ad adds modifications to the IEEE~802.11 physical and MAC layers to enable mmWave communications at 60~GHz. It provides up to 6.7~Gbps data rate using 2.16~GHz bandwidth over a short range. IEEE~802.11ad supports many applications, including uncompressed high-definition multimedia transmissions and wireless docking stations.
IEEE~802.11ad defines a network as a personal basic service set (PBSS) with one coordinator, called PBSS control point (PCP), and several stations.
\NEW{A superframe, called beacon interval, is divided into a beacon header interval (BHI) and a data transfer interval (DTI). BHI consists of a beacon transmission interval (BTI), an association beamforming training (A-BFT), and an announcement transmission interval (ATI). DTI consists of several contention-based access periods (CBAPs) and service periods (SPs). In BTI, PCP transmits directional beacon frames that contain basic timing for the personal BSS, followed by beamforming training and association to PCP in the A-BFT period. ATI is allocated for request-response services where PCP sends information to the stations.
Depending on the required QoS level, a device will be scheduled in the CBAP to transmit data using CSMA/CA, or in the SP for contention-free access using TDMA. This schedule is announced to the participating stations prior to the start of DTI.}
Fig.~\ref{subfig: TimingStructure-802.11ad} illustrates generic timing segmentation of a superframe in IEEE~802.15.3c and a beacon interval in IEEE~802.11ad.
\begin{figure}[!t]
\centering
\subfigure[Superframe of IEEE~802.15.3c]{
\includegraphics[width=\columnwidth]{TimingStructureA}
\label{subfig: TimingStructure-802.15.3c}
}
\subfigure[Beacon interval of IEEE~802.11ad]{
\centering
\includegraphics[width=\columnwidth]{TimingStructureB}
\label{subfig: TimingStructure-802.11ad}
}
\caption{Network timing structure of existing IEEE mmWave standards. In IEEE~802.15.3c, beacon messages are transmitted in BP. Channel access requests are made in CAP and served in CTAP using TDMA. Similar procedures are adopted in IEEE~802.11ad.}
\label{fig: TimingStructure}
\end{figure}
\subsection{\NEW{Local Area Networks: IEEE~802.11ay}}
\NEW{IEEE 802.11ay is the most recent study group within the IEEE, formed in May~2015, that aims to modify IEEE~802.11ad to enhance the throughput, range, and, most importantly, the use cases, while ensuring backward compatibility and coexistence with legacy mmWave standards. Supporting data rates of at least 20~Gbps\footnote{To the best of our knowledge, the maximal target data rate is not specified so far, but there are indications that the target rates aim towards hundred(s) of~Gbps.} and a maximum range of 1000~m, IEEE~802.11ay enables a wide variety of applications ranging from backup wireless connections in data centers to mobile backhauling. To achieve these goals, the study group is investigating several techniques, including channel bonding, hybrid beamforming, and higher modulation orders, among others.
As the study group has not released any stable document so far, we cannot provide further details on this standard. In the next two sections, we highlight the bottlenecks of current mmWave standards and our suggestions to this study group on how to improve MAC layer efficiency of future mmWave networks.}
\section{\NEW{Limitations of Existing mmWave Standards}}\label{sec: Gap-Analysis}
In this section, we discuss the main MAC design issues that arise in mmWave communications and state the weaknesses of the current solutions, including existing standards, when they are applied to support next generation short range wireless communications.
\NEW{To highlight the existing challenges and possible solution approaches in the following sections, we simulated a mmWave WPAN with a random number of aligned mmWave links (aligned transmitter-receiver pairs),\footnote{\NEW{In many use cases of mmWave networks such as mobile backhauling, we can neglect the beam training overhead due to low-mobility and high traffic. The impact of the alignment overhead on the network performance is discussed in Section~\ref{sec: alignment-overhead}.}} all operating with the same beamwidth.
The number of links is a Poisson random variable with a given density per unit area. They are uniformly distributed in a $10\times 10~{\text{m}}^2$ area and operate at 60~GHz. We also uniformly distribute a random number of obstacles with density 0.25 (on average 1 obstacle in a $2\times 2~{\text{m}}^2$ area) in the environment. The obstacles are in the shape of lines with random orientation, and their length is uniformly distributed between 0 and 1~m. Penetration loss is $-30$~dB, the path-loss exponent is 3, and the minimum required SNR at the receiver is 10~dB. We simulate both slotted~ALOHA and TDMA, a simple collision-based versus a simple collision-free protocol. Both slotted~ALOHA and TDMA use the same directionality level. For slotted~ALOHA, in a given time slot, every link is active with a given transmission probability. For TDMA, we activate only one link at a time, similar to existing mmWave standards. Active links transmit with power 2.5~mW. Every transmitter generates traffic with a constant bit rate (CBR) of 300~Mbps, the size of all packets is 10~kB, the time slot duration is 25~$\mu$s, the transmission rate is 1 packet per slot (link capacity around 3~Gbps), the transmitters have infinite buffers to store packets, and the simulation time is 1 second.}
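The random deployment described above can be sketched as follows. The Poisson draw uses plain CDF inversion to stay within the standard library; receiver and obstacle placement (omitted here) would follow the same pattern. Function and parameter names are illustrative, not taken from our simulator:

```python
import math
import random

def drop_links(density, area_side=10.0):
    """Drop a Poisson number of transmitters uniformly in a square area.

    density: expected number of links per unit area.
    Returns a list of (x, y) transmitter positions.
    """
    mean = density * area_side ** 2
    # Poisson variate via CDF inversion: walk the pmf until it covers u.
    u = random.random()
    n, p = 0, math.exp(-mean)
    cdf = p
    while u > cdf:
        n += 1
        p *= mean / n
        cdf += p
    # Uniform placement of each transmitter inside the square.
    return [(random.uniform(0.0, area_side), random.uniform(0.0, area_side))
            for _ in range(n)]
```

Averaged over many drops, this reproduces the intended density; a density of 2 links per unit area in the $10\times 10~{\text{m}}^2$ region yields 200 links on average.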
\subsection{\NEW{Transitional Behavior of Interference}}\label{sec: transitional-behavior}
Directional communications with pencil-beam operation significantly reduces multiuser interference in mmWave networks. An interesting question is whether in this case, a mmWave network is noise-limited, as opposed to conventional interference-limited networks. This is a fundamental question at the MAC layer that affects the design principles of almost all MAC layer functions. For instance, as the system moves to the noise-limited regime, the required complexity for proper resource allocation and interference avoidance functions at the MAC layer is substantially reduced~\cite{Shokri2015Transitional}.
Instead, pencil-beam operation complicates negotiation among different devices in a network, as control message exchange may require a time consuming beam training procedure between transmitter and receiver~\cite{shokri2015mmWavecellular}.
The seminal work in~\cite{Singh2011Interference} confirms the feasibility of a \emph{pseudowired} (noise-limited) abstraction in outdoor mmWave mesh networks. However, as shown in~\cite{Shokri2015Beam}, activating all links may cause a significant performance drop compared to the optimal resource allocation in dense deployment scenarios due to non-negligible multiuser interference. Further, the comprehensive analysis of~\cite{Shokri2015Transitional} illustrates that mmWave networks may not be necessarily noise-limited; rather they show a \emph{transitional behavior}, from a noise-limited to an interference-limited regime.
\begin{figure}[!t]
\centering
\subfigure[]{
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=\columnwidth]{CollisionGivenL_I_Theta}
\label{subfig: CollisionGivenL_I_Theta}
\end{minipage}%
}
\subfigure[]{
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=\columnwidth]{OptimMACThr_OptimlTxProb}
\label{subfig: OptimMACThr_OptimlTxProb}
\end{minipage}%
}
\caption{Illustration of the transitional behavior of mmWave networks: \subref{subfig: CollisionGivenL_I_Theta} collision probability and~\subref{subfig: OptimMACThr_OptimlTxProb} optimal transmission probability of slotted ALOHA. \NEW{The negligible collision probability in~\subref{subfig: CollisionGivenL_I_Theta} and the very high optimal transmission probability in~\subref{subfig: OptimMACThr_OptimlTxProb} correspond to negligible multiuser interference. High collision probability and small optimal transmission probability correspond to the interference-limited regime. MmWave networks with narrow beam operation exhibit a full range of behaviors, from noise-limited to interference-limited, whereas microwave networks with omnidirectional operation always experience an interference-limited regime.}}
\label{fig: Transitional-behavior}
\end{figure}
\NEW{Fig.~\ref{fig: Transitional-behavior} illustrates the transitional behavior of interference in a mmWave network.
Negligible collision probability in this figure indicates negligible multiuser interference, whereas high collision probability corresponds to the interference-limited regime. From Fig.~\ref{subfig: CollisionGivenL_I_Theta}, we see that even for a network of modest size, the collision probability may be high enough to invalidate the assumption of being in a noise-limited regime, e.g., 0.2 collision probability for the case of 1 transmitter in a $2\times 2~{\text{m}}^2$ area and an operating beamwidth of 25$\degree$. Moreover, as can be observed in all curves of Fig.~\ref{subfig: CollisionGivenL_I_Theta}, there is a transition from a noise-limited to an interference-limited regime in a mmWave network with directional communications, whereas traditional networks with omnidirectional communications always experience an interference-limited regime without any transitional behavior under ``realistic'' parameter choices.
The transitional region of mmWave networks depends on the density of the transmitters, the density and the average size of the obstacles, the operating beamwidth, and also the MAC protocol.}
Fig.~\ref{subfig: OptimMACThr_OptimlTxProb} shows the behavior of the optimal transmission probability that maximizes the throughput of slotted ALOHA as a function of link density and operating beamwidth. From the figure, it can be observed that the optimal transmission probability is 1 in many cases, implying that we can simply activate all links with no penalty for the average link throughput (noise-limited regime). However, as the operating beamwidth or the link density increases, we should activate fewer links by reducing the transmission probability, in order to decrease the high contention level inside the network (interference-limited regime).
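The qualitative behavior of the optimal transmission probability can be reproduced with a deliberately stylized model, in which each of the other links interferes with a fixed probability; this scalar is a stand-in for the combined effect of beamwidth, geometry, and obstacles, not the simulated scenario above:

```python
def optimal_tx_prob(n_links, interf_prob, grid=1000):
    """Grid-search the slotted-ALOHA transmission probability that
    maximizes per-link throughput in a stylized model.

    A transmission succeeds if no other link is simultaneously active
    and interfering; each of the other n_links - 1 links interferes
    with probability interf_prob (narrow beams -> small interf_prob).
    """
    best_p, best_thr = 0.0, 0.0
    for k in range(grid + 1):
        p = k / grid
        # Per-link throughput: active, and all others silent-or-harmless.
        thr = p * (1.0 - interf_prob * p) ** (n_links - 1)
        if thr > best_thr:
            best_p, best_thr = p, thr
    return best_p, best_thr
```

With near-zero interference probability (pencil beams) the optimum is to transmit in every slot, matching the noise-limited plateau in the figure; with full interference the optimum collapses to roughly $1/n$, the classical interference-limited ALOHA operating point.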
\subsection{\NEW{Control and Data Rate Mismatches}}\label{sec: rate-mismatch}
Current collision avoidance mechanisms suggest that a network with uncoordinated users will benefit from accepting collisions on tiny signaling messages such as request-to-send (RTS) and clear-to-send (CTS) to avoid retransmission of large data messages. To increase the robustness of signaling messages, current mmWave standards transmit control messages at much lower rate compared to the data messages. IEEE~802.11ad, for instance, supports a peak transmission rate of 27.7~Mbps for control and 6.7~Gbps for data messages~\cite{Nitsche2014IEEE}.
\NEW{This significant mismatch between the transmission rates of control and data messages substantially increases the cost of collision avoidance procedures and challenges the efficacy of current mmWave standards in handling short packets. To illustrate this inefficiency, we provide the following example.}
Let $t_i$ be the time required to transmit message $i$. With negligible propagation and queuing delays and with no collision on a directed spatial channel, the current CSMA/CA protocol introduces the following delay to transmit a payload: $2 t_{\rm SIFS} + t_{\rm RTS} + t_{\rm CTS} + t_{\rm DIFS} + t_{\rm DATA}$, where $t_{\rm DATA} = t_{\rm header} + t_{\rm payload}$. Note that the transmitter should wait for a SIFS duration before sending every RTS and CTS, and wait for a DIFS duration before every regular data frame. In IEEE~802.11ad, $t_{\rm SIFS} = 2.5$~$\mu$s and $t_{\rm DIFS} = 6.5$~$\mu$s. Considering 20 Bytes for RTS and CTS messages, we have $t_{\rm RTS}=t_{\rm CTS}= 5.5$~$\mu$s. Every data packet contains an 8-Byte header, which should be transmitted at rate 27.7~Mbps, so $t_{\rm header}= 2.2$~$\mu$s. To transmit 10 KBytes of payload, we need only $t_{\rm DATA} = 13.6$~$\mu$s, while the total delay is 36.1~$\mu$s, leading to 37\% channel utilization. This inefficiency increases even more as the size of the payload reduces, for instance, the channel utilization is only 12\% for 1 KByte of payload. This means that CSMA/CA consumes around 90\% of the time resources only to ensure avoidance of collisions even in a noise-limited scenario. This inefficient handling of short packets hinders the applicability of current mmWave technologies (with Gbps data rate and small interference footprint) to massive wireless access scenarios where we have frequent transmissions of packets with small payloads. In fact, the huge overhead of having an unnecessary proactive collision avoidance protocol may be one of the main bottlenecks of future applications of mmWave networks.
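The arithmetic above can be packaged as a short utility. The exact percentages differ slightly from the quoted 37\% and 12\% depending on how the individual frame times are rounded before summing:

```python
def csma_ca_utilization(payload_bytes, data_rate=6.7e9, ctrl_rate=27.7e6,
                        sifs=2.5e-6, difs=6.5e-6,
                        rts_bytes=20, cts_bytes=20, header_bytes=8):
    """Channel utilization of one collision-free CSMA/CA exchange.

    Control frames (RTS, CTS) and the frame header go at the control
    rate; the payload goes at the data rate. Propagation and queuing
    delays are neglected, as in the example in the text.
    """
    t_rts = rts_bytes * 8 / ctrl_rate
    t_cts = cts_bytes * 8 / ctrl_rate
    t_data = header_bytes * 8 / ctrl_rate + payload_bytes * 8 / data_rate
    total = 2 * sifs + t_rts + t_cts + difs + t_data
    return t_data / total
```

Evaluating this for 10~KB and 1~KB payloads confirms the trend in the text: utilization sits below 40\% for a 10~KB payload and drops to roughly one eighth of the channel time for a 1~KB payload, with the rest consumed by collision-avoidance signaling.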
The significant mismatch between transmission rates of control and data messages, along with the reduced average collision probability in mmWave networks, demands development of new MAC layer protocols with on-demand and minimal use of signaling. Note that proactive transmission of some vital control messages, such as beam training pilots, may still be mandatory. These mandatory control overheads may limit the delay/channel utilization performance and therefore the applicability of mmWave networks to use cases with sporadic transmissions of small payloads. \NEW{This suggests the existence of a minimal payload size to make the establishment of a costly mmWave link beneficial, whose characterization is an interesting topic for future studies.}
\subsection{Prolonged Backoff Time}\label{sec: prolonged-backoff-time}
Suppressing interference in mmWave networks with pencil-beam operation comes at the expense of complicated link establishment. Besides the huge overhead due to the collision avoidance procedures, conventional CSMA/CA that was originally developed for omnidirectional transmissions introduces a \emph{prolonged backoff time} in mmWave networks. To elaborate, assume that a mmWave transmitter tries to access the channel by sending an RTS message after the backoff timer expires. Assume that the receiver does not hear the RTS due to either deafness or blockage, and therefore does not send the CTS message. The traditional CSMA/CA protocol assumes that a collision occurred and therefore increases the backoff time exponentially. In mmWave networks, this may be the wrong decision, which may unnecessarily prolong the backoff time. Similar issues may also exist in the random access phase of mmWave cellular networks, as mentioned in~\cite{shokri2015mmWavecellular}.
To enhance the performance of CSMA/CA in directional communications, \cite{Ramanathan2005Adhoc} modifies traditional CSMA/CA such that each device exponentially increases the contention window size upon a missing ACK, while this increment is linear with each missing CTS. Although this proposal is better than the original CSMA/CA in the sense that different events demand different actions, it fails to solve the prolonged backoff time problem in mmWave systems. In fact, blockage, deafness, and collision, which are caused by different physical reasons, deserve a different handling at the MAC layer, a fact that is somewhat ignored in~\cite{Ramanathan2005Adhoc}. In the next section, we propose a novel MAC level message to facilitate the detection of a collision, thereby solving the prolonged backoff time problem.
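The event-dependent window update of~\cite{Ramanathan2005Adhoc} can be sketched as follows. The window bounds and the linear increment step are illustrative choices, not values taken from that paper:

```python
def update_cw(cw, event, cw_min=16, cw_max=1024):
    """Contention-window update in the spirit of the modified CSMA/CA.

    A missing ACK is treated as a likely data collision and doubles the
    window (exponential backoff); a missing CTS may instead indicate
    deafness or blockage, so the window only grows linearly; a success
    resets it to the minimum.
    """
    if event == "success":
        return cw_min
    if event == "no_ack":       # likely collision -> exponential increase
        return min(2 * cw, cw_max)
    if event == "no_cts":       # possibly deafness/blockage -> linear increase
        return min(cw + cw_min, cw_max)
    raise ValueError(f"unknown event: {event}")
```

Even this refinement conflates deafness, blockage, and collision under the missing-CTS case, which is exactly the ambiguity that the collision notification message proposed in the next section is meant to resolve.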
\subsection{Alignment Overhead}\label{sec: alignment-overhead}
The beam training approach adopted by the existing standards introduces an alignment overhead that depends on the number of directions to be searched, which in turn depends on the selected transmission and reception beamwidths. For a given operating beamwidth,~\cite{li2013efficient} suggests a new technique based on Rosenbrock search as a replacement for the existing two-stage exhaustive search, reducing the alignment overhead by up to 65\%.
Alignment overhead, besides demanding more efficient search procedures, introduces an alignment-throughput tradeoff that necessitates an optimization over the operating beamwidth~\cite{Shokri2015Beam}. Narrower beamwidths increase the search granularity, thus the alignment overhead, but provide a higher transmission rate due to higher antenna gains and lower multiuser interference. Adopting larger beamwidths speeds up the search process at the expense of a degraded transmission rate. The tradeoff shows that using extremely narrow beams (or excessively increasing the beamforming codebook size) is not beneficial in general due to the increased alignment overhead, and there is an optimal beamwidth (optimal codebook size) at which the tradeoff is optimized~\cite{Shokri2015Beam}.
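The alignment-throughput tradeoff can be illustrated with a deliberately simple model: exhaustive search costs a number of pilots quadratic in the inverse beamwidth, while the per-side antenna gain grows inversely with the beamwidth and feeds a Shannon-style rate. All numeric constants below are illustrative, not drawn from~\cite{Shokri2015Beam}:

```python
import math

def effective_throughput(theta_deg, frame_s=0.1, pilot_s=10e-6,
                         sector_deg=90.0, snr0=1e-4, bw_hz=2.16e9):
    """Fraction of the frame left after alignment, times the link rate.

    Searching a sector with beams of width theta costs (sector/theta)^2
    tx-rx pilot pairs; narrower beams raise the per-side antenna gain
    (modeled as 360/theta), hence the SNR and the rate.
    """
    n_pairs = (sector_deg / theta_deg) ** 2
    overhead = min(n_pairs * pilot_s, frame_s)      # alignment time per frame
    gain = (360.0 / theta_deg) ** 2                 # tx gain times rx gain
    rate = bw_hz * math.log2(1.0 + snr0 * gain)     # Shannon-style rate
    return (1.0 - overhead / frame_s) * rate

# The throughput-maximizing beamwidth is interior: neither the narrowest
# nor the widest beam wins once the alignment cost is accounted for.
best_theta = max(range(1, 91), key=effective_throughput)
```

Sweeping the beamwidth over integer degrees shows exactly the tradeoff described above: very narrow beams spend most of the frame on pilots, very wide beams forfeit antenna gain, and the optimum lies strictly in between.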
\section{\NEW{MAC Design for Future Short Range mmWave Networks}}\label{sec: MAC-Design-Aspects}
\NEW{In this section, we discuss the implications of the fundamental aspects highlighted in the previous section on the efficient MAC design in future mmWave networks.}
\subsection{\NEW{Collision-aware Hybrid MAC}}\label{sec: hyybrid-MAC}
\begin{figure}[!t]
\centering
\subfigure[]{
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=\columnwidth]{OptimMACThr_MaxThr}
\label{subfig: Comparison3}
\end{minipage}%
}
\subfigure[]{
\begin{minipage}[c]{0.45\textwidth}
\includegraphics[width=\columnwidth]{NetThr_Delay}
\label{subfig: Comparison4}
\end{minipage}%
}
\caption{Performance comparison of slotted~ALOHA and TDMA in mmWave WPANs. The alignment overhead is neglected. ``S-ALOHA" stands for slotted~ALOHA, and $p$ is its transmission probability. Different points of \subref{subfig: Comparison4} represent different link densities from 0.02 to 2 links per unit area. Operating beamwidth in \subref{subfig: Comparison4} is $10^\circ$. Increasing the link density may reduce the link throughput, increase the network throughput, and increase the delay. Slotted~ALOHA significantly outperforms TDMA in terms of link throughput, network throughput, and delay performance. On the other hand, TDMA guarantees collision-free communication.}
\label{fig: SlottedALOHA-vs-TDMA}
\end{figure}
\NEW{To investigate proper resource allocation strategies for mmWave networks, we compare the average throughput of a link, the network throughput, and the delay performance of slotted~ALOHA to those of TDMA in Fig.~\ref{fig: SlottedALOHA-vs-TDMA}. We define delay as the difference between the time a new packet is inserted into the transmission queue at the transmitter and the time it is correctly received at the receiver.}
Specifically, Fig.~\ref{subfig: Comparison3} reports the maximum throughput of a link in slotted~ALOHA, associated with the optimal transmission probability in Fig.~\ref{subfig: OptimMACThr_OptimlTxProb}, and Fig.~\ref{subfig: Comparison4} shows the network throughput against the corresponding average delay obtained by changing the link density. First, neglecting the alignment overhead, the throughput of a link in slotted~ALOHA will decrease with the operating beamwidth, due to a higher collision probability.
Moreover, TDMA activates only one link at a time -- orthogonal use of time resources -- irrespective of the number of links. Considering the traffic generation rate of this example, which is 0.1 of the link capacity, the network will be saturated roughly with 10 links, and further increasing the number of links will not improve the network throughput (see Fig.~\ref{subfig: Comparison4}),\footnote{The network throughput of TDMA is at most 1 packet per slot. This upper bound is achieved if there is no obstacle in the environment.} but will instead reduce the time share of every link and consequently reduce the average throughput of a link, see Fig.~\ref{subfig: Comparison3}. Besides, every link experiences a higher delay to access the channel and transmit its data, see different points of the TDMA curve in Fig.~\ref{subfig: Comparison4}. Note that with a fixed packet generation rate, the \emph{effective link capacity} (link capacity multiplied by its time share) of every link in TDMA decreases with the number of links in the network, so the queues of the transmitter may become unstable. \NEW{Instead, slotted~ALOHA leverages small multiuser interference and is able to effectively re-use the time resources (spatial gain), thus every link can handle more traffic due to a higher effective link capacity. Significant spatial gain in mmWave networks is also highlighted in~\cite{son2012frame,niu2015exploiting}, where the authors try to leverage this gain in a general noise-limited network~\cite{son2012frame} and in a device-to-device network~\cite{niu2015exploiting}.
Fig.~\ref{subfig: Comparison4} shows that slotted~ALOHA significantly outperforms TDMA in terms of both network throughput and delay, thanks to this significant spatial gain. However, unlike slotted~ALOHA, TDMA can guarantee communication with no collisions, which may be of importance in some applications, e.g., under short delay or ultra high reliability constraints.}
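The qualitative gap between the two schemes can be sketched with a back-of-the-envelope model (our own simplification, not the simulation behind Fig.~\ref{fig: SlottedALOHA-vs-TDMA}): a slotted~ALOHA transmission succeeds when no other active link overlaps it in space, while TDMA serves exactly one link per slot.

```python
def aloha_link_throughput(n_links, p_tx, beamwidth_deg):
    """Per-link success rate (packets/slot) under slotted ALOHA with
    directional beams: a packet survives if no other active link's
    transmission overlaps this receiver in space."""
    p_overlap = (beamwidth_deg / 360.0) ** 2  # both TX and RX beams must cover us
    return p_tx * (1.0 - p_tx * p_overlap) ** (n_links - 1)

def tdma_link_throughput(n_links):
    """Orthogonal time shares: exactly one link is active per slot."""
    return 1.0 / n_links
```

With pencil beams the collision term is tiny, so every link keeps a throughput close to its transmission probability while TDMA shares fall as $1/N$; omnidirectional operation reverses the ranking.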
Current mmWave standards, such as IEEE~802.15.3c and IEEE~802.11ad, adopt resource allocation approaches that were originally developed for interference-limited microwave networks. In particular, the network traffic is mostly served in the contention-free phase even in a noise-limited regime. \NEW{However, devices in a mmWave network may show a full range of behaviors from noise-limited to interference-limited, demanding a dynamic (collision-aware) incorporation of both contention-based and contention-free phases in the resource allocation framework. The contention-based phase improves the throughput/delay performance by leveraging concurrent transmissions, and the contention-free phase can be applied to deliver only the remaining traffic. To exemplify the benefit of this collision-aware hybrid MAC, we note that isolated devices that receive almost no interference can transmit during the entire data transmission interval (DTI) without extra scheduling delay, whereas existing hybrid MAC solutions force them to register their requests, wait until they are scheduled, and transmit for a short portion of the DTI, see Fig.~\ref{fig: TimingStructure}.
It follows that, in a noise-limited regime, we may deliver most of the traffic in the contention-based phase (where contention does not actually occur). This can lead to providing around an order of magnitude higher throughput for a given link density and supporting an order of magnitude denser network with given average per-link throughput, see Fig.~\ref{subfig: Comparison3}, extending the use cases of future mmWave networks.}
\subsection{\NEW{Efficient Retransmission Policy}}\label{sec: retransmission-policy}
\NEW{In the previous subsection, we showed that the TDMA phase of the hybrid MAC of existing standards needs modification. In this subsection, we show that the CSMA/CA phase of their hybrid MAC needs to be thoroughly modified as well.}
\NEW{Retransmission after a random backoff is a common solution to handle collisions without any network-wide coordination.
In CSMA/CA, adopted by existing standards, retransmission of an RTS message after a random backoff leads to a virtual channel reservation and collision-free data transmission. It can also alleviate the well-known hidden and exposed node problems. However, the special characteristics of mmWave networks diminish the benefits of CSMA/CA over simple CSMA. First, proactive channel reservation generally causes a significant throughput drop and extra delay in mmWave networks due to the overwhelming signaling overhead, as described in Section~\ref{sec: rate-mismatch}. Moreover, the directionality of mmWave networks substantially reduces the hidden and exposed node problems, and consequently the need for collision avoidance. In the following, we also show that different links experience different collision levels, a feature that should be addressed in designing proper retransmission policies.
}
\NEW{The design of efficient retransmission policies depends largely on the distribution of the number of links in the same collision domain (links with strong mutual interference). An increased number of links in the same collision domain results in more retransmissions, and therefore a higher delay to establish a channel. Fig.~\ref{fig: CollProb_Densities} shows this distribution as a function of operating beamwidth and link density. Each plot contains three sets of distributions that correspond to link densities of 0.11, 1, and 10 links per square meter, from left to right, respectively. Under pencil-beam operation and relatively low link density, a mmWave network comprises devices with homogeneous collision behavior, as almost all of them show a noise-limited behavior. Increasing either the link density or the operating beamwidth shifts the mmWave network toward the interference-limited regime. In the extreme case of omnidirectional communication, all the devices show another homogeneous behavior, i.e., an interference-limited regime. However, as can be observed in Fig.~\ref{subfig: DegreeDist_Beamwidth_30}, devices in mmWave networks may show a full range of behaviors from noise-limited to interference-limited. To design an efficient retransmission policy for such networks, a link should be able to identify the size of the collision domain it belongs to. This is a largely open problem in mmWave networks, demanding new analytical models and protocol designs. A direct research question is whether, in mmWave networks, reactive retransmission of a data message after a random backoff procedure (CSMA) is a better option to be adopted by all devices than proactive execution of costly collision avoidance mechanisms (CSMA/CA). 
Another open question is whether, upon detecting a collision, the transmitter-receiver pairs should (1) adopt a narrower beamwidth at the cost of some extra alignment overhead but with the possible benefit of operating with no multiuser interference and therefore significant throughput enhancements (see Section~\ref{sec: hyybrid-MAC}), (2) execute a random backoff procedure to share DTI among the set of colliding links in a distributed fashion, or (3) send a TDMA reservation request to the coordinator. The proper choice depends on the use case, QoS requirements, and available information such as the collision domain size.}
\begin{figure}[!t]
\centering
\subfigure[operating beamwidth 5$\degree$]{
\begin{minipage}[c]{0.48\textwidth}
\includegraphics[width=\columnwidth]{DegreeDist_Beamwidth_5}
\label{subfig: DegreeDist_Beamwidth_5}
\end{minipage}%
}
\vspace{+5mm}
\subfigure[operating beamwidth 30$\degree$]{
\begin{minipage}[c]{0.48\textwidth}
\includegraphics[width=\columnwidth]{DegreeDist_Beamwidth_30}
\label{subfig: DegreeDist_Beamwidth_30}
\end{minipage}%
}
\vspace{+5mm}
\subfigure[operating beamwidth 360$\degree$]{
\begin{minipage}[c]{0.48\textwidth}
\includegraphics[width=\columnwidth]{DegreeDist_Beamwidth_360}
\label{subfig: DegreeDist_Beamwidth_360}
\end{minipage}%
}
\caption{\NEW{Distribution of the number of conflicting links for different operating beamwidth and density of the transmitters. Three sets of distributions in each figure from left to right correspond to link densities of 0.11, 1, and 10 links per square meter, respectively. Obstacle density is 0.11. Size 1 for collision domain represents isolated devices with no incoming interference.}}
\label{fig: CollProb_Densities}
\end{figure}
\subsection{\NEW{Collision Notification}}\label{sec: hybrid-control-plane}
\NEW{Due to the heterogeneous behavior of the collisions in mmWave networks, detecting the collision level provides useful information for a link to adopt proper retransmission policies, make the control plane more efficient, implement an on-demand TDMA phase, and solve the prolonged backoff time problem.}
\NEW{To develop a procedure that estimates the collision level, we first consider orthogonal signatures for different types of messages like RTS, CTS, and data. Inspired by the use of pseudo-orthogonal symbol sequences (PSS) in synchronization symbols, we can readily implement orthogonal signatures by adding corresponding PSSs to the header of any message. Then, a correlator at any receiver matches (the time shifts of) the received signal with the reference symbol sequences to identify the type of the received messages. First, as this scheme is very robust and can work well even at very low SNRs, we can transmit this part of the header at a very high rate, reducing its time overhead. Moreover, if multiple messages of the same type are received (due to multiple transmitters), the receiver can distinguish them as they are received with different time shifts. If messages of different types are received, again, the receiver can distinguish them due to their orthogonal signatures. Note that the types of the superimposed messages are detectable due to the robustness of PSSs; they are short and easily detectable even at very low SNR, at which neither the header nor the payload are decodable.}
\NEW{We introduce a novel MAC level message, called collision notification (CN), which any receiver will transmit upon receiving messages that are not decodable due to a collision. To distinguish non-decodable message(s) due to a collision from those due to severe channel attenuations, we note that the correlator's output at the receiver peaks at several time shifts in the case of collision. Alternatively, the receiver can use a simple hard decision based on the received energy (energy detector); the level of the received power is very low in the case of severe channel attenuation, blockage, or deafness, whereas it is very high in the case of collision with multiple simultaneous received signals. The proposed CN message can address the prolonged backoff time problem, facilitate the on-demand realization of the TDMA phase, and reduce the frequency of unnecessary executions of the costly collision avoidance procedure. In the following, due to lack of space, we only mention how the CN message alleviates the prolonged backoff time problem, whereas other direct applications of the CN message are the subject of our future work.}
A simple scheme to alleviate the prolonged backoff time problem, illustrated in Fig.~\ref{fig: Protocol}, may work as follows. After sending a directional (or omnidirectional) RTS to a receiver that is ready to receive, the following cases might occur:
\begin{itemize}
\item Scenario 1 (success): The transmitter receives a CTS before timeout. Then, it starts transmission based on the CSMA/CA mechanism.
\item Scenario 2 (collision): The receiver fails to decode the RTS due to a collision. It sends a CN message. Upon receiving the CN message, the transmitter knows that there is a contention to access this channel in this direction, and therefore sends another RTS after running the random backoff procedure.
\item Scenario 3 (deafness or blockage): The transmitter does not receive a CTS nor a CN. In this case, after timeout, it knows that there is either deafness or blockage. Hence, it tries to find another directed spatial channel instead of running an unnecessary backoff.
\end{itemize}
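The three scenarios reduce to a small decision rule at the transmitter. The sketch below is our own summary of the scheme above; the action names are illustrative and not taken from any standard.

```python
def transmitter_reaction(event):
    """Transmitter-side step after sending an RTS (Scenarios 1-3)."""
    if event == "CTS":        # Scenario 1: success, channel virtually reserved
        return "send_data"
    if event == "CN":         # Scenario 2: collision, so genuine contention
        return "backoff_then_resend_RTS"
    if event == "timeout":    # Scenario 3: deafness or blockage, not contention
        return "switch_spatial_channel"   # no unnecessary backoff
    raise ValueError(f"unexpected event: {event!r}")
```

The key point is the third branch: a timeout is interpreted as deafness or blockage rather than contention, so the transmitter probes another directed spatial channel instead of backing off.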
\begin{figure}[!t]
\includegraphics[width = \columnwidth]{Protocol}
\caption{A simple protocol for mitigating prolonged backoff time.
For a given time, in Scenario 1, device N2 detects an RTS. The next step is for device N2 to send a CTS message to reserve the channel. In Scenario 2, device N3 receives more than one RTS at the same time. It sends a CN message to let the transmitters run the backoff procedure. In Scenario 3, device N2 does not receive the RTS of device N1 due to either deafness or blockage, and will be silent at the next step.
}
\label{fig: Protocol}
\end{figure}
\begin{figure}[!t]
\centering
\includegraphics[width=\columnwidth]{PerformanceComparision_Blockage}
\caption{Average backoff time of the device winning the contention among 20 devices for accessing the same transmission resource (frequency and direction). Standard RTS-CTS based negotiation leads to unnecessarily prolonged backoff time, while a slight modification of this standard negotiation, by introducing CN, effectively mitigates the problem.
}
\label{fig: PerformanceComparision_Blockage}
\end{figure}
\NEW{If a set of receivers fail to decode RTS messages, they will all respond with CN messages in the same direction they were listening, and their intended transmitters will then execute a collision avoidance procedure. If a set of transmitters correctly receive CN messages, they will start the conventional collision avoidance procedure (Scenario~2). If the CN messages collide, the intended transmitters can still identify the existence of multiple CN messages, even if their entire headers/payloads are not decodable, so the corresponding transmitters correctly execute the collision avoidance procedure. The CN message, however, increases feedback traffic.
The minimum required size of PSSs, required SNR, transmission rate of PSSs, and the performance of the message type detection will be the subject of our future studies.}
Thanks to the CN message, the transmitter can sense the presence of contention in the channel and take the proper MAC layer action to avoid the prolonged backoff time, which is the result of deafness and blockage, and not of contention on the channel. We simulate a network with a Bernoulli link failure model, i.e., every link fails due to blockage independently and with constant blockage probability. Fig.~\ref{fig: PerformanceComparision_Blockage} shows the performance enhancement due to the introduction of CN. With a blockage probability of 0.02, for instance, the average backoff time will be dramatically decreased by about 95\% (twenty times) if CN is used.
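The effect can be reproduced with a toy Monte Carlo model (our own simplification of the simulation above): without CN, the transmitter backs off on every missing CTS, whether caused by blockage or by a collision, whereas with CN it backs off only when a CN message signals a genuine collision.

```python
import random

def mean_backoffs(p_block, p_coll, use_cn, trials=50000, seed=0):
    """Expected backoff invocations per delivered packet (toy model).
    Without CN, any missing CTS -- blockage OR collision -- triggers a backoff;
    with CN, only a received CN message (a genuine collision) does."""
    rng = random.Random(seed)
    backoffs = 0
    for _ in range(trials):
        while True:
            r = rng.random()
            if r < p_block:                  # RTS lost to blockage/deafness
                if not use_cn:
                    backoffs += 1            # misread as contention
                # with CN: timeout => try another spatial channel, no backoff
            elif r < p_block + p_coll:       # genuine collision
                backoffs += 1                # backoff is the right reaction
            else:
                break                        # CTS received
    return backoffs / trials
```

When the blockage probability dominates the collision probability, the CN variant invokes the backoff procedure several times less often, in the spirit of the reduction reported in Fig.~\ref{fig: PerformanceComparision_Blockage}.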
\subsection{\NEW{Multihop Communications}}
\NEW{Relaying and multihop communications are key components of future mmWave networks for range extension and for blockage alleviation~\cite{shokri2015mmWavecellular,rappaport2014mmWaveBook,kim2013joint,Singh2009Blockage}. Multihop operation is also essential for multihop backhauling, which is an important use case of IEEE~802.11ay. In~\cite{kim2013joint}, range extension using a relay node is investigated for an outdoor sport broadcasting system. Extensive analysis demonstrated that high-quality live video from 10 sources can be efficiently transmitted over 300~m. Besides range extension,~\cite{Singh2009Blockage} showed that having an alternative path using relay node(s) can significantly alleviate blockage. The backup paths are recorded in the coordinator and established upon blockage on the direct path, increasing connectivity to about $100\%$.}
\NEW{Unfortunately, current mmWave standards support only single- or two-hop links\footnote{IEEE~802.15.3c supports only single-hop communications, while ECMA~387 and IEEE~802.11ad support also two-hop communications.} rather than the complete multihop communication capability envisioned in IEEE~802.11ay.
Adding more hops entails additional alignment overhead per hop, which may limit the benefits of multihop communications. As stated in~\cite{shokri2015mmWavecellular}, the beamforming vector of the analog beamformer depends only on the large-scale components of the channel, which will be almost constant over many consecutive superframes (beacon intervals). However, current mmWave standards neglect this important feature and perform a complete beam training procedure in every superframe. We suggest that each device estimates the topology of the network in the neighbor discovery phase. Then, it creates a table of proper spatial resources (directions) based on the feedback received from previous transmission attempts (piggybacking over data transmissions). The table is updated upon every received feedback, and each transmitter tries to communicate with other devices using the most updated table. This a priori information on the possible directions can substantially reduce the beam training space, thereby reducing the alignment overhead. The design of the analog beamformer is then reduced to beam-tracking over consecutive superframes, while the digital beamformer (in a hybrid beamforming architecture) may still be designed per superframe.}
\NEW{In addition to more efficient beam training, a joint routing and scheduling approach is necessary in multihop communications to leverage the low interference footprint in mmWave communications using scheduling, while guaranteeing connectivity using routing protocols. Designing such a joint approach is an interesting future research direction.}
\section{Conclusions}\label{sec: concluding-remarks}
Millimeter wave (mmWave) communication systems are promising solutions to provide extremely high data rates and support massive uncoordinated access in future wireless networks. Severe channel attenuation, blockage, and deafness, along with a reduced interference footprint, differentiate mmWave systems from legacy systems that operate at microwave frequencies. \NEW{MmWave networks may face transitional behaviors, heterogeneous sizes of the collision domains, significant mismatch between transmission rates of control and data messages, prolonged backoff time, and an alignment-throughput tradeoff. This paper discussed how the MAC layer functions of existing mmWave standards are not effective in addressing these new challenges. It was argued that the use of new collision-aware hybrid resource allocation, more efficient retransmission policies, collision notification, and multihop communication has the potential to significantly improve the performance of short range mmWave networks.}
\bibliographystyle{IEEEtran}
\section{Introduction}
\vspace{-.75em}
Language models (LMs) are very powerful in lipreading systems (e.g. in \cite{bowden2013recent,6288999}). Language models built upon the ground truth utterances of datasets learn the grammar and structure rules of words and sentences (the latter in the case of continuous speech). However, visual co-articulation effects in visual speech signals degrade the performance of visual speech LMs because, visually, people do not utter what the language model expects \cite{lieberman1963some}. Such models are commonplace, but while higher-order $N$-gram LMs may improve classification rates, their cost is disproportionate to the common goal of developing more accurate classifiers. We therefore compare which unit best optimizes a lipreading (visual speech) LM in order to observe their limitations. As in \cite{bear2018alternative}, we compare three units: visemes (visual speech units) \cite{lan2010improving}, phonemes (audible speech units), and words.
In the first two columns of Table~\ref{tab:sn_tests2} we list pairings of classifier units and language model units. For each pair we build a conventional lipreading system with the HTK toolkit \cite{htk} to classify Active Appearance Model \cite{Matthews_Baker_2004} features extracted on 12 speakers from the RMAV audio-visual speech dataset~\cite{bowden2013recent}. Phonemes are the International Phonetic Alphabet \cite{international1999handbook}, and our visemes are speaker-dependent visemes \cite{bear2017phoneme,bear2018comparing}. Word labels are from the RMAV ground truth. \textbf{Classifier units} are the labels used to identify individual classification models and \textbf{language units} make up the label scheme used for building the post-classification decoding LM.
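Throughout, correctness refers to HTK's standard measure $C = (N - D - S)/N$, where $N$ is the number of ground-truth labels and $D$ and $S$ are the deletions and substitutions in a minimum-edit alignment of the hypothesis against the reference. A minimal sketch of that computation (the function names are ours, not HTK's):

```python
def align_counts(ref, hyp):
    """Minimum-edit alignment of two label sequences.
    Returns (substitutions, deletions, insertions) w.r.t. the reference."""
    n, m = len(ref), len(hyp)
    cost = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        cost[i][0] = i
    for j in range(1, m + 1):
        cost[0][j] = j
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i][j] = min(cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1]),
                             cost[i - 1][j] + 1,    # deletion
                             cost[i][j - 1] + 1)    # insertion
    # Backtrack, counting each operation type.
    i, j, S, D, I = n, m, 0, 0, 0
    while i > 0 or j > 0:
        if (i > 0 and j > 0 and
                cost[i][j] == cost[i - 1][j - 1] + (ref[i - 1] != hyp[j - 1])):
            S += ref[i - 1] != hyp[j - 1]
            i, j = i - 1, j - 1
        elif i > 0 and cost[i][j] == cost[i - 1][j] + 1:
            D, i = D + 1, i - 1
        else:
            I, j = I + 1, j - 1
    return S, D, I

def correctness(ref, hyp):
    """HTK-style word correctness C = (N - D - S) / N."""
    S, D, _ = align_counts(ref, hyp)
    return (len(ref) - D - S) / len(ref)
```

Note that insertions do not lower correctness; HTK's accuracy measure, which subtracts insertions as well, would be the stricter alternative.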
\section{Analysis}
\vspace{-.75em}
Fig~\ref{fig:sn_effects_talker} shows word correctness (on the $y$-axis) for each speaker along the $x$-axis over three figures, one per LM unit. The viseme LM is on the left, phoneme LM middle, and word LM on the right.
\begin{figure}[!hbt]
\centering
\begin{tabular}{lcr}
\includegraphics[width=0.33\textwidth]{viseme_network_units_by_talker} &
\includegraphics[width=0.33\textwidth]{phoneme_network_units_by_talker} &
\includegraphics[width=0.33\textwidth]{word_network_units_by_talker}
\end{tabular}
\caption{Effects of support network unit choice with each type of labeled HMM classifier units. Along the $x$-axis is each speaker, $y$-axis values are correctness, $C$. Viseme LM is on the left, phoneme LM in the middle, and word LM on the right.}
\label{fig:sn_effects_talker}
\end{figure}
The viseme LM gives the lowest correctness ($0.02\pm0.0063$). On the surface, the idea of viseme classifiers is a good one because they take visual co-articulation into account to some extent. However, as seen here, an LM of visemes is too complex due to the effect of homophemes \cite{thangthai2018comparing}.
The phoneme LM (Fig~\ref{fig:sn_effects_talker}, middle) is more exciting. For all speakers we see a statistically significant increase in $C_w$ compared to the viseme LM $C_w$ in Fig~\ref{fig:sn_effects_talker} left. Looking more closely between speakers, we see that for four speakers (2, 9, 10 and 12), the viseme classifiers outperform the phonemes, yet for all other speakers there is no significant difference. On average they are identical with an all-speaker mean $C_w$ of $0.19\pm0.0036$ (Table~\ref{tab:sn_tests2}).
Lastly in Fig~\ref{fig:sn_effects_talker} (right) is $C_w$ of a word model paired with classifiers built on viseme, phoneme, and word units. Here word classifiers perform very poorly. We attribute this to insufficient training samples per class due to the far greater number of classes in the word space ($>1000$ in RMAV) compared to the number of classes in the phoneme space ($49$) and so we do not recommend word-based classifiers without large volumes of visual speech data such as in \cite{chung2018learning,stafylakis2017deep}. Also in Fig~\ref{fig:sn_effects_talker} (right) are the phoneme and viseme classifiers (in green and red respectively) with a word LM. Here, for five of our twelve speakers (3, 5, 7, 8, and 11), the phoneme classifiers outperform the visemes and for the other speakers there is no significant difference once a word LM is applied. This demonstrates that the strength of a good word network can help negate translations between acoustic and visual speech spaces. The $C_w$ values of Fig~\ref{fig:sn_effects_talker}, with one standard error, are in Table~\ref{tab:sn_tests2}. This suggests phoneme units are the most robust for visual speech language models, but in practical terms their output is not easily intelligible, so words are preferred.
\begin{table}[!h]
\centering
\caption{All speaker mean $C_w$ for each pair of HMM and language model units.}
\begin{tabular}{|l|l|r|r|}
\hline
Classifier units & Network units & $C_w$ & $1$se \\
\hline \hline
Viseme & Viseme & $0.02$ & $0.0063$ \\
Viseme & Phoneme & $0.19$ & $0.0036$ \\
Viseme & Word & $0.09$ & $0.0$ \\
\hline
Phoneme & Phoneme & $0.19$ & $0.0036$ \\
Phoneme & Word & $0.20$ & $0.0043$ \\
\hline
Word & Word & $0.19$ & $0.0005$\\
\hline
\end{tabular}
\label{tab:sn_tests2}
\end{table}
\vspace*{-\baselineskip}
\section{Conclusion}
\vspace{-.75em}
For some speakers viseme classifiers with phoneme LMs are the better choice whereas others are easier to lipread with phoneme classifiers with a word LM. As experimenters we have no evidence to know which approach is best for a test speaker until after testing all unit options, so we recommend using either phoneme- or word-based language networks for future lipreading system development as these enable more interpretable prediction transcripts. Whilst lipreading LMs are powerful, they are not the solution to training machines to lipread \emph{all} speakers due to the great variation in speakers' visual speech signals.
\bibliographystyle{splncs}
\section{Preliminary notions}
We first need some linear operators on the ring $\Lambda$ of symmetric functions. As Schur functions are a basis, it suffices to describe the operators on these functions. The first one is a projection onto Schur functions indexed by partitions that have at most $k$ parts, denoted $\downarrow_k$, and defined by
\[
s_{\mu} \downarrow_k = \left \{ \begin{array}{l l} s_{\mu} & \text{if} \ \ell (\mu) \leq k\\ 0 & \text{else} \\ \end{array} \right.
\]
Also, consider the following bilinear operator, denoted $\odot$, that adds the indices of two Schur functions in the following way:
\[
s_{\mu} \odot s_{\lambda} = s_{\mu + \lambda},
\]
where the sum of two partitions is done componentwise, adding zero parts if necessary.
For example, we know the following formula, which goes back to Littlewood \cite{Littlewood}:
\[
h_2[h_n] = \displaystyle \sum_{k=0}^{\lfloor \frac{n}{2} \rfloor} s_{2n-2k,2k}.
\]
Using this operator, this formula can be described recursively:
\[
h_2[h_n] = s_{22} \odot (h_2[h_{n-2}]) + s_{2n},
\]
with $h_2[h_0] = s_0 = 1$ and $h_2[h_1] = h_2$.
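Both descriptions of $h_2[h_n]$ can be checked against each other mechanically by encoding a symmetric function as a dictionary from Schur indices (partitions) to coefficients; the encoding below is our own illustrative choice.

```python
def h2_hn_closed(n):
    """Littlewood: h_2[h_n] = sum over k = 0..floor(n/2) of s_{2n-2k, 2k}."""
    return {(2 * n - 2 * k, 2 * k): 1 for k in range(n // 2 + 1)}

def h2_hn_rec(n):
    """Recursive form: s_{22} odot h_2[h_{n-2}] plus s_{2n}."""
    if n == 0:
        return {(0, 0): 1}
    if n == 1:
        return {(2, 0): 1}
    # s_{22} odot (...) adds 2 to both parts of every index.
    out = {(a + 2, b + 2): c for (a, b), c in h2_hn_rec(n - 2).items()}
    out[(2 * n, 0)] = out.get((2 * n, 0), 0) + 1
    return out
```

The operator $\odot$ acts here simply as a componentwise shift of the Schur indices, which is why the recursion is so cheap to verify.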
We also need the original formula for $h_3[h_n]$, due to Thrall \cite{Thrall}:
\begin{thm}
For any $n \in \mathbb{N}$, we have
\[
h_3[h_n] = \displaystyle \sum_{\substack{\lambda \vdash 3n \\ \ell (\lambda) \leq 3}} f_{\lambda} s_{\lambda},
\]
where $f_{\lambda}$ can be described as follows: take $a_{\lambda} = \min\{1+\lambda_1-\lambda_2, 1+\lambda_2-\lambda_3\}$, and define $a_{\lambda}'=a_{\lambda}+i$, where $i$ is the only number in $\{-2,0,2\}$ such that $a_{\lambda}'$ is a multiple of $3$. If $a_{\lambda}'$ is even, then $f_{\lambda} = \frac{a_{\lambda}'}{6}$. If $a_{\lambda}'$ is odd and $\lambda_2$ is even, $f_{\lambda} = \frac{a_{\lambda}'+3}{6}$; if $\lambda_2$ is odd, $f_{\lambda}=\frac{a_{\lambda}'-3}{6}$.
\end{thm}
As F. Bergeron points out to the author, we can have a recursive description of $f_{\lambda}$. For $i \in \mathbb{N}$ and $\omega \in \{0,1\}$, define
\[
g(i,\omega) = \left \{ \begin{array}{c l} g(i - 6 ,\omega) + 1 & \text{if} \ i \geq 6 \\ 1 & \text{if} \ \omega=0 \ \text{and} \ i \neq 0,2\\1 & \text{if} \ \omega=1 \ \text{and} \ i=4\\0 & \text{else} \\ \end{array} \right.
\]
Then, $f_{\lambda}=g \left(a_{\lambda},\lambda_2 \mod2\right)$, with $a_{\lambda}$ as in the theorem.
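The agreement of this recursive description with the closed form of the theorem can be verified mechanically for small cases; the partition encoding below is our own.

```python
def a_of(lam):
    """a_lambda = min{1 + l1 - l2, 1 + l2 - l3}, padding with zero parts."""
    l1, l2, l3 = (tuple(lam) + (0, 0, 0))[:3]
    return min(1 + l1 - l2, 1 + l2 - l3)

def f_closed(lam):
    """Thrall's closed form for the coefficient f_lambda."""
    a = a_of(lam)
    ap = next(a + i for i in (-2, 0, 2) if (a + i) % 3 == 0)
    l2 = (tuple(lam) + (0, 0))[1]
    if ap % 2 == 0:
        return ap // 6
    return (ap + 3) // 6 if l2 % 2 == 0 else (ap - 3) // 6

def g(i, w):
    """The recursive description g(i, omega)."""
    if i >= 6:
        return g(i - 6, w) + 1
    if w == 0 and i not in (0, 2):
        return 1
    return 1 if (w == 1 and i == 4) else 0

def f_rec(lam):
    return g(a_of(lam), (tuple(lam) + (0, 0))[1] % 2)

def partitions_3(total):
    """All partitions of `total` into at most 3 parts."""
    for l1 in range(total, -1, -1):
        for l2 in range(min(l1, total - l1), -1, -1):
            l3 = total - l1 - l2
            if 0 <= l3 <= l2:
                yield (l1, l2, l3)
```

For instance, at $n=2$ this recovers $h_3[h_2] = s_6 + s_{42} + s_{222}$: the coefficients of $(6,0,0)$, $(4,2,0)$, and $(2,2,2)$ are $1$, while that of $(3,3,0)$ is $0$.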
\section{The formula}
We can now state and prove the following recurrence formula:
\begin{thm}
The plethysm $h_3[h_n]$ can be described as:
\begin{align*}
h_3[h_0] &= 1; \qquad \qquad h_3[h_1] = s_3; \\
(h_3[h_n])\downarrow_2 &= s_{66} \odot (h_3[h_{n-4}]) \downarrow_2 + \displaystyle \sum_{k=2}^n s_{3n-k,k} + s_{3n}; \\
h_3[h_n] &= (h_3[h_n]) \downarrow_2 + s_{222} \odot (h_3[h_{n-2}]) + s_{441} \odot ((h_3[h_{n-3}]) \downarrow_2 ).
\end{align*}
\end{thm}
\begin{proof}
Using Thrall formula and the bilinearity of $\odot$, we have to show that
\begin{align*}
\displaystyle \sum_{\substack{\lambda \vdash 3n \\ \ell (\lambda) \leq 2}} f_{\lambda} s_{\lambda} &= \displaystyle \sum_{\substack{\mu \vdash (3n-12) \\ \ell (\mu) \leq 2}} f_{\mu} (s_{66} \odot s_{\mu}) + \displaystyle \sum_{k=2}^n s_{3n-k,k} + s_{3n}; \\
\displaystyle \sum_{\substack{\lambda \vdash 3n \\ \ell (\lambda) = 3}} f_{\lambda} s_{\lambda} &= \displaystyle \sum_{\substack{\mu \vdash (3n-6) \\ \ell (\mu) \leq 3}} f_{\mu} (s_{222} \odot s_{\mu}) + \displaystyle \sum_{\substack{\mu \vdash (3n-9) \\ \ell (\mu) \leq 2}} f_{\mu} (s_{441} \odot s_{\mu}).
\end{align*}
To prove the first equation, we have to show that:
\begin{align}
f_{(3n)} &= 1; \\
f_{(3n-1,1)} &= 0; \\
f_{(3n-k,k)} &= 1 && \text{if} \ 2 \leq k \leq 5; \\
f_{(3n-k,k)} &= f_{(3n-k-6,k-6)} + 1 && \text{if} \ 6 \leq k \leq n; \\
f_{(3n-k,k)} &= f_{(3n-k-6,k-6)} && \text{if} \ n+1 \leq k \leq \lfloor \frac{3n}{2} \rfloor.
\end{align}
$(1)$: We have $a_{(3n)} = \min\{3n+1,1\} = 1$, so $f_{(3n)} = g(1,0) = 1$.
$(2)$: We have $a_{(3n-1,1)} =\min\{3n,2\}=2$, so $f_{(3n-1,1)} = g(2,1) = 0$.
$(3)$: We have to consider all the cases one by one.
\begin{enumerate}[label=\roman*)]
\item We have $a_{(3n-2,2)} =\min\{3n-1,3\}=3$, and $f_{(3n-2,2)}=g(3,0)=1$.
\item We have $a_{(3n-3,3)} =\min\{3n-2,4\}=4$, and $f_{(3n-3,3)}=g(4,1)=1$.
\item We have $a_{(3n-4,4)} =\min\{3n-3,5\}=5$, and $f_{(3n-4,4)}=g(5,0)=1$.
\item We have $a_{(3n-5,5)} =\min\{3n-4,6\}=6$, and $f_{(3n-5,5)}=g(6,1)=g(0,1)+1=1$.
\end{enumerate}
$(4)$: For $6 \leq k \leq n$, we have $a_{(3n-k,k)} =\min\{1+3n-2k,1+k\}=1+k$, and $f_{(3n-k,k)} = g(1+k, k \mod 2)$. Define $\mu = (3n-k-6,k-6)$, so that $(3n-k,k) = (6,6)+\mu$. Then, we have $a_{\mu} = \min\{1+3n-2k,k-5\} = k-5$, so $f_{\mu} = g(k-5,k \mod 2)$. Applying the recursiveness of the function $g$, we have $f_{(3n-k,k)} = g(k+1,k\mod2) = g(k-5,k\mod2) + 1 = f_{\mu}+1$.
$(5)$: For $n+1 \leq k \leq \lfloor \frac{3n}{2} \rfloor$, there are two cases. When $n+1 \leq k \leq n+2$, we have that $a_{(3n-k,k)} = \min\{1+3n-2k,1+k\}= 1+3n-2k$ and $a_{\mu} = \min\{1+3n-2k,k-5\}=k-5$. If $n+3 \leq k \leq \lfloor \frac{3n}{2} \rfloor$, $a_{(3n-k,k)} = a_{\mu} = 1+3n-2k$. So, again, we have to consider all the possible cases:
\begin{enumerate}[label=\roman*)]
\item For $k=n+1$, so $\lambda=(2n-1,n+1)$, we have $f_{\mu}=g(n-4, n-5 \mod2)$, and $f_{\lambda}=g(n-1,n+1 \mod2)$. If $m = \lfloor \frac{n-6}{6} \rfloor$ and $\ell = n \mod6$, then $n=6(m+1)+\ell$. So $f_{\mu} = g(\ell+2,n-5 \mod 2) + m$, and $f_{\lambda} = g(\ell+5,n+1\mod2)+m$. If $\omega = n+1 \mod2$, then we have the following cases:
\begin{enumerate}[label=\alph*)]
\item $\omega=0$,$\ell=1$: $f_{\mu}=g(3,0)+m=m+1$ and $f_{\lambda}=g(6,0)+m=g(0,0)+m+1=m+1$, so they are equal.
\item $\omega=0$,$\ell=3$: $f_{\mu}=g(5,0)+m=m+1$ and $f_{\lambda}=g(8,0)+m=g(2,0)+m+1=m+1$, so they are equal.
\item $\omega=0$,$\ell=5$: $f_{\mu}=g(7,0)+m=g(1,0)+m+1=m+2$ and $f_{\lambda}=g(10,0)+m=g(4,0)+m+1=m+2$, so they are equal.
\item $\omega=1$,$\ell=0$: $f_{\mu}=g(2,1)+m=m$ and $f_{\lambda}=g(5,1)+m=m$, so they are equal.
\item $\omega=1$,$\ell=2$: $f_{\mu}=g(4,1)+m=m+1$ and $f_{\lambda}=g(7,1)+m=g(1,1)+m+1=m+1$, so they are equal.
\item $\omega=1$,$\ell=4$: $f_{\mu}=g(6,1)+m=g(0,1)+m+1=m+1$ and $f_{\lambda}=g(9,1)+m=g(3,1)+m+1=m+1$, so they are equal.
\end{enumerate}
\item For $k=n+2$, so $\lambda=(2n-2,n+2)$, we have $f_{\mu}=g(n-3, n-4 \mod2)$, and $f_{\lambda}=g(n-3, n+2 \mod2)$, so they are equal.
\item For $n+3 \leq k \leq \lfloor \frac{3n}{2} \rfloor$, we have $f_{\mu}=g(1+3n-2k,k-5 \mod 2)$ and $f_{\lambda}=g(1+3n-2k,k+1 \mod2)$, so they are equal.
\end{enumerate}
Now, consider the second equation. For partitions $\lambda \vdash 3n$ with $3$ parts, we have to show that:
\begin{align}
f_{(\lambda_1,\lambda_2,\lambda_3)} &= f_{(\lambda_1-2,\lambda_2-2,\lambda_3-2)} && \text{if} \ \lambda_3 \geq 2; \\
f_{(\lambda_1,\lambda_2,\lambda_3)} &= f_{(\lambda_1-4,\lambda_2-4)} && \text{if} \ \lambda_3 =1 \ \text{and} \ \lambda_2 \geq 4; \\
f_{(\lambda_1,\lambda_2,\lambda_3)} &= 0 && \text{if} \ \lambda_3 =1 \ \text{and} \ \lambda_2 \leq 3.
\end{align}
$(6)$: Define $\mu=(\lambda_1-2,\lambda_2-2,\lambda_3-2)$, so that $\lambda=(2,2,2)+\mu$. We have that $a_{\lambda}$ is either $1+\lambda_1-\lambda_2=1+(\mu_1+2)-(\mu_2+2)=1+\mu_1-\mu_2$ or $1+\lambda_2-\lambda_3=1+\mu_2-\mu_3$, so $a_{\lambda}=a_{\mu}$. We also have that $f_{\lambda}=g(a_{\lambda},\lambda_2 \mod2)$ and $f_{\mu}=g(a_{\lambda},\lambda_2-2 \mod2)$, so they are equal.
$(7)$: If $\lambda_3=1$ and $\lambda_2 \geq 4$, then $\lambda=(3n-k-1,k,1)$ for $4 \leq k \leq \lfloor \frac{3n-1}{2}\rfloor$. Define $\mu=(3n-k-5,k-4)$, so that $\lambda = (4,4,1)+\mu$. We have to examine a few cases:
\begin{enumerate}[label=\roman*)]
\item If $4 \leq k \leq n$, then $a_{(3n-k-1,k,1)}=\min\{3n-2k,k\}=k$ and $a_{\mu}=\min\{3n-2k,k-3\}=k-3$. If $m = \lfloor \frac{k-6}{6} \rfloor$, $\ell = k \mod6$ and $\omega=k \mod2$, we have the following cases:
\begin{enumerate}[label=\alph*)]
\item $k=5$: $f_{\mu}=g(2,1)=0$ and $f_{\lambda}=g(5,1)=0$, so they are equal.
\item $\omega=0$,$\ell=0$: $f_{\mu}=g(3,0)+m=m+1$ and $f_{\lambda}=g(6,0)+m=g(0,0)+m+1=m+1$, so they are equal.
\item $\omega=0$,$\ell=2$: $f_{\mu}=g(5,0)+m=m+1$ and $f_{\lambda}=g(8,0)+m=g(2,0)+m+1=m+1$, so they are equal.
\item $\omega=0$,$\ell=4$: $f_{\mu}=g(7,0)+m=g(1,0)+m+1=m+2$ and $f_{\lambda}=g(10,0)+m=g(4,0)+m+1=m+2$, so they are equal.
\item $\omega=1$,$\ell=1$: $f_{\mu}=g(4,1)+m=m+1$ and $f_{\lambda}=g(7,1)+m=g(1,1)+m+1=m+1$, so they are equal.
\item $\omega=1$,$\ell=3$: $f_{\mu}=g(6,1)+m=g(0,1)+m+1=m+1$ and $f_{\lambda}=g(9,1)+m=g(3,1)+m+1=m+1$, so they are equal.
\item $\omega=1$,$\ell=5$: $f_{\mu}=g(8,1)+m=g(2,1)+m+1=m+1$ and $f_{\lambda}=g(11,1)+m=g(5,1)+m+1=m+1$, so they are equal.
\end{enumerate}
\item If $k=n+1$, then $\lambda=(2n-2,n+1,1)$. We have $a_{(2n-2,n+1,1)}=\min\{n-2,n+1\}=n-2$ and $a_{\mu}=\min\{n-1,n-2\}=n-2$, so they are equal. Then, $f_{\lambda} = g(a_{\lambda},k \mod2) = g(a_{\mu}, k-4 \mod2) = f_{\mu}$.
\item If $n+2 \leq k \leq \lfloor \frac{3n}{2}\rfloor$, then $a_{(3n-k-1,k,1)}=\min\{3n-2k,k\}=3n-2k$ and $a_{\mu}=\min\{3n-2k,k-3\}=3n-2k$, so they are equal. Then, $f_{(3n-k-1,k,1)}=g(a_{\lambda},k \mod2) = g(a_{\mu}, k-4 \mod2) = f_{\mu}$.
\end{enumerate}
$(8)$: If $\lambda_3=1$ and $\lambda_2 \leq 3$, then we have to test the three possibilities and see that the coefficient is zero:
\begin{enumerate}[label=\roman*)]
\item If $\lambda_2=1$, then $a_{(3n-2,1,1)}=\min\{3n-2,1\}=1$, and $f_{(3n-2,1,1)}=g(1,1)=0$.
\item If $\lambda_2=2$, then $a_{(3n-3,2,1)}=\min\{3n-4,2\}=2$, and $f_{(3n-3,2,1)}=g(2,0)=0$.
\item If $\lambda_2=3$, then $a_{(3n-4,3,1)}=\min\{3n-6,3\}=3$, and $f_{(3n-4,3,1)}=g(3,1)=0$.
\end{enumerate}
So, the formula is true.
\end{proof}
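The case analysis above can be checked mechanically. The following is a minimal Python sketch of the auxiliary function $g$; the base values for $0 \le a \le 5$ are read off from the worked cases in the proof (the full definition of $g$ appears earlier in the paper, so this table is a reconstruction), and the recursion $g(a+6,p)=g(a,p)+1$ is the "recursiveness" invoked throughout.

```python
# Sketch of the auxiliary function g used in the case analysis above.
# Base values for 0 <= a <= 5 are read off from the worked cases
# (e.g. g(3,0)=1, g(2,1)=0); the step g(a+6, p) = g(a, p) + 1 is the
# recursive property invoked in the proof.
G_BASE = {
    (0, 0): 0, (1, 0): 1, (2, 0): 0, (3, 0): 1, (4, 0): 1, (5, 0): 1,
    (0, 1): 0, (1, 1): 0, (2, 1): 0, (3, 1): 0, (4, 1): 1, (5, 1): 0,
}

def g(a, parity):
    """g(a + 6, p) = g(a, p) + 1, with base values for a in [0, 5]."""
    steps, rem = divmod(a, 6)
    return G_BASE[(rem, parity)] + steps

# Consistency checks against the cases in the proof:
assert g(7, 0) == g(1, 0) + 1 == 2    # as in case i.c) above
assert g(6, 0) == g(0, 0) + 1 == 1    # as in case i.a) above
assert g(9, 1) == g(3, 1) + 1 == 1    # as in case i.f) above
```

The same table makes it straightforward to verify any individual equality claimed in the enumeration.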
\section{Conclusion}
This recurrence gives a faster way to compute $h_3[h_n]$. The author firmly believes that such recurrence formulas can be found to compute $h_m[h_n]$ recursively for any $m$. In effect, Dent's two column result [ ? ] is equivalent to the fact that $h_m[h_n] - s_{\underbrace{22...2}_{m \ \text{times}}} \odot h_m[h_{n-2}]$ is Schur-positive. If we find other results that have a nice recursive definition, this would hint at a proof of Foulkes' conjecture. But even in the case of $h_4[h_n]$, such a recurrence is hard to find.
\bibliographystyle{abbrv}
\nocite{*}
\section{Introduction}
In 1931 Dirac linked the existence of magnetic monopoles (MMs) with
the quantization of electric charge and postulated the relation
between the elementary
electric charge $e$ of the electron and a basic magnetic charge $g$
\cite{dirac}:
\begin{equation}
\begin{array}{ccc} g = \frac{n\hbar c}{2e} = n g_{D}, & & n =
1, 2, ... \\
\end{array}
\label{Eq.Dirac}
\end{equation}
where $n$ is an unknown integer and $g_{D} = \hbar c/2e = 68.5e$ is the
unit Dirac magnetic charge (in the cgs system).
If free quarks exist, Eq.~\ref{Eq.Dirac} should be modified by
replacing $e$
with $e/3$, which effectively increases $g$ by a factor of 3.
There was no prediction for the monopole mass. A rough estimate,
obtained assuming that the classical monopole radius is equal to the
classical electron radius, yields $m_{M}\approx
g^{2}m_{e}/e^{2}\approx n^{2} \cdot 4700 m_{e}\approx n^{2} \cdot
2.4$~GeV/c$^{2}$.
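As a sanity check, the arithmetic behind this estimate can be reproduced directly (a sketch, using $m_e \approx 0.511$ MeV/c$^2$):

```python
# Arithmetic behind the rough classical mass estimate quoted above:
# m_M ~ n^2 (g_D / e)^2 m_e, with g_D = 68.5 e and m_e ~ 0.511 MeV/c^2.
g_over_e = 68.5
m_e_MeV = 0.511

ratio = g_over_e ** 2            # ~4692, i.e. the "~4700 m_e" in the text
m_M_GeV = ratio * m_e_MeV / 1e3  # n = 1

print(round(ratio))       # 4692
print(round(m_M_GeV, 1))  # 2.4
```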
Since 1931, experimental searches for ``classical Dirac'' monopoles
have been performed at nearly every new high-energy accelerator,
employing a variety of direct and indirect methods \cite{gglp}. By a
classical (Dirac) monopole, we mean a particle without electric
charge or hadronic interactions and with magnetic charge $g$
satisfying the Dirac quantization condition (Eq.~\ref{Eq.Dirac}).
Within the framework of Grand Unified Theories (GUT) of the
strong and electroweak interactions,
supermassive magnetic monopoles
with masses $m \geq 10^{16} \,$GeV/c$^{2}$ could have been
produced in the early Universe as intrinsically stable topological
defects at a high energy phase transition that leaves an unbroken
U(1) group \cite{GUTmm}.
At the present time, such monopoles could exist in the penetrating
cosmic radiation as
``fossil'' remnants of that transition.
The detection of such particles would be one of the most spectacular
confirmations of GUT
predictions.
The most stringent upper limits on an isotropic flux of GUT magnetic
monopoles, assuming monopole masses $m_{M}>10^{16}$~GeV/c$^{2}$, have
been set by the MACRO experiment \cite{MACRO}.
In some Grand Unified theories values of the monopole mass as low as
$10^{4}$~GeV/c$^{2}$ are allowed \cite{lightmm,alvaro}.
Although it is not yet possible to set direct limits at this mass
scale, it is worthwhile to search in the accessible region at
LEP energies.
Searches for classical point-like monopoles have been performed
mainly at high-energy accelerators and in cosmic radiation
experiments.
Monopole searches have predominantly used either ionization or
induction detection techniques.
Induction experiments measure the monopole magnetic charge and are
independent of monopole mass and velocity. These experiments
search for
the induction of a persistent current within a superconducting loop
\cite{loop}. Searches for magnetic monopoles using this method have
been performed at the p$\bar{\mathrm p}$ Tevatron collider assuming that
produced MMs could stop, and be trapped and bound, in the matter
surrounding the D0 and CDF collision regions \cite{trapped}. The same
strategy has been used to search for magnetic monopoles produced in
e$^{+}$p collisions at HERA \cite{hera}.
Ionization experiments rely on the large magnetic charge of monopoles
to produce more ionization than an electric charge travelling with the same
velocity.
For $g=g_{D}$ and velocities $\beta =\left(v/c\right) \geq 10^{-2}$ a
magnetic monopole behaves, in terms of ionization energy loss
$(dE/dx)$, like an equivalent electric charge with $(ze)_{eq}=g_{D}
\beta$. The energy losses are thus very large
\begin{equation}
(dE/dx)_{g}=(g \beta /e)^{2}(dE/dx)_{e}
\label{Eq.dedx}
\end{equation}
and Dirac magnetic monopoles would be easily distinguished
from minimum ionizing electrically charged Standard Model (SM)
particles \cite{dedx,GG,lepOLD}.
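The velocity-dependent equivalent charge and the resulting energy-loss enhancement of Eq.~2 can be sketched as follows (function names are illustrative):

```python
# Equivalent electric charge and ionization enhancement of a Dirac
# monopole, following the relations quoted in the text.
def z_equivalent(beta, n=1):
    """(ze)_eq / e = n * 68.5 * beta, the Dirac charge in units of e."""
    return n * 68.5 * beta

def dedx_ratio(beta):
    """(dE/dx)_g / (dE/dx)_e at equal velocity."""
    return z_equivalent(beta) ** 2

print(round(z_equivalent(0.5), 2))  # 34.25: a beta=0.5 monopole ionizes
print(round(dedx_ratio(0.5)))       # ~1173 times a unit charge,
print(round(dedx_ratio(1.0)))       # rising to ~4692 at beta ~ 1
```

This enormous enhancement is what makes the hit-level saturation signature discussed below so distinctive.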
Direct searches for magnetic monopoles using tracking devices
were performed at p$\bar{\mathrm p}$ and \ensuremath{\mathrm{e}^+\mathrm{e}^-}\ colliders.
Experiments at the Tevatron collider established cross section limits of about
$2\times 10^{-34}$~cm$^{2}$ for MMs with $m_{M} < 850$~GeV/c$^{2}$
\cite{Bertani}, while searches at LEP have excluded masses up to
45~GeV/c$^{2}$ \cite{LEP}.
Indirect searches for classical monopoles have relied on the effects
of virtual monopole/anti-monopole loops added to QED processes
in p$\bar{\mathrm p}$ and \ensuremath{\mathrm{e}^+\mathrm{e}^-}\ collisions \cite{abbott, acciarri}.
Since the Standard Model \ensuremath{{\mathrm{Z}^0}} boson could couple to monopoles,
assuming that the coupling between the \ensuremath{{\mathrm{Z}^0}}\ and a MM pair is
larger than
for any lepton pair, the measurement of the \ensuremath{{\mathrm{Z}^0}}\ decay width
provides an indirect limit on MM production
for $m_{M}<m_{Z}/2$ \cite{alvaro,lepOLD}.
This paper describes a direct search for MM
pairs produced in $\ensuremath{\mathrm{e}^+\mathrm{e}^-} \rightarrow M\bar{M}(\gamma)$ reactions.
The data were collected with the OPAL detector at the LEP accelerator
at CERN.
This search was primarily
based on the $dE/dx$ measurements in the tracking chambers.
OPAL has a well established analysis to search for stable,
long-lived, massive particles using the $\ensuremath{\mathrm{d}E/\mathrm{d}x}$ signatures of
individual charged particle tracks \cite{benelli}.
This analysis technique could not be used here because MMs are too
heavily ionizing, resulting in charge saturation in the central jet chamber.
Therefore, a new analysis
method was developed based on hit information rather than
reconstructed tracks.
The analysis was sensitive to MMs with masses from 45~GeV/c$^{2}$ up
to the kinematic limit (about 103~GeV/c$^{2}$).
\section{The OPAL Detector}
A description of the OPAL detector and its jet chamber can be found
in reference \cite{ref:detector}.
Only a brief overview is given here.
The OPAL detector operated at LEP between 1989 and 2000 and is now
dismantled.
The central detector comprised a system of tracking chambers,
providing track reconstruction over 96\% of the full solid
angle\footnote
{The OPAL right-handed coordinate system is defined such that the
$z$-axis is
in the
direction of the electron beam, the $x$-axis points toward the
centre of the
LEP ring, and $\theta$ and $\phi$ are the polar and azimuthal
angles, defined
relative to the $+z$- and $+x$-axes, respectively. The radial
coordinate is
denoted by $r$.}
inside a 0.435~T uniform magnetic field parallel to the beam axis.
It consisted of a two-layer
silicon microstrip vertex detector, a high-precision vertex drift
chamber with axial and stereo wires,
a large-volume jet chamber and a set of $z$-chambers measuring
the track coordinates along the beam direction.
The jet chamber (CJ) \cite{CJ} is the most important detector for this analysis.
The chamber, with a diameter of about 2~m and a length of about 4~m, was divided
into 24 azimuthal sectors, each equipped with 159
sense wires. Up to 159 position and $\ensuremath{\mathrm{d}E/\mathrm{d}x}$
measurements per track were thus possible.
The CJ also provided the hardware trigger for monopole candidates.
This trigger identified events with highly ionizing particles.
Of the 159 sense wires of a sector, 36 wires were combined to define
three groups
with 12 wires each. One group was at an inner region, close to the
\ensuremath{\mathrm{e}^+\mathrm{e}^-} collision axis. The other two groups were at central and outer
regions. For each wire,
hits from highly ionizing tracks were identified as those yielding
an integrated signal above a threshold of 1250 counts in the Flash
Analogue-to-Digital Converters (FADC).
For comparison, a minimum ionizing particle yields about 200 FADC
counts.
Values slightly above 1000 FADC counts
are typical for protons with a momentum of a few hundred MeV.
If, within a group, more than 10 wires detected a high \ensuremath{\mathrm{d}E/\mathrm{d}x}\ hit, a decision bit was set.
If this bit was set by all groups of a sector, the monopole trigger was fired.
Using raw hit information of randomly triggered events, the monopole trigger
was determined to have an efficiency greater than 99\%.
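The trigger logic just described can be sketched as follows; the data layout and function names are hypothetical and do not correspond to the actual OPAL trigger implementation:

```python
# Illustrative emulation of the CJ monopole trigger logic described above:
# three groups of 12 wires per sector; a wire counts as a high-dE/dx hit
# above 1250 FADC counts; a group bit requires more than 10 such wires;
# the trigger fires when all three group bits of a sector are set.
HIGH_DEDX_THRESHOLD = 1250   # FADC counts
WIRES_PER_GROUP = 12
MIN_HIGH_WIRES = 11          # "more than 10 wires"

def group_bit(wire_charges):
    """True if more than 10 of the group's 12 wires see a high-dE/dx hit."""
    high = sum(q > HIGH_DEDX_THRESHOLD for q in wire_charges)
    return high >= MIN_HIGH_WIRES

def monopole_trigger(sector_groups):
    """sector_groups: three lists (inner, central, outer) of 12 charges each."""
    return all(group_bit(g) for g in sector_groups)

# A heavily ionizing track saturating all groups fires the trigger ...
hot = [[4000] * WIRES_PER_GROUP] * 3
# ... while a minimum-ionizing one (~200 FADC counts) does not.
mip = [[200] * WIRES_PER_GROUP] * 3
print(monopole_trigger(hot), monopole_trigger(mip))   # True False
```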
A lead-glass electromagnetic
calorimeter located outside the magnet coil
covered the full azimuthal range with good hermeticity
in the polar angle range of $|\cos \theta |<0.984$.
The magnet return yoke was instrumented for hadron calorimetry
covering the region $|\cos \theta |<0.99$ and was surrounded by
four layers of muon chambers.
Electromagnetic calorimeters close to the beam axis
completed the geometrical acceptance down to 24 mrad
on each side of the interaction point.
These small-angle calorimeters were also used to measure the
integrated luminosity by counting Bhabha events \cite{lumi-paper}.
In order to trigger on the signal described in the
introduction, only data collected when the monopole trigger was active
were used.
The data-set analysed here was recorded during the LEP2 phase
with an average centre-of-mass (c.m.) energy of 206.3~GeV, and corresponded to a
total integrated luminosity of 62.7~pb$^{-1}$.
\section{Monte Carlo Simulation}
The signal reaction $\mathrm{e}^{+}\mathrm{e}^{-}\rightarrow
M\bar{M}$
was simulated at $\sqrt{s_{MC}}$~=~208~GeV for
monopole masses ($m_{M}$) of
45, 50, 55, 60, 65, 70, 75, 80, 85, 90, 95, 100, 101, 102, 103 and
104~GeV/c$^{2}$ with Monte Carlo (MC) event samples.
Each sample contained 1000 events.
Small differences in the centre-of-mass energies between the
OPAL data analysed ($\sqrt{s_{min}}$=~203.6~GeV,
$\sqrt{s_{max}}$=~207.0~GeV, for an average
$\sqrt{s_{data}}$=~206.3~GeV) and the signal MC samples
($\sqrt{s_{MC}}$) have a negligible effect on the analysis. MM masses
were scaled to the c.m. energy with the equation:
\begin{equation}
m_{scaled} = m_{M}\sqrt{\frac{s_{data}}{s_{MC}}}
\end{equation}
This scaling is valid
since $\ensuremath{\mathrm{d}E/\mathrm{d}x}$ (hence detection efficiency) is a linear function of
mass.
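Numerically, the scaling is a one-line correction; for example (a sketch, energies in GeV):

```python
import math

# Mass scaling of Eq. (3): m_scaled = m_MC * sqrt(s_data / s_MC).
def scale_mass(m_MC, sqrt_s_data=206.3, sqrt_s_MC=208.0):
    """Scale a generated monopole mass to the data centre-of-mass energy."""
    return m_MC * math.sqrt(sqrt_s_data**2 / sqrt_s_MC**2)

# A 103 GeV/c^2 sample generated at sqrt(s) = 208 GeV corresponds to
print(round(scale_mass(103.0), 1))   # ~102.2 GeV/c^2 at sqrt(s) = 206.3 GeV
```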
The very large value of the magnetic charge makes it impossible to
use perturbative theory to calculate the MM production process.
MMs were assumed to be spin 1/2 particles, produced from the \ensuremath{\mathrm{e}^+\mathrm{e}^-}\,
initial state
via annihilation into a virtual photon, which yields a
monopole-antimonopole
pair with a uniform azimuthal distribution and with the typical
fermion polar angle distribution $\propto (1+ \cos^{2}\theta)$:
\begin{equation}
\mathrm{e}^{+}\mathrm{e}^{-}\rightarrow \gamma^{*} \rightarrow
M\bar{M}
\end{equation}
Since magnetic charge cannot be simulated directly, MMs were
simulated as heavy electrically charged fermions
with an effective charge of $(ze)_{eq}=g_{D} \beta$ (assuming $n=1$).
The specific ionization energy loss was computed according to Eq.~\ref{Eq.dedx}.
A magnetic monopole interacts with a magnetic field analogously to
how an electron interacts with an electric field. The Lorentz force
for a magnetic monopole carrying magnetic charge $g$ is:
\begin{equation}
\vec{F}=g\left(\vec{B}-\vec{v}\times\vec{E}\right)
\end{equation}
The GEANT3 \cite{geant} based OPAL detector simulation program
\cite{gopal}
was used to simulate the behavior of the MMs in the OPAL detector.
The routines to transport the particles through the magnetic field were modified
such that over a given step the change in the momentum $d\vec{p}/dt$ of the monopole
was obtained by solving analytically the differential equation:
\begin{equation}
\frac{d\vec{p}}{dt} = g\vec{B}
\end{equation}
The solution describes the motion of a magnetic monopole in a uniform
magnetic field. The trajectory is a parabola,
accelerating in the direction of the magnetic field. In
the plane perpendicular to the magnetic field the motion is along a
straight line, in sharp contrast to electrically charged particles,
which curve in this plane.
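A toy non-relativistic integration (arbitrary units; this is an illustration, not the GEANT3 implementation) makes the parabolic behaviour explicit:

```python
# Toy Euler integration of dp/dt = g*B with B along z, illustrating the
# monopole trajectory: straight-line motion in the x-y plane and uniform
# acceleration along the field.  All units and values are arbitrary.
def trajectory(g_B=1.0, m=1.0, p0=(1.0, 0.0, 0.0), dt=1e-3, steps=2000):
    x = y = z = 0.0
    px, py, pz = p0
    for _ in range(steps):
        pz += g_B * dt           # only the longitudinal momentum changes
        x += (px / m) * dt       # transverse momentum is constant
        y += (py / m) * dt
        z += (pz / m) * dt
    return x, y, z

x, y, z = trajectory()
# Transverse motion stays on a straight line (here the x axis), while
# z grows as (g*B / 2m) * t^2: a parabola in the x-z plane.
print(round(x, 3), round(y, 3), round(z, 3))
```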
We studied the effects of multiple scattering of the monopoles and the
modelling of the electric field between the anode, cathode, and
potential wires in CJ and found them to be negligible.
A software emulation of the monopole trigger was used to study its efficiency.
For the simulated monopole events, the trigger efficiency was found
to be essentially 100\%.
The background was estimated using MC simulations of Standard Model
processes, generated at $\sqrt{s}$=206~GeV.
Two-fermion events ($Z^{0*}/\gamma^{*}\rightarrow
f\overline{f}(\gamma)$ with $f$ = $e, \mu,\tau, q$) were simulated
with KK2f \cite{twofermions}. For the two-photon background, the
PYTHIA \cite{pytia} and PHOJET \cite{phojet} Monte Carlo generators
were used for $\ensuremath{\mathrm{e}^+\mathrm{e}^-} q\overline{q}$ final states and the Vermaseren
\cite{vermaseren} and BDK \cite{bdk} generators for all $\ensuremath{\mathrm{e}^+\mathrm{e}^-}
l^{+}l^{-}$ final states. Four-fermion final states were simulated
with grc4f \cite{grc4f}, which takes into account interference
between all diagrams.
All generated signal and background events were processed through the
full simulation of the OPAL detector. The same event
analysis chain was applied to the simulated events and to the data.
\section{Data Analysis}
\label{sect.4}
\begin{table}
\begin{center}
{\small
\begin{tabular}{|c|l|l|} \hline
Cut & Description & cut value \\\hline
Preselection & Total charge per hit (CJ): & $\ge$ 1000~FADC counts \\
& Number of Tracks plus Clusters: & $\le$ 18 \\\hline
1
& The first hit wire: & $\le$ 2 \\
& Number of Tracks plus Clusters: & $\le$ 4 \\\hline
2 & Distance between the 2 sectors: & $\ge$ 8 \\
3 & Number of hits in overflow in $\majorsec$: & $\ge$ 10 \\
4 & Z mean coordinate (CJ): & $\le$ 50~cm \\
5 & Charge per hit in the $\majorsec$: & $\ge$ 3700~FADC counts
\\
6 & Charge per hit in the $\minorsec$: & $\ge$ 3000~FADC counts
\\
& Total charge per hit (CJ): & $\ge$ 2500~FADC counts
\\\hline
\end{tabular}}
\parbox{0.9\textwidth}{\caption {\sl
List of cuts applied to the data.
\label{tab:raw1}}}
\end{center}
\end{table}
Magnetic monopoles would distinguish themselves
by their anomalously high ionization energy loss in CJ and by
the different plane of curvature of the trajectory in the magnetic field,
compared to electrically charged particles.
The large value of the specific energy loss ($\ensuremath{\mathrm{d}E/\mathrm{d}x}$) of a MM in the
gas of
the tracking detectors would induce a saturation in most of the wire
hits.
With the signals from both ends of the wire saturated, it is not
possible to determine the $z$ position from charge sharing. In this
case the $z$ position is set to zero by the reconstruction program.
In the MC, most MM events are seen to exhibit a mean z-coordinate near
zero, because of saturation effects.
Rather than trying to reconstruct the MM tracks in 3 dimensions,
events were examined for the characteristic MM pattern
of ionisation in the sectors of the OPAL Jet Chamber.
Pair-produced magnetic monopoles, $\ensuremath{\mathrm{e}^+\mathrm{e}^-}\rightarrow M\bar{M}(\gamma)$,
would be expected to be produced back to back with a
characteristic pattern of hits
in the jet chamber.
This would have resulted in an azimuthal separation of about 12 sectors
between the two sectors with the highest energy deposits,
called $\majorsec$ and $\minorsec$, with little energy deposited
elsewhere in the detector.
Based on these considerations, events were rejected if the overall
charge deposited on the sense wires
normalised per hit was smaller than 1000 FADC counts, or if the total multiplicity
of tracks plus clusters in the detector was greater than 18.
The FADC count levels were based on gains and calibrations. We
refer to these two cuts as the preselection, see
Table~\ref{tab:raw1}.
To reject some un-modelled events, further cuts were applied:
the number of reconstructed tracks plus clusters
had to be no more than 4 and the first wire hit in CJ had to be one
of the first two wires (cut 1 in Table~\ref{tab:raw1}).
Table~\ref{tab:raw1} summarizes the other selection criteria.
We required the $\majorsec$ and $\minorsec$ to have an azimuthal
separation of at least
eight sectors (cut 2) and the number of hits in overflow in the
$\majorsec$ to be larger than or equal to 10 (cut 3). Since
the typical MM signature would exhibit a mean z-coordinate
near zero, the
average of the $z$ coordinate in CJ was required to be less than 50~cm (cut
4). The deposited charge per hit in $\majorsec$ and $\minorsec$
was required to be larger than 3700 FADC and 3000 FADC counts, respectively (cut
5 and cut 6) and the total charge per hit in all the
CJ sectors to be larger than 2500 FADC counts (cut 6).
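The full selection can be sketched as a single predicate over a per-event summary; all field names below are illustrative and are not the OPAL analysis code:

```python
# Hedged sketch of the selection described above (preselection and cuts
# 1-6), applied to a hypothetical per-event summary dictionary.
def passes_selection(ev):
    cuts = [
        ev["total_charge_per_hit"] >= 1000,   # preselection
        ev["n_tracks_plus_clusters"] <= 18,   # preselection
        ev["first_hit_wire"] <= 2,            # cut 1
        ev["n_tracks_plus_clusters"] <= 4,    # cut 1
        ev["sector_separation"] >= 8,         # cut 2
        ev["n_overflow_hits_major"] >= 10,    # cut 3
        abs(ev["z_mean_cm"]) <= 50,           # cut 4
        ev["charge_per_hit_major"] >= 3700,   # cut 5
        ev["charge_per_hit_minor"] >= 3000,   # cut 6
        ev["total_charge_per_hit"] >= 2500,   # cut 6
    ]
    return all(cuts)

# A back-to-back, heavily ionizing, central event survives:
monopole_like = dict(total_charge_per_hit=5000, n_tracks_plus_clusters=2,
                     first_hit_wire=1, sector_separation=12,
                     n_overflow_hits_major=40, z_mean_cm=3.0,
                     charge_per_hit_major=6000, charge_per_hit_minor=5500)
print(passes_selection(monopole_like))   # True
```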
The Standard Model background was dominated by Bhabha events and
two-photon hadronic events, with a contribution from other two-photon
events.
The effect of the cuts on the samples at an average
c.m. energy of $\sqrt{s}$=206.3~GeV is
shown in Table~\ref{tab:raw2}.
After applying cut 1, there was poor agreement between data
and MC (see Table~\ref{tab:raw2}).
This was because the
data still contained un-modelled backgrounds from beam-gas interactions,
cosmic rays and
detector noise. This un-modelled background was much reduced by the
subsequent cuts since beam-gas interactions yield particles
which mainly travel along the beam pipe and do not have the
characteristic back-to-back
pattern, and detector noise does not
deposit large amounts of charge on the wires.
The remaining difference of 15-20$\%$ between the number of events in
the data
and MC after cut 2 does not affect our results, as the signal is so well
separated from the background that we can impose very hard cuts to remove
all the background without affecting the detection efficiency.
Fig.~\ref{fig:variable} shows the distribution of two of the main variables used by the analysis
after cut 2:
the charge per hit in the CJ sector $\majorsec$ and the average of
the z-coordinate.
The total number of data events at this stage is 2928 and the total
number of
the MC Standard Model events is 2462 (Table~\ref{tab:raw2}).
Since the magnetic monopole behavior would be very different from that of
any electrically charged SM particle,
all the variables used in the analysis have well separated
distributions for the MM signal and the SM MC backgrounds.
For this reason it can be seen from Table~\ref{tab:raw2} that no MC
background event survived the analysis cuts.
Moreover the overall detection efficiency is very high ($\geq 90\%$)
for almost all MM masses.
In Fig.~\ref{fig:effy} the detection efficiency for pair-produced
magnetic monopoles at $\sqrt{s}\cong$ 206~GeV is shown as a function
of m$_M$.
\begin{table}
\begin{center}
{\small
\begin{tabular}{|ccc|c|c|c|c|c|c|c|c|c|c|} \hline
&&&
\multicolumn{9}{|c|}{Number of background events SM MC} & \\
\hline
cut & data & Total SM MC & bhabha & 2f & qq & $2\gamma$(e) &
$2\gamma(\mu)$ & $2\gamma(\tau)$ & $\nu\nu$ & 4f & $2\gamma$(q) &
sig. eff.($\%$) \\ \hline
1 & 44491 & 5707 & 4231 & 0.7 & 0.6 & 75.3 & 2.2 & 71.9 &
1.9& 57 & 1266 & 91 \\
2 & 2928 & 2462 & 1927 & 0.1 & 0.3 & 6.0 & 0.1 & 27.5 &
0.3 & 14 & 487 & 91 \\
3 & 2576 & 2194 & 1661 & 0.1 & 0.3 & 5.4 & 0.1 & 27.4 &
0.3 & 12.8 & 487 & 91 \\
4 & 1982 & 1597 & 1405& 0.0 & 0.0 & 0.4 & 0.0 & 6.9 &
0.0 & 6.4 & 177 & 91 \\
5 & 2 & 1.2 & 0.5 & 0.0 & 0.0 & 0.0 & 0.0 & 0.1 &
0.0 & 0.0 & 0.6 & 91 \\
6 & 0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 & 0.0 &
0.0 & 0.0 & 0.0 & 91 \\\hline
\end{tabular}}
\parbox{0.9\textwidth}{\caption {\sl
The number of data and Monte Carlo events remaining after the cuts
for analysed data-set collected
at $\sqrt{s}$=206.3~GeV and for various
MC SM background processes normalised to the integrated luminosity of
the data (62.7~pb$^{-1}$).
The last column gives the efficiencies (in percent) for the magnetic
monopole MC signal
simulated in the mass region between 45~GeV/c$^{2}$ and
103~GeV/c$^{2}$.
\label{tab:raw2}}}
\end{center}
\end{table}
\section{Estimates of Systematic Uncertainties}
The distributions of the variables in the data and SM MC have similar shapes.
The differences in the mean values are quite small.
The MC modelling of the $\ensuremath{\mathrm{d}E/\mathrm{d}x}$ may introduce some systematic uncertainties.
These
were evaluated by displacing the cut value on a given variable $x$
from the original position $x_{0}$ to a new position
$\overline{x}_{0}$, to reproduce on the simulated events the effect
of the cut on the real data. $\overline{x}_{0}$ is defined by:
\begin{equation}
\overline{x}_{0}=\left(x_{0}-\left\langle
x\right\rangle_{data}
\right)\frac{\sigma_{bkg}}{\sigma_{data}}+\left\langle
x\right\rangle_{bkg}
\end{equation}
where $\left\langle x\right\rangle_{data}$, $\left\langle
x\right\rangle_{bkg}$, $\sigma_{data}$ and $\sigma_{bkg}$ are the
mean values and the standard deviations of the distributions of the
variable $x$ for the data and the simulated background. These
quantities were calculated from the $x$ distributions of the events
surviving the cuts on all the other variables used in the selection.
It was verified that using the distribution of $x$ at other stages of
the selection leads to negligible changes in the values of this
uncertainty.
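The displaced-cut prescription can be sketched as:

```python
# Displaced-cut prescription above: map a cut x0, tuned on data, onto the
# simulated background so that it acts at the same relative position of
# the background distribution.
def displaced_cut(x0, mean_data, sigma_data, mean_bkg, sigma_bkg):
    """x0_bar = (x0 - <x>_data) * sigma_bkg / sigma_data + <x>_bkg."""
    return (x0 - mean_data) * sigma_bkg / sigma_data + mean_bkg

# If data and MC distributions agreed perfectly, the cut is unchanged:
print(displaced_cut(3700.0, 2000.0, 500.0, 2000.0, 500.0))   # 3700.0
```

The efficiency change induced by replacing $x_0$ with $\overline{x}_0$ is then read off directly as the systematic uncertainty.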
The procedure was repeated for the main variables used in the event
selection (Table~\ref{tab:raw1}): the number of overflows in
$\majorsec$, the Z mean coordinate in CJ and the charge per hit in
$\majorsec$ and $\minorsec$. The difference between the reduced
efficiency, due to the displacement of the cut, and that obtained
with the nominal selection was taken as the systematic uncertainty
due to the modelling of the variable under consideration. The
relative systematic uncertainties in the signal efficiency associated
with the various quantities are reported in Table~\ref{tab:sys}. The
range comes from different values obtained for the different MM
masses.
At a given centre-of-mass energy the different systematic
uncertainties were assumed to be independent, so that the total
systematic uncertainty was calculated as the quadratic sum of the
individual uncertainties. The global systematic uncertainty ranges
between 0.4\% and 5.2\% (Table~\ref{tab:sys}).
The MC statistical uncertainty, due to the limited number of signal
events generated, has been computed using a binomial formula and is
reported in Table~\ref{tab:sys}.
\begin{table}
\begin{center}
\begin{tabular}{|c|c|}\hline
Quantity & Systematic uncertainty (\%) \\\hline
Number of overflows in $\majorsec$ & 0.0 - 0.2 \\
Z mean coordinate (CJ) & 0.2 - 0.4 \\
Charge per hit in $\majorsec$ & 0.3 - 4.7 \\
Charge per hit in $\minorsec$ & 0.2 - 2.2 \\\hline\hline
Global systematic uncertainty & 0.4 - 5.2 \\\hline
Signal MC statistics & 0.6 - 0.8 \\\hline\hline
Total & 0.7 - 5.3 \\\hline
\end{tabular}
\parbox{0.9\textwidth}{\caption {\sl Summary of systematic
uncertainties for the
signal efficiency of the various quantities used in the analysis. The
range of results corresponds to the values obtained for the different MM
masses.
\label{tab:sys}}}
\end{center}
\end{table}
\section{Results and Conclusions}
No magnetic monopole signal was found in this search.
In Figure~\ref{fig:limit} the $95\%$ CL upper limit on the production
cross-section
at an average c.m. energy of $\sqrt{s}=206.3$~GeV is shown as a
function of the monopole mass.
The average upper limit on the cross-section, computed using a
frequentist approach, is 0.05~pb in the mass
range $45<m_M<102$~GeV/c$^{2}$. This limit is essentially independent
of the mass in this range.
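Assuming the textbook frequentist 95\% CL Poisson bound of 3.0 events for zero observed candidates with negligible background (an assumption; the paper's exact statistical treatment may differ in detail), the quoted value is reproduced from the efficiency and luminosity given above:

```python
# Back-of-envelope check of the quoted cross-section limit, assuming the
# standard 95% CL frequentist bound of 3.0 events for zero observed
# candidates and negligible expected background.
N95 = 3.0
efficiency = 0.91        # signal efficiency from the selection above
luminosity_pb = 62.7     # integrated luminosity in pb^-1

sigma95_pb = N95 / (efficiency * luminosity_pb)
print(round(sigma95_pb, 3))   # ~0.053 pb, consistent with the quoted 0.05 pb
```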
The computation of the cross-section is non-trivial. Nevertheless we
expect the cross-section to be large.
The cross-section for the pair production of Dirac Magnetic Monopoles
computed assuming a naive tree-level coupling through an s-channel
virtual photon, according to the effective charge $(ze)_{eq}= g_{D}
\beta$, is around 5 orders of magnitude larger than the upper limit
obtained in this experiment \cite{LEP}. In this model we can thus
exclude classical MMs in the mass range 45-102 GeV/c$^{2}$.
This is a new excluded mass range for Dirac magnetic monopole
searches in \ensuremath{\mathrm{e}^+\mathrm{e}^-}\ interactions.
\section{Acknowledgements}
\par
We particularly wish to thank the SL Division for the efficient operation
of the LEP accelerator at all energies
and for their close cooperation with
our experimental group. In addition to the support staff at our own
institutions we are pleased to acknowledge the \\
Department of Energy, USA, \\
National Science Foundation, USA, \\
Particle Physics and Astronomy Research Council, UK, \\
Natural Sciences and Engineering Research Council, Canada, \\
Israel Science Foundation, administered by the Israel
Academy of Science and Humanities, \\
Benoziyo Center for High Energy Physics,\\
Japanese Ministry of Education, Culture, Sports, Science and
Technology (MEXT) and a grant under the MEXT International
Science Research Program,\\
Japanese Society for the Promotion of Science (JSPS),\\
German Israeli Bi-national Science Foundation (GIF), \\
Bundesministerium f\"ur Bildung und Forschung, Germany, \\
National Research Council of Canada, \\
Hungarian Foundation for Scientific Research, OTKA T-038240,
and T-042864,\\
The NWO/NATO Fund for Scientific Research, the Netherlands.\\
\section{Introduction}
The unique properties of graphene, the first truly two-dimensional material, have spurred a flood of research on fundamental physics and on technological possibilities. \cite{Neto_Gfn_ElProp_RMP09, Castro_BBLG_ElProp_JPCM10, Nilsson_BLG_MultiLG_electron_PRB08, SDS_Gfn_RMP11, PesinAHM_Gfn_Spintron_NP12, McCann_MLGBLG_ElProp_12, McCann_BLG_ElProp_12} Monolayer graphene (MLG) has a non-Bravais honeycomb lattice structure with two triangular sublattices. The physical consequences of its linear
$\pi$-band crossing at the Fermi level, described at low energies
by a chiral massless Dirac $\vec{k} \cdot \vec{p}$ Hamiltonian, have been discussed at length.
Bilayer graphene (BLG) consists of two Bernal-stacked coupled graphene monolayers, resulting in four inequivalent sites and four corresponding $\pi$-bands. Its carriers are chiral but massive and characterized
at low energies by a gapless parabolic spectrum displaying sublattice pseudospin-momentum locking.
In BLG a band gap may be induced by a top gate \cite{Oostinga_BLG_GateInducedInsul_NM08, KuzmenkoCrassee_BLG_TuneGap_PRB09, Deshpande_BLG_DiracPt_APL09} or by dual gates
which can vary the carrier density in the two layers independently. \cite{Ohta_BLG_ControlElStr_S06}
The theory of electronic transport in this unique and tunable $\pi$-band system has been investigated extensively. \cite{Adam_BLG_Transp_PRB08, Adam_BLG_TempDepCond_PRB10, SDS_BLG_Transp_PBR10, Culcer_BLG_transp_PRB09, Hwang_BLG_inhomogeneity_temperature_PRB10, Kechedzhi_BLG_TrigWarp_PRL07, Kechedzhi_MLG_BLG_WL_EPJ07, Koshino_BLG_Deloc_PRB08, KoshinoAndo_BLG_Transp_PRB06, Min_BLG_Opt_TranspGap_PRB11, Min_MultiLG_OptCond_PRL09, Rossi_BLG_Percol_PRL11, Sinner_MLG_BLG_Dis_Cond_PRB11, Trushin_BLG_ThermCond_EPL12, Trushin_BLG_MinCond_PRB10, Cserti_BLG_MinCond_TrigWarp_PRL07, Gradinar_BLG_Strain_Lifshitz_PRB12, Li_MLG_BLG_CrgImp_SSC12, Li_BLG_insulating_subgap_conductance_NP11, Prada_BLG_QSHE_SSC11, Yuan_BLG_TLG_DisTransp_PRB10, Hatami_BLG_MgnFldCondDis_PRB11, McCannFalko_BLG_LL_QHE_PRL06, Abergel_BLG_Vly_Crnt_APL09}
Experimental studies have focused on transport,
\cite{MiyazakiLi_BLG_unipolar_APL12, Miyazaki_BLG_Dis_PerpE_NL10, Bao_BLG_MinCond_12, Efetov_BLG_MultibandTransp_PRB11, Gorbachev_BLG_WL_PRL07, Lee_BLG_magnetotransport_NL11, Shioya_BLG_TuneNonlnrCrnt_APL12, VelascoJing_BLG_insulating_spectroscopy_NN12, Xiao_BLG_CrgImpSct_PRB10} magnetotransport, \cite{Novoselov_BLG_QHE_NP06, Henriksen_BLG_CycloRes_PRL08, Kim_BLG_SP_VP_PRL11, Sanchez-Yamagishi_BLG_Twist_QHE_PRL12, vanElferen_BLG_suspended_QHFmg_PRB12, Freitag_BLG_Susp_SpontGap_PRL12} and optics. \cite{Yang_MLG_BLG_Xtn_OptResp_PRL09, ParkLouie_Xtn_NL10, Martin_BLG_Susp_Compress_PRL10} Recent efforts that have succeeded in manufacturing and manipulating quantum dots, \cite{Fringes_BLG_QD_PSSB11, Drocher_BLG_QD_12} have been motivated by
potential applications in quantum computing.
An extensive body of research has been devoted to electron-electron interactions in BLG. \cite{Barlas_BLG_neutral_nonfermi_PRB09, NandkishoreLevitov_BLG_ee_PRB10, Lemonik_BLG_GS_PRB12, Nilsson_BLG_ee_PhaseDia_PRB06,
Stauber_BLG_Fmg_JPCM08, Jung_BLG_persistent_11, Jung_BLG_PseudospinFmg_PRB11, NandkishoreLevitov_BLG_QAHE_PRB10, Min_BLG_Superfl_PRB08, Zhang_BLG_SpontSymBrk_PRB10, Zhang_BLG_SpontQHE_PRL12, Barlas_BLG_exciton_condensation_PRL10, Castro_BLG_Fmg_PRL08, Mayorov_BLG_interaction_spectrum_S11, Abergel_BLG_Correl_10, KusminskiyCampbell_BLG_ee_EPL09, KusminskiyNilsson_BLG_Compress_PRL08, Borghi_BLG_Compress_PRB10, BorghiPolini_MLGBLG_Fermi_enhancement_SSC09, Sensarma_BLG_Correl_PRB11, Vafek_BLG_ee_RG_PRB10, Killi_BBLG_LuttLiq_PRL10,
Borghi_BLG_dynamical_response_collective_PRB09, Hwang_BLG_RKKY_PRL08, Wang_BLG_Scrn_PRB07, Tse_BLG_Drude_Scrn_PRB09, Sensarma_BLG_DynScrn_PRB10, Wang_BBLG_BBLG_PRB10, NandkishoreLevitov_BLG_DynScrn_Xtn_PRL10, Gamayun_BLG_DynScrn_PRB11, TriolaRossi_BLG_Gap_Scrn_12, Abergel_BLG_Gap_Screen_12, LvWan_BLG_screening_transp_PRB10, Bena_MLG_BLG_LDOS_1Imp_PRL08, Hwang_MLG_BLG_Drag_PRB11, Rossi_Gfn_EffMed_PRB09, Hassan_BLG_pn_plasmon_PRB12} Interactions in all forms of graphene are expected to be strong when the two-dimensional
material is surrounded by low-$\kappa$ dielectrics.
Progress in studying interaction effects has been
aided by the availability of samples with higher mobility, \cite{Morozov_MLG_BLG_IntrMob_PRL08}
by studying suspended BLG samples that are not influenced by a substrate \cite{Feldman_BLG_Susp_BrkSym_NP09} and by breakthroughs in fabricating samples with top and back gates. \cite{Taychatanapat_BLG_DualGate_PRL10, Yan_BLG_Bolo_NN12} Interactions are expected to become more important as one approaches the charge neutrality point. \cite{Barlas_BLG_neutral_nonfermi_PRB09, NandkishoreLevitov_BLG_ee_PRB10} In equilibrium interactions in BLG lead to competing ground states, \cite{Lemonik_BLG_GS_PRB12, Nilsson_BLG_ee_PhaseDia_PRB06} including a host of exotic states. \cite{Stauber_BLG_Fmg_JPCM08, Castro_BLG_Fmg_PRL08, Barlas_BLG_exciton_condensation_PRL10, Jung_BLG_persistent_11, Jung_BLG_PseudospinFmg_PRB11, NandkishoreLevitov_BLG_QAHE_PRB10, Min_BLG_Superfl_PRB08, Zhang_BLG_SpontSymBrk_PRB10, Zhang_BLG_SpontQHE_PRL12,Mayorov_BLG_interaction_spectrum_S11}
Theoretical studies of electron-electron interactions in equilibrium BLG\cite{KusminskiyNilsson_BLG_Compress_PRL08, Borghi_BLG_Compress_PRB10, BorghiPolini_MLGBLG_Fermi_enhancement_SSC09, Sensarma_BLG_Correl_PRB11, KusminskiyCampbell_BLG_ee_EPL09, Abergel_BLG_Correl_10, Vafek_BLG_ee_RG_PRB10, Killi_BBLG_LuttLiq_PRL10}
have demonstrated, among other properties, that screening and Friedel oscillations have different functional forms from MLG and 2DEGs. \cite{Hwang_BLG_RKKY_PRL08, Wang_BLG_Scrn_PRB07, Tse_BLG_Drude_Scrn_PRB09, Sensarma_BLG_DynScrn_PRB10, Wang_BBLG_BBLG_PRB10, NandkishoreLevitov_BLG_DynScrn_Xtn_PRL10, Gamayun_BLG_DynScrn_PRB11, TriolaRossi_BLG_Gap_Scrn_12, Abergel_BLG_Gap_Screen_12, Borghi_BLG_dynamical_response_collective_PRB09, LvWan_BLG_screening_transp_PRB10, Bena_MLG_BLG_LDOS_1Imp_PRL08}
The role of electron-electron interactions out of equilibrium has not yet received
attention. Given the wealth of research on transport, it is timely to address the influence
of electron-electron interactions on the charge conductivity of BLG.
As in single-layer graphene, charge currents in BLG lead to a net
pseudospin polarization. Interactions are
therefore expected to renormalize the charge current and with it the pseudospin polarization.
The question naturally arises of whether this polarization may be enhanced by interactions
and produce observable effects. It is important to understand whether the effect on the conductivity of non-equilibrium contributions to the interaction self-energy can be substantial, and whether it can be controlled using various tuning parameters such as the carrier density $n_e$.
This paper is therefore concerned with the effect of electron-electron interactions in bilayer graphene transport in the metallic regime $\varepsilon_F \tau/\hbar \gg 1$. We begin with the quantum Liouville equation for the density matrix, working in the first Born approximation with respect to momentum scattering.
Electron-electron interactions are taken into account self-consistently using the
non-equilibrium Hartree-Fock approximation, with screening treated in the random phase approximation.
We determine an exact expression for the conductivity in the presence of interactions within our framework. This work is distinct from recent papers discussing other interactions in BLG transport. \cite{Hassan_BLG_pn_plasmon_PRB12, Rossi_Gfn_EffMed_PRB09, Hwang_MLG_BLG_Drag_PRB11} In addition, the mean-field effect discussed here is not related to Coulomb drag, and electron-electron scattering is not relevant to the discussion at hand, which
for simplicity assumes that the temperature $T = 0$.
We demonstrate that electron-electron interactions renormalize
the charge conductivity. The interaction effect reduces
the conductivity. However, the effect has a very weak density dependence and
will be difficult to distinguish experimentally from a slight increase in disorder strength,
whose precise value is not normally known.
Surprisingly, the interaction effect \textit{vanishes} as the carrier density $n_e$ tends towards zero
because of a subtle interplay between
the electric field, the pseudospin degree of freedom, and the electron-electron interactions
mean-field. The effect is unexpectedly weak when the Fermi wave vector $k_F$ is small compared to a wave vector $q_0$, introduced below, whose size is set by the interlayer tunneling.
In BLG this wave vector is $q_0 \approx 4$nm$^{-1}$. Consequently, even a density $\sim 10^{13}$cm$^{-2}$,
which is relatively large in experimental terms, gives a Fermi wave vector small compared to $q_0$.
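The two scales can be compared with a short numerical estimate; this sketch assumes $A = 0.71$ eV nm$^2$ and $\epsilon_r = 1$ (the illustrative values used in Sec.~\ref{sec:disc}) together with $e^2/4\pi\epsilon_0 = 1.44$ eV nm.

```python
import math

A = 0.71                 # band parameter, eV nm^2 (assumed illustrative value)
coulomb = 1.44           # e^2/(4 pi eps_0), eV nm
q0 = 2.0 * coulomb / A   # q0 = e^2/(2 pi eps_0 eps_r A) with eps_r = 1, in nm^-1

n_e = 1e13 * 1e-14       # carrier density: 1e13 cm^-2 = 0.1 nm^-2
k_F = math.sqrt(math.pi * n_e)   # Fermi wave vector, nm^-1

print(q0, k_F)           # roughly 4.1 and 0.56 nm^-1: k_F << q0
```

Even at this comparatively large density the Fermi wave vector is almost an order of magnitude below $q_0$.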
A recent study of out-of-equilibrium interactions in TI showed that, for a Dirac cone, the renormalized conductivity has the same density dependence as the bare conductivity.\cite{Culcer_TI_ee_PRB11} This result is expected to apply also to MLG. In BLG, however, one would expect the renormalization to have a different density dependence given the quadratic dispersion and different functional form of the screened Coulomb potential. We find that the density dependence of the renormalization is indeed different from that of the bare conductivity, but the fractional change in the conductivity due to interactions is much weaker than in TI/MLG and vanishes at low densities.
The vanishing of the renormalization in the limit $n_e \rightarrow 0$ is explained by the fact that the pseudospin of BLG is characterized by a winding number of 2. The projection of the equilibrium
pseudospin at ${\bm k}$ onto the pseudospin at ${\bm k}'$ has a different rotational symmetry than
the driving term due to the electric field.
As $n_e \rightarrow 0$, the product of these terms averages to zero over the Fermi surface.
In this sense the relative weakness of this interaction effect in BLG is
related to the smaller Fermi velocity enhancement in BLG compared to MLG. \cite{BorghiPolini_MLGBLG_Fermi_enhancement_SSC09}
The outline of this paper is as follows. In Sec.~\ref{sec:Ham}, we introduce the BLG band Hamiltonian and discuss the technical details of the kinetic equation solution. In Sec.~\ref{sec:sct}, we calculate the scattering term in BLG in the Born approximation. In Sec.~\ref{sec:nonint} we briefly review charge transport in the absence of interactions. In Sec.~\ref{sec:int} we calculate the first order mean-field correction to the conductivity due to electron-electron interactions, then obtain an exact result to all orders. The results are discussed in Sec.~\ref{sec:disc}, while Sec.~\ref{sec:sum} summarizes our findings.
\section{Hamiltonian and Method}
\label{sec:Ham}
Our study is based on a commonly used $2$-band model for BLG:
\begin{equation}\label{BLGHamiltonian}
H_{0{\bm k}} = Ak^2 \left(
\begin{array}{cccc}
0 & e^{-2i\theta} \\
e^{2i\theta} &0\\
\end{array}
\right)
=Ak^2({\bm \sigma}\cdot\hat{\bm n}_{\bm k})
\end{equation}
where $\hat{\bm n}_{\bm k}$ is the unit vector $(\cos2\theta,\sin2\theta)^{\mathrm{T}}$ with $\theta$ the polar angle of ${\bm k}$ and $A = \hbar^2v_F^2/t_\perp$ is a material-specific constant, which determines the Fermi velocity of BLG. Here $v_F$ is the (constant) Fermi velocity of MLG and $t_\perp \approx 0.4$eV is the interlayer hopping parameter.
\cite{SDS_Gfn_RMP11}
This model is valid at energies small compared to $t_\perp$, except that it
neglects trigonal warping terms which become important at very low energies.
(We comment on the role of these terms in Sec.~\ref{sec:disc}.)
It acts in a layer pseudospin space and has eigenstates which are equal weight sums of top and
bottom layers with interlayer phase angle $ 2 \theta$.
The Hamiltonian $H_{0{\bm k}}$ can be understood as representing a Zeeman-like interaction involving the pseudospin degree of freedom with a momentum-dependent effective magnetic field whose direction is given by
$\hat{\bm n}_{\bm k}$. Unlike MLG, the pseudospin winds twice when
momentum winds around the Fermi surface.
The eigenvalues of $H_{0{\bm k}}$ are $\varepsilon_{{\bm k}\pm} = \pm Ak^2$.
The many-body Hamiltonian $\mathcal{H}$ in second quantization is
\begin{equation}
\arraycolsep 0.3 ex\begin{array}{rl}
\displaystyle \mathcal{H} = & \displaystyle \sum_{{\bm k}{\bm k}'ss'} (H_{{\bm k}{\bm k}'}^{ss'} c^\dag_{{\bm k}s} c_{{\bm k}'s'} + \frac{1}{2} \, \sum_{\bm q} V_{\bm q} \, c^\dag_{{\bm k} + {\bm q}, s}c^\dag_{{\bm k}' - {\bm q}, s'}c_{{\bm k}'s'}c_{{\bm k}s}).
\end{array}
\end{equation}
The one-particle matrix element $H_{{\bm k}{\bm k}'}^{ss'}$ accounts for band structure contributions, as well as disorder and driving electric fields, discussed below. For the matrix element $V_{\bm q} = V_{q}$
we use the statically screened Coulomb potential, determined here in the random phase approximation (RPA), \cite{Hwang_BLG_RKKY_PRL08} with $\epsilon_r$ the relative permittivity,
\begin{equation}
V_q=\frac{e^2}{2\epsilon_0\epsilon_r[q + q_0g(q)]},
\end{equation}
where the constant wave vector $\displaystyle q_0=\frac{e^2}{2\pi\epsilon_0\epsilon_rA}$, and
\begin{equation}
g(q)=\frac{1}{2k_F^2}\sqrt{4k_F^4+q^4}-\ln\bigg[\frac{k_F^2+\sqrt{k_F^4+q^4/4}}{2k_F^2}\bigg].
\end{equation}
$g(q)$ is a dimensionless function that increases monotonically from 1 to 1.755 as $q$ varies from 0 to $2k_F$, and the Fermi wave vector $k_F=\sqrt{\pi n_e}$.
Both intra-band and inter-band contributions to static screening are included in $g(q)$. For definiteness we will assume that the carrier density $n_e > 0 $.
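The quoted limits of $g(q)$ follow directly from the expression above; a minimal numerical sketch (with $k_F$ set to unity, since $g$ depends only on $q/k_F$):

```python
import math

def g(q, kF):
    # dimensionless BLG static screening function from the equation above
    return (math.sqrt(4 * kF**4 + q**4) / (2 * kF**2)
            - math.log((kF**2 + math.sqrt(kF**4 + q**4 / 4)) / (2 * kF**2)))

kF = 1.0
g0 = g(0.0, kF)          # limit q -> 0
g2 = g(2 * kF, kF)       # value at q = 2 kF: sqrt(5) - ln[(1 + sqrt(5))/2]
mono = all(g((i + 1) * 2 * kF / 100, kF) >= g(i * 2 * kF / 100, kF)
           for i in range(100))  # monotonic increase on [0, 2 kF]
print(g0, g2, mono)      # 1.0, ~1.755, True
```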
The effective single-particle kinetic equation for the ${\bm k}$-diagonal part of the density matrix, $f_{\bm k}$, is derived from the quantum Liouville equation in the weak momentum scattering regime exactly as in Ref.~\onlinecite{Culcer_TI_ee_PRB11}
\begin{equation}\label{Kinetic}
\frac{df_{\bm k}}{dt} + \frac{i}{\hbar}[H_{0{\bm k}},f_{\bm k}] + \hat{J}(f_{\bm k}) = - \frac{i}{\hbar}[H_{\bm k}^E,f_{\bm k}] + \frac{i}{\hbar}[\mathcal{B}^{MF}_{\bm k},f_{\bm k}].
\end{equation}
The scattering term $\hat{J}(f_{\bm k})$ in the first Born approximation is given by
\begin{widetext}
\begin{equation}\label{Scattering}
\arraycolsep 0.3 ex\begin{array}{rl}\displaystyle
\hat{J}(f_{\bm k}) = \frac{n_i}{{\hbar}^2}\lim_{\eta\to0}\int\frac{d^{2}k'}{(2\pi)^2}
|\bar{U}_{{\bm k}{\bm k'}}|^{2}\int_{0}^{\infty}dt'e^{-\eta t'} \{e^{-iH_{0{\bm k'}}t'/\hbar}(f_{\bm k}-f_{\bm k'})e^{iH_{0{\bm k}} t'/\hbar} + e^{-iH_{0{\bm k}}t'/\hbar}(f_{\bm k}-f_{\bm k'})e^{iH_{0{\bm k}'}t'/\hbar}\},
\end{array}
\end{equation}
\end{widetext}
where $n_i$ is the impurity density and $\bar{U}_{{\bm k}{\bm k}'}$ is the potential of a single impurity. The mean-field electron-electron interaction term is
\begin{equation}\label{EEEffective}
\mathcal{B}^{MF}_{\bm k}(f_{\bm k})=\frac{1}{(2\pi)^2}\int dk'k'\int_{0}^{2\pi}d\gamma \, V_{{\bm k}{\bm k'}} \, f_{{\bm k}'},
\end{equation}
where $\gamma = \theta' - \theta$ is the relative angle between wave vectors ${\bm k}$ and ${\bm k}'$, $V_{{\bm k}{\bm k'}}=V_{\bm q}$, and $\theta$ and $\theta'$ are the polar angles of ${\bm k}$ and ${\bm k}'$ respectively. The one-particle Hamiltonian $H_{{\bm k}{\bm k}'}^{ss'} = H_{0{\bm k}}^{ss'} \delta_{{\bm k}{\bm k}'} + H_{E{\bm k}{\bm k}'}\delta_{ss'} + U_{{\bm k}{\bm k}'}\delta_{ss'}$, where $H_{E{\bm k}{\bm k}'}$ is the electrostatic potential due to the driving electric field ${\bm E}$, and $U_{{\bm k}{\bm k}'} $ is the \textit{total} disorder potential.
The matrix element $\bar{U}_{{\bm k}{\bm k}'}$ of the RPA-screened Coulomb potential of a single impurity between plane waves is
\begin{equation}
\bar{U}_{{\bm k}{\bm k'}}=\frac{Ze^2}{2\epsilon_0\epsilon_r[q+q_0g(q)]}
\end{equation}
where $Z=1$ is the ionic charge. Below we suppress the pseudospin indices $ss'$ and treat all quantities as $2 \times 2$ matrices.
We decompose $f_{\bm k}=n_{\bm k} \openone + S_{\bm k}$, with $n_{\bm k}$ a scalar part and $S_{\bm k}$ a pseudospin part, which can be expressed as $S_{\bm k}=\frac{1}{2}{\bm S}_{\bm k}\cdot{\bm \sigma}$, where the vector ${\bm S}_{\bm k}$ is real and its $z$ component is zero in equilibrium. Since the current operator is proportional to ${\bm \sigma}$, we are only interested in $S_{\bm k}$. We decompose $\hat{J}(f_{\bm k})=\hat{J}(n_{\bm k})+\hat{J}(S_{\bm k})$ and $\mathcal{B}^{MF}_{\bm k}(f_{\bm k})=\mathcal{B}^{MF}_{\bm k}(n_{\bm k})+\mathcal{B}^{MF}_{\bm k}(S_{\bm k})$, and $S_{\bm k}$ satisfies
\begin{widetext}
\begin{equation}\label{Sint}
\arraycolsep 0.3 ex\begin{array}{rl}
\displaystyle \frac{dS_{\bm k}}{dt}+\frac{i}{\hbar}[H_{0\bm k},S_{\bm k}]+\hat{J}(S_{\bm k}) =-\frac{i}{\hbar}[H_{\bm k}^E,S_{\bm k}]+\frac{i}{\hbar}[\mathcal{B}^{MF}_{\bm k}(S_{\bm k}),S_{\bm k}]
\end{array}
\end{equation}
\end{widetext}
The interaction terms in Eq.\ (\ref{Sint}) can be
included iteratively, {\em i.e.} the solution is expanded in powers of $V$: $S_{\bm k}=\sum_{n}S_{E{\bm k}}^{ee,(n)}$ and
$\mathcal{B}_{k}^{MF}=\sum_{n \geqslant 0 }\mathcal{B}_{k}^{MF,(n)}$, with $\mathcal{B}^{MF,(0)}_{\bm k}=0$ and
$\mathcal{B}^{MF,(n)}_{\bm k}=\frac{1}{(2\pi)^2}\int dk'k'\int_{0}^{2\pi}d\gamma \, V_{{\bm k}{\bm k'}} \, S_{E{\bm k}'}^{ee,(n-1)}$.
Here $S_{E{\bm k}}^{ee,(0)} \equiv S_{E{\bm k}}$ is the solution in the absence of electron-electron interactions. Substituting the above expansion into the kinetic equation and keeping only terms linear in the electric field (note that $\mathcal{B}_{k}^{MF}$ is linear in ${\bm E}$), we obtain the equations below for each order $n\geqslant0$:
\begin{widetext}
\begin{equation}\label{Iteration}
\arraycolsep 0.3 ex\begin{array}{rl}
\displaystyle\frac{dS_{E{\bm k}}^{ee,(n)}}{dt}+\frac{i}{\hbar}\big[H_{0\bm k},S_{E{\bm k}}^{ee,(n)}\big]
+\hat{J}\big[S_{E{\bm k}}^{ee,(n)}\big]
\displaystyle=-\frac{i}{\hbar}\big[H_{\bm k}^{E},S_{E{\bm k}}^{ee,(n)}\big]+\frac{i}{\hbar}\big[\mathcal{B}^{MF,(n)}_{\bm k},S_{0{\bm k}} \big],
\end{array}
\end{equation}
\end{widetext}
where $S_{0{\bm k}}$ is the pseudospin-dependent part of the equilibrium density matrix, given below.
Following the method used in Ref.~\onlinecite{Culcer_TI_ee_PRB11}, we solve Eq.\ (\ref{Iteration}) for each $n$ by projecting onto directions parallel to (commuting with) and perpendicular to $H_{0{\bm k}}$, obtaining
\begin{equation}
\begin{array}{rl}
\displaystyle \frac{dS_{{\bm k}\parallel}}{dt}+P_{\parallel}\hat{J}(S_{\bm k}) = & \displaystyle D_{{\bm k}\parallel} \\ [1ex]
\displaystyle \frac{dS_{{\bm k}\perp}}{dt}+\frac{i}{\hbar}[H_{0{\bm k}},S_{{\bm k}\perp}]+P_{\perp}\hat{J}(S_{\bm k}) = & \displaystyle D_{{\bm k}\perp},
\end{array}
\end{equation}
where the parallel and perpendicular components
\begin{equation}
\begin{array}{rl}
\displaystyle S_{{\bm k}\parallel} = & \displaystyle (1/2)({\bm S}_{\bm k}\cdot\hat{\bm n}_{\bm k})({\bm \sigma}\cdot\hat{\bm n}_{\bm k})=
(1/2)s_{{\bm k}\parallel}\sigma_{{\bm k}\parallel} \\ [1ex]
\displaystyle S_{{\bm k}\perp} = & \displaystyle (1/2)({\bm S}_{\bm k}\cdot\hat{\bm m}_{\bm k})({\bm \sigma}\cdot\hat{\bm m}_{\bm k})=(1/2)s_{{\bm k}\perp}\sigma_{{\bm k}\perp}
\end{array}
\end{equation}
and the unit vector $\hat{\bm m}_{\bm k}=\hat{\bm z}\times\hat{\bm n}_{\bm k}$.
\section{Scattering term}
\label{sec:sct}
The scattering term does not mix the monopole ($n_{\bm k}$) and dipole ($S_{\bm k}$) components of the density matrix. Applying the decomposition ${\bm S}_{\bm k}=s_{{\bm k}\parallel}\hat{\bm n}_{\bm k}+s_{{\bm k}\perp}\hat{\bm m}_{\bm k}$ and ${\bm \sigma}=\sigma_{{\bm k}\parallel}\hat{\bm n}_{\bm k}+\sigma_{{\bm k}\perp}\hat{\bm m}_{\bm k}$, as well as $\hat{\bm n}_{\bm k'}=\cos(2\gamma)\hat{\bm n}_{\bm k}+\sin(2\gamma)\hat{\bm m}_{\bm k}$ and $\hat{\bm m}_{\bm k'}=-\sin(2\gamma)\hat{\bm n}_{\bm k}+\cos(2\gamma)\hat{\bm m}_{\bm k}$, to the expression for $\hat{J}(S_{\bm k})$, we obtain four projected terms as
\begin{equation}
\begin{array}{rl}
\displaystyle P_{\parallel}\hat{J}(S_{{\bm k}\parallel}) = & \displaystyle \frac{n_i \sigma_{{\bm k}\parallel}}{16\pi\hbar A}\int d{\theta}'|\bar{U}_{{\bm k}{\bm k'}}|^{2} (s_{{\bm k}\parallel}-s_{{\bm k'}\parallel}) (1 + \cos2\gamma) \\ [3ex]
\displaystyle P_{\perp}\hat{J}(S_{{\bm k}\parallel}) = &\displaystyle\frac{n_i \sigma_{{\bm k}\perp}}{16\pi\hbar A}\int d{\theta}'|\bar{U}_{{\bm k}{\bm k'}}|^{2} (s_{{\bm k}\parallel}-s_{{\bm k'}\parallel})\sin2\gamma
\\ [3ex]
\displaystyle P_{\parallel}\hat{J}(S_{{\bm k}\perp}) = & \displaystyle \frac{n_i \sigma_{{\bm k}\parallel}}{16\pi\hbar A}\int d{\theta}'|\bar{U}_{{\bm k}{\bm k'}}|^{2} (s_{{\bm k}\perp}+s_{{\bm k'}\perp})\sin2\gamma
\\ [3ex]
\displaystyle P_{\perp}\hat{J}(S_{{\bm k}\perp}) = & \displaystyle \frac{n_i \sigma_{{\bm k}\perp}}{16\pi\hbar A}\int d{\theta}'|\bar{U}_{{\bm k}{\bm k'}}|^{2}(s_{{\bm k}\perp}+s_{{\bm k'}\perp})(1 - \cos2\gamma)
\end{array}
\end{equation}
Using $q = 2k_F \sin\frac{\gamma}{2}$, we obtain a cumbersome expression for $|\bar{U}_{{\bm k}{\bm k'}}|$. We make the following Fourier expansions
\begin{equation}\label{Fourier}
\arraycolsep 0.3 ex
\begin{array}{rl}
\displaystyle |\bar{U}_{{\bm k}{\bm k'}}|^2(\gamma) = & \displaystyle \sum_n U_n e^{in\gamma} \\ [3ex]
\displaystyle (1 + \cos2\gamma) \, |\bar{U}_{{\bm k}{\bm k'}}|^2(\gamma) = & \displaystyle \sum_n W_n e^{in\gamma} \\ [3ex]
\displaystyle s_{{\bm k}\parallel} = & \displaystyle \sum_n s_{k\parallel n}e^{in\theta}.
\end{array}
\end{equation}
The parallel projection of the scattering term is then
\begin{equation}
P_{\parallel}\hat{J}(S_{{\bm k}\parallel}) = \frac{n_i}{8\hbar A}\sum_n (W_0-W_n)s_{k\parallel n}e^{in\theta}\sigma_{{\bm k}\parallel},
\end{equation}
where $W_{-n}=W_{n}$ since $|\bar{U}_{{\bm k}{\bm k'}}|^2(\gamma)$ and $1 + \cos2\gamma$ are even functions of $\gamma$.
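The step from the angular integral to the Fourier-coefficient form can be checked numerically. The sketch below uses a hypothetical even weight $F(\gamma)$ standing in for $(1+\cos2\gamma)|\bar{U}_{{\bm k}{\bm k}'}|^2$ and verifies that $\int_0^{2\pi} d\gamma\, F(\gamma)\,[\cos n\theta - \cos n(\theta+\gamma)] = 2\pi(F_0 - F_n)\cos n\theta$, with $F_n$ the Fourier coefficients defined as in Eq.\ (\ref{Fourier}):

```python
import math

def F(gam):
    # hypothetical even weight, standing in for (1 + cos 2*gamma)|U|^2
    return (1 + math.cos(2 * gam)) / (1.5 + math.cos(gam))**2

N = 4000
grid = [2 * math.pi * (i + 0.5) / N for i in range(N)]

def fourier(n):
    # n-th Fourier cosine coefficient F_n = (1/2pi) int F(g) cos(n g) dg
    return sum(F(g) * math.cos(n * g) for g in grid) / N

theta, n = 0.7, 1
lhs = sum(F(g) * (math.cos(n * theta) - math.cos(n * (theta + g)))
          for g in grid) * (2 * math.pi / N)
rhs = 2 * math.pi * (fourier(0) - fourier(n)) * math.cos(n * theta)
err = abs(lhs - rhs)
print(err)   # numerically zero: the sin(n*gamma) part drops out by parity
```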
\section{Transport in non-interacting BLG}
\label{sec:nonint}
We briefly review transport in the absence of interactions. Writing $S_{\bm k} = S_{0{\bm k}} + S_{E{\bm k}}$
and keeping terms to linear order in ${\bm E}$, we obtain
\begin{equation}
\frac{dS_{E{\bm k}}}{dt}+\frac{i}{\hbar}[H_{0\bm k},S_{E{\bm k}}]+\hat{J}(S_{E{\bm k}})=D_{E{\bm k}}
\end{equation}
where the electric-field driving term
\begin{equation}
\arraycolsep 0.3 ex
\begin{array}{rl}
\displaystyle D_{E{\bm k}} = & \displaystyle \frac{e{\bm E}}{\hbar}\cdot\pd{S_{0{\bm k}}}{\bm k}
= \frac{1}{2} \, d_{E{\bm k}\parallel} \sigma_{{\bm k}\parallel} + \frac{1}{2} \, d_{E{\bm k}\perp} \sigma_{{\bm k}\perp} \\ [3ex]
\displaystyle d_{E{\bm k}\parallel} = & \displaystyle \frac{e{\bm E}\cdot{\hat{\bm k}}}{\hbar}
\bigg(\frac{\partial f_{0+}}{\partial k}-\frac{\partial f_{0-}}{\partial k}\bigg) \\ [3ex]
\displaystyle d_{E{\bm k}\perp} = & \displaystyle \frac{e{\bm E}\cdot\hat{\bm \theta}}{\hbar k}(f_{0+}-f_{0-}),
\end{array}
\end{equation}
in which $f_{0\pm} \equiv f_0 (\varepsilon_{{\bm k}\pm})$, with $f_0$ the Fermi-Dirac distribution function, and $S_{0{\bm k}}=(1/2)(f_{0+}-f_{0-})\sigma_{{\bm k}\parallel}$. We assume the temperature to be absolute zero, thus
\begin{equation}
\arraycolsep 0.3ex
\begin{array}{rl}
\displaystyle d_{E{\bm k}\parallel} = & \displaystyle -\frac{e{\bm E}\cdot{\hat{\bm k}}}{\hbar}\delta(k-k_F) \\ [1ex]
\displaystyle S_{{\bm k}\parallel} = & \displaystyle -\frac{\tau e{\bm E}\cdot\hat{\bm k}}{4\hbar}\delta(k-k_F)\sigma_{{\bm k}\parallel},
\end{array}
\end{equation}
where the momentum relaxation time
\begin{equation}
\tau = \frac{8\hbar A}{n_i(W_0-W_1)}.
\end{equation}
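For orientation, $\tau$ can be evaluated numerically for the RPA-screened impurity potential. The sketch below assumes the illustrative values $A = 0.71$ eV nm$^2$, $\epsilon_r = 1$, $n_e \approx 10^{13}$ cm$^{-2}$, and an impurity density $n_i = 10^{11}$ cm$^{-2}$; none of these values is fixed by the text.

```python
import math

hbar = 6.582e-16     # eV s
A = 0.71             # eV nm^2 (assumed)
n_i = 1e11 * 1e-14   # impurity density: 1e11 cm^-2 = 1e-3 nm^-2 (assumed)
kF = 0.56            # nm^-1, for n_e ~ 1e13 cm^-2
q0 = 4.06            # nm^-1
e2_2eps0 = 2 * math.pi * 1.44   # e^2/(2 eps_0), eV nm

def g(q):
    # BLG static screening function (same form as in Sec. II)
    return (math.sqrt(4 * kF**4 + q**4) / (2 * kF**2)
            - math.log((kF**2 + math.sqrt(kF**4 + q**4 / 4)) / (2 * kF**2)))

def U2(gam):
    # |U_bar|^2 on the Fermi surface, q = 2 kF sin(gamma/2); units eV^2 nm^4
    q = 2 * kF * abs(math.sin(gam / 2))
    return (e2_2eps0 / (q + q0 * g(q)))**2

N = 2000
grid = [2 * math.pi * (i + 0.5) / N for i in range(N)]
W0 = sum((1 + math.cos(2 * g_)) * U2(g_) for g_ in grid) / N
W1 = sum((1 + math.cos(2 * g_)) * U2(g_) * math.cos(g_) for g_ in grid) / N
tau = 8 * hbar * A / (n_i * (W0 - W1))   # momentum relaxation time, s
print(tau)   # picosecond range for these parameters
```

For these parameters $\varepsilon_F\tau/\hbar = A k_F^2 \tau/\hbar \gg 1$, consistent with the metallic regime assumed throughout.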
The velocity operator is given by
\begin{equation}
{\bm v}_{\bm k} = \frac{1}{\hbar} \, \pd{H_{0{\bm k}}}{\bm k}.
\end{equation}
The expectation value of the current density operator is $\bkt{{\bm j}} = \displaystyle -eg_vg_s\int\frac{d^2k}{(2\pi)^2}\mathrm{Tr}[{\bm v}_{\bm k}S_{\bm k}]$, where Tr acts in pseudospin space, and $g_v=g_s=2$ are the valley and spin degeneracies, respectively. Substituting $S_{\bm k}=(1/2)(s_{{\bm k}\parallel}\sigma_{{\bm k}\parallel}+s_{{\bm k}\perp}\sigma_{{\bm k}\perp})$, taking ${\bm E}\,\parallel\,{\bm x}$, the velocity operator is $v_x=v_{x{\bm k}\parallel}\sigma_{{\bm k}\parallel}+v_{x{\bm k}\perp}\sigma_{{\bm k}\perp}$, where $v_{x{\bm k}\parallel}=2Ak\cos\theta/\hbar$ and $v_{x{\bm k}\perp}=-2Ak\sin\theta/\hbar$, and assuming that $\varepsilon_F \tau/\hbar \gg 1$, it follows that the conductivity is
\begin{equation}
\sigma_{xx}^{\mathrm{bare}} = \frac{Ae^2k_F^2\tau}{\pi\hbar^2}.
\end{equation}
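The magnitude of $\sigma_{xx}^{\mathrm{bare}}$ is conveniently expressed in units of $e^2/h$; the estimate below assumes $n_e \sim 10^{13}$ cm$^{-2}$ and an illustrative relaxation time of a few picoseconds (not fixed by the text).

```python
# Bare conductivity in units of e^2/h (illustrative parameter values assumed)
hbar = 6.582e-16      # eV s
A = 0.71              # eV nm^2 (assumed)
k_F = 0.56            # nm^-1 (n_e ~ 1e13 cm^-2)
tau = 2.5e-12         # s, illustrative relaxation time (assumed)

# sigma_bare = A e^2 k_F^2 tau / (pi hbar^2); dividing by e^2/h = e^2/(2 pi hbar)
# gives the dimensionless ratio sigma_bare / (e^2/h) = 2 A k_F^2 tau / hbar
sigma_e2h = 2 * A * k_F**2 * tau / hbar
print(sigma_e2h)      # >> 1: a highly conducting metallic sample
```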
The Zitterbewegung (interband coherence) contribution to the conductivity
plays an essential role in the minimum conductivity that survives at the charge neutrality point,\cite{Culcer_BLG_transp_PRB09, Culcer_MLG_transp_PRB08, Trushin_BLG_MinCond_PRB10, David_BLG_BS_MinCond_PRB12}
as in the MLG \cite{Culcer_MLG_transp_PRB08, Katsnelson_EPJB06} and topological insulator (TI) cases,\cite{Culcer_TI_transp_PRB10, Culcer_TI_PhysE12} but it is next-to-leading order in the
small parameter $\hbar/\varepsilon_F \tau$ and is not considered here.
\section{Interaction renormalization}
\label{sec:int}
Interactions in equilibrium BLG renormalize the constant $A$ (that is, they renormalize the Fermi velocity).\cite{BorghiPolini_MLGBLG_Fermi_enhancement_SSC09} This does not qualitatively change our arguments and derivation below, and for simplicity we assume henceforth that $A$ denotes the renormalized value. In this section we determine the mean-field interaction correction $\mathcal{B}_{\bm k}^{MF,(1)}$. From Eq.\ (\ref{Iteration}) it is evident that only the part of $\mathcal{B}_{\bm k}^{MF,(1)}$ proportional to $\sigma_{{\bm k}\perp}$ contributes to the dynamics. We abbreviate $l=k/k_F$ and write
\begin{equation}\label{firstB}
\mathcal{B}_{\bm k}^{MF,(1)}=-\frac{\tau e^3E_x}{16\pi\epsilon_0\epsilon_r\hbar}\, I_{ee}^{(1)}(l,n_e)\sin\theta\sigma_{{\bm k}\perp},
\end{equation}
where the dimensionless quantities are
\begin{widetext}
\begin{equation}\label{Iee}
\arraycolsep 0.3 ex
\begin{array}{rl}
\displaystyle I_{ee}^{(1)}(l,n_e) = &\displaystyle -\int_{0}^{2\pi}\frac{d\gamma}{2\pi}\, \frac{\sqrt{\pi n_e}\sin\gamma\sin(2\gamma)}
{\sqrt{\pi n_e(l^2+l'^2-2ll'\cos\gamma)}+q_0g(l,l',\gamma)} \\ [3ex]
\displaystyle g(l,l',\gamma) = &\displaystyle \frac{1}{2}\sqrt{4+(l^2+l'^2-2ll'\cos\gamma)^2}
-\ln\bigg[\frac{1}{2}+\frac{1}{4}\sqrt{4+(l^2+l'^2-2ll'\cos\gamma)^2}\bigg].
\end{array}
\end{equation}
\end{widetext}
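Eq.\ (\ref{Iee}) can be evaluated by simple quadrature. The sketch below sets $l'=1$ (the outer wave vector sits on the Fermi surface, as in the general $n$th-order expression below, whose last factor carries $g(l_n,1,\gamma_n)$) and assumes $q_0 = 4.06$ nm$^{-1}$; it illustrates both the smallness of $I_{ee}^{(1)}$ and its vanishing as $n_e \to 0$.

```python
import math

q0 = 4.06   # nm^-1 (assumed, from q0 = e^2/(2 pi eps_0 eps_r A))

def gbar(l, lp, gam):
    # screening function g(l, l', gamma) from Eq. (Iee)
    x = l * l + lp * lp - 2 * l * lp * math.cos(gam)
    s = math.sqrt(4 + x * x)
    return 0.5 * s - math.log(0.5 + 0.25 * s)

def I1(l, n_e, lp=1.0, N=2000):
    # midpoint rule for the angular integral in Eq. (Iee); n_e in nm^-2
    kF = math.sqrt(math.pi * n_e)
    acc = 0.0
    for i in range(N):
        gam = 2 * math.pi * (i + 0.5) / N
        dist = math.sqrt(l * l + lp * lp - 2 * l * lp * math.cos(gam))
        acc += math.sin(gam) * math.sin(2 * gam) / (kF * dist + q0 * gbar(l, lp, gam))
    return -kF * acc / N

big = I1(0.5, 0.1)     # n_e = 0.1 nm^-2, i.e. ~1e13 cm^-2
tiny = I1(0.5, 1e-6)   # n_e -> 0
print(big, tiny)       # both small in magnitude, and |tiny| << |big|
```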
The driving term arising from $\mathcal{B}_{\bm k}^{MF,(1)}$ contributes only to $S_{{\bm k}\perp}$, and we easily find
that
\begin{equation}
S_{E{\bm k}\perp}^{ee,(1)}=\frac{\tau eE_xq_0}{16\hbar k^2}I_{ee}^{(1)}(l,n_e)f_{0}
\sin\theta\sigma_{{\bm k}\perp}.
\end{equation}
An additional correction arises from the equation
\begin{equation}
P_{\parallel}\hat{J}[S_{E{\bm k}\parallel}^{ee,(1)}]=-P_{\parallel}\hat{J}[S_{E{\bm k}\perp}^{ee,(1)}].
\end{equation}
Taking this into account, the first-order correction to the diagonal conductivity is
\begin{equation}
\sigma_{xx}^{(1)}=\frac{q_0[1+\beta(n_e)]}{4\sqrt{\pi n_e}}I_{ee}^{(1)}(n_e)\sigma_{xx}^{\mathrm{bare}}
\end{equation}
with $\beta(n_e)=(U_1-U_3)/(2U_0+2U_2-3U_1-U_3)$, in which $U_n$ is the $n$-th Fourier coefficient of $|U_{{\bm k}{\bm k'}}|^2$ as defined in Eq.\ (\ref{Fourier}), and $I^{(1)}_{ee}(n_e)=\int_{0}^{1}dl I^{(1)}_{ee}(l,n_e)$. Notice that
$\beta$ vanishes for momentum-independent (short-range) interactions.
The angular structure of $I_{ee}^{(1)}(l,n_e)$ in Eq.\ (\ref{Iee}) can be understood by noting that the electric field driving term is responsible for the factor of $\sin\gamma$, while the factor of $\sin2\gamma$ arises from the projection of the pseudospin component parallel to $\hat{\bm k}$ onto the pseudospin component parallel to $\hat{\bm k}'$. For the massless Dirac cones of TI and MLG, where the (pseudo)spin winds around the Fermi surface only once, these terms (i.e. the electric-field driving term and the pseudospin projection) have the same rotational symmetry and reinforce each other. In BLG, the fact that the pseudospin winds twice around the Fermi surface is crucial, and makes the angular structure of this term entirely different from MLG and TI. As $n_e \rightarrow 0$, $I_{ee}^{(1)}(l,n_e)$ averages to zero over the Fermi surface. Its effect at small $n_e$ is therefore correspondingly small. In this context, it must also be noted that $q_0$ is set by $t_\perp$, the (sizable) interlayer hopping parameter, and that $q_0 \gg k_F$ even at $n = 10^{13}$cm$^{-2}$, which in transport ordinarily constitutes a large carrier density ($k_F \approx 5.5 \times 10^{8}$m$^{-1}$).
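The orthogonality at the heart of this argument is elementary: ignoring the smooth angular weight from the screened interaction, the product of the driving harmonic and the pseudospin-projection harmonic averages to zero for winding number 2 (BLG), but not for winding number 1 (MLG/TI):

```python
import math

N = 3600
grid = [2 * math.pi * (i + 0.5) / N for i in range(N)]

# BLG: driving term ~ sin(gamma), pseudospin projection ~ sin(2*gamma)
avg_blg = sum(math.sin(g) * math.sin(2 * g) for g in grid) / N
# MLG/TI: both factors ~ sin(gamma)
avg_mlg = sum(math.sin(g) ** 2 for g in grid) / N

print(round(avg_blg, 9), round(avg_mlg, 9))  # 0.0 and 0.5
```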
We retain only terms of linear order in the external electric field. Under these conditions, the following two equations are sufficient to obtain all higher order terms $(n>1)$,
\begin{equation}
\arraycolsep 0.3 ex\begin{array}{rl}
\displaystyle\frac{dS_{E{\bm k}\perp}^{ee,(n)}}{dt}+\frac{i}{\hbar}[H_{\bm k},S_{E{\bm k}\perp}^{ee,(n)}]
=&\displaystyle\frac{i}{\hbar}[\mathcal{B}_{\bm k}^{MF,(n)},S_{0{\bm k}}]\\[3ex]
\displaystyle P_{\parallel}\hat{J}[S_{E{\bm k}\parallel}^{ee,(n)}]=&\displaystyle -P_{\parallel}\hat{J}[S_{E{\bm k}\perp}^{ee,(n)}].
\end{array}
\end{equation}
In the higher orders ($n>1$), $S_{E{\bm k}}^{ee,(n-1)}$ is fed into $\mathcal{B}^{MF,(n)}_{\bm k}$, which then determines $S_{E{\bm k}}^{ee,(n)}$, completing the self-consistent loop. Repeating the iteration, we obtain a general formula for $I_{ee}^{(n)}(n_e)$ for $n>1$, i.e.
\begin{widetext}
\begin{equation}
\arraycolsep 0.3 ex\begin{array}{rl}
\displaystyle I_{ee}^{(n)}(n_e)=&\displaystyle (-\sqrt{\pi n_e})^n\bigg[\prod_{i=1}^{n-1}\int_{0}^{1}
\frac{dl_i}{l_i}\int_{0}^{2\pi}\frac{d\gamma_i}{2\pi}\bigg]\int_{0}^{1}dl_n\int_{0}^{2\pi}\frac{d\gamma_n}{2\pi}\\[3ex]
&\displaystyle\bigg[\prod_{i=1}^{n-1}\frac{[\cos\gamma_i\cos(2\gamma_i)+\beta(n_e) \sin\gamma_i\sin(2\gamma_i)]}{\sqrt{\pi n_e(l_i^2+l_{i+1}^2-2l_il_{i+1}\cos\gamma_i)}+q_0g(l_i,l_{i+1},\gamma_i)}\bigg]\\[3ex]
&\displaystyle\frac{\sin\gamma_n\sin(2\gamma_n)}{\sqrt{\pi n_e(l_n^2+1-2l_n\cos\gamma_n)}+q_0g(l_n,1,\gamma_n)},
\end{array}
\end{equation}
\end{widetext}
and the $n$th-order interaction correction to the conductivity is $\sigma_{xx}^{(n)}=[1+\beta(n_e)](q_0/\sqrt{16\pi n_e})^nI_{ee}^{(n)}(n_e)\sigma_{xx}^{\mathrm{bare}}$. Finally, the exact conductivity is
\begin{widetext}
\begin{equation}
\sigma_{xx} = \sigma_{xx}^{\mathrm{bare}} \, \left\{1+[1+\beta(n_e)] \sum_{n>0}\left(\frac{q_0}{\sqrt{16\pi n_e}}\right)^nI^{(n)}_{ee}(n_e)\right\}.
\end{equation}
\end{widetext}
We refer to $\sigma_{xx}$ as the full conductivity, to distinguish it from the bare conductivity $\sigma_{xx}^{\mathrm{bare}}$. The appearance of $\beta(n_e) \ll 1$ is related to the factor of $2\gamma$ appearing in the mean-field interaction term.
\section{Discussion}
\label{sec:disc}
In equilibrium electron-electron interactions renormalize the band parameter $A$.\cite{BorghiPolini_MLGBLG_Fermi_enhancement_SSC09}
Here we have obtained an exact result, within a self-consistent Hartree-Fock approximation, for the
influence of interactions on the conductivity of doped metallic BLG.
Below we comment on the sign of the interaction renormalization, its size, and its density dependence.
\begin{figure}[tbp]
\bigskip
\includegraphics[width=\columnwidth]{sigmavsn.pdf}
\caption{\label{sigmavsn} Fractional change in the conductivity $\sigma_{xx}/\sigma_{xx}^{bare}$ (black), and the parameter $\beta$ (blue), as a function of the carrier density $n_e$.}
\end{figure}
We recall that a charge current in BLG
necessarily gives rise to a steady-state pseudospin polarization.
Consequently, $\sigma_{xx}$ may be understood by considering pseudospin dynamics on the Fermi surface.
The renormalization reflects the interplay of pseudospin-momentum locking embodied in $H_{0{\bm k}}$ and the mean pseudospin-field $\mathcal{B}^{MF}_{\bm k}$ arising from electron-electron interactions.
The pseudospin of one carrier on the Fermi surface at ${\bm k}$ is subject to two competing interactions. The effective field $Ak^2\hat{\bm n}_{\bm k}$ tends to align the pseudospin with its band direction $\hat{\bm n}_{\bm k}$.
The mean-field $\mathcal{B}^{MF}_{\bm k}$ tends to align a pseudospin at ${\bm k}$ against the total existing pseudospin polarization. The net result is a small steady-state rotation of the pseudospin at each ${\bm k}$ away from the direction of the effective field $Ak^2\hat{\bm n}_{\bm k}$. Thus the overall effect of interactions is to align individual pseudospins in the direction opposite to that of the existing pseudospin polarization.
Since the renormalization is negative, interactions cannot cause $\sigma_{xx}$ to diverge, and
there is no possibility of a Fermi-surface instability.
The conductivity is therefore reduced by interactions, which is reminiscent of the result of Ref.~\onlinecite{Barlas_graphene_chirality_correlation_PRL07}.
One may gain insight by further analyzing the functional form of the ratio $\sigma_{xx}/\sigma_{xx}^{bare}$, concentrating on its density dependence. Taking $A=0.71\,\mathrm{eV\cdot nm^2}$, as well as $\epsilon_r=1$ for simplicity, the wave vector $q_0=e^2/(2\pi\epsilon_0\epsilon_rA)=4.0\,\mathrm{nm^{-1}}$. As discussed before, in all realistic transport regimes $k_F=\sqrt{\pi n_e}\ll q_0$. In this low-doping regime, $\beta(n_e)$ becomes independent of $n_e$ for all $U_n\propto n_e^{1/2}$, and the conductivity simplifies to
\begin{widetext}
\begin{equation}
\arraycolsep 0.3 ex\begin{array}{rl}
\displaystyle{\frac{\sigma}{\sigma^{\mathrm{bare}}}}
=&\displaystyle 1+(1+\beta)\sum^{\infty}_{n=1}
\left(-\frac{1}{4}\right)^n\left[
\prod_{i=1}^{n-1}\int_{0}^{1}\frac{dl_i}{l_i}\int_{0}^{2\pi}\frac{d\gamma_i}{2\pi}\right]
\int_{0}^{1}dl_n\int_{0}^{2\pi}\frac{d\gamma_n}{2\pi}\\[3ex]
&\displaystyle\left[\prod_{i=1}^{n-1}\frac{\cos\gamma_i\cos(2\gamma_i)
+\beta\sin\gamma_i\sin(2\gamma_i)}{g(l_i,l_{i+1},\gamma_i)}\right]
\frac{\sin\gamma_n\sin(2\gamma_n)}{g(l_n,1,\gamma_n)}.
\end{array}
\end{equation}
\end{widetext}
In this limit the full conductivity has almost exactly the same density dependence as the bare conductivity. The behavior at densities commonly encountered in transport is illustrated in Fig.~\ref{sigmavsn}. The small size of the renormalization makes its detection challenging. At small $n_e$, the interaction correction tends to zero as a result of the vanishing of the angular integral appearing in $I_{ee}^{(1)}(l,n_e)$ in Eq.\ (\ref{Iee}). Steady-state expectation values are determined by the electric-field driving term, which contains a factor of ${\bm E}\cdot\hat{\bm k}$. Unlike in MLG/TI, the pseudospin is not a linear function of $\hat{\bm k}$ (or $\hat{\bm \theta}$), but is characterized by a winding number of $2$. As a result, in BLG the interaction renormalization of the conductivity/pseudospin polarization is negligible when $k_F \ll q_0$. At large $n_e$, the behavior of $\sigma_{xx}$ is summarized by
\begin{widetext}
\begin{equation}\label{highdoping}
\arraycolsep 0.3 ex\begin{array}{rl}
\displaystyle\frac{\sigma}{\sigma^{\mathrm{bare}}}
=&\displaystyle 1-\frac{(1+\beta)q_0}{\sqrt{16\pi n_e}}
\int_{0}^{1}dl\int_{0}^{2\pi}\frac{d\gamma}{2\pi}\frac{\sin\gamma\sin(2\gamma)}{\sqrt{l^2+1-2l\cos\gamma}}.
\end{array}
\end{equation}
\end{widetext}
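The dimensionless double integral in Eq.\ (\ref{highdoping}) can be evaluated by midpoint quadrature (a sketch; its positive value confirms that interactions reduce $\sigma_{xx}$ in this regime):

```python
import math

# midpoint quadrature for int_0^1 dl int_0^{2pi} dgamma/(2 pi)
#   sin(gamma) sin(2 gamma) / sqrt(l^2 + 1 - 2 l cos(gamma))
Nl, Ng = 400, 800
total = 0.0
for i in range(Nl):
    l = (i + 0.5) / Nl
    for j in range(Ng):
        gam = 2 * math.pi * (j + 0.5) / Ng
        total += (math.sin(gam) * math.sin(2 * gam)
                  / math.sqrt(l * l + 1 - 2 * l * math.cos(gam)))
c = total / (Nl * Ng)   # weights dl = 1/Nl and dgamma/(2 pi) = 1/Ng
print(c)   # positive: the correction enters Eq. (highdoping) with a minus sign
```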
In this regime the interaction correction to the ratio $\sigma_{xx}/\sigma_{xx}^{\mathrm{bare}}$ scales as $1/\sqrt{n_e}$ and decreases with increasing carrier density, but for this trend to be noticeable one requires $\sqrt{\pi n_e} \gg q_0$, which can never be reached in practice.
It is enlightening to compare the interaction renormalization of the conductivity in BLG with the case of TI and MLG. In TI, as in MLG, the interaction renormalization of the conductivity is density independent and again accounts for only a fraction of the total conductivity. At first sight, it seems striking that the same observation holds in BLG. Retracing the mathematical steps, the first order correction to the density matrix in TI is \cite{Culcer_TI_ee_PRB11}
\begin{equation}
S_{E{\bm k}\perp}^{ee,(1)}=\frac{eE_xr_s\tau I_{ee}^{(1)}(l,r_s)}{16\hbar k}f_{0}\sin\theta\sigma_{{\bm k}\perp},
\end{equation}
hence the $1/k$ in TI corresponds to $k_F/k^2$ in BLG, which results in approximately the same density dependence. The reason for this correspondence is that the TI Hamiltonian is $\propto k$ while the BLG Hamiltonian is $\propto k^2$, so the steady-state (pseudo)spin densities differ by a factor of $k$. At the same time, screening also differs by a factor of $k$ between the two, and the additional density dependences arising from these two factors effectively cancel out. Although its origin is different from the TI case, the correction in BLG, while more complex, remains weak. At very low energies, trigonal-warping terms must be added to the BLG band structure, leading to the formation of Dirac cones. In this limit we would expect the interaction correction to the conductivity discussed here to cross over to a form similar to that appropriate for MLG, TIs and other Dirac-cone systems, provided that this regime is not preempted by interaction-driven phase transitions to gapped states. \cite{Zhang_BLG_SpontSymBrk_PRB10, Zhang_BLG_Ordered_PRB12}
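The contrast between the two winding numbers can be checked with a minimal numerical sketch (not from the paper): for a two-band Hamiltonian with off-diagonal element $(k_x - i k_y)^J$, the in-plane pseudospin of an eigenstate winds $J$ times as ${\bm k}$ circles the origin ($J=1$ for MLG/TI, $J=2$ for BLG).

```python
import numpy as np

def pseudospin_winding(J, n=400):
    """Winding of <sigma_x> + i <sigma_y> for the upper band of
    H = [[0, (kx - i ky)^J], [(kx + i ky)^J, 0]] on the unit circle |k| = 1."""
    phases = []
    for theta in np.linspace(0.0, 2.0 * np.pi, n):
        f = np.exp(-1j * J * theta)                  # (kx - i ky)^J at |k| = 1
        H = np.array([[0.0, f], [np.conj(f), 0.0]])
        _, v = np.linalg.eigh(H)                     # eigenvalues in ascending order
        a, b = v[:, 1]                               # upper-band eigenvector
        phases.append(np.angle(np.conj(a) * b))      # gauge-invariant pseudospin phase
    unwrapped = np.unwrap(np.array(phases))
    return round((unwrapped[-1] - unwrapped[0]) / (2.0 * np.pi))

print(pseudospin_winding(1), pseudospin_winding(2))  # 1 2
```

The quantity $\bar a b$ is invariant under the arbitrary phase of the numerical eigenvector, so the winding count is robust.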
\section{Summary}
\label{sec:sum}
We have calculated the effect of non-equilibrium interaction self-energies on the conductivity of metallic bilayer graphene. Although these effects can be large in some systems, in BLG they give rise to a negative renormalization of the conductivity which is small and has a weak density dependence. This property follows from
the large interlayer tunneling parameter in BLG, which leads to a $\pi$-band pseudospin
with a momentum-space winding number of $2$ that is incommensurate with the
velocity winding number of $1$. The corresponding effects could be larger
when a gap is opened using a bias voltage or when a magnetic field is present.
This work is supported by the National Natural Science Foundation of China under grant number 91021019.
AHM was supported by DOE grant DE-FG03-02ER45958 and by Welch Foundation grant No.~TBF1473.
We gratefully acknowledge discussions with S.~Das Sarma.
\section{Introduction}
In 1972 it was suggested by Wolff \citep{1972ApJ...176..833W} that solar flares could stimulate free oscillations in the Sun. The general idea by Wolff was that the flares would cause a thermal expansion that would act as a mechanical impulse by causing a wave of compression to move subsonically into the solar interior. Later the detection of global low-degree p-mode oscillations was confirmed in disk-integrated sunlight \citep[by e.g.][]{1976Natur.259...92B, 1981Natur.293..443C} and soon it was generally accepted that the global p-mode oscillations were driven by stochastic excitation from granulation \citep{1988ApJ...326..462G} and not by solar flares.
Although the p~modes seemed to be excited by the near-surface convection zone, a number of studies have investigated whether part of the excitation of the low-degree p~modes could be caused by e.g. flares. These studies have come out with contradictory results. \citet{1999MNRAS.303L..63G} found a high correlation between temporally varying p-mode power measured at low degree in GONG data and the coronal mass ejection event number, but due to the way they normalized the correlation coefficient, and because of the selective choice of events, it is not possible to make a quantitative evaluation of the correlation coefficient obtained by \citet{1999MNRAS.303L..63G}, nor is it possible to compare their value of the correlation coefficient with values obtained in other data sets. \citet{2006SoPh..238..219A}, on the other hand, found no correlation between a longer disk-integrated GONG data set and flare and coronal mass ejection indices. In this study the correlation coefficients were properly normalized and there were no obvious selection effects. Analysis of BiSON data has also revealed that though the strength of the p~modes follows the distribution expected from stochastic excitation by near-surface convection, there is evidence of a few more very large events than predicted by the theory; these events, however, show only a poor correlation with flares \citep{1995soho....2..335C}. \citet{1998A&A...339..261F} and \citet{1998A&A...330..341F} made a statistical analysis similar to the one by \citet{1995soho....2..335C} and found a mean correlation between p~modes of degree 0 and 1 of 0.6\% in GOLF data obtained from 1996-97 and 10.7 $\pm$ 5.9\% in IPHIR data obtained in 1988, which could suggest a relation between the p~modes and transient events; but no matches between the events in the p~modes and in the flares or coronal mass ejections were made.
\citet{1992ApJ...394L..65B} analyzed the correlation between acoustic energy and activity, not in time but in space, and found that while the energy of the p~modes with frequencies between 2.5 and 4.0 mHz is suppressed in active regions, the energy in the 5.5 to 7.5 mHz frequency range is enhanced around the active regions.
The discovery by \citet{karoff2008} of a high correlation between the energy in the high-frequency part of the acoustic spectrum and the solar X-ray flux was therefore the first evidence that supported Wolff's idea. The reason why \citet{karoff2008} were able to see a correlation where others had failed was that \citet{karoff2008} did not look at the frequency range around 3 mHz where most of the p-mode energy resides. Instead they looked at frequencies higher than the atmospheric acoustic cut-off frequency (5.3 mHz). Why the energy in the acoustic spectrum is more correlated with the solar X-ray flux above the cut-off frequency than below it is one of the questions to which we do not have a definitive answer. We will come back to this question, but let us start by describing what we know about the global waves in the Sun with frequencies higher than the atmospheric acoustic cut-off frequency.
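As an aside, the quoted cut-off value can be recovered from the standard isothermal-atmosphere estimate $\nu_{\rm ac} = c/(4\pi H)$; the sketch below uses rough round numbers for the photospheric sound speed and pressure scale height (illustrative values, not precise solar data).

```python
import math

# Rough photospheric values (illustrative round numbers):
c = 7.0e3   # sound speed, m/s
H = 1.1e5   # pressure scale height, m

nu_ac_mHz = c / (4.0 * math.pi * H) * 1.0e3   # isothermal cut-off, in mHz
print(f"{nu_ac_mHz:.1f} mHz")                 # ~5.1 mHz, close to the quoted 5.3 mHz
```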
\section{Why high-frequency waves?}
High-frequency waves were first discovered in high-degree observations of the Sun from the Big Bear Solar Observatory \citep{1988ApJ...334..510L} and later in GOLF low-degree disk-integrated observations \citep{1998ApJ...504L..51G}. Recently they have also been seen in the BiSON disk-integrated radial-velocity data \citep{2003ESASP.517..247C} and the VIRGO intensity data \citep{2005ApJ...623.1215J}.
Two models have been proposed in order to account for the low-degree high-frequency waves with frequencies higher than the atmospheric acoustic cut-off frequency. The first model, originally proposed by \citet{1991ApJ...375L..35K}, explains the high-frequency waves as an interference phenomenon between ingoing and outgoing waves from a localized source just beneath the photosphere. The other model, originally proposed by \citet{1990ApJ...362..256B}, suggests that the high-frequency waves are (partly) reflected by the sudden change in the temperature at the transition region between the chromosphere and the corona, and not at the photosphere, which is the case for the ordinary p~modes. The latter model is supported by observations of partial reflection of waves at the transition region by the use of time-distance helioseismology \citep{1997ApJ...485L..49J}. In either case the amount of energy that is stored in the high-frequency waves is extremely low compared to the amount of energy stored in the p-mode oscillations with frequencies below the atmospheric acoustic cut-off frequency.
So with this in mind we can think of three suggestions as to why the energy in the acoustic spectrum is more correlated with the solar X-ray flux above the cut-off frequency than below it:
First, as noted by \citet{karoff2008}, the observed energy per mode at high frequency is significantly lower than the energy per mode observed at the peak (3 mHz), and according to \citet{2001ApJ...546..585S} one finds the p-mode energy to decrease inversely as the fourth power of the frequency (above 3 mHz), simply as a consequence of the granulation power decreasing as $\nu^{-4}$. Therefore a single event (such as a flare) that excites a mode will have a larger relative effect at high frequency because the other excitation sources are much smaller.
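In numbers, the scaling invoked here is easy to make explicit (a trivial sketch, not from the paper): with granulation power $\propto \nu^{-4}$, the stochastic background at 6 mHz is weaker than at 3 mHz by a factor of $2^4 = 16$, so a flare-driven kick of fixed energy stands out roughly 16 times more strongly there.

```python
nu_low, nu_high = 3.0, 6.0             # mHz
suppression = (nu_high / nu_low) ** 4  # factor by which the nu^-4 background is weaker at nu_high
print(suppression)                     # 16.0
```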
Secondly, the intensity signal from VIRGO is not only sensitive to oscillations in the photosphere, but also to oscillations in the chromosphere. So what we are seeing might be oscillations in the chromosphere rather than oscillations below the photosphere. This idea is supported by the fact that it is not clear how much of the energy correlated with the solar X-rays goes into the background at high frequency and how much goes into the distinct modes. On the other hand, chromospheric oscillations are believed to manifest themselves at high frequency as a few modes with large mode line-widths \citep{1993ASPC...42..111H, 1995ASPC...76..303D}, though there are not many observations of disk-integrated chromospheric oscillations. A few chromospheric modes with large line-widths could be what is seen in the analysis by \citet{karoff2008}, and it indeed agrees nicely with the results from MDI and GONG by Kumar (these proceedings p. ???). The idea of chromospheric oscillations is further supported by the fact that flare models seem to suggest that most of the flare energy is released in the upper part of the chromosphere and not close to the photosphere \citep{1972SoPh...24..414H}.
And thirdly, if it is assumed that the high-frequency waves are partly reflected by the sudden change in the temperature at the transition region between the chromosphere and the corona, as suggested by \citet{1990ApJ...362..256B}, then the top of the chromosphere would be the place where the high-frequency waves would be most sensitive to excitation while p-mode oscillations with frequencies below the atmospheric acoustic cut-off frequency would not be influenced as strongly by flare energy release at the top of the chromosphere. Also, it is easy to imagine that a flare could change the temperature profile over the transition region for a while, which would have a large influence on the reflection and thus the amplitudes of the high-frequency waves in the \citet{1990ApJ...362..256B} model.
In general the two last suggestions have the advantage that the amount of energy that needs to be supplied by the flares is much smaller than if the nature of the high-frequency waves had been the same as that of the ordinary p~modes. This is to some extent also true for the first suggestion, as the amplitudes of the high-frequency waves are much smaller than those of the ordinary p~modes.
We are of course investigating how to discriminate between the three possible solutions to the question above. One possibility is to use observations of high-frequency waves obtained at different wavelengths in order to analyze phase and amplitude differences of high-frequency waves at different heights in the solar atmosphere. We might then be able to use these phase and amplitude differences to discriminate the different models from one another. The observations that could be used for this are e.g. the Magneto-Optical Filters at Two Heights (MOTH) observations from the South Pole \citep{2004SoPh..220..317F}, observations with the Global Oscillations at Low Frequency New Generation (GOLF-NG) instrument at the Observatorio del Teide (Salabert, p. ??? in these proceedings) and observations of the blue sky by the Stellar Observations Network Group (SONG) instruments (Christensen-Dalsgaard, p. ??? in these proceedings). SONG has the advantage that it will make high-cadence high-resolution spectra available in a large frequency range, which will make it possible to study a large number of absorption lines covering a large range of heights in the solar atmosphere.
\section{Flares on the far-side of the Sun}
As solar flares impact the Earth's environment and can be highly damaging for orbiting spacecraft \citep{2006Natur.441..402C}, huge efforts are devoted to forecasting their appearance -- the so-called space weather discipline. The major way to do this is to predict the activity on the far side of the Sun, which allows a warning of up to half the solar rotation period. This is mainly done with two methods: helioseismic holography and monitoring of sky-reflected Lyman $\alpha$ radiation.
The concept of helioseismic holography was first proven by \citet{2000Sci...287.1799L}. In summary, the method treats the Sun as a gigantic lens that allows us to see through the Sun. This can be accomplished because helioseismic holography does not study light, but sound waves, to which the Sun is transparent. In this way helioseismic holography studies the acoustic waves in a localized region on the near side of the Sun (the pupil) in order to image the acoustic waves in localized regions on the far side (the focal point). The active regions on the solar surface (e.g. sunspots) change the nature of the acoustic waves that travel through them. Therefore active regions can be seen on the far side of the Sun with helioseismic holography.
Active regions on the near side can clearly be seen in Lyman $\alpha$ observations of the Sun. This is due to the fact that active regions are much brighter in Lyman $\alpha$ radiation than the quiet Sun. But the Lyman $\alpha$ from the Sun can not only be seen by looking at the Sun; it can also be seen by looking at secondary sources in the interplanetary medium, since the interplanetary medium contains H atoms that re-emit the Lyman $\alpha$ radiation from the Sun. As the interplanetary medium sees not only the near side of the Sun but also the far side, Lyman $\alpha$ radiation can be used to monitor active regions on the far side of the Sun. This concept was first proven by \citet{2000GeoRL..27.1331B}, who used Lyman $\alpha$ observations from the SWAN instrument on SOHO to see active regions on the far side of the Sun.
There are two disadvantages of both methods. The first one is that the resolution of the active regions on the far side of the Sun is not really good -- something we cannot do anything about. The other one is that it is not really active regions that we are interested in when predicting the space weather. It is rather the solar flares, as flares (and coronal mass ejections) impact the Earth's environment -- and this is where flare-driven global oscillations could help. The idea is that if flares drive the high-frequency waves they will be driven equally well on the far side as on the near side of the Sun, and as the waves that we have studied are low-degree oscillations we would see the signal on the near side in both cases, since these waves travel almost directly through the Sun. The ability to differentiate between far-side active regions and flares is important, as not all active regions host large flares and can thus be damaging for the Earth's environment.
The concept of using the low-degree waves to predict flares on the far side of the Sun still remains to be proven and of course relies on the assumption that the correlation that we have seen between the solar X-ray flux and the energy in the high-frequency part of the acoustic spectrum is indeed caused by flare-driven global waves, but it holds the potential to improve the precision of space weather prediction.
\section{Flare Generated Waves in other Solar-like Stars}
As the correlation between the solar X-ray flux and the energy in the high-frequency part of the acoustic spectrum was found using disk-integrated data, it might be possible to see the same kind of correlation in other solar-like stars. \citet{2007MNRAS.381.1001K} has already shown that other solar-like stars very likely display waves with frequencies above the atmospheric acoustic cut-off frequency, and it will thus at least be possible to analyze whether correlations like those for the Sun also exist for other stars. We have great expectations for the data that we are going to receive from the Kepler mission (Metcalfe, p. ??? in these proceedings), as these will have a temporal coverage of more than 3.5 years and hopefully a precision that allows observations of high-frequency waves in other solar-like stars. If this is the case, and again assuming that the correlations that we see between the solar X-ray flux and the energy in the high-frequency part of the acoustic spectrum are caused by flare-driven global waves, then we should be able to observe the signature of flares on other solar-like stars.
Such observations could be used in the quest to understand the solar dynamo, as the Kepler data will hopefully allow us to observe and study stellar cycles in solar-like stars through the shifts in the frequencies and amplitudes that are caused by the stellar cycles \citep[see e.g.][]{2000MNRAS.313...32C}. By combining the observations of stellar cycles and stellar flares we would be able to study how the two relate in models of stellar cycles. This is an important issue, as it is not fully understood why the Sun, in both cycle 21 and 23, showed a two-year lag between the maximum in the appearance of flares and that of the sunspot number \citep{2004SoPh..219..125V}.
\acknowledgements
I thank Hans Kjeldsen, Bill Chaplin, Douglas Gough, Mike Thompson, J{\o}rgen Christensen-Dalsgaard, Bernhard Fleck and Hugh Hudson for useful discussions.
I thank the local organising committee for helping me with financial support to attend the meeting. I also acknowledge financial support from the Danish Natural Science Research Council and the Danish AsteroSeismology Centre.
\newpage
\section{Introduction}
Hilbert spaces of entire functions play an important part in modern analysis. Their structural properties
(e.g., problems of uniqueness, interpolation or sampling) are of interest both from the function-theoretic point of view
and for numerous applications -- spectral problems for canonical systems (de Branges spaces),
signal processing and time--frequency analysis (Paley--Wiener space, Bargmann--Segal--Fock space),
theoretical physics, etc.
Recently, a systematic study of a new class of Hilbert spaces of entire functions was initiated
by Yu.~Belov, T.~Mengestie and K.~Seip in \cite{bms1, bms2} and by E.~Abakumov, Yu.~Belov and the author
in \cite{abb19, loc2, bar18}. These spaces, defined via discrete Cauchy transform and generalizing de Branges spaces,
were named the {\it Cauchy--de Branges spaces} in \cite{abb19}. One of motivations for the study
of the Cauchy--de Branges spaces is a functional model for rank one perturbations of compact normal operators
\cite{bar18} (see Section \ref{appli}).
Let $T = \{t_n\}_{n=1}^\infty \subset \co$ be a set of distinct complex numbers such that $|t_n| \to \infty$, $n\to \infty$,
and let $\mu = \sum_n \mu_n \delta_{t_n}$ with $\sum_n \frac{\mu_n}{|t_n|^2+1} <\infty$.
We keep this notation throughout the whole paper.
With the pair $(T, \mu)$ we associate the space of Cauchy transforms
$$
\hh(T,\mu) = \bigg\{f(z) = \sum_n \frac{c_n\mu_n^{1/2}}{z-t_n}: \ (c_n)\in \ell^2 \bigg\}
$$
equipped with the norm $\|f\|_{\hh(T,\mu)} = \|(c_n)\|_{\ell^2}$. It is easy to see that $\hh(T,\mu)$ is a Hilbert space.
While $\hh(T,\mu)$ consists of meromorphic functions (with simple poles in the set $T$), it is more convenient to work with
its isomorphic copy consisting of entire functions. Let $A$ be an entire function which has simple zeros at the points
$t_n$ and no other zeros. Put
$$
\hh(T,A,\mu) = \bigg\{F(z) = A(z) \sum_n \frac{c_n\mu_n^{1/2}}{z-t_n}: \ (c_n) \in \ell^2 \bigg\}
$$
and $\|F\|_{\hht}= \|(c_n)\|_{\ell^2}$.
We will refer to spaces $\hht$ as the {\it Cauchy--de Branges spaces} (CdB-spaces for short).
It should be mentioned that the space $\hht$ is essentially determined by $(T, \mu)$
and spaces $\hh(T,A_1,\mu)$ and $\hh(T,A_2,\mu)$ are canonically isomorphic to each other and to $\hh(T,\mu)$.
Any Cauchy--de Branges space $\hht$ is a Reproducing Kernel Hilbert space
(i.e., the evaluation functionals $w\mapsto F(w)$ are continuous on $\hht$ for any $w\in \co$).
We will denote by $K_w$ the reproducing kernel of $\hht$ at the point $w$.
It is easy to see that $\hht$ has the {\it Division Property}:
$$
F \in\hht, \ F(w) = 0 \ \Longrightarrow \ \frac{F(z)}{z-w} \in\hht.
$$
It also follows from the definition of the inner product in $\hht$ that the functions $F_n(z) =\mu_n^{1/2}
\frac{A(z)}{z-t_n}$ form an orthonormal basis in $\hht$. Also note that
for $F(z) = A(z) \sum_n \frac{c_n\mu_n^{1/2}}{z-t_n}$ one has
$$
(F, F_n)_{\hht} = c_n = \frac{F(t_n)}{\mu_n^{1/2}A'(t_n)}.
$$
Hence, $K_{t_n} = \mu_n^{1/2}\overline{A'(t_n)} F_n$ and, thus, the system of reproducing kernels $\{K_{t_n}\}_{t_n\in T}$
is an orthogonal basis in $\hht$.
In fact, this property characterizes CdB-spaces.
\begin{proposition} \cite{bms2}
\label{ax1}
Let $\hh$ be a Reproducing Kernel Hilbert space which consists of entire functions and has the Division Property.
If $\hh$ has an orthogonal basis of reproducing kernels, then there exist
$T$, $A$ and $\mu$ as above such that $\hh=\hht$ with equality of norms.
\end{proposition}
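As a sanity check of the definitions above, note that for a finite set $T$ the same formulas make sense verbatim, and the reproducing property $(F, K_w)_{\hht} = F(w)$ can be verified numerically via the expansion $K_w = \sum_n \overline{F_n(w)}\, F_n$. The sketch below uses an arbitrary three-point $T$ and arbitrary weights $\mu$ (illustrative choices only).

```python
import numpy as np

T = np.array([1.0, 2.0 + 1.0j, -3.0])   # arbitrary distinct points
mu = np.array([0.5, 1.0, 2.0])          # arbitrary positive masses

def A(z):
    return np.prod(z - T)               # polynomial with simple zeros exactly on T

def F_basis(n, z):
    return np.sqrt(mu[n]) * A(z) / (z - T[n])   # orthonormal basis of the toy space

c = np.array([1.0 - 2.0j, 0.3 + 0.0j, -1.0j])   # arbitrary coefficients of F
def F(z):
    return sum(c[n] * F_basis(n, z) for n in range(len(T)))

w = 0.7 + 0.2j
k = np.array([np.conj(F_basis(n, w)) for n in range(len(T))])  # coefficients of K_w
inner = np.vdot(k, c)    # (F, K_w) = sum_n c_n * conj(k_n) in the ell^2 inner product
print(abs(inner - F(w)))  # ~0: point evaluation is reproduced
```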
Cauchy--de Branges spaces generalize classical de Branges spaces. Namely, any de Branges space isometrically coincides
with a space $\hht$ where $T\subset \R$ and the function $A$ is real on $\R$. In particular, if $T=\Z$, $\mu_n\equiv 1$
and $A(z) = \pi^{-1}\sin \pi z$, then $\hht$ coincides with the Paley--Wiener space $PW_\pi$,
since in this case the formula
$$
F(z) = A(z) \sum_n \frac{c_n\mu_n^{1/2}}{z-t_n} = \sum_{n\in\Z} (-1)^n c_n\frac{\sin \pi(z-n)}{\pi(z-n)},
\qquad c_n = (-1)^n F(n),
$$
is simply the Shannon--Kotelnikov--Whittaker sampling formula. We discuss the relation between
de Branges space and CdB-spaces in more detail in Section \ref{prelim}.
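The sampling formula quoted above is easy to verify numerically (an illustrative sketch; the test function and truncation range are arbitrary choices): a function in $PW_\pi$ is recovered at a non-integer point from its integer samples.

```python
import numpy as np

f = lambda t: np.sinc(t - 0.3)          # np.sinc(x) = sin(pi x)/(pi x); f lies in PW_pi
t = 0.37
n = np.arange(-20000, 20001)            # truncation of the sampling series
approx = np.sum(f(n) * np.sinc(t - n))  # Shannon--Kotelnikov--Whittaker sum
print(abs(approx - f(t)))               # small truncation error
```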
De Branges spaces were introduced by L. de Branges in the beginning of 1960-s in his famous
solution of the direct and inverse spectral problems for two-dimensional canonical systems
(see \cite{br} or \cite{rom}). They also turned out to be a very
interesting object from the point of view of function theory, while operators on de Branges spaces
serve as models for various classes of abstract linear operators \cite{gt, martin, silva, by16}.
While at present there is no theory relating CdB spaces to spectral theory of a class of differential operators,
these spaces have rich connections with many areas of function and operator theory.
Therefore, it seems to be a noteworthy goal to extend some of the basic results of the de Branges theory
to the more general and complicated field of CdB spaces.
A specific aim of the present (partially survey) paper is to study analogs of two important properties
of de Branges spaces in CdB-space setting. The first part deals with the properties of the systems of reproducing kernels
corresponding to the zeros of certain entire functions associated to the space. In the case of de Branges spaces they correspond to
orthogonal bases of reproducing kernels. The second theme of the paper is a characterization of the density of
the domain of multiplication by $z$ in $\hht$ in terms of the spectral data $(T, \mu)$.
\subsection{De Branges orthogonal bases of reproducing kernels and their generalizations.} Given $T$, $A$ and $\mu$,
we define for any $\gamma \in \co$ the entire function
\begin{equation}
\label{gh}
B_\gamma(z) = A(z) \bigg(\gamma + \sum_n \Big(\frac{1}{t_n-z} -\frac{1}{t_n}\Big) \mu_n \bigg).
\end{equation}
We assume that the term in the brackets is simply $-1/z$ in the case when
$t_n=0$ (also, we do not lose much generality if we assume that $0\notin T$).
Functions $B_\gamma$ with real $\gamma$ play an important role
in the de Branges theory. In particular, they give rise to orthogonal bases of reproducing kernels.
\begin{theorem} \cite[Theorem 22]{br}
Let $\hht$ be a de Branges space. Then for any $\gamma \in \rl$
all zeros of $B_\gamma$ are real and simple,
for different $\gamma$-s the zero sets
$\mathcal{Z}(B_\gamma)$ of $B_\gamma$ interlace, and the family of
reproducing kernels $\{K_w\}_{w\in \mathcal{Z}(B_\gamma)}$
is an orthogonal basis in $\hht$ for all $\gamma \in \rl$ except, maybe, one.
\end{theorem}
Thus, in de Branges spaces there is a continuous family of orthogonal bases of reproducing kernels. This
is a special case of a more general construction of the so-called Clark measures developed later (and independently)
by D.N. Clark \cite{clark} (see, also, a survey \cite{saks}). Moreover,
the property of having at least two orthogonal bases of reproducing kernels
distinguishes de Branges spaces among all Hilbert spaces of entire functions:
\begin{theorem} \cite{bms1}
\label{jam}
If a Reproducing Kernel Hilbert space $\hh$ of entire functions
with Division Property has two orthogonal bases of reproducing kernels, then $\hh = \hht$
where $T$ lies on a straight line. Thus, $\hh$ is a de Branges space up to a rotation.
\end{theorem}
In fact in \cite{bms1} a stronger result is proved: if a discrete weighted Hilbert transform
is unitary, then the corresponding points lie on a line or on a circle.
If $\gamma\notin \rl$, then in the de Brangean setting ($T\subset \rl$, $A$ is real on $\rl$)
the system $\{K_w\}_{w\in \mathcal{Z}(B_\gamma)}$ is no longer orthogonal, but it still can
form an unconditional basis (i.e., Riesz basis up to normalization) in $\hht$. It follows
from the classical Hardy space theory that this is the case if and only if $\mathcal{Z}(B_\gamma)$
is a Carleson interpolating sequence in $\cp$ or in $\cm$ (see Proposition \ref{carl} for details).
Zeros of functions $B_\gamma$ have one more important property: any two zero sets $\mathcal{Z}(B_\gamma)$
define the de Branges space up to a nonvanishing factor.
\begin{theorem} \cite[Theorem 24]{br}
\label{twos}
Let $\hh(T,A,\mu)$ and $\hh(\tilde T, \tilde A, \tilde \mu)$ be two de Branges spaces.
If there exist $\alpha, \beta\in \rl\setminus \{0\}$,
$\alpha\ne \beta$, such that $\mathcal{Z}(B_\alpha) = \mathcal{Z}(\tilde B_\alpha)$
and $\mathcal{Z}(B_\beta) = \mathcal{Z}(\tilde B_\beta)$, then $\tilde T= T$, $\tilde \mu = \mu$
and $\tilde A = SA$ for some nonvanishing entire $S$.
\end{theorem}
On the canonical systems side zeros of $B_\gamma$ correspond to spectra of selfadjoint extensions
of the ``canonical system operator'' (see \cite{rom}). Thus, Theorem \ref{twos}
is a de Branges space counterpart of the classical ``two spectra theorem'' of G. Borg, which allows one to recover the potential
of the Sturm--Liouville equation from the spectra of its Dirichlet and Neumann problems (see, e.g., \cite[Section 7]{rom}).
We are interested in the properties of the system of reproducing kernels $\{K_w\}_{w\in \mathcal{Z}(B_\gamma)}$
in the case of general CdB-spaces. We know that such system cannot be an orthogonal basis unless $T$ lies on a line.
Several natural questions are:
\medskip
\\
{\bf Problems.} 1. Is it true that there always exists
$\gamma$ such that $\{K_w\}_{w\in \mathcal{Z}(B_\gamma)}$ is a Riesz basis for $\hht$?
\smallskip
2. More generally, how does the set of parameters
$\gamma$ such that $\{K_w\}_{w\in \mathcal{Z}(B_\gamma)}$ is a Riesz basis depend on $T$ and $\mu$?
\smallskip
3. Is it true that the family $\{K_w\}_{w\in \mathcal{Z}(B_\gamma)}$ is at least complete in $\hht$, i.e.,
$\mathcal{Z}(B_\gamma)$ is always a uniqueness set for $\hht$ when $\gamma\ne 0$? In the case when
$B_\gamma$ has multiple zeros this means that there is no nonzero function in $\hht$ with zeros at
$\mathcal{Z}(B_\gamma)$ of the corresponding multiplicity.
\smallskip
4. Do the zero sets $\mathcal{Z}(B_\alpha)$ and $\mathcal{Z}(B_\beta)$ with $\alpha\ne\beta$ define $\hht$
up to a nonvanishing factor?
\medskip
These problems seem to be difficult in general. In particular,
it is not even clear whether the zero set of $B_\gamma$ is always infinite. This is related to a problem
posed by J.~Clunie, A.~Eremenko and J.~Rossi \cite{cer}:
\medskip
\\
{\bf Conjecture} (Clunie, Eremenko, Rossi, 1993). {\it Let $a_n>0$, $t_n\in \co$, $|t_n| \to \infty$, and
$\sum_n \frac{a_n}{|t_n|+1} <\infty$. Then the function
$$
f(z) = \sum_n \frac{a_n}{z-t_n}
$$
has infinitely many zeros. }
\medskip
The conjecture was confirmed for many special cases \cite{cer, el, lr}; however, to the best of our knowledge, it is still open in general.
The question for the regularized Cauchy transforms (as in the definition of $B_\gamma$) is apparently
more complicated, since now we need to deal with a Cauchy transform with coefficients $\mu_n/t_n$
with $\mu_n>0$. Therefore, in what follows we will distinguish a class of CdB-spaces satisfying the additional condition that
$$
\sum_n \frac{\mu_n}{|t_n|+1} <\infty.
$$
In this case we will say that CdB-space $\hht$ belongs to the {\it convergence class}.
We say that $\hht$ is {\it small} if, moreover, $\sum_n \mu_n <\infty$.
For CdB-spaces of the convergence class we modify the definition of the functions $B_\gamma$
and put
\begin{equation}
\label{ghh}
B_\gamma(z) = A(z) \bigg(\gamma + \sum_n \frac{\mu_n}{t_n-z}\bigg).
\end{equation}
We also say that CdB-space $\hht$ is a {\it space of finite order} if all functions in $\hht$ are of finite order.
It is not difficult to show that this is the case if and only if $A$ is of finite order (and its order majorizes the order of all elements
in $\hht$), see \cite[Lemma 2.5]{abb19}.
Under these additional restrictions on $\hht$ we are able to obtain positive answers to the above questions.
\begin{theorem}
\label{comp}
1. Let $\hht$ be a CdB-space of the convergence class and of finite order. Then for any $\gamma\ne 0$
the sequence $\mathcal{Z}(B_\gamma)$ is a uniqueness set for $\hht$ \textup(counting multiplicities\textup).
2. If, moreover, the space $\hht$ is small, then $\mathcal{Z}(B_0)$ is not a uniqueness set,
but the subspace of functions vanishing on $\mathcal{Z}(B_0)$ is one-dimensional.
\end{theorem}
It is well known (see, e.g., \cite[Theorem 6.2]{go})
that under the conditions of Theorem \ref{comp} the function $B_\gamma$ has infinitely many zeros,
and moreover, its zero set has zero defect and maximal possible order. Theorem \ref{comp} shows that
this set is also maximal in the sense of being a uniqueness set for the corresponding CdB-space.
We also can prove a variant of two spectra theorem.
\begin{theorem}
\label{twos1}
Let
$\hh(T,A,\mu)$ and $\hh(\tilde T, \tilde A, \tilde \mu)$ be two convergence class CdB-spaces of finite order
and let
$B_\gamma$ and $\tilde B_\gamma$ be the corresponding functions \eqref{ghh}. If there exist $\alpha, \beta\in \co\setminus \{0\}$,
$\alpha\ne \beta$, such that $\mathcal{Z}(B_\alpha) = \mathcal{Z}(\tilde B_\alpha)$
and $\mathcal{Z}(B_\beta) = \mathcal{Z}(\tilde B_\beta)$ \textup(counting multiplicities\textup), then $\tilde T= T$, $\tilde \mu = \mu$
and $\tilde A = SA$ for some nonvanishing entire $S$.
\end{theorem}
We pass to the question about Riesz bases of the form $\{\tilde K_w\}_{w\in \mathcal{Z}(B_\gamma)}$,
where $\tilde K_w = K_w/\|K_w\|$ are the normalized reproducing kernels.
We can prove this property only in a special case
of small CdB spaces with a certain separation of $T$.
We say that the sequence $T=\{t_n\}$ is {\it power separated} (with exponent $N$) if
there exist numbers $C>0$ and $N>-1$ such that, for any $n$,
\begin{equation}
\label{powsep}
{\rm dist}\,(t_n, T\setminus\{t_n\}) \geq C(|t_n|+1)^{-N}.
\end{equation}
\begin{theorem}
\label{bas}
Let $\hht$ be a small \textup(i.e., $\sum_n \mu_n <\infty$\textup) CdB-space. Assume that $T$ is power separated
with exponent $N$ and $(t_n^{2N} \mu_n) \in\ell^p$ for some $p>0$. If $\gamma\ne 0$ and
the zeros of $B_\gamma$ are simple, then the family $\{\tilde K_w\}_{w\in \mathcal{Z}(B_\gamma)}$ is a Riesz basis in $\hht$.
In particular, the theorem is true if $\hht$ is a small CdB-space and $T$ is separated, i.e.,
$|t_n-t_m|\ge \delta$ for some $\delta>0$ and any $n\ne m$.
\end{theorem}
In Section \ref{riss} we prove a more general result (Theorem \ref{bas2}) about Riesz bases
of reproducing kernels corresponding to the
zeros of some entire function which is a small, in a sense, perturbation of $A$. In Section \ref{appli}
we use the functional model for rank one perturbations of normal operators to show that
the eigenvectors of certain rank one perturbations form a Riesz basis. Recently, O. Dobosevych
and R. Hryniv \cite{hry1, hry2} studied possible spectra of rank one perturbations of
unbounded selfadjoint operators whose spectrum is discrete and separated. Assume that $T=\{t_n\}_{n\ge 1} \subset \rl$
is separated and let $\A$ be an unbounded selfadjoint operator with simple spectrum $T$ on some Hilbert
space $H$. Then a set $S = \{s_n\}_{n\ge 1}$ is the spectrum of some rank one pertubation $\LL = \A+ a\otimes b$, $a,b\in H$,
if and only if $S$ can be enumerated so that $\sum_n|s_n-t_n| <\infty$ \cite[Theorems 3.1, 4.1]{hry2}.
It is easy to extend this statement to normal operators $\A$ and also to show that in this case
the (generalized) eigenvectors of $\LL$ form a Riesz basis in $H$ (see Theorem \ref{bas3} below).
\subsection{Operator of multiplication by $z$ in CdB-spaces.}
In the de Branges theory an important role is played by the operator of multiplication by $z$. Clearly, this is an unbounded operator.
Given a CdB space $\hht$, the domain of the operator $M_{z^N}$ of multiplication by $z^N$, $N\in \N$, is given by
$\mathcal{D}_{z^N} = \{ F\in\hht: \ z^N F\in \hht\}$, $M_{z^N} F=z^N F(z)$, $F\in \mathcal{D}_{z^N}$.
The multiplication operator $M_z$ in a de Branges space serves as a model for a class
of symmetric linear operators with deficiency indices $(1, 1)$ (to be precise, for simple regular closed
operators), see \cite{martin, silva}.
A necessary and sufficient condition for the domain of $M_z$ to be dense in a de Branges space
is given in \cite[Theorem 22]{br}.
It turns out that either $\mathcal{D}_z$ is dense, or its closure has codimension 1 and is itself
a de Branges space with respect to the initial norm. Moreover, if a de Branges space has a de Branges subspace of
codimension 1, then it necessarily must be given by the closure of $\mathcal{D}_z$.
Analogous results hold for $M_{z^N}$. We extend these results to the case of general CdB-spaces.
At least some arguments of \cite{br} essentially use the symmetry with respect to $\R$
and cannot be applied in the general case. Also, we replace the notion of de Branges subspaces by a more general
notion of a {\it nearly invariant subspace}.
We say that a closed linear subspace $\hh_0 \subset \hht$ is {\it nearly invariant} if
it has the Division Property itself, that is, $f\in \hh_0$, $f(w)=0$ implies that
$\frac{f(z)}{z-w} \in \hh_0$.
\begin{theorem}
\label{dom}
Given a CdB space $\hht$, the following are equivalent:
\begin{enumerate}
\item [(i)] $\hht$ contains a nearly invariant subspace $\hh_0$ of codimension $N$\textup;
\smallskip
\item [(ii)] ${\rm clos}\,\mathcal{D}_{z^N}$ is a subspace of $\hht$ of codimension $N$\textup;
\smallskip
\item [(iii)] $\sum_n |t_n|^{2N-2} \mu_n <\infty$.
\end{enumerate}
In this case the nearly invariant subspace of codimension $N$ is unique
and is given by $\hh_0 = {\rm clos}\,\mathcal{D}_{z^N}= \{B_0, \dots, B_{N-1}\}^\perp$,
where
\begin{equation}
\label{bj}
B_j(z) = A(z)\sum_n \frac{\overline{t_n}^j \mu_n}{z-t_n}, \qquad j=0, \dots, N-1.
\end{equation}
\end{theorem}
In particular, ${\rm clos}\,\mathcal{D}_{z} \ne \hht$ if and only if
$\sum_n \mu_n <\infty$ ($\hht$ is small) and ${\rm clos}\,\mathcal{D}_{z} = \{B_0\}^\perp$.
Note that the definition of $B_0$ from \eqref{bj} coincides with the function $B_0$ given by \eqref{ghh}.
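For example, let $t_n = n$ and $\mu_n = n^{-2}$. Then condition (iii) of Theorem \ref{dom} holds for $N=1$ (since $\sum_n n^{-2}<\infty$), but fails for $N=2$ (since $\sum_n n^2\cdot n^{-2} = \infty$). Thus, such a space $\hht$ has a unique nearly invariant subspace of codimension $1$, namely $\{B_0\}^\perp$, and no nearly invariant subspaces of any finite codimension $N\ge 2$.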
It follows from Theorem \ref{dom} that if $\sum_n |t_n|^{N} \mu_n <\infty$ for any $N$, i.e.,
$L^2(\mu)$ contains the set $\mathcal{P}$ of all polynomials,
then $\hht$ contains subspaces ${\rm clos}\,\mathcal{D}_{z^N}$ of any finite codimension,
ordered by inclusion. The ordered structure of de Branges subspaces of a de Branges space
is one of the most striking and important features of de Branges theory. Under certain conditions
on the spectrum, ordered structure for nearly invariant subspaces of a CdB-space was proved in \cite[Theorems 1.3, 1.4]{abb19}.
Under certain conditions one can describe CdB spaces such that all nontrivial
nearly invariant subspaces are of finite codimension (and, thus, are given by
${\rm clos}\,\mathcal{D}_{z^N}$, $N\in\N$).
\begin{theorem}
\label{stro1}
Let $\hht$ be a CdB-space such that $T$ is power separated \textup(i.e., satisfies \eqref{powsep}\textup).
Then the following assertions are equivalent\textup:
\smallskip
1. $\hht$ contains a nearly invariant subspace of any finite codimension and any nontrivial
nearly invariant subspace is of this form.
\smallskip
2. The set of all polynomials $\pp$ is contained in $L^2(\mu)$ and is dense there.
\end{theorem}
It is a natural question whether the subspace $\hh_0$ in Theorem \ref{dom} is itself a CdB-space with respect to
the norm inherited from $\hht$, i.e., whether it has an {\it orthogonal} basis of reproducing kernels.
In Proposition \ref{nobas} we show that while $\hh_0$ coincides with a CdB-space with equivalence of norms, it is
a CdB-space itself if and only if $\hht$ is a rotation of a de Branges space (meaning that all $t_n$ lie on some straight line).
\bigskip
\\
\textbf{Organization of the paper.} In Section \ref{prelim} we discuss some basic facts concerning
de Branges and Cauchy--de Branges spaces. Theorems \ref{comp}
and \ref{twos1} are proved in Section \ref{33}. Theorem \ref{bas} as well as a more general
sufficient condition for being a Riesz basis of reproducing kernels are proved in Section~\ref{riss},
while in Section \ref{appli} these results are applied to the spectral theory of rank one perturbations
of normal operators via a functional model. Two specific examples are considered in Section \ref{examp}.
Finally, the proofs of Theorems \ref{dom} and \ref{stro1} are given in Sections \ref{riv} and \ref{stru}
respectively.
\bigskip
\\
\textbf{Notations.} In what follows we write $U(x)\lesssim V(x)$ if
there is a constant $C$ such that $U(x)\leq CV(x)$ holds for all $x$
in the set in question. We write $U(x)\asymp V(x)$ if both $U(x)\lesssim V(x)$ and
$V(x)\lesssim U(x)$. The standard Landau notations
$O$ and $o$ also will be used.
The zero set of an entire function $f$ will be denoted by $\mathcal{Z}(f)$.
We denote by $D(z,R)$ the disc with center $z$ of radius $R$ and by
$m_2$ the Lebesgue area measure in $\CC$.
\bigskip
\\
\textbf{Acknowledgement.} The author is grateful to Artur Nicolau for useful discussions concerning Frostman shifts,
to Vladimir Shemyakov for the help with numerical experiments and to Antonio Rivera for
the discussions of the material of Section \ref{riv}.
\bigskip
\section{Preliminaries}
\label{prelim}
\subsection{Reproducing kernels of CdB spaces}
Let $f(z) = A(z) \sum_n \frac{c_n\mu_n^{1/2}}{z-t_n} \in \hht$. Then
$$
f(t_n) = A'(t_n) c_n \mu_n^{1/2} =\Big( f, \frac{A(z) \overline{A'(t_n)} \mu_n}{z-t_n} \Big).
$$
Thus, the functions $\frac{A(z) \overline{A'(t_n)} \mu_n}{z-t_n}$, the reproducing kernels at the points $t_n$,
form an orthogonal basis in $\hht$. Note also that
\begin{equation}
\label{skal}
(f, g)_{\mathcal{H}(T,A,\mu)} = \big((c_n), (d_n)\big)_{\ell^2}=
\sum_n \frac{f(t_n) \overline{g(t_n)}}{|A'(t_n)|^2 \mu_n}
\end{equation}
for any $g(z) = A(z) \sum_n \frac{d_n\mu_n^{1/2}}{z-t_n} \in \hht$,
and so the space $\mathcal{H}(T,A,\mu)$ is isometrically embedded into the space $L^2(\nu)$, where
$\nu = \sum_n |A'(t_n)|^{-2} \mu_n^{-1} \delta_{t_n}$.
If $w\notin T$, then the reproducing kernel at the point $w$ is given by
$$
K_w(z) = \overline{A(w)}\, A(z) \sum_n \frac{\mu_n}{(\bar w -\bar t_n)(z-t_n)}.
$$
Any CdB space has the Division Property. Indeed, if $f(w) = 0$ and $w\notin T$, then
$$
\frac{f(z)}{z-w} = A(z) \sum_n \frac{c_n \mu_n^{1/2}}{(t_n-w)(z-t_n)}.
$$
For $w=t_m\in T$, we have $f(t_m) =0$ if and only if $c_m =0$. Then
$$
\begin{aligned}
\frac{f(z)}{z-t_m} & = A(z) \sum_{n\ne m}\frac{c_n \mu_n^{1/2}}{(z-t_m)(z-t_n)} \\
& =
A(z) \sum_{n\ne m}\frac{c_n \mu_n^{1/2}}{(t_n-t_m)(z-t_n)} +
\frac{A(z)}{z-t_m} \sum_{n\ne m}\frac{c_n \mu_n^{1/2}}{t_m-t_n}.
\end{aligned}
$$
The first sum corresponds to the coefficient sequence $\big(\frac{c_n}{t_n-t_m}\big)_{n\ne m}\in\ell^2$ (note that $\inf_{n\ne m}|t_n-t_m|>0$, since $|t_n|\to\infty$), while the second term is a constant multiple of $\frac{A(z)}{z-t_m}\in\hht$; hence $\frac{f(z)}{z-t_m}\in\hht$.
As mentioned in the Introduction, existence of an orthogonal basis of reproducing kernels
and the Division Property distinguishes CdB spaces among all
Reproducing Kernel Hilbert spaces of entire functions. For the sake of completeness we outline the proof.
\begin{proof}[Proof of Proposition \ref{ax1}]
Assume that $\{K_{t_n}\}$ is an orthogonal basis of reproducing kernels in $\hh$.
Then $K_{t_m}(t_n) = (K_{t_m}, K_{t_n})_{\hh} =0$, $n\ne m$. Fix some
$t_m$ and put $A = (z-t_m)K_{t_m}$. Then $\frac{A(z)}{z-t_n}$ belongs to $\hh$ and vanishes on $\{t_l\}_{l\ne n}$, whence
$\frac{A(z)}{z-t_n} = a_n K_{t_n}$ for some constant $a_n$. Put $\mu_n = \big\| \frac{A(z)}{z-t_n} \big\|_{\hh}^{-2}$.
Then any element of $\hh$ can be written as the sum of an orthogonal series
$$
f(z) = \sum_n c_n \mu_n^{1/2} \frac{A(z)}{z-t_n}
$$
and $\|f\|_{\hh} = \|(c_n)\|_{\ell^2}$. Thus, $\hh = \hht$.
\end{proof}
In what follows, for $f\in\hht$, it will sometimes be convenient to write $f\longleftrightarrow (c_n)$
in place of $f(z) = A(z) \sum_n \frac{c_n \mu_n^{1/2}}{z-t_n}$. Note that if $f(w) = 0$, $w\notin T$, one has
$\frac{f(z)}{z-w} \longleftrightarrow \big( \frac{c_n}{t_n-w}\big)$. Also, if $f\in {\mathcal D}_{z^N}$,
then $z^N f \longleftrightarrow (t_n^N c_n)$.
\subsection{De Branges spaces}
There are several equivalent ways to define de Branges spaces (see \cite{br}).
There is an axiomatic definition: a
Reproducing Kernel Hilbert space $\hh$ which consists of entire functions is a {\it de Branges space} if
the map $F\mapsto F^*$, where $F^*(z) = \overline{F(\overline z)}$
is an isometry on $\hh$ and for any $F\in \hh$ and $w\in \co$ such that $F(w) =0$
the function $\frac{z-\bar w}{z-w} F$ belongs to $\hh$ and has the same norm as $F$.
It is clear that if $T\subset \R$ and $A$ is real on $\R$ (i.e., $A^*=A$),
then $\hht$ is a de Branges space.
Another approach uses the notion of an Hermite--Biehler function.
An entire function $E$ is said to be in the Hermite--Biehler class if
$E$ has no zeros in $\mathbb{C}_+ \cup\rl$ and
$$
|E(z)| > |E^*(z)|, \qquad z\in {\mathbb{C}_+}.
$$
With any such function we associate the space
$\mathcal{H} (E) $ which consists of all entire functions
$F$ such that $F/E$ and $F^*/E$ restricted to $\mathbb{C_+}$ belong
to the Hardy space $H^2=H^2(\mathbb{C_+})$.
The inner product in $\he$ is given by
$$
( F,G)_{\he}= \int_\rl \frac{F(t)\overline{G(t)}}{|E(t)|^2} \,dt.
$$
Any space $\he$ satisfies the axioms of a de Branges space and any de Branges space
(without common real zeros)
is of this form for some $E$ \cite[Theorem 23]{br}.
A function $E$ is in the Hermite--Biehler class if and only if
$\Theta = \Theta_E = E^*/E$ is inner in $\cp$: the mapping $F\mapsto F/E$
is a unitary operator from $\mathcal{H}(E)$
onto the subspace $K_\Theta = H^2\ominus\Theta H^2$ of the Hardy space
$H^2$ known as a {\it model subspace}.
The reproducing kernel of ${\mathcal H} (E)$
corresponding to the point $w\in \mathbb{C}$ is given by
\begin{equation}
\label{repr}
K_w(z)=\frac{\overline{E(w)} E(z) - \overline{E^*(w)} E^*(z)}
{2\pi i(\overline w-z)} =
\frac{\overline{A(w)} B(z) -\overline{B(w)}A(z)}{\pi(z-\overline w)},
\end{equation}
where entire functions $A$ and $B$ are defined by $A = \frac{E+E^*}{2}$,
$B=\frac{E^*-E}{2i}$, so that $A$ and $B$ are real on $\mathbb{R}$
and $E=A - iB$. Note that $\frac{B}{A} = i\frac{1-\Theta}{1+\Theta}$ has positive
imaginary part in $\cp$ and is real on $\R$; thus, it can be written as
\begin{equation}
\label{herg}
\frac{B(z)}{A(z)} = pz+q + \sum_n \Big(\frac{1}{t_n-z} -\frac{1}{t_n}\Big) \mu_n
\end{equation}
for some $p\ge 0$, $q\in \rl$ and $\mu_n>0$ such that $\sum_n \frac{\mu_n}{t_n^2+1} <\infty $.
We always assume that the term in the brackets is simply $-1/z$ in case
when $t_n=0$. The reproducing kernels $\{K_{t_n}\}_{t_n\in \ZZ(A)}$
form an orthogonal basis in $\he$ if and only if $p=0$. More generally, note that $\hh(e^{i\alpha} E) = \he$
for any $\alpha\in \R$, and the reproducing kernels corresponding to the zeros
of the function $A \cos \alpha + B \sin \alpha$ (which is ``$A$-function'' for $e^{i\alpha} E$)
form an orthogonal basis in $\he$ for all $\alpha\in [0, \pi)$
except at most one \cite[Theorem 22]{br}.
If $p=0$ (equivalently, $A\notin \he$), then
$\{K_{t_n}\}_{t_n\in \ZZ(A)}$ form an orthogonal basis in $\he$ and it is easy to see
that $\he = \hht$.
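A basic example is $E(z) = e^{-iz}$, for which $\he$ is the classical Paley--Wiener space of entire functions of exponential type at most $1$ that are square integrable on $\R$. Here $A(z) = \cos z$, $B(z) = \sin z$, and the standard partial fraction expansion
$$
\tan z = \sum_{k\in\mathbb{Z}} \Big( \frac{1}{t_k - z} - \frac{1}{t_k}\Big), \qquad t_k = \Big(k+\frac12\Big)\pi,
$$
gives \eqref{herg} with $p=0$, $q=0$ and $\mu_k = 1$. Thus $\he = \hh(T, \cos z, \mu)$, and the reproducing kernels $\frac{\cos z}{z - t_k}$ (which coincide, up to signs, with the shifted sinc functions $\frac{\sin (z-t_k)}{z-t_k}$) form an orthogonal basis in $\he$.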
\bigskip
\section{Proofs of Theorems \ref{comp} and \ref{twos1}}
\label{33}
In this section one of our key tools is the following variant of Liouville's theorem. We say that $\Omega\subset \CC$ is a {\it
set of zero area density} if
$$
\lim_{R\to\infty} \frac{m_2(\Omega \cap D(0, R))}{R^2} = 0.
$$
The following result was proved in \cite{bbb-fock}. Its proof was based on deep estimates of the harmonic
measure due to A. Beurling and L. Ahlfors. A nice elementary argument was later suggested by B.N. Khabibullin \cite{khab}.
We present the proof by Khabibullin below.
\begin{theorem}
\label{dens}
If an entire function $f$ of finite order is bounded on
$\CC\setminus \Omega$ for some set $\Omega$ of zero area density,
then $f$ is a constant.
\end{theorem}
\begin{proof}
Let $|f(z)| \le 1$ for $z\in \co\setminus \Omega$ where $\Omega$ has zero area density. Put $u = \max (\log |f|, 0)$.
Then $u$ is subharmonic and $u=0$ on $\co\setminus \Omega$. For $z\in\co$, $r>0$, let
$$
M(z, r) =\frac{1}{\pi r^2} \int_{D(z,r)} u(\zeta)\, dm_2(\zeta), \qquad M(r) = M(0,r).
$$
By subharmonicity, $u(z) \le M(z,r)$ for any $r$. Also, $M(z,r) \le 4 M(2r)$ whenever $|z| \le r$.
Then we have
$$
\begin{aligned}
M(r) = \frac{1}{\pi r^2} \int_{D(0,r) \cap\Omega} u(z)\, dm_2(z) & \le
\frac{1}{\pi r^2} \int_{D(0,r) \cap\Omega} M(z,r) \, dm_2(z) \\
& \le 4 M(2r)\frac{m_2(D(0,r) \cap\Omega)}{\pi r^2}.
\end{aligned}
$$
Since $\Omega$ is of zero area density, we conclude that $M(r) = o(M(2r))$, $r\to\infty$.
It is easy to see that the latter condition contradicts the fact that $f$ is of finite order unless $M(r) \equiv 0$.
Assume that $M(r)>0$ for sufficiently large $r$.
For any $\gamma>0$
choose $r_0$ such that $M(r_0)>0$ and $M(2r) \ge \gamma M(r)$, $r\ge r_0$. Since
$f$ is of finite order, $u(z) =O(|z|^\rho)$, $|z|\to\infty$, whence
$M(r) \le C r^\rho$ for sufficiently large $r$. Thus, for any $n\in\mathbb{N}$,
$M(2^n r_0) \le C 2^{\rho n} r_0^\rho$. At the same time $M(2^n r_0) \ge \gamma^n M(r_0)$.
Since $\gamma$ can be taken arbitrarily large (e.g., $\gamma>2^\rho$) we come to a contradiction. Thus, $M(r) \equiv 0$,
whence $|f(z)| \le 1$ in $\co$.
\end{proof}
The following result contains Statement 1 of Theorem \ref{comp}
as a special case (where $a_n=-\mu_n$).
\begin{theorem}
\label{comp1}
Let $\hht$ be a CdB-space of finite order. Let
$$
G(z) = A(z) \bigg(\gamma + \sum_n \frac{a_n}{z-t_n}\bigg),
$$
where $\gamma\in \co\setminus\{0\}$ and $\sum_n \frac{|a_n|}{|t_n|+1} <\infty$.
Then the zero set $\mathcal{Z}(G)$ \textup(counting multiplicities\textup) is a uniqueness set for $\hht$.
\end{theorem}
\begin{proof}
A classical result of meromorphic function theory (see \cite[Theorem 6.1]{go}) says that if
$\sum_n \frac{|a_n|}{|t_n|+1} <\infty$, then $f(z) = \sum_n \frac{a_n}{z-t_n}$ satisfies
$$
\int_0^{2\pi} |f(re^{i\phi})|^p d\phi \to 0, \qquad r\to \infty,
$$
for any $p\in (0,1)$. From this we easily obtain that
$|f(z)| \le |\gamma|/2$ on $\co\setminus\Omega$,
where $\Omega$ has zero area density, whence $|G| \ge |\gamma|\cdot |A|/2$ on $\co\setminus\Omega$.
Assume that there exists $F(z) = A(z) \sum_n \frac{c_n\mu_n^{1/2}}{z-t_n} \in\hht$ which vanishes on $\mathcal{Z}(G)$
counting multiplicities. Then we can write $F=G H$ for some entire function $H$ and so
$$
H = \frac{F}{G} = \frac{A}{G} f_1, \qquad f_1(z) = \sum_n \frac{c_n\mu_n^{1/2}}{z-t_n}.
$$
Applying \cite[Theorem 6.1]{go} to $f_1(z)$ we conclude that $|f_1(z)| \le 1$ outside another set $\Omega_1$ of zero area density.
We conclude that $H$ is bounded in $\co\setminus(\Omega \cup \Omega_1)$. At the same time $H$ is of finite order since
$H=F/G$ and both $F$ and $G$ are of finite order. By Theorem \ref{dens} $H$ is constant and so $G\in\hht$.
It remains to note that $G\notin \hht$ for any $\gamma\ne 0$. Indeed, if $G \in \hht$, then,
comparing the values at $t_n$ we conclude that $G (t_n) = A'(t_n) a_n$, whence $(\mu_n^{-1/2}a_n) \in \ell^2$ by \eqref{skal}.
It follows that $A(z) \sum_n \frac{a_n}{z-t_n} \in \hht$ and, finally, $\gamma A\in \hht$, which is absurd.
\end{proof}
\medskip
\begin{proof}[Proof of Theorem \ref{comp}]
Statement 1 follows from Theorem \ref{comp1} applied to $a_n= - \mu_n$. Let the space $\hht$ be small.
Then it is clear that $B_0 \in \hht$, $B_0\longleftrightarrow (\mu_n^{1/2})$. Thus, $\mathcal{Z}(B_0)$ is not a uniqueness set.
Let $F\in \hht$ vanish on $\mathcal{Z}(B_0)$. We need to show that $F$ is a multiple of $B_0$.
We have
$$
\frac{zB_0(z)}{A(z)} = -\sum_n \mu_n +\sum_n \frac{\mu_n t_n}{t_n -z}.
$$
Then, by \cite[Theorem 6.1]{go} (as in the proof of Theorem \ref{comp1}) we conclude that $|B_0(z)| \gtrsim |z|^{-1}|A(z)|$
outside a set of zero area density. If $F(z) = A(z)\sum_n \frac{c_n\mu_n^{1/2}}{z-t_n}$ vanishes on $\mathcal{Z}(B_0)$,
we can write $F=B_0H$ for some entire function $H$ of finite order, whence
$$
\frac{B_0(z) H(z)}{A(z)} = \sum_n \frac{c_n\mu_n^{1/2}}{z-t_n} = o(1)
$$
as $|z|\to \infty$ outside another set of zero area density. We conclude that $|H(z)| = o(|z|)$ outside a set of zero area density
and so $H=const$ by Theorem \ref{dens}.
\end{proof}
\medskip
\begin{proof}[Proof of Theorem \ref{twos1}]
Assume that
$\mathcal{Z}(B_\alpha) = \mathcal{Z}(\tilde B_\alpha)$
and $\mathcal{Z}(B_\beta) = \mathcal{Z}(\tilde B_\beta)$ (counting multiplicities). Then
$\tilde B_\alpha = g_1 B_\alpha$ and $\tilde B_\beta = g_2 B_\beta$ where $g_1, g_2$ are nonvanishing entire functions.
Dividing one of these equations by the other, we get
$$
\bigg(\alpha + \sum_n \frac{\tilde \mu_n}{z-\tilde t_n} \bigg)
\bigg(\beta + \sum_n \frac{ \mu_n}{z- t_n} \bigg) = g(z)
\bigg(\alpha + \sum_n \frac{\mu_n}{z- t_n} \bigg)
\bigg(\beta + \sum_n \frac{ \tilde \mu_n}{z- \tilde t_n} \bigg),
$$
where $g = g_1/g_2$ is a nonvanishing entire function.
Arguing as in the proof of Theorem \ref{comp1} (making use of \cite[Theorem 6.1]{go})
we conclude that each of the brackets is bounded and bounded away from zero
on $\co\setminus\Omega$, where $\Omega$ has zero area density,
whence $g$ is bounded on $\co\setminus\Omega$. Since $\hht$ is a space
of finite order, $g$ also is of finite order, and, by Theorem \ref{dens}, $g$ is a constant. Since, moreover,
$$
\alpha + \sum_n \frac{\mu_n}{z- t_n} \to \alpha
$$
as $z\to \infty$ outside a set of zero area density, we conclude that $g\equiv 1$. Hence,
$$
(\beta-\alpha) \bigg(\sum_n \frac{ \mu_n}{z- t_n} - \sum_n \frac{ \tilde \mu_n}{z- \tilde t_n} \bigg)=0,
$$
whence $T=\tilde T$, $\tilde \mu= \mu$ and $\tilde A =SA$ for some entire nonvanishing $S$.
\end{proof}
The two spectra theorem remains true if one takes $T$ as one of the spectra.
\begin{corollary}
\label{ed}
Let $\hh(T,A,\mu)$ and $\hh(T, \tilde A, \tilde \mu)$ be two convergence class CdB-spaces of finite order.
If there exists $\alpha \in \co\setminus \{0\}$, such that $\mathcal{Z}(B_\alpha) = \mathcal{Z}(\tilde B_\alpha)$
\textup(counting multiplicities\textup),
then $\tilde \mu = \mu$ and $\tilde A = SA$ for some nonvanishing entire $S$.
\end{corollary}
\begin{proof}
Since $\mathcal{Z}(B_\alpha) = \mathcal{Z}(\tilde B_\alpha)$ counting multiplicities, we have
$$
A(z) \bigg(\alpha + \sum_n \frac{\mu_n}{z-t_n} \bigg)
= g(z) \tilde A(z) \bigg(\alpha + \sum_n \frac{\tilde \mu_n}{z- t_n} \bigg)
$$
for some nonvanishing entire function $g$. Arguing as above we conclude that $g\tilde A/A =1$, whence $\tilde \mu =\mu$
and $\tilde A = SA$ for some nonvanishing entire $S$.
\end{proof}
\bigskip
\section{Riesz bases of reproducing kernels}
\label{riss}
We start with the following elementary proposition.
\begin{proposition}
\label{hry}
Given two sequences $T = \{t_n\}_{n\ge 1}$, $|t_n| \to \infty$, and $\{a_n\}_{n\ge 1}$,
assume that there exist $\delta_n>0$ such that the discs $D(t_n, \delta_n)$ are pairwise disjoint
and
\begin{equation}
\label{pro2}
\sum_n \frac{|a_n|}{|z-t_n|} \to 0 \qquad \text{as}\ \ |z| \to\infty, \ z\notin \cup_n D(t_n, \delta_n).
\end{equation}
Put
\begin{equation}
\label{pro}
G(z) = A\bigg(1+ \sum_{n\ge 1} \frac{a_n}{z-t_n}\bigg).
\end{equation}
Then there exists an enumeration of the zero set $\mathcal{Z} (G)$
\textup(counted with multiplicities\textup), $\mathcal{Z} (G) = \{s_n\}_{n\ge 1}$, such that
\begin{equation}
\label{pro1}
|s_n-t_n| \asymp |a_n|.
\end{equation}
Conversely, if there exist $T$ and
$\delta_n>0$ such that the discs $D(t_n, \delta_n)$ are pairwise disjoint and
the set $S= \{s_n\}_{n\ge 1} \subset \co$ satisfies
\begin{equation}
\label{pro2a}
\sum_n \frac{|s_n-t_n|}{|z-t_n|} \to 0 \qquad \text{as}\ \ |z| \to\infty, \ z\notin \cup_n D(t_n, \delta_n),
\end{equation}
then $S=\mathcal{Z} (G)$
for a function $G$ of the form \eqref{pro} with $|a_n| \asymp |s_n-t_n|$.
\end{proposition}
\begin{proof}
Clearly, $G(t_n) = 0$ when $a_n =0$.
By Rouch\'e's theorem, there exists $n_0$ such that for any $n> n_0$ the functions
$z-t_n+a_n$ and $(z-t_n)\big(1+ \sum_{k\ge 1} \frac{a_k}{z-t_k}\big)$
have the same number of zeros in $D(t_n, \delta_n)$,
whence $G$ has a unique zero $s_n \in D(t_n, \delta_n)$ and $|s_n - t_n| \asymp |a_n|$.
Now we can write
$$
G(z) = A(z) \prod_{n>n_0} \frac{z-s_n}{z-t_n} \cdot \frac{H(z)}{\prod_{n\le n_0}(z-t_n)}
$$
for some entire function $H$. The infinite product converges uniformly
on compact subsets of the plane, since
$$
\sum_n \frac{|s_n-t_n|}{|z-t_n|} \to 0
$$
uniformly outside of small neighborhoods of $t_n$. An application of the maximum principle completes the argument.
It remains to show that $H$ is a polynomial of degree exactly $n_0$. Indeed,
we have
$$
\bigg|\frac{G(z)}{A(z)}\bigg| \asymp 1, \qquad
\prod_{n>n_0} \bigg|\frac{z-s_n}{z-t_n} \bigg| = \prod_{n>n_0} \bigg|1 + \frac{t_n-s_n}{z-t_n} \bigg| \asymp 1,
$$
for $|z| >R$ (with a sufficiently large $R$) and $z\notin \cup_n D(t_n, \delta_n)$. We conclude that
$H$ is a polynomial of degree $n_0$.
To prove the converse, define $G(z) = A(z)\prod_{n\ge 1} \frac{z-s_n}{z-t_n}$
and put
$$
a_n ={\rm Res}_{t_n} \frac{G}{A} = (t_n-s_n)\prod_{k\ne n} \frac{t_n-s_k}{t_n-t_k}.
$$
By \eqref{pro2a} the product converges and $|a_n| \asymp |s_n-t_n|$.
Here we use that $|s_n-t_n| = o(\delta_n)$, $n\to\infty$, whence $\sum_{k\ne n} \frac{|t_k-s_k|}{|t_n-t_k|} \to 0$
as $n\to\infty$. We can therefore write
$$
\frac{G(z)}{A(z)} = \sum_n \frac{a_n}{z-t_n} +H(z)
$$
for some entire $H$. Note that $|G(z)/A(z)| \asymp 1 $
when $z\notin \cup_n D(t_n, \delta_n)$ and $|z|$ is sufficiently large.
Thus, making use of \eqref{pro2a}, we conclude that $H$ is a nonzero constant.
\end{proof}
The following lemma provides some natural sufficient conditions for \eqref{pro2}.
Recall that the sequence $T=\{t_n\}$ is {\it power separated} (with exponent $N$) if
there exist numbers $C>0$ and $N>-1$ such that, for any $n$,
${\rm dist}\,(t_n, T\setminus\{t_n\}) \geq C(|t_n|+1)^{-N}$.
Note that we allow $N$ to be negative. If $N=0$,
then $T$ is simply separated, i.e., $|t_n-t_m|\ge \delta$ for some $\delta>0$ and any $n\ne m$.
Any power separated sequence has finite convergence exponent and so $\hht$ is a space of finite order.
A typical (and in a sense ``the largest'') example of a power separated sequence with the exponent $N$ is
$\{m^\alpha + i n^\alpha\}_{m,n\ge 1}$ where $\alpha = \frac{1}{1+N}$.
\begin{lemma}
\label{bbb}
Assume that $T = \{t_n\}$ is power separated with exponent $N$. If $(t_n^N a_n) \in \ell^p$
for some $p <2$, then \eqref{pro2} holds for $\delta_n = \frac{C}{3}(|t_n|+1)^{-N}$.
If, additionally, $T$ lies in a finite union of some strips, then \eqref{pro2} holds
when $(t_n^N a_n) \in \ell^p$ for some $p\in (0, \infty)$.
\end{lemma}
\begin{proof}
It is sufficient to prove the lemma for the sequence
$\{m^\alpha + i n^\alpha\}_{m,n\ge 1}$ where $\alpha = \frac{1}{1+N}$.
In the general case, appending the sequence if necessary, we can always consider $T$
to be a small perturbation of a dilation of $\{\pm m^\alpha \pm i n^\alpha\}_{m,n\ge 1}$. We omit the technicalities.
For $t_n = k^\alpha+ i m^\alpha \ne k_0^\alpha+ i m_0^\alpha$ one has
$$
|k^\alpha+i m^\alpha - (k_0^\alpha+i m_0^\alpha)| \gtrsim |t_n|^{-N} (|k-k_0| + |m-m_0|).
$$
Since $\sum_{(k, m) \ne (k_0, m_0)} (|k-k_0| + |m-m_0|)^{-q} <\infty$ if and only if $q>2$, the condition
$(t_n^N a_n) \in \ell^p$ for some $p <2$ is sufficient for \eqref{pro2}.
\end{proof}
The following result contains Theorem \ref{bas} as a special case (where $c_n = -\gamma^{-1}\mu_n^{1/2}$,
$(\mu_n) \in \ell^1$ and $(t_n^{2N} \mu_n) \in\ell^p$).
Recall that we denote by $\tilde K_w$ the normalized reproducing kernel of $\hht$
at a point $w$.
\begin{theorem}
\label{bas1}
Let $\hht$ be a CdB-space such that $T$ is power separated with exponent $N$ and
$(t_n^{2N} \mu_n) \in\ell^p$ for some $p\in (0, \infty)$.
Let
$$
G(z) = A\bigg(1+ \sum_{n\ge 1} \frac{c_n \mu_n^{1/2}}{z-t_n}\bigg),
$$
where $(c_n) \in\ell^2$. If all zeros of $G$ are simple, then $\{\tilde K_s\}_{s\in \mathcal{Z}(G)}$ is a Riesz basis in $\hht$.
If $T$ lies in a finite union of some strips, then the conclusion of the theorem remains true under a weaker
condition $(t_n^{2N} \mu_n) \in\ell^\infty$.
\end{theorem}
\begin{proof}
If $(t_n^{2N} \mu_n) \in\ell^p$, $p\in(0,\infty)$, and $(c_n) \in\ell^2$, then, by H\"older's inequality,
$(t_n^N c_n \mu_n^{1/2}) \in \ell^s$ with $s = \frac{2p}{p+1}<2$, while
in the case when $T$ lies in a finite union of some strips and $(t_n^{2N} \mu_n) \in\ell^\infty$
we have $(t_n^N c_n \mu_n^{1/2}) \in \ell^2$. By Lemma \ref{bbb}, \eqref{pro2} is satisfied
for $a_n = c_n \mu_n^{1/2}$.
Let $\mathcal{Z}(G) = \{s_n\}$. According to Proposition \ref{hry}
we may assume that $s_n$ are enumerated so that $|s_n-t_n| \asymp |c_n| \mu_n^{1/2}$.
We will show that $\{\tilde K_{s_n}\}$ is quadratically close to the orthonormal basis $\{\tilde K_{t_n}\}$.
Recall that, for $s_n\notin T$, one has
$$
K_{s_n} (z) =\overline{A(s_n)} A(z) \sum_k \frac{\mu_k}{(\bar s_n-\bar t_k) (z-t_k)}, \qquad
K_{t_n} (z) = \overline{A'(t_n)} \mu_n \cdot \frac{A(z)}{z-t_n},
$$
whence $\|K_{t_n}\|^2 = |A'(t_n)|^2 \mu_n$ and
$$
\|K_{s_n}\|^2 = |A(s_n)|^2 \sum_k \frac{\mu_k}{|s_n - t_k |^2}.
$$
Similarly to the proof of Lemma \ref{bbb}, it is easily seen that
if $T$ is power separated with exponent $N$ and $(t_n^{2N} \mu_n) \in\ell^p$, then
$$
\sum_{k \ne n} \frac{\mu_k}{|s_n - t_k |^2} =O(1).
$$
Thus,
$$
\|K_{s_n}\|^2 \asymp \frac{|A(s_n)|^2 \mu_n}{|s_n-t_n|^2 }.
$$
We will show that
\begin{equation}
\label{agani}
\sum_n \bigg\| \frac{(\bar s_n - \bar t_n) K_{s_n}}{\overline{A(s_n)} \mu_n^{1/2}} - \frac{K_{t_n}}{\overline{A'(t_n)} \mu_n^{1/2}} \bigg\|^2 <\infty.
\end{equation}
Indeed, we have
$$
\frac{(\bar s_n - \bar t_n) K_{s_n}}{\overline{A(s_n)} \mu_n^{1/2}} - \frac{K_{t_n}}{\overline{A'(t_n)} \mu_n^{1/2}}
= A(z) \frac{\bar s_n - \bar t_n}{\mu_n^{1/2}} \sum_{k\ne n} \frac{\mu_k}{(\bar s_n-\bar t_k) (z-t_k)}.
$$
Hence, making use of $|s_n-t_n| \asymp |c_n| \mu_n^{1/2}$,
$$
\bigg\| \frac{(\bar s_n - \bar t_n) K_{s_n}}{\overline{A(s_n)} \mu_n^{1/2}} - \frac{K_{t_n}}{\overline{A'(t_n)} \mu_n^{1/2}} \bigg\|^2
\asymp |c_n|^2 \sum_{k\ne n} \frac{\mu_k}{|s_n-t_k|^2} \lesssim |c_n|^2,
$$
which proves \eqref{agani}.
To prove the theorem we would like to refer to Bari's classical theorem about systems which are quadratically close to
an orthonormal basis. For this one needs that the system in question is quadratically independent. This is easy to show, but
since we deal with systems of reproducing kernels, we also can use their special properties. Recall that a sequence
of normalized reproducing kernels $\{\tilde k_\lambda\}_{\lambda\in\Lambda}$ in a Reproducing Kernel Hilbert space of analytic
functions $\hh$ is a Riesz basis if and only if the map
$$
f\mapsto \{f(\lambda)/\|k_\lambda\|\}_{\lambda\in\Lambda}
$$
is an isomorphism of $\hh$ onto $\ell^2$. It is well known and easy to see
that if $\hh$ has the Division Property, then the property to be a Riesz basis
is stable with respect to moving a finite number of points. More precisely,
for the sets $\Lambda = \{\lambda_n\}_{n\ge 1}$ and $\tilde \Lambda = \{\tilde \lambda_n\}_{n\ge 1}$ of {\it distinct} points
such that $\lambda_n = \tilde \lambda_n$ for $n\ge n_0$ the families
$\{\tilde k_\lambda\}_{\lambda\in\Lambda}$ and $\{\tilde k_{\tilde \lambda}\}_{\tilde \lambda\in\tilde \Lambda}$
are or are not Riesz bases simultaneously.
Now choose $n_0$ such that
$$
\sum_{n>n_0} \bigg\| \frac{(\bar s_n - \bar t_n) K_{s_n}}{\overline{A(s_n)} \mu_n^{1/2}} - \frac{K_{t_n}}{\overline{A'(t_n)} \mu_n^{1/2}} \bigg\|^2 <1.
$$
Then $\{\tilde K_{s_n}\}_{n>n_0}\cup \{\tilde K_{t_n}\}_{n\le n_0}$ is a Riesz basis, and so is $\{\tilde K_{s_n}\}$.
\end{proof}
\begin{remark}
{\rm 1. In the case when $G$ has a multiple zero, say,
a zero $\lambda$ of multiplicity $m>1$, one should add to the system $\{\tilde K_s\}_{s\in \mathcal{Z}(G)}$
the (normalized) reproducing kernels for derivatives $K^{(j)}_\lambda$, $1\le j\le m-1$,
where $K^{(j)}_\lambda$ is a function in $\hht$ such that $(F, K^{(j)}_\lambda) = F^{(j)}(\lambda)$, $F\in \hht$.
\smallskip
2. In applications to spectral theory the power separation condition is often relaxed to some weaker ``separation in the mean''
conditions (see, e.g., \cite[Sections 6, 7]{shkal}).
In this case, using similar methods, one can show that the corresponding system of kernels will be a {\it Riesz basis with brackets}.
We will not pursue this idea here. }
\end{remark}
\bigskip
\section{Applications to rank one perturbations of normal operators}
\label{appli}
Following L. de Branges \cite{br}, we say that an entire function $G$ is {\it associated} to the space
$\mathcal{H}(T,A,\mu)$ and write $G\in {\rm Assoc}\,(\mathcal{H}(T, A, \mu))$
if, for any $F\in \mathcal{H}(T,A,\mu)$ and $w\in\CC$, we have
$$
\frac{F(w)G(z) - G(w)F(z)}{z-w} \in \mathcal{H}(T,A,\mu).
$$
If $G$ has zeros, then the inclusion $G\in {\rm Assoc}\,(\mathcal{H}(T, A, \mu))$ is equivalent to
$\frac{G(z)}{z-\lambda} \in \mathcal{H}(T,A,\mu)$ for some (any)
$\lambda\in \mathcal{Z}(G)$. In particular, we have $A\in {\rm Assoc}\,(\mathcal{H}(T, A, \mu)) \setminus \mathcal{H}(T,A,\mu)$.
CdB-spaces form a natural setting for a functional model of rank one perturbations
of compact normal operators. Let $\A$ be a compact normal operator with simple
point spectrum $\{s_n\}$ and trivial kernel (i.e., $0\notin \{s_n\}$). Thus,
$\A$ is unitarily equivalent to multiplication by $z$ in $L^2(\nu)$,
$\nu = \sum_n \delta_{s_n}$. Put $t_n = s_n^{-1}$, $T=\{t_n\}$.
For $a=(a_n)$, $b=(b_n) \in L^2(\nu)\cong \ell^2$, consider the rank one perturbation of $\A$,
$$
\LL = \A + a\otimes b, \qquad \LL x = \A x+ (x, b)a.
$$
Assume that $b=(b_n)$ is a cyclic vector for $\A$, i.e., $b_n \ne 0$ for any $n$.
The following functional model for $\LL$ was studied in \cite[Theorem 2.9]{bar18}.
It is based on the standard representation of the resolvent as a Cauchy-type integral.
\begin{theorem}[Functional model for rank one perturbations]
\label{model}
1. Let $\A$ and $\LL$ be as above. Then
there exist
\begin{itemize}
\item
a positive measure $\mu=\sum_n\mu_n\delta_{t_n}$ such that
$\sum_n \frac{\mu_n}{|t_n|^2 +1}<\infty$\textup;
\item
a space $\mathcal{H}(T,A,\mu)$\textup;
\item
an entire function $G\in {\rm Assoc}\,(\mathcal{H}(T, A, \mu))$ with $G(0)=1$
\end{itemize}
such that $\LL$ is unitarily equivalent to the model operator
$\mathcal{T}_G: \mathcal{H}(T,A,\mu) \to \mathcal{H}(T,A,\mu)$,
$$
(\mathcal{T}_Gf)(z) = \frac{f(z) - f(0)G(z)}{z}, \qquad f \in \mathcal{H}(T,A,\mu).
$$
2. Conversely, for any space $\mathcal{H}(T,A,\mu)$ with $0\notin T$ and any function
$G \in {\rm Assoc}\,(\mathcal{H}(T, A, \mu))$ with $G(0) = 1$,
the corresponding operator $\mathcal{T}_G$ is a model
of a rank one perturbation for some compact normal operator $\A$
with spectrum $\{s_n\}$, $s_n = t_n^{-1}$.
3. Moreover, the set of eigenvalues of $\LL$ coincides with $\{\lambda^{-1}:\ \lambda\in \ZZ(G) \}$
and the corresponding eigenvectors of the model operator $\mathcal{T}_G$ are given
by $\frac{G(z)}{z-\lambda}$, $\lambda\in \ZZ(G)$, while
the eigenvectors of $\mathcal{T}_G^*$ are of the form $K_\lambda$, $\lambda\in \ZZ(G)$.
\end{theorem}
The measure $\mu$ and the function $G$ in this model are related to the perturbation by the formulas
$$
\mu_n = |t_n|^2 |b_n|^2
$$
and
$$
G(z) = A(z)\bigg( 1+
z \sum_n \frac{a_n \bar b_n t_n }{z- t_n} \bigg) =
A(z) \bigg( 1+
\sum_n a_n \bar b_n t^2_n \Big(\frac{1}{z- t_n} +\frac{1}{t_n} \Big) \bigg).
$$
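In finite dimensions the correspondence between the spectrum of $\LL$ and the zeros of $G$ can be verified directly: substituting $z=1/s$, $t_n = 1/s_n$ into the first formula turns the bracket into $1+\sum_n a_n\bar b_n/(s_n-s)$, the usual rank one determinant factor. The following numerical sketch (with arbitrary sample data, added here for illustration only and not taken from the text) checks that this factor vanishes at every eigenvalue of the matrix ${\rm diag}(s_n)+ab^*$:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
# hypothetical sample spectrum of the normal operator A and vectors a, b
s = rng.standard_normal(n) + 1j * rng.standard_normal(n)
a = rng.standard_normal(n) + 1j * rng.standard_normal(n)
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)

# L x = A x + (x, b) a corresponds to the matrix diag(s) + a b^*
L = np.diag(s) + np.outer(a, b.conj())
eigs = np.linalg.eigvals(L)

# bracket factor of G at z = 1/s; it must vanish at each eigenvalue s of L
brackets = [1.0 + np.sum(a * b.conj() / (s - lam)) for lam in eigs]
resid = max(abs(v) for v in brackets)
```

Here $t_n = s_n^{-1}$, so the zeros of $G$ are the reciprocals of the computed eigenvalues, in accordance with part 3 of Theorem \ref{model}.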
Note that $\LL^*$ is also a rank one perturbation of the normal operator $\A^*$, and applying
the above model to $\LL^*$ we reduce the properties of eigenvectors of $\LL$ to the study of geometric properties
of systems of reproducing kernels in CdB spaces. In the case of multiple zeros, one should also consider the reproducing kernels for derivatives
$K_\lambda^{(j)}$.
Also, note that the systems $\Big\{\frac{G(z)}{G'(\lambda)(z-\lambda)}\Big\}_{\lambda\in \ZZ(G)}$
and $\{K_\lambda\}_{\lambda\in \ZZ(G)}$ are biorthogonal.
Now assume that $\A$ belongs to some Schatten class $\mathfrak{S}_p$, $p>0$,
which is equivalent to the property that $T =\{t_n\}$ has finite convergence exponent (and so the space $\hht$ can be chosen to be of finite order).
To apply Theorem \ref{comp1} to $\hht$ and $G$ one must impose the following conditions:
$$
\sum_n |a_nb_n t_n| <\infty, \qquad 1+ \sum_n a_n \bar b_n t_n \ne 0.
$$
Under these conditions eigenvectors and root vectors of the perturbed operator $\LL$ are complete,
the same holds for its adjoint $\LL^*$ (\cite[Theorem 2.1]{bar18}).
In the case when $(t_n a_n) \in L^2(\nu)$ or $(t_n b_n) \in L^2(\nu)$, we are in the situation
of the so-called {\it weak perturbations} in the sense of V.I.~Macaev. Indeed, in this case $a\in {\rm Range}\,\A$ or
$b\in {\rm Range}\,\A$, whence $\LL$ can be written as $\LL = \A (I+S)$ or
$\LL = (I+S)\A $ for some rank one operator $S$. Perturbations of this form were considered in the now classical
theorems of M.V.~Keldysh and V.I.~Macaev. In particular, a theorem due to Keldysh
says that if $\A\in \mathfrak{S}_p$, $p>0$, is a normal operator,
whose spectrum lies on a finite system of rays and $S$ is an arbitrary compact operator with ${\rm Ker}\, (I+S) =0$,
then eigenvectors and root vectors of $\LL$ are complete. In Macaev's theorem $\A$ is assumed to be only compact,
but then it must be selfadjoint. For these results we refer to the original papers by Keldysh and Macaev \cite{keld1, keld2, Mats61},
as well as to \cite[Chapter V]{gk} and to a recent survey paper \cite[Section 4]{shkal}.
Our situation is much more special since $S$ is of rank one; however, we do not need any requirements
on the location of the spectrum of $\A$. Note that in our case the condition ${\rm Ker}\, (I+S) =0$
coincides with $1+ \sum_n a_n \bar b_n t_n \ne 0$. It should also be mentioned that in general rank one perturbations
need not be complete and, in some cases, can even be Volterra operators (see \cite[Theorems 1.1, 1.2]{by15}
and \cite[Theorem 8.1]{bar18} for details).
Now we consider an application to the question of whether eigenvectors of a perturbed operator
form a Riesz basis. To apply Theorem \ref{bas1} to $G$ we need to write $a_n \bar b_n t^2_n = c_n\mu_n^{1/2}$
for some $(c_n)\in\ell^2$. Obviously, this is possible if and only if $(a_nt_n) \in \ell^2$, and so we are in the situation of
a weak perturbation. We have the following theorem:
\begin{theorem}
\label{bas2}
Let $\A$ be a normal operator in a Hilbert space $H \cong \ell^2$ with simple
spectrum $\{s_n\}$ and trivial kernel. Let $t_n= s_n^{-1}$ be the inverses of the eigenvalues of $\A$
and assume that $T =\{t_n\}$ is power separated with exponent $N$ \textup(whence $\A$ is in some Schatten class\textup).
Let $\LL = \A+a\otimes b$ where $a,b \in H$, $(t_na_n) \in\ell^2$
and $(t_n^{N+1} b_n) \in \ell^p$ for some $p\in (0, \infty)$. Assume that ${\rm Ker}\, \LL =0$
and $b=(b_n)$ is a cyclic vector for $\A$, i.e., $b_n \ne 0$ for any $n$.
Then the \textup(normalized\textup) eigenvectors and root vectors of $\LL$ form a Riesz basis in $H$.
If $T$ lies in a finite union of some strips, then the conclusion of the theorem remains true under a weaker
condition $(t_n^{N+1} b_n) \in\ell^\infty$.
\end{theorem}
\begin{proof}
Considering the functional model of Theorem \ref{model} for $\LL$
we find the space $\hht$, $\mu_n = |t_n|^2 |b_n|^2$, and the corresponding function $G$ given by
\begin{equation}
\label{bho}
G(z) = A(z)\bigg( 1+ \sum_n a_n \bar b_n t_n
+ \sum_n \frac{a_n \bar b_n t^2_n}{z-t_n}\bigg).
\end{equation}
If all zeros of $G$ are simple, then, by Theorem \ref{bas1}
(applied to $c_n = a_n t_n$ and $\mu_n = |t_n|^2 |b_n|^2$), the normalized reproducing
kernels $\{\tilde K_\lambda\}_{\lambda\in \ZZ(G)}$ form a Riesz basis in $\hht$. Therefore, its biorthogonal system
(which coincides with the set of eigenvectors of $\mathcal{T}_G$, the unitarily equivalent model of $\LL$) is also a Riesz basis.
By Proposition \ref{hry} and Lemma \ref{bbb}, all zeros of $G$, except maybe a finite number,
are simple. Thus, there exist at most finitely many multiple eigenvalues, and it is easy to see that the union of the
eigenvectors and of the (finitely many) root vectors gives a Riesz basis.
\end{proof}
Finally, consider a related problem about rank one perturbations of {\it unbounded} normal operators
with discrete spectrum. Now let $\A$ be an unbounded normal operator on a separable Hilbert space
$H$ with simple spectrum $T =\{t_n\}$, $|t_n| \to \infty$, $0\notin T$. We then may assume
that $\A$ is multiplication by $z$ in $L^2(\nu) \cong \ell^2$, $\nu=\sum_n \delta_{t_n}$.
Let $\LL = \A + a \otimes b$ be a rank one perturbation of $\A$. We will assume that $a =(a_n) \in L^2(\nu)$;
for $b$, however, it is sufficient to assume that $b \in \A L^2(\nu)$, i.e., $(t_n^{-1} b_n) \in \ell^2$.
In this case, for $x\in {\mathcal D}(\A)$, we can understand the inner product $(x,b)$ as $(\A x, (\A^*)^{-1}b)$, and so $\LL$
is well defined on ${\mathcal D}(\A)$.
Assume that ${\rm Ker}\, \LL= 0$, which is obviously equivalent to
\begin{equation}
\label{kap}
\varkappa = 1+(\A^{-1}a, b) = 1+\sum_n \frac{a_n \bar b_n}{t_n} \ne 0.
\end{equation}
In this case $\LL^{-1}$ is a (bounded) rank one perturbation of a compact normal operator $\A^{-1}$,
$$
\LL^{-1} = \A^{-1} - \frac{1}{\varkappa} \A^{-1} a \otimes (\A^{-1})^*b.
$$
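This is the Sherman--Morrison formula for a rank one update, and it is easily checked numerically; the sketch below (sample data chosen for illustration only) verifies the displayed expression for $\LL^{-1}$ on a small diagonal matrix:

```python
import numpy as np

# hypothetical sample spectrum (nonzero) and perturbation vectors
t = np.array([1.0 + 1.0j, 2.0 - 1.0j, 3.0 + 2.0j, -4.0 + 1.0j, 5.0j])
a = np.array([0.5, -0.3, 0.2j, 0.1, 0.4])
b = np.array([0.3, 0.2j, -0.1, 0.25, 0.15])

A = np.diag(t)
L = A + np.outer(a, b.conj())              # L = A + a (x) b

kappa = 1.0 + np.sum(a * b.conj() / t)     # kappa = 1 + (A^{-1} a, b)
u = a / t                                  # A^{-1} a
v = b / t.conj()                           # (A^{-1})^* b
Linv = np.diag(1.0 / t) - np.outer(u, v.conj()) / kappa

err = np.linalg.norm(Linv @ L - np.eye(5))
```

The vectors above are scaled so that $|\varkappa - 1| < 1$, guaranteeing $\varkappa \ne 0$.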
Assume that $b_n \ne 0$ for any $n$.
Considering the model for $\LL^{-1}$ we see that $\LL^{-1}$ is unitarily equivalent to
an operator $\mathcal{T}_G$ on $\hht$ where $\mu_n = |b_n|^2$ and
\begin{equation}
\label{jjk}
G(z) = \frac{1}{\varkappa} A(z) \bigg(1 - \sum_n \frac{a_n \bar b_n}{z- t_n}\bigg)
\end{equation}
(note that $\A^{-1} a = (t_n^{-1} a_n)$ and $(\A^{-1})^*b = (\overline{t_n^{-1}} b_n)$).
\begin{theorem}
\label{bas3}
Let $\A$ be an unbounded normal operator
in a Hilbert space $H\cong\ell^2$ whose spectrum $T=\{t_n\}$ is simple and power separated with exponent $N$.
Let $\LL = \A+a\otimes b$, where $a=(a_n) \in\ell^2$, $b_n\ne 0$ for any $n$ and
$(t_n^N b_n) \in \ell^p$ for some $p\in(0, \infty)$.
Then the normalized eigenvectors and root vectors of $\LL$ form a Riesz basis in $H$.
If $T$ lies in a finite union of some strips, then the conclusion of the theorem remains true under a weaker
condition $(t_n^{N} b_n) \in\ell^\infty$.
\end{theorem}
\begin{proof}
We can assume without loss of generality that ${\rm Ker}\, \LL= 0$ (one can always replace $\A$ by $\A +\alpha I$).
Then one can apply the above functional model to
$\LL^{-1}$ and get the corresponding space $\hht$ with $\mu_n = |b_n|^2$ and $G$
given by \eqref{jjk}. By the hypothesis, $(t_n^{2N} \mu_n) \in \ell^p$ for some $p\in(0, \infty)$
($(t_n^{2N} \mu_n) \in \ell^\infty$ in the case when $T$ lies in a finite union of strips).
Now, applying Theorem \ref{bas1} (with $\mu_n = |b_n|^2$ and $c_n = a_n\bar b_n/|b_n|$), we see that
the set $\{K_{\lambda}\}_{\lambda \in \mathcal{Z}(G)}$ (with a finite number of functions
$K^{(j)}_{\lambda}$ added in the case of multiple zeros) is a Riesz basis in $\hht$.
Recall that in our model $\{K_{\lambda}\}_{\lambda \in \mathcal{Z}(G)}$
is the system of eigenvectors of
$\mathcal{T}_G^*$ (which is unitarily equivalent to $(\LL^{-1})^*$). Its biorthogonal system is also
a Riesz basis; thus, the eigenvectors of $\LL^{-1}$ (and of $\LL$) form a Riesz basis in $H$.
\end{proof}
\begin{corollary}
\label{bas4}
Let $\A$ be an unbounded normal operator
in a Hilbert space $H\cong\ell^2$ whose spectrum $T=\{t_n\}$ is simple and separated,
and let $\LL = \A+a\otimes b$, $a,b\in H$. Then there exists an enumeration $\{z_n\}$
of the spectrum of $\LL$ \textup(counting multiplicities\textup) such that $\sum_n |z_n-t_n| <\infty$. Moreover, the normalized
eigenvectors and root vectors of $\LL$ form a Riesz basis in $H$.
Conversely, for any sequence $\{z_n\}$ such that $\sum_n |z_n-t_n| <\infty$ there exists a unique rank one perturbation
$\LL = \A+a\otimes b$, $a,b\in H$, of $\A$ such that the spectrum of $\LL$ coincides with $\{z_n\}$
\textup(counting multiplicities\textup).
\end{corollary}
\begin{proof}
Without loss of generality ${\rm Ker}\, \LL= 0$ and $\varkappa\ne 0$, where $\varkappa$ is given by \eqref{kap}.
In the case when $b_n \ne 0$ for any $n$ the Riesz basis property follows from Theorem \ref{bas3}.
Eigenvalues of $\LL$ are zeros of the function $G$ given by \eqref{jjk}. By Proposition \ref{hry},
there exists an enumeration of the zero set of $G$
(counting multiplicities), $\ZZ(G) = \{z_n\}$, such
that $|z_n -t_n| \asymp |a_nb_n|$, whence $\sum_n |z_n-t_n| <\infty$.
Conversely, by Proposition \ref{hry} and Lemma \ref{bbb}, any sequence $\{z_n\}$ such that
$\sum_n |z_n-t_n| <\infty$ is a zero set of a function of the form
$$
G(z) = A(z) \bigg( 1+ \sum_n \frac{c_n}{z- t_n}\bigg)
$$
with $\sum_n |c_n| <\infty$. It remains to choose appropriate $a_n, b_n$.
In the case when $b_n=0$ for some $n$, we formally cannot apply the model from Theorem \ref{model}.
This case, however, is easily reduced to the cyclic case.
Let $\mathbb{N} = N_1\cup N_2$, where $N_2 = \{ n:\ b_n=0\}$.
Then we can write $\A = \A_1\oplus \A_2$, where $\A_j$ is multiplication by $z$ in $H_j = L^2(\nu_j) \cong\ell^2(N_j)$,
$\nu_j = \sum_{n\in N_j} \delta_{t_n}$, $j=1,2$.
Now let $\LL =\A+a\otimes b$. With respect to decomposition
$H = H_1\oplus H_2$ we have $a=a^{(1)}\oplus a^{(2)}$, $b=b^{(1)} \oplus 0$.
Denote by $(e_m)_{m\in N_2}$ the standard orthonormal basis of $H_2$,
$(e_m)_n = 0$, $m\ne n$, and $(e_m)_m=1$. It is clear that
$0\oplus e_m$ is an eigenvector
of $\LL$ corresponding to the eigenvalue $t_m$, $m\in N_2$.
Consider the operator $\LL_1 = \A_1 + a^{(1)} \otimes b^{(1)}$ on $H_1$. Now $b^{(1)}$
is a cyclic vector for $\A_1$. Its model space is $\mathcal{H}(T_1, A_1, \mu^{(1)})$,
where $T_1=\{t_n\}_{n\in N_1}$, $\mu^{(1)} = \sum_{n\in N_1} \mu_n \delta_{t_n}$,
and $\mu_n = |t_n|^2|b_n|^2$, $n\in N_1$. The corresponding ``characteristic'' function $G_1$
is given by
$$
G_1(z) = \frac{1}{\varkappa} A_1(z) \bigg(1 - \sum_n \frac{a_n \bar b_n}{z- t_n}\bigg) =
\frac{1}{\varkappa} A_1(z) \bigg(1 - \sum_{n\in N_1} \frac{a_n \bar b_n}{z- t_n}\bigg).
$$
The vector $b^{(1)}$ is cyclic and so, by Theorem \ref{bas3},
the normalized eigenvectors $(f_k)$ of $\LL_1$ form a Riesz basis in $H_1$ (to avoid uninteresting technicalities we assume that
all eigenvalues $\lambda_k$ are simple). Note that $\lambda_k$ (the zeros of $G_1$)
are exactly the zeros of the function \eqref{jjk} which are different from $t_m$, $m\in N_2$.
For $u=u^{(1)}\oplus u^{(2)}$ we have
$\LL u =
\LL_1 u^{(1)} \oplus \big(\A_2 u^{(2)} + (u^{(1)}, b^{(1)})a^{(2)}\big)$.
If $u$ is an eigenvector of $\LL$ then either $u^{(1)} =0$ (and so $u=0 \oplus e_m$ for some $m$) or $u^{(1)}\ne 0$ and
then we can assume that $u^{(1)}= f_k$ for some $k$. In this case for $u^{(2)}$ we have the equation:
$$
\A_2 u^{(2)} + (f_k, b^{(1)})a^{(2)} = \lambda_k u^{(2)},
$$
whence $u^{(2)} = - (f_k, b^{(1)}) (\A_2 - \lambda_k I)^{-1} a^{(2)}$.
Note that no $\lambda_k$ belongs to the spectrum of $\A_2$, which coincides with $\{t_m\}_{m\in N_2}$.
Thus, the set of eigenvectors of $\LL$ is of the form $\{f_k \oplus g_k\}_{k\in N_1} \cup \{0 \oplus e_m\}_{m\in N_2}$,
where $g_k = - (f_k, b^{(1)}) (\A_2 - \lambda_k I)^{-1} a^{(2)}$.
Recall that a system $\{h_n\}$ is a Riesz basis in a Hilbert space $H$ if and only if
$\{h_n\}$ is complete and $\|\sum_n c_n h_n\| \asymp \|(c_n)\|_{\ell^2}$, $(c_n)\in\ell^2$.
It is obvious that the eigenvectors of $\LL$ are complete. Since $\{f_k\}$ and $\{e_m\}$
are Riesz bases in $H_1$ and $H_2$ respectively, it is sufficient to show
that $\|\sum_k c_k g_k\| \lesssim \|(c_k)\|_{\ell^2}$, $(c_k)\in\ell^2$. This is, obviously, true, since
$$
(g_k)_j = \frac{a_j^{(2)}}{\lambda_k - t_j}(f_k, b^{(1)}),
$$
the sequence $\{(f_k, b^{(1)})\}$ is in $\ell^2$, while $\lambda_k-t_k \to 0$
and so the set $\{\lambda_k\}_{k\in N_1} \cup \{t_m\}_{m\in N_2}$
is separated.
\end{proof}
\begin{remark}
{\rm 1. Theorem \ref{bas4} extends \cite[Theorems 3.1, 4.1]{hry2}, which characterize
spectra of rank one perturbations of unbounded selfadjoint operators, to the case of normal
operators.
\smallskip
2. There is a vast literature on Riesz bases of eigenvectors and root vectors of perturbations
of selfadjoint or normal operators with discrete spectrum;
for a detailed survey we refer to \cite{shkal}. It is possible
that our Theorem \ref{bas3} is covered by some more general results. However, it seems that most of the results on Riesz bases
of eigenvectors (see, e.g., \cite[Theorems 6.6, 7.1]{shkal})
apply to the case when the unperturbed operator is selfadjoint or normal with the spectrum on a finite system of rays.
While we look at a very special class of perturbations (i.e., rank one), we do not need any assumptions on the location of the spectrum.
\smallskip
3. We give a more detailed comparison of
Theorem \ref{bas3} with \cite[Theorem 7.1]{shkal} (which in a slightly weaker form goes back to \cite{mit}).
In this theorem perturbations $T+B$ of an unbounded selfadjoint operator $T$
with discrete spectrum are considered such that the following ``local $p$-domination property'' holds:
for the normalized eigenvectors $\phi_n$ of $T$ and $p\in (-\infty, 1)$ one has $\|B \phi_n\| \lesssim |t_n|^p$.
In our situation the local $p$-domination condition is equivalent to $|b_n| \lesssim |t_n|^{p}$,
while the relation between $p$ and the order of the operator from \cite[Theorem 7.1]{shkal}
means that $p\le -N$ where $N$ is the exponent of power separation
(note that the case $p\ge 0$ corresponds only to operators of order at least 1).
In \cite{shkal} a somewhat weaker mean separation condition is used, and so the systems of eigenvectors form Riesz bases with brackets. Thus, Theorem 7.1 from \cite{shkal} says that a rank one perturbation
$\LL = \A+a\otimes b$ of a {\it selfadjoint} operator $\A$
has a Riesz basis of eigenvectors and root vectors if $(a_n) \in\ell^2$ and $(t_n^N b_n) \in \ell^\infty$.
Note also that $\A$ is assumed to be of order greater than $1/2$ which means that
$N<2$. Theorem \ref{bas3} shows that for rank one perturbations conditions
$(a_n) \in\ell^2$ and $(t_n^N b_n) \in \ell^\infty$ are sufficient for a Riesz basis property
for any normal operator with spectrum in a finite union of strips and for any positive order.
Under the slightly stronger condition $(t_n^N b_n) \in \ell^q$ for some $q\in(0, \infty)$
the result remains true for any normal operator with power separated spectrum without
any restrictions on its location.}
\end{remark}
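Although Theorem \ref{bas3} concerns genuinely infinite-dimensional operators, its conclusion can be illustrated on finite truncations, where the Riesz basis property corresponds to uniformly bounded condition numbers of the normalized eigenvector matrices. A minimal numerical sketch (with an assumed sample spectrum $t_n = n + \frac{i}{2}(-1)^n$ and decaying $a_n$, $b_n$; truncations only illustrate, they prove nothing):

```python
import numpy as np

def eigvec_cond(d):
    """Condition number of the normalized eigenvector matrix of a
    d-dimensional truncation of diag(t_n) + a (x) b."""
    n = np.arange(1, d + 1)
    t = n + 0.5j * (-1.0) ** n     # separated, genuinely non-real spectrum
    a = 0.5 / n                    # (a_n) in l^2
    b = 0.5 / n ** 2               # rapidly decaying b, cf. Theorem bas3
    L = np.diag(t) + np.outer(a, b)
    _, V = np.linalg.eig(L)
    V = V / np.linalg.norm(V, axis=0)
    return np.linalg.cond(V)

# condition numbers stay uniformly small as the dimension grows
conds = [eigvec_cond(d) for d in (20, 40, 80)]
```

The operator here is non-normal (a complex diagonal plus a real rank one term), so the boundedness of the condition numbers is not automatic.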
\bigskip
\section{Two examples}
\label{examp}
We return to the problem about Riesz bases corresponding to zeros of the functions $B_\gamma$ of the form \eqref{gh}
and \eqref{ghh}. Our first example deals with the case of de Branges spaces (i.e., $T \subset \R$), while the second is
a specific example of a CdB space corresponding to points on the cross $\R\cup i\R$, which can be thought of as
a cross counterpart of the Paley--Wiener space.
Assume that $T \subset \R$. Then $\hht = \he$ for some Hermite--Biehler class function $E$.
Moreover, we have $E=A-iB$, where
$$
B(z) = A(z) \bigg(q+ \sum_n \Big(\frac{1}{t_n-z} -\frac{1}{t_n}\Big) \mu_n \bigg)
$$
for some $q \in\R$. As usual we agree that the term in the brackets is simply $-1/z$ in case when $t_n=0$.
Note that the ``mass at infinity'' $p$ (from the representation \eqref{herg}) is zero since
we assume that $\{K_{t_n}\}$ is an orthogonal basis in $\he$.
The function $\beta = B/A$ is a Herglotz function in $\cp$.
By the de Branges theory we know that for all $\gamma\in \R$ (except, maybe, one) the zeros
of the functions
$$
B_\gamma(z) = A(z) \bigg(\gamma +q+ \sum_n \Big(\frac{1}{t_n-z} -\frac{1}{t_n}\Big) \mu_n \bigg) =B+\gamma A
$$
generate an orthogonal basis in $\hht=\he$. (Note a slight difference in the definition of $B_\gamma$ -- now the parameter $q$
is included into $B_0$.) What happens if $\gamma\notin \R$?
Consider the inner function $\Theta = E^*/E = \frac{i-\beta}{i+\beta}$ in $\cp$.
Since $\Theta$ is meromorphic in $\co$ it is of the form
$\Theta(z) = e^{iaz} D(z)$, where $a\ge 0$ and $D$ is a meromorphic Blaschke product.
We have
$$
\frac{B+\gamma A}{E} = \frac{\Theta-1}{2i} +\gamma \frac{\Theta+1}{2} = \frac{1}{2} (\gamma+i + (\gamma-i)\Theta).
$$
Thus, the zeros of $B+\gamma A$ are the solutions of the equation $\Theta = \frac{i+\gamma}{i-\gamma}$. In the case
$\gamma=i$ these are the poles of $\Theta$, i.e., the zeros of $E$. If $\gamma\in \cm$, then all zeros of $B+\gamma A$ are in $\cp$
and vice versa.
Recall that a Blaschke product $D$ in the upper half-plane with zeros $\{z_n\}_{n\ge 1}$ is said to be interpolating
if $\inf_n \prod_{k\ne n}\big|\frac{z_n-z_k}{z_n-\bar z_k}\big| >0$.
By the classical Douglas--Shapiro--Shields theorem (see, e.g., \cite[Lecture VII]{nik}),
the normalized Cauchy kernels $\{\tilde k_w\}_{w\in \ZZ(D)}$ form a Riesz basis in their closed linear span
$K_D = H^2 \ominus D H^2$ if and only if
$D$ is an interpolating Blaschke product. Here $k_w(z) = \frac{i}{2\pi}\,(z-\bar w)^{-1}$ is the Cauchy kernel of $H^2$ in $\cp$ and
$\tilde k_w = (4\pi\, {\rm Im}\, w)^{1/2}\, k_w$.
For $|\alpha|<1$ denote by $\Theta_\alpha$ the Frostman shift of $\Theta$,
$\Theta_\alpha = \frac{\Theta-\alpha}{1-\overline{\alpha}\Theta}$.
\begin{proposition}
\label{carl}
Let $\hht =\he$ be a de Branges space and let $A$, $B$, $B_\gamma$, $\Theta$ be as above.
Let $\gamma\notin \R$ and put $\alpha = \frac{i+\gamma}{i-\gamma}$.
Then the following are equivalent:
\smallskip
1. The zeros of $B_\gamma$ are simple and the family $\{K_w\}_{w\in \ZZ(B_\gamma)}$
is a Riesz basis in $\he$;
\smallskip
2. $\gamma \in \cm$ and $\Theta_\alpha$ is an interpolating Blaschke product, or
$\gamma\in \cp$ and $\Theta_{1/\overline{\alpha}}$ is an interpolating Blaschke product.
\end{proposition}
\begin{proof}
Assume that $|\alpha|<1$ and consider the
normalized reproducing kernels of the model space $K_\Theta = E^{-1} \he$ at the points $w$ such that $\Theta(w) =\alpha$:
$$
\tilde k_w^{\Theta}(z) = \Big(\frac{4\pi\, {\rm Im}\, w}{1-|\Theta(w)|^2}\Big)^{1/2}\, \frac{i}{2\pi}\cdot\frac{1-\overline{\Theta(w)}\Theta(z)}{z-\bar w} =
\Big(\frac{4\pi\, {\rm Im}\, w}{1-|\alpha|^2}\Big)^{1/2}\, \frac{i}{2\pi}\cdot\frac{1-\overline{\alpha}\Theta(z)}{z-\bar w}.
$$
A well-known transform (sometimes referred to as Crofoot's transform)
$$
J: f\mapsto (1-|\alpha|^2)^{1/2} \frac{f}{1-\overline{\alpha}\Theta}
$$
maps $K_\Theta$ unitarily on the model space $K_{\Theta_\alpha}$.
Clearly, $J$ maps the family $\{\tilde k_w^{\Theta}\}_{w\in \ZZ(\Theta-\alpha)}$ to
$\{\tilde k_w\}_{w\in \ZZ(\Theta_\alpha)}$. Thus, we conclude that
$\{\tilde k_w^{\Theta}\}_{w\in \ZZ(\Theta-\alpha)}$ is a Riesz basis in $K_\Theta$ if and only if
$\{\tilde k_w\}_{w\in \ZZ(\Theta_\alpha)}$ is a Riesz basis in $K_{\Theta_\alpha}$ if and only if
$\Theta_\alpha$ is an interpolating Blaschke product. It remains to note that $\{\tilde K_w\}_{w\in \ZZ(B_\gamma)}$
is a Riesz basis in $\he$ if and only if
$\{\tilde k_w^{\Theta}\}_{w\in \ZZ(\Theta-\alpha)}$ is a Riesz basis in $K_\Theta$ (see formula \eqref{repr}).
In the case when $\gamma\in \cp$ and $|\alpha|>1$ note that the systems
$\{\tilde K_w\}_{w\in \ZZ(B_\gamma)}$ and $\{\tilde K_{\overline{w}}\}_{w\in \ZZ(B_\gamma)}$
simultaneously form a Riesz basis in $\he$ or not, while $\Theta(\bar w) = 1/\overline{\Theta(w)} = 1/\overline{\alpha}$ for $w\in \ZZ(B_\gamma)$, which reduces this case to the previous one.
\end{proof}
\begin{remark}
{\rm Given an interpolating Blaschke product $\Theta$, it is a difficult problem to describe the set of parameters $\alpha$
for which $\Theta_\alpha$ is also interpolating. A. Nicolau \cite{nic1} gave a description of the Carleson--Newman (i.e., finite products of interpolating) Blaschke products for which every Frostman shift is also a Carleson--Newman Blaschke product. He also gave
an example, for any $m\in (0,1)$, of an interpolating Blaschke product whose shifts with $|\alpha| \ge m$ are not Carleson--Newman.
On the other hand, if $\Theta$ is a {\it thin} Blaschke product (i.e., $ \prod_{k\ne n}\big|\frac{z_n-z_k}{z_n-\bar z_k}\big| \to 1$,
$n\to \infty$),
then all its Frostman shifts are also thin and thus interpolating, up to possible gluing of a finite number of zeros. See \cite{nic2, nic3}
for further results and the references therein. }
\end{remark}
Let $T = \Z +\frac{1}{2}$, $A(z) =\frac{1}{\pi} \cos (\pi z)$ and $\mu_n =1$ for any $t_n\in T$.
Then the space $\hh(T, A, \mu)$ coincides isometrically with the Paley--Wiener space $PW_\pi$.
It is well known and easy to see that the systems of reproducing kernels corresponding to the zeros of
$$
B_\gamma(z) = A(z) \bigg(\gamma + \sum_{n\in \Z} \Big(\frac{1}{n+\frac{1}{2}-z} - \frac{1}{n+\frac{1}{2}} \Big)\bigg) =
\frac{\gamma}{\pi} \cos \pi z + \sin\pi z
$$
form a Riesz basis in $\hht$ unless $\gamma = \pm \pi i$. In the case when $\gamma = \pm \pi i$ the function
$B_\gamma(z) = \pm i \exp(\mp i \pi z)$
does not vanish and so the corresponding system is empty.
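These statements about the Paley--Wiener example are easy to confirm numerically: for $\gamma \ne \pm\pi i$ the zeros of $B_\gamma$ form the shifted lattice $k + \frac{1}{\pi}\arctan(-\gamma/\pi)$, $k\in\Z$, while for $\gamma = \pi i$ the function degenerates to $ie^{-i\pi z}$. A short check (the sample value $\gamma = 2 - i$ is arbitrary):

```python
import cmath

PI = cmath.pi

def B(gamma, z):
    # B_gamma(z) = (gamma/pi) cos(pi z) + sin(pi z)
    return (gamma / PI) * cmath.cos(PI * z) + cmath.sin(PI * z)

# for gamma != +-pi*i the zeros solve tan(pi z) = -gamma/pi,
# i.e. they form the shifted lattice z_k = k + (1/pi) atan(-gamma/pi)
gamma = 2.0 - 1.0j
c = cmath.atan(-gamma / PI) / PI
resid = max(abs(B(gamma, k + c)) for k in range(-5, 6))

# for gamma = pi*i the function equals i*exp(-i pi z) and never vanishes
dev = max(abs(B(PI * 1j, x / 7) - 1j * cmath.exp(-1j * PI * x / 7))
          for x in range(-10, 11))
```

Both residuals are at the level of machine precision.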
Now we consider a certain cross analogue of the Paley--Wiener space.
Let
\begin{equation}
\label{comlos}
T = \Big(\Z +\frac{1}{2}\Big) \cup i \Big(\Z +\frac{1}{2}\Big), \qquad A(z) =\frac{1}{\pi} \cos (\pi z) \cos (\pi i z)
\end{equation}
and $\mu_n =1$ for any $t_n\in T$. Then
$$
\frac{B_\gamma(z)}{A(z)} = \gamma + \sum_{t_n\in T} \bigg(\frac{1}{t_n-z} - \frac{1}{t_n}\bigg) =
\gamma +\pi \, \tg (\pi z) + \pi i\, \tg (\pi i z).
$$
It is easy to see that for any $\gamma$ all zeros of $B_\gamma$ (except possibly a finite number)
are simple (see the discussion below).
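The displayed formula for $B_\gamma/A$ can be tested numerically by truncating the sum over $T$; the sketch below compares the truncated sum with $\pi\,\tan(\pi z) + \pi i\,\tan(\pi i z) = \pi\tan(\pi z) - \pi\tanh(\pi z)$ at a few arbitrarily chosen sample points:

```python
import cmath

PI = cmath.pi

def partial_sum(z, N=2000):
    """Truncation of the sum over T = (Z+1/2) u i(Z+1/2) of 1/(t-z) - 1/t."""
    total = 0.0 + 0.0j
    for n in range(-N, N):              # symmetric range: the -1/t terms cancel
        for t in (n + 0.5, 1j * (n + 0.5)):
            total += 1.0 / (t - z) - 1.0 / t
    return total

def closed_form(z):
    # pi tan(pi z) + pi i tan(pi i z) = pi tan(pi z) - pi tanh(pi z)
    return PI * cmath.tan(PI * z) - PI * cmath.tanh(PI * z)

pts = [0.3 + 0.2j, -0.7 + 0.1j, 1.1 - 0.4j]
err = max(abs(partial_sum(z) - closed_form(z)) for z in pts)
```

The symmetric truncation converges like $O(N^{-2})$, so the agreement is well within the asserted tolerance.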
\begin{proposition}
\label{tangens}
Let $\hht$ be a CdB space with parameters \eqref{comlos}. Then the system of normalized kernels
$\{\tilde K_w\}_{w\in \ZZ(B_\gamma)}$ \textup(with added reproducing kernels in case of multiple zeros\textup) is complete
for any $\gamma \in \co$, and is a Riesz basis in
$\hht$ if and only if $\gamma \notin \{\pm \pi(1+i),\, \pm \pi(1-i) \}$.
\end{proposition}
\begin{proof}
For the proof of the Riesz basis property we use the classical theorem of Bari (see, e.g., \cite[Lecture VI]{nik}): a normalized
system $\{x_n\}$ in a Hilbert space $H$
is a Riesz basis if and only if:
a) $\{x_n\}$ is complete and minimal;
b) its biorthogonal system $\{y_n\}$ is complete;
c) the map $J: x\mapsto (x, x_n)$ is a bounded linear map from $H$ to $\ell^2$;
d) the map $\tilde J: x\mapsto (x, y_n)$ is a bounded linear map from $H$ to $\ell^2$.
\medskip
\\
{\it Step 1. Completeness of $\{\tilde K_w\}_{w\in \ZZ(B_\gamma)}$.}
Note that
\begin{equation}
\label{comlos1}
\frac{B_\gamma(z)}{A(z)} = \gamma +\frac{\pi}{i} \frac{e^{2\pi i z} - 1}{e^{2\pi i z} +1}
- \pi \frac{e^{2\pi z} - 1}{e^{2\pi z} +1}.
\end{equation}
Then for any $\delta>0$ the ratio $B_\gamma/A$ tends respectively to
$\gamma +\pi(-1+i)$, $\gamma +\pi(1+i)$, $\gamma +\pi(1-i)$, and
$\gamma +\pi(-1-i)$, when $|z|\to \infty$ in each of the angles $\{\delta <\arg z<\pi/2 - \delta\}$,
$\{\pi/2+\delta <\arg z<\pi - \delta\}$, $\{\pi+\delta <\arg z<3\pi/2 - \delta\}$,
and $\{3\pi/2+\delta <\arg z<2\pi - \delta\}$. Now, if $f = A\sum_{t_n\in T} \frac{c_n}{z-t_n} \in \hht$ is orthogonal
to $\{\tilde K_w\}_{w\in \ZZ(B_\gamma)}$, then we can write $f=B_\gamma h$ for some entire function $h$.
Since $f/A$ tends to zero in each of the above four angles, we conclude that $h$ tends to zero at least in three of them.
Since $h$ must be a function of order at most 1, we conclude that it is zero.
\medskip
\\
{\it Step 2. Completeness of the biorthogonal system.} It should be noted that if the system
$\{K_w\}_{w\in \ZZ(B_\gamma)}$ is complete in some CdB space $\hht$, then its biorthogonal system
is always complete; here we do not use the special form of $\hht$. Indeed, for any $w$ such that
$B_\gamma(w) = 0$ we have
$$
\frac{B_\gamma(z)}{z-w} = A(z)\sum_n \frac{\mu_n}{(w-t_n)(z-t_n)} \in \hht.
$$
From now on, to avoid uninteresting technicalities, we assume that all zeros of $B_\gamma$ are simple.
Then the biorthogonal system to $\{K_w\}_{w\in \ZZ(B_\gamma)}$
is given by $\big\{ \frac{B_\gamma(z)}{B'_\gamma(w)(z-w)} \big\}$. Assume that $f=A\sum_n \frac{c_n \mu_n^{1/2}}{z-t_n}$
is orthogonal to this system. Hence, for any $w\in \ZZ(B_\gamma)$ one has
$$
\sum_n \frac{\bar c_n \mu_n^{1/2}}{w-t_n} =0.
$$
Therefore, the function $f^*$ defined by $f^*(z) = A(z) \sum_n \frac{\bar c_n \mu_n^{1/2}}{z-t_n}$ belongs to $\hht$
and vanishes on $\ZZ(B_\gamma)$, a contradiction with completeness of
$\{K_w\}_{w\in \ZZ(B_\gamma)}$ unless $c_n=0$ for any $n$.
\medskip
\\
{\it Step 3. Zeros of $B_\gamma$ when $\gamma \notin \{\pm \pi(1+i),\, \pm \pi(1-i)\}$.}
Analyzing the equation $B_\gamma = 0$ using the representation \eqref{comlos1} and the Rouch\'e theorem
it is easy to see that in the case
$\gamma \notin \{\pm \pi(1+i),\, \pm \pi(1-i)\}$ its solutions consist of four series of zeros. Namely, there exist
$\alpha_j, \beta_j \in \R$ (depending on $\gamma$),
$j=1, \dots, 4$,
such that all sufficiently large zeros of $B_\gamma$ have one of the following forms:
\begin{equation}
\label{comlos2}
\begin{aligned}
& k + \alpha_1 +i\beta_1 + \delta_{1k}, \quad ik + i\alpha_2 -\beta_2 + \delta_{2k}, \\
& -k + \alpha_3 +i\beta_3 + \delta_{3k},
\quad -ik + i\alpha_4 -\beta_4 + \delta_{4k},
\end{aligned}
\end{equation}
where $k\in \N$ is sufficiently large and $\delta_{jk} = O(e^{-2\pi k})$.
Note also that for any $\gamma\in\co$ we have $\ZZ(B_\gamma) \cap T = \emptyset$
and, moreover, $\alpha_j +i\beta_j \notin \Z+\frac{1}{2}$.
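The first of the four series is easy to check numerically: along the positive real axis $\tanh(\pi z)\to 1$, so the zeros approach $k + c$ with $\tan(\pi c) = 1 - \gamma/\pi$, and at these points $|B_\gamma/A|$ is of order $e^{-2\pi k}$. A sketch (with the arbitrary non-exceptional sample value $\gamma = 1$, for which $\alpha_1 = c$ and $\beta_1 = 0$ up to exponentially small errors):

```python
import cmath

PI = cmath.pi

def F(gamma, z):
    # B_gamma / A = gamma + pi tan(pi z) + pi i tan(pi i z)
    #             = gamma + pi tan(pi z) - pi tanh(pi z)
    return gamma + PI * cmath.tan(PI * z) - PI * cmath.tanh(PI * z)

gamma = 1.0                                   # non-exceptional sample value
# along the positive real axis the zeros approach k + c with
# tan(pi c) = 1 - gamma/pi:
c = cmath.atan(1.0 - gamma / PI) / PI
resid = max(abs(F(gamma, k + c)) for k in range(3, 9))
```

At the predicted points $F$ reduces to $\pi(1-\tanh(\pi(k+c)))$, which is exponentially small in $k$.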
\medskip
\\
{\it Step 4. Boundedness of $J$ and $\tilde J$.}
Recall that $\|K_w\|^2 = |A(w)|^2 \sum_n \frac{\mu_n}{|t_n-w|^2}$.
From the representations \eqref{comlos2} it follows that, for any fixed
$\gamma \notin \{\pm \pi(1+i),\, \pm \pi(1-i)\}$ one has $\|K_w\| \asymp |A(w)|$,
$w\in \ZZ(B_\gamma)$. Since, for $f=A\sum_n \frac{c_n}{z-t_n} \in \hht$,
$$
(f, \tilde K_w) = \frac{A(w)}{\|K_w\|} \sum_n \frac{c_n}{w-t_n},
$$
we see that the boundedness of $J$ is equivalent to the boundedness of the operator
\begin{equation}
\label{romen}
(c_n) \in\ell^2 \mapsto \Big(\sum_n \frac{c_n}{w-t_n}\Big)_{w\in \ZZ(B_\gamma)}.
\end{equation}
In view of the asymptotic representation \eqref{comlos2} of $\ZZ(B_\gamma)$ this operator is a small perturbation
of discrete Hilbert and Hankel-type operators $(c_n)_{n\in \Z} \mapsto \big(\sum_{n\ne k} \frac{c_n}{n-k}\big)_{k\in \Z}$ and
$(c_n)_{n\in \Z} \mapsto \big(\sum_{n\ne 0} \frac{c_n}{n+i k}\big)_{k\in \Z}$, and thus is bounded.
The biorthogonal system to $\{\tilde K_w\}_{w\in \ZZ(B_\gamma)}$ is given by
$\big\{\frac{\|K_w\| B_\gamma(z)}{B'_\gamma(w)(z-w)} \big\}$. Then, for
$f=A\sum_n \frac{c_n}{z-t_n} \in \hht$, we have (by \eqref{skal})
$$
\Big(\frac{\|K_w\| B_\gamma(z)}{B'_\gamma(w)(z-w)}, f \Big) = \sum_n
\frac{\|K_w\| B_\gamma(t_n)}{A'(t_n) B'_\gamma(w)}\cdot \frac{\bar c_n}{t_n-w}.
$$
It is easy to see that $|A'(t_n)| \asymp |B_\gamma(t_n)|$ and,
from the asymptotics \eqref{comlos2}, that $\|K_w\| \asymp |A(w)| \asymp |B'_\gamma(w)|$, $w\in \ZZ(B_\gamma)$.
Thus, the boundedness of $\tilde J$ is also equivalent to the boundedness of the operator \eqref{romen}.
\medskip
\\
{\it Step 5. Case of $\gamma \in \{\pm \pi(1+i),\, \pm \pi(1-i)\}$.}
In this case two of the series of zeros \eqref{comlos2} glue together. Let us consider the case
$\gamma = \pi(1-i)$. Then it is easy to see that all sufficiently large zeros in the first quadrant
are of the form
$$
\Big(\frac{k}{2} - \frac{1}{8}\Big)(1+i) +O(e^{-2\pi k})
$$
where $k\in \N$ is sufficiently large. We also have two series of zeros of the form
$$
-k + \frac{5}{8} +\frac{i}{4\pi} \log 2 + O(e^{-2\pi k}), \qquad -ki + \frac{5}{8}i +\frac{1}{4\pi} \log 2 + O(e^{-2\pi k}),
$$
as $k\to \infty$.
It is easy to see that for the zeros $w_k = \big(\frac{k}{2} - \frac{1}{8}\big)(1+i) +O(e^{-2\pi k})$ (with sufficiently large $k$)
one has
$$
\|K_{w_k}\|^2 = |A(w_k)|^2 \sum_n \frac{\mu_n}{|t_n-w_k|^2} \asymp
|A(w_k)|^2 \sum_{n=1}^\infty \frac{1}{(k-n)^2 +k^2}\asymp \frac{|A(w_k)|^2}{k}.
$$
Take $f(z) = \frac{A(z)}{z- 1/2}$.
Then $|(f, \tilde K_{w_k})| \asymp \frac{k^{1/2}}{|w_k - 1/2|}$, $k\to \infty$,
whence $\big( (f, \tilde K_{w_k}) \big) \notin\ell^2$ and $J$ is trivially unbounded.
The same is true for other exceptional values of $\gamma$.
\end{proof}
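As a sanity check on Step 5, one can evaluate $B_\gamma/A$ for $\gamma = \pi(1-i)$ at the predicted diagonal points $\big(\frac{k}{2}-\frac18\big)(1+i)$: the leading exponential corrections of $\tan$ and $\tanh$ cancel there, so the values are extremely small (a numerical illustration only):

```python
import cmath

PI = cmath.pi

def F(z):
    # B_gamma / A for the exceptional value gamma = pi(1 - i)
    return PI * (1 - 1j) + PI * cmath.tan(PI * z) - PI * cmath.tanh(PI * z)

# predicted glued series along the diagonal of the first quadrant
resid = max(abs(F((k / 2 - 1 / 8) * (1 + 1j))) for k in range(4, 9))
```

For these points the first-order terms in the expansions of $\tan$ and $\tanh$ cancel exactly, leaving a residual of order $e^{-4\pi k}$.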
\section{Nearly invariant subspaces of finite codimension}
\label{riv}
In this section we prove Theorem \ref{dom}.
\begin{proof}[Proof of Theorem \ref{dom}]
{\bf (iii) $\Longrightarrow$ (ii).} Assume that $\sum_n |t_n|^{2N-2}\mu_n <\infty$. Then
the functions $B_j$ given by \eqref{bj} (i.e., $B_j \longleftrightarrow (t_n^j \mu_n^{1/2})$) belong to $\hht$ for $j=0, \dots, N-1$.
Let us show that $B_j \perp {\rm clos}\,\mathcal{D}_{z^N}$. Let
$f\in \mathcal{D}_{z^N}$, $f \longleftrightarrow (c_n)$. Then $z^k f \longleftrightarrow (t_n^k c_n)$, $1\le k\le N$.
Assume that $0\notin T$ (the general case follows trivially by a shift).
Then, for $j=0, \dots, N-1$,
$$
(f, B_j) = \sum_n c_n t_n^j \mu_n^{1/2} = \bigg(\sum_n \frac{c_n t_n^{j+1}\mu_n^{1/2}}{t_n-z}\bigg) \bigg|_{z=0} =
-\frac{z^{j+1}f(z)}{A(z)}\bigg|_{z=0} = 0.
$$
The functions $B_j$ are linearly independent and so the codimension of ${\rm clos}\,\mathcal{D}_{z^N}$
is at least $N$. To show that its codimension is at most $N$, assume that
$g\in \hht$, $g \longleftrightarrow (d_n)$ and $g\perp \mathcal{D}_{z^N}$.
We already know that $B_j \perp {\rm clos}\,\mathcal{D}_{z^N}$, $j=0, \dots, N-1$. Subtracting from $g$
a linear combination of $B_j$ we can assume that $d_1 = \dots =d_N=0$.
Next, for any $m>N$, one has
$$
g\perp \frac{A(z)}{(z-t_m)\prod_{k=1}^N (z-t_k)},
$$
whence
$$
d_m \mu_m^{-1/2} \prod_{k=1}^N (t_m-t_k)^{-1} +\sum_{k=1}^N
d_k \mu_k^{-1/2} (t_k-t_m)^{-1} \prod_{1\le l\le N, l\ne k} (t_k-t_l)^{-1} = 0.
$$
We conclude that $d_m =0$ for $m>N$, whence $g=0$. Thus,
$(\mathcal{D}_{z^N})^\perp = {\rm Span}\, \{B_0, \dots, B_{N-1}\}$.
\bigskip
\\
{\bf (ii) $\Longrightarrow$ (i).} All we need is to show
that ${\rm clos}\,\mathcal{D}_{z^N}$ is nearly invariant. Let $f\in {\rm clos}\,\mathcal{D}_{z^N}$ and $f(\lambda) =0$.
Choose $f_n, g \in \mathcal{D}_{z^N}$ such that $f_n\to f$ and $g(\lambda)=1$. Then
$$
\frac{f_n - f_n(\lambda) g}{z-\lambda} \to \frac{f}{z-\lambda}.
$$
We use the fact that $f_n(\lambda) \to f(\lambda)$ and that the map $h \mapsto \frac{h}{z-\lambda}$ is a bounded operator from the subspace
$\{h\in\hht: \ h(\lambda) =0\}$ to $\hht$.
\medskip
While the proof essentially goes by (iii) $\Longrightarrow$ (ii) $\Longrightarrow$ (i) $\Longrightarrow$ (iii), we also
discuss the implication (iii) $\Longrightarrow$ (i) since its conclusion will be needed later on.
\bigskip
\\
{\bf (iii) $\Longrightarrow$ (i).} Define $\hh_0 = \{B_0, \dots, B_{N-1}\}^\perp$. We need to show that
it is nearly invariant, i.e., if $f\perp B_0, \dots, B_{N-1} $ and $f(\lambda) = 0$, then $\frac{f(z)}{z-\lambda} \perp B_0, \dots, B_{N-1}$.
We have, for $\lambda\notin T$,
$$
\begin{aligned}
\Big(\frac{f}{z-\lambda}, B_j \Big) = \sum_n \frac{c_n t_n^j \mu_n^{1/2}}{t_n-\lambda} & =
\sum_n \frac{c_n (t_n^j -\lambda^j) \mu_n^{1/2}}{t_n-\lambda} \\
& =\sum_{l=0}^{j-1} \lambda^{j-1-l} \sum_n c_n
t_n^l \mu_n^{1/2} =0.
\end{aligned}
$$
The case when $\lambda\in T$ is analogous.
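As a purely illustrative aside (not part of the proof), the last display uses the telescoping identity $(t^j-\lambda^j)/(t-\lambda)=\sum_{l=0}^{j-1}\lambda^{j-1-l}t^l$, which can be checked numerically on arbitrary complex data:

```python
import random

# Illustrative check of (t^j - lam^j)/(t - lam) = sum_{l=0}^{j-1} lam^(j-1-l) t^l
random.seed(4)
for _ in range(50):
    t = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    lam = complex(random.uniform(-3, 3), random.uniform(-3, 3))
    if abs(t - lam) < 1e-3:
        continue                      # skip near-degenerate pairs
    for j in range(1, 6):
        lhs = (t ** j - lam ** j) / (t - lam)
        rhs = sum(lam ** (j - 1 - l) * t ** l for l in range(j))
        assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```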
\bigskip
\\
{\bf (i) $\Longrightarrow$ (iii).} This implication is slightly more complicated. Assume that $\hh_0$ is of codimension $N$
and let $\hh_0^\perp = {\rm Span}\, \{g_1, \dots, g_{N}\}$. Let $g_j \longleftrightarrow (d_n^j)$. Since $\hh_0$
is nearly invariant we see that for any $f\perp g_1, \dots, g_N$ such that $f(\lambda) = 0$
we have $\frac{f}{z-\lambda} \perp g_1, \dots, g_N $. Equivalently, this means that
$$
\sum_n c_n \overline{d_n^j} = 0, \ \ j=1, \dots, N, \quad \sum_n \frac{c_n \mu_n^{1/2}}{t_n - \lambda} =0 \quad \Longrightarrow
\quad
\sum_n \frac{c_n \overline{d_n^j}}{t_n-\lambda} = 0,
$$
that is, for any $k$,
$$
\Big(\frac{d_n^k}{\bar t_n - \bar \lambda} \Big) \in
{\rm Span}\,\bigg\{ \Big(\frac{\mu_n^{1/2}}{\bar t_n - \bar \lambda} \Big), (d_n^j), \ 1\le j\le N \bigg\}.
$$
Thus, there exists a matrix $\Gamma = (\gamma_{kj})_{1\le k,j\le N}$ and $\beta_k$, $1\le k\le N$, such that
for any $n$
$$
\frac{d_n^k}{\bar t_n - \bar \lambda} = \sum_{j=1}^N \gamma_{kj} d_n^j + \beta_k
\frac{\mu_n^{1/2}}{\bar t_n - \bar \lambda}, \qquad 1\le k\le N.
$$
If we denote by $d_n$ the column vector $(d_n^j)_{j=1}^N$ and put $\beta = (\beta_k)_{k=1}^N$,
the equation takes the form
$$
(I- (\bar t_n - \bar \lambda) \Gamma) d_n = \mu_n^{1/2} \beta.
$$
Note that $\Gamma$ and $\beta$ depend on $\lambda$, but do not depend on $n$. From now on we assume that
$\lambda =0$ and $0 \notin T$. Then the equation takes the form
\begin{equation}
\label{mmd}
(I- \bar t_n \Gamma) d_n = \mu_n^{1/2} \beta.
\end{equation}
The solutions of the equation $(I-z\Gamma) u = v$ in $\co^N$ are rational functions of $z$ whose poles are the zeros of
${\rm det}\, (I-z\Gamma)$ or, equivalently, the reciprocals of the nonzero eigenvalues of $\Gamma$.
Since we know that equation \eqref{mmd} has a solution, we conclude that for the parameter value
$z= \bar t_n $ and
the right-hand side $\mu_n^{1/2} \beta$ either there are no singularities or they cancel. However,
we cannot a priori exclude that $\bar t_n^{-1}$ is an eigenvalue of $\Gamma$.
For such $n$ the solution is therefore not unique.
Denote by $\nn$ the (obviously finite) set of indices $n$ such
that $\bar t_n^{-1}$ is an eigenvalue of $\Gamma$.
To summarize, the vectors $d_n = (d_n^j)_{j=1}^N$
satisfying \eqref{mmd} will be necessarily of the form
$$
d_n^j = \mu_n^{1/2} R_j (\bar t_n) +u_n^j,
$$
where $R_j$ is a rational function independent of $n$, $R_j = P_j/Q_j$, where $P_j$, $Q_j$ are coprime polynomials such that
${\rm deg}\, P_j \le N-1$, ${\rm deg}\, Q_j \le N$ and $(u_n^j)_{n\ge 1}$ is a finite vector with possible nonzero
components only at $n\in \nn$. We now analyze these solutions in more detail.
\medskip
\\
{\it Step 1.} At this step we assume that $j$ is fixed. We claim that $Q_j$ is a constant.
Assume that this is not the case and so $R_j$ has at least one pole $\bar \gamma_1$.
Note that $\gamma_1\notin T$. Then we have
\begin{equation}
\label{glo}
d_n^j = \mu_n^{1/2}\bigg(p_j (\bar t_n) +\sum_{l=1}^L \sum_{k=1}^{m_l} \frac{c_{lk}}
{(\bar t_n -\bar \gamma_l)^{k}}\bigg)
+u_n^j,
\end{equation}
where $p_j$ is a polynomial of degree at most $N-1$, $\gamma_l$ are distinct numbers in $\co\setminus T$ and
$L$ is at least 1. We will obtain a contradiction with the fact that $\hh_0$ is nearly invariant by producing
a function $f\in \hh_0$ such that $f(\gamma_1) =0$, but $\frac{f(z)}{z-\gamma_1} \notin \hh_0$.
Note that for $\gamma\notin T$ and $f\in \hht$, $f\longleftrightarrow (c_n)$,
one has
$$
\bigg( (c_n), \Big(\frac{\mu_n^{1/2}}{(\bar t_n -\bar \gamma)^k}\Big)\bigg)_{\ell^2} =
\sum_n \frac{\mu_n^{1/2} c_n }{(t_n - \gamma)^k} = -\frac{1}{(k-1)!}\Big(\frac{f}{A}(z)\Big)^{(k-1)}\Big|_{z=\gamma}.
$$
Also, if $m=\max_j {\rm deg}\, p_j $, then we can already conclude that $\sum_n |t_n|^{2m} \mu_n <\infty$
(other terms in \eqref{glo} belong to $\ell^2$), whence
$B_0, \dots, B_m\in \hht$. Since the subspace $\hh_0$ is nearly invariant (and therefore, considering $\frac{z-\tilde \gamma}{z-\gamma}f$,
we can move any zero $\gamma$ of $f$ to any other point $\tilde\gamma$) and infinite dimensional,
we can choose $f \in \hh_0$, $f\longleftrightarrow (c_n)$, such that:
\begin{itemize}
\item $f \perp B_0, \dots, B_m$, whence $f\perp g$, where $g \longleftrightarrow (p_j(\bar t_n))$;
\item $c_n = 0$, $n\in \nn$, whence $f\perp g$, where $g \longleftrightarrow (u_n^j)$;
\item $f^{(k)}(\gamma_l) = 0$, $1\le l \le L$, $0\le k\le m_l-1$;
\item $f^{(m_1)}(\gamma_1) \ne 0$.
\end{itemize}
From the above conditions we see that $f\perp g_j$, $g_j \longleftrightarrow (d_n^j)$, and $f(\gamma_1) = 0$.
Recall that by implication (iii)$\Longrightarrow$(i) we have $\frac{f(z)}{z-\gamma_1} \perp B_0, \dots, B_m$
as well. Thus, the function $\frac{f(z)}{z-\gamma_1}$ is orthogonal to all terms in the representation \eqref{glo}
of $d_n^j$ except $\big(\frac{\mu_n^{1/2}}{(\bar t_n -\bar \gamma_1)^{m_1}}\big)$. Hence,
$\big(\frac{f}{z-\gamma_1}, g_j\big) \ne 0$, a contradiction to the near invariance of $\hh_0$.
\medskip
\\
{\it Step 2.} By Step 1 we know that, {\it for any $j$}, $d_n^j = \mu_n^{1/2} p_j (\bar t_n) + u_n^j$, where $p_j$ is some polynomial
of degree at most $N-1$ and $(u_n^j)$ is a finite vector. We claim that
$u_n^j = 0$ for all $j$ and $n\in\nn$. Assume that there exist $j$ and $n_0$ such that $u_{n_0}^j \ne 0$.
As before, we can choose $f\in \hh_0$ such that $f \perp B_0, \dots, B_m$,
$m=\max_j {\rm deg}\, p_j$, and such that $c_n = 0$, $n\in \nn$, which is equivalent to $f(t_n) = 0$, $n\in \nn$.
We can also choose $f$ so that $f'(t_{n_0}) \ne 0$. Hence, $\frac{f}{z-t_{n_0}}$ is not orthogonal to
$(u_n^j)$, whence $\big(\frac{f}{z-{t_{n_0}}}, g_j\big) \ne 0$, again a contradiction.
\medskip
\\
{\it Step 3.} We conclude that $(d_n^j) = (\mu_n^{1/2} p_j (\bar t_n))$ for any $j$
where $m_j = {\rm deg}\, p_j \le N- 1$, $1\le j\le N$. If $m_j<N-1$ for all $j$, we conclude that the $g_j$
are linearly dependent, a contradiction. Thus, there exists $j$ with $m_j = N-1$, whence
$\sum_n |t_n|^{2N-2} \mu_n <\infty$. Moreover, we see that
${\rm Span}\, \{g_1, \dots, g_N\} = {\rm Span}\, \{ B_0, \dots, B_{N-1}\}$.
Theorem \ref{dom} is proved.
\end{proof}
It is easy to see that the finite codimension subspaces constructed in Theorem \ref{dom}
are isomorphic to Cauchy--de Branges spaces. However, they are never CdB spaces themselves with the norm inherited from $\hht$
unless $\hht$ is a rotation of a de Branges space.
We analyze the case of subspaces of codimension 1.
\begin{proposition}
\label{nobas}
Let $\hht$ be a small de Branges space and let $\hh_0 = {\rm clos}\,\mathcal{D}_{z} = \{B\}^\perp$, where $B=B_0$.
\medskip
1. Fix $t_0\in T$ and put $T_0 = T\setminus \{t_0\}$, $A_0(z) = \frac{A(z)}{z-t_0}$ and $\nu = \sum_{n\ne 0} |t_n|^2 \mu_n \delta_{t_n}$.
Then $\hh_0$ is isomorphic to $\hh(T_0, A_0, \nu)$.
\medskip
2. $\hh_0$ is itself a CdB space, that is, has an orthogonal basis of reproducing kernels, if and only if $T$ lies on a straight line
\textup(i.e., $\hht$ is a rotation of a de Branges space\textup).
\end{proposition}
\begin{proof}
1. Let $T= \{t_n\}_{n\ge 0}$ and let $f\in \hh_0$, $f \longleftrightarrow (c_n)_{n\ge 0}$. Since $f\perp B$, one has $\mu_0^{1/2} c_0 = - \sum_{n\ge 1} c_n \mu_n^{1/2}$. Then
$$
f(z) = A(z) \sum_{n\ge 1} c_n \mu_n^{1/2} \bigg(\frac{1}{z-t_n} - \frac{1}{z-t_0} \bigg) =
\frac{A(z)}{z-t_0} \sum_{n\ge 1} \frac{c_n(t_n-t_0)\mu_n^{1/2}}{z-t_n}.
$$
Thus, $f\in\hh(T_0, A_0, \nu) $ and $\|f\|^2_{\hh(T_0, A_0, \nu)} \asymp \sum_{n\ge 1} |c_n|^2\asymp
\sum_{n\ge 0} |c_n|^2 =\|f\|^2_{\hh(T, A, \mu)}$.
Conversely, let $g\in \hh(T_0, A_0, \nu)$. Then, for some $(c_n)_{n\ge 1}\in\ell^2$ one has
$$
g(z) =A_0(z) \sum_{n\ge 1} \frac{c_n(t_n-t_0)\mu_n^{1/2}}{z-t_n}.
$$
It remains to put $c_0 = - \mu_0^{-1/2} \sum_{n\ge 1} c_n\mu_n^{1/2}$ and to reverse the above computations.
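The computation in part 1 reduces to the elementary two-term identity $\frac{1}{z-t_n}-\frac{1}{z-t_0}=\frac{t_n-t_0}{(z-t_0)(z-t_n)}$. As a purely illustrative aside (not part of the proof), it can be verified numerically on random complex data:

```python
import random

# Illustrative check of 1/(z - t_n) - 1/(z - t_0) = (t_n - t_0)/((z - t_0)(z - t_n)),
# which converts A(z) sum_n c_n mu_n^{1/2} (1/(z-t_n) - 1/(z-t_0))
# into A_0(z) sum_n c_n (t_n - t_0) mu_n^{1/2} / (z - t_n).
random.seed(1)
for _ in range(100):
    z, t0, tn = (complex(random.uniform(-5, 5), random.uniform(-5, 5))
                 for _ in range(3))
    if min(abs(z - t0), abs(z - tn)) < 1e-2:
        continue                      # skip near-pole configurations
    lhs = 1 / (z - tn) - 1 / (z - t0)
    rhs = (tn - t0) / ((z - t0) * (z - tn))
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```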
\medskip
2. Denote by $\hat K_w$ the reproducing kernel of $\hh_0$.
If we assume that $\|B\|^2_{\hht} = \sum_n \mu_n =1$, then $\hat K_w = K_w - \overline{B(w)}B $,
where $K_w$ is the reproducing kernel of $\hht$.
Assume that $\hh_0$ has an orthogonal basis of reproducing kernels $\{\hat K_{s_l}\}$.
Then $\hat K_{s_l}(s_m) =0 $, $m\ne l$. For any $m,l$,
the function $\frac{z-s_l}{z-s_m} \hat K_{s_l}$ belongs to $\hh_0$ and is orthogonal to all $\hat K_{s_j}$, $j\ne m$. Therefore,
there exists a constant $d_{l,m}$ such that
$$
\frac{z-s_l}{z-s_m} \hat K_{s_l} = d_{l,m} \hat K_{s_m}.
$$
Since $\hat K_w = K_w - \overline{B(w)}B $, we obtain by comparing the values at $t_n$ (for $t_n \ne s_l, s_m$) that
\begin{equation}
\label{bro}
\frac{t_n-s_l}{t_n -s_m} \bigg( \frac{\overline{A(s_l)}}{\bar s_l-\bar t_n}
- \overline{B(s_l)}\bigg) = d_{l,m}
\bigg( \frac{\overline{A(s_m)}}{\bar s_m-\bar t_n}
- \overline{B(s_m)} \bigg).
\end{equation}
Taking the limit as $n\to \infty$ we conclude that $\overline{B(s_l)} = d_{l,m} \overline{B(s_m)}$.
\medskip
\\
{\it Case 1.} Assume that $B(s_m) =0$ for some $m$. Then $B(s_l) = 0$ for any $l$ and thus $\{s_k\} \subset \ZZ(B)$.
Then, for any $n$ such that $t_n \ne s_l, s_m$,
$$
\overline{A(s_l)} \frac{t_n-s_l}{t_n -s_m} =
\overline{A(s_m)} \frac{\bar t_n - \bar s_l}{\bar t_n - \bar s_m}.
$$
Since $t_n\to \infty$, we conclude that $A(s_l) = A(s_m)$, whence $t_n$ satisfy the equation
$\ima (t_n (\bar s_m -\bar s_l) + \bar s_l s_m) =0$, which is an equation of a line.
\medskip
\\
{\it Case 2.} Now assume that $B(s_m) \ne 0$ for any $m$. Then $d_{l,m} = \overline{B(s_l)}/\overline{B(s_m)}$.
Inserting this into \eqref{bro} we get
$$
\overline{A(s_m)}\, \overline{B(s_l)} \,\frac{ t_n - s_m}{\bar t_n - \bar s_m} -
\overline{A(s_l)} \,\overline{B(s_m)}\, \frac{t_n-s_l}{\bar t_n - \bar s_l}
= \overline{B(s_l)}\, \overline{B(s_m)} (s_m-s_l)
$$
for $t_n \ne s_l, s_m$.
This equation is equivalent to $a|t_n|^2 +b(\bar t_n)^2 +ct_n +d\bar t_n +e= 0$ for some
$a, \dots , e\in \co$, $b= \overline{B(s_l)}\, \overline{B(s_m)} (s_m-s_l) \ne 0$. Since $t_n \to \infty$, we have $|a| = |b|$.
Thus there exists $\theta\in\R$ such that $e^{i\theta}t_n$ satisfy the equation $|z|^2 - z^2 + az+b\bar z+c =0$
for some new coefficients $a,b,c\in \co$. It is easy to show that this equation has infinitely many solutions
tending to infinity only if it is equivalent to an equation $\ima z=const$.
Thus, in each of the two cases all points $t_n$ (except possibly two, if $s_m\in T$ or $s_l\in T$) lie on the same line.
Starting from different values of $s_m, s_l$ we conclude that all $t_n$
are on the same line.
\end{proof}
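As a hedged numerical aside to Case 2 above (not part of the proof): the final reduction rests on the identity $|z|^2-z^2=-2i\,({\rm Im}\,z)\,z$, which forces any large solution of $|z|^2-z^2+az+b\bar z+c=0$ to have bounded imaginary part, since $2|{\rm Im}\,z|\,|z|\le |a|\,|z|+|b|\,|z|+|c|$.

```python
import random

# Illustrative check of |z|^2 - z^2 = -2i * Im(z) * z
random.seed(2)
for _ in range(100):
    z = complex(random.uniform(-10, 10), random.uniform(-10, 10))
    lhs = abs(z) ** 2 - z * z
    rhs = -2j * z.imag * z
    assert abs(lhs - rhs) < 1e-9 * max(1.0, abs(lhs))
```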
\begin{remark}
{\rm Statement 2 of Proposition \ref{nobas} and its proof are
close to Theorem \ref{jam} (\cite[Theorem 1]{bms1}) by Belov, Mengestie and Seip.
It is also worth mentioning that Theorem \ref{jam} can be related via the functional model of Section \ref{appli}
with a result of E.~Ionascu \cite{ion} who showed that the spectra of diagonal normal operators which have a normal
rank one perturbation must lie on a straight line or on a circle. }
\end{remark}
\bigskip
\section{Density of polynomials and structure of nearly invariant subspaces}
\label{stru}
Assume that $\sum_n \mu_n |t_n|^{2j} <\infty$ for any $j$. Then, by Theorem \ref{dom},
the space $\hht$ contains a nearly invariant subspace of any finite codimension and these subspaces are ordered
by inclusion. It turns out that, under the additional condition that the set of all polynomials $\pp$ is dense
in $L^2(\mu)$, all nearly invariant subspaces of $\hht$ are of finite codimension; therefore they are
of the form ${\rm clos}\,\mathcal{D}_{z^j}$, $j\in \N$, and are ordered by inclusion.
De Branges and Cauchy--de Branges spaces with ``small'' measures $\mu$ (i.e., with fast decay of $\mu_n$)
were studied in \cite{loc1, loc2}, where the notion of localization was introduced.
In what follows we will assume that the sequence $T$ is {\it power separated} (see \eqref{powsep}).
Recall that any power separated sequence has finite convergence exponent and so $\hht$ is a space of finite order.
We say that the space $\mathcal{H}(T,A,\mu)$
with a power separated sequence $T$:
\begin{itemize}
\item
has the {\it localization property} if
there exists a sequence of disjoint disks $\{D(t_n,r_n)\}_{t_n\in T}$
with $r_n\to 0$ such that for any nonzero $f\in \mathcal{H}(T,A,\mu)$ the set
$\mathcal{Z}(f)\setminus\cup_n D(t_n,r_n)$ is finite and each disk
$D(t_n,r_n)$ contains at most one point of $\mathcal{Z}(f)$ (counting multiplicities) for any
$n$ except, possibly, a finite number;
\medskip
\item
has the {\it strong localization property} if there exists
a sequence of disjoint disks $\{D(t_n,r_n)\}_{t_n\in T}$
with $r_n\to 0$ such that for any nonzero $f\in \mathcal{H}(T,A,\mu)$
the set $\mathcal{Z}(f)\setminus\cup_n D(t_n,r_n)$ is finite
and each disk $D(t_n,r_n)$ contains
exactly one point of $\mathcal{Z}(f)$ for any $n$ except, possibly,
a finite number.
\end{itemize}
As is shown in \cite{loc1}, one can take $r_n = \delta (|t_n|+1)^{-M}$ for any $M>N$, where $N$ is the constant from the
separation condition \eqref{powsep}.
It was proved in \cite{loc1} in the de Brangean setting and in \cite{loc2} for general CdB-spaces
that the strong localization property is equivalent to the density of polynomials in
$L^2(\mu)$, $\mu= \sum_n \mu_n \delta_{t_n}$ (the elements of $L^2(\mu)$ can be identified with sequences
$(d_n)$ such that $\|(d_n)\|^2_{L^2(\mu)} = \sum_n |d_n|^2\mu_n <\infty$).
Therefore, we can add to Theorem \ref{stro1}
one more equivalent condition.
\begin{theorem}
\label{stro2}
Let $\hht$ be a CdB-space with a power separated sequence $T$. Then the following assertions are equivalent:
\medskip
1. $\hht$ contains a nearly invariant subspace of any finite codimension and any nontrivial
nearly invariant subspace is of this form.
\medskip
2. The set of all polynomials $\pp$ is contained in $L^2(\mu)$ and is dense there.
\medskip
3. The space $\mathcal{H}(T,A,\mu)$ has the strong localization property.
\end{theorem}
\begin{proof}
Equivalence of statements 2 and 3 was shown in \cite[Theorem 1.2]{loc2}.
\medskip
\\
{\bf $1 \Longrightarrow 2$.} By Theorem \ref{dom}, $\sum_n \mu_n |t_n|^{2j} <\infty$ for any $j$, and so
$\pp \subset L^2(\mu)$. Assume that the polynomials are not dense in $L^2(\mu)$. Then there exists
a nonzero sequence $(d_n)\in L^2(\mu)$ such that $\sum_n d_n t_n^j \mu_n = 0$. If we put
$c_n = d_n \mu_n^{1/2}$, then $(c_n) \in \ell^2$ and $\sum_n c_n t_n^j \mu^{1/2}_n = 0$.
Therefore, the function $f(z) = A(z) \sum_n \frac{c_n \mu_n^{1/2}}{z-t_n}$ is in $\hht$
and is orthogonal to all functions $B_j$ from \eqref{bj}. Put $\hh_0 = \{B_j: j\ge 0\}^\perp$.
Then $\hh_0$ is nontrivial, of infinite codimension and, by the proof of the implication (iii)$\Longrightarrow$(i) of Theorem \ref{dom},
$\hh_0$ is nearly invariant, a contradiction.
\medskip
\\
{\bf $3 \Longrightarrow 1$.} Assume that $\hht$ has the strong localization property
and let $\hh_0$ be its nearly invariant subspace. Let $f\in\hh_0$, $f\ne 0$. Fix $M>N$,
where $N$ is the constant from the separation condition \eqref{powsep}.
Assume that $T$ is enumerated by positive integers, $T = \{t_n\}_{n\ge 1}$.
Then there exists $\nn$ such that $\N \setminus \nn$ is finite and for any
$n\in \nn$ the disk $D(t_n, (|t_n|+1)^{-M})$ contains exactly one zero of $f$, say $z_n$. Put
$$
g(z) = f(z)\prod_{n\in \nn} \frac{z-t_n}{z-z_n}, \qquad g_k(z) = f(z)\prod_{n\in \nn, n\le k} \frac{z-t_n}{z-z_n}.
$$
It is easy to show that if $M$ is sufficiently large, e.g., $\sum_n |t_n|^{N-M} <\infty$, then $g\in\hht$ (see \cite[Section 3]{loc2} for details)
and moreover $g_k\to g$ in $\hht$. Since $g_k\in \hh_0$, we conclude that $g\in \hh_0$. Also, we can write
$$
g(z) = \frac{A(z)}{P(z)}h(z),
$$
where $P(z) = \prod_{n\in \N \setminus \nn}(z-t_n)$ is a polynomial and $h$ is some entire function.
Any function in $\hht$ is majorized by $A$ in the sense that $\frac{f(z)}{A(z)} = o(1)$
as $z\to\infty$ outside a set of zero area density. Thus, $h$ admits a polynomial estimate outside
a set of zero area density, whence $h$ is a polynomial by Theorem \ref{dens}.
We conclude that $\hh_0$ contains a function of the form $\frac{A}{P}$ where $P$
is a polynomial of some degree $m$ with zeros in $T$. Since $\hh_0$ is nearly invariant,
we have $\frac{A(z)}{(z-t_1)\dots (z-t_m)} \in \hh_0$ for any choice of distinct $t_1, \dots, t_m$.
It remains to note that for $F\in\hht$
$$
F\perp \Big\{ \frac{A(z)}{(z-t_1)\dots (z-t_m)}\Big\} \quad \Longleftrightarrow \quad F\in
{\rm Span}\, \{ B_0, \dots, B_{m-2}\}.
$$
Then ${\rm clos}\,\mathcal{D}_{z^{m-1}} \subset \hh_0$, whence the codimension of $\hh_0$ is finite.
\end{proof}
Further results related, in particular, to a more subtle property of localization (which is not strong)
as well as numerous examples can be found in \cite{loc1, loc2}.
\section{Introduction}
\subsection{Motivation}
Let $(x_1,Y_1), \ldots, (x_n,Y_n)$
be observations from the nonparametric regression model
\begin{equation}
Y_i = f(x_i) + \sigma\,\epsilon_i
\end{equation}
where $\epsilon_i \sim N(0,1)$,
$x_i \in (0,1)$,
and $f$ is assumed to lie in some infinite-dimensional class of functions $\cH$.
We are interested in constructing confidence bands $(L, U)$ for $f$.
Ideally these bands should satisfy
\begin{equation}\label{eq::true-cov}
\P_f{L \le f \le U} = 1-\alpha \ \ \ \ \ \
\mbox{for all}\ f\in \cH
\end{equation}
where
${L \le f \le U}$ means that
$L(x) \le f(x) \le U(x)$
for all $x\in\cX$,
where $\cX$ is some subset of $(0,1)$
such as
$\cX=\{x\}, \cX = \{x_1, \ldots, x_n\}$ or
$\cX = (0,1)$.
Throughout this paper, we take
$\cX = \{x_1, \ldots, x_n\}$ but this particular choice
is not crucial in what follows.
Attaining (\ref{eq::true-cov}) is difficult and hence
it is common to settle for
pointwise asymptotic coverage:
\begin{equation}\label{eq::pointwise}
\liminf_{n\to\infty}\P_f{L \le f \le U} \geq 1-\alpha \ \ \ \ \ \
\mbox{for all}\ f\in \cH.
\end{equation}
``Pointwise'' refers to the fact that the asymptotic limit
is taken for each fixed $f$ rather than uniformly over $f\in\cH$.
Papers on pointwise asymptotic methods include
Claeskens and Van Keilegom (2003),
Eubank and Speckman (1993),
H\"{a}rdle and Marron (1991),
Hall and Titterington (1988),
H\"{a}rdle and Bowman (1988),
Neumann and Polzehl (1998),
and Xia (1998).
Achieving even pointwise asymptotic coverage
is nontrivial due to the presence of bias.
If $\hat{f}(x)$ is an estimator with mean
$\overline{f}(x)$ and standard deviation
$s(x)$ then
$$
\frac{\hat{f}(x) - f(x)}{s(x)} =
\frac{\hat{f}(x) - \overline{f}(x)}{s(x)} +
\frac{{\rm bias}(x)}{\sqrt{{\rm variance}(x)}}.
$$
The first term typically satisfies a central
limit theorem but the second term
does not vanish
even asymptotically if the bias and variance are
balanced.
For discussions on this point, see the papers
referenced above as well as
Ruppert, Wand, and Carroll (2003) and Sun and Loader (1994).
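As a purely illustrative numerical sketch (our own construction, not taken from the papers cited above): for a simple moving-average estimator of $f(x_0)$ at the bias--variance balancing bandwidth $h \asymp n^{-1/5}$, the ratio of bias to standard deviation stabilizes near a nonzero constant rather than vanishing. The regression function, evaluation point, and bandwidth constant below are arbitrary choices:

```python
import math

# Moving-average estimator of f(x0) over the window |x - x0| <= h:
# its exact bias and sd can be computed without simulation.
f = lambda x: math.sin(2 * math.pi * x)   # arbitrary smooth regression function
x0, sigma = 0.3, 1.0

def bias_and_sd(n, h):
    xs = [(i + 0.5) / n for i in range(n)]
    window = [x for x in xs if abs(x - x0) <= h]
    bias = sum(f(x) for x in window) / len(window) - f(x0)  # exact bias of the average
    sd = sigma / math.sqrt(len(window))                     # sd of the average
    return bias, sd

ratios = []
for n in [10**3, 10**4, 10**5]:
    b, s = bias_and_sd(n, 0.5 * n ** (-1 / 5))              # h ~ n^(-1/5)
    ratios.append(abs(b) / s)
# the bias/sd ratio stays bounded away from 0 as n grows
```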
Pointwise asymptotic bands are not uniform, that is, they do not
control
\begin{equation}
\inf_{f\in \cH}\P_f{L \le f \le U}.
\end{equation}
The sample size
$n(f)$ required
for the true coverage to approximate the nominal coverage
depends on the unknown function $f$.
The aim of this paper is to attain uniform coverage over $\cH$.
We say that
$B=(L,U)$ has \emph{uniform coverage} if
\begin{equation}\label{eq::first}
\inf_{f\in\cH}\P_f{L \le f \le U} \ge 1-\alpha .
\end{equation}
Starting in Section \ref{sec::projections},
we will insist on coverage over
$\cH = \{{\rm all\ functions}\}$.
The bound in (\ref{eq::first}) can be achieved trivially
using Bonferroni bands.
Set $\ell_i= Y_i - c_n \sigma$ and
$u_i= Y_i + c_n \sigma$,
where $c_n = \Phi^{-1}(1-\alpha/2n)$ and
$\Phi$ is the standard Normal {\sc cdf}.
Yet this band is unsatisfactory for several reasons:
\begin{enumerate}
\item The width of the band grows with sample size.
\item The band is centered on a poor estimator of the unknown function.
\item The width of the band is independent of the data and hence cannot adapt
to the smoothness of the unknown function.
\end{enumerate}
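For concreteness, the Bonferroni band above is easy to compute; this is a minimal sketch assuming known $\sigma$ and using the critical value $c_n = \Phi^{-1}(1-\alpha/2n)$ from the text. It exhibits problem (1): the half-width grows with the sample size, roughly like $\sqrt{2\log n}$.

```python
from statistics import NormalDist

# Bonferroni band: Y_i +/- c_n * sigma with c_n = Phi^{-1}(1 - alpha/(2n)).
# By the union bound its simultaneous coverage is at least 1 - alpha.
alpha, sigma = 0.05, 1.0
half_width = {}
for n in [10, 100, 1000, 10000]:
    c_n = NormalDist().inv_cdf(1 - alpha / (2 * n))
    half_width[n] = c_n * sigma     # grows without bound as n increases
```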
Problems (1) and (2) are easily remedied by using standard smoothing methods.
But the results of Low (1997) suggest that (3) is an inevitable consequence of uniform coverage.
The smoother the functions in $\cH$, the smaller the width necessary to achieve uniform coverage.
Suppose that $\cF \subset \cH$ contains the ``smooth'' functions in $\cH$ and that $\cH - \cF$ is nonempty.
Uniform coverage over $\cH$ requires that
the width of fixed-width bands be driven by the ``rough'' functions in $\cH - \cF$; the width will thus be large even if $f\in\cF$.
Ideally, our procedure would adjust automatically to produce narrower bands when the function
is smooth ($f\in\cF$) and wider bands when the function is rough ($f\not\in\cF$),
but to do that, the width must be determined from the data.
Low showed that for density estimation at a single point,
fixed-width confidence intervals perform as well as random length intervals;
that is, the data do not help reduce the width of the bands for smoother functions.
In Section \ref{sec::failure},
we extend Low's result to nonparametric regression
and show that the phenomenon is quite general.
Without restrictive assumptions, confidence bands cannot adapt.
These results mean that the width of uniform confidence bands is determined by the
greatest roughness we are willing to assume.
Because the typical assumptions about $\cH$ in the nonparametric regression problem are
loosely held and difficult to check,
the result is that the confidence band widths are essentially arbitrary.
This is not satisfactory in practice.
The contrast with $L^2$ confidence balls is noteworthy.
$L^2$ confidence sets
have been studied by
Li (1999), Juditsky and Lambert-Lacroix (2002),
Beran and D\"{u}mbgen (1998), Genovese and Wasserman (2004),
Baraud (2004), Hoffman and Lepski (2003),
Cai and Low (2004), and Robins and van der Vaart (2004).
Let
\begin{equation}
B = \Biggl\{{ f}\in \mathbb{R}^n:\ \frac{1}{n}\sum_{i=1}^n({ f}_i-\hat{{ f}}_i)^2 \le R_n^2\Biggr\}
\end{equation}
for some $\hat{f}$
and suppose that
\begin{equation}
\inf_{{ f}\in\mathbb{R}^n}\P_f{{ f}\in B}\ge 1-\alpha .
\end{equation}
Then
\begin{equation}\label{eq::smaller}
\inf_{{ f}\in\mathbb{R}^n}\E_f(R_n) \ge \frac{C_1}{n^{1/4}},
\ \ \ \ \mbox{and}\ \ \ \
\sup_{{ f}\in\mathbb{R}^n}\E_f(R_n) \ge C_2
\end{equation}
where $C_1$ and $C_2$ are positive constants.
Moreover, there exist confidence sets that
achieve the faster $n^{-1/4}$ rate at some points
in $\mathbb{R}^n$.
Because fixed-radius confidence sets necessarily have radius of size $O(1)$,
the supremum in (\ref{eq::smaller}) implies that such confidence sets
must have random radii.
We can construct random-radius confidence balls that improve on fixed-radius confidence sets,
for example, by obtaining a smaller radius for subsets of smoother functions $f$.
$L^2$ confidence balls can therefore adapt to the unknown smoothness of $f$.
Unfortunately, confidence balls can be difficult to work with in high dimensions (large $n$)
and tend to constrain many features of interest rather poorly,
which is why confidence bands are often desired.
It is also interesting to compare the adaptivity results for estimation and inference.
Estimators exist (e.g., Donoho et al. 1995) that can adapt to unknown smoothness, achieving
near optimal rates of convergence over a broad scale of spaces.
But since confidence bands cannot adapt, the minimum width bands that achieve uniform coverage
over the same scale of spaces have width $O(1)$, overwhelming the differences among reasonable estimators.
We are left knowing that we are close to the true function but being unable to demonstrate it
inferentially.
The message we take from the nonadaptivity results
in Low (1997) and Section \ref{sec::failure} of this paper
is that the problem of constructing confidence
bands for $f$ over nonparametric classes is simply too difficult
under the usual definition of coverage.
Instead, we introduce a slightly weaker notion -- surrogate coverage --
under which it is possible to obtain adaptive bands while allowing sharp
inferences about the main features of $f$.
\subsection{Surrogates}
Figure \ref{fig::bandfig} shows two situations where a band fails to
capture the true function.
The top plot shows a conservative failure: the only place where $f$ is not
contained in the band is when the bands are smoother than the truth.
The bottom plot
shows a liberal failure: the only place where $f$ is not
contained in the band is when the bands are less smooth than the truth.
The usual notion of coverage treats these failures equally.
Yet, in some sense, the second error is more serious than the first
since the bands overstate the complexity.
\begin{figure}
\hspace{1cm}
\includegraphics[width=5in]{bandfig.ps}
\caption{The top plot shows a conservative failure: the only place where $f$ is not
contained in the band is when the bands are smoother than the truth.
The bottom plot
shows a liberal failure: the only place where $f$ is not
contained in the band is when the bands are less smooth than the truth.
The usual notion of coverage treats these failures equally.}
\label{fig::bandfig}
\end{figure}
We are thus led to a different approach
that treats conservative errors and liberal errors differently.
The basic idea
is to find a function $f^\star$ that is simpler than $f$
as in Figure \ref{fig::surrogate}.
We then require that
\begin{equation}
\P_f{ L \le f \le U\ {\rm or}\ L \le f^\star \le U} \ge 1-\alpha,\ \ \
{\rm for \ all\ functions\ }f.
\end{equation}
More generally, we will define a finite set of
surrogates $F^\star \equiv F^*(f) = \{f, f_1^*,\ldots, f_m^*\}$
and require that a surrogate confidence band $(L,U)$ satisfy
\begin{equation}
\inf_f \P_f{ L \le g \le U\ \ {\rm for\ some\ }g\in F^\star} \ge 1-\alpha.
\end{equation}
We will also consider bands that are adaptive in the following sense:
if $f$ lies in some subspace $\cF$,
then with high probability $\norm{U-L}_\infty \le w(\cF)$,
where $w(\cF)$ is the best width of a uniformly valid confidence band (under the usual definition of coverage)
based on the a priori knowledge that $f\in\cF$.
Among possible surrogates,
a surrogate will be optimal if it admits a valid, adaptive procedure
and the set $\Set{f\in\cF\st F^*(f) = \{f\}}$ is as large as possible.
\begin{figure}
\hspace{1cm}
\includegraphics[width=5in]{surrogate.ps}
\caption{The top plot shows a complicated function $f$.
The bottom shows a surrogate $f^\star$ which is simpler than $f$
but retains the main, estimable features of $f$.
Adaptation is possible if we cover $f^\star$ instead of $f$.}
\label{fig::surrogate}
\end{figure}
\subsection{Summary of Results}
In Section \ref{sec::failure},
we show that Low's result on density estimation
holds in regression as well.
Fixed width bands do as well as random width bands,
thus ruling out adaptivity.
We show this when $\cH$ is the set of all functions
and when $\cH$ is a ball in a Lipschitz,
Sobolev, or Besov space.
Section \ref{sec::projections} gives our main results.
Theorem \ref{thm::main}
establishes lower bounds
on the width of any valid surrogate confidence band.
Let $\cF$ be a subspace of dimension $d$ in $\R^n$.
The functions that prevent adaptation are those that are
close to $\cF$ in $L^2$ but far in $L^\infty$.
Loosely speaking, such functions are close to $\cF$ except for isolated, spiky features.
If $||f- \Pi f||_2 < \epsilon_2$ and $||f- \Pi f||_\infty > \epsilon_\infty$,
for tuning constants $\epsilon_2, \epsilon_\infty$,
define the surrogate $f^\star$ to be the projection of $f$ onto $\cF$, $\Pi f$.
Otherwise, define $f^\star = f$.
We show that if
$\P_f{\norm{U - L}_\infty < w} \ge 1-\gamma$ for all $f\in \cF$,
then
\begin{equation}\label{eq::summaryw}
w \ge \max\left( w_\cF(\alpha,\gamma,\sigma), v(\epsilon_2,\epsilon_\infty, n, d, \alpha, \gamma, \sigma) \right),
\end{equation}
where $w_\cF$ is the minimum width for a uniform confidence band knowing a priori that $f\in\cF$
and
$v(\epsilon_2,\epsilon_\infty, n, d, \alpha, \gamma)$
is described later.
Corollary \ref{cor::optimality} shows that for proper choice of $\epsilon_2$ and $\epsilon_\infty$,
the $v$ term in the previous equation can be made smaller than $w_\cF$.
Figure \ref{fig::surrogate-pic} represents the functions involved;
the gray shaded region consists of those functions that are replaced by surrogates in the coverage statement,
denoted later by $\cS(\epsilon_2,\epsilon_\infty)$.
These are the functions that are both hard to distinguish from $\cF$ (because they are close to it)
and hard to cover (because they are ``spiky'').
The optimal choice of $\epsilon_2$ and $\epsilon_\infty$ minimizes the volume of this set
while making the right hand side in inequality (\ref{eq::summaryw}) equal to $w_\cF$.
Put another way,
the richest model that permits adaptive confidence bands under the usual notion of coverage
is $\cF = \mathbb{R}^n - \cS(\epsilon_2,\epsilon_\infty)$.
Theorem \ref{thm::achieve-multi}
gives a procedure that comes within a factor of 2
of attaining the lower bound for finite-samples.
The procedure conducts goodness of fit tests for subspaces
and constructs bands centered on the estimator of the
lowest dimensional nonrejected subspace.
Such a procedure actually reflects common practice:
one fits a model, checks the fit, and,
if the model does not fit, fits a more complex model.
In this sense, we view our results as providing a rigorous basis
for common practice.
It is known that pretesting followed by inference
does not lead to valid inferences for $f$ (Leeb and P\"otscher, 2005).
But if we can accept that sometimes we cover a surrogate $f^\star$ rather than $f$,
then validity is restored.
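The pretest-then-band idea can be sketched numerically. The following is a hypothetical illustration (our own simplification, not the procedure or critical values of Theorem \ref{thm::achieve-multi}): nested trigonometric subspaces are tested with a crude chi-square goodness-of-fit cutoff, the smallest non-rejected dimension is selected, and a Bonferroni-style band is centered on the corresponding projection. The true $f$, noise level, basis, and cutoff below are all arbitrary assumptions.

```python
import math
import random
from statistics import NormalDist

random.seed(3)
n = 200
xs = [i / n for i in range(n)]
f_true = [math.sin(2 * math.pi * x) for x in xs]   # assumed true f for the demo
ys = [f + random.gauss(0, 1) for f in f_true]
sigma, alpha = 1.0, 0.05

def basis(j, x):
    # real Fourier basis, exactly orthonormal on the uniform grid for small j
    if j == 0:
        return 1.0
    k = (j + 1) // 2
    trig = math.sin if j % 2 == 1 else math.cos
    return math.sqrt(2) * trig(2 * math.pi * k * x)

def project(ys, xs, d):
    # projection onto span{basis(0), ..., basis(d-1)} w.r.t. (1/n) sum u_i v_i
    coef = [sum(basis(j, x) * y for x, y in zip(xs, ys)) / len(xs) for j in range(d)]
    return [sum(coef[j] * basis(j, x) for j in range(d)) for x in xs]

chosen, fit = None, None
for d in range(1, 9):                               # nested subspaces F_1 c F_2 c ...
    fit = project(ys, xs, d)
    rss = sum((y - m) ** 2 for y, m in zip(ys, fit))
    if rss <= (n - d) + 3 * math.sqrt(2 * (n - d)):  # crude chi-square cutoff
        chosen = d
        break
if chosen is None:                                   # fall back to the largest model
    chosen, fit = 8, project(ys, xs, 8)

# Bonferroni-style band centered on the selected projection estimate
c = NormalDist().inv_cdf(1 - alpha / (2 * n))
se = [sigma * math.sqrt(sum(basis(j, x) ** 2 for j in range(chosen)) / n) for x in xs]
L = [m - c * s for m, s in zip(fit, se)]
U = [m + c * s for m, s in zip(fit, se)]
```

The selected dimension, and hence the band width, is determined by the data, which is exactly the adaptivity that surrogate coverage is designed to license.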
These results are proved in Section \ref{sec::proofs}.
\subsection{Related Work}
The idea of estimating the detectable part of $f$ is present, at least
implicitly, in other approaches. Davies and Kovac (2001) separate the
data into a simple piece plus a noise piece which is
similar in spirit to our approach.
Another related idea is
scale-space inference due to Chaudhuri and Marron (2000) who focus
on inference for all smoothed versions of $f$ rather than $f$ itself.
Also related is the idea of oversmoothing
as described in Terrell (1990) and
Terrell and Scott (1985).
Terrell argues that
``By using the most smoothing that is compatible
with the scale of the problem, we tend to eliminate
accidental features.''
The idea of one-sided inference
in Donoho (1988) has a similar spirit.
Here, one constructs confidence intervals
of the form $[L,\infty)$
for functionals such as the number of modes
of a density.
Bickel and Ritov (2000)
make what they call a ``radical proposal''
to `` ... determine how much bias can be tolerated without
[interesting] features being obscured.''
We view our approach as a way of implementing their suggestion.
Another related idea is contained in Donoho (1995) who showed that
if $\hat{f}$ is the soft threshold estimator of a function and
$f(x) = \sum_j \theta_j \psi_j(x)$ is an expansion in an unconditional basis,
then
$\P_f{\hat{f}\preceq f} \ge 1-\alpha$
where
$\hat{f}=\sum_j \hat\theta_j \psi_j$ and
$\hat{f}\preceq f$ means that
$|\hat\theta_j| \le |\theta_j|$ for all $j$.
Finally, we remind the reader that there is a plethora of work on
adaptive estimation; see, for example, Cai and Low (2004)
and references therein.
\subsection{Notation}
If $L$ and $U$ are random functions on $\cX = \Set{x_1,\ldots,x_n}$ such that $L \le U$,
we define $B = (L,U)$ to be the (random) set of all functions $g$ on $\cX$ for which $L \le g \le U$.
We call $B$ (or equivalently, the pair $L,U$) a band;
the band covers a function $f$ if $f\in B$ (or equivalently, if $L \le f \le U$).
Define its width to be the random variable
\begin{equation}
W = \norm{U - L}_\infty = \max_{1\le i\le n} (U(x_i) - L(x_i)).
\end{equation}
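As a concrete gloss on this definition, the width is simply the largest pointwise gap between the two envelopes. A minimal sketch in Python (the numerical values below are purely illustrative):

```python
# Width of a band B = (L, U) on the grid {x_1, ..., x_n}:
# W = ||U - L||_inf = max_i (U(x_i) - L(x_i)).

def band_width(lower, upper):
    """Sup-norm width of the band defined by pointwise envelopes."""
    return max(u - l for l, u in zip(lower, upper))

# A toy band (illustrative values only).
L_env = [0.0, -0.5, 0.2, -0.1]
U_env = [1.0,  0.5, 0.9,  0.4]
print(band_width(L_env, U_env))  # 1.0
```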
Because we are constructing bands on $\cX = \{x_1,\ldots,x_n\}$, we most often
refer to functions in terms of their evaluations $f = (f(x_1),\ldots,f(x_n)) \in \R^n$.
When we need to refer to a space of functions to which $f$ belongs, we use a $\;\tilde{}\;$ to
denote the function space and no $\;\tilde{}\;$ to denote the vector space of evaluations.
Thus, if $\tilde\cA$ is the space of all functions, then $\cA = \R^n$.
In both cases, we use the same symbol for the function and let the meaning be clear from context;
for example, $f\in\tilde\cA$ is the function and $f\in\cA$ is the vector $(f(x_1),\ldots,f(x_n))$.
Define the following norms on $\mathbb{R}^n$:
\begin{eqnarray*}
||f|| &=& ||f||_2 = \sqrt{\frac{1}{n}\sum_{i=1}^n f_i^2}\\
||f||_\infty &=& \max_i |f_i|.
\end{eqnarray*}
We use $\langle \cdot,\cdot\rangle$ to denote the inner product $\langle f,g\rangle = \frac{1}{n}\sum_{i=1}^n f_i g_i$
corresponding to $\norm{\cdot}$.
If $\cF$ is a subspace of $\R^n$, we define $\Pi_\cF$ to be the Euclidean projection onto $\cF$,
using just $\Pi$ if the subspace is clear from context.
We use
\begin{equation}\label{eq::eibasis}
e_i = (\underbrace{0,\ldots,0}_{i-1\ {\rm times}},1,
\underbrace{0,\ldots, 0}_{n-i\ {\rm times}})^T
\end{equation}
to denote the standard basis on $\R^n$.
If $F_\theta$ is a family of {\sc cdf}s indexed by $\theta$,
we write $F_\theta^{-1}(\alpha)$ to denote the lower-tail $\alpha$-quantile of $F_\theta$.
For the standard normal distribution, however,
we use $z_\alpha$ to denote the upper-tail $\alpha$-quantile,
and we denote the {\sc cdf} and {\sc pdf}, respectively, by $\Phi$ and $\phi$.
Throughout the paper we assume that $\sigma$ is a known constant;
in some cases we simply set $\sigma =1$.
But see Remark \ref{remark::sigma} about the unknown $\sigma$ case.
\section{Nonadaptivity of Bands}\label{sec::failure}
In this section we construct
lower bounds on the width of valid
confidence bands
analogous to
(\ref{eq::smaller})
and we show that the lower bound is achieved by fixed-width bands.
Low (1997)
considered estimating a density $f$
in the class
$$
\cF(a,k,M)=\Biggl\{ f:\ f\ge 0,\,\int f=1,\, f(0)\le a,\, ||f^{(k)}||_\infty \le M \Biggr\}.
$$
He shows that if $C_n$ is a confidence interval for $f(0)$, that is,
$$
\inf_{f\in \cF(a,k,M)} \P_f{f(0)\in C_n} \ge 1-\alpha ,
$$
then, for every $\epsilon >0$,
there exist $N = N(\epsilon, M)$ and $c>0$ such that, for all $n \ge N$,
\begin{equation}
\E_f ({\rm length}(C_n)) \ge c \, n^{-k/(2k+1)}
\end{equation}
for all $f\in \cF(a,k,M)$ such that $f(0) > \epsilon$.
Moreover, there exists a fixed-width confidence interval $C_n$
and a constant $c_1$ such that
$\E_f ({\rm length}(C_n)) \le c_1 n^{-k/(2k+1)}$
for all $f\in \cF(a,k,M)$.
Thus, the data play no role in constructing a rate-optimal band,
except in determining the center of the interval.
For example, if we use kernel density estimation,
we could construct an optimal bandwidth $h = h(n,k)$
depending only on $n$ and $k$ -- but not the data --
and construct the interval from that kernel
estimator. This makes the interval highly dependent
on the minimal amount of smoothness $k$ that is assumed.
And it rules out the usual data-dependent bandwidth methods
such as cross-validation.
Now return to the regression model
\begin{equation}\label{eq::normal-means-model}
Y_i = f_i + \sigma \, \epsilon_i,\ \ \ i=1, \ldots, n,
\end{equation}
where $\epsilon_1$, $\ldots$, $\epsilon_n$ are independent,
${\rm Normal} (0,1)$ random variables, and
$f = (f_1, \ldots, f_n) \in \mathbb{R}^n$.
\begin{theorem}
\label{thm::basic}
Let
$B = (L,U)$
be a $1- \alpha$ confidence band
over $\Theta$, where
$0 < \alpha < 1/2$
and let $g\in \Theta$.
Suppose that $\Theta$ contains
a finite set of vectors $\Omega$
such that:
\begin{enumerate}
\item for every distinct pair
$f,\nu\in\Omega$, we have
$\langle f-g, \nu-g \rangle =0$ and
\item for some $0 < \epsilon < (1/2) - \alpha$,
\begin{equation}
\max_{f\in\Omega}\frac{e^{n ||f-g||^2/\sigma^2}}{|\Omega|} \le \epsilon^2.
\end{equation}
\end{enumerate}
Then,
\begin{equation}
\E_g(W)\ge (1-2\alpha - 2\epsilon) \min_{f\in\Omega}||g-f||_\infty .
\end{equation}
\end{theorem}
We begin with the case where $\Theta = \mathbb{R}^n$. We will obtain
a lower bound on the width of any confidence band and then show that a
fixed-width procedure attains that width. The results hinge on
finding a least favorable configuration of mean vectors that are as
far from each other as possible in $L^\infty$ while remaining within a
fixed total-variation distance $\epsilon$ of each other.
\begin{theorem}
\label{thm::finite-sample-lower-bound}
Let $\cH = \mathbb{R}^n$
and fix $0 < \alpha < 1/2$.
Let
$B = (L,U)$
be a $1- \alpha$ confidence band over $\cH$.
Then, for every $0 < \epsilon < (1/2) - \alpha$,
\begin{equation}\label{eq::rnbound}
\inf_{f\in\mathbb{R}^n}\E_f(W)\ge (1-2\alpha - 2\epsilon)\sigma \sqrt{\log( n \epsilon^2 )}.
\end{equation}
The bound is achieved (up to constants) by the fixed-width Bonferroni bands:
$$
\ell_i = Y_i - \sigma z_{\alpha/n},\ u_i = Y_i + \sigma z_{\alpha/n}.
$$
\end{theorem}
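For concreteness, the fixed-width Bonferroni band of Theorem \ref{thm::finite-sample-lower-bound} can be computed directly. A sketch in Python using only the standard library, with $\sigma$ known per our convention:

```python
import statistics

def bonferroni_band(y, sigma, alpha):
    """Fixed-width band ell_i = Y_i - sigma*z_{alpha/n},
    u_i = Y_i + sigma*z_{alpha/n}, where z_{alpha/n} is the upper-tail
    alpha/n quantile of N(0,1). Simultaneous coverage >= 1 - alpha
    follows from a union bound over the n coordinates."""
    n = len(y)
    z = statistics.NormalDist().inv_cdf(1.0 - alpha / n)
    lower = [yi - sigma * z for yi in y]
    upper = [yi + sigma * z for yi in y]
    return lower, upper

# The width 2*sigma*z_{alpha/n} is data-independent and of order
# sigma*sqrt(2 log n), matching the lower bound up to constants.
lo, hi = bonferroni_band([0.3, -1.2, 0.7, 2.1, -0.4], sigma=1.0, alpha=0.05)
```

Only the center of the band depends on the data; the width is fixed in advance, which is exactly the nonadaptivity phenomenon discussed in this section.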
\begin{theorem}[Lipschitz Balls]
\label{thm::finite-sample-lip}
Define $x_i = i/n$ for $1 \le i \le n$. Let
\begin{eqnarray}
\tilde\cH(L) &=& \Biggl\{ f:\ |f(x) - f(y)| \le L |x-y| ,\ \ \, x,y \in [0,1]\Biggr\},\\
\noalign{\noindent be a ball in Lipschitz space, and let}
\cH(L) &=& \{ (f(x_1), \ldots, f(x_n)):\ f\in \tilde\cH(L)\}
\end{eqnarray}
be the corresponding set of vectors of evaluations on $\cX$.
Fix $0 < \alpha < 1/2$ and let $B = (L,U)$
be a $1- \alpha$ confidence band
over $\cH(L)$.
Then, for every $0 < \epsilon < (1/2) - \alpha$,
\begin{equation}
\inf_{f\in\cH(L)}\E_f(W) \ge a_n
\end{equation}
where
\begin{eqnarray*}
a_n &=& \left(\frac{\log n}{n}\right)^{1/3}\times
\left(\frac{L\sigma^2}{2}\right)^{1/3}\\
&& \hspace{-1cm}\times
\left( 1 + \frac{3\log (1+\epsilon^2)}{\log n} +
\frac{2 \log (L/(2\sigma))}{\log n} -
\frac{\log\left(\frac{1}{3}\log n + \log(1+\epsilon^2)+ \frac{2}{3}\log(L/(2\sigma))\right)}
{\log n} \right).
\end{eqnarray*}
The lower bound is achieved (up to logarithmic factors) by a fixed-width procedure.
\end{theorem}
\begin{theorem}[Sobolev Balls]
\label{thm::sobolev}
Let $\tilde\cH(p,c)$ be a Sobolev ball of order $p$
and radius $c$
and let $B = (L,U)$
be a $1- \alpha$ confidence band over $\cH(p,c)$.
For every $0 < \epsilon < (1/2) - \alpha$,
for every $\delta >0$,
and all large $n$,
\begin{equation}
\inf_{f\in\cH(p,c-\delta)}
\E_f(W)\ge (1-2\alpha - 2\epsilon) \left(\frac{c_n}{n^{p/(2p+1)}}\right)
\end{equation}
for some $c_n$ that increases at most logarithmically.
The bound is achieved (up to logarithmic factors) by a fixed-width band procedure.
\end{theorem}
\begin{theorem}[Besov Balls] \label{thm::besov}
Let $\tilde\cH(p,q,\xi,c)$ be a ball of radius $c$ in the Besov space $B_{p,q}^\xi$
and
let $B = (L,U)$ be a $1- \alpha$ confidence band over $\cH(p,q,\xi,c)$.
For every $0 < \epsilon < (1/2) - \alpha$, and every $\delta>0$,
\begin{equation}
\inf_{f\in\cH(p,q,\xi,c-\delta)}\E_f(W)\ge c_n (1-2\alpha - 2\epsilon) n^{-1/(1/p - \xi - 1/2)}.
\end{equation}
The bound is achieved (up to logarithmic factors) by a fixed-width procedure.
\end{theorem}
\section{Adaptive Bands} \label{sec::projections}
Let $\{{\cal F}_T: T\in \cT\}$ be a scale of
linear subspaces.
Let $w_T$ denote the smallest width
of any confidence band when it is known that $f\in\cF_T$
(defined more precisely below).
We would like to define an appropriate surrogate
and a procedure that gets as close as possible to
the target width $w_T$ when $f\in\cF_T$.
To clarify the ideas, Subsection \ref{sec::singlesubspace}
develops our results in the special case where the subspaces
are $\Set{\cF, \R^n}$ for a fixed $\cF$ of dimension $d < n$.
Subsection \ref{sec::nestedspaces} handles the more general
case of a sequence of nested subspaces.
\subsection{Preliminaries}
We begin by defining several quantities that will be used throughout.
Let $\tau(\epsilon)$ denote the total variation distance
between a $N(0,1)$ and a $N(\epsilon,1)$ distribution.
Thus,
\begin{equation}
\tau(\epsilon) = \Phi(\epsilon/2) - \Phi(-\epsilon/2).
\end{equation}
Then,
$\epsilon \phi(\epsilon/2)\le \tau(\epsilon)\le \epsilon \phi(0)$
and
$\tau(\epsilon)\sim \epsilon \phi(0)$ as $\epsilon\to 0$.
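These bounds on $\tau$ are easy to verify numerically. A quick check in Python, using only the standard library:

```python
import math

def Phi(x):
    """Standard normal cdf."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def phi(x):
    """Standard normal density."""
    return math.exp(-0.5 * x * x) / math.sqrt(2.0 * math.pi)

def tau(eps):
    """Total variation distance between N(0,1) and N(eps,1):
    tau(eps) = Phi(eps/2) - Phi(-eps/2)."""
    return Phi(eps / 2.0) - Phi(-eps / 2.0)

# Check eps*phi(eps/2) <= tau(eps) <= eps*phi(0) on a small grid;
# tau(eps) ~ eps*phi(0) as eps -> 0.
for eps in (0.01, 0.1, 0.5, 1.0, 2.0):
    assert eps * phi(eps / 2.0) <= tau(eps) <= eps * phi(0.0)
```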
\begin{lemma}
If $P = N(f,\sigma^2 I)$ and $Q = N(g,\sigma^2 I)$
are multivariate Normals with $f,g\in\mathbb{R}^n$ then
\begin{equation}
d_{\rm TV}(P,Q) = \tau \left(\frac{\sqrt{n}||f-g||}{\sigma}\right).
\end{equation}
\end{lemma}
We will need several constants.
For $0 < \alpha < 1$ and $0 < \gamma < 1-2\alpha$ define
\begin{equation} \label{eq::const::kappa}
\kappa(\alpha,\gamma)
= \left(2\log(1 + 4(1 - \gamma - 2\alpha)^2)\right)^{1/4}.
\end{equation}
For $0 < \beta < 1-\xi < 1$ and
integer $m\ge 1$ define
$Q=Q(m,\beta,\xi)$ to be the solution of
\begin{equation}\label{eq::const::Q}
\xi = 1- F_{0,m}(F^{-1}_{Q\sqrt{m},m}(\beta)),
\end{equation}
where $F_{a,d}$ denotes the {\sc cdf} of a $\chi^2$
random variable with $d$ degrees of
freedom and noncentrality parameter $a$.
\begin{lemma}\label{lemma::universal::Q}
There is a universal constant $\Lambda(\beta,\xi)$
such that
$Q(m,\beta,\xi)\le \Lambda(\beta,\xi)$
for all $m\ge 1$.
For example, $\Lambda(.05,.05) \le 6.25$.
Suppose now that $m = m_n$, $\beta = \beta_n$, and
$\xi = \xi_n$ are all functions of $n$.
As long as
$-\log\beta_n \le \log n$
and $-\log\xi_n \le \sqrt{\log n}$,
then $Q(m_n,\beta_n,\xi_n) = O(\sqrt{\log n})$.
\end{lemma}
Next, define
\begin{equation}\label{eq::const::E}
E(m,\alpha,\gamma) = \max( Q(m,\alpha,\gamma), 2 \kappa(\alpha,\gamma)),
\end{equation}
for $0 < \alpha < 1$ and $0 < \gamma < 1-2\alpha$.
Finally,
if $\cF$ is a subspace of dimension $d$,
define
\begin{equation}
\Omega_\cF =
\max_{1\le i\le n} \frac{\norm{\Pi_\cF e_i}}{\norm{e_i}},
\end{equation}
where $e_i$ is defined in equation (\ref{eq::eibasis}).
Note that $0 \le \Omega_\cF \le 1$.
The value of $\Omega_\cF$ relates to the geometry of $\cF$ as a hyperplane
embedded in $\R^n$,
as seen through the following results.
\begin{lemma}\label{lemma::min2inInfinity}
Let $\cF$ be a subspace of $\mathbb{R}^n$. Then
\begin{eqnarray}
\min\Biggl\{ \norm{v}:\ v\in\cF, \norm{v}_\infty = \epsilon \Biggr\} &=&
\frac{\epsilon}{\sqrt{n}\Omega_{\cF}}\\
\max\Biggl\{ \norm{v}_\infty:\ v\in\cF, \norm{v} = \epsilon \Biggr\} &=&
\epsilon \sqrt{n}\Omega_{\cF}.
\end{eqnarray}
\end{lemma}
\begin{lemma}
Let $\{\phi_1, \ldots, \phi_d\}$ be orthonormal
vectors with respect to $||\cdot ||$ in $\mathbb{R}^n$
and let $\cF$ be the linear span of these vectors.
Then
\begin{equation}
\Omega_{\cF} = \max_{1\le i \le n} \sqrt{ \frac{\sum_{j=1}^d \phi_{ji}^2}{n}}.
\end{equation}
In particular, if $\max_j\max_i |\phi_{ji}|\le c$ then
\begin{equation}
\Omega_{\cF} \le c\sqrt{\frac{d}{n}}.
\end{equation}
\end{lemma}
\begin{lemma}
Let
$\{\phi_1, \ldots, \phi_d\}$
be orthonormal functions on $[0,1]$.
Define
$\cH_j$ to be the linear span of
$\{\phi_1, \ldots, \phi_j\}$.
Let $x_i=i/n$, $i=1, \ldots, n$ and
$\cF_j = \{ f = (h(x_1), \ldots, h(x_n)):\ h\in\cH_j\}$.
Then,
\begin{equation}
\Omega_{\cF_d} = \max_{1\le i\le n}\sqrt{ \frac{\sum_{j=1}^d \phi_j^2(x_i)}{n}} + O(1/n).
\end{equation}
In particular, if $\max_j\sup_x |\phi_j(x)|\le c$ then
\begin{equation}
\Omega_{\cF_d} \le c\sqrt{\frac{d}{n}} + O(1/n).
\end{equation}
\end{lemma}
In addition, we need the following Lemma first proved, in a related form,
in Baraud (2003).
\begin{lemma}\label{lemma::baraud-doover}
Let $\cF$ be a subspace of dimension $d$.
Let $0 < \delta < 1 - \xi$ and
\begin{equation}
\epsilon = \frac{(n-d)^{1/4}}{\sqrt{n}}\,\left(2 \log(1 + 4 \delta^2)\right)^{1/4}.
\end{equation}
Define $A = \Set{f:\ \norm{f - \Pi_\cF f} > \epsilon}$.
Then,
\begin{equation}
\beta \equiv
\inf_{\phi_\xi\in\Phi_\xi}\sup_{f\in A}\P_f{\phi_\xi=0} \ge 1 - \xi - \delta
\end{equation}
where
\begin{equation}
\Phi_\xi = \Biggl\{ \phi_\xi:\ \sup_{f\in\cF}\P_f{\phi_\xi = 0} \le \xi \Biggr\}
\end{equation}
is the set of level $\xi$ tests.
\end{lemma}
\subsection{Single Subspace}\label{sec::singlesubspace}
To begin, we start with a single subspace $\cF$
of dimension $d$.
\begin{definition}
For given $\epsilon_2, \epsilon_\infty >0$,
define the {\em surrogate} $f^\star$ of $f$ by
\begin{eqnarray}
f^\star=
\left\{\begin{array}{ll}
\Pi f & {\rm if\ }||f-\Pi f||_2 \le \epsilon_2\ {\rm and}\ ||f-\Pi f||_\infty > \epsilon_\infty\\
f & {\rm otherwise.}
\end{array}
\right.
\end{eqnarray}
Define the \emph{surrogate set} of $f$, $F^\star(f) = \Set{ f, f^\star }$, which is a singleton when $f^\star = f$.
Define the {\em spoiler set}
$\cS(\epsilon_2,\epsilon_\infty) = \{f\in\R^n:\ f^\star \ne f\}$
and the {\em invariant set}
$\cI(\epsilon_2,\epsilon_\infty) = \{f:\ f^\star = f\}$.
\end{definition}
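The surrogate map transcribes directly into code. The sketch below uses a hypothetical subspace (piecewise-constant vectors on equal blocks) purely for illustration; any projector $\Pi$ would do, and the thresholds $\epsilon_2, \epsilon_\infty$ below are arbitrary:

```python
import math

def proj_piecewise_constant(f, d):
    """Project f in R^n onto the span of indicators of d equal blocks
    (a hypothetical choice of the subspace F; any projector would do)."""
    n = len(f)
    m = n // d
    out = []
    for j in range(d):
        avg = sum(f[j * m:(j + 1) * m]) / m
        out.extend([avg] * m)
    return out

def norm2(v):
    """Normalized L2 norm ||v|| = sqrt((1/n) sum v_i^2)."""
    return math.sqrt(sum(x * x for x in v) / len(v))

def surrogate(f, d, eps2, epsinf):
    """f* = Pi f if ||f - Pi f|| <= eps2 and ||f - Pi f||_inf > epsinf;
    otherwise f* = f."""
    pf = proj_piecewise_constant(f, d)
    resid = [a - b for a, b in zip(f, pf)]
    if norm2(resid) <= eps2 and max(abs(r) for r in resid) > epsinf:
        return pf
    return f
```

A vector close to $\cF$ in the normalized $L^2$ norm yet far in $L^\infty$ is mapped to its projection; all other vectors are left fixed.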
We give a schematic diagram in
Figure \ref{fig::surrogate-pic}.
The gray area represents
$\cS(\epsilon_2,\epsilon_\infty)$.
These are the functions that preclude adaptivity.
Being close to $\cF$ in $L^2$ makes them hard to detect
but being far from $\cF$ in $L^\infty$ makes them hard to cover.
To achieve adaptivity we must settle for sometimes
covering $\Pi_\cF f$.
\begin{figure}
\hglue -2in
\vbox
{
\pspicture(0,3)(16,12)
\pscircle[fillstyle=solid,fillcolor=gray,linewidth=2pt](8,8){5}
\pspolygon[fillstyle=solid,fillcolor=white](12,4)(12,12)(4,12)(4,4)
\pscircle[linewidth=2pt](8,8){5}
\psdots[dotsize=6pt](8,8)
\rput(7,9){Hard to detect; easy to cover}
\rput(16,8){Hard to detect; hard to cover}
\rput(14.5,12.5){Easy to detect; hard to cover}
\psline{->}(16,8.5)(12.5,8.5)
\psline{->}(14,12)(11.75,11.5)
\endpspicture
}
\caption{The dot at the center represents the subspace $\cF$.
The shaded area is the set of spoilers $\cS(\epsilon_2,\epsilon_\infty)$
of vectors for which $f^\star \neq f$. If these vectors were not surrogated,
adaptation would not be possible.
The non-shaded area is the invariant set $\cI(\epsilon_2,\epsilon_\infty) =
\{f:\ f^\star = f\}$.}
\label{fig::surrogate-pic}
\end{figure}
\subsubsection{Lower Bounds}
We begin with two lemmas.
The first controls the minimum width of a band and
the second controls the maximum.
The second is of more interest for our purposes; the first lemma
is included for completeness.
For any $1 \le p \le \infty$,
$\epsilon>0$, and $A\subset \mathbb{R}^n$ define
\begin{equation}
M_p(\epsilon,A) = \sup\{ d_{\rm TV}(P_f,P_g):\ f,g\in A, ||f-g||_p \le \epsilon \}
\end{equation}
and
\begin{equation}
m_\infty(\epsilon,A_0,A_1)=
\inf \{d_{\rm TV}(P_f,P_g):\ f\in A_0,g\in A_1, \norm{f-g}_\infty \ge \epsilon\}.
\end{equation}
\begin{lemma}\label{lemma::mod}
Suppose that
$\inf_{f\in A}\P_f{L \le f \le U} \ge 1-\alpha$.
Let $1 \le p \le \infty$ and $\epsilon > 0$.
For $f\in A$, define
$$\epsilon(f,q) = \sup\{ \norm{f-h}_q:\ h\in A, \norm{f-h}_p\le \epsilon\},$$
where $1 \le q \le \infty$.
Then, for any $A_0\subset A$,
\begin{equation} \label{eq::eps-}
\inf_{f\in A_0} \P_f{W > \epsilon(f,\infty)} \ge 1- 2\alpha - \sup_{f\in A_0} M_p(\epsilon(f,p),A)
\end{equation}
where $W = ||U-L||_\infty$.
If every point in $A$ is contained in a subset of $A$ of $\ell^p$-diameter $\epsilon$,
then $\epsilon(f,p) \equiv \epsilon$, and
\begin{equation}
\inf_{f\in A_0} \P_f{W > \epsilon} \ge 1- 2\alpha - M_p(\epsilon,A).
\end{equation}
\end{lemma}
\begin{lemma}\label{lemma::m2}
Suppose that
$\inf_{f\in A}\P_f{L \le f \le U} \ge 1-\alpha$.
Suppose that $A=A_0\cup A_1$ (not necessarily disjoint).
Let $\epsilon > 0$ be such that
for each $f\in A_0$ there exists $g\in A_1$ for which
$\norm{f-g}_\infty = \epsilon$.
Then,
\begin{equation}
\sup_{f\in A_0} \P_f{W > \epsilon} \ge 1- 2\alpha - m_\infty(\epsilon,A_0,A_1)
\end{equation}
where $W = ||U-L||_\infty$.
\end{lemma}
Now we establish the target rate, the smallest width
of a band if we knew a priori that
$f\in \cF$.
Define
\begin{equation}\label{eq::define-target}
w_\cF \equiv w_\cF(\alpha,\gamma,\sigma) = \Omega_\cF \, \sigma\,\tau^{-1}(1-2\alpha - \gamma).
\end{equation}
\begin{theorem}\label{thm::target-rate}
Suppose that
\begin{equation}
\inf_{f\in \cF}\P_f{L \le f\le U} \ge 1-\alpha.
\end{equation}
If $\inf_{f\in\cF}\P_f{W\le w}\ge 1-\gamma$
then $w \ge w_{\cF}$.
A band that achieves this width, up to logarithmic factors,
is $(L,U) = \hat{f} \pm c$
where $\hat{f} = \Pi Y$
and $c_i = \sigma \sqrt{(\Pi \Pi^T)_{ii}}\, z_{\alpha/2n}$.
\end{theorem}
\begin{remark}
Using an argument similar to that in Theorem \ref{thm::basic},
it is possible to improve this lower bound by an additional
$\sqrt{\log d}$ factor, but this is inconsequential to the
rest of the paper.
\end{remark}
Next, we give the main result for this case. First, define
\begin{eqnarray}
v_0(\epsilon_2,\epsilon_\infty, n, \alpha, \gamma, \sigma) &=& \min\Bigl\{\sqrt{n}\epsilon_2,\, \epsilon_\infty,\, \sigma\tau^{-1}(1-2\alpha-\gamma)\Bigr\}, \label{eq::v0const}\\
v_1(\epsilon_2, n, d, \alpha, \gamma, \sigma)
&=& \left\{
\begin{array}{ll}
0 & {\rm if\ } \epsilon_2 \ge 2v_2(n,d,\alpha,\gamma) \label{eq::v1const}\\
v_2(n,d,\alpha,\gamma)& {\rm if\ }\epsilon_2 < 2v_2(n,d,\alpha,\gamma),
\end{array}
\right. \\
v_2(n,d,\alpha,\gamma) &=& \kappa(\alpha,\gamma)(n-d)^{1/4}n^{-1/2} \\
\noalign{\noindent and define}
v(\epsilon_2,\epsilon_\infty, n, d, \alpha, \gamma, \sigma) &=& \max( v_0, v_1 ).
\end{eqnarray}
\smallskip
\begin{theorem}[Lower Bound for Surrogate Confidence Band Width] \label{thm::main}
\hfil\break
Fix $0< \alpha < 1$ and
$0 < \gamma < 1-2\alpha$.
Suppose that for bands $B = (L,U)$
\begin{equation} \label{eq::validband::singlespace}
\inf_{f\in\R^n} \P_f{ F^\star(f) \cap B \ne \emptyset }\ge 1-\alpha.
\end{equation}
Then,
\begin{equation}\label{eq::Wgamma}
\inf_{f\in\cF} \P_f{W \le w} \ge 1 - \gamma
\end{equation}
implies
\begin{equation}\label{eq::the-cool-lower-bound}
w \ge \underline{w}(\cF,\epsilon_2,\epsilon_\infty,n,d,\alpha,\gamma,\sigma) \equiv \max\Bigl\{w_\cF(\alpha,\gamma,\sigma),\, v(\epsilon_2,\epsilon_\infty, n, d, \alpha, \gamma, \sigma)\Bigr\}
\end{equation}
\end{theorem}
The inequality (\ref{eq::validband::singlespace}) ensures that $B$ is a valid surrogate confidence band: for every function,
either the function or its surrogate is covered with at least the target probability.
The result gives a probabilistic lower bound on the width of the band that is at least as big as the best a priori width for the subspace.
As we will see, with proper choice of $\epsilon_2$ and $\epsilon_\infty$, the $v$ term can be made small, giving the subspace width $w_\cF$
for the lower bound.
Next, we address the question of optimality. Consider, for example, the trivial surrogate that maps all functions to 0.
We can cover the surrogate using zero-width bands with probability 1, but this would not be very interesting.
There is a tradeoff between the width of the bands on low dimensional subspaces and the volume of the spoiler
set, the functions that are surrogated.
We characterize optimality here as minimizing the volume of the spoiler set $\cS(\epsilon_2,\epsilon_\infty)$
while still attaining the target width with high probability when $f$ truly lies in the subspace.
In this sense, the surrogate defined above is optimal.
\begin{theorem}[Optimality]
\label{thm::optimal}
Let $\underline{w}$ denote the
right hand side of inequality
(\ref{eq::the-cool-lower-bound}).
Then $\underline{w} \ge w_\cF$,
where $w_\cF$ is defined in (\ref{eq::define-target}).
Setting
$$
\epsilon_2 = 2\kappa(\alpha,\gamma)(n-d)^{1/4}n^{-1/2},\ \ \
\epsilon_\infty = w_\cF
$$
minimizes ${\rm Volume}(\cS(\epsilon_2,\epsilon_\infty))$ subject to
achieving the lower bound on $\underline{w}$.
\end{theorem}
\subsubsection{Achievability}
Having established a lower bound, we need to show that the lower bound is sharp.
We do this by constructing a finite-sample procedure that achieves the bound within
a factor of 2.
Let
$F_{a,d}$ denote the {\sc cdf} of a $\chi^2$
random variable with $d$ degrees of
freedom and noncentrality parameter $a$
and let
$\chi^2_{\alpha,d}= F^{-1}_{0,d}(1-\alpha)$.
Let $T = ||Y-\Pi Y||^2$ and
define
\begin{equation}\label{eq::these-are-the-bands}
B=(L,U) = \hat{f}\pm c \sigma
\end{equation}
where
\begin{equation}
\hat{f} =
\left\{
\begin{array}{ll}
Y & {\rm if\ } T > \chi^2_{\gamma,n-d}\\
\Pi Y & {\rm if\ } T \le \chi^2_{\gamma,n-d}
\end{array}
\right.
\end{equation}
and
\begin{equation}
c = z_{\alpha/2n} \times
\left\{
\begin{array}{ll}
\omega_\cF + \epsilon_\infty & {\rm if}\ T \le \chi^2_{\gamma,n-d}\\
1 & {\rm if}\ T > \chi^2_{\gamma,n-d}.
\end{array}
\right.
\end{equation}
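The center selection in the band (\ref{eq::these-are-the-bands}) can be sketched as follows. To stay dependency-free, the $\chi^2_{\gamma,n-d}$ critical value is approximated by seeded Monte Carlo rather than an exact quantile routine, and a hypothetical piecewise-constant subspace stands in for $\cF$; we read $T$ as the squared Euclidean residual norm scaled by $\sigma^2$, which is what makes the $\chi^2_{n-d}$ calibration exact. Only the center and the acceptance decision are shown; the half-width then follows the display above.

```python
import random

def proj_blocks(f, d):
    """Projection onto vectors constant on d equal blocks (a stand-in for Pi)."""
    n = len(f)
    m = n // d
    out = []
    for j in range(d):
        avg = sum(f[j * m:(j + 1) * m]) / m
        out.extend([avg] * m)
    return out

def chi2_crit(gamma, df, sims=20_000, seed=0):
    """Seeded Monte Carlo approximation to chi^2_{gamma, df}, i.e. the
    (1 - gamma)-quantile of a central chi-square with df degrees of freedom."""
    rng = random.Random(seed)
    draws = sorted(sum(rng.gauss(0.0, 1.0) ** 2 for _ in range(df))
                   for _ in range(sims))
    return draws[int((1.0 - gamma) * sims)]

def adaptive_center(y, d, sigma, gamma):
    """fhat = Pi Y if T <= chi^2_{gamma, n-d}, else fhat = Y,
    with T = ||Y - Pi Y||_E^2 / sigma^2."""
    n = len(y)
    pi_y = proj_blocks(y, d)
    T = sum((a - b) ** 2 for a, b in zip(y, pi_y)) / sigma ** 2
    accept = T <= chi2_crit(gamma, n - d)
    return (pi_y if accept else list(y)), accept
```

When the test accepts, the half-width shrinks to the narrow subspace width; when it rejects, the procedure falls back to the full Bonferroni width, exactly as in the display above.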
\begin{theorem}\label{thm::achieve}
If
\begin{equation}
\gamma \ge 1- F_{0,n-d}(F^{-1}_{n\epsilon_2^2,n-d}(\alpha/2))
\end{equation}
then
\begin{equation}
\inf_{f\in \mathbb{R}^n} \P_f{F^\star(f) \cap B \ne \emptyset} \ge 1-\alpha
\end{equation}
and
\begin{equation}\label{eq::it-works}
\inf_{f\in \cF} \P_f{W \le w_\cF + \epsilon_\infty} \ge 1-\gamma.
\end{equation}
If $\epsilon_2 \ge E(n-d,\alpha/2,\gamma) (n-d)^{1/4} n^{-1/2}$,
where $E(m,\alpha,\gamma)$ is defined in (\ref{eq::const::E}),
then
\begin{equation}\label{eq::it-works2}
\inf_{f\in \cF} \P_f{W \le 2 \underline{w}(\cF,\epsilon_2,\epsilon_\infty,\alpha,\gamma,n,d)}
\ge 1-\gamma.
\end{equation}
where
$\underline{w}(\cF,\epsilon_2,\epsilon_\infty,\alpha,\gamma,n,d)$
is defined in (\ref{eq::the-cool-lower-bound}).
Hence,
the procedure adapts to within a logarithmic factor
of the lower bound $\underline{w}$
given in Theorem \ref{thm::main}.
\end{theorem}
\begin{corollary}
Setting
$$
\epsilon_2 = E(n-d,\alpha/2,\gamma)(n-d)^{1/4}n^{-1/2},\ \ \
\epsilon_\infty = w_\cF
$$
in the above procedure,
minimizes ${\rm Volume}(\cS(\epsilon_2,\epsilon_\infty))$ subject to
satisfying (\ref{eq::it-works2}).
\end{corollary}
\begin{remark}\label{remark::sigma}
The results can be extended to unknown $\sigma$
by replacing $\sigma$ with a nonparametric estimate $\hat\sigma$.
However, the results are then asymptotic rather than finite sample.
Moreover, a minimal amount of smoothness is required
to ensure that $\hat\sigma$ consistently estimates $\sigma$;
see Genovese and Wasserman (2005).
So as not to detract from our main points,
we continue to take $\sigma$ known.
\end{remark}
\subsubsection{Remarks on Estimation and the Modulus of Continuity}
It is interesting to note that the bands
defined above cover the true $f$
over a set $V$ that is larger than $\cF$.
In this section we take a brief look at the properties of $V$.
Define
\begin{equation}
C(\alpha,a,b) = \sup_{u > 0}\,(a u + b)\left(1 - \alpha - \frac14 + \half \Phi(-u/2)\right),
\end{equation}
and let $C(\alpha) \equiv C(\alpha,1,0)$.
Let $\cF^\perp$ be
the orthogonal complement of $\cF$. Let $B^\perp_k(0,\epsilon)$
be a $\ell^k$-ball around 0 in $\cF^\perp$ ($k = 2,\infty$).
For $f\in\mathbb{R}^n$, let $B^\perp_k(f,\epsilon) = f + B^\perp_k(0,\epsilon)$.
Define
\begin{equation}\label{eq::V}
V \equiv V(\cF,\epsilon_2,\epsilon_\infty) =
\bigcup_{f\in\cF} \Biggl(B^\perp_2(f,\epsilon_2) \cap B^\perp_\infty(f,\epsilon_\infty)\Biggr).
\end{equation}
\begin{lemma}
Let $B=(L,U)$ be defined as in (\ref{eq::these-are-the-bands}).
Then
\begin{equation}
\inf_{f\in V}\P_f{L \leq f \leq U}\geq 1- \alpha.
\end{equation}
\end{lemma}
Let $T f = f_1$. The next lemma gives the modulus of continuity
(Donoho and Liu 1991)
of $T$ over $V$
which measures the difficulty of estimation over $V$.
The modulus of continuity of $T$ over a set $\cA$ is
\begin{equation}
\omega(u,\cA) = \sup\{ |T f - T g| :\; \norm{f-g}_2 \le u; f, g\in \cA\}.
\end{equation}
Donoho and Liu showed that the difficulty of estimation over $\cA$
is often characterized by $\omega(1/\sqrt{n},\cA)$
in the sense that this quantity defines a lower bound
on estimation rates.
\begin{lemma}[Modulus of Continuity]
\label{lemma::Modulus-of-Continuity}
We have
\begin{equation}
\omega(u,V) =
\left( u \Omega\sqrt{n} \sqrt{\frac{\Omega^2}{1 + \Omega^2}} +
\min\left( \frac{u\sqrt{n}}{\sqrt{1+\Omega^2}},
\epsilon_2\wedge (\epsilon_\infty/\sqrt{n})\right)\right).
\end{equation}
\end{lemma}
Note that when $\epsilon_2 = \epsilon_\infty =0$ and
$\Omega \sim \sqrt{d/n}$, we have
$\omega(1/\sqrt{n},V)\sim \sqrt{d/n}$
as expected.
However, when
$\epsilon \equiv \epsilon_2 = \epsilon_\infty/\sqrt{n}$ is large
we will have
$\omega(1/\sqrt{n},V)\sim \sqrt{d/n} + \epsilon/\sqrt{1+d^2/n}$.
The extra term
$\epsilon/\sqrt{1+d^2/n}$ reflects the ``ball-like'' behavior
of $V$ in addition to the subspace-like behavior of $V$.
The bands need to cover over this extra set to maintain valid coverage
and this leads to larger lower bounds than just covering over $\cF$.
\subsection{Nested Subspaces} \label{sec::nestedspaces}
Now suppose that we have nested subspaces
$\cF_1 \subset \cdots \subset \cF_m \subset \cF_{m+1}\equiv \mathbb{R}^n$.
Let $\Pi_j$ denote the projector onto $\cF_j$.
We define the surrogate as follows.
\begin{definition}
For given
$\epsilon_2 = (\epsilon_{2,1}, \ldots, \epsilon_{2,m})$
and
$\epsilon_\infty = (\epsilon_{\infty,1}, \ldots, \epsilon_{\infty,m})$
define
\begin{equation}
\cJ(f) = \Biggl\{1\le j\le m:\
||f-\Pi_j f||_2 \le \epsilon_{2,j}\ {\rm and}\ ||f-\Pi_j f||_\infty > \epsilon_{\infty,j}
\Biggr\}.
\end{equation}
Then define the surrogate set
\begin{equation}
F^\star(f) = \{ \Pi_j f:\ j\in \cJ(f)\} \cup \{f\}.
\end{equation}
\end{definition}
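The nested surrogate set is a direct generalization of the single-subspace map. The sketch below again uses hypothetical nested piecewise-constant subspaces ($d = 2, 4, \ldots$ blocks) and illustrative thresholds:

```python
import math

def proj_blocks(f, d):
    """Projection onto piecewise-constant vectors with d equal blocks
    (a hypothetical nested scale F_1 subset F_2 subset ... for d = 2, 4, ...)."""
    n = len(f)
    m = n // d
    out = []
    for j in range(d):
        avg = sum(f[j * m:(j + 1) * m]) / m
        out.extend([avg] * m)
    return out

def surrogate_set(f, dims, eps2, epsinf):
    """F*(f) = {Pi_j f : j in J(f)} union {f}, where j in J(f) iff
    ||f - Pi_j f|| <= eps2[j] and ||f - Pi_j f||_inf > epsinf[j]."""
    n = len(f)
    out = [list(f)]
    for j, d in enumerate(dims):
        pf = proj_blocks(f, d)
        resid = [a - b for a, b in zip(f, pf)]
        l2 = math.sqrt(sum(r * r for r in resid) / n)
        linf = max(abs(r) for r in resid)
        if l2 <= eps2[j] and linf > epsinf[j]:
            out.append(pf)
    return out
```

A vector that sits in the spoiler region of several subspaces contributes one projection per offending scale; a vector in every invariant set yields the singleton $\{f\}$.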
\begin{definition}
We say that $B=\{ g:\ L\le g \le U\}\equiv (L,U)$ has coverage $1-\alpha$ if
\begin{equation}
\inf_{f\in\mathbb{R}^n}\P_f{ F^\star(f)\cap B \neq \emptyset}\ge 1-\alpha.
\end{equation}
\end{definition}
\subsubsection{Lower Bounds}
\begin{theorem}[Lower Bound for Surrogate Confidence Band Width] \label{thm::main::nested}
\hfil\break
Fix $0< \alpha < 1$ and
$0 < \gamma < 1-2\alpha$.
Suppose that for bands $B = (L,U)$
\begin{equation} \label{eq::validband::nestedspaces}
\inf_{f\in\R^n} \P_f{ F^\star(f) \cap B \ne \emptyset }\ge 1-\alpha.
\end{equation}
Then
\begin{equation}\label{eq::Wgamma::nested}
\inf_{f\in\cF_j} \P_f{W \le w} \ge 1 - \gamma
\end{equation}
implies
\begin{equation}\label{eq::the-cool-lower-bound::nested}
w \ge \underline{w}(\cF_j,\epsilon_{2,j},\epsilon_{\infty,j},n,d_j,\alpha,\gamma,\sigma),
\end{equation}
where $\underline{w}$ is given in Theorem \ref{thm::main}.
\end{theorem}
\begin{theorem}[Optimality]
\label{thm::optimal::nested}
Let $\underline{w}$ denote the
right hand side of inequality
(\ref{eq::the-cool-lower-bound::nested}).
Then $\underline{w} \ge w_{\cF_j}$,
where $w_{\cF_j}$ is defined in (\ref{eq::define-target}).
Setting
$$
\epsilon_{2,j} = 2\kappa(\alpha,\gamma)(n-d_j)^{1/4} n^{-1/2},\ \ \
\epsilon_{\infty,j} = w_{\cF_j}
$$
minimizes the volume of the set
\begin{equation} \label{eq::minvolume::nested}
\Set{ f:\ \norm{f - \Pi_j f} \le \epsilon_{2,j} \ {\rm and}\ \norm{f - \Pi_j f}_\infty > \epsilon_{\infty,j}}
\end{equation}
subject to achieving the lower bound on $\underline{w}$.
\end{theorem}
\subsubsection{Achievability}
Define
$T_j= ||Y-\Pi_j Y||^2$ and $\hat{f}=\Pi_{\hat J}Y$, where
\begin{equation}
\hat{J} = \min\{1 \le j \le m:\ T_j \le \chi^2_{\gamma,n-d_j}\},
\end{equation}
with $\hat{J} = m + 1$ if the set is empty,
and define
\begin{equation}
c_j = z_{\alpha_j/2 n} \times
\left\{
\begin{array}{ll}
\omega_{\cF_j}(\alpha_j) + \epsilon_{\infty,j} & {\rm if}\ 1 \le j \le m\\
1 & {\rm if}\ j = m + 1.
\end{array}
\right.
\end{equation}
Finally, let $B=(L,U) = \hat{f}\pm c_{\hat{J}} \sigma$
where $\sum_j \alpha_j \le \alpha$.
\begin{theorem}\label{thm::achieve-multi}
If
\begin{equation}\label{eq::gam-alph}
\gamma \ge 1- \min_j F_{0,n-d_j}(F^{-1}_{n\epsilon_{2,j}^2,n-d_j}(\alpha_j))
\end{equation}
then
\begin{equation}
\inf_{f\in\mathbb{R}^n}\P_f{ F^\star(f)\cap B \neq \emptyset}\ge 1-\alpha.
\end{equation}
Let $w_j = w_{\cF_j}(\alpha_j) + \epsilon_{\infty,j}$.
If $w_1 \le \cdots \le w_{m+1}$ then
\begin{equation}\label{eq::the-gam-result}
\inf_{f\in \cF_j} \P_f{W \le w_j} \ge 1-\gamma.
\end{equation}
If in addition
$\epsilon_{2,j} \ge E(n-d_j,\alpha_j,\gamma)(n-d_j)^{1/4} n^{-1/2}$
and
$\epsilon_{\infty,j} \le w_{\cF_j}$
then
\begin{equation}\label{eq::opt-this}
\inf_{f\in \cF_j}
\P_f{W \le 2 \underline{w}(\epsilon_{2,j},\epsilon_{\infty,j},\alpha_j,\gamma,n,d_j)}
\ge 1-\gamma
\end{equation}
where
$\underline{w}(\epsilon_{2,j},\epsilon_{\infty,j},\alpha_j,\gamma,n,d_j)$
is defined in (\ref{eq::the-cool-lower-bound}).
Hence,
the procedure adapts to within a logarithmic factor
of the lower bound $\underline{w}$
given in Theorem \ref{thm::main}.
\end{theorem}
\begin{corollary}\label{cor::optimality}
Suppose $\alpha_1 = \cdots = \alpha_{m+1} = \alpha/(m+1)$.
Then
$w_1 \le \cdots \le w_{m+1}$
so (\ref{eq::the-gam-result}) holds.
Moreover,
setting
\begin{eqnarray}
\epsilon_{2,j} &=& E(n-d_j,\alpha_j,\gamma)(n-d_j)^{1/4}n^{-1/2}\\
\noalign{\noindent and}
\epsilon_{\infty,j} &=& w_{\cF_j}
\end{eqnarray}
in the above procedure,
minimizes the volume of the set (\ref{eq::minvolume::nested})
subject to satisfying (\ref{eq::opt-this}).
\end{corollary}
\begin{example}
Suppose that
$x_i=i/n$ and let
$B_1=[0,1/d], B_2 = (1/d,2/d]$, $\ldots$, $B_d = ((d-1)/d,1]$.
Write
$f =( f(x_i):\ i=1, \ldots, n)$
and let $\cF$
denote the subspace of vectors $f$
that are constant over each $B_j$.
Then $\Omega_{\cF} = \sqrt{d/n}$.
The above procedure
then produces a band with width no more than
$O(\sqrt{d/n})$ with probability at least $1-\gamma$.
\end{example}
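The value $\Omega_{\cF} = \sqrt{d/n}$ in this example can be checked numerically from the definition $\Omega_\cF = \max_i \norm{\Pi_\cF e_i}/\norm{e_i}$; the norm ratio is the same in the normalized and Euclidean norms, so the ordinary Euclidean norm is used below:

```python
import math

def omega_blocks(n, d):
    """Omega_F = max_i ||Pi e_i|| / ||e_i|| for the subspace of vectors
    constant on d equal blocks. Pi e_i averages e_i over its block,
    and ||e_i||_E = 1, so the ratio is just ||Pi e_i||_E."""
    m = n // d
    best = 0.0
    for i in range(n):
        e = [0.0] * n
        e[i] = 1.0
        pe = []
        for j in range(d):
            avg = sum(e[j * m:(j + 1) * m]) / m
            pe.extend([avg] * m)
        best = max(best, math.sqrt(sum(x * x for x in pe)))
    return best

# Pi e_i puts mass 1/m = d/n on the m coordinates of i's block,
# so ||Pi e_i||_E = sqrt(m * (d/n)^2) = sqrt(d/n) for every i.
print(omega_blocks(12, 3))  # 0.5
```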
\section{Proofs}
\label{sec::proofs}
In this section, we prove the main results.
We omit proofs for a few of the simpler lemmas.
Throughout this section, we write
$x_n = O^*(b_n)$ to mean that
$x_n = O(c_n b_n)$
where $c_n$ increases at most logarithmically with $n$.
The following lemma is essentially from Section 3.3 of Ingster and Suslina (2003).
\begin{lemma}\label{lemma::mix}
Let $M$ be a probability measure
on $\mathbb{R}^n$ and let
$$
Q(\cdot) = \int P_f(\cdot) dM(f)
$$
where
$P_f(\cdot)$ denotes the measure for a multivariate
Normal with mean $f = (f_1, \ldots, f_n)$
and covariance $\sigma^2 I$.
Then
\begin{equation}
L_1(Q,P_g) \le
\sqrt{\int \int \exp\left\{ \frac{n \langle f-g, \nu-g\rangle}{\sigma^2}\right\}
dM(f)dM(\nu) - 1}.
\end{equation}
In particular, if $Q$ is uniform on a finite set $\Omega$, then
\begin{equation}
L_1(Q,P_g) \le \sqrt{
\left( \frac{1}{|\Omega|}\right)^2
\sum_{f,\nu\in\Omega} \exp\left\{ \frac{n \langle f-g, \nu-g\rangle}{\sigma^2}\right\}-1}.
\end{equation}
\end{lemma}
\begin{proof}
Let $p_f$ denote the density
of a multivariate Normal with mean
$f$ and covariance $\sigma^2 I$
where $I$ is the identity matrix.
Let $q$ be the density of $Q$:
$$
q(y) = \int p_f(y) dM(f).
$$
Then,
\begin{eqnarray}
\nonumber
\int | p_g(x) - q(x) | dx &=&
\int \frac{| p_g(x) - q(x) |}{\sqrt{p_g(x)}} \sqrt{p_g(x)} dx \\
\label{eq::ineq}
&\le& \sqrt{\int \frac{( p_g(x) - q(x) )^2}{p_g(x)} dx} =
\sqrt{\int \frac{q^2(x)}{p_g(x)} dx -1 }.
\end{eqnarray}
Now,
\begin{eqnarray*}
\int \frac{q^2(x)}{p_g(x)} dx &=&
\int \left(\frac{q(x)}{p_g(x)}\right)^2 p_g(x) dx = \E_g \left(\frac{q(x)}{p_g(x)}\right)^2 \\
&& \hskip -2cm = \int \int \E_g\left(\frac{p_f(x)p_\nu(x)}{p_g^2(x)}\right)dM(f)dM(\nu)\\
&& \hskip -2cm=\int\int \exp\left\{ - \frac{n}{2\sigma^2} ( ||f-g||^2 + ||\nu-g||^2 )\right\}
\E_g \left(\exp\left\{ \epsilon^T (f+\nu-2g)/\sigma^2\right\}\right)dM(f)dM(\nu)\\
&& \hskip -2cm=\int\int \exp\left\{ - \frac{n}{2\sigma^2} ( ||f-g||^2 + ||\nu-g||^2 )\right\}
\exp\left\{ \sum_{i=1}^n (f_i - g_i + \nu_i - g_i)^2 /(2\sigma^2) \right\}dM(f)dM(\nu)\\
&& \hskip -2cm=\int\int \exp\left\{ \frac{n\langle f-g, \nu-g\rangle}{\sigma^2}\right\}dM(f)dM(\nu)
\end{eqnarray*}
and the result follows from (\ref{eq::ineq}).
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm::basic}]
Let $N = |\Omega|$ and
let $b^2 = n\max_{f\in\Omega}||f-g||^2$.
Let $p_f$ denote the density
of a multivariate Normal with mean
$f$ and covariance $\sigma^2 I$
where $I$ is the identity matrix.
Define the mixture
$$
q(y) = \frac{1}{N}\sum_{f\in\Omega}p_f(y).
$$
By Lemma \ref{lemma::mix},
\begin{eqnarray*}
\int | p_g(x) - q(x) | dx &\le&
\sqrt{ \left(\frac{1}{N}\right)^2
\sum_{f,\nu\in\Omega} \exp\left\{ \frac{n\langle f-g, \nu-g\rangle}{\sigma^2}\right\} -1}\\
&=&
\sqrt{ \left(\frac{1}{N}\right)^2 \Biggl[ N e^{b^2/\sigma^2} + N(N-1)\Biggr]-1}\\
&\le& \sqrt{ e^{b^2/\sigma^2}/N} = \epsilon.
\end{eqnarray*}
Define two events,
$A=\{ \ell \le g \le u\}$ and
$B = \{ \ell \le f \le u, \ {\rm for\ some \ }f\in \Omega\}$.
Then,
$A \cap B \subset \{ w_n \ge a\}$
where
$$
a= \min_{f\in\Omega}||g-f||_\infty.
$$
Since
$\P_f{\ell \le f\le u}\ge 1-\alpha$ for all $f$,
it follows that
$\P_f{B} \ge 1-\alpha$ for all $f\in\Omega$.
Hence,
$Q(B) \ge 1-\alpha$.
So,
\begin{eqnarray*}
\P_g{w_n \ge a} &\ge& \P_g{A\cap B} \ge Q(A \cap B) - \epsilon = Q(A) + Q(B) - Q(A\cup B)- \epsilon\\
&\ge & Q(A) + Q(B) - 1- \epsilon \ge Q(A) + (1-\alpha) - 1- \epsilon \ge \P_g{A} + (1-\alpha) - 1- 2\epsilon\\
&\ge& (1-\alpha) + (1-\alpha) - 1- 2\epsilon = 1-2\alpha - 2\epsilon.
\end{eqnarray*}
So,
$\E_g(w_n) \ge (1-2\alpha - 2\epsilon) a$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm::finite-sample-lower-bound}]
Let $g\in\mathbb{R}^n$ be arbitrary, let
$$
a_n = \sigma \sqrt{\log( n \epsilon^2)}
$$
and define
$$
\Omega = \Biggl\{ g+(a_n,0,\ldots,0), \ g+(0,a_n,\ldots,0), \ \ldots, \ g+ (0,0,\ldots,a_n) \Biggr\}.
$$
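With this choice,
$$
b^2 = n\max_{f\in\Omega}||f-g||^2 = a_n^2 = \sigma^2\log(n\epsilon^2),
\qquad\mbox{so that}\qquad
\frac{e^{b^2/\sigma^2}}{N} = \frac{n\epsilon^2}{n} = \epsilon^2.
$$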
Then
the conditions of Theorem \ref{thm::basic} are satisfied
with $N=n$, and
hence
\begin{equation}
\E_g(W)\ge (1-2\alpha - 2\epsilon)\min_{f\in\Omega}||g-f||_\infty =
(1-2\alpha - 2\epsilon) a_n.
\end{equation}
This is true for each $g$ and hence
(\ref{eq::rnbound}) follows.
The last statement of the theorem follows from
standard Gaussian tail inequalities.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm::finite-sample-lip}]
We construct the appropriate set $\Omega$
and apply Theorem \ref{thm::basic}.
For simplicity, we build $\Omega$ around
$g=(0,\ldots, 0)$, the extension to arbitrary $g$ being straightforward.
Set $a = a_n$ from the statement of the theorem, and
define
$$
F(x) = \left\{
\begin{array}{ll}
Lx & 0 \le x \le a/L \\
2a -Lx & a/L \le x \le 2 a /L.
\end{array}
\right.
$$
Note that
$F\in {\cal F}(L)$
and that $F$ minimizes $||F||_2$ among all $F\in {\cal F}(L)$
with $||F||_\infty =a$.
For simplicity, assume that
$2aN/L = 1$ for some integer $N$, and set $\delta = 2a/L$.
Define
$F_1(\cdot) = F(\cdot)$,
$F_2(\cdot) = F(\cdot-\delta)$,\ldots,
and $F_N(\cdot) = F(\cdot-(N-1)\delta)$.
Let
$\Omega(a) = \{ f_1, \ldots, f_N\}$
where
$f_j = (F_j(x_1), \ldots, F_j(x_n))$.
Now
$$
n ||f_j||^2 \le \frac{2 n a^3}{3L}
$$
and so
$$
\frac{e^{n ||f_j||^2/\sigma^2}}{N} \le \epsilon^2.
$$
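(Recalling that $N = L/(2a)$, the last display asks that $e^{2na^3/(3L\sigma^2)} \le L\epsilon^2/(2a)$; up to constants this holds when $a^3$ is of order $\sigma^2 L \log n/n$, consistent with the $(\log n/n)^{1/3}$ rate achieved below.)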
Now apply Theorem \ref{thm::basic}.
To prove the last statement,
we note that it is well known that if $\hat{F}$
is a kernel estimator with triangular kernel and bandwidth
$h=O(n^{-1/3})$ then
$$
\sup_{f\in \Theta} E_F (||\hat{F} - F||_\infty) \le
C \left(\frac{ \log n}{n}\right)^{1/3} \equiv C_n
$$
for some $C>0$.
Then $B=(\hat{F}-\frac{C_n}{\alpha},\hat{F}+\frac{C_n}{\alpha})$
(restricted to $x_i=i/n$) is valid by Markov's inequality
and has the rate $a_n$.
\end{proof}
\begin{proof}[Proof outline of Theorem \ref{thm::sobolev}]
We will use the fact that an appropriately chosen wavelet basis
forms a basis for $\cF$.
Let
$$
J_n \sim \log_2 \left( \frac{n^{1/(2p+1)}}{\log n} \right),
$$
$$
b_n = \frac{\sigma}{\sqrt{n}}\sqrt{\log (2^{J_n}\epsilon^2)}
$$
and
$$
F(x) = b_n 2^{J_n/2}\psi(2^{J_n}x)
$$
where
$\psi$ is a compactly supported mother wavelet.
Then
$F^{(p)} = b_n 2^{J_n/2}2^{p J_n}\psi^{(p)}(2^{J_n}x)$ so that
$\int (F^{(p)})^2 < c^2$ for all large $n$
so that $F\in\cF$.
Let $f=(F(x_1),\ldots, F(x_n))$.
Then,
$$
||f||_\infty = b_n 2^{J_n/2} = O^*(n^{-p/(2p+1)})
$$
and
$\sqrt{n}||f||_2 \sim \sqrt{n} b_n$.
Let
$f_k = (F(x_1 - k\Delta),\ldots, F(x_n - k\Delta))^T$
where $\Delta$ is just large enough so that
the $f_k$'s are orthogonal.
Hence,
$\Delta \approx 1/N$ where
$N \sim 2^{J_n}$.
Finally, set
$\Omega = \{ f_1, \ldots, f_N\}$.
Then,
$$
\frac{e^{n ||f||^2/\sigma^2}}{N} = \frac{e^{n b_n^2/\sigma^2}}{2^{J_n}} \le \epsilon^2
$$
for each $f\in\Omega$, since $n b_n^2 = \sigma^2 \log(2^{J_n}\epsilon^2)$ by the definition of $b_n$.
The lower bound follows from Theorem \ref{thm::basic}.
A fixed-width procedure that achieves the bound is
$$
\ell_i = \hat{f}_i - c_n z_{\alpha/n},\ u_i = \hat{f}_i + c_n z_{\alpha/n}.
$$
where
$\hat{f}_i = \hat{F}(x_i)$,
$$
\hat{F}(x) = \sum_j \hat\alpha_j \phi_j(x) + \sum_{j=1}^J \sum_k \hat\beta_{jk}\psi_{jk}(x),
$$
$\hat\alpha_j = n^{-1}\sum_i Y_i\phi_j(x_i)$,
$\hat\beta_{jk}= n^{-1}\sum_i Y_i\psi_{jk}(x_i)$
and
$c_n = \sqrt{\max_x {\rm Var}(\hat{F}(x))}$.
\end{proof}
\begin{proof}[Proof outline of Theorem \ref{thm::besov}]
Again, we use the fact that an appropriately chosen wavelet basis
forms a basis for $\cF$.
Let
$$
J_n \sim \frac{ \log_2 \frac{c \sqrt{n}}{\sigma \sqrt{\log 2^J \epsilon^2}}}
{\xi + \frac{1}{2} - \frac{1}{p}}.
$$
Let
$$
a_n = \frac{\sigma}{\sqrt{n}} \sqrt{ \log 2^J \epsilon^2}
$$
and define
$F(x) = a_n 2^{J/2} \psi(x)$,
where
$\psi$ is a compactly supported mother wavelet.
Then,
$||f|| =a_n$,
$||f||_\infty =a_n 2^{J/2}$,
and
$||F||_{p,q}^\xi \le c-\delta$ for all large $n$.
Take $\Omega$ around $g$ to be non-overlapping translations of $F$ added to $g$.
Then $N \sim 2^J$
and conditions of Theorem \ref{thm::basic} hold.
Moreover,
$$
a_n = O^*( n^{-1/(1/p - \xi - 1/2)}).
$$
The bound is achieved
by Markov's inequality applied to the soft-thresholded wavelet estimator
with universal thresholding.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma::universal::Q}]
$Q$ is the solution, with respect to $c$, to
$\xi = 1- F_{0,m}(r(c))$
where
the function
$r(c)=F^{-1}_{c\sqrt{m},m}(\beta)$
is monotonically increasing in $c$.
Also, $F_{0,m}(r(0))=\beta$ and
$F_{0,m}(r(\infty))=1$
so a solution exists since
$0 < \beta < 1-\xi < 1$.
To bound $Q$ from above, it suffices to find $c$ such that
\begin{equation}
F^{-1}_{c\sqrt{m},m}(\beta) \ge F^{-1}_{0,m}(1-\xi).
\end{equation}
\relax From Birg\'e (2001) we have
\begin{eqnarray}
F^{-1}_{z,d}(u) &\le& z+d+2\sqrt{(2z+d)\log(1/(1-u))} + 2\log (1/(1-u))\\
F^{-1}_{z,d}(u) &\ge& z+d-2\sqrt{(2z+d)\log(1/u)}.
\end{eqnarray}
Hence,
\begin{eqnarray}
F^{-1}_{c\sqrt{m},m}(\beta) &\ge& m + c\sqrt{m} - 2 \sqrt{(2 c \sqrt{m} + m) \log\frac1\beta} \\
F_{0,m}^{-1}(1-\xi) &\le& m + 2 \sqrt{m \log\frac1\xi} + 2\log\frac1\xi.
\end{eqnarray}
It suffices to find $c$ that satisfies
\begin{equation}
m + c\sqrt{m} - 2 \sqrt{(2 c \sqrt{m} + m) \log\frac1\beta}
\ge m + 2 \sqrt{m \log\frac1\xi} + 2\log\frac1\xi,
\end{equation}
and for this it suffices that
\begin{equation}
c \ge 2\sqrt{\left(\frac{2c}{\sqrt{m}} + 1\right)\log\frac1\beta} + 2\left(\sqrt{\log\frac1\xi} + \log\frac1\xi\right).
\end{equation}
The right hand side of the last inequality is largest when $m = 1$, and
equality can be achieved when $m = 1$ at some $\Lambda(\beta,\xi)$
for any $\beta, \xi$ satisfying the stated conditions.
Equality can be achieved then for any $m$ at some $Q(m,\beta,\xi) \le \Lambda(\beta,\xi)$.
This proves the first claim.
The second claim follows immediately by inspection.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma::min2inInfinity}]
Note that
\begin{eqnarray}
\min\Biggl\{ \norm{v}:\ v\in\cF, \norm{v}_\infty = 1 \Biggr\} &=&
\min_{v\in\cF}\frac{\norm{v}_{\phantom{\infty}}}{\norm{v}_\infty}\\
&=& \frac{1}{\max_{v\in\cF}\frac{\norm{v}_\infty}{\norm{v}_{\phantom{\infty}}}},\\
&=& \frac{1}{\max\Biggl\{ \norm{v}_\infty:\ v\in\cF, \norm{v} = 1 \Biggr\}}.
\end{eqnarray}
If $v$ solves one of these problems then
$\epsilon v$ solves the more general version in the statement of the lemma.
It now suffices to show just the second equality.
Now,
$\Omega_{\cF} = \max_i \Omega_i$
where
$$
\Omega_i = \frac{\langle e_i, \Pi_\cF e_i\rangle}{\norm{e_i}\,\norm{\Pi_\cF e_i}} =
\frac{\norm{\Pi_\cF e_i}}{\norm{e_i}}.
$$
Maximizing $f_i=e_i^T f$
for $f\in\cF$ and $\norm{f}\le 1$ is equivalent to maximizing
$n \langle e_i, f\rangle = n \langle \Pi_\cF e_i, f\rangle$. The
maximum subject to the constraint occurs at $f^\star = \Pi e_i/\norm{\Pi e_i}$.
Hence, the maximum is
$e_i^T f^\star = (\Pi e_i)^T f^\star =
n\norm{\Pi e_i}^2/\norm{\Pi e_i} =
n\,\Omega_i\,\norm{e_i} =
\sqrt{n}\,\Omega_i$,
since $\norm{e_i} = 1/\sqrt{n}$ in the normalized norm.
Maximizing over $i$ completes the proof.
\end{proof}
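As a simple check (with the normalized norm used above), take $\cF$ to be the span of the constant vector $(1,\ldots,1)$. Then $\Pi_\cF e_i = (1/n,\ldots,1/n)$, so $\Omega_i = \norm{\Pi_\cF e_i}/\norm{e_i} = 1/\sqrt{n}$ for each $i$ and $\Omega_\cF = 1/\sqrt{n}$. The minimal two-norm subject to $\norm{v}_\infty = \epsilon$ is then $\epsilon/(\sqrt{n}\,\Omega_\cF) = \epsilon$, attained by the constant vector with entries $\epsilon$, as expected.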
\begin{proof}[Proof of Lemma \ref{lemma::baraud-doover}]
We find a $P_0\in \cF_j$
and a measure $\mu$ supported on $A$ such that
$d_{\srm TV}(P_0, P_\mu) \le 2\delta$.
We then have, following Ingster (1993),
\begin{eqnarray}
\beta
&\ge& \inf_{\phi_\xi\in\Phi_\xi} P_\mu\Event{ \phi_\xi = 0 } \\
&\ge& 1 - \xi - \sup_{R:\ P_0(R) \le \xi} \left| P_0(R) - P_\mu(R) \right| \\
&\ge& 1 - \xi - \sup_R \left| P_0(R) - P_\mu(R) \right| \\
&=& 1 - \xi - \half d_{\srm TV}(P_0,P_\mu) \\
&\ge& 1 - \xi - \delta.
\end{eqnarray}
Let $\psi_1, \psi_2, \ldots, \psi_{n}$ be an orthonormal basis for $\R^n$ such
that $\psi_1,\ldots,\psi_d$ form an orthonormal basis for $\cF$.
Fix $\tau > 0$ small and let $\lambda^2 = n\epsilon^2/(n-d) + \tau^2/(n-d)$.
Define
\begin{equation}\label{eq::fE}
f_E = \lambda \sum_{s=d+1}^{n} E_s \psi_s,
\end{equation}
where $(E_s: \ s=d+1,\ldots, n)$ are independent Rademacher random variables,
that is, $\P{E_s =1} = \P{E_s =-1} = 1/2$.
Now,
$\Pi_\cF f_E = 0$ and hence $||f_E - \Pi_\cF f_E||^2 = \lambda^2 > \epsilon^2$,
and hence $f_E\in A$ for each choice of the Rademachers.
Let $P_\mu = \E(P_E)$
where $P_E$ is the distribution under $f_E$
and the expectation is with respect to the Rademachers.
Choose $f_0 \in \cF$ and let $P_0$ be the corresponding
distribution.
As in Baraud, we use the bound
\begin{equation}
d_{\srm TV}(P_\mu,P_0) \le \sqrt{ \E_0 \left(\frac{dP_\mu}{d P_0}(Y)\right)^2 - 1 }.
\end{equation}
We take $f_0 = (0,\ldots, 0) \in \cF$ and so
\begin{eqnarray}
\left(\frac{dP_\mu}{d P_0}(Y)\right)
&=& \E_E \left(\exp\left\{-\frac{1}{2}\lambda^2(n-d) +
\lambda\sum_{s=d+1}^{n} E_s \sum_i Y_i \psi_{si}\right\}\right)\\
&=& e^{-\lambda^2(n-d)/2}\,\prod_{s=d+1}^{n} \cosh(\lambda (Y\cdot \psi_s)).
\end{eqnarray}
Since
$\E_0 \cosh^2 (\lambda (Y\cdot\psi_s)) = e^{\lambda^2}\cosh(\lambda^2)$
and $\cosh(x) \le e^{x^2/2}$
we have
\begin{eqnarray}
\E_0 \left(\frac{dP_\mu}{d P_0}(Y)\right)^2
&=& \left(\cosh(\lambda^2)\right)^{n-d} \\
&\le& e^{(n-d)\lambda^4/2} \\
&=& \exp\left(\frac{n^2}{2(n-d)} \epsilon^4 + \frac{\tau^4}{2(n-d)} + \frac{n}{n-d}\tau^2 \epsilon^2\right).
\end{eqnarray}
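As $\tau \to 0$, the bound becomes $d_{\srm TV}(P_\mu,P_0) \le \sqrt{\exp\{n^2\epsilon^4/(2(n-d))\} - 1}$, which is at most $2\delta$ exactly when
$$
\frac{n^2 \epsilon^4}{2(n-d)} \le \log(1 + 4\delta^2),
\qquad\mbox{that is, for}\qquad
\epsilon = \frac{(n-d)^{1/4}}{\sqrt{n}}\left(2\log(1+4\delta^2)\right)^{1/4},
$$
matching the choice of $\epsilon$ in Lemma \ref{lemma::baraudUj}.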
By the definition of $\epsilon$ (in terms of $\delta$),
$\beta \ge 1 - \xi - \delta + O(\tau)$,
and because this holds for every $\tau$, the result follows.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma::mod}]
Let $f,g \in A$
be such that $||f-g||_p \le \epsilon$.
Then,
\begin{eqnarray}
\P_g{ L \le f \le U } \hskip -0.75in &&\nonumber\\
&=& \P_f{ L \le f \le U } + \P_g{ L \le f \le U } - \P_f{ L \le f \le U } \\
&\ge& \P_f{ L \le f \le U } - d_{\srm TV}(P_f,P_g) \\
&\ge& 1 - \alpha - M_p(||f-g||_p,A) \\
&\ge& 1 - \alpha - M_p(\epsilon(f,p),A).
\end{eqnarray}
We also have that $\P_g{L \le g \le U} \ge 1 - \alpha$.
Hence,
\begin{eqnarray}
\P_g{L \le g \le U, L \le f \le U} \hskip -0.75in &&\nonumber\\
&\ge& \P_g{ L \le g \le U } + \P_g{ L \le f \le U } - 1 \\
&\ge& 1 - \alpha + 1 - \alpha - M_p(\epsilon(f,p),A) - 1 \\
&\ge& 1 - 2\alpha - M_p(\epsilon(f,p),A).
\end{eqnarray}
The event
$\Event{L \le g \le U, L \le f \le U}$
implies that $W \ge \norm{g- f}_\infty$.
Hence,
\begin{eqnarray*}
\P_f{W > ||f-g||_\infty} &\ge & 1 - 2\alpha - M_p(\epsilon(f,p),A) \\
&\ge& 1 - 2\alpha - M_p(\epsilon,A).
\end{eqnarray*}
It follows then that
\begin{equation}
\P_f{W > \epsilon(f,\infty)} = \inf_g \P_f{W > ||f-g||_\infty}
\end{equation}
and thus
\begin{equation}
\inf_{f\in A_0} \P_f{W > \epsilon(f,\infty)} \ge
1 - 2\alpha - \sup_{f\in A_0}M_p(\epsilon(f,p),A).
\end{equation}
This proves the first claim.
But
$\epsilon(f,\infty) \ge \epsilon(f,p)$ for any $1 \le p \le \infty$.
The final claim follows immediately.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma::m2}]
Choose $f\in A_0$.
Choose $g\in A_1$
to minimize
$d_{\rm TV}(p_f,p_g)$ subject to
$||f-g||_\infty = \epsilon$.
Hence,
$d_{\rm TV}(p_f,p_g) = m_\infty(\epsilon,A_0,A_1)$.
Then,
\begin{eqnarray}
\P_f{ L \le g \le U } \hskip -0.75in &&\nonumber\\
&=& \P_g{ L \le g \le U } + \P_f{ L \le g \le U } - \P_g{ L \le g \le U } \\
&\ge& \P_g{ L \le g \le U } - d_{\srm TV}(P_f,P_g) \\
&\ge& 1 - \alpha - m_\infty(\epsilon,A_0,A_1)
\end{eqnarray}
because, by assumption, $\P_g{L \le g \le U} \ge 1 - \alpha$.
We also have that $\P_f{L \le f \le U} \ge 1 - \alpha$.
Hence,
\begin{eqnarray}
\P_f{L \le f \le U, L \le g \le U} \hskip -0.75in &&\nonumber\\
&\ge& \P_f{ L \le f \le U } + \P_f{ L \le g \le U } - 1 \\
&\ge& 1 - \alpha + 1 - \alpha - m_\infty(\epsilon,A_0,A_1)\\
&\ge& 1 - 2\alpha - m_\infty(\epsilon,A_0,A_1).
\end{eqnarray}
The event
$\Event{L \le f \le U, L \le g \le U}$
implies that $W \ge \norm{f- g}_\infty$.
Hence,
\begin{equation}
\P_f{W > ||f-g||_\infty} \ge 1 - 2\alpha - m_\infty(\epsilon,A_0,A_1).
\end{equation}
It follows then that
\begin{equation}
\sup_{f\in A_0}\P_f{W > \epsilon} \ge 1 - 2\alpha - m_\infty(\epsilon,A_0,A_1).
\end{equation}
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm::target-rate}]
First, we compute $m_\infty(w,\cF,\cF)$.
Note that for all $f\in\cF$, $d_{\srm TV}(\P_f,\P_0) = \tau(\sqrt{n}\norm{f}/\sigma)$.
Hence,
$m_\infty(w,\cF,\cF) = \tau(\sqrt{n} v/\sigma)$
where
$v = \min\{ ||f||:\ f\in\cF,\ \norm{f}_\infty = w \}$.
By Lemma \ref{lemma::min2inInfinity},
$v=w/(\sqrt{n}\Omega_\cF)$.
It follows by Lemma \ref{lemma::m2} that
\begin{equation}
\sup_{f\in\cF}\P{W > w} \ge 1 - 2\alpha - \tau\left(\frac{w}{\sigma\Omega_\cF}\right).
\end{equation}
Let $w_*= \sigma\Omega_\cF \tau^{-1}(1-2\alpha-\gamma)$.
It follows that
if $w < w_*$ then
$\inf_{f\in\cF}\P{W \le w} < 1 - \gamma$
which is a contradiction.
That the proposed band has correct coverage follows easily.
Now, $(\Pi \Pi^T)_{ii} \le \Omega^2_{\cF}$ and
$z_{\alpha/2n} \le \sqrt{c\log n}$ for some $c$
and the claim follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm::main}]
We break the argument up into three parts.
Parts I and II taken together contribute the term $v_0$ from equation (\ref{eq::v0const}) to the bounds.
The logic of both parts is the same: find a value $w_*$ such that if $w < w_*$ then $\sup_{f\in\cF} \P{W > w} > \gamma$
or, equivalently, $\inf_{f\in\cF}\P{W \le w} < 1 - \gamma$, which gives a contradiction under the assumptions of the theorem.
Part III contributes the term $v_1$ from equation (\ref{eq::v1const}) to the bounds.
It is based on using the confidence bands to construct both an estimator and a test.
Throughout the proof, we refer to the space $V \supset \cF$ defined in equation (\ref{eq::V}); this
is the set of spoilers that are within $\epsilon_2$ of $\cF$.
\smallskip
{\sf Part I.}
First, we compute $m_\infty(w,\cF,\cF)$.
Note that for all $f\in\cF$, $d_{\srm TV}(\P_f,\P_0) = \tau(\sqrt{n}\norm{f}/\sigma)$.
Hence,
$m_\infty(w,\cF,\cF) = \tau(\sqrt{n} v/\sigma)$
where
$v = \min\{ ||f||:\ f\in\cF,\ \norm{f}_\infty = w \}$.
By Lemma \ref{lemma::min2inInfinity},
$v = w/(\sqrt{n}\Omega_\cF)$.
It follows by Lemma \ref{lemma::m2} that
\begin{equation}
\sup_{f\in\cF}\P{W > w} \ge 1 - 2\alpha - \tau\left(\frac{w}{\sigma\Omega_\cF}\right).
\end{equation}
Take $w_*= \sigma\Omega_\cF \tau^{-1}(1-2\alpha-\gamma)$.
\medskip
{\sf Part II.}
\emph{Case (a.)}\enspace
$\ds \epsilon_2 \le \epsilon_\infty/\sqrt{n}$.
\enspace
First, note that
$m_\infty(w,\cF,V) = \tau(\sqrt{n}\frac{w}{\sigma\sqrt{n}}) = \tau(w/\sigma)$
for $w \le \sqrt{n}\epsilon_2$,
because the minimum two-norm for a given infinity-norm is achieved
on the coordinate axis.
Second, let $A_0=\cF$ and $A_1 = V$ in Lemma \ref{lemma::m2}.
Then, for $w \le \sqrt{n}\epsilon_2$,
\begin{equation}
\sup_{f\in\cF}\P{W > w} \ge 1 - 2\alpha - \tau\left(\frac{w}{\sigma}\right)
\end{equation}
Let $w_* = \sigma\min(\tau^{-1}(1 - 2\alpha - \gamma),\epsilon_2\sqrt{n})$,
then
$\sup_{f\in\cF}\P{W > w_*} \ge \gamma.$
\emph{Case (b.)}\enspace
$\ds \epsilon_2 > \epsilon_\infty/\sqrt{n}$. \enspace
First, note that
$m_\infty(w,\cF,V) = \tau(\sqrt{n}\frac{w}{\sigma\sqrt{n}}) = \tau(w/\sigma)$
for $w \le \epsilon_\infty$.
Second, let $A_0=\cF$ and $A_1 = V$ in Lemma \ref{lemma::m2}.
Then, for $w \le \epsilon_\infty$,
\begin{equation}
\sup_{f\in\cF}\P{W > w} \ge 1 - 2\alpha - \tau\left(\frac{w}{\sigma}\right)
\end{equation}
Let $w_* = \sigma\min(\tau^{-1}(1 - 2\alpha - \gamma),\epsilon_\infty)$,
then
$\sup_{f\in\cF}\P{W > w_*} \ge \gamma.$
\medskip
{\sf Part III.}
The argument here is based on an argument in Baraud (2004).
Let $\hat f = (U + L)/2$.
Define a rejection region
\begin{equation}
\cR = \Set{ W > w} \cup \Set{ ||\hat{f} - \Pi \hat{f}||_2 > \frac{W}{2} }.
\end{equation}
Now, for any $f\in \cF$, $f^\star=f$,
$||\hat{f} - \Pi \hat{f}||_2 \le ||\hat{f} - f||_2$ and
\begin{eqnarray}
\P_f(\cR)
&\le& \P_f{ W > w} + \P_f{ ||\hat{f} - \Pi \hat{f}||_2 > W/2}\\
&\le& \gamma + \P_f{ ||\hat{f} - \Pi \hat{f}||_2 > W/2}\\
&\le& \gamma + \P_f{ ||f - \hat{f}||_2 > W/2}\\
&=& \gamma + \P_f{ ||f^\star - \hat{f}||_2 > W/2}\\
&\le& \gamma + \P_f{ ||f^\star - \hat{f}||_\infty > W/2}\\
&\le& \gamma + \alpha
\end{eqnarray}
which bounds the type I error of $\cR$.
Now let $f$ be such that
$\norm{f - \Pi f} > \max\{w,\epsilon_2\}$.
Because $\norm{f - \Pi \hat f} \ge \norm{f - \Pi f}$,
$\norm{f - \Pi f} > \epsilon_2$ implies that $f^\star = f$.
And thus,
\begin{equation}
||\hat{f} - \Pi\hat{f}||_2
\ge ||f - \Pi\hat{f}||_2 - ||f-\hat{f}||_2
\ge w - ||f-\hat{f}||_2.
\end{equation}
Hence,
\begin{eqnarray}
\P_f(\cR^c)
&=& \P_f{ ||\hat{f} - \Pi \hat{f}||_2 \le W/2, W/2\le w/2}\\
&\le& \P_f{ ||\hat{f} - \Pi \hat{f}||_2 \le w/2, W\le w}\\
&\le& \P_f{ ||f-\hat{f}||_2 \ge w/2, w\ge W}\\
&\le& \P_f{ ||f-\hat{f}||_2 \ge W/2}\\
&=& \P_f{ ||f^\star-\hat{f}||_2 \ge W/2}\\
&\le& \P_f{ ||f^\star-\hat{f}||_\infty \ge W/2}\\
&\le& \alpha.
\end{eqnarray}
Thus, $\cR$ defines a test
for $H_0:f\in\cF$
with level $\alpha+\gamma$
whose power at alternatives
more than a distance
$\max\{w,\epsilon_2\}$ from $\cF$ is at least $1-\alpha$.
Using Lemma \ref{lemma::baraud-doover} with $\xi = \alpha + \gamma$
and $\delta = 1 - \gamma - 2\alpha$, this implies that
\begin{equation}
\max\{ w,\epsilon_2\} \ge 2 \kappa(\alpha,\gamma) (n - d)^{1/4} n^{-1/2}.
\end{equation}
The result follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm::optimal}]
The volume is minimized by making $\epsilon_\infty$ as large as possible
and
$\epsilon_2$ as small as possible.
To achieve the lower bound on the width
requires
$\epsilon_\infty \le w_\cF$ and
$\epsilon_2 \ge 2\kappa(\alpha,\gamma)(n-d)^{1/4}n^{-1/2}.$
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm::achieve}]
Let $A = \Event{T \le \chi^2_{\gamma,n-d}}$.
Then,
$$
\P_f{f^\star\notin B} = \P_f{f^\star\notin B,A} + \P_f{f^\star\notin B,A^c}.
$$
We claim that
$\P_f{f^\star\notin B,A}\le \alpha/2$
and
$\P_f{f^\star\notin B,A^c} \le \alpha/2$.
There are four cases.
\smallskip
{\em Case I.} $f\in\cF$. Then $f=f^\star$ and
$\P_f{f\notin B,A^c} \le \P_f{A^c} \le \alpha/2$.
$\P_f{f\notin B,A}\le \P_f{f\notin B} =
\P_{\Pi f}{\Pi f\notin B} \le \P_{\Pi f}{||\hat f - \Pi f||_\infty > w_{\cF}} \le \alpha/2$.
\bigskip
{\em Case II.} $f\in V-\cF$ where
$V= \{f:\ \norm{f-\Pi f}\le \epsilon_2, \, \norm{f-\Pi f}_\infty \le \epsilon_\infty\}$.
Again, $f=f^\star$.
First,
$\P_f{f\notin B,A^c} \le \P_f{||Y-f||_\infty > z_{\alpha/2n}} \le \alpha/2$.
Next, we bound
$\P_f{f\notin B,A}$.
Note that
$\hat{f} = \Pi Y \sim N(g, \sigma^2 \Pi \Pi^T)$, where $g=\Pi f$.
Then
$\hat{f}_i \sim N(g_i, \Omega_i^2)$.
Let $B_0 = (L+\epsilon_\infty, U-\epsilon_\infty)$.
Then, $\Pi f\in B_0$ implies $f\in B$ and
$\P_f{f\notin B, A}\le \P_f{\Pi f \notin B_0}\le \alpha/2$.
\bigskip
{\em Case III.} $f\notin V$,
$||f-\Pi f|| \le \epsilon_2$ and
$||f-\Pi f||_\infty > \epsilon_\infty$.
In this case, $f^\star = \Pi f$.
Then
$\P_f{f^\star,f\in B^c , A^c}\le
\P_f{f\in B^c , A^c}\le \alpha/2$.
Also,
$\P_f{f^\star,f\in B^c,A}\le \P_f{f^\star\notin B} =
\P_{\Pi f}{\Pi f\notin B} \le \P_{\Pi f}{||\hat f - \Pi f||_{\infty} > w_{\cF}} \le \alpha/2$.
\bigskip
{\em Case IV.} $f\notin V$ and $||f-\Pi f|| > \epsilon_2$.
In this case, $f^\star = f$.
But
$$
\P_f{f\notin B,A}\le \P_f{A}\le
F_{f-\Pi f,n-d}(\chi^2_{\gamma,n-d}) \le
F_{\epsilon_2,n-d}(\chi^2_{\gamma,n-d}) \le
\alpha/2
$$
and
$$
\P_f{f\notin B,A^c} \le \P_f{||Y-f||_\infty > z_{\alpha/2n}} \le \alpha/2.
$$
\bigskip
Thus, $\P_f{f^\star \not\in B} \le \alpha$.
Equation (\ref{eq::it-works}) follows since
$\P_f{T \le \chi^2_{\gamma,n-d}} \ge 1-\gamma$ for all $f\in \cF$.
\end{proof}
\begin{proof}[Proof of Lemma \ref{lemma::Modulus-of-Continuity}]
First note that if $B$ is a ball in $\mathbb{R}^n$ in any norm,
then $B - B = 2 B$.
Second, we have that
\begin{eqnarray}
\omega(u)
&=& \sup\{ |T g| :\; \norm{g}_2 \le u,\ g\in V-V\} \\
&=& \sup\{ |T g| :\; \norm{g}_2 \le u,\ g\in V(2\epsilon_2,2\epsilon_\infty)\}.
\end{eqnarray}
To see the latter equality, note that if $g,h\in V$, then we can
write $g - h = f + \delta_1 - \delta_2$ where $f \in \cF$ and $\delta_i$ are
in $B^\perp_k(0,\epsilon_k)$ for $k = 2,\infty$. Thus, $\delta_1 - \delta_2$
is in $2B^\perp_2(0,\epsilon_2) \cap 2B^\perp_\infty(0,\epsilon_\infty)$.
Set $B^*(f) = B^\perp_2(f,2\epsilon_2) \cap B^\perp_\infty(f,2\epsilon_\infty)$.
We have that
\begin{eqnarray}
\omega(\eta,\cF) &=& \sup\{ f_1:\; \norm{f}_2 \le \eta, f\in \cF\} \\
\omega(\eta, B^*(0)) &=& \sup\{ f_1:\; \norm{f}_2 \le \eta, f\in B^*(0)\}.
\end{eqnarray}
For any $g\in V(2\epsilon_2,2\epsilon_\infty)$, we can write $g = g_1 + g_2$
where $g_1 \in \cF$ and $g_2 \in B^*(0)$ and the two functions are orthogonal.
Then,
\begin{eqnarray}
\omega(u,V)
&=& \sup \Biggl\{T(g):\ g\in V(2\epsilon_2,2\epsilon_\infty),\ \norm{g}_2 \le u\Biggr\}\\
&=& \sup_{0\le c \le u} \Biggl\{T(g_1 + g_2):\ \norm{g_1}_2 \le \sqrt{u^2 - c^2}, \,\norm{g_2}_2 \le c,\Biggr. \nonumber\\
&& \phantom{\sup_{0\le c \le u} \Biggl\{T(g_1 + g_2):\ \Biggr.}
\ g_1\in \cF, g_2 \in B^*(0)\Biggr\}\\
&\le& \sup_{0 \le c \le u} \left[ \sup_{g_1\in\cF\atop\norm{g_1}_2 \le \sqrt{u^2 - c^2}} T(g_1) +
\sup_{g_2 \in B^*(0)\atop \norm{g_2}_2\le c} T(g_2) \right] \\
&=& \sup_{0 \le c \le u} \left[\omega(\sqrt{u^2 - c^2},\cF) + \omega(c,B^*(0))\right].
\end{eqnarray}
Moreover, equality can be attained for each $c$ by choosing $g_1$ and $g_2$ to be the maximizers
(or suitably close approximants thereof) of each term in the last equation.
Consequently,
\begin{equation}
\omega(u) = \sup_{0 \le c \le u} \left[\omega(\sqrt{u^2 - c^2},\cF) + \omega(c,B^*(0))\right].
\end{equation}
To derive $\omega(\eta,B^*(0))$, note that
$f = ((\eta\wedge \epsilon_2)\sqrt{n} \wedge\epsilon_\infty,0,0,\ldots,0)$
maximizes $f_1$ subject to the norm constraint.
Hence, $\omega(\eta,B^*(0)) = \min( (\eta\wedge \epsilon_2)\sqrt{n}, \epsilon_\infty )$.
For $\omega(\eta,\cF)$, let $e = (1,0,\ldots,0) \in\mathbb{R}^n$.
Recall that
$\Omega_{\cF} = \frac{\langle e, \Pi_\cF e\rangle}{\norm{e}\,\norm{\Pi_\cF e}} = \frac{\norm{\Pi_\cF e}}{\norm{e}}$,
which is between 0 and 1.
Maximizing $e^T f$ for $f\in\cF$ and $\norm{f}_2\le \eta$
is equivalent to maximizing $n \langle e, f\rangle = n \langle \Pi_\cF e, f\rangle$.
The maximum subject to the constraint occurs at $f^\star = \eta \Pi e/\norm{\Pi e}$.
Hence, $\omega(\eta,\cF) = \eta\sqrt{n} \Omega_{\cF}$.
Note that $\eta$ is in terms of the normalized two norm;
in the ``natural'' (root sum of squares) norm, the modulus would be $\omega_\natural(u,\cF) = u \Omega_{\cF}$.
It follows that
\begin{eqnarray}
\omega(u,V) \hskip -0.25in &&\nonumber\\
&=& \sup_{0 \le c \le u} [\omega(\sqrt{u^2 - c^2},\cF) + \omega(c,B^*(0))] \\
&=& \sup_{0 \le c \le u} \left[\sqrt{n}\Omega_{\cF}\sqrt{u^2 - c^2} + \min( (c\wedge \epsilon_2)\sqrt{n}, \epsilon_\infty )\right] \\
&=& \sqrt{n}\sup_{0 \le c \le u} \left[\Omega_{\cF}\sqrt{u^2 - c^2} + \min( c, \epsilon_2\wedge (\epsilon_\infty/\sqrt{n}))\right] \\
&=& \sqrt{n} \left( u \Omega \sqrt{\frac{\Omega^2}{1 + \Omega^2}} + \min( \frac{u}{\sqrt{1+\Omega^2}}, \epsilon_2\wedge (\epsilon_\infty/\sqrt{n}))\right) \\
&=& \left( u\sqrt{n} \Omega \sqrt{\frac{\Omega^2}{1 + \Omega^2}} + \min\left( \frac{u\sqrt{n}}{\sqrt{1+\Omega^2}},\, \epsilon_2\sqrt{n},\, \epsilon_\infty\right)\right)
\end{eqnarray}
because the inner supremum is attained at $c = u/\sqrt{1 + \Omega^2}$ (set the derivative of $\Omega\sqrt{u^2-c^2} + c$ with respect to $c$ to zero).
In the natural two norm, we have
\begin{equation}
\omega_\natural(u,V) =
\left( u \Omega \sqrt{\frac{\Omega^2}{1 + \Omega^2}} +
\min\left( \frac{u}{\Omega}\,\sqrt{\frac{\Omega^2}{1+\Omega^2}},\,
\epsilon_{2,\natural},\, \epsilon_\infty\right)\right).
\end{equation}
\end{proof}
Next, we prove the lower bound result generalized to a nested sequence of
subspaces. To do so, we need to prove several auxiliary lemmas.
Define for each $1 \le j \le m$,
\begin{equation}\label{eq::Uj}
U_j = \Set{ f \in \R^n:\ F^*(f) = \{\Pi_j f, f\} \ {\rm or}\ F^*(f) = \{f\} }.
\end{equation}
Referring to the definition of $V$ in equation (\ref{eq::V}), define here
$V_j = V(\cF_j,\epsilon_{2,j},\epsilon_{\infty,j})$.
\begin{lemma}\label{lemma::modulusUj}
Let $w > 0$. Then,
\begin{eqnarray}
m_\infty( w, \cF_j \cap U_j, \cF_j \cap U_j ) &=& m_\infty( w, \cF_j, \cF_j ) \\
m_\infty( w, \cF_j \cap U_j, V_j \cap U_j ) &=& m_\infty( w, \cF_j, V_j )
\end{eqnarray}
\end{lemma}
\begin{proof}
First, let $f, g \in \cF_j$ be the minimal pair for $m_\infty( w, \cF_j, \cF_j )$.
Let $\psi$ be a unit-2-norm vector in $\cF_{j} \cap \cF_{j-1}^\perp$.
Let $\lambda > \epsilon_{2,1}$ and
define
\begin{eqnarray}
\tilde f &=& \lambda \psi + f \\
\tilde g &=& \lambda \psi + g.
\end{eqnarray}
Then, $\tilde f, \tilde g \in \cF_j \cap U_j$ because
if either $f$ or $g$ were in $\cF_j \cap U_j^c$ then adding $\lambda\psi$ makes the
distance from the projection on one of the lower spaces larger than the corresponding $\epsilon_2$.
Also $d_{\srm TV}(P_{\tilde f}, P_{\tilde g}) = d_{\srm TV}(P_f,P_g)$
and $\norm{\tilde f - \tilde g}_\infty = \norm{f - g}_\infty$.
Hence, $m_\infty( w, \cF_j \cap U_j, \cF_j \cap U_j ) \le m_\infty( w, \cF_j, \cF_j )$.
But $\cF_j \cap U_j \subset \cF_j$, so
$m_\infty( w, \cF_j \cap U_j, \cF_j \cap U_j ) = m_\infty( w, \cF_j, \cF_j )$ as was to be proved.
Second, let $f \in \cF_j$ and $g\in V_j$ be the minimal pair for $m_\infty( w, \cF_j, V_j )$.
Now apply the same argument.
\end{proof}
\begin{lemma}\label{lemma::baraudUj}
Let $0 < \delta < 1 - \xi$ and
\begin{equation}
\epsilon = \frac{(n-d_j)^{1/4}}{\sqrt{n}}\,\left(2 \log(1 + 4 \delta^2)\right)^{1/4}.
\end{equation}
Define $A_j = U_j \cap \Set{f:\ \norm{f - \Pi_j f} > \epsilon}$.
Then,
\begin{equation}
\beta \equiv
\inf_{\phi_\xi\in\Phi_\xi}\sup_{f\in A_j}\P_f{\phi_\xi=0} \ge 1 - \xi - \delta
\end{equation}
where
\begin{equation}
\Phi_\xi = \Biggl\{ \phi_\xi:\ \sup_{f\in\cF_j}\P_f{\phi_\xi = 0} \le \xi \Biggr\}
\end{equation}
is the set of level $\xi$ tests.
\end{lemma}
\begin{proof}
Let $f_E$ be defined as in equation (\ref{eq::fE})
in the proof of Lemma \ref{lemma::baraud-doover}.
Let $\psi$ be a unit vector in $\cF_{j+1} \cap \cF_j^\perp$
and let $\lambda > \epsilon_{2,1}$.
Then, define $\tilde f_E = \lambda \psi + f_E$.
Now apply the proof of Lemma \ref{lemma::baraud-doover} using $f_0 = \lambda\psi$ instead of $0$.
The total variation distances among corners of the hypercube do not change and the result follows.
\end{proof}
\begin{lemma}\label{lemma::main::special}
Fix $0< \alpha < 1$ and
$0 < \gamma < 1-2\alpha$.
Suppose that for bands $B = (L,U)$
\begin{equation} \label{eq::validband::special}
\inf_{f\in U_j} \P_f{ F^*(f) \cap B \ne \emptyset }\ge 1-\alpha.
\end{equation}
Then
\begin{equation}\label{eq::Wgamma::special}
\inf_{f\in\cF_j} \P_f{W \le w} \ge 1 - \gamma
\end{equation}
implies
\begin{equation}\label{eq::the-cool-lower-bound::special}
w \ge \underline{w}(\cF_j,\epsilon_{2,j},\epsilon_{\infty,j},n,d_j,\alpha,\gamma,\sigma),
\end{equation}
where $\underline{w}$ is given in Theorem \ref{thm::main}.
\end{lemma}
\begin{proof}
To prove this lemma, we will adapt the proof of Theorem \ref{thm::main} as follows.
By Lemma \ref{lemma::modulusUj},
the argument for Parts I and II is the same with $\cF$ replaced with $\cF_j \cap U_j$
and $V$ replaced with $V_j \cap U_j$.
By replacing the reference to Lemma \ref{lemma::baraud-doover} with Lemma \ref{lemma::baraudUj},
the argument for Part III also follows exactly.
The result follows.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm::main::nested}]
The result follows directly from Lemma \ref{lemma::main::special} because
$\inf_{f\in\R^n} \P{ F^*(f) \cap B \ne \emptyset} \ge 1 - \alpha$ implies
$\inf_{f\in U_j} \P{ F^*(f) \cap B \ne \emptyset} \ge 1 - \alpha$.
\end{proof}
\begin{proof}[Proof of Theorem \ref{thm::achieve-multi}]
Note that
$\P_f{ F^\star \cap B = \emptyset} = \sum_j \P_f{ F^\star \cap B = \emptyset,\hat{J}=j}$.
We show that
$\P_f{ F^\star \cap B = \emptyset,\hat{J}=j} \le \alpha_j$ for each $j$.
There are three cases. Throughout the proof, we take $\sigma = 1$.
\emph{Case I.} $||f-\Pi_j f|| > \epsilon_{2,j}$.
Then,
\begin{eqnarray*}
\P_f{ F^\star \cap B = \emptyset,\hat{J}=j} &\le&
\P_f{\hat{J}=j} \le F_{f-\Pi_j f,n-d_j}(\chi^2_{\gamma,n-d_j})\\
&\le&F_{\epsilon_{2,j},n-d_j}(\chi^2_{\gamma,n-d_j})\\
&\le& \alpha_j
\end{eqnarray*}
due to (\ref{eq::gam-alph}).
\emph{Case II.} $||f-\Pi_j f|| \le \epsilon_{2,j}$ and
$||f-\Pi_j f||_\infty \le \epsilon_{\infty,j}$.
So,
\begin{eqnarray*}
\P_f{ F^\star \cap B = \emptyset,\hat{J}=j} &\le& \P_f{ f \notin B,\hat{J}=j}\\
&\le& \P_f{ ||f-\hat{f}||_\infty > w_{\cF_j} + \epsilon_{\infty,j}}\\
&\le& \P_f{ ||f-\Pi_j f||_\infty +||\Pi_j f-\Pi_j Y||_\infty > w_{\cF_j} + \epsilon_{\infty,j}}\\
&\le& \P_f{ ||\Pi_j f-\Pi_j Y||_\infty > w_{\cF_j}}\\
&=& \P_{\Pi_j f}{ ||\Pi_j f-\Pi_j Y||_\infty > w_{\cF_j}}\\
&\le& \alpha_j.
\end{eqnarray*}
\emph{Case III.}
$||f-\Pi_j f|| \le \epsilon_{2,j}$ and
$||f-\Pi_j f||_\infty > \epsilon_{\infty,j}$.
Now,
\begin{eqnarray*}
\P_f{ F^\star \cap B = \emptyset,\hat{J}=j}
&\le& \P_f{ \Pi_j f \notin B,\hat{J}=j}\\
&=& \P_f{ \norm{\Pi_j Y - \Pi_j f}_\infty > c_j,\hat{J}=j}\\
&\le& \P_f{ \norm{\Pi_j Y - \Pi_j f}_\infty > c_j}\\
&=& \P_{\Pi_j f}{\norm{\Pi_j Y - \Pi_j f}_\infty > c_j}\\
&\le& \alpha_j.
\end{eqnarray*}
To prove (\ref{eq::the-gam-result}),
suppose that $f\in\cF_j$.
Then,
$\P_f{\hat{J} > j} \le \gamma$.
But, as long as $\hat{J}\le j$,
$W = w_{\cF_{\hat J}}(\alpha_{\hat J}) + \epsilon_{\infty,\hat{J}} \le
w_{\cF_j}(\alpha_{j}) + \epsilon_{\infty,j}$.
The last statement follows since,
when $\epsilon_{2,j} \ge Q(n-d_j,\alpha/2,\gamma)(n-d_j)^{1/4} n^{-1/2}$
\end{proof}
\section{Discussion}\label{sec::discussion}
We have shown that adaptive confidence bands for
$f$ are possible if coverage is replaced by
surrogate coverage.
Of course, there are many other ways one could
define a surrogate.
Here, we briefly outline a few possibilities.
Wavelet expansions of the form
$$
f(x) = \sum_j \alpha_j \phi_j(x) + \sum_j\sum_k \beta_{jk}\psi_{jk}
$$
lend themselves quite naturally to the surrogate approach.
For example, one can define
$$
f^\star(x) = \sum_j \alpha_j \phi_j(x) + \sum_j\sum_k s(\beta_{jk})\psi_{jk}
$$
where $s(x) = {\rm sign}(x)(|x|-\lambda)_+$ is the usual soft-thresholding function.
For kernel smoothers and local polynomial smoothers $\hat{f}_h$ that
depends on a bandwidth $h$, a possible surrogate is
$f^\star = \E(\hat{f}_{h^\star})$
where $h^\star$ is the largest bandwidth $h$ for which
$\hat{f}_h$ passes a goodness of fit test with high probability.
In the spirit of Davies and Kovac (2001),
one could take the test to be a test for randomness
applied to the residuals.
Motivated by ideas in Donoho (1988)
we can define another surrogate as follows.
Let us switch to the problem of
density estimation.
Let $X_1, \ldots, X_n \sim F$
for some distribution $F$.
The goal is to define an appropriate surrogate band for the density $f$.
Define the smoothness functional
$S(F) = \int (f''(x))^2 dx$.
To make sure that $S(F)$ is well defined for all $F$
we borrow an idea from Donoho (1988).
Let $\Phi_h$ denote a Gaussian with standard deviation $h$ and define
$S(F) = \lim_{h\to 0}S(F\oplus \Phi_h)$
where $\oplus$ denotes convolution.
Donoho shows that $S$ is then a well-defined, convex, lower semicontinuous functional.
Let $\hat{F}_n$ be the empirical distribution function and let
$B=B(\hat{F},\epsilon_n) = \{ F:\ ||F - \hat{F}_n|| \le \epsilon_n\}$
where
$||\cdot ||$ is the Kolmogorov-Smirnov distance and
$\epsilon_n$ is the $1-\beta$ quantile of
$||U-U_n||$ where $U$ is the uniform distribution
and $U_n$ is the empirical distribution of a sample drawn from $U$.
Thus, $B$ is a nonparametric, $1-\beta$ confidence ball for $F$.
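The radius $\epsilon_n$ can be approximated by simulation. The sketch below estimates the $1-\beta$ quantile of the Kolmogorov-Smirnov statistic under the uniform distribution; the sample size and number of simulations are arbitrary choices.

```python
import numpy as np

def ks_uniform_quantile(n, beta, n_sim=2000, rng=None):
    """Monte Carlo estimate of the (1 - beta) quantile of ||U - U_n||,
    the Kolmogorov-Smirnov distance between the uniform CDF and the
    empirical CDF of n uniform draws."""
    rng = np.random.default_rng(rng)
    i = np.arange(1, n + 1)
    stats = np.empty(n_sim)
    for s in range(n_sim):
        x = np.sort(rng.uniform(size=n))
        # sup_t |U_n(t) - t| is attained at the sorted sample points
        stats[s] = max(np.max(i / n - x), np.max(x - (i - 1) / n))
    return float(np.quantile(stats, 1 - beta))

eps_n = ks_uniform_quantile(n=100, beta=0.05, rng=0)
```

For $n = 100$ and $\beta = 0.05$ the estimate is close to the classical asymptotic value $1.36/\sqrt{n} \approx 0.136$.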
The simplest $F\in B$ is the distribution
that minimizes $S(F)$ subject to
$F\in B$.
We define the surrogate $F^\star$ to be the distribution that
minimizes $S(F)$ subject to $F$ belonging to $B_F$,
where $B_F$ is a population
version of $B$.
We might then think of $F^\star$ as the simplest distribution
that is not empirically distinguishable from $F$.
A natural definition of $B_F$ might be
$B_F = \{ G:\ ||F-G|| \le \epsilon_n\}$.
But this definition only makes sense for fixed radius
confidence sets.
Another definition is
$B_F = \{ G :\ \P_F{G\in B}\ge 1/2\}$.
To summarize, we define
\begin{equation}
F^\star = {\rm argmin}_{F \in B_F} S(F)
\end{equation}
where
\begin{equation}
B_F = \Biggl\{ G :\ \P_F{G\in B(\hat{F}_n,\epsilon_n)}\ge 1/2\Biggr\}
\end{equation}
and
$B(\hat{F}_n,\epsilon_n) = \{ G:\ ||\hat{F}_n-G|| \le \epsilon_n\}$.
Let
\begin{equation}
\Gamma = \cup \{G^\star:\ G\in B(\hat{F}_n,\epsilon_n)\}.
\end{equation}
Then
\begin{equation}
\ell(x) = \inf_{F\in\Gamma}F'(x),\ \ \
u(x) = \sup_{F\in\Gamma}F'(x)
\end{equation}
defines a valid confidence band for the density of $F^\star$.
Let us also mention
average coverage (Wahba 1983; Cummins, Filloon, Nychka 2001).
Bands $(L,U)$ have average coverage if
$\P_f{L(\xi) \le f(\xi) \le U(\xi)} \ge 1-\alpha$
where $\xi\sim {\rm Uniform}(0,1)$.
A way to combine average with the surrogate idea is to enforce
something stronger than average coverage such as
$$
\P_f{L(\xi) \le f(\xi) \le U(\xi)\ {\rm and}\ \hat{f}\preceq f} \ge 1-\alpha
$$
where $\hat{f} = (L+U)/2$ and
$\hat{f}\preceq f$ means that
$\hat{f}$ is simpler than $f$ according to a
partial order $\preceq$, for example,
$f\preceq g$ if
$\int (f'')^2 \leq \int (g'')^2$.
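As a numerical illustration of this partial order (the grid size and the two example functions below are arbitrary choices), one can compare discretized versions of $\int (f'')^2$:

```python
import numpy as np

def roughness(f, a=0.0, b=1.0, m=2001):
    """Approximate the smoothness functional int_a^b (f''(x))^2 dx
    on a uniform grid, using repeated finite differences for f''."""
    x = np.linspace(a, b, m)
    h = x[1] - x[0]
    f2 = np.gradient(np.gradient(f(x), h), h)
    # drop the endpoints, where the differences are one-sided
    return float(np.sum(f2[2:-2] ** 2) * h)

# Under the partial order f <= g iff int(f'')^2 <= int(g'')^2,
# a straight line is simpler than a sine curve:
line_roughness = roughness(lambda x: 1.0 + 2.0 * x)
sine_roughness = roughness(np.sin)
line_is_simpler = line_roughness <= sine_roughness
```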
\section*{References}\hfil\break
\indent Baraud, Y. (2002).
Non-asymptotic minimax rates of testing in signal detection.
{\em Bernoulli}, 8, 577.
Baraud, Y. (2004).
Confidence balls in {G}aussian regression,
{\em The Annals of Statistics}, 32, 528--551.
Beran, Rudolf and D\"{u}mbgen, Lutz. (1998).
Modulation of estimators and confidence sets.
{\em The Annals of Statistics}, 26, 1826--1856.
Bickel, P.J. and Ritov, Y. (2000).
Non-and semi parametric statistics: compared and contrasted.
{\em J. Statist. Plann. Inference}, 91,
Birg\'e, L. (2001).
An alternative point of view on Lepski's method.
In {\em State of the Art in Probability and Statistics.}
(M. de Gunst, C. Klaassen and A. van der Vaart, eds.)
113--133, IMS, Beachwood, OH.
Cai, T. and Low, M. (2005).
Adaptive Confidence Balls.
{\em The Annals of Statistics}, 34, 202--228.
Cai, T. and Low, Mark, G. (2004).
An adaptation theory for nonparametric confidence intervals.
{\em Ann. Statist.}, 32, 1805--1840.
Chaudhuri, Probal and Marron, J. S. (2000).
Scale space view of curve estimation.
{\em The Annals of Statistics}, 28, 408--428.
Claeskens, G. and Van Keilegom, I. (2003).
Bootstrap confidence bands for regression curves
and their derivatives.
{\em The Annals of Statistics}, 31, 1852--1884.
Cummins D., Filloon T., Nychka D. (2001).
Confidence Intervals for Nonparametric Curve Estimates:
Toward More Uniform Pointwise Coverage.
{\em Journal of the American Statistical Association}, 96, 233--246.
Donoho, D. (1988).
One-Sided Inference about Functionals of a Density.
{\em Annals of Statistics}, 16, 1390--1420.
Donoho, D. (1995).
De-noising by soft-thresholding.
{\em IEEE Transactions on Information Theory}, 41, 613--627.
Donoho, D. and Liu, R. (1991).
Geometrizing Rates of Convergence, II.
{\em The Annals of Statistics}, 19, 633--667.
Donoho, D., Johnstone, I.M., Kerkyacharian G., and Picard, D. (1995).
Wavelet Shrinkage: Asymptopia.
{\em J. Roy. Statist. Soc. B}, 57, 301--369.
Eubank, R.L. and Speckman, P.L. (1993).
Confidence Bands in Nonparametric Regression.
{\em Journal of the American Statistical Association},
88, 1287--1301.
Genovese, C. and Wasserman, L. (2005).
Nonparametric confidence sets for wavelet regression.
{\em Annals of Statistics}, 33, 698--729.
Hall, P. and Titterington, M. (1988).
On confidence bands in nonparametric density estimation and regression.
{\em Journal of Multivariate Analysis}, 27, 228--254.
H\"{a}rdle, Wolfgang and Bowman, Adrian W. (1988).
Bootstrapping in nonparametric regression:
Local adaptive smoothing and confidence bands.
{\em Journal of the American Statistical Association}, 83, 102--110.
H\"{a}rdle, W. and Marron, J. S. (1991).
Bootstrap simultaneous error bars for nonparametric regression.
{\em The Annals of Statistics}, 19, 778--796.
Ingster, Y. (1993).
Asymptotically minimax hypothesis testing for nonparametric alternatives, I and II.
{\em Math. Methods Statist.}, 2, 85--114.
Ingster, Y. and Suslina, I. (2003).
{\em Nonparametric Goodness of Fit Testing Under Gaussian Models.}
Springer. New York.
Juditsky, A. and Lambert-Lacroix, S. (2003).
Nonparametric confidence set estimation.
{\em Mathematical Methods of Statistics}, 19, 410--428.
Leeb, H. and P\"otscher, B.M. (2005).
Model Selection and Inference: Facts and Fiction.
{\em Econometric Theory}, 21, 21--59.
Li, Ker-Chau. (1989).
Honest confidence regions for nonparametric regression.
{\em The Annals of Statistics}, 17, 1001--1008.
Low, Mark G. (1997).
On nonparametric confidence intervals.
{\em The Annals of Statistics}, 25, 2547--2554.
Neumann, Michael H. and Polzehl, J\"{o}rg. (1998).
Simultaneous bootstrap confidence bands in nonparametric regression.
{\em Journal of Nonparametric Statistics}, 9, 307--333.
Robins, J. and van der Vaart, Aad. (2006).
Adaptive Nonparametric Confidence Sets.
{\em The Annals of Statistics}, 34, 229--253.
Ruppert, D. and Wand, M.P. and Carroll, R.J. (2003).
{\em Semiparametric Regression},
Cambridge University Press. Cambridge.
Sun, J. and Loader, C. R. (1994).
Simultaneous confidence bands for linear regression and smoothing.
{\em The Annals of Statistics}, 22, 1328--1345.
Terrell, G.R. and Scott, D.W. (1985).
Oversmoothed Nonparametric Density Estimates.
{\em Journal of the American Statistical Association},
80, 209--214.
Terrell, G.R. (1990).
The Maximal Smoothing Principle in Density Estimation.
{\em Journal of the American Statistical Association},
85, 470--477.
Wahba, G. (1983).
Bayesian ``confidence intervals'' for the cross-validated smoothing spline.
{\em Journal of the Royal Statistical Society, Series B, Methodological}, 45, 133--150.
Xia, Y. (1998).
Bias-Corrected Confidence Bands in Nonparametric Regression.
{\em Journal of the Royal Statistical Society. Series B}, 60, 797--811.
\end{document}
\section{Introduction}
One of the main challenges in the development of quantum technologies is how to overcome decoherence \cite{steane1998quantum,dowling2003quantum,ladd2010quantum}. Quantum systems tend to couple very easily to their external environment, losing their quantum nature and reducing to a classical state \cite{breuer2002theory,zurek2003decoherence}. It is, however, also well-known that the time scale on which a quantum state decoheres is very much a state-dependent process. For example, superpositions of macroscopically distinct states, such as Schr\"odinger cat states $ |0 \rangle^{\otimes N} + |1 \rangle^{\otimes N} $, where $ N $ is the number of qubits, collapse exponentially faster than a product state of qubits $ (|0 \rangle + |1 \rangle)^{\otimes N} $. The fragility (or conversely the robustness) of quantum states has been studied in numerous works \cite{frowis2018macroscopic,yu2002phonon,shimizu2002stability,janzing2000fragility}. The fragility of quantum states has been discussed in connection with measures defining the macroscopicity of quantum superpositions \cite{frowis2018macroscopic,frowis2012measures,dur2002effective}. The fragility of particular quantum states can be considered the flip-side of the enhanced sensitivity of such states, the classic example being NOON states, which are fundamental in the field of quantum metrology \cite{boto2000quantum,dowling2008quantum,giovannetti2011advances}.
Meanwhile, quantum information theory has provided numerous tools in order to better understand the nature of quantum states. Various quantifiers for strength of Bell correlations \cite{bell1964einstein,einstein1935can}, EPR steering \cite{wiseman2007steering}, entanglement \cite{horodecki2009quantum}, and quantum correlations \cite{ollivier2001quantum,henderson2001classical} have been proposed, each characterizing different aspects of quantum states. For example, entanglement is strictly defined as any state that is not writeable in separable form, whereas quantum correlations arise when it is impossible to disturb a quantum state with local projective measurements \cite{ollivier2001quantum}. Recently another quantifier, quantum coherence, has attracted attention as another way of characterizing quantum states \cite{baumgratz2014quantifying}. Unlike quantum correlations that require at least bipartite systems to exist, quantum coherence can occur on a single system, and is a measure of the degree of superposition \cite{radhakrishnan2016distribution,tan2016unified}.
These quantifiers form a hierarchical structure, in which a non-zero value of a quantity higher in the hierarchy implies non-zero values of the quantities below it \cite{adesso2016measures,ma2019operational}. For example, a system possessing entanglement necessarily possesses quantum correlations and coherence, but does not necessarily show Bell correlations or steering. In particular, a unified theory connecting various types of quantum correlations was proposed by Modi, Vedral, Williamson, and co-workers in Ref. \cite{modi2010unified}. Giorgi and Zambrini extended this approach to include various types of coherence in Ref. \cite{giorgi2018hallmarking}. Various quantum technological tasks rely on different properties of quantum states, hence one of the major aims of quantum information theory is to understand the
operational capability of these different resources \cite{chitambar2019quantum,winter2016operational,brandao2015reversible,radhakrishnan2017quantum,chitambar2016relating,horodecki2013quantumness,chitambar2016critical}.
How these resources behave in a dynamical context has been a focus of several works \cite{xu2010experimental,xu2010experimental2,xu2013experimental,bernardes2015experimental}, motivated by the presence of environmental decoherence in quantum technological systems.
In this study, we experimentally investigate the effect of one- and three-qubit dephasing environments on the different quantum correlations and coherences of a tripartite photonic system. We measure six quantities: (1) entanglement, (2) total coherence, (3) global coherence, (4) local coherence, (5) mutual information, and (6) classical correlations, and follow their decay dynamics under dephasing. The fragility of each quantity under dephasing is characterized by its decay rate. We note that investigations of the transient dynamics of entanglement and quantum discord have been performed in Refs. \cite{bellomo2007non,eberly2007end,lopez2008sudden,yu2009sudden, xu2010experimental,xu2010experimental2,xu2013experimental,bernardes2015experimental,mazzola2010sudden,
werlang2009robustness,maziero2009classical}. Particularly in Refs. \cite{xu2010experimental,xu2010experimental2,xu2013experimental,bernardes2015experimental}
experimental verifications of the decay dynamics have been performed. In our work we focus on studying the {\it comparative} dephasing dynamics of different
quantum properties using relative entropy measures. To observe the decay dynamics of multipartite quantum states, we generate the $ W \bar{W} $ and star states, which contain correlations and coherences at all levels. Such states are uniquely suited for examining multiple quantum properties simultaneously.
\begin{figure*}
\includegraphics[width=\linewidth]{fig1}
\caption{Experimental setup for the preparation, dephasing, and measurement of the (a) $W \bar{W}$ state and (b) star state. In (a), the down-converted
photons are collected by a single fiber coupler.
The output coupler before the first polarizing beam splitter (PBS) is mounted on a translational stage to make fine adjustments with the arrival time of the
photons. Each beam splitter (BS) consists of one
$0^\circ $ plate BS and a $45^\circ$ mirror in its reflection path. The mirror introduces
a phase shift of $\pi$ between $|H \rangle$ and $|V \rangle$, which compensates the phase shift introduced by the BS. Such a setup makes the reflectivity more polarization independent than a cube BS. Since this requires no phase modulation, the setup can be stable over several days. The final triggered photon is detected using a half-wave plate (HWP), a PBS, and a detector.
Each photon of a $W \bar{W}$ state is analyzed using a polarization measurement system consisting of a quarter wave plate (QWP), HWP, PBS and two fiber coupled single photon detectors. (b) In each arm of the EPR pairs, one TC-crystal is used for temporal compensation and one SC-crystal is used for
spatial compensation, through which the two possible ways of generating photon pairs (first or second crystal in sandwiched BBO) are made
indistinguishable. The two extraordinarily down-converted photons produced by the cascaded sandwich beam
source are superposed on a PBS. The arrival times of the photons are adjusted with prisms. Further details on the experiment can be found in the
Supplementary Material. \label{fig1} }
\end{figure*}
\section{Photonic state generation}
\subsection{$W \bar{W}$ and Star states}
In this study we generate and study the dynamics of two quantum states under dephasing. The first state is the $W \bar{W}$ state defined as
\begin{align}
|W \bar{W} \rangle &= \frac{1}{\sqrt{2}} \left( |W \rangle + |\bar{W} \rangle \right) \\
|W \rangle & = \frac{1}{\sqrt{3}} \left( |001 \rangle + |010 \rangle + |100 \rangle \right) \\
| \bar{W} \rangle &= \frac{1}{\sqrt{3}} \left( |110 \rangle + |101 \rangle + |011 \rangle \right)
\end{align}
The $W \bar{W}$ state is an equal superposition of a standard $ W $ state and its spin-flipped version, the $ \bar{W}$ state. This type of state is chosen
because it has quantum coherence at the single qubit, bipartite and tripartite levels, as well as bipartite and tripartite quantum correlations. Such a state is a good testbed for studying quantum correlations distributed at different levels. The presence of different types of correlations is one of the reasons that $W$ states are robust under local decoherence \cite{dur2000three}.
The second state we investigate is the star state defined as
\begin{align}
|S \rangle &= \frac{1}{2} \left(|000 \rangle + |100 \rangle + |101 \rangle + |111 \rangle \right).
\label{starstatedef}
\end{align}
Like the $W \bar{W}$ state, the star state also has coherence and correlations distributed at all possible levels. However, the correlations are present in an asymmetric way for a star state, in contrast to the $W \bar{W}$ state which is symmetric for all qubits. The entanglement structure for the star state takes the form $ A \Leftrightarrow C \Leftrightarrow B $, where we have labeled the three qubits as $ ABC $ in (\ref{starstatedef}) from left to right. For example, if qubits $ A $ or $ B $ are traced out, entanglement is present in the remaining qubits. However, if qubit $ C $ is traced out, the remaining qubits are left in a separable state. We thus call qubit $C$ the central qubit, and qubits $ A $ and $ B $ the peripheral qubits. The star state is a very simple example of a graph state \cite{plesch2003entangled}; multipartite graph states are
useful for quantum error correction \cite{anders2006fast}. More details on the distribution of correlations and coherence in the $W \bar{W}$ and star states are given in the Supplementary Material.
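The claimed entanglement structure of the star state can be checked numerically. The sketch below builds both states as vectors and applies the partial-transpose criterion, which for two qubits is necessary and sufficient for separability; the numerical tolerances are arbitrary choices.

```python
import numpy as np

def ket(bits):
    """Computational-basis vector for a 3-qubit bit string, qubit A first."""
    v = np.zeros(8)
    v[int(bits, 2)] = 1.0
    return v

w     = (ket('001') + ket('010') + ket('100')) / np.sqrt(3)
wbar  = (ket('110') + ket('101') + ket('011')) / np.sqrt(3)
wwbar = (w + wbar) / np.sqrt(2)
star  = (ket('000') + ket('100') + ket('101') + ket('111')) / 2

def reduced(rho, keep):
    """Trace a 3-qubit density matrix down to the two qubits in `keep`."""
    t = rho.reshape([2] * 6)
    drop = ({0, 1, 2} - set(keep)).pop()
    return np.trace(t, axis1=drop, axis2=drop + 3).reshape(4, 4)

def min_pt_eig(rho4):
    """Smallest eigenvalue of the partial transpose on the second qubit;
    a negative value certifies entanglement, and for two qubits a
    non-negative spectrum implies separability."""
    r = rho4.reshape(2, 2, 2, 2).transpose(0, 3, 2, 1).reshape(4, 4)
    return float(np.min(np.linalg.eigvalsh(r)))

rho_star = np.outer(star, star)
bc_entangled = min_pt_eig(reduced(rho_star, (1, 2))) < -1e-6   # trace out A
ab_separable = min_pt_eig(reduced(rho_star, (0, 1))) > -1e-9   # trace out C
```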
\subsection{Experimental Preparation}
To experimentally realize the above states, polarization encoded photonic qubits
are used, where the horizontal (H) and vertical (V) polarizations are encoded as the two levels $|0 \rangle$ and $|1 \rangle$ respectively. The detailed procedure of preparing these quantum states is shown in Fig. \ref{fig1}. In our experiment we investigate the dynamics of various correlations and coherence in a tripartite quantum system which is under the influence of an external phase damping environment, realized by passing the photonic states through birefringent quartz crystals of different thicknesses. We perform two types of dephasing, where all three photons are dephased by a crystal of the same thickness, and another where only one of the photons is dephased. The dephasing on only one of the photons allows for a partial dephasing of the system, where some quantum property is retained even after complete dephasing.
The experimental setup to prepare the $W \bar{W}$ state is shown in Fig. 1(a). Two pairs of down-converted photons are simultaneously generated through a
higher-order emission of the spontaneous parametric down-conversion (SPDC) process. These four photons are collected by a single-mode fiber and then fed into a
polarizing beam splitter (PBS) where they overlap and become indistinguishable in the spatial mode. The spectral selection is realized by inserting a 3nm
interference filter after the PBS. The four photons are separated by three non-polarizing beam splitters (BS). The post selected four-fold coincidence
count certifies the generation of the four-photon Dicke state with two excitations
$|D_{4}^{2} \rangle = (|0011 \rangle + |0101 \rangle + |1001 \rangle + |0110 \rangle + |1010 \rangle + |1100 \rangle) /\sqrt{6}$.
The $W \bar{W}$ state is generated from the Dicke state by projecting one of the qubits onto the
$(|0 \rangle + |1 \rangle)/\sqrt{2}$ basis.
The star state generation scheme is shown in Fig. 1(b). Two non-maximally entangled bipartite states
$|\psi \rangle = \cos \theta |01 \rangle + \sin \theta |10 \rangle$ with the ratio
$\cos^{2} \theta : \sin^{2} \theta = 6.8554$, are required to prepare the star state. These polarization-entangled states are generated using a sandwiched-geometry beam-like type-II BBO entanglement resource. Such an entanglement resource was first devised by Zhang and co-workers in Ref. \cite{zhang2015experimental} and was later used
in Ref. \cite{wang2016experimental} to realize ten-photon entanglement. Applying single-qubit unitary operators on each qubit, the state $|\psi \rangle$ is transformed to $(|00 \rangle + |10 \rangle + |11 \rangle)/\sqrt{3}$. The transformed states are fed into the PBS to overlap them, and the Hong-Ou-Mandel interference visibility is enhanced using a 2nm band-pass filter. The second qubit of the four-qubit state generated through this process is projected onto the
$(|0 \rangle + |1 \rangle)/\sqrt{2}$ basis. By exchanging qubits $3$ and $4$ in the resulting quantum state, the star state is obtained.
\begin{table*}
\begin{center}
\begin{tabular}{|c|c|c|c|}
\hline
\text{Quantity} & \text{Reference state} & Example reference state & \text{Definition} \\
\hline
\text{Entanglement} & \text{Separable state \; $\mathcal{S}$} & $ \sum_j p_j \rho^A_j \otimes \rho^B_j \otimes \rho^C_j $ & $ E = \displaystyle{\min_{\sigma \in \mathcal{S}}} \; S(\rho \| \sigma)$\\
\hline
\text{Total Coherence} & \text{Incoherent state \; $\mathcal{I}$} & $\rho_d = \sum_{j} \langle j | \rho | j \rangle |j \rangle \langle j | $ &
$C = \displaystyle{\min_{\sigma \in \mathcal{I}}} \; S(\rho \| \sigma) $ \\
\hline
\text{Local Coherence} & \text{Incoherent state \; $\mathcal{I}$} & $\pi_d(\rho) = \rho^A_d \otimes \rho^B_d \otimes \rho^C_d $ &
$ C_{L} = \displaystyle{\min_{\sigma \in \mathcal{I}}} S(\pi(\rho) \| \sigma ) $ \\
\hline
$\underset{\text{(Total correlations)}}{\text{Mutual Information}}$ & \text{Product state \; $\mathcal{P}$} & $ \pi(\rho) = \rho^A \otimes \rho^B \otimes \rho^C $ &
$T= \displaystyle{\min_{\sigma \in \mathcal{P}}} \; S(\rho \| \sigma) $ \\
\hline
\text{Classical correlation} &\text{Product state \; $\mathcal{P}$} & $ \pi(\rho_d) $ &
$ K = \displaystyle{\min_{\sigma \in \mathcal{P}}} S(\rho_d \| \sigma ) $ \\
\hline
\text{Hookup} & \text{Incoherent product states $\bar{\mathcal{I}}$} & $ \pi_d(\rho) = \pi(\rho_d) $ &
$M = \displaystyle{\min_{\sigma \in \bar{\mathcal{I}}}} \; S(\rho \| \sigma) $ \\
\hline
\end{tabular}
\end{center}
\caption{List of properties of a quantum state $\rho$, together with their reference states and definitions. \label{table1} }
\end{table*}
\section{Measures of correlations and coherence}
We measure the correlations and coherence using the unified distance-based approach of Ref. \cite{modi2010unified}. The basic idea of any distance-based approach to quantifying a quantum property is as follows. First, the set of all states that do not possess the relevant quantity is defined; these are called the reference states. For example, for entanglement, the reference states are the set of all separable states. Then, to quantify the quantum property, one uses a suitable distance measure to find the distance to the closest reference state by minimization. In our case, we choose the distance measure to be the relative entropy
\begin{equation}
S(\rho \| \sigma) = \hbox{Tr} (\rho \ln \rho - \rho \ln \sigma),
\label{REdensitymatrices}
\end{equation}
which is a popular choice due to its simplicity of computation and well-known properties \cite{vedral2002role}. The six quantities that we calculate are defined as below and summarized in Table \ref{table1}.
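A direct numerical evaluation of the relative entropy via eigendecompositions can be sketched as follows; the small clip value that guards $\ln 0$ is an implementation choice, harmless when the support of $\rho$ lies inside the support of $\sigma$.

```python
import numpy as np

def relative_entropy(rho, sigma, eps=1e-12):
    """S(rho||sigma) = Tr[rho (ln rho - ln sigma)], with the natural log,
    computed from eigendecompositions of the two density matrices."""
    def logm(m):
        w, v = np.linalg.eigh(m)
        return (v * np.log(np.clip(w, eps, None))) @ v.conj().T
    return float(np.real(np.trace(rho @ (logm(rho) - logm(sigma)))))

# Sanity checks: S(rho||rho) = 0, and for the pure state |+><+| against
# the maximally mixed state, S = ln 2.
plus = np.array([[0.5, 0.5], [0.5, 0.5]])
mixed = np.eye(2) / 2
d_self = relative_entropy(plus, plus)
d_mix = relative_entropy(plus, mixed)
```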
{\it Entanglement:} The entanglement is quantified as the minimum distance to the set of all separable states \cite{vedral1997quantifying,vedral1998entanglement}. We perform a minimization procedure to separable states taking the form $ \sum_j p_j \rho^A_j \otimes \rho^B_j \otimes \rho^C_j $, where $ p_j $ is a probability and $ \rho^{A,B,C}_j $ are density matrices on subsystems $ A, B, C $.
{\it Coherence:}
The total quantum coherence \cite{baumgratz2014quantifying} is defined as the distance to the closest incoherent state, where incoherent states take the form $ \sum_{j}p_j |j \rangle \langle j | $, with $ |j \rangle $ the product states built from the basis $ \{ |0 \rangle, |1 \rangle \} $ of $ A, B, C $. It has been shown that for the relative entropy, the closest incoherent state to a state $ \rho $ has coefficients $ p_j = \langle j | \rho | j \rangle $, hence the minimization does not need to be explicitly performed \cite{baumgratz2014quantifying} and
\begin{equation}
C(\rho) = \min_{\sigma \in \mathcal{I}} S(\rho \| \sigma) = S(\rho \| \rho_d ) = S(\rho_{d}) - S(\rho).
\label{totalcoherence}
\end{equation}
Here we defined $ \rho_d $ as the matrix $ \rho $ with all off-diagonal terms set to zero in the basis $ | j \rangle $.
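The closed form $C(\rho) = S(\rho_d) - S(\rho)$ is straightforward to evaluate numerically; the Bell-state example below is purely illustrative.

```python
import numpy as np

def vn_entropy(rho, eps=1e-12):
    """Von Neumann entropy S(rho) = -Tr[rho ln rho] (natural log)."""
    w = np.clip(np.linalg.eigvalsh(rho), eps, None)
    return float(-np.sum(w * np.log(w)))

def total_coherence(rho):
    """C(rho) = S(rho_d) - S(rho), where rho_d keeps only the diagonal
    of rho in the computational basis."""
    rho_d = np.diag(np.diag(rho))
    return vn_entropy(rho_d) - vn_entropy(rho)

# For the pure Bell state (|00> + |11>)/sqrt(2): C = ln 2.
bell = np.zeros((4, 4))
bell[np.ix_([0, 3], [0, 3])] = 0.5
c_bell = total_coherence(bell)
```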
{\it Local and global coherence:}
Quantum coherence can originate both from coherence that is localized on the subsystems and from coherence that is a collective property of the whole system \cite{radhakrishnan2016distribution,tan2016unified}. The former is called the local coherence and is found by first breaking all the correlations between the subsystems. In a similar way to the total coherence, the closest incoherent state is found by taking the diagonal form
\begin{align}
C_{L}(\rho) & = \min_{\sigma \in \mathcal{I}} S(\pi(\rho) \| \sigma ) = S(\pi(\rho) \| \pi_d (\rho) ) ,
\end{align}
where $ \pi_d (\rho) $ is the matrix $ \pi(\rho) $ but with all off-diagonal elements set to zero in the basis $ | j \rangle $. The coherence attributed to the collective nature of the system is called the global coherence and is defined as the difference of the total and local coherence
\begin{align}
C_{G} (\rho) & = C(\rho) - C_{L}(\rho) .
\end{align}
{\it Mutual Information:}
Mutual information measures the total amount of correlations, including both quantum and classical parts \cite{modi2010unified}. The set of uncorrelated states takes the form of a product state
$ \sigma^A \otimes \sigma^B \otimes \sigma^C $. It has been shown in Ref. \cite{modi2010unified} that for the relative entropy the closest product state is $ \pi (\rho) = \rho^A \otimes \rho^B \otimes \rho^C $, the product of the reduced density matrices on each subsystem $ \rho^{A,B,C} $. Hence we can write
\begin{equation}
T(\rho) = \min_{\sigma \in \mathcal{P}} S( \rho \| \sigma ) = S( \rho \| \pi(\rho)) \equiv S(\pi(\rho)) - S(\rho) .
\label{mutualinformation}
\end{equation}
The total correlations as measured by the mutual information $T$ and the total quantum coherence $C$ are not completely independent quantities. Hence there is a common region of quantumness in a system which is measured by both these quantities. This region of overlap is the amount of global coherence in the system which arises due to quantum correlations between the qubits.
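For a two-qubit state the mutual information reduces to the familiar $S(\rho^A) + S(\rho^B) - S(\rho)$, since $S(\pi(\rho)) = S(\rho^A) + S(\rho^B)$. A minimal two-qubit sketch:

```python
import numpy as np

def vn(rho, eps=1e-12):
    """Von Neumann entropy, natural log, with a clip guarding ln 0."""
    w = np.clip(np.linalg.eigvalsh(rho), eps, None)
    return float(-np.sum(w * np.log(w)))

def mutual_information_2q(rho):
    """T(rho) = S(rho_A) + S(rho_B) - S(rho) for a 2-qubit state."""
    t = rho.reshape(2, 2, 2, 2)
    rho_a = np.trace(t, axis1=1, axis2=3)   # trace out qubit B
    rho_b = np.trace(t, axis1=0, axis2=2)   # trace out qubit A
    return vn(rho_a) + vn(rho_b) - vn(rho)

# For a pure Bell state: T = ln 2 + ln 2 - 0 = 2 ln 2.
bell = np.zeros((4, 4))
bell[np.ix_([0, 3], [0, 3])] = 0.5
t_bell = mutual_information_2q(bell)
```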
{\it Classical correlations:}
For local coherence, first the correlations between the subsystems are broken, then the remaining coherence is measured. The reverse ordering can equally be performed, where first the coherence is removed from the system, then the remaining correlations are measured. The state with no coherence is $ \rho_d $, which can only contain classical correlations because it is a diagonal density matrix \cite{giorgi2018hallmarking}. In the same way as mutual information, the closest uncorrelated state is its corresponding product state,
\begin{equation}
K(\rho) = \min_{\sigma \in \mathcal{P}} S(\rho_d \| \sigma ) = S(\rho_{d} \| \pi(\rho_{d})).
\label{classicalcorrelations}
\end{equation}
{\it Hookup:}
The reference state for total coherence $ C $ is $ \rho_d $, which is a state which has no coherence, but potentially classical correlations. Meanwhile, the reference state for the mutual information $ T $ is $ \pi (\rho) $, which has no correlations but potentially coherence. One can define a quantity with a reference state that has no correlations and no coherence. This was called the ``hookup'' in Ref. \cite{giorgi2018hallmarking} and can be evaluated to be
\begin{equation}
M(\rho) = C(\rho) + K(\rho) = T(\rho) + C_{L} (\rho).
\label{hookup}
\end{equation}
A detailed overview on the various correlations and coherence is given in the Supplementary Material.
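The identity in Eq. (\ref{hookup}) follows from $\pi_d(\rho) = \pi(\rho_d)$ and can be verified numerically. The sketch below specializes the definitions of Table \ref{table1} to two qubits for brevity; the mixed example state is an arbitrary illustration.

```python
import numpy as np

def vn(rho, eps=1e-12):
    """Von Neumann entropy, natural log, with a clip guarding ln 0."""
    w = np.clip(np.linalg.eigvalsh(rho), eps, None)
    return float(-np.sum(w * np.log(w)))

def marginals(rho):
    t = rho.reshape(2, 2, 2, 2)
    return np.trace(t, axis1=1, axis2=3), np.trace(t, axis1=0, axis2=2)

def quantities(rho):
    """C, K, T and C_L of Table 1, specialized to two qubits."""
    rho_d = np.diag(np.diag(rho))
    ra, rb = marginals(rho)
    pi = np.kron(ra, rb)             # pi(rho)
    pi_d = np.diag(np.diag(pi))      # pi_d(rho) = pi(rho_d)
    C = vn(rho_d) - vn(rho)          # total coherence
    K = vn(pi_d) - vn(rho_d)         # classical correlations
    T = vn(pi) - vn(rho)             # mutual information
    C_L = vn(pi_d) - vn(pi)          # local coherence
    return C, K, T, C_L

bell = np.zeros((4, 4))
bell[np.ix_([0, 3], [0, 3])] = 0.5
rho = 0.7 * bell + 0.3 * np.diag([0.0, 1.0, 0.0, 0.0])
C, K, T, C_L = quantities(rho)
hookup_gap = abs((C + K) - (T + C_L))    # vanishes, per Eq. (hookup)
```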
\begin{figure*}
\includegraphics[scale=0.060]{fig2}
\caption{Tomographic reconstruction of the density matrices of the star and $W \bar{W}$ states for various thicknesses of quartz plates.
The amount of dephasing is controlled by the quartz plate thickness $ \ell $. Only the real part of the density matrix elements are shown,
and the imaginary parts are consistent with zero for all thicknesses (see Supplementary Material).
The theoretical density matrix for each dephasing time is shown as a transparent histogram, and the fidelities are marked as a percentage, along with the error estimate.
Dephasing rates of $ \Gamma = 2.21 \times 10^{-5} \lambda_{0}^{-2} $ for the $ W \bar{W} $ and
$ \Gamma = 2.06 \times 10^{-5} \lambda_{0}^{-2} $ for star states are used, with $\lambda_{0} = 780$ nm.
\label{tomography} }
\end{figure*}
\section{Density matrix evolution under dephasing}
\subsection{Tomography reconstruction of states}
Fig. \ref{tomography} shows the tomographic reconstructions of the star and $ W \bar{W} $ states with various amounts of dephasing. In the case where the dephasing is applied to all the photons, the density matrix approaches its diagonal form for larger values of $ \ell $, the thickness of the quartz plate, as expected. In the case where dephasing is applied to only one of the photons, off-diagonal terms remain, since the state is only partially dephased. This is due to the nature of the star and $ W \bar{W} $ states used, which contain types of coherence other than completely tripartite coherence (such as that in a GHZ state).
The tomographically reconstructed density matrix is compared to the theoretically calculated density matrix according to a dephasing channel for each qubit defined as
\begin{align}
\rho \rightarrow (1- p(\ell)) \rho + p(\ell) \sigma_z \rho \sigma_z ,
\label{dephasingch}
\end{align}
where $p(\ell) = [1-\exp(-\Gamma \ell^{2})]/2$ (see Supplementary Material). We obtain fidelities better than $ 93 \% $ between the reconstructed and theoretical states for all dephasing values.
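The channel of Eq. (\ref{dephasingch}) is easy to simulate for a single qubit. The sketch below uses the star-state rate $\Gamma$ quoted above; the plate thickness is an arbitrary example value. Diagonal elements are untouched, while off-diagonals shrink by exactly $\exp(-\Gamma \ell^2)$.

```python
import numpy as np

GAMMA = 2.06e-5                  # star-state rate from the text, in 1/lambda_0^2
SZ = np.diag([1.0, -1.0])        # Pauli z

def p_of_ell(ell, gamma=GAMMA):
    """Flip probability p(l) = [1 - exp(-Gamma l^2)]/2 of the channel."""
    return 0.5 * (1.0 - np.exp(-gamma * ell**2))

def dephase_qubit(rho, ell, gamma=GAMMA):
    """Single-qubit phase-damping channel rho -> (1-p) rho + p sz rho sz."""
    p = p_of_ell(ell, gamma)
    return (1.0 - p) * rho + p * SZ @ rho @ SZ

rho0 = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
rho_l = dephase_qubit(rho0, ell=150.0)
```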
\subsection{Decay of correlations and coherence with dephasing on all qubits}
Using the tomographically reconstructed density matrices we calculate the various quantities summarized in Table \ref{table1}. First we discuss the
dephasing dynamics of the correlations in a $W \bar{W}$ state, as shown in Fig. \ref{fig3}(a) and (b). We observe that all quantities decay to zero
for large dephasing, except the mutual information $T$ and classical correlations $K$, which saturate to finite values. This is due to the dephasing
removing all coherence from the system, such that the state
\begin{align}
\rho_d &= \frac{1}{6} \Big( |001\rangle \langle 001| + |010\rangle \langle 010| + |100\rangle \langle 100| \nonumber \\
&\quad + |110\rangle \langle 110| + |101\rangle \langle 101|+ |011\rangle \langle 011| \Big)
\end{align}
is progressively approached. This is a classically correlated state and hence the mutual information only contains classical correlations $ T = K $ as
observed, and all other quantum properties decay to zero.
In Fig. \ref{fig3}(b), we see that the global coherence starts at a larger value than the local coherence, but the global coherence decays faster
than the local coherence. This indicates that the local coherence is more robust in the presence of dephasing than the global
coherence.
To examine this point in more detail, we plot the decay rates for the various quantities in Fig. \ref{fig3}(c). Due to the Gaussian nature of the dephasing
channel (\ref{dephasingch}), we expect the quantum properties to also approximately follow a Gaussian form $ \propto \exp( - \Gamma \ell^{2} ) $,
hence the decay rate
is the negative slope on a semilog plot against $ \ell^2 $. Of all the
quantum properties the fastest decay is for entanglement. The next fastest decay rate is displayed by global quantum coherence, followed by the total
coherence. The very slow decay of mutual information is because it is composed of both
quantum correlations and classical correlations. While the quantum correlations decay due to the environment, the classical correlations
remain unchanged, since the dephasing acts in the classical basis $ |0 \rangle, |1 \rangle $. Likewise, the local coherence can be seen to decay more slowly than the total coherence. These results generally show that quantities related to collective effects, such as entanglement and global coherence, tend to decay at a faster rate than classical or local quantities.
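The fitting procedure itself can be sketched on synthetic data: a quantity decaying as $Q_0 \exp(-\Gamma \ell^2)$ is linear in $\ell^2$ on a semilog plot, and the slope recovers $-\Gamma$. The amplitude and grid below are arbitrary; the rate is the $\Gamma(C_G)$ value quoted in the caption of Fig. \ref{fig3}.

```python
import numpy as np

gamma_true = 6.6e-5                       # e.g. Gamma(C_G), units of 1/lambda_0^2
ell = np.linspace(0.0, 200.0, 21)
q = 0.9 * np.exp(-gamma_true * ell**2)    # synthetic decay curve

# log q is linear in ell^2; the slope of a degree-1 fit is -Gamma
slope, intercept = np.polyfit(ell**2, np.log(q), 1)
gamma_fit = -slope
```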
The star state generally shows similar behavior, as seen in Fig. \ref{fig3}(d) and (e). Here again the mutual information and classical correlations saturate towards a non-zero value, according to the classical correlations in the state
\begin{align}
\rho_d = \frac{1}{4} \left( |000\rangle \langle 000| + |100\rangle \langle 100| + |101\rangle \langle101| + |111\rangle \langle 111| \right) .
\end{align}
All other quantities decay to zero, in a similar way to the $ W \bar{W} $ state. The total coherence is smaller in the star state due to the smaller number of terms in the superposition. Nevertheless, as seen by evaluating the decay rates in Fig. \ref{fig3}(f), the entanglement shows the greatest rate of decrease, followed by the global and total coherences. The mutual information and local coherence decay with the slowest rates, similar to the $ W \bar{W} $ state. Thus, despite the rather different structure of the states, a consistent picture emerges once the decay rates are examined.
\begin{figure*}
\includegraphics[width=\linewidth]{fig3}
\caption{Decay of quantum properties for the (a)(b)(c) $W \bar{W}$ and (d)(e)(f) star state under three qubit dephasing. The various quantum properties are the mutual information $ T $, total coherence $ C $, global coherence $ C_G $, local coherence $ C_L $, entanglement $ E $ and the classical correlations $ K $. In (a)(b)(d)(e), the exponential decay of these properties is shown as a function of the thickness of the quartz plate $\ell$
(units of $\lambda_{0} =780$ nm), with the solid lines showing the theoretical predictions.
In (c)(f), we replot the same curves on a semilog plot, with the square of the thickness of the quartz plate along the $x$-axis and the physical properties along the $y$-axis; here the solid lines are linear fits to the experimental data, whose slopes give the decay rates of the quantum properties. In all figures, the experimental data are denoted by points, with error bars obtained through a simulation of the photon statistics.
Fitted values of the decay rates (in units of $ 10^{-5} \lambda_{0}^{-2} $) are (c) $ \Gamma(E) = 10.9, \Gamma(C_G) = 6.6, \Gamma(C) = 6.1, \Gamma(C_L) = 5.6, \Gamma(T) = 4.0 $; (f) $ \Gamma(E) = 9.2, \Gamma(C_G) = 5.2, \Gamma(C) = 4.8, \Gamma(C_L) = 3.9, \Gamma(T) = 3.0 $. \label{fig3}}
\end{figure*}
\begin{figure*}
\includegraphics[width=\linewidth]{fig4}
\caption{Decay of quantum properties of the (a),(b),(c) $W \bar{W}$ and (d),(e),(f) star state with dephasing of one qubit. For the $W \bar{W}$ state, qubit $ B $ is dephased, while for the star state qubit $ C $ is dephased. The labeling is the same as for Fig. \ref{fig3}. Points are experimental data, and the lines are theoretical predictions in (a)(b)(d)(e). In (c)(f) the lines are fits to the data. Fitted values of the decay rates (in units of $ 10^{-5} \lambda_{0}^{-2} $) are (c) $\Gamma(C_G) = 2.2, \Gamma(C) = 1.8, \Gamma(T) = 1.7, \Gamma(E) = 1.7, \Gamma(C_L) = 1.3 $; (f) $ \Gamma(E) = 3.6, \Gamma(C_G) = 2.0, \Gamma(C) = 1.3, \Gamma(T) = 1.3, \Gamma(C_L) = 0.2 $. \label{fig4} }
\end{figure*}
\subsection{Decay of correlations and coherence with one qubit dephasing}
One way of understanding the faster decay of collective quantities such as entanglement and global coherence is that they are exposed to the dephasing effects from multiple qubits. This is in contrast to quantities that are localized on each qubit, such as the local coherence, which is affected only by the dephasing acting on that qubit. In this picture, if the dephasing is applied to only one qubit, then we might expect the rates for all quantities to be more similar. To test this hypothesis, we also perform dephasing on one qubit and investigate its effect on the various quantities as before.
The decay of various quantities for the $W \bar{W}$ state due to dephasing is shown in Fig. \ref{fig4}(a) and (b).
Due to the symmetric nature of the state, dephasing any one of the three qubits leads to the same result, hence in our case qubit $B$ is dephased.
In this case all quantities saturate to a non-zero value, which is characteristic of the $W \bar{W}$ state. As is well known, dephasing a single qubit of a $W$ state only partially removes the entanglement from the system, and the remaining qubits stay partially entangled. This means that both quantum correlations and coherence are preserved in the system. Due to the quantum correlations that are preserved in this case, the amounts of correlation and coherence are always larger than the amount of classical correlations, in contrast to the three qubit dephasing case.
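This partial survival of entanglement can be illustrated with a small numerical sketch. The ordinary $W$ state is used here as a stand-in for the experimental state, and entanglement is witnessed by the negativity rather than the relative entropy measure used in the paper; both choices are purely for ease of illustration.

```python
import numpy as np

# |W> = (|100> + |010> + |001>)/sqrt(3), with qubit ordering A, B, C.
w = np.zeros(8)
w[[4, 2, 1]] = 1 / np.sqrt(3)
rho = np.outer(w, w)

# Complete dephasing of qubit B in the {|0>,|1>} basis:
# rho -> (rho + Z_B rho Z_B)/2 removes coherences between B = 0 and B = 1.
Z = np.diag([1.0, -1.0])
ZB = np.kron(np.kron(np.eye(2), Z), np.eye(2))
rho_deph = 0.5 * (rho + ZB @ rho @ ZB)

def negativity_A(r):
    """Negativity across the A|BC cut: minus the sum of the negative
    eigenvalues of the partial transpose on qubit A."""
    rt = r.reshape(2, 4, 2, 4).transpose(2, 1, 0, 3).reshape(8, 8)
    evals = np.linalg.eigvalsh(rt)
    return float(-evals[evals < 0].sum())

print(negativity_A(rho) > 0, negativity_A(rho_deph) > 0)  # -> True True
```

The fully dephased state retains a negativity of $1/3$ across the $A|BC$ cut, reflecting the residual entanglement between qubits $A$ and $C$.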
The entanglement structure of the state plays a more important role in the case of star states, as seen in Fig. \ref{fig4}(d) and (e). For the star state we show the effects of dephasing on the central qubit $ C $. In this case we observe the entanglement decaying to zero for large dephasing, as expected from the discussion surrounding Eq. (\ref{starstatedef}). For dephasing on a peripheral qubit, we find that the entanglement does not decay to zero, in a similar way to the $ W \bar{W} $ state (see the Supplementary Material). Other quantities saturate to non-zero values, with the steady state value of the global coherence being higher than the amount of classical correlations in the system. This is in
contrast to the entanglement and local coherence, for which the steady state value is lower than the classical correlations. We note that, compared to the other
quantities, the local coherence exhibits minimal evolution under dephasing.
Fig. \ref{fig4}(c) and (f) show a comparison of the decay rates of the various quantities, which appear as the negative gradients on the semilog plot. We find that the decay rates no longer occur in a consistent order as before. For the $ W \bar{W} $ state, all quantities decay at a similar rate, with the global coherence giving the largest value. For the star state, on the other hand, we clearly see that the entanglement decays at the fastest rate, in a similar way to the three qubit dephasing case. We attribute this to the different structure of entanglement present in the two states. For the $ W \bar{W} $ state, all the qubits can be considered ``peripheral'' qubits, since dephasing any one of them causes only a partial loss of entanglement. Dephasing the central qubit of the star state, by contrast, destroys the entanglement very effectively, since this qubit mediates the entanglement.
Thus in this case we observe that the structure of the quantum correlations greatly affects the fragility of the state.
\section{Summary and Conclusions}
The effects of dephasing on quantum correlations and coherence were experimentally studied on photonic $W \bar{W}$ and star states, with one and three qubit dephasing. Such states have coherence and correlations of all types in a tripartite system. Using a Gaussian dephasing model, we are able to extract the effective decay rates for each state and each type of dephasing, as shown in Figs. \ref{fig3}(c) and (f) and \ref{fig4}(c) and (f). In the case that dephasing is applied to all the qubits, a consistent picture emerges, despite the different nature of the states. Here we find that
\begin{align}
\Gamma(E) > \Gamma(C_G) > \Gamma(C) > \Gamma(C_L) > \Gamma(T) > \Gamma(K) ,
\label{robustnessordering}
\end{align}
i.e. the dephasing rates occur in the order of entanglement, global coherence, total coherence, local coherence, mutual information, and classical correlations.
We thus see a clear hierarchy in the decay rates of the various quantum properties, where the collective quantities decay faster than the local and classical quantities. This can be understood as the result of collective quantities being affected by all the channels of dephasing, while a local quantity is affected only by its own local dephaser. In this way we verify the conjecture that collective quantities are more fragile than local quantities when local decoherence is applied to the whole system. For the case that only one qubit is dephased, the decay rates depend more on the structure of the quantum state. When the central qubit of a star state is dephased, we again find that entanglement is the most fragile quantity. However, when a peripheral qubit is dephased, such that entanglement is retained in the strong dephasing limit, the rate of decay is much lower. Similar results were obtained theoretically for different models of dephasing \cite{radhakrishnan2019time}.
The hierarchy between the different measurable quantities in (\ref{robustnessordering}) can be viewed as a
robustness hierarchy in terms of quantum properties as follows
\begin{equation}
\hbox{NLQC} \prec \hbox{TQC} \prec \hbox{LS} .
\end{equation}
In the above equation NLQC, TQC and LS stand for nonlocal quantum correlations, total quantum correlations and local superposition respectively,
and the notation $A \prec B$ denotes that $A$ decays faster than $B$. The NLQC is unique to quantum systems, and the
TQC (both nonlocal and local quantum correlations) consists of inter-qubit correlations distributed between the qubits.
The LS is the superposition between the levels of a qubit and hence is an intra-qubit property, localized within a qubit.
We thus find that the inter-qubit quantum properties, which are spread out between the qubits, decay much faster
than the intra-qubit quantum properties, which are relatively more robust. This suggests that in quantum information theoretic tasks it would be advantageous to use intra-qubit quantum properties as resources, as they can be preserved over longer time intervals. Converting local coherence to global coherence only when it is needed \cite{wu2018experimental} could then be used as a strategy for preserving coherence for longer times.
We note that in our approach the classical correlations are constant throughout the entire evolution. This is in contrast to the theoretical results of Refs. \cite{maziero2009classical,maziero2010system}, which were subsequently examined experimentally \cite{xu2010experimental2}. The difference originates from the different notions of classicality defined by quantum discord and quantum coherence. In quantum discord, a state is classically correlated if there exists a local measurement and a conditioned measurement, in any basis, which does not disturb the quantum state \cite{ollivier2001quantum,radhakrishnan2019quantum}. It is therefore a quantity that is invariant under local basis transformations. In contrast, coherence is a basis dependent quantity \cite{radhakrishnan2019basis}. The classical nature of the state is with respect to a particular basis choice, in our case the $ |0\rangle, |1 \rangle $ basis. Here our notion of classical correlations is in this fixed basis choice, and the dephasing removes coherence in this basis. This means that the classical correlations are always unchanged under this evolution. In the case of Refs. \cite{maziero2009classical,maziero2010system,xu2010experimental2}, classical correlations can be dynamic because of the local basis optimization that is performed in evaluating the discord. In our view, these results are not inconsistent, but arise from different notions of classicality. In our approach, there is a preferred classical basis $ |0\rangle, |1 \rangle $, which is natural to consider since this is the basis in which dephasing occurs in the system.
Another observation that can be made from Figs. \ref{fig3} and \ref{fig4} is that the amount of total quantum coherence is always higher than the entanglement present in the system. This is
because the coherence originates from nonlocal quantum correlations, local quantum correlations and
local superpositions, whereas the entanglement arises only from the nonlocal quantum correlations.
This enables us to verify the theorem $E(\rho) \leq C(\rho)$ of Ref. \cite{streltsov2015measuring} in a
dynamical scenario. The relation was proved in Ref. \cite{streltsov2015measuring} under the condition that both quantities are measured using the same contractive distance; in our work we use the same contractive distance (the relative entropy), and verify that the relation also holds under dephasing dynamics.
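The inequality can also be checked numerically in a simple setting. For pure bipartite states the relative entropy of entanglement reduces to the entropy of the reduced density matrix, and the relative entropy of coherence reduces to the entropy of the diagonal of the state; the following sketch verifies $E(\rho)\leq C(\rho)$ on random two-qubit pure states (an illustration only, not the states of the experiment).

```python
import numpy as np

def vn_entropy(evals):
    """Von Neumann/Shannon entropy in bits from a spectrum."""
    evals = np.asarray(evals, dtype=float)
    evals = evals[evals > 1e-12]
    return float(-(evals * np.log2(evals)).sum())

rng = np.random.default_rng(0)
for _ in range(1000):
    # Random pure two-qubit state.
    psi = rng.normal(size=4) + 1j * rng.normal(size=4)
    psi /= np.linalg.norm(psi)
    rho = np.outer(psi, psi.conj())

    # Relative entropy of coherence: C = S(diag(rho)) - S(rho), with
    # S(rho) = 0 for a pure state.
    C = vn_entropy(np.abs(psi) ** 2)

    # For a pure bipartite state the relative entropy of entanglement
    # equals the entropy of the reduced state.
    rho_A = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
    E = vn_entropy(np.linalg.eigvalsh(rho_A))

    assert E <= C + 1e-9
print("E <= C holds on 1000 random pure states")
```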
Our work demonstrates that various quantum information quantities can be used to effectively characterize quantum systems. These can be extended to larger quantum systems, where more dramatic phase transition phenomena can be observed \cite{radhakrishnan2017quantum}. Adding decoherence and observing the dynamics can be a direct quantifier for the fragility of various quantities. Since particular quantities are more relevant for a given quantum information task, this general method may also find practical uses in the context of applications to quantum technology.
\section*{Acknowledgements}
This work is supported by the Shanghai Research Challenge Fund; New York University Global Seed Grants for Collaborative Research; National Natural Science Foundation
of China (61571301,D1210036A); the NSFC Research Fund for International Young Scientists (11650110425,11850410426); NYU-ECNU Institute of Physics at NYU Shanghai;
the Science and Technology Commission of Shanghai Municipality (17ZR1443600); the China Science and Technology Exchange Center (NGA-16-001); and the NSFC-RFBR
Collaborative grant (81811530112).
We review the definitions of semi analytic and sub analytic sets from Chapter 6 \cite{H} (also see Chapter 1 \cite{BM3} or \cite{BM2}).
Let $X$ be a set and $\Delta$ be a family of subsets of $X$. The elementary closure $\tilde\Delta$ of $\Delta$ is the smallest family of subsets of $X$ containing $\Delta$ which is stable under finite intersection, finite union and complement.
Suppose that $U$ is an open subset of a real analytic space $X$.
Let $\Delta_+(U)$ be the set of subsets $A$ of $U$ of the form $A=\{x\in U\mid f(x)>0\}$ for some real analytic function $f$ on $U$. A subset $A$ of $X$ is said to be semi analytic at $x_0\in X$ if there exists an open neighborhood $U$ of $x_0$ in $X$ such that $A\cap U$ belongs to the elementary closure of $\Delta_+(U)$. $A$ is said to be semi analytic in $X$ if it is semi analytic at every point of $X$.
Let $\Gamma(U)$ be the set of those closed subsets of $U$ which are images of proper real analytic maps $g:Y\rightarrow U$. A subset $A$ of $X$ is said to be sub analytic at $x_0\in X$ if there exists an open neighborhood $U$ of $x_0$ in $X$ such that $A\cap U$ belongs to the elementary closure of $\Gamma(U)$. $A$ is said to be sub analytic in $X$ if it is sub analytic at every point of $X$.
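Every semi analytic set is sub analytic, but the converse fails in dimensions $\ge 3$. A classical example, due to Osgood (see \cite{BM2}), is the image of the proper real analytic map $g:[0,1]^2\rightarrow \mathbb R^3$ defined by
$$
g(x,y)=(x,xy,xe^y),
$$
which is sub analytic by construction, but is not semi analytic at the origin: every real analytic function vanishing on this image in a neighborhood of the origin vanishes identically, so the image satisfies no nontrivial local analytic equations.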
\section{Preliminaries on \'etoiles and local blow ups}\label{Pre}
We require that an analytic space be Hausdorff.
An \'etoile is defined in Definition 2.1 \cite{Hcar}. An \'etoile $e$ over a complex analytic space $X$ is defined as a subcategory of the category of sequences of local blow ups $\mathcal E(X)$ over $X$. If $\pi:X'\rightarrow X\in e$ then a point $e_{X'}\in X'$ is associated to $e$. We will call $e_{X'}$ the center of $e$ on $X'$. The \'etoile associates a point $e_X\in X$ to $X$ and if $\pi_1:X_1\rightarrow U$ is a local blow up of $X$ such that $e_X\in U$ then $\pi_1\in e$ and $e_{X_1}\in X_1$ satisfies $\pi_1(e_{X_1})=e_X$. If $\pi_2:X_2\rightarrow U_1$ is a local blow up of $X_1$ such that $e_{X_1}\in U_1$ then $\pi_1\pi_2\in e$ and $e_{X_2}\in X_2$ satisfies $\pi_2(e_{X_2})=e_{X_1}$. Continuing in this way, we can construct sequences of local blow ups
$$
X_n\stackrel{\pi_n}{\rightarrow} X_{n-1}\rightarrow \cdots\rightarrow X_1\stackrel{\pi_1}{\rightarrow} X
$$
such that $\pi_1\cdots\pi_i\in e$, with associated points $e_{X_i}\in X_i$ such that $\pi_i(e_{X_i})=e_{X_{i-1}}$ for all $i$.
Let $X$ be a complex analytic space. Let $\mathcal E_X$ be the set of all \'etoiles over $X$ and for
$\pi:X_1\rightarrow X$ a product of local blow ups, let
$$
\mathcal E_{\pi}=\{e\in \mathcal E_X\mid \pi\in e\}.
$$
Then the $\mathcal E_{\pi}$ form a basis for a topology on $\mathcal E_X$. The space $\mathcal E_X$ with this topology is called the vo\^ute \'etoil\'ee over $X$ (Definition 3.1 \cite{Hcar}). The vo\^ute \'etoil\'ee is a generalization to complex analytic spaces of the Zariski Riemann manifold of a variety $Z$ in algebraic geometry (Section 17, Chapter VI \cite{ZS2}).
We have a canonical map $P_X:\mathcal E_X\rightarrow X$ defined by $P_X(e)=e_X$ which is continuous, surjective and proper (Theorem 3.4 \cite{Hcar}).
It is shown in Section 2 of \cite{Hcar} that given a product of local blow ups $\pi:X_1\rightarrow X$, there is a natural homeomorphism
$j_{\pi}:\mathcal E_{X_1}\rightarrow \mathcal E_{\pi}$ giving a commutative diagram
$$
\begin{array}{rccl}
\mathcal E_{X_1}&\cong \mathcal E_{\pi}&\subset &\mathcal E_X\\
P_{X_1}\downarrow&&&\downarrow P_X\\
X_1&&\stackrel{\pi}{\rightarrow}&X.
\end{array}
$$
The join of $\pi_1,\pi_2\in \mathcal E(Y)$ is defined in Proposition 2.9 \cite{Hcar}. The join is a morphism $J(\pi_1,\pi_2):Y_J\rightarrow Y$.
It has the following universal property: Suppose that $f:Z\rightarrow Y$ is a strict morphism (Definition 2.1 \cite{Hcar}). Then there exists a $Y$-morphism
$Z\rightarrow Y_J$ if and only if there exist $Y$-morphisms $Z\rightarrow Y_1$ and $Z\rightarrow Y_2$.
It follows from 2.9.2 \cite{Hcar} that if $\pi_1,\pi_2\in e\in \mathcal E_Y$, then $J(\pi_1,\pi_2)\in e$.
We describe the construction of Proposition 2.9 \cite{Hcar}. In the case when $\pi_1$ and $\pi_2$ are each local blowups,
which are described by the data $(U_i,E_i,\pi_i)$, $J(\pi_1,\pi_2)$ is the blow up
$$
J(\pi_1,\pi_2):Y_J=B(\mathcal I_{E_1}\mathcal I_{E_2}\mathcal O_Y|U_1\cap U_2)\rightarrow Y.
$$
Now suppose that $\pi_1$ is a product $\alpha_0\alpha_{1}\cdots \alpha_r$ where $\alpha_i:Y_{i+1}\rightarrow Y_i$ are local blow ups defined by the data
$(U_i,E_i,\alpha_i)$, and $\pi_2$ is a product $\alpha_0'\alpha_1'\cdots\alpha_r'$ where $\alpha_i':Y_{i+1}'\rightarrow Y_i'$
are local blow ups defined by the data $(U_i',E_i',\alpha_i')$. We may assume (by composing with identity maps) that the length of each sequence is a common value $r$.
We define $J(\pi_1,\pi_2)$ by induction on $r$. Assume that $J_{r}=J(\alpha_{0}\alpha_1\cdots\alpha_{r-1},\alpha_{0}'\alpha_1'\cdots\alpha_{r-1}')$ has been constructed,
with projections $\gamma:Y_{J_r}\rightarrow Y_{r}$ and $\delta:Y_{J_r}\rightarrow Y_{r}'$. Then we define $J(\pi_1,\pi_2)$ to be the blow up
$$
J(\pi_1,\pi_2):Y_{J}=B(\mathcal I_{E_{r}}\mathcal I_{E'_{r}}\mathcal O_{J_{r}}|\gamma^{-1}(U_r)\cap \delta^{-1}(U_r'))\rightarrow Y.
$$
Suppose that $\phi:X\rightarrow Y$ is a morphism of complex analytic spaces, and $\pi:Y'\rightarrow Y\in \mathcal E(Y)$.
The morphism $\phi^{-1}[\pi]:\phi^{-1}[Y']\rightarrow X$ will denote the strict transform of $\phi$ by $\pi$ (Section 2 of \cite{HLT}).
In the case of a single local blowup $(U,E,\pi)$ of $Y$, $\phi^{-1}[Y']$ is the blow up $B(\mathcal I_E\mathcal O_{X}|\phi^{-1}(U))$.
In the case when $\pi=\pi_0\pi_{1}\cdots \pi_r$ with $\pi_i:Y_{i+1}\rightarrow Y_{i}$ given by local blow ups $(U_i,E_i,\pi_i)$, we inductively define
$\phi^{-1}[\pi]$. Assume that $\phi^{-1}[\pi_{0}\cdots\pi_{r-1}]$ has been constructed. Let $h=\pi_{0}\cdots\pi_{r-1}$, so that $\pi=h\pi_r$.
Let $\phi':\phi^{-1}[Y_{r}]\rightarrow Y_{r}$ be the natural morphism. Then define $\phi^{-1}[Y_{r+1}]$ to be the blow up
$B(\mathcal I_{E_r}\mathcal O_{\phi^{-1}[Y_{r}]}|(\phi')^{-1}(U_r))$.
\section{Rectilinearization}
In this section we prove rectilinearization, Theorem \ref{TheoremR}. We use the method of complexification of a real analytic morphism (Section 1, \cite{H}).
\begin{Theorem}\label{TheoremA} Suppose that $\phi:Y\rightarrow X$ is a morphism of reduced complex analytic spaces, $K$ is a compact neighborhood in $Y$ and $f$ is an \'etoile over $X$. Then there exist local monomializations
\begin{equation}\label{eq2}
\begin{array}{ccl}
Y_i&\stackrel{\phi_i}{\rightarrow}&X_i\\
\delta_i\downarrow&&\downarrow\gamma_i\\
Y&\stackrel{\phi}{\rightarrow}&X
\end{array}
\end{equation}
for $1\le i\le r$ and $\pi:X_f\rightarrow X\in f$ such that there are commutative diagrams for $1\le i\le t\le r$
\begin{equation}\label{eq1}
\begin{array}{rcl}
\overline Y_i&\stackrel{\tau_i}{\rightarrow}&X_f\\
\beta_i\downarrow&&\downarrow\alpha_i\\
Y_i&\stackrel{\phi_i}{\rightarrow}&X_i\\
\delta_i\downarrow&&\downarrow\gamma_i\\
Y&\stackrel{\phi}{\rightarrow}&X
\end{array}
\end{equation}
where $\overline Y_i=\phi_i^{-1}[X_f]$. Let $\psi_i=\delta_i\beta_i$ and $\pi=\gamma_i\alpha_i$. There exists a closed analytic subspace $G_f$ of $X_f$ which is nowhere dense in $X_f$ such that $X_f\setminus G_f\rightarrow X$ is an open embedding, $\pi^{-1}(\pi(G_f))=G_f$, the vertical arrows are products of a finite number of local blow ups of smooth subspaces and
$$
\cup_{i=1}^t\alpha_i^{-1}(\phi_i(\delta_i^{-1}(K)))\setminus G_f=\cup_{i=1}^t\tau_i(\psi_i^{-1}(K))\setminus G_f=\pi^{-1}(\phi(K))\setminus G_f.
$$
There exists a compact neighborhood $L$ of $f_{X_f}$ in $X_f$, a morphism of reduced complex analytic spaces $u:Z\rightarrow X$ and a compact neighborhood $M$ in $Z$ such that $\dim Z<\dim Y$, $u(Z)\subset \phi(Y)$ and $\phi(K)\cap\pi(G_f\cap L)=u(M)$.
\end{Theorem}
\begin{proof} By Theorem \ref{TheoremLM}, we may construct local monomializations (\ref{eq2}), with relatively compact open neighborhoods $C_i$ in $Y_i$ with closures $K_i$ such that
$\{\mathcal E_{C_1},\ldots,\mathcal E_{C_r}\}$ is an open cover of the compact set $P_Y^{-1}(K)$ ($P_Y$ is proper by Theorem 3.16 \cite{H}) and $\gamma_i$ are sequences of local blow ups of smooth subspaces. We have that
\begin{equation}\label{eq3}
K\subset \cup_{i=1}^r\delta_i(C_i).
\end{equation}
Further, there exist closed analytic subspaces $G_i$ of $X_i$ which are nowhere dense in $X_i$ such that $X_i\setminus G_i\rightarrow X$ is an open embedding and $\phi_i^{-1}(G_i)$ is nowhere dense in $Y_i$.
Reindex the diagrams (\ref{eq2}) so that $f\in \mathcal E_{X_i}$ if $1\le i\le t$ and $f\not\in \mathcal E_{X_i}$ if $t<i\le r$.
Suppose that $i$ is an index such that $f\not \in \mathcal E_{X_i}$ (that is, $t<i\le r$). The morphism $X_{i}\rightarrow X$ has a factorization
$$
X_i=V_n\stackrel{\sigma_n}{\rightarrow}\cdots\rightarrow V_2\stackrel{\sigma_2}{\rightarrow}V_1\stackrel{\sigma_1}{\rightarrow}V_0=X
$$
where each $\sigma_j:V_j\rightarrow V_{j-1}$ is a local blowup $(U_j,E_j,\sigma_j)$. There exists a smallest $j$ such that $f_{V_{j-1}}\not\in U_{j}$.
Let $X_i^*$ be an open neighborhood of $f_{V_{j-1}}$ in $V_{j-1}$ which is disjoint from $U_{j}$. Then $f\in \mathcal E_{X_i^*}$ and $\mathcal E_{X_i^*}\cap \mathcal E_{X_i}=\emptyset$. Let $\pi:X_f\rightarrow X$ be a (global) resolution of singularities of the join of the $X_i$ which satisfy $f\in \mathcal E_{X_i}$ and of the $X_i^*$ such that $f\not\in \mathcal E_{X_i}$ and so that
$\pi:X_f\rightarrow X\in f$ is a sequence of local blowups whose centers are nonsingular and such that $\alpha_i^{-1}(\phi_i(Y_i))$ is nowhere dense in $X_f$ for all $i$. Then, $\mathcal E_{X_f}\subset \mathcal E_{X_i}$ if $f\in \mathcal E_{X_i}$ and $\mathcal E_{X_f}\cap\mathcal E_{X_i}=\emptyset$ if $f\not\in \mathcal E_{X_i}$. We have factorizations
$$
X_f\stackrel{\alpha_i}{\rightarrow}X_i\stackrel{\gamma_i}{\rightarrow} X
$$
of $\pi$ if $f\in \mathcal E_{X_{i}}$.
For $1\le i\le t$ let
$$
\begin{array}{rcl}
\overline Y_i&\stackrel{\tau_i}{\rightarrow}&X_f\\
\beta_i\downarrow&&\downarrow\alpha_i\\
Y_i&\stackrel{\phi_i}{\rightarrow}&X_i
\end{array}
$$
be the natural commutative diagram of morphisms, with $\overline Y_i=\phi_i^{-1}[X_f]$. Let $\psi_i=\delta_i\beta_i$. Let $G_f$ be the union of the preimages of the subspaces blown up in a factorization of $\pi$ by local blowups. Then $G_f$ is a nowhere dense closed analytic subset of $X_f$ such that $X_f\setminus G_f\rightarrow X$ is an open embedding and $\pi^{-1}(\pi(G_f))=G_f$. Further, $\tau_i^{-1}(G_f)$ is nowhere dense in $\overline Y_i$ for all $i$. Let
$U=X_f\setminus G_f$. Suppose that $q\in \phi(K)\cap \pi(U)$. Then there exists $p\in K$ such that $\phi(p)=q$ and there exists $i$ and $p'\in C_i$ such that $\delta_i(p')=p$ by (\ref{eq3}). Let $q'=\phi_i(p')\in X_i$. There exists $\lambda\in \mathcal E_X$ such that $\lambda_{X_i}=q'$ and thus $\lambda_X=q$. Since $q\in \pi(U)$, we can regard $q$ as an element of $X_f$ with $\lambda_{X_f}=q$. Thus $\lambda\in \mathcal E_{X_f}$ so that $\lambda\in \mathcal E_{X_f}\cap \mathcal E_{X_i}$, and so $f\in \mathcal E_{X_i}$ as this intersection is nonempty.
We have that $\phi_i(p')=q'$ and $\alpha_i:X_f\rightarrow X_i$ is an open embedding in a neighborhood of $q$, so $\beta_i$ is an open embedding in a neighborhood of $p'$. Thus $q=\lambda_{X_f}\in \tau_i(\psi_i^{-1}(K))$.
Whence
$$
\pi^{-1}(\phi(K))\cap U\subset \cup_i\tau_i(\beta_i^{-1}(C_i\cap \delta_i^{-1}(K)))\cap U \subset \cup_i\tau_i(\psi_i^{-1}(K))\cap U.
$$
We have
$$
\cup_i\tau_i(\psi_i^{-1}(K))\subset \pi^{-1}(\phi(K))
$$
since
$$
\pi\tau_i(\psi_i^{-1}(K))=\phi\psi_i(\psi_i^{-1}(K))\subset\phi(K)
$$
for $1\le i\le t$.
Thus
\begin{equation}\label{eq10}
\cup_i\tau_i(\psi_i^{-1}(K))\cap U=\pi^{-1}(\phi(K))\cap U.
\end{equation}
Let $V_f$ be a relatively compact open neighborhood of $f_{X_f}$ in $X_f$. Let $L$ be the closure of $V_f$ in $X_f$. Then $\beta_i^{-1}(C_i)\cap\tau_i^{-1}(V_f)$ are relatively compact open subsets of $\overline Y_i$ with closures $L_i=\beta_i^{-1}(K_i)\cap\tau_i^{-1}(L)$ for $1\le i\le t$. Further,
$$
\cup_i\phi\psi_i(\psi_i^{-1}(K)\cap L_i)\subset \pi(L)\cap \phi(K)
$$
and so
$$
\cup_i\phi\psi_i(\psi_i^{-1}(K)\cap L_i)\setminus \pi(G_f)=(\pi(L)\setminus \pi(G_f))\cap\phi(K)
$$
by (\ref{eq10}).
For all $i$, the compact set $\pi(G_f)\cap[\phi\psi_i(\psi_i^{-1}(K)\cap L_i)]$ is nowhere dense in the compact set $\phi\psi_i(\psi_i^{-1}(K)\cap L_i)$ since $\tau_i^{-1}(G_f)\cap \psi_i^{-1}(K)\cap L_i$ is nowhere dense in the compact neighborhood $\psi_i^{-1}(K)\cap L_i$ in $Y_i$.
Thus the compact set $\cup_i\phi\psi_i(\psi_i^{-1}(K)\cap L_i)$ is everywhere dense in the compact set $\pi(L)\cap\phi(K)$. Thus
$$
\cup_i\phi\psi_i(\psi_i^{-1}(K)\cap L_i)=\pi(L)\cap\phi(K)
$$
and so
$$
\cup_i\phi\phi_i(\psi_i^{-1}(K)\cap L_i\cap \tau_i^{-1}(G_f))=\pi(G_f\cap L)\cap\phi(K).
$$
Let $Z=\coprod_{1\le i\le t}\tau_i^{-1}(G_f)$ be the disjoint union of the analytic spaces
$\tau_i^{-1}(G_f)$ with associated morphism $u=\coprod_i \phi\psi_i:Z\rightarrow X$ and compact subset $M=\coprod_i \psi_i^{-1}(K)\cap L_i\cap\tau_i^{-1}(G_f)$ of $Z$.
Then
$\dim Z<\dim Y$, $u(Z)\subset\phi(Y)$ and $\phi(K)\cap\pi(G_f\cap L)=u(M)$.
\end{proof}
Suppose $\phi:Y\rightarrow X$ is a morphism of reduced real analytic spaces such that $X$ is smooth.
Let $\tilde Y\rightarrow \tilde X$ be a complexification of $\phi:Y\rightarrow X$ such that $\tilde X$ is smooth and $\tilde Y$ is reduced. Suppose that $\tilde K$ is a compact neighborhood in $\tilde Y$ which is invariant under the auto conjugation of $\tilde Y$. Let $K$ be the real part of $\tilde K$, which is a compact neighborhood in $Y$. Let
$$
\tilde Y=\tilde Y^{(n)}\supset \tilde Y^{(n-1)}\supset\cdots\supset \tilde Y^{(0)}=\emptyset
$$
be the stratification of $\tilde Y$ where $\tilde Y^{(i-1)}=\mbox{sing}(\tilde Y^{(i)})$ is the singular locus of $\tilde Y^{(i)}$, and let
$$
Y=Y^{(n)}\supset Y^{(n-1)}\supset \cdots\supset Y^{(0)}
$$
be the induced smooth real analytic stratification of $Y$.
We have induced compact neighborhoods $\tilde K\cap Y^{(i)}$ in $Y^{(i)}$, with $K=\tilde K\cap Y^{(n)}$. There exist global resolutions of singularities $\tilde\lambda_i:(\tilde Y^{(i)})^*\rightarrow \tilde Y^{(i)}$
which have an auto conjugation such that the real part of $\tilde\lambda_i:(\tilde Y^{(i)})^*\rightarrow \tilde Y^{(i)}$ is $\lambda_i:(Y^{(i)})^*\rightarrow Y^{(i)}$ where $(Y^{(i)})^*$ is smooth (Desingularization I, 5.10 \cite{H}). The morphism $\tilde \lambda_i$ is proper,
so $\tilde K_i=\tilde \lambda_i^{-1}(\tilde K\cap \tilde Y^{(i)})$ is a compact neighborhood in $(\tilde Y^{(i)})^*$ with compact real neighborhood $K_i=\lambda_i^{-1}(K\cap Y^{(i)})$ in $(Y^{(i)})^*$.
Let $Y'=\coprod_i(Y^{(i)})^*$. We have that the induced morphism $\lambda^*:Y'\rightarrow Y$ is proper and surjective. Let $K'=(\lambda^*)^{-1}(K)$, a compact neighborhood in $Y'$.
Let $\tilde Y'=\coprod_i(\tilde Y^{(i)})^*$ with induced complex analytic morphism $\tilde\lambda^*:\tilde Y'\rightarrow \tilde Y$. Then $\tilde\lambda^*:\tilde Y'\rightarrow \tilde Y$ is a complexification of $\lambda^*:Y'\rightarrow Y$.
Let $\tilde K'=(\tilde\lambda^*)^{-1}(\tilde K)$, which is a compact neighborhood in $\tilde Y'$ with $\tilde K'\cap Y'=K'$.
By Theorem \ref{TheoremA}, applied to the complex analytic morphism $\tilde\phi\tilde\lambda^*:\tilde Y'\rightarrow \tilde X$, the compact neighborhood $\tilde K'$ in $\tilde Y'$ and an \'etoile $f$ over $\tilde X$, there exist commutative diagrams
$$
\begin{array}{rcl}
\tilde{\overline Y}_i&\stackrel{\tilde\tau_i}{\rightarrow}&\tilde X_f\\
\tilde\beta_i\downarrow&&\downarrow\tilde\alpha_i\\
\tilde Y_i&\stackrel{\tilde\phi_i}{\rightarrow}&\tilde X_i\\
\tilde\delta_i\downarrow&&\downarrow\tilde\gamma_i\\
\tilde Y'&\stackrel{\tilde\phi\tilde\lambda^*}{\rightarrow}&\tilde X\\
\searrow&&\nearrow\\
&\tilde Y&
\end{array}
$$
and a closed analytic subspace $\tilde G_f$ of $\tilde X_f$ such that
\begin{equation}\label{eq24}
\cup_{i=1}^t\tilde\tau_i(\tilde\psi_i^{-1}(\tilde K'))\setminus \tilde G_f=\tilde\pi^{-1}(\tilde\phi\tilde\lambda^*(\tilde K'))\setminus \tilde G_f
\end{equation}
and there exists a compact neighborhood $\tilde L$ of $f_{\tilde X_f}$ in $\tilde X_f$, a morphism of reduced complex analytic spaces $\tilde u:\tilde Z\rightarrow\tilde X$ and a compact neighborhood $\tilde M$ in $\tilde Z$ such that $\dim \tilde Z<\dim \tilde Y$ and
$$
\tilde\phi\tilde\lambda^*(\tilde K')\cap \tilde\pi(\tilde G_f\cap\tilde L)=\tilde u(\tilde M).
$$
We can construct the above complex analytic spaces and morphisms so that there are compatible auto conjugations which preserve $\tilde G_f$, $\tilde Z$, $\tilde L$ and $\tilde M$ and so that the real part $X_f$ of $\tilde X_f$ is nonempty if and only if $f_{\tilde X_f}$ is a real point (by Theorems 8.4 and 8.5 \cite{C}).
Taking the invariants of the auto conjugations, we thus have whenever $X_f\ne\emptyset$, induced commutative diagrams of real analytic spaces and morphisms
$$
\begin{array}{rcl}
\overline Y_i&\stackrel{\tau_i}{\rightarrow}&X_f\\
\beta_i\downarrow&&\downarrow\alpha_i\\
Y_i&\stackrel{\phi_i}{\rightarrow}& X_i\\
\delta_i\downarrow&&\downarrow\gamma_i\\
Y'&\stackrel{\phi\lambda^*}{\rightarrow}& X\\
\searrow&&\nearrow\\
&Y&
\end{array}
$$
with a closed real analytic subspace $G_f=\tilde G_f\cap X_f$ of $X_f$. We have that $G_f$ is nowhere dense in $X_f$ since $X_f$ is smooth (for instance by Lemma 8.2 \cite{C}), and thus $\dim G_f<\dim X_f=\dim Y$.
Also, taking the real part of $\tilde u:\tilde Z\rightarrow \tilde X$, we have a morphism of reduced real analytic spaces $u:Z\rightarrow X$ and a compact subset $M$ of $Z$ such that $\dim Z<\dim Y$.
The analog of (\ref{eq3}) in Theorem \ref{TheoremA}, $K'\subset \cup_{i=1}^r\delta_i(C_i)$ where $C_i$ is the real part of $\tilde C_i$, is true as $Y'$ is smooth (by Theorem 8.7 \cite{C} and its proof). Then the argument following (\ref{eq3})
in Theorem \ref{TheoremA} shows that
$$
\cup_{i=1}^t\tau_i(\psi_i^{-1}(K'))\setminus G_f=\pi^{-1}(\phi\lambda^*(K'))\setminus G_f=\pi^{-1}(\phi(K))\setminus G_f
$$
and
$$
\phi(K)\cap\pi(G_f\cap L)=\phi\lambda^*(K')\cap\pi(G_f\cap L)=u(M).
$$
\begin{Theorem}\label{TheoremB} Suppose that $\phi:Y\rightarrow X$ is a morphism of reduced complex analytic spaces, $K\subset Y$ is a compact neighborhood in $Y$ and $h\in \mathcal E_X$. Then there exists $d_h:X_h\rightarrow X\in h$, morphisms of reduced complex analytic spaces $\phi_i:Y_i\rightarrow X$ for $0\le i\le t$ with compact neighborhoods $K_i$ in $Y_i$ such that
$\phi_0=\phi$, $Y_0=Y$, $K_0=K$, $\phi_{i+1}(Y_{i+1})\subset \phi_i(Y_i)$, $\dim Y_{i+1}<\dim Y_i$ for all $i$ and $Y_t=\emptyset$. There exist commutative diagrams for $0\le i\le t$
\begin{equation}\label{eq4}
\begin{array}{rcl}
\hat{Y}_{ij}&\stackrel{\sigma_{ij}}{\rightarrow}&X_h\\
b_{ij}\downarrow&&\downarrow a_i\\
\overline Y_{ij}&\stackrel{\tau_{ij}}{\rightarrow}&X_i\\
\beta_{ij}\downarrow&&\downarrow \alpha_{ij}\\
Y_{ij}&\stackrel{\phi_{ij}}{\rightarrow}&X_{ij}\\
\delta_{ij}\downarrow&&\downarrow \gamma_{ij}\\
Y_i&\stackrel{\phi_i}{\rightarrow}&X
\end{array}
\end{equation}
where $\phi_{ij}:Y_{ij}\rightarrow X_{ij}$ are monomial morphisms, $\overline Y_{ij}=\phi_{ij}^{-1}[X_i]$ and $\hat{Y}_{ij}=\tau_{ij}^{-1}[X_h]$, $\psi_{ij}=\delta_{ij}\beta_{ij}$, $c_{ij}=\psi_{ij}b_{ij}$, $\pi_i=\gamma_{ij}\alpha_{ij}$, $\epsilon_{ij}=\alpha_{ij}a_i$ and $d_h=\pi_ia_i$ such that
\begin{equation}\label{eq9}
d_h^{-1}(\phi(K))=\cup_ia_i^{-1}[\cup_j\tau_{ij}(\psi_{ij}^{-1}(K_i))]
=\cup_{i,j}\epsilon_{ij}^{-1}(\phi_{ij}(\delta_{ij}^{-1}(K)))
\end{equation}
\end{Theorem}
\begin{proof} We construct commutative diagrams
$$
\begin{array}{rcl}
\overline Y_{ij}&\rightarrow&X_i\\
\downarrow&&\downarrow\\
Y_{ij}&\rightarrow &X_{ij}\\
\downarrow&&\downarrow\\
Y_i&\rightarrow& X
\end{array}
$$
satisfying the conclusions of the theorem by induction on $i$, using Theorem \ref{TheoremA}.
In particular, there exist nowhere dense closed analytic subsets $G_i$ of $X_i$ such that $\pi_i^{-1}(\pi_i(G_i))=G_i$ for all $i$ and $X_i\setminus G_i\rightarrow X$ is an open embedding
and there exist compact neighborhoods $K_i$ in $Y_i$ and $L_i$ of $h_{X_i}$ in $X_i$ such that $X_t=\emptyset$, and for all $i$,
\begin{equation}\label{eq6}
\cup_j\tau_{ij}(\psi_{ij}^{-1}(K_i))\setminus G_i=\pi_i^{-1}(\phi_i(K_i))\setminus G_i
\end{equation}
and
\begin{equation}\label{eq5}
\phi_i(K_i)\cap\pi_i(G_i\cap L_i)=\phi_{i+1}(K_{i+1}).
\end{equation}
We then have (by the definition of an \'etoile) that there exists $X_h\rightarrow X\in h$ such that we have a commutative diagram (\ref{eq4}) such that $a_i(X_h)\subset L_i$ for all $i$.
For all $i$, (\ref{eq6}) implies
\begin{equation}\label{eq8}
a_i^{-1}[\cup_j\tau_{ij}(\psi_{ij}^{-1}(K_i))]\cap (X_h\setminus a_i^{-1}(G_i))
=d_h^{-1}(\phi_i(K_i))\cap (X_h\setminus a_i^{-1}(G_i)).
\end{equation}
Now, (\ref{eq5}) implies
$$
\pi_{i-1}^{-1}(\phi_{i-1}(K_{i-1}))\cap G_{i-1}\cap \pi_{i-1}^{-1}(\pi_{i-1}(L_{i-1})) =\pi_{i-1}^{-1}(\phi_i(K_i))
$$
for all $i$, and so since $a_{i-1}(X_h)\subset L_{i-1}$,
$$
d_h^{-1}(\phi_{i-1}(K_{i-1}))\cap a_{i-1}^{-1}(G_{i-1})=d_h^{-1}(\phi_i(K_i))
$$
for all $i$. Thus
\begin{equation}\label{eq7}
d_h^{-1}(\phi(K))\cap a_0^{-1}(G_0)\cap \cdots\cap a_{i-1}^{-1}(G_{i-1})=d_h^{-1}(\phi_i(K_i)).
\end{equation}
Since $G_t=\emptyset$, and we certainly have
$$
\cup_ia_i^{-1}[\cup_j\tau_{ij}(\psi_{ij}^{-1}(K_i))]\subset d_h^{-1}(\phi(K)),
$$
(\ref{eq9}) follows from induction on $i$, using (\ref{eq8}) and (\ref{eq7}) and since
$$
\cup_{i=0}^t[a_0^{-1}(G_0)\cap \cdots \cap a_{i-1}^{-1}(G_{i-1})]\cap (X_h\setminus a_i^{-1}(G_i))=X_h.
$$
\end{proof}
From the discussion after Theorem \ref{TheoremA} and Theorem \ref{TheoremB}, we obtain the following statement.
Suppose $\phi:Y\rightarrow X$ is a morphism of reduced real analytic spaces such that $X$ is smooth.
Let $\tilde Y\rightarrow \tilde X$ be a complexification of $\phi:Y\rightarrow X$ such that $\tilde X$ is smooth and $\tilde Y$ is reduced. Suppose that $\tilde K$ is a compact neighborhood in $\tilde Y$
which is invariant under the auto conjugation of $\tilde Y$. Let $K$ be the real part of $\tilde K$ which is a compact neighborhood in $Y$.
Suppose that $h\in \mathcal E_X$. Then there exists $\tilde d_h:\tilde X_h\rightarrow \tilde X\in h$, morphisms of reduced complex analytic spaces $\tilde \phi_i:\tilde Y_i\rightarrow \tilde X$ for $0\le i\le t$ with compact neighborhoods $\tilde K_i$ in $\tilde Y_i$ such that
$\tilde \phi_0(\tilde K_0)= \tilde \phi(\tilde K)$, $\dim \tilde Y_{i+1}<\dim \tilde Y_i$ for all $i$ and $\tilde Y_t=\emptyset$. There exist commutative diagrams for $0\le i\le t$
\begin{equation}\label{eq20}
\begin{array}{rcl}
\tilde {\hat{Y}}_{ij}&\stackrel{\tilde \sigma_{ij}}{\rightarrow}&\tilde X_h\\
\tilde b_{ij}\downarrow&&\downarrow \tilde a_i\\
\tilde{\overline Y_{ij}}&\stackrel{\tilde\tau_{ij}}{\rightarrow}&\tilde X_i\\
\tilde\beta_{ij}\downarrow&&\downarrow \tilde\alpha_{ij}\\
\tilde Y_{ij}&\stackrel{\tilde \phi_{ij}}{\rightarrow}&\tilde X_{ij}\\
\tilde \delta_{ij}\downarrow&&\downarrow\tilde \gamma_{ij}\\
\tilde Y_i&\stackrel{\tilde \phi_i}{\rightarrow}&\tilde X
\end{array}
\end{equation}
where $\tilde\phi_{ij}:\tilde Y_{ij}\rightarrow\tilde X_{ij}$ are monomial morphisms, $\tilde{\overline Y}_{ij}=\tilde\phi_{ij}^{-1}[\tilde X_i]$ and $\tilde{\hat{Y}}_{ij}=\tilde\tau_{ij}^{-1}[\tilde X_h]$. Let $\tilde\psi_{ij}=\tilde\delta_{ij}\tilde\beta_{ij}$, $\tilde c_{ij}=\tilde\psi_{ij}\tilde b_{ij}$, $\tilde\pi_i=\tilde\gamma_{ij}\tilde \alpha_{ij}$, $\tilde\epsilon_{ij}=\tilde\alpha_{ij}\tilde a_i$ and $\tilde d_h=\tilde\pi_i\tilde a_i$ such that
\begin{equation}\label{eq23}
\tilde d_h^{-1}(\tilde \phi(\tilde K))=\cup_{i,j}\tilde\epsilon_{ij}^{-1}(\tilde\phi_{ij}(\tilde\delta_{ij}^{-1}(\tilde K)))=\cup_i\tilde a_i^{-1}[\cup_j\tilde\tau_{ij}(\tilde\psi_{ij}^{-1}(\tilde K_i))]
\end{equation}
Further, there are compatible auto conjugations of these analytic spaces and morphisms such that the real parts are
$d_h:X_h\rightarrow X$, morphisms of reduced real analytic spaces $\phi_i:Y_i\rightarrow X$ for $0\le i\le t$ with compact neighborhoods $K_i$ in $Y_i$ with $K_i=\tilde K_i\cap Y_i$ such that
$\phi_0(K_0)=\phi(K)$, $\dim Y_{i+1}<\dim Y_i$ for all $i$ and $Y_t=\emptyset$. We may assume that $X_h\ne \emptyset$ if and only if $h_{\tilde X_h}$ is a real point of $\tilde X_h$. Suppose that $X_h\ne\emptyset$. Then there exist commutative diagrams for $0\le i\le t$
\begin{equation}\label{eq21}
\begin{array}{rcl}
\hat{Y}_{ij}&\stackrel{\sigma_{ij}}{\rightarrow}&X_h\\
b_{ij}\downarrow&&\downarrow a_i\\
\overline Y_{ij}&\stackrel{\tau_{ij}}{\rightarrow}&X_i\\
\beta_{ij}\downarrow&&\downarrow \alpha_{ij}\\
Y_{ij}&\stackrel{\phi_{ij}}{\rightarrow}&X_{ij}\\
\delta_{ij}\downarrow&&\downarrow \gamma_{ij}\\
Y_i&\stackrel{\phi_i}{\rightarrow}&X
\end{array}
\end{equation}
where $\phi_{ij}:Y_{ij}\rightarrow X_{ij}$ are monomial morphisms, $\overline Y_{ij}=\phi_{ij}^{-1}[X_i]$ and $\hat{Y}_{ij}=\tau_{ij}^{-1}[X_h]$, $\psi_{ij}=\delta_{ij}\beta_{ij}$, $c_{ij}=\psi_{ij}b_{ij}$, $\pi_i=\gamma_{ij}\alpha_{ij}$ and $d_h=\pi_ia_i$ such that
\begin{equation}\label{eq22}
d_h^{-1}(\phi(K))=\cup_{i,j}\epsilon_{ij}^{-1}(\phi_{ij}(\delta_{ij}^{-1}( K)))=\cup_i a_i^{-1}[\cup_j\tau_{ij}(\psi_{ij}^{-1}( K_i))]
\end{equation}
\begin{Theorem}\label{TheoremC} Suppose that $X$ and $Y$ are real analytic spaces such that $X$ is smooth and $\phi:Y\rightarrow X$ is a proper real analytic map. Let $p\in X$. Then there exists a finite number of real analytic maps $\pi_{\alpha}:V_{\alpha}\rightarrow X$ such that:
\begin{enumerate}
\item[1)] Each $V_{\alpha}$ is smooth and each $\pi_{\alpha}$ is a composition of local blowups of nonsingular sub varieties,
\item[2)] There exist compact neighborhoods $N_{\alpha}$ in $V_{\alpha}$ for all $\alpha$ such that $\cup_{\alpha}\pi_{\alpha}(N_{\alpha})$ is a compact neighborhood of $p$ in $X$,
\item[3)] For all $\alpha$, $\pi_{\alpha}^{-1}(\phi(Y))$ is a semi analytic subset of $V_{\alpha}$.
\end{enumerate}
\end{Theorem}
\begin{proof}
Let $\tilde\phi:\tilde Y\rightarrow \tilde X$ be a complexification of $\phi:Y\rightarrow X$ so that $\tilde X$ is smooth and $\tilde Y$ is reduced.
Let $\tilde U$ be a relatively compact open neighborhood of $p$ in $\tilde X$ which is invariant under the auto conjugation of $\tilde X$ and let $\tilde L$ be the closure of $\tilde U$ in $\tilde X$. Let $L=\tilde L\cap X$, a compact neighborhood of $p$ in $X$.
Let $K'=\tilde\phi^{-1}(\tilde L)$. The real part of $K'$ is $K=\phi^{-1}(L)$ which is compact since $\phi$ is proper. Let $N$ be a compact neighborhood of $K$ in $\tilde Y$ which contains $K$ and is preserved by the auto conjugation of $\tilde Y$. Let $\tilde K=K'\cap N$. The set $\tilde K$ is a compact neighborhood in $\tilde Y$ which is preserved by the auto conjugation of $\tilde Y$ such that the real part of $\tilde K$ is $K$.
Let $U$ be the real part of $\tilde U$ which is an open neighborhood of $p$ in $X$ with closure $L$ in $X$. Let $V=\phi^{-1}(U)$, whose closure is $K=\phi^{-1}(L)$. We have that
\begin{equation}\label{eq30}
\phi(V)=\phi(K)\cap U.
\end{equation}
For each $h\in \mathcal E_{\tilde X}$ such that $X_h\ne\emptyset$, we have
associated complex analytic morphisms $\tilde d_h:\tilde X_h\rightarrow \tilde X$ with real part $d_h:X_h\rightarrow X$, and associated diagrams (\ref{eq20}) with real part (\ref{eq21}).
For all $i,j$, we have that
$$
d_h^{-1}(\phi(K))=d_h^{-1}(\phi(K))\cap[\cup_{i,j}\epsilon_{ij}^{-1}(\phi_{ij}(Y_{ij}))]
$$
by (\ref{eq22}). Thus
\begin{equation}\label{eq31}
\begin{array}{lll}
d_h^{-1}(\phi(V))&=&d_h^{-1}(\phi(K)\cap U)=d_h^{-1}(\phi(K))\cap d_h^{-1}(U)\\
&=& d_h^{-1}(\phi(K))\cap [\cup_{i,j}\epsilon_{ij}^{-1}(\phi_{ij}(Y_{ij}))]\cap d_h^{-1}U)\\&=&d_h^{-1}(U)\cap[\cup_{i,j}\epsilon_{ij}^{-1}(\phi_{ij}(Y_{ij}))].
\end{array}
\end{equation}
We now establish that $d_h^{-1}(\phi(V))$ is a semianalytic subset of $X_h$. For all $i,j$, $\phi_{ij}( Y_{ij})$ is semianalytic in $X_{ij}$ since $\phi_{ij}$ is a monomial morphism (by the Tarski--Seidenberg theorem, cf. Theorem 1.5 \cite{BM3}). Thus $d_h^{-1}(\phi(V))$ is semianalytic in $X_h$ by (\ref{eq31}).
For $h\in \mathcal E_{\tilde X}$, let $\tilde C_h$ be an open relatively compact neighborhood of $h_{\tilde X}$ in $\tilde X_h$ on which the auto conjugation acts. Let $\overline d_h:\tilde C_h\rightarrow \tilde X$ be the induced morphism. Let $C$ be a compact neighborhood of $p$ in $\tilde X$ such that $C\subset \tilde U$ and let $C'=\rho_{\tilde X}^{-1}(C)$. The set $C'$ is compact since $\rho_{\tilde X}$ is proper (Theorem 3.4 \cite{Hcar} or Theorem 3.16 \cite{H}). The open sets $\mathcal E_{\overline d_f}$ for $f\in C'$ give an open cover of $C'$, so there is a finite subcover, which we index as
$\mathcal E_{\overline d_{f_1}},\ldots,\mathcal E_{\overline d_{f_t}}$.
We may replace $\tilde X_{f_i}$ with $\tilde d_{f_i}^{-1}(\tilde U)$, so that (\ref{eq31}) implies that
$$
d_h^{-1}(\phi(Y))=\cup_{i,j}\epsilon_{ij}^{-1}(\phi_{ij}(Y_{ij}))
$$
is a semi analytic set.
Let $C_{f_i}$ be the closure of $\tilde C_{f_i}$ in $\tilde X_{f_i}$, which is compact. Since $\rho_{\tilde X}$ is surjective and continuous, we have inclusions of compact sets
$$
p\in C\subset\cup_{i=1}^t\tilde d_{f_i}(C_{f_i}).
$$
Since $X$ and $\tilde X$ are smooth and each $\tilde d_{f_i}$ is a finite product of local blowups of closed analytic subspaces which are preserved by the auto conjugation, if $F_{f_i}$ is the union of
the preimage in $\tilde X_{f_i}$ of these subspaces, then $F_{f_i}$ is a nowhere dense closed analytic subspace of $\tilde X_{f_i}$ which is preserved by the auto conjugation such that
$\tilde X_{f_i}\setminus F_{f_i} \rightarrow \tilde X$ is an open embedding. The image $\tilde d_{f_i}(F_{f_i})$ is nowhere dense in $\tilde X$, and since $\tilde X$ and $X$ are smooth varieties,
$\tilde d_{f_i}(F_{f_i})\cap X$ is nowhere dense in $X$ (as in the proof of Theorem 8.7 \cite{C}).
Let $C^*=C\cap X$, which is a compact neighborhood of $p$ in $X$ contained in $L$. Let $p'\in C^*\setminus \cup_{i=1}^t\tilde d_{f_i}(F_{f_i})$. Then there exist $i$ and $e\in \mathcal E_{\overline d_{f_i}}$ such that $e_{\tilde X}=p'$. Let $p_i=e_{\tilde X_{f_i}}\in C_{f_i}\subset \tilde X_{f_i}$. Since $p_i\not\in F_{f_i}$, $\tilde d_{f_i}$ is an open embedding near $p_i$, and since $p'$ is real, $p_i\in X_{f_i}$ is real. Thus $p'\in d_{f_i}(C_{f_i}\cap X_{f_i})$. We thus have that the set
$C^*\setminus \cup_{i=1}^t\tilde d_{f_i}(F_{f_i})$, which we have shown is dense in $C^*$, is contained in the compact set $\cup_{i=1}^t d_{f_i}(C_{f_i}\cap X_{f_i})$. Thus its closure $C^*$ is contained in $\cup_{i=1}^t d_{f_i}(C_{f_i}\cap X_{f_i})$, giving the conclusion of 2) of the theorem.
\end{proof}
\begin{Theorem}\label{TheoremD} Suppose that $X$ is a smooth real analytic space. Suppose that $A$ is a sub analytic subset of $X$ and that $p\in X$. Then there exists a finite number of real analytic maps $\pi_{\alpha}:V_{\alpha}\rightarrow X$ such that:
\begin{enumerate}
\item[1)] Each $\pi_{\alpha}$ is a composition of local blowups of nonsingular sub varieties,
\item[2)] There exist compact neighborhoods $N_{\alpha}$ in $V_{\alpha}$ for all $\alpha$ such that $\cup_{\alpha}\pi_{\alpha}(N_{\alpha})$ is a compact neighborhood of $p$ in $X$,
\item[3)] For all $\alpha$, $\pi_{\alpha}^{-1}(A)$ is a semianalytic subset of $V_{\alpha}$.
\end{enumerate}
\end{Theorem}
\begin{proof} After replacing $X$ with a suitable open neighborhood of $p$, we have, by Definition 6.10 \cite{H}, an expression
$$
A=\cup_{k\in I}\cap_{l\in J}(A_{kl}\setminus B_{kl})
$$
where $I$ and $J$ are nonempty finite index sets and there are proper real analytic maps of reduced analytic spaces
$\phi_{kl}:Y_{kl}\rightarrow X$ and $\psi_{kl}:Z_{kl}\rightarrow X$ such that $A_{kl}=\phi_{kl}(Y_{kl})$ and $B_{kl}=\psi_{kl}(Z_{kl})$.
Let $\tilde X$ be a smooth complexification of $X$ and let $\tilde\phi_{kl}:\tilde Y_{kl}\rightarrow \tilde X$ and $\tilde\psi_{kl}:\tilde Z_{kl}\rightarrow\tilde X$ be complexifications of $\phi_{kl}$ and $\psi_{kl}$.
For each $h\in \mathcal E_{\tilde X}$, $k\in I$ and $l\in J$ we construct as in the proof of Theorem \ref{TheoremC} complex analytic morphisms of smooth analytic spaces
$\tilde d_h^{kl}:(\tilde X_h)_{kl}\rightarrow \tilde X\in h$ and $(\tilde d'_h)^{kl}:(\tilde X'_h)_{kl}\rightarrow \tilde X\in h$ with auto conjugations, so that taking the invariants of the auto conjugations we have real analytic morphisms
$d_h^{kl}:(X_h)_{kl}\rightarrow X$ and $(d_h')^{kl}:(X_h')_{kl}\rightarrow X$ such that $(d_h^{kl})^{-1}(A_{kl})$ is semianalytic in $(X_h)_{kl}$ and $((d_h')^{kl})^{-1}(B_{kl})$ is semianalytic in $(X_h')_{kl}$ for $k\in I$ and $l\in J$.
There exists $\tilde d_h:\tilde X_h\rightarrow \tilde X\in h$ with auto conjugation such that there are factorizations $\tilde\beta_{kl}:\tilde X_h\rightarrow (\tilde X_h)_{kl}$ and $\tilde\gamma_{kl}:\tilde X_h\rightarrow (\tilde X_h')_{kl}$
for all $k\in I$ and $l\in J$. Thus taking the real part of $\tilde d_h:\tilde X_h\rightarrow \tilde X$, we have real analytic morphisms
$\beta_{kl}:X_h\rightarrow (X_h)_{kl}$ and $\gamma_{kl}:X_h\rightarrow (X_h')_{kl}$ factoring through $X_h\rightarrow X$. Thus
$$
d_h^{-1}(A_{kl})=\beta_{kl}^{-1}(d_h^{kl})^{-1}(A_{kl})
$$
and
$$
d_h^{-1}(B_{kl})=\gamma_{kl}^{-1}((d_h')^{kl})^{-1}(B_{kl})
$$
are semi analytic in $X_h$.
We now proceed as in the proof of Theorem \ref{TheoremC} to obtain the condition 2) of Theorem \ref{TheoremD}.
\end{proof}
\begin{Theorem}\label{TheoremR} Let $X$ be a smooth connected real analytic space and let $A$ be a sub analytic subset of $X$. Let $p\in X$ and let $n=\dim X$. Then there exist a finite number of real analytic morphisms $\pi_{\alpha}:V_{\alpha}\rightarrow X$
which are finite sequences of local blowups over $X$ and induce an open embedding of an open dense subset of $V_{\alpha}$ into $X$ such that:
\begin{enumerate}
\item[1)] Each $V_{\alpha}$ is isomorphic to ${\NZQ R}^n$,
\item[2)] There exist compact neighborhoods $K_{\alpha}$ in $V_{\alpha}$ such that $\cup_{\alpha}(K_{\alpha})$ is a compact neighborhood of $p$ in $X$,
\item[3)] For each $\alpha$, $\pi_{\alpha}^{-1}(A)$ is union of quadrants in ${\NZQ R}^n$.
\end{enumerate}
\end{Theorem}
\begin{proof} The proof follows from Theorem \ref{TheoremD} and Proposition 7.2 \cite{H} and Lemma 7.2.1 \cite{H}.
\end{proof}
\section{Introduction}
The Aharonov-Bohm (AB) effect, first proposed in 1959 \cite{ab}, was experimentally realized in a normal metal system in 1982
\cite{sharvin}.
Later the AB effect was observed in a semiconductor
system \cite{timp1}, and was the subject of a number of investigations
which expanded our general understanding of mesoscopic physics
\cite{timpagain,ford,ismail}. These investigations focused their
attention on relatively high magnetic fields. Only
\cite{chris} directly addressed the phase of the oscillations.
Recently, due to the perfection of e-beam lithography, the AB effect has been the subject of new interest. AB rings are now
used to perform phase sensitive measurements on e.g.\ quantum dots \cite{heiblum} or on rings where a local gate only affects the properties
in one of the
arms of the ring \cite{mailly}. The technique used in these reports is to locally change the properties of one of the arms in the
ring, and study the AB effect as a function of this perturbation. Information
about the changes in phase can be extracted from the measurements. Especially the observation of a period halving from $h/e$ to $h/2e$
and phase-shifts of $\pi$ in the magnetoconductance signal has attracted large interest.
\section{Experiment}
We fabricate the AB rings in a standard two dimensional electron gas (2DEG) situated 90nm below the surface of a
GaAs/GaAlAs heterostructure. The 2DEG
electron density is $2.0 \cdot 10^{15}\rm{m}^{-2}$ and the mobility is $90\rm{T}^{-1}$. This corresponds to a mean free path of
approximately $6~\rm{\mu m}$.
The ring is defined by e-beam lithography and realized with a shallow etch technique \cite{Anders}.
The etched AB structure has a ring radius of $0.65 \rm{\mu m}$ and a width of the arms of $200\rm{nm}$
(Fig. \ref{1}, left insert).
A $30\rm{\mu m}$ wide gold gate covers the entire ring, and is used to change the electron density.
A positive voltage $V_g$ must be applied to the gate for the structure to conduct.
The sample was cooled to $0.3{\rm K}$ in a $^{3}$He cryostat equipped with a copper electromagnet.
Measurements were performed using a conventional voltage biased lock-in technique
with an excitation voltage of $V=7.7{\rm \mu V}$ oscillating at a frequency of $131 {\rm Hz}$.
Here we show measurements performed on one device, similar results
have been obtained with another device in a total of six different cool-downs.
\begin{figure}
\centerline{
\epsfig{file=Fig1.ps,width=11cm}
}
\caption{
Measured magnetoconductance of the device shown on the SEM picture in the left insert.
The magnetoconductance show very clear AB oscillations superposed on a slowly varying background.
The right insert displays the zero
magnetic field conductance at $T=4.2 {\rm K}$ as a function of gate voltage. The
conductance curve displays distinct steps which show that the device is in a few-mode regime.
}
\label{1}
\end{figure}
\section{Results}
We first consider the conductance as a function of the voltage applied to the global gate at zero magnetic field.
This is shown in Fig.\ \ref{1} (right insert), at $T$=$4.2 {\rm K}$. Steps are observed at approximate integer values of
$e^{2}/h$. At least four steps are seen as the voltage is increased with $0.18{\rm V}$ from pinch-off. Such steps have previously been
reported in AB rings \cite{ismail}. The steps show that our system, in the gate voltage regime used here, only has a
few propagating modes. When the temperature is lowered, a fluctuating signal is superposed on the conductance curve. At $0.3$K,
the steps are completely washed out by the fluctuations. We ascribe the fluctuations to resonances.
They appear at the temperatures where the AB
oscillations become visible and are the signature of a fully phase coherent device.
\begin{figure}
\centerline{
\epsfig{file=Fig2.ps,width=10cm}
}
\caption{Grayscale plot of the measured conductance minus the conductance at zero field,
$G(\Phi,V_{G})$-$G(0,V_{G})$, as a function of applied magnetic
flux $\Phi$ through the ring (horizontal axis) and global gate voltage $V_{g}$
(vertical axis).
}
\label{gray}
\end{figure}
We show in Fig.\ \ref{1} an example of a magnetoconductance measurement.
Here the amplitude of the oscillations is $\sim$ 7 \% around zero field. We have seen oscillation amplitudes of up to 10 \%.
The conductance measurement is, due to a long distance from the voltage probes to the sample, effectively two-terminal.
Hence the magnetoconductance must be symmetric, $G(B)$ = $G(-B)$,
due to the Onsager relations. Here $B$ is the applied magnetic field. This means, that there can only be a maximum or
a minimum of the conductance at zero field, or stated differently, that the phase of the oscillations is $0$ or $\pi$
\cite{heiblum}.
In Fig.\ \ref{gray} we show the conductance $G(\Phi,V_g)$ {\sl minus} the fluctuating conductance at zero field $G(0,V_g)$,
as a function of magnetic flux $\Phi$ through the ring and gate voltage $V_g$.
The conductance is symmetric. Note that the dark (light) regions
correspond to magnetoconductance traces with an AB phase of $0$ ($\pi$). To exemplify this,
we show single traces in Fig.\ \ref{ex}. Another remarkable feature is the occurrence of traces with {\sl half} the
expected period in magnetic flux, see Fig.\ \ref{ex}.
We observe phase-shifts in the magnetoconductance, and occasional halving of the period, in all our measurements.
The transitions between situations with AB-phase $0$ and $\pi$ are smooth as the gate voltage is changed,
as can be seen in Fig.\ \ref{gray}. In between, magnetoconductance traces
that have both $h/e$- and $h/2e$-periodicity appear, Fig.\ \ref{fits}.
The zero-field conductance $G(0,V_g)$ for the measurement shown in Fig.\ \ref{gray} varies between $2.5$ and $4.5$ in units
of $e^2/h$. We find in general, that for conductances of the AB ring less than approximately $2e^2/h$, the AB oscillations are
weak or not present at all. This might be due to one of the arms pinching off before the other.
\begin{figure}
\centerline{
\epsfig{file=Fig3.ps,width=\linewidth}
}
\caption{Examples of magnetoconductance traces, showing (from above) AB phase of $\pi$,
period halving, and AB phase of $0$. The voltages refer to the $V_g$-axis on the
previous Fig.\ \ref{gray}. Circles are measurements, lines are fits with Eq.\ \ref{fiteq}.
The values of $\delta$ obtained from the fit are, from above, $1.50$, $0.73$, and $0.00$.
}
\label{ex}
\end{figure}
\section{Discussion}
The fact that the AB oscillations can have a minimum at zero field, implies that the AB ring on these occasions
is not symmetric, in the sense that the quantum phase acquired by traversing the two arms is not the same. In order
to understand the behaviour, we compare our measurements with the theory \cite{bil1,bil2}, which is derived for a phase coherent
device with 1D independent electrons, and only one incident mode. This is the simplest possible theoretical model
one can think of. The conductance $G$ is given by \cite{bil2}
\begin{eqnarray}
\label{bil}
{\rm G}(\theta, \phi, \delta )=& & \frac{2e^{2}}{h} 2\epsilon {\rm g}(\theta,\phi) \nonumber\\
& &(\sin^{2}\phi\cos^{2}\theta+\sin^{2}\theta\sin^{2}\delta-\sin^{2}\phi\sin^{2}
\delta).
\end{eqnarray}
Here, $\phi=k_{F}L$, where $k_{F}$ is the Fermi wave number and $L$ is half the circumference
of the ring, is the average phase due to spatial propagation.
$\delta=\Delta (k_{F}L)$ is the phase difference between the two ways of traversing the ring.
When $\delta$ is not $0$, the AB oscillations might be phase-shifted by $\pi$.
$\theta=\pi \Phi/\Phi_{0}$ is the phase originating from the magnetic flux.
The coupling parameter $\epsilon$ can vary between $0$, for a closed ring, and $1/2$.
The function ${\rm g(\theta, \phi)}$ is given by
\begin{eqnarray}
& &{\rm g(\theta, \phi)}= \nonumber\\
& &\frac{2 \epsilon}{(a_{-}^{2}\cos2\delta+a_{+}^{2}\cos2\theta-(1-\epsilon)\cos2\phi)^{2}+\epsilon^{2}\sin^{2}2\phi},
\end{eqnarray}
where $a_{\pm}=(1/2)(\sqrt{1-2\epsilon}\pm1)$.
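For illustration (a numerical sketch we add here, not part of the original analysis), the lineshapes of Eq.~(\ref{bil}) can be evaluated directly. With $\epsilon=1/2$ one has $a_\pm=\pm 1/2$, and at $\phi=\delta=\pi/4$ the $\theta$-dependence enters only through $\cos^2 2\theta$, so the magnetoconductance is purely $h/2e$-periodic, as noted in the text:

```python
from math import sin, cos, sqrt, pi

def conductance(theta, phi, delta, eps=0.5):
    """Evaluate the one-mode ring conductance formula above, in units of
    2e^2/h; theta = pi * Phi / Phi_0 is the flux phase and eps is the
    lead-coupling parameter (eps = 1/2 for an open ring)."""
    am = 0.5 * (sqrt(1.0 - 2.0 * eps) - 1.0)
    ap = 0.5 * (sqrt(1.0 - 2.0 * eps) + 1.0)
    g = 2.0 * eps / ((am**2 * cos(2 * delta) + ap**2 * cos(2 * theta)
                      - (1.0 - eps) * cos(2 * phi))**2
                     + eps**2 * sin(2 * phi)**2)
    return 2.0 * eps * g * (sin(phi)**2 * cos(theta)**2
                            + sin(theta)**2 * sin(delta)**2
                            - sin(phi)**2 * sin(delta)**2)
```

The sketch reproduces the two-terminal symmetry $G(\theta)=G(-\theta)$ and the fundamental period $\Delta\theta=\pi$ (i.e.\ $h/e$ in flux), as well as the exact period halving at $\phi=\delta=\pi/4$.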
Overall, we find the best agreement with the lineshape of the oscillations by taking $\epsilon$ = $1/2$, as
expected for an open system. Previously \cite{prb}, we estimated $\phi = k_FL$ $\sim$
$100-160$, for the gate voltage regime used here. However, note that varying $\phi$ and $\delta$ between $0$ and $\pi/2$
in the expression (\ref{bil}) exhaust all possible lineshapes of the magnetoconductance oscillations.
The equation (\ref{bil}) gives a conductance that can oscillate between $0$ and $2(e^2/h)$. The scale of the oscillations
as seen in Fig.\ \ref{gray}, is at most $0.3(e^2/h)$. In order to match the lineshape of the magnetoconductance
oscillations to measurements, we use the form
\begin{equation}
\label{fiteq}
{\rm G(B)}={\rm G_o}+{\rm G_{\Delta}}\cdot {\rm G}(\theta(B), \phi, \delta )_{\epsilon=1/2}.
\end{equation}
The introduction of the parameters ${\rm G_o}$ and ${\rm G_{\Delta}}$ is partly justified by the fact that 1) the experiment is performed
at a finite temperature, where the device might not be perfectly coherent. Incoherent transmission will on average not contribute
to the magnetoconductance oscillations. 2) For a system with more than one incident mode, again there will be a constant background and
the amplitude of the oscillations will be diminished \cite{bil2}.
The lines in Fig.\ \ref{ex} are fits with the form (\ref{fiteq}). Note first, that indeed the expression (\ref{bil}) can
produce both phase-shifts and halving of period. (For $\phi=\delta=\pi/4$ the period is purely $h/2e$.) Next, in Fig.\ \ref{fits}
several magnetoconductance traces are fitted with (\ref{fiteq}). The lineshapes of (\ref{bil}) agree nicely with the measurements.
Note however, that the introduction of the two extra parameters ${\rm G_o}$ and ${\rm G_{\Delta}}$ in (\ref{fiteq}), in addition
to $\phi$ and $\delta$ gives $4$ adjustable parameters in the fit. In order to extract solid information on the variation of
$\phi$ and $\delta$ in the experiment, an independent assessment of ${\rm G_o}$ and ${\rm G_{\Delta}}$ will be needed.
\begin{figure}
\centerline{
\epsfig{file=Fig4.ps,width=\linewidth}
}
\caption{Several magnetoconductance traces. Circles are measurements, lines are fits with Eq.\ \ref{fiteq}.
}
\label{fits}
\end{figure}
\section{Conclusion}
The oscillatory magnetoconductance of an AB ring, and in particular the phase of the oscillations, is systematically
studied as a function of electron density. We observe phase-shifts
of $\pi$ in the magnetoconductance oscillations, and halving of the fundamental $h/e$ period, as the density is varied.
All those features are reproduced by a simple theoretical model \cite{bil2}, when allowing for an asymmetry in the electron density in the
two arms of the ring.
Our interpretation gives a
simple explanation for why period-halving and phase shifts should appear in mesoscopic AB rings.
Further, our measurements suggest that variations in single-mode characteristics might be probed
by studying the lineshape of the AB oscillations.
\section{Acknowledgements}
We wish to thank David H.\ Cobden and Per Hedeg\aa rd for enlightening discussions.
This work was financially supported by Velux Fonden, Ib Henriksen Foundation, Novo
Nordisk Foundation, the Danish Research Council (grant 9502937, 9601677 and 9800243)
and the Danish Technical Research Council (grant 9701490)
\section{Introduction}
Let $\mathcal{F}$ be a family of graphs. The \textit{Tur\'an number} of $\mathcal{F}$, denoted by $ex(n, \mathcal{F})$, is the maximum number of edges in a graph with $n$ vertices which does not contain any subgraph isomorphic to a graph in $\mathcal{F}$.
When $\mathcal{F}=\{F\}$, we write $ex(n, F)$ instead of $ex(n, \{F\})$.
The problem of determining Tur\'an number for assorted graphs traces its history back to 1907, when Mantel showed that $ex(n,K_3)=\lfloor\frac{n^2}{4}\rfloor$.
In 1941, Tur\'an \cite{1941Turan} proved that if a graph does not contain a complete subgraph $K_r$, then the maximum number of edges it can contain is given by the Tur\'an-graph, a complete balanced $(r-1)$-partite graph.
For a graph $G$ and $S,T\subseteq V(G)$, denote by $E_G(S,T)$ the set of edges between $S$ and $T$ in $G$, i.e., $E_G(S,T)=\{uv\in E(G)\colon\, u\in S, v\in T\}$.
Let $e_G(S,T)=|E_G(S,T)|$.
If $S=T$, we use $e_G(S)$ instead of $e_G(S,S)$.
For a vertex $v\in V(G)$, the {\it degree} of $v$, written as $d_G(v)$ or simply $d(v)$,
is the number of edges incident with $v$. We use $d_T(v)$ instead of $e_G(S,T)$ when $S=\{v\}$.
For any $U \subseteq V(G)$, let $G[U]$ be the subgraph induced by $U$ whose edges are precisely the edges of $G$ with
both ends in $U$.
Let $G$ be a graph of order $n$, $P$ a property defined on $G$, and $k$ a positive integer.
A property $P$ is said to be \textit{$k$-stable} if, whenever $G+uv$ has the property $P$ and $d_G(u) + d_G(v) \geq k$, $G$ itself has the property $P$.
The $k$-\textit{closure} of a graph $G$ is the (unique) smallest graph $G'$ of order $n$ such that $E(G) \subseteq E(G')$ and $d_{G'}(u)+d_{G'}(v)<k$ for all $u v \notin E(G')$.
The $k$-closure can be obtained from $G$ by a recursive procedure of joining nonadjacent vertices with degree-sum at least $k$. In particular, if $G'=G$, we say that $G$ is \textit{stable under taking} $k$-closure.
Thus, if $P$ is $k$-stable and the $k$-closure of $G$ has property $P$, then $G$ itself has property $P$.
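As an illustration of the recursive procedure (an added sketch, not part of the original text), the $k$-closure can be computed by repeatedly joining nonadjacent vertices with degree-sum at least $k$ until no such pair remains:

```python
from itertools import combinations

def k_closure(n, edges, k):
    """Return the edge set of the k-closure of a graph on vertices
    0..n-1: repeatedly join nonadjacent vertices u, v with
    d(u) + d(v) >= k until no such pair remains."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    changed = True
    while changed:
        changed = False
        for u, v in combinations(range(n), 2):
            if v not in adj[u] and len(adj[u]) + len(adj[v]) >= k:
                adj[u].add(v)
                adj[v].add(u)
                changed = True
    # each edge appears once as a frozenset {u, v}
    return {frozenset((u, v)) for u in adj for v in adj[u]}
```

For example, the $3$-closure of the path on $4$ vertices is the complete graph $K_4$, while its $5$-closure adds no edges, so the path is stable under taking $5$-closure.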
For a natural number $\alpha$ and a graph $G$, the $\alpha$-\textit{disintegration} of $G$ is the process of iteratively removing from $G$ the vertices with degree at most $\alpha$ until the resulting graph has minimum degree at least $\alpha+1$ or is empty. The resulting subgraph $H=H(G, \alpha)$ is called the $(\alpha+1)$-\textit{core} of $G$.
It is well known that $H(G, \alpha)$ is unique and does not depend on the order of vertex deletion (for instance, see \cite{1996P}).
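The disintegration process can likewise be sketched in a few lines (an illustrative addition, not from the source); deleting low-degree vertices in any order yields the same $(\alpha+1)$-core:

```python
def core(n, edges, alpha):
    """Carry out the alpha-disintegration of a graph on vertices 0..n-1:
    iteratively delete vertices of degree <= alpha; the surviving vertex
    set induces the (alpha+1)-core H(G, alpha)."""
    adj = {v: set() for v in range(n)}
    for u, v in edges:
        adj[u].add(v)
        adj[v].add(u)
    alive = set(range(n))
    stack = [v for v in alive if len(adj[v]) <= alpha]
    while stack:
        v = stack.pop()
        if v not in alive:
            continue
        alive.discard(v)
        for w in adj[v]:
            adj[w].discard(v)
            if w in alive and len(adj[w]) <= alpha:
                stack.append(w)
    return alive
```

For a triangle with a pendant vertex, the $1$-disintegration removes only the pendant (the $2$-core is the triangle), while the $2$-disintegration empties the graph.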
The \textit{matching number} $\nu(G)$ is the number of edges in a maximum matching of $G$.
The $n$-vertex graph $H(n, k, a)$ is defined as follows.
The vertex set of $H(n, k, a)$ is partitioned into three sets $A, B, C$ such that $|A|=a,|B|=k-2a,|C|=n-k+a$, and the edge set of $H(n, k, a)$ consists of all edges between $A$ and $C$ together with all edges in $A \cup B$.
Let $H^+(n, k, a)$ and $H^{++}(n, k, a)$ be the graph obtained by adding one edge and two independent edges in $C$ of $H(n, k, a)$, respectively.
The number of $r$-cliques in $H(n, k, a)$ is denoted by $h_{r}(n, k, a):=\binom{k-a}{r}+(n-k+a)\binom{a}{r-1}$, where $h_{r}(n, k, 0)=\binom{k}{r}$.
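For concreteness (an illustrative sketch we add, following the definition above), $h_r(n,k,a)$ can be evaluated directly: every $r$-clique of $H(n,k,a)$ lies inside the clique $A\cup B$ of size $k-a$, or uses one vertex of $C$ together with $r-1$ vertices of $A$:

```python
from math import comb

def h(r, n, k, a):
    """Count r-cliques in H(n, k, a): cliques inside the clique
    A ∪ B of size k - a, plus cliques formed by one vertex of C
    (|C| = n - k + a) together with r - 1 vertices of A (|A| = a)."""
    return comb(k - a, r) + (n - k + a) * comb(a, r - 1)
```

In particular $h(2, n, k, a)$ is the edge count of $H(n,k,a)$, and $h(r, n, k, 0)=\binom{k}{r}$ since the second term vanishes.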
A \textit{linear forest} is a forest whose connected components are all paths and isolated vertices.
Let $\mathcal{L}_{k}$ be the family of all linear forests of size $k$ without isolated vertices.
In \cite{2019Wang}, Wang and Yang proved that $ex\left(n, \mathcal{L}_{n-k}\right)=\binom{n-k}{2}+O\left(k^{2}\right)$ when $n\geq 3k$.
Later, Ning and Wang \cite{2020Ning} completely determined the Tur\'an number $ex\left(n ; \mathcal{L}_{k}\right)$ for all $n>k$.
\begin{figure}
\begin{center}
\begin{tikzpicture}
\filldraw [color=black, fill=blue!15, very thick] (0,0) circle (50pt)node [left=2pt]{\Large $B$};
\filldraw [color=black, fill=blue!15, very thick] (1,0) circle (20pt)node {\Large $A$};
\draw[color=black, fill=white, very thick] (4.5,0) ellipse(0.8 and 1.6)node {\Large $C$};
\draw [black,very thick](1,0.7)--(4.5,1.6);
\draw [black,very thick](1,-0.7)--(4.5,-1.6);
\end{tikzpicture}
\end{center}\caption{$H(n,k,a)$}
\end{figure}
\begin{theorem}[Ning and Wang \cite{2020Ning}]\label{span}
For any integers $n$ and $k$ with $1 \leq k \leq n-1$, we have
$$
ex\left(n,\mathcal{L}_{k}\right)=\max \left\{h_2\left(n, k, 0\right),h_2\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}.
$$
\end{theorem}
Given a graph $T$ and a family of graphs $\mathcal{F}$, the \textit{generalized Tur\'an number} of $\mathcal{F}$ is the maximum number of copies of $T$ in an $\mathcal{F}$-free graph on $n$ vertices, denoted by $ex(n,T,\mathcal{F})$.
Note that $ex(n, K_2, \mathcal{F})=ex(n, \mathcal{F})$.
The problem of estimating generalized Tur\'an numbers has
received a lot of attention.
In 1962, Erd\H{o}s \cite{1962E} generalized the classical result of Tur\'an by determining the exact value of $ex(n,K_r,K_t)$.
Luo \cite{2018Luo} determined the upper bounds on $ex(n,K_r,P_{k})$ and $ex(n,K_r,\mathcal{C}_{\geq k})$, where $\mathcal{C}_{\geq k}$ is the family of all cycles with length at least $k$.
In \cite{2020Gerbner}, Gerbner, Methuku and Vizer investigated the function $ex(n,T,kF)$, where $kF$ denotes $k$ vertex disjoint copies of a fixed graph $F$.
The systematic study of $ex(n,T,\mathcal{F})$ was initiated by Alon and Shikhelman \cite{2016Alon}.
Recently, Zhang, Wang and Zhou \cite{2021Zhang} determined the exact values of $ex(n,K_r,\mathcal{L}_{k})$ by using the shifting method.
\begin{theorem}[Zhang, Wang and Zhou \cite{2021Zhang}]\label{2021Zhang}
For any $r\geq 2$ and $n\geq k+1$,
$$ex(n,K_r,\mathcal{L}_{k})=\max \left\{h_r\left(n, k, 0\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}.$$
\end{theorem}
Let $N_r(G)$ denote the number of $r$-cliques in $G$.
When $T = K_r$, $ex(n, K_r, \mathcal{F})$ is a function specifying the maximum possible number of $r$-cliques in an $\mathcal{F}$-free graph on $n$ vertices.
We extend Theorem \ref{2021Zhang} as follows.
\begin{theorem}\label{clique}
Let $G$ be an $\mathcal{L}_{k}$-free graph on $n$ vertices with minimum degree $d$ and $d\leq \lfloor\frac{k-1}{2}\rfloor$.
Then $$N_r(G)\leq \max \left\{h_r\left(n, k, d\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}.$$
The graphs $H(n,k,d)$ and $H\big(n,k,\lfloor\frac{k-1}{2}\rfloor\big)$ show that this bound is sharp.
\end{theorem}
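For orientation, $h_r(n,k,a)$ counts the $r$-cliques of $H(n,k,a)$; the computation in the proof of Theorem \ref{clique} gives $h_r(n,k,a)=\binom{k-a}{r}+(n-k+a)\binom{a}{r-1}$. The following is a minimal numerical sketch (assuming this formula; the helper name is ours) of the convexity in $a$ that the maxima in these statements rely on.

```python
from math import comb

def h(n, k, a, r=2):
    # r-cliques of H(n, k, a): a clique on k - a vertices, plus
    # n - k + a further vertices each joined to a fixed a-subset
    # (assumed formula, consistent with the counting in the proofs).
    return comb(k - a, r) + (n - k + a) * comb(a, r - 1)

# For fixed n, k, r the map a -> h(n, k, a, r) is convex, so over any
# interval of a it is maximized at an endpoint -- the shape shared by
# all the bounds above.
n, k, r = 100, 11, 3
vals = [h(n, k, a, r) for a in range(0, (k - 1) // 2 + 1)]
assert max(vals) == max(vals[0], vals[-1])
```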
Many extremal problems have the property that there is a unique extremal example, and moreover any construction of close to maximum size is structurally close to this extremal example.
In \cite{2019F}, F\"uredi, Kostochka, and Luo studied the maximum number of cliques in non-$\ell$-hamiltonian graphs, where the property non-$\ell$-hamiltonian is $(n+\ell)$-stable.
In fact, they asked not only to determine the maximum number of cliques in graphs having a stable property $P$, but also to prove a stability version of such results.
Motivated by the question proposed by F\"uredi, Kostochka, and Luo \cite{2019F}, we give the following result which is the stability version of Theorem \ref{clique}.
\begin{theorem}\label{stab2}
Let $G$ be an $\mathcal{L}_{k}$-free graph on $n$ vertices with minimum degree at least $d$.
If $n > k^5$, $r\leq \lfloor\frac{k-3}{2}\rfloor$ and $$N_r(G)>\max \left\{h_r(n, k, d),h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right)\right\},$$ then \\
(i) $G$ is a subgraph of the graph $H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$, $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ if $k$ is odd;\\
(ii) $G$ is a subgraph of the graph $H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$, $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$, $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^{++}\left(n, k-2, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ if $k$ is even.
\end{theorem}
In 1959, Erd\H{o}s and Gallai \cite{1959E} determined the maximum number of edges in an $n$-vertex graph $G$ with $\nu(G)\leq k$.
\begin{theorem}[Erd\H{o}s-Gallai Theorem \cite{1959E}]\label{1959E}
Let $G$ be a graph on $n$ vertices.
If $\nu(G) \leq k$, then
$$e(G)\leq \max \left\{h_{2}(n, 2k+1, 0),h_{2}(n, 2k+1, k)\right\}.$$
\end{theorem}
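Assuming $h_2(n,K,a)=\binom{K-a}{2}+(n-K+a)a$ (the $r=2$ case of the clique count used in the proofs), the bound in Theorem \ref{1959E} unwinds to the classical Erd\H{o}s--Gallai form $\max\{\binom{2k+1}{2},\binom{k}{2}+k(n-k)\}$; a quick numerical check of this identity:

```python
from math import comb

def h2(n, K, a):
    # Edge count of H(n, K, a): C(K-a, 2) + (n-K+a)*a
    # (assumed formula, the r = 2 case of h_r).
    return comb(K - a, 2) + (n - K + a) * a

# The two terms of the Erdos-Gallai bound in their classical form.
for n in range(10, 40):
    for k in range(1, (n - 1) // 2):
        assert h2(n, 2 * k + 1, k) == comb(k, 2) + k * (n - k)
        assert h2(n, 2 * k + 1, 0) == comb(2 * k + 1, 2)
```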
In \cite{2020Duan}, Duan et al. extended Erd\H{o}s-Gallai Theorem as follows.
\begin{theorem}[Duan et al. \cite{2020Duan}]\label{clique11}
If $G$ is a graph with $n\geq 2k+2$ vertices, minimum degree $d$, and $\nu(G) \leq k$,
then $$N_r(G)\leq\max \left\{h_r\left(n, 2k+1, d\right), h_r\left(n, 2k+1, k\right)\right\}.$$
\end{theorem}
As an application of our result, we give the stability version of Theorem \ref{clique11} for $2\leq r\leq k-1$.
\begin{theorem}\label{thm11}
Let $G$ be a graph on $n$ vertices with $\delta(G) \geq d$ and $\nu(G) \leq k$.
If $r\leq k-1$, $n > (2k+1)^5$ and
$$N_r(G)>\max \left\{h_{r}(n, 2k+1, d),h_{r}(n, 2k+1, k-2)\right\},$$
then $G$ is a subgraph of $H(n, 2k+1, k)$ or $H(n, 2k+1, k-1)$.
\end{theorem}
\section{The maximum number of cliques in $\mathcal{L}_{k}$-free graphs with given minimum degree}
The closure technique, initiated by Bondy and Chv\'atal \cite{1976Bondy} in 1976, plays a crucial role in the proof of Theorem \ref{clique}.
In \cite{2020Ning}, Ning and Wang proved that the property of being $\mathcal{L}_{k}$-free is $k$-stable.
\begin{lemma}[\cite{2020Ning}]\label{closure}
Let $G$ be a graph on $n$ vertices. Suppose that $u, v \in V(G)$ with $d(u)+d(v) \geq k$. Then $G$ is $\mathcal{L}_{k}$-free if and only if $G+u v$ is $\mathcal{L}_{k}$-free.
\end{lemma}
\noindent\textbf{Proof of Theorem \ref{clique}.}
Suppose, by way of contradiction, that $G$ is an $\mathcal{L}_{k}$-free graph with $N_{r}(G)>\max \left\{h_r\left(n, k, d\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}$.
Let $G^{\prime}$ be the $k$-closure of $G$.
Then Lemma \ref{closure} implies that $G'$ is $\mathcal{L}_{k}$-free.
Obviously, $\delta\left(G^{\prime}\right) \geq \delta(G)=d$.
Let $H_{1}$ denote the $\lfloor\frac{k+1}{2}\rfloor$-core of $G'$, i.e., the graph obtained by applying $\lfloor\frac{k-1}{2}\rfloor$-disintegration to $G'$.
\noindent\textbf{Claim 1.} $H_{1}$ is nonempty.
\noindent\textbf{Proof.} Suppose $H_{1}$ is empty.
Since exactly one vertex, of degree at most $\lfloor\frac{k-1}{2}\rfloor$, is deleted at each step of the $\lfloor\frac{k-1}{2}\rfloor$-disintegration, each step destroys at most $\binom{\lfloor\frac{k-1}{2}\rfloor}{r-1}$ cliques of size $r$.
The number of $K_{r}$'s contained in the last $\lceil\frac{k+1}{2}\rceil$ vertices is at most $\binom{\lceil\frac{k+1}{2}\rceil}{r}$.
Therefore,
$$
\begin{aligned}
N_{r}\left(G^{\prime}\right) & \leq \binom{\left\lceil\frac{k+1}{2}\right\rceil}{r} + \left(n-\left\lceil\frac{k+1}{2}\right\rceil\right)\binom{\lfloor\frac{k-1}{2}\rfloor}{r-1}\\
& = h_{r}\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right) \\
& \leq \max \left\{h_r\left(n, k, d\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\},
\end{aligned}
$$
contradicting our assumption on $N_{r}(G)$ since $N_{r}(G^{\prime})\geq N_{r}(G)$. The claim follows.\qed
\noindent\textbf{Claim 2.} $H_{1}$ is a clique.
\noindent\textbf{Proof.}
Note that $d_{G^{\prime}}(u)\geq \lfloor\frac{k+1}{2}\rfloor$ for any vertex $u$ in $H_1$.
Hence any two vertices $u,v\in V(H_1)$ satisfy $d_{G^{\prime}}(u)+d_{G^{\prime}}(v)\geq 2\lfloor\frac{k+1}{2}\rfloor\geq k$; since $G^{\prime}$ is stable under taking the $k$-closure, they are adjacent, and so $H_1$ is a clique.\qed
Let $t=\left|V\left(H_{1}\right)\right|$.
Now we estimate the range of $t$.
\noindent\textbf{Claim 3.} $\lfloor\frac{k+3}{2}\rfloor\leq t \leq k-d$.
\noindent\textbf{Proof.}
As $H_{1}$ is a clique and $d_{H_{1}}(u) \geq \lfloor\frac{k+1}{2}\rfloor$ for any vertex $u$ in $H_{1}$, we get $t \geq \lfloor\frac{k+3}{2}\rfloor$.
If $t \geq k-d+1$, then $d_{G^{\prime}}(u) \geq d_{H_{1}}(u) = t-1\geq k-d$ for any vertex $u$ in $H_1$.
Let $v$ be any vertex in $V\left(G^{\prime}\right) \backslash V\left(H_{1}\right)$.
Notice that $d_{G^{\prime}}(v) \geq d_{G}(v) \geq d$ and $d_{G^{\prime}}(u)+d_{G^{\prime}}(v) \geq k-d+d=k$.
Since $G^{\prime}$ is the $k$-closure of $G$, $v$ is adjacent to $u$.
Then $G^{\prime}$ contains a path $P_{k+1}$, i.e., a linear forest of size $k$, which is a contradiction.
Thus $\lfloor\frac{k+3}{2}\rfloor \leq t \leq k-d$.\qed
Let $H_{2}$ be the $(k+1-t)$-core of $G^{\prime}$.
Since $t \geq \lfloor\frac{k+3}{2}\rfloor$, we obtain $k+1-t \leq \lfloor\frac{k+1}{2}\rfloor$.
Therefore, $H_{1} \subseteq H_{2}$.
\noindent\textbf{Claim 4.} $H_{1} \neq H_{2}$.
\noindent\textbf{Proof.}
Suppose $H_{1}=H_{2}$.
Then $\left|V\left(H_{2}\right)\right|=t$.
Since each step during the process of $(k-t)$-disintegration destroys at most $\binom{k-t}{r-1}$ cliques of size $r$,
we have $N_{r}\left(G^{\prime}\right) \leq\binom{t}{r}+(n-t)\binom{k-t}{r-1}=h_{r}(n, k, k-t)$.
Note that $d \leq k-t\leq \lceil\frac{k-3}{2}\rceil$ from Claim 3.
By the convexity of $h_{r}(n, k, k-t)$, we have $N_{r}\left(G^{\prime}\right) \leq \max \left\{h_{r}(n, k, d), h_{r}(n, k, \lceil\frac{k-3}{2}\rceil)\right\}\leq \max \left\{h_r\left(n, k, d\right), h_r\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)\right\}$, a contradiction.
Thus the claim follows.\qed
By Claim 4, $H_{1}$ is a proper subgraph of $H_{2}$.
This implies that there are non-adjacent vertices $u$ and $v$ such that $u \in V(H_{1})$ and $v \in V(H_{2}) \backslash V(H_{1})$.
We have $d_{G^{\prime}}(u)+d_{G^{\prime}}(v) \geq t-1+(k+1-t)=k$.
As $G^{\prime}$ is stable under taking the $k$-closure, $u$ must be adjacent to $v$, a contradiction.
It is easy to see that the graphs $H(n,k,d)$ and $H(n,k,\lfloor\frac{k-1}{2}\rfloor)$ are $\mathcal{L}_{k}$-free. Hence either $H(n,k,d)$ or $H(n,k,\lfloor\frac{k-1}{2}\rfloor)$ attains the bound.
The theorem is proved.\qed
\section{Stability on $\mathcal{L}_{k}$-free graphs}
\subsection{Proof of Theorem \ref{stab2}}
Let $G$ be a graph on $n$ vertices.
If there are at least $s$ vertices in $V(G)$ with degree at most $q$, then we say $G$ has $(s, q)$-\textit{P\'osa property}.
If $G$ has $(s, q)$-P\'osa property and $n \geq s+q$, then we can check that
$$N_r(G) \leq\binom{n-s}{r}+s \binom{q}{r-1}.$$
The following two lemmas show the relationship between the $k$-stable property and the P\'osa property.
With the help of these two lemmas, we can approximate the structure of $k$-closure of a graph.
\begin{lemma}\label{posa}
Let $n \geq k+1$.
Assume property $P$ is $k$-stable and the complete graph $K_n$ has the property $P$.
Suppose $G$ is a graph on $n$ vertices with minimum degree at least $d$.
If $G$ does not have property $P$, then there exists an integer $q$ with $d \leq q \leq \frac{k-1}{2}$ such that $G$ has $(n-k+q, q)$-P\'osa property.
\end{lemma}
\noindent\textbf{Proof.}
Let $G^{\prime}$ be the $k$-closure of $G$ and $d_{G^{\prime}}\left(v_{1}\right), d_{G^{\prime}}\left(v_{2}\right), \cdots, d_{G^{\prime}}\left(v_{n}\right)$ be the degree sequence of $G^{\prime}$ such that $d_{G^{\prime}}\left(v_{1}\right)$ $\geq d_{G^{\prime}}\left(v_{2}\right) \geq \cdots \geq d_{G^{\prime}}\left(v_{n}\right)$.
Clearly, $G'$ is not a complete graph; otherwise $G'$ would have property $P$, and so would $G$, a contradiction.
Let $v_{i}$ and $v_{j}$ be two non-adjacent vertices in $G^{\prime}$ with $1 \leq i<j \leq n$ and $d_{G^{\prime}}\left(v_{i}\right)+d_{G^{\prime}}\left(v_{j}\right)$ as large as possible.
Obviously, $d_{G^{\prime}}\left(v_{i}\right)+d_{G^{\prime}}\left(v_{j}\right) \leq k-1$.
Let $S$ be the set of vertices in $V(G^{\prime}) \backslash\{v_i\}$ which are not adjacent to $v_{i}$ in $G'$.
By the choice of $v_j$, we have $d_{G^{\prime}}(v) \leq d_{G^{\prime}}\left(v_{j}\right)$ for any $v \in S$.
Then
$$
|S|=n-1-d_{G^{\prime}}\left(v_{i}\right) \geq n-k+d_{G^{\prime}}\left(v_{j}\right) .
$$
There are at least $n-k+d_{G^{\prime}}\left(v_{j}\right)$ vertices in $V\left(G^{\prime}\right)$ with degree at most $d_{G^{\prime}}\left(v_{j}\right)$.
Let $q=d_{G^{\prime}}\left(v_{j}\right)$.
Then $G'$ has $(n-k+q, q)$-P\'osa property.
Moreover, since $d_{G^{\prime}}\left(v_{i}\right) \geq d_{G^{\prime}}\left(v_{j}\right)$ and $d_{G^{\prime}}\left(v_{i}\right)+d_{G^{\prime}}\left(v_{j}\right) \leq k-1$, it follows that $q=d_{G^{\prime}}\left(v_{j}\right) \leq \frac{k-1}{2}$.
Since $G$ is a subgraph of $G^{\prime}$ and
$$
d_{G^{\prime}}\left(v_{j}\right) \geq \delta\left(G^{\prime}\right) \geq \delta(G) \geq d,
$$
we complete the proof.\qed
The following lemma gives a structural characterization of graphs with P\'osa property.
\begin{lemma}\label{bipartite}
Suppose $G$ has $n$ vertices and is stable under taking $k$-closure.
Let $q$ be the minimum integer such that $G$ has $(n-k+q, q)$-P\'osa property and $q \leq \frac{k-1}{2}$.
If $T$ is the set of vertices in $V(G)$ with degree at least $k-q$ and $T^{\prime}=V(G) \backslash T$, then $G\left[T, T^{\prime}\right]$ is a complete bipartite graph.
\end{lemma}
\noindent\textbf{Proof.}
Assume that $G\left[T, T^{\prime}\right]$ is not a complete bipartite graph.
Choose two non-adjacent vertices $u \in T$ and $v \in T^{\prime}$ such that $d(u)+d(v)$ is as large as possible.
Clearly, $d(u)+d(v) \leq k-1$ and $T$ forms a clique in $G$ as $G$ is stable under taking $k$-closure.
Now denote by $S$ the set of vertices in $V(G) \backslash\{u\}$ which are not adjacent to $u$ in $G$.
Clearly, $d\left(v^{\prime}\right) \leq d(v)$ for any $v^{\prime} \in S$, and
$$
|S|=n-1-d(u) \geq n-k+d(v).
$$
Since $d(u) \geq k-q$ and $d(u)+d(v) \leq k-1$, we have $d(v) \leq q-1$.
Let $q^{\prime}=d(v) \leq q-1$.
We have at least $n-k+q^{\prime}$ vertices in $V(G)$ with degree at most $q^{\prime}$.
Then $G$ has $(n-k+q', q')$-P\'osa property with $q'<q$, which contradicts the minimality of $q$.
The lemma follows.\qed
Let $g(k,\Delta)$ be the maximum number of edges in a graph whose linear forests all have size at most $k$ (i.e., an $\mathcal{L}_{k+1}$-free graph) and whose maximum degree is at most $\Delta$.
The following lemma gives an upper bound on $g(k,\Delta)$.
\begin{lemma}\label{k2}
For $k\geq 1$, \\
(i) ~$g(k,2)\leq \frac{3}{2}k$;\\
(ii) $g(k,\Delta)\leq k(\Delta-1)$ for any $\Delta\geq 3$.
\end{lemma}
\noindent\textbf{Proof of (i).}
Let $G$ be an $\mathcal{L}_{k+1}$-free graph with $e(G)=g(k,2)$ and $\Delta(G)\leq 2$.
Clearly, $g(1,2)=1$ and $g(2,2)=3$.
Now suppose that $k\geq 3$.
Since the maximum degree is at most 2, each nontrivial component is either a path or a cycle.
We claim that each component with at least 3 vertices is a cycle.
If not, some such component is a path; adding an edge between its two ends yields a graph that is still $\mathcal{L}_{k+1}$-free but has more edges, contradicting the maximality of $G$.
If there is a component consisting of exactly one edge and $G$ also contains a cycle component $C_{\ell}$, we replace these two components with $C_{\ell+1}$; the resulting graph is still $\mathcal{L}_{k+1}$-free and has the same number of edges.
(If every nontrivial component is a single edge, then $e(G)\leq k\leq \frac{3}{2}k$ and we are done.)
Therefore, we may further assume that each nontrivial component of $G$ is a cycle.
Let $C_{k_1}$, \ldots, $C_{k_t}$ be the nontrivial components of $G$.
Then $(k_1-1) + \cdots + (k_t-1)\leq k$ and $e(G)=k_1+\cdots+k_t\leq k+t$.
Since $k_i-1\geq 2$ for each $i$, we have $t\leq \frac{k}{2}$.
Thus $g(k,2)=e(G)\leq \frac{3}{2}k$.\qed
\noindent\textbf{Proof of (ii).}
We use induction on $k$.
It is easy to check that $g(1,\Delta)=1$ and $g(2,\Delta)=\Delta$.
Thus the lemma holds for $k=1,2$.
Suppose that the lemma holds for all $k'<k$.
Let $G$ be an $\mathcal{L}_{k+1}$-free graph with $\Delta(G)\leq \Delta$.
Let $P=v_0v_1\cdots v_{t}$ be the longest path in $G$ and $B=V(G)\backslash V(P)$.
Then $G[B]$ is $\mathcal{L}_{k+1-t}$-free and $e(G[B])\leq (k-t)(\Delta-1)$ by the induction hypothesis.
Since $P$ is the longest path in $G$, $d_B(v_0)=d_B(v_t)=0$ and $d_B(v_i)\leq \Delta-2$ for $1\leq i\leq t-1$.
Thus,
$$
\begin{aligned}
e(G[V(P)])+e_{G}(V(P), B)
& = \frac{1}{2}\left(\sum\limits_{i=0}^t d_G(v_i)+\sum\limits_{i=0}^t d_B(v_i)\right)\\
& \leq \frac{1}{2}\left((t+1)\Delta+(t-1)(\Delta-2)\right)\\
& = t(\Delta-1)+1
\end{aligned}
$$
The equality holds only if $d_G(v_0)=\cdots=d_G(v_t)=\Delta$, $d_B(v_1)=\cdots=d_B(v_{t-1})=\Delta-2$ and $d_B(v_0)=d_B(v_t)=0$ hold simultaneously, which is impossible.
Therefore, $e(G[V(P)])+e_{G}(V(P), B)\leq t(\Delta-1)$.
Moreover, we have
$$
\begin{aligned}
e(G)&=e(G[B])+e(G[V(P)])+e_{G}(V(P), B)\\
&\leq (k-t)(\Delta-1)+t(\Delta-1)\\
&\leq k(\Delta-1).
\end{aligned}
$$
\qed
\noindent\textbf{Remark.}
The graph consisting of $k/3$ pairwise disjoint $K_4$'s shows the bound in Lemma \ref{k2} (ii) is sharp when $3$ divides $k$ and $\Delta=3$.
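The sharpness example in the Remark can be verified by brute force for one or two copies of $K_4$: the maximum degree is $3$, the largest linear forest uses $3$ edges per copy, and the edge count meets $k(\Delta-1)$. A small throwaway sketch (the helper names are ours):

```python
from itertools import combinations

def k4_union(copies):
    # Edge list of `copies` pairwise disjoint K_4's.
    edges = []
    for c in range(copies):
        edges += list(combinations(range(4 * c, 4 * c + 4), 2))
    return edges

def is_linear_forest(edge_set):
    # A linear forest: maximum degree at most 2 and no cycles
    # (cycles detected with a tiny union-find).
    deg, parent = {}, {}
    def find(x):
        parent.setdefault(x, x)
        while parent[x] != x:
            x = parent[x]
        return x
    for u, v in edge_set:
        deg[u] = deg.get(u, 0) + 1
        deg[v] = deg.get(v, 0) + 1
        if deg[u] > 2 or deg[v] > 2:
            return False
        ru, rv = find(u), find(v)
        if ru == rv:          # edge inside one component -> cycle
            return False
        parent[ru] = rv
    return True

def max_linear_forest(edges):
    # Largest number of edges in a linear forest, by brute force.
    for s in range(len(edges), 0, -1):
        if any(is_linear_forest(c) for c in combinations(edges, s)):
            return s
    return 0

# Two disjoint K_4's: Delta = 3, largest linear forest has k = 6 edges,
# and e = 12 = k * (Delta - 1), matching the bound.
edges = k4_union(2)
assert len(edges) == 12 and max_linear_forest(edges) == 6
```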
For integers $m, l, r$, the following combinatorial identity is well-known.
\begin{eqnarray}\label{3.1}
\binom{m+l}{r}=\sum_{j=0}^{r}\binom{m}{j}\binom{l}{r-j}.
\end{eqnarray}
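Identity (\ref{3.1}) is the Vandermonde convolution; it can be sanity-checked numerically over small ranges (a throwaway sketch):

```python
from math import comb

# C(m+l, r) = sum_j C(m, j) * C(l, r-j).  math.comb returns 0 when the
# lower index exceeds the upper one, matching the convention used here.
for m in range(8):
    for l in range(8):
        for r in range(m + l + 1):
            assert comb(m + l, r) == sum(
                comb(m, j) * comb(l, r - j) for j in range(r + 1))
```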
The following lemma bounds the number of $r$-cliques by the number of edges.
\begin{lemma}[\cite{2021Chakraborti}]\label{cliques}
Let $r \geq 3$ be an integer, and let $x \geq r$ be a real number. Then, every graph with exactly $\binom{x}{2}$ edges contains at most $\binom{x}{r}$ cliques of order $r$.
\end{lemma}
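The bound of Lemma \ref{cliques} is attained by complete graphs; a small brute-force illustration (the helper name is ours, suitable only for tiny graphs):

```python
from itertools import combinations
from math import comb

def num_r_cliques(vertices, edges, r):
    # Count r-cliques by checking every r-subset of the vertex set.
    es = {frozenset(e) for e in edges}
    return sum(all(frozenset(p) in es for p in combinations(s, 2))
               for s in combinations(vertices, r))

# K_6 has C(6,2) = 15 edges and exactly C(6,3) = 20 triangles,
# so the lemma is tight for x = 6, r = 3.
V = list(range(6))
E = list(combinations(V, 2))
assert len(E) == comb(6, 2)
assert num_r_cliques(V, E, 3) == comb(6, 3)
```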
For two disjoint vertex sets $T$ and $T'$ of $G$, we use $N_r^i\left(T, T'\right)$ and $N_r^{\geq i}\left(T, T'\right)$ to denote
the number of $r$-cliques in $G[T\cup T']$ that contain exactly $i$ vertices and at least $i$ vertices in $T'$, respectively.
\noindent\textbf{Proof of Theorem \ref{stab2}.}
Let $G^{\prime}$ be the $k$-closure of $G$.
Then $G'$ is $\mathcal{L}_{k}$-free from Lemma \ref{closure}.
By Lemma \ref{posa}, there exists an integer $q$ with $d \leq q \leq \lfloor\frac{k-1}{2}\rfloor$ such that $G^{\prime}$ has $(n-k+q, q)$-P\'osa property.
Furthermore, we assume $q$ is as small as possible.
Then either $q=\lfloor\frac{k-1}{2}\rfloor$ or $q=\lfloor\frac{k-3}{2}\rfloor$.
Otherwise, $d \leq q \leq \lfloor\frac{k-5}{2}\rfloor$ implies that $N_r(G) \leq \binom{k-q}{r}+(n-k+q)\binom{q}{r-1}= h_r(n, k, q) \leq \max \left\{h_r(n, k, d),h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right)\right\}$, a contradiction.
\noindent\textbf{ (i)} $k$ is odd.
\noindent\textbf{Case 1.} $q=\frac{k-1}{2}$.
Let $T_{1}$ be the set of vertices in $V\left(G^{\prime}\right)$ with degree at least $\frac{k+1}{2}$, i.e.,
$$
T_{1}=\left\{u \in V\left(G^{\prime}\right): d_{G^{\prime}}(u) \geq \frac{k+1}{2}\right\} .
$$
Then $T_1$ is a clique in $G^{\prime}$. Let $T_1'=V\left(G^{\prime}\right) \backslash T_{1}$.
By Lemma \ref{bipartite}, $G^{\prime}\left[T_{1}, T_1'\right]$ is a complete bipartite graph.
We will show that $\left|T_{1}\right|=\frac{k-1}{2}$ or $\left|T_{1}\right|=\frac{k-3}{2}$.
\noindent\textbf{Claim 1.} $\left|T_{1}\right|\leq\frac{k-1}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{1}\right|\geq\frac{k+1}{2}$.
Since $G^{\prime}\left[T_1, T_1^{\prime}\right]$ is a complete bipartite graph, every vertex in $T_1'$ has degree at least $\frac{k+1}{2}$.
This implies that $T'_1$ is an empty set.
Thus $G'$ is a complete graph.
Since $n\geq k+1$, $G'$ contains a linear forest of size $k$, a contradiction. \qed
\noindent\textbf{Claim 2.} $\left|T_{1}\right|\geq\frac{k-3}{2}$.
\noindent\textbf{Proof.} Otherwise, $\left|T_{1}\right|\leq\frac{k-5}{2}$.
Write $|T_{1}|=\frac{k-1}{2}-t$; then $2\leq t\leq \frac{k-1}{2}$.
Since $G^{\prime}\left[T_1, T_1^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{1}^{\prime}\right]$ is at most $t$.
Moreover, $G^{\prime}\left[T_{1}^{\prime}\right]$ is $\mathcal{L}_{2t+1}$-free.
Otherwise we would find a linear forest of size at least $k$ in $G^{\prime}$.
By Lemma \ref{k2}, $e(G'[T_{1}^{\prime}])\leq g(2t,t)\leq 2t(t-1)$ when $t\geq 3$ and $e(G'[T_{1}^{\prime}])\leq g(2t,t)\leq 6$ when $t=2$.
Suppose $uv\in E(G'[T_1'])$.
Since the degrees of $u$ and $v$ are at most $\frac{k-1}{2}$, $u$ and $v$ have at most $\frac{k-3}{2}$ common neighbors. Thus the edge $uv$ is contained in at most $\binom{\frac{k-3}{2}}{r-2}$ $r$-cliques.
If $t=2$, then
$$
\begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_1) + N_r^1\left(T_1, T_1'\right) + N_r^{\geq 2}\left(T_1,T_1'\right)\\[1mm]
& \leq \binom{\frac{k-5}{2}}{r} + \left(n-\frac{k-5}{2}\right)\binom{\frac{k-5}{2}}{r-1}+ 6\binom{\frac{k-3}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}+5\binom{\frac{k-5}{2}}{r-1}+6 \binom{\frac{k-3}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right),
\end{aligned}
$$ where the last inequality follows from (\ref{3.1}), a contradiction.
If $3\leq t\leq \frac{k-1}{2}$, then
$$
\begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_1) + N_r^1\left(T_1, T_1'\right) + N_r^{\geq 2}\left(T_1,T_1'\right)\\[1mm]
& \leq \binom{\frac{k-1}{2}-t}{r} + \left(n-\frac{k-1}{2}+t\right)\binom{\frac{k-1}{2}-t}{r-1}+ 2t(t-1) \binom{\frac{k-3}{2}}{r-2}\\[1mm]
& \leq \binom{\frac{k-7}{2}}{r} + \left(n-\frac{k-7}{2}\right)\binom{\frac{k-7}{2}}{r-1}+ \frac{(k-1)(k-3)}{2} \binom{\frac{k-3}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-7}{2}}{r} + \left(n-\frac{k+5}{2}\right)\left(\binom{\frac{k-5}{2}}{r-1}-\binom{\frac{k-7}{2}}{r-2}\right) +6\binom{\frac{k-7}{2}}{r-1} + \frac{(k-1)(k-3)}{2} \binom{\frac{k-3}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right),
\end{aligned}
$$
where the third inequality follows from (\ref{3.1}), $n>k^5$ and $r\leq \lfloor\frac{k-3}{2}\rfloor$, a contradiction. \qed
By Claim 1 and Claim 2, we have $\left|T_{1}\right|=\frac{k-1}{2}$ or $\left|T_{1}\right|=\frac{k-3}{2}$.
When $\left|T_{1}\right|=\frac{k-3}{2}$,
since $G^{\prime}\left[T_{1}, T_{1}^{\prime}\right]$ is a complete bipartite graph and all the vertices in $T_{1}^{\prime}$ have degree at most $\frac{k-1}{2}$, it follows that all vertices in $T_{1}^{\prime}$ have degree at most one in $G^{\prime}\left[T_{1}^{\prime}\right]$.
Therefore, $G'[T_{1}^{\prime}]$ consists of independent edges and isolated vertices.
We claim there are at most two edges in $G^{\prime}\left[T_{1}^{\prime}\right]$.
Otherwise, one can find $P_{k-2}\cup 3 P_2$ in $G'$, a contradiction.
Thus, $G'\subseteq H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$.
When $\left|T_{1}\right|=\frac{k-1}{2}$,
since $G^{\prime}\left[T_{1}, T_{1}^{\prime}\right]$ is a complete bipartite graph and vertices in $T_{1}^{\prime}$ have degree at most $\frac{k-1}{2}$, it follows that $T_{1}^{\prime}$ forms an independent set of $G^{\prime}$.
Then $G^{\prime}$ is isomorphic to $H(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor)$.
\noindent\textbf{Case 2.} $q=\frac{k-3}{2}$.
Let $T_{2}$ be the set of vertices in $V\left(G^{\prime}\right)$ with degree at least $\frac{k+3}{2}$, i.e.,
$$
T_{2}=\left\{u \in V\left(G^{\prime}\right): d_{G^{\prime}}(u) \geq \frac{k+3}{2}\right\} .
$$
Then $T_2$ is a clique in $G^{\prime}$. Let $T_{2}^{\prime}=V\left(G^{\prime}\right) \backslash T_{2}$.
By Lemma \ref{bipartite}, $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph. We will show that $\left|T_{2}\right|=\frac{k-3}{2}$.
\noindent\textbf{Claim 3.} $\left|T_{2}\right|\leq\frac{k-3}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{2}\right|\geq\frac{k-1}{2}$.
The fact that $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph implies that all vertices in $T_{2}^{\prime}$ have degree at least $\frac{k-1}{2}$.
Therefore $G'$ has no vertex with degree less than or equal to $\frac{k-3}{2}$, which contradicts the fact that $G'$ has $(n-k+\frac{k-3}{2}, \frac{k-3}{2})$-P\'osa property. \qed
\noindent\textbf{Claim 4.} $\left|T_{2}\right|\geq\frac{k-3}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{2}\right|\leq\frac{k-5}{2}$.
Suppose $|T_{2}|= \frac{k-1}{2}-t$, where $2\leq t\leq \frac{k-1}{2}$.
Since $G^{\prime}\left[T_2, T_2^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{2}^{\prime}\right]$ is at most $t+1$.
Moreover, $G^{\prime}\left[T_{2}^{\prime}\right]$ is $\mathcal{L}_{2t+1}$-free.
Otherwise we would find a linear forest of size at least $k$ in $G^{\prime}$.
When $t=2$,
since $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph, $G^{\prime}\left[T_{2}^{\prime}\right]$ is $\mathcal{L}_5$-free with maximum degree at most 3.
By Lemma \ref{k2}, $e(G'[T_{2}^{\prime}])\leq g(4,3)< 10=\binom{5}{2}$.
Then we have $N_r(G'[T'_{2}])\leq \binom{5}{r}$ from Lemma \ref{cliques}.
Thus the following inequality holds:
$$
\begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_2) + N_r^1(T_2, T'_2) + \sum\limits_{i=2}^5 N_r^i(T_2, T_2')\\[1mm]
& \leq \binom{\frac{k-5}{2}}{r} + \left(n-\frac{k-5}{2}\right)\binom{\frac{k-5}{2}}{r-1}+ \sum\limits_{i=2}^5\binom{5}{i} \binom{\frac{k-5}{2}}{r-i}\\[1mm]
& = \binom{\frac{k+5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right),
\end{aligned}
$$
where the second equality follows from (\ref{3.1}), a contradiction.
When $3\leq t\leq \frac{k-1}{2}$,
by Lemma \ref{k2}, $e(G'[T_{2}^{\prime}])\leq g(2t,t+1)\leq 2t^2$.
Note that each edge in $G'[T_2']$ is contained in at most $\binom{\frac{k-1}{2}}{r-2}$ $r$-cliques.
Thus we have
$$
\begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_2) + N_r^1\left(T_2, T'_2\right) + N_r^{\geq 2}\left(T_2,T'_2\right)\\[1mm]
& \leq \binom{\frac{k-1}{2}-t}{r} + \left(n-\frac{k-1}{2}+t\right)\binom{\frac{k-1}{2}-t}{r-1}+ 2t^2 \binom{\frac{k-1}{2}}{r-2}\\[1mm]
& \leq \binom{\frac{k-7}{2}}{r} + \left(n-\frac{k-7}{2}\right)\binom{\frac{k-7}{2}}{r-1}+ \frac{(k-1)^2}{2} \binom{\frac{k-1}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-7}{2}}{r} + \left(n-\frac{k+5}{2}\right)\left[\binom{\frac{k-5}{2}}{r-1}-\binom{\frac{k-7}{2}}{r-2}\right] +6\binom{\frac{k-7}{2}}{r-1} + \frac{(k-1)^2}{2} \binom{\frac{k-1}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+5}{2}}{r} + \left(n-\frac{k+5}{2}\right)\binom{\frac{k-5}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right),
\end{aligned}
$$
where the third inequality follows from (\ref{3.1}), $n>k^5$ and $r\leq \lfloor\frac{k-3}{2}\rfloor$, a contradiction. \qed
By Claim 3 and Claim 4, we have
$\left|T_{2}\right|=\frac{k-3}{2}$.
Then $G^{\prime}\left[T_{2}^{\prime}\right]$ must be $\mathcal{L}_3$-free. Otherwise we can find a linear forest of size $k$.
Moreover, each vertex in $G^{\prime}\left[T_{2}^{\prime}\right]$ has degree at most two.
Thus $G'[T_{2}^{\prime}]$ is a subgraph of $C_3\cup (n-3)K_1$ or $2P_2\cup (n-4)K_1$.
It follows that $G^{\prime}$ is a subgraph of $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$.
Combining the two cases above, we get that $G$ is a subgraph of $H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$, $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$.\qed
\noindent\textbf{ (ii)} $k$ is even.
\noindent\textbf{Case 1.} $q=\frac{k-2}{2}$.
Let $T_{1}$ be the set of vertices in $V\left(G^{\prime}\right)$ with degree at least $\frac{k+2}{2}$, i.e.,
$$
T_{1}=\left\{u \in V\left(G^{\prime}\right): d_{G^{\prime}}(u) \geq \frac{k+2}{2}\right\} .
$$
Then $T_1$ is a clique in $G^{\prime}$. Let $T_1'=V\left(G^{\prime}\right) \backslash T_{1}$.
By Lemma \ref{bipartite}, $G^{\prime}\left[T_{1}, T_1'\right]$ is a complete bipartite graph. We will show that $\left|T_{1}\right|=\frac{k-2}{2}$ or $\left|T_{1}\right|=\frac{k-4}{2}$.
\noindent\textbf{Claim 5.} $\left|T_{1}\right|\leq\frac{k-2}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{1}\right|\geq\frac{k}{2}$.
The fact that $G^{\prime}\left[T_{1}, T_{1}^{\prime}\right]$ is a complete bipartite graph implies that all vertices in $T_{1}^{\prime}$ have degree at least $\frac{k}{2}$. Then $G^{\prime}$ has no vertex with degree less than or equal to $\frac{k-2}{2}$, which contradicts the fact that $G^{\prime}$ has $(n-k+\frac{k-2}{2}, \frac{k-2}{2})$-P\'osa property. \qed
\noindent\textbf{Claim 6.} $\left|T_{1}\right|\geq\frac{k-4}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{1}\right|\leq\frac{k-6}{2}$. Write $|T_{1}|= \frac{k}{2}-t$; then $3\leq t\leq \frac{k}{2}$.
Since $G^{\prime}\left[T_1, T_1^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{1}^{\prime}\right]$ is at most $t$.
Moreover, $G^{\prime}\left[T_{1}^{\prime}\right]$ is $\mathcal{L}_{2t}$-free.
Otherwise we would find a linear forest of size at least $k$ in $G^{\prime}$.
By Lemma \ref{k2}, $e(G'[T_{1}^{\prime}])\leq g(2t-1,t)\leq (2t-1)(t-1)$.
If $t=3$, then
$$
\begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_1) + N_r^1\left(T_1, T_1'\right) + N_r^{\geq 2}\left(T_1,T_1'\right)\\[1mm]
& \leq \binom{\frac{k-6}{2}}{r} + \left(n-\frac{k-6}{2}\right)\binom{\frac{k-6}{2}}{r-1}+ 10 \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}+6\binom{\frac{k-6}{2}}{r-1}+ 10 \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right),
\end{aligned}
$$
where the last inequality follows from (\ref{3.1}), a contradiction.
If $4\leq t\leq \frac{k}{2}$, then
$$
\begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_1) + N_r^1\left(T_1, T_1'\right) + N_r^{\geq 2}\left(T_1,T_1'\right)\\[1mm]
& \leq \binom{\frac{k}{2}-t}{r} + \left(n-\frac{k}{2}+t\right)\binom{\frac{k}{2}-t}{r-1}+ (2t-1)(t-1) \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& \leq \binom{\frac{k-8}{2}}{r} + \left(n-\frac{k-8}{2}\right)\binom{\frac{k-8}{2}}{r-1}+ \frac{(k-1)(k-2)}{2} \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-8}{2}}{r} + \left(n-\frac{k+6}{2}\right)\left(\binom{\frac{k-6}{2}}{r-1}-\binom{\frac{k-8}{2}}{r-2}\right)+7\binom{\frac{k-8}{2}}{r-1}+ \frac{(k-1)(k-2)}{2} \binom{\frac{k-2}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right),
\end{aligned}
$$
where the third inequality follows from (\ref{3.1}), $n>k^5$ and $r\leq \lfloor\frac{k-3}{2}\rfloor$, a contradiction. \qed
By Claim 5 and Claim 6, we have $\left|T_{1}\right|=\frac{k-4}{2}$ or $\left|T_{1}\right|=\frac{k-2}{2}$.
When $\left|T_{1}\right|=\frac{k-4}{2}$,
since $G^{\prime}\left[T_1, T_1^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{1}^{\prime}\right]$ is at most two.
Moreover, $G^{\prime}\left[T_{1}^{\prime}\right]$ is $\mathcal{L}_4$-free.
Therefore, apart from isolated vertices, $G^{\prime}\left[T_{1}^{\prime}\right]$ is a subgraph of $C_4$, $C_3\cup P_2$ or $3P_2$.
Thus $G$ is a subgraph of $H(n,k,\lfloor\frac{k-3}{2}\rfloor)$, $H^{+}(n,k-1,\lfloor\frac{k-3}{2}\rfloor)$ or $H^{++}(n,k-2,\lfloor\frac{k-3}{2}\rfloor)$.
When $\left|T_{1}\right|=\frac{k-2}{2}$,
since $G^{\prime}\left[T_{1}, T_{1}^{\prime}\right]$ is a complete bipartite graph, $G^{\prime}\left[T_{1}^{\prime}\right]$ is $\mathcal{L}_{2}$-free, i.e. there is at most one edge in $G^{\prime}\left[T_{1}^{\prime}\right]$.
Thus, $G'\subseteq H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$.
\noindent\textbf{Case 2.} $q=\frac{k-4}{2}$.
Let $T_{2}$ be the set of vertices in $V\left(G^{\prime}\right)$ with degree at least $\frac{k+4}{2}$, i.e.,
$$
T_{2}=\left\{u \in V\left(G^{\prime}\right): d_{G^{\prime}}(u) \geq \frac{k+4}{2}\right\} .
$$
Then $T_2$ is a clique in $G^{\prime}$. Let $T_{2}^{\prime}=V\left(G^{\prime}\right) \backslash T_{2}$.
By Lemma \ref{bipartite}, $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph. We will show that $\left|T_{2}\right|=\frac{k-4}{2}$.
\noindent\textbf{Claim 7.} $\left|T_{2}\right|\leq\frac{k-4}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{2}\right|\geq\frac{k-2}{2}$.
The fact that $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph implies that all vertices in $T_{2}^{\prime}$ have degree at least $\frac{k-2}{2}$.
Therefore $G'$ has no vertex with degree less than or equal to $\frac{k-4}{2}$, which contradicts the fact that $G'$ has $(n-k+\frac{k-4}{2}, \frac{k-4}{2})$-P\'osa property. \qed
\noindent\textbf{Claim 8.} $\left|T_{2}\right|\geq\frac{k-4}{2}$.
\noindent\textbf{Proof.}
Otherwise, $\left|T_{2}\right|\leq\frac{k-6}{2}$.
Write $|T_{2}|=\frac{k}{2}-t$; then $3\leq t\leq \frac{k}{2}$.
Since $G^{\prime}\left[T_2, T_2^{\prime}\right]$ is a complete bipartite graph, the maximum degree of $G^{\prime}\left[T_{2}^{\prime}\right]$ is at most $t+1$.
Moreover, $G^{\prime}\left[T_{2}^{\prime}\right]$ is $\mathcal{L}_{2t}$-free.
Otherwise, we would find a linear forest of size at least $k$ in $G^{\prime}$.
When $t=3$, since $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph, $G^{\prime}\left[T_{2}^{\prime}\right]$ is $\mathcal{L}_6$-free with maximum degree at most 4.
By Lemma \ref{k2} (ii), $e(T'_{2})\leq 15=\binom{6}{2}$.
So $N_r(G'[T'_{2}])\leq \binom{6}{r}$ from Lemma \ref{cliques}.
Then we have
$$
\begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_2) + N_r^1(T_2, T'_2) + \sum\limits_{i=2}^6 N_r^i(T_2, T'_2)\\[1mm]
& \leq \binom{\frac{k-6}{2}}{r} + \left(n-\frac{k-6}{2}\right)\binom{\frac{k-6}{2}}{r-1}+ \sum\limits_{i=2}^6\binom{6}{i} \binom{\frac{k-6}{2}}{r-i}\\[1mm]
& = \binom{\frac{k+6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right),
\end{aligned}
$$
where the second equality follows from (\ref{3.1}), a contradiction.
When $4\leq t\leq \frac{k}{2}$,
by Lemma \ref{k2}, $e(G'[T_{2}^{\prime}])\leq g(2t-1,t+1)\leq (2t-1)t$.
Thus we have
$$
\begin{aligned}
N_r\left(G^{\prime}\right)
& = N_r(T_2) + N_r^1\left(T_2, T'_2\right) + N_r^{\geq 2}\left(T_2,T'_2\right)\\[1mm]
& \leq \binom{\frac{k}{2}-t}{r} + \left(n-\frac{k}{2}+t\right)\binom{\frac{k}{2}-t}{r-1}+ (2t-1)t \binom{\frac{k}{2}}{r-2}\\[1mm]
& \leq \binom{\frac{k-8}{2}}{r} + \left(n-\frac{k-8}{2}\right)\binom{\frac{k-8}{2}}{r-1}+ \frac{k(k-1)}{2} \binom{\frac{k}{2}}{r-2}\\[1mm]
& = \binom{\frac{k-8}{2}}{r} + \left(n-\frac{k+6}{2}\right)\left[\binom{\frac{k-6}{2}}{r-1}-\binom{\frac{k-8}{2}}{r-2}\right] + 7\binom{\frac{k-8}{2}}{r-1}+\frac{k(k-1)}{2} \binom{\frac{k}{2}}{r-2}\\[1mm]
& < \binom{\frac{k+6}{2}}{r} + \left(n-\frac{k+6}{2}\right)\binom{\frac{k-6}{2}}{r-1}\\[1mm]
& = h_r\left(n, k, \left\lfloor\frac{k-5}{2}\right\rfloor\right),
\end{aligned}
$$
where the third inequality follows from (\ref{3.1}), $n>k^5$ and $r\leq \lfloor\frac{k-3}{2}\rfloor$, a contradiction. \qed
By Claim 7 and Claim 8, we have $\left|T_{2}\right|=\frac{k-4}{2}$.
Since $G^{\prime}\left[T_{2}, T_{2}^{\prime}\right]$ is a complete bipartite graph, all vertices in $T_{2}^{\prime}$ have degree at least $\frac{k-4}{2}$.
The $(n-k+\frac{k-4}{2},\frac{k-4}{2})$-P\'osa property implies that there are at most 4 vertices in $T_{2}^{\prime}$ with degree greater than 0 in $G^{\prime}\left[T_{2}^{\prime}\right]$.
Thus $G^{\prime}\left[T_{2}^{\prime}\right]$ is a subgraph of $K_4\cup (n-\frac{k+4}{2})K_1$. Then $G\subseteq H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$.
Combining the two cases above, we get that $G$ is a subgraph of $H\left(n, k, \left\lfloor\frac{k-1}{2}\right\rfloor\right)$, $H\left(n, k, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$, $H^+\left(n, k-1, \left\lfloor\frac{k-3}{2}\right\rfloor\right)$ or $H^{++}\big(n,k-2,\lfloor\frac{k-3}{2}\rfloor\big)$. The proof is finished. \qed
\section{The clique version of the stability of Erd\H{o}s-Gallai Theorem}
Notice that a linear forest with at least $2k + 1$ edges contains a matching of size at least $k + 1$; hence a graph $G$ with $\nu(G) \leq k$ must be $\mathcal{L}_{2k+1}$-free.
Combining Theorem \ref{stab2} (i) with some further analysis, we obtain Theorem \ref{thm11}.
\noindent\textbf{Proof of Theorem \ref{thm11}.}
Let $G$ be a graph satisfying the conditions of Theorem \ref{thm11}.
Then $G$ is $\mathcal{L}_{2k+1}$-free.
By Theorem \ref{stab2} (i), if $G\nsubseteq H^+\left(n, 2k, k-1\right)$, then $G$ is a subgraph of $H(n, 2k+1, k)$ or $H\left(n, 2k+1, k-1\right)$.
Next we will show that if $G\subseteq H^+\left(n, 2k, k-1\right)$, then $G\subseteq H\left(n, 2k+1, k-1\right)$.
If $G\subseteq H^+\left(n, 2k, k-1\right)$ and $G\subseteq H\left(n, 2k+1, k-1\right)$, then we are done.
Now we suppose that $G\subseteq H^+(n, 2k, k-1)$ and $G\nsubseteq H\left(n, 2k+1, k-1\right)$.
Note that $H^+(n, 2k, k-1)$ can be viewed as a graph obtained from $H(n, 2k-1, k-1)$
by adding two independent edges, say $x_1y_1$ and $x_2y_2$.
Since $G\nsubseteq H\left(n, 2k+1, k-1\right)$, both $x_1y_1$ and $x_2y_2$ must be in $E(G)$.
Let $G_1=G-\{x_1,y_1,x_2,y_2\}$.
Then $G_1\subseteq H(n-4,2k-1,k-1)$ and
\begin{align}\label{G'}
N_r(G_1)
& > h_r(n,2k+1,k-2)-4\binom{k-1}{r-1}-2\binom{k-1}{r-2}\notag\\
& > \binom{k-1}{r}+(n-k-3)\binom{k-2}{r-1}.
\end{align}
Since $G_1\subseteq H(n-4,2k-1,k-1)$, there exists an independent set $I$ with $|I|=n-k-3$ such that $d_{G_1}(v)\leq k-1$ for all $v\in I$.
Suppose that there are $t$ vertices in $I$ with degree $k-1$.
Then $t\leq k-2$.
Otherwise, we can find a $(k-1)$-matching $M$ in $G_1$.
The $(k-1)$-matching $M$ together with the edges $x_1y_1$ and $x_2y_2$ forms a $(k+1)$-matching in $G$, a contradiction.
\noindent\textbf{Case 1.} $t=0$.
In this case, all vertices in $I$ have degree at most $k-2$.
Thus
$$N_r(G_1)\leq \binom{k-1}{r}+(n-k-3)\binom{k-2}{r-1},$$
contradicting (\ref{G'}).
\noindent\textbf{Case 2.} $1\leq t\leq k-2$.
There are at most $k-2-t$ vertices in $I$ with degree $k-2$.
Otherwise, $|N(S)|\geq |S|$ holds for any $S\subseteq V(G_1)\setminus I$, so by Hall's Theorem
there exists a $(k-1)$-matching $M$ in $G_1$. The $(k-1)$-matching $M$ together with the edges $x_1y_1$ and $x_2y_2$ forms a $(k+1)$-matching in $G$, a contradiction.
Thus
\begin{align*}
N_r(G_1)
& \leq \binom{k-1}{r}+t\binom{k-1}{r-1}+(k-2-t)\binom{k-2}{r-1}+(n-k-2)\binom{k-3}{r-1}\\[1mm]
& < \binom{k-1}{r}+(k-1)\binom{k-1}{r-1}+(n-k-3)\binom{k-3}{r-1}\\[1mm]
& < \binom{k-1}{r}+(n-k-3)\binom{k-2}{r-1},
\end{align*}
where the last inequality follows from $n>(2k+1)^5$, contradicting (\ref{G'}).
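As a quick numerical sanity check (not part of the proof), one can compare the two sides of the final inequality directly; the parameter choices below are illustrative only:

```python
from math import comb

def case2_gap(n, k, r):
    """Difference (right minus left) of the two sides of the final Case 2
    inequality; a positive value means the strict inequality holds."""
    lhs = (comb(k - 1, r) + (k - 1) * comb(k - 1, r - 1)
           + (n - k - 3) * comb(k - 3, r - 1))
    rhs = comb(k - 1, r) + (n - k - 3) * comb(k - 2, r - 1)
    return rhs - lhs

# n just above the threshold (2k+1)^5, with small sample values of k and r:
assert case2_gap((2 * 5 + 1) ** 5 + 1, 5, 2) > 0
assert case2_gap((2 * 10 + 1) ** 5 + 1, 10, 3) > 0
```

After cancelling $\binom{k-1}{r}$, the inequality reduces to $(k-1)\binom{k-1}{r-1}<(n-k-3)\bigl[\binom{k-2}{r-1}-\binom{k-3}{r-1}\bigr]$, so the gap grows linearly in $n$.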
Thus $G\subseteq H^+(n, 2k, k-1)$ implies $G\subseteq H\left(n, 2k+1, k-1\right)$.
That is, $G$ is a subgraph of $H(n, 2k+1, k)$ or $H(n, 2k+1, k-1)$, completing the proof.
\qed
\section{Introduction}
\label{introSec}
\input{sectionsPdf/intro.tex}
\input{sectionsPdf/related.tex}
\input{sectionsPdf/overview.tex}
\input{sectionsPdf/algorithm.tex}
\input{sectionsPdf/analysis.tex}
\section{Implementation and Evaluation}
\label{experimentsSec}
\input{sectionsPdf/experiments.tex}
\section{Conclusion}
\label{conclusionSec}
\input{sectionsPdf/conclusion.tex}
\bibliographystyle{ACM-Reference-Format}
\section{Algorithms}
\label{algorithmSec}
We describe several adaptive processing strategies that are implemented in SkinnerDB. In Section~\ref{uctSub}, we introduce the UCT algorithm that all processing strategies are based upon. In Section~\ref{learningSub}, we describe how the UCT algorithm can generally be used to learn optimal join orders. In Section~\ref{genericSub}, we introduce a join order learning approach that can be implemented on top of existing SQL processing engines, in a completely non-intrusive manner. In Section~\ref{hybridSub}, we show how this strategy can integrate plans proposed by the original optimizer. In Section~\ref{customizedSub}, we propose a new query evaluation method that facilitates join order learning and the associated learning strategy.
While we describe the following algorithms only for SPJ queries, it is straightforward to add sorting, grouping, or aggregate calculations in a post-processing step (we do so in our actual implementation). Nested queries can be treated via decomposition~\cite{Neumann}.
\subsection{Background on UCT}
\label{uctSub}
Our method for learning optimal join orders is based on the UCT algorithm~\cite{Kocsis2006}. This is an algorithm from the area of reinforcement learning. It assumes the following scenario. We repeatedly make choices that result in rewards. Each choice is associated with reward probabilities that we can learn over time. Our goal is to maximize the sum of obtained rewards. To achieve that goal, it can be beneficial to make choices that resulted in large rewards in the past (``exploitation'') or choices about which we have little information (``exploration'') to inform future choices. The UCT algorithm balances between exploration and exploitation in a principled manner that results in probabilistic guarantees. More precisely, assuming that rewards are drawn from the interval $[0,1]$, the UCT algorithm guarantees that the expected regret (i.e., the difference between the sum of obtained rewards to the sum of rewards for optimal choices) is in $O(\log(n))$ where $n$ designates the number of choices made~\cite{Kocsis2006}.
We specifically select the UCT algorithm for several reasons. First, UCT has been applied successfully to problems with very large search spaces (e.g., planning Go moves~\cite{Gelly2012}). This is important since the search space for join ordering grows quickly in the query size. Second, UCT provides formal guarantees on cumulative regret (i.e., accumulated regret over all choices made). Other algorithms from the area of reinforcement learning~\cite{Feldman2014} focus for instance on minimizing simple regret (i.e., quality of the final choice). The latter would be more appropriate when separating planning from execution. Our goal is to interleave planning and execution, making the first metric more appropriate. Third, the formal guarantees of UCT do not depend on any instance-specific parameter settings~\cite{Domshlak2013}, distinguishing it from other reinforcement learning algorithms.
We assume that the space of choices can be represented as a search tree. In each round, the UCT algorithm makes a series of decisions that can be represented as a path from the tree root to a leaf. Those decisions result in a reward from the interval $[0,1]$, calculated by an arbitrary, randomized function specific to the leaf node (or as a sum of rewards associated with each path step). Typically, the UCT algorithm is applied in scenarios where materializing the entire tree (in memory) is prohibitively expensive. Instead, the UCT algorithm expands a partial search tree gradually towards promising parts of the search space. The UCT variant used in our system expands the materialized search tree by at most one node per round (adding the first node on the current path that is outside the currently materialized tree).
Materializing search tree nodes makes it possible to associate statistics with each node. The UCT algorithm maintains two counters per node: the number of times the node was visited and the average reward that was obtained for paths crossing through that node. If counters are available for all relevant nodes, the UCT algorithm selects at each step the child node $c$ maximizing the formula $r_c+w\cdot \sqrt{\log(v_p)/v_c}$ where $r_c$ is the average reward for $c$, $v_c$ and $v_p$ are the number of visits for child and parent node, and $w$ is a weight factor. In this formula, the first term represents exploitation while the second term represents exploration. Their sum represents the upper bound of a confidence bound on the reward achievable by passing through the corresponding node (hence the name of the algorithm: UCT for Upper Confidence bounds applied to Trees). Setting $w=\sqrt{2}$ is sufficient to obtain bounds on expected regret. It can however be beneficial to try different values to optimize performance for specific domains~\cite{Domshlak2013}.
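The selection rule is compact enough to state in a few lines; the following sketch (variable names are ours) mirrors the formula above:

```python
from math import log, sqrt

def uct_score(avg_reward, child_visits, parent_visits, w=sqrt(2)):
    """Upper confidence bound r_c + w * sqrt(log(v_p) / v_c): the first
    term rewards exploitation, the second rewards exploration."""
    return avg_reward + w * sqrt(log(parent_visits) / child_visits)

# A rarely visited child can outscore a frequently visited one with a
# higher average reward, steering the search toward uncertain choices.
assert uct_score(0.2, 2, 100) > uct_score(0.9, 100, 100)
```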
\subsection{Learning Optimal Join Orders}
\label{learningSub}
Our search space is the space of join orders. We consider all join orders except for join orders that introduce Cartesian product joins without need. Avoiding Cartesian product joins is a very common heuristic that is used by virtually all optimizers~\cite{Gubichev2015}.
To apply the UCT algorithm for join ordering, we need to represent the search space as a tree. We assume that each tree node represents one decision with regard to the next table in the join order. Tree edges represent the choice of one specific table. The tree root represents the choice of the first table in the join order. All query tables can be chosen since no table has been selected previously. Hence, the root node will have $n$ child nodes where $n$ is the number of tables to join. Nodes in the next layer of the tree (directly below the root) represent the choice of a second table. We cannot select the same table twice in the same join order. Hence, each of the latter nodes will have at most $n-1$ child nodes associated with remaining choices. The number of choices depends on the structure of the join graph. If at least one of the remaining tables is connected to the first table via join predicates, only such tables will be considered. If none of the remaining tables is connected, all remaining tables become eligible (since a Cartesian product join cannot be avoided given the initial choice). In total, the search tree will have $n$ levels. Each leaf node is associated with a completely specified join order.
We generally divide the execution of a query into small time slices in which different join orders are tried. For each time slice, the UCT algorithm selects a path through the aforementioned tree, thereby selecting the join order to try next. As discussed previously, only part of the tree will be ``materialized'' (i.e., we keep nodes with node-specific counters in main memory). When selecting a path (i.e., a join order), UCT exploits counters in materialized nodes wherever available to select the next path step. Otherwise, the next step is selected randomly. After a join order has been selected, this join order is executed during the current time slice. Results from different time slices are merged (while removing overlapping results). We stop once a complete query result is obtained.
Our goal is to translate the aforementioned formal guarantees of UCT, bounding the distance between expected and optimal reward (i.e., the regret), into guarantees on query evaluation speed. To achieve that goal, we must link the reward function to query evaluation progress. The approaches for combined join order learning and execution, presented in the following subsections, define the reward function in different ways. They all have however the property that higher rewards correlate with better join orders. After executing the selected join order for a bounded amount of time, we measure evaluation progress and calculate a corresponding reward value. The UCT algorithm updates counters (average reward and number of visits) in all materialized tree nodes on the previously selected path.
The following algorithms use the UCT algorithm as a sub-function. More precisely, we use two UCT-related commands in the following pseudo-code: \Call{UctChoice}{$T$} and \Call{RewardUpdate}{$T,j,r$}. The first one returns the join order chosen by the UCT algorithm when applied to search tree $T$ (some of the following processing strategies maintain multiple UCT search trees for the same query). The second function updates tree $T$ by registering reward $r$ for join order $j$. Sometimes, we will pass a reward function instead of a constant for $r$ (with the semantics that the reward resulting from an evaluation of that function is registered).
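To make this interface concrete, here is a minimal, simplified sketch of such a search tree for join ordering (ours, not SkinnerDB's actual implementation; the join graph is assumed to be given as a set of table pairs):

```python
import random
from math import log, sqrt

class UctTree:
    """Join-order search tree: a node is the sequence of tables chosen so
    far; per-node visit counts and reward sums drive UCB-based selection."""

    def __init__(self, tables, join_graph, w=sqrt(2)):
        self.tables = tables            # all tables of the query
        self.joins = join_graph         # set of frozenset({a, b}) join edges
        self.w = w
        self.visits = {(): 0}           # only materialized nodes are stored
        self.rewards = {(): 0.0}        # sum of rewards per node

    def _candidates(self, prefix):
        left = [t for t in self.tables if t not in prefix]
        # Avoid Cartesian products: prefer tables joined to the prefix.
        conn = [t for t in left
                if any(frozenset({t, p}) in self.joins for p in prefix)]
        return conn if prefix and conn else left

    def _score(self, node, parent_visits):
        vc = self.visits[node]
        if vc == 0:
            return float("inf")
        return (self.rewards[node] / vc
                + self.w * sqrt(log(parent_visits) / vc))

    def uct_choice(self):
        """Select one root-to-leaf path, i.e. a complete join order."""
        prefix = ()
        while len(prefix) < len(self.tables):
            cands = self._candidates(prefix)
            known = [t for t in cands if prefix + (t,) in self.visits]
            if known and len(known) == len(cands):
                vp = max(self.visits[prefix], 1)
                best = max(cands, key=lambda t: self._score(prefix + (t,), vp))
                prefix += (best,)
            else:                       # beyond the materialized tree: random
                prefix += (random.choice(cands),)
        return prefix

    def reward_update(self, join_order, r):
        """Register reward r on all materialized nodes of the chosen path,
        materializing at most one new node per round."""
        expanded = False
        for i in range(len(join_order) + 1):
            node = join_order[:i]
            if node not in self.visits:
                if expanded:
                    break
                self.visits[node], self.rewards[node] = 0, 0.0
                expanded = True
            self.visits[node] += 1
            self.rewards[node] += r
```

A driver repeatedly selects an order, executes it for one time slice, and feeds the measured progress back as a reward in $[0,1]$.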
\subsection{Generic Execution Engines}
\label{genericSub}
In this subsection, we show how we can learn optimal join orders when treating the execution engine as a black box with an SQL interface. This approach can be used on top of existing DBMS without changing a single line of their code.
A naive approach to learn optimal join orders in this context would be the following. Following the discussion in the last subsection, we divide each table joined by the input query into an equal number of batches (if the input query contains unary predicates in the where clause, we can apply them in the same step). We simplify by assuming that all tables are sufficiently large to contain at least one tuple per batch (otherwise, fewer batches can be used for extremely small tables). We iteratively choose join orders using the UCT algorithm. In each iteration, we use the given join order to process a join between one batch \textit{for the left-most table in the join order} and the remaining, complete tables. We remove each processed batch and add the result of each iteration to a result relation. We terminate processing once all batches are processed for at least one table. As we prove in more detail in Section~\ref{analysisSec}, the result relation contains a complete query result at this point. To process the query as quickly as possible, we feed the UCT algorithm with a reward function that is based on processing time for the current iteration. The lower the execution time, the higher the corresponding reward. Note that reducing the size of the left-most table in a join order (by using only a single batch) tends to reduce the sizes of all intermediate results. If the dominant execution time component is proportional to those intermediate result sizes (e.g., time for generating intermediate result tuples, index lookups, number of evaluated join predicates), execution time for one batch is proportional to execution time for the entire table (with a scaling factor that corresponds to the number of batches per table).
The reason why we call the latter algorithm naive is the following. In many settings, the reward function for the UCT algorithm is relatively inexpensive to evaluate. In our case, it requires executing a join between one batch and all the remaining tables. The problem is that execution cost can vary strongly as a function of join order. The factor separating execution time of best and worst join order may grow exponentially in the number of query tables. Hence, even a single iteration with a bad join order and a single tuple in the left-most table may lead to an overall execution time that is far from the optimum for the entire query. Hence, we must upper-bound execution time in each iteration.
This leads, however, to a new problem: what timeout should we choose per batch in each iteration? Ideally, we would select as timeout the time required by an optimal join order. Of course, we neither know an optimal join order nor its optimal processing time for a new query. Using a timeout that is lower than the optimum prevents us from processing an entire batch before the timeout. This might be less critical if we could back up the state of the processing engine and restore it when trying the same join order again. However, we currently treat the processing engine as a black box and cannot assume access to partial results and internal state. Further, most SQL processing engines execute a series of binary joins and generate potentially large intermediate results. As we may try out many different join orders, the space required for storing intermediate results for each join order alone would become prohibitive. So, we must assume that all intermediate results are lost if execution times out before a batch is finished. Using lower timeouts than necessary prevents us from making any progress. On the other hand, choosing a timeout that is too high leads to unnecessary overheads when processing sub-optimal join orders.
\begin{figure}[t]
\centering
\includegraphics{arxivMain-figure2.pdf}
\caption{Illustration of time budget allocation scheme: we do not know the optimal time per batch and iterate over different timeouts, allocating higher budgets less frequently.\label{budgetFigure}}
\end{figure}
The choice of a good timeout is therefore crucial, yet we cannot know the best timeout a priori. The solution lies in an iterative scheme that tries different timeouts in different iterations. We carefully balance allocated execution time over different timeouts, avoiding higher timeouts unless lower ones have been tried sufficiently often. More precisely, we will present a timeout scheme that ensures that the total execution time allocated per timeout does not differ by more than a factor of two across different timeouts. Figure~\ref{budgetFigure} gives an intuition for the corresponding timeout scheme (numbers indicate the iteration in which the corresponding timeout is chosen). We use timeouts that are powers of two (we also call the exponent the \textit{Level} of the timeout). We always choose the highest timeout for the next iteration such that the accumulated execution time for that timeout does not exceed the time allocated to any lower timeout. Having fixed a timeout for each iteration, we assign a reward of one for a fixed join order if the input was processed entirely. We assign a reward of zero otherwise.
Algorithm~\ref{nonIntrusiveAlg} presents pseudo-code matching the verbal description. First, tuples are filtered using unary predicates and the remaining tuples are partitioned into $b$ batches per table (we omit pseudo-code for pre-processing). We use function~\Call{DBMS}{} to invoke the underlying DBMS for processing one batch with a timeout. The function accumulates partial results in a result relation if processing finishes before the timeout and returns \textbf{true} in that case. Vector $o_i$ stores for each table an offset, indicating how many of its batches were completely processed (it is implicitly initialized to one for each table). Variable $n_l$ stores for each timeout level $l$ how much execution time was dedicated to it so far (it is implicitly initialized to zero and updated in each invocation of function~\Call{NextTimeout}{}). Note that we maintain separate UCT trees $T_t$ for each timeout $t$ (implicitly initialized as a single root node representing no joined tables). This prevents, for instance, processing failures for lower timeouts from influencing join ordering decisions for larger timeouts. We prove the postulated properties of the timeout scheme (i.e., balancing time over different timeouts) in Section~\ref{analysisSec}.
\begin{algorithm}[t!]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Returns timeout for processing next batch,}
\State \Comment{based on times $n$ given to each timeout before.}
\Function{NextTimeout}{$n$}
\State \Comment{Choose timeout level}
\State $L\gets\max\{L|\forall l<L:n_l\geq n_L+2^L\}$
\State \Comment{Update total time given to level}
\State $n_L\gets n_L+2^L$
\State \Comment{Return timeout for chosen level}
\State \Return{$2^L$}
\EndFunction
\vspace{0.15cm}
\State \Comment{Process SPJ query $q$ using existing DBMS and}
\State \Comment{by dividing each table into $b$ batches.}
\Procedure{SkinnerG}{$q=R_1\Join\ldots\Join R_m,b$}
\State \Comment{Apply unary predicates and partitioning}
\State $\{R_1^1,\ldots,R_m^b\}\gets$\Call{PreprocessingG}{$q,b$}
\State \Comment{Until we processed all batches of one table}
\While{$\nexists i:o_i>b$}
\State \Comment{Select timeout using pyramid scheme}
\State $t\gets$\Call{NextTimeout}{n}
\State \Comment{Select join order via UCT algorithm}
\State $j\gets$\Call{UctChoice}{$T_t$}
\State \Comment{Process one batch until timeout}
\State $suc\gets$\Call{DBMS}{$R_{j1}^{o_{j1}}\Join R_{j2}^{o_{j2}..b}\ldots\Join R_{jm}^{o_{jm}..b},t$}
\State \Comment{Was entire batch processed successfully?}
\If{$suc$}
\State \Comment{Mark current batch as processed}
\State $o_{j1}\gets o_{j1}+1$
\State \Comment{Store maximal reward in search tree}
\State \Call{RewardUpdate}{$T_t,j,1$}
\Else
\State \Comment{Store minimal reward in search tree}
\State \Call{RewardUpdate}{$T_t,j,0$}
\EndIf
\EndWhile
\EndProcedure
\end{algorithmic}
\end{small}
\caption{Regret-bounded query evaluation using a generic execution engine.\label{nonIntrusiveAlg}}
\end{algorithm}
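For intuition, \Call{NextTimeout}{} can be simulated directly; the sketch below is a near-direct transcription of the pseudo-code (searching levels bottom-up) and reproduces the pattern of Figure~\ref{budgetFigure}:

```python
from collections import defaultdict

def next_timeout(n):
    """Pick the highest level L such that every lower level has accumulated
    at least n[L] + 2^L time units, charge 2^L to level L, and return 2^L."""
    L = 0
    while all(n[l] >= n[L + 1] + 2 ** (L + 1) for l in range(L + 1)):
        L += 1
    n[L] += 2 ** L
    return 2 ** L

n = defaultdict(int)
timeouts = [next_timeout(n) for _ in range(9)]
# Levels chosen: 0,0,1,0,0,1,0,0,2 -- higher budgets occur less frequently,
# and accumulated time stays balanced across the levels used so far.
assert timeouts == [1, 1, 2, 1, 1, 2, 1, 1, 4]
```

After these nine iterations the per-level totals are $n_0=6$, $n_1=4$, $n_2=4$, i.e., within the postulated factor of two of each other.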
\subsection{Hybrid Algorithm}
\label{hybridSub}
The algorithm presented in the last subsection uses reinforcement learning alone to order joins. It bypasses any join ordering capabilities offered by an existing optimizer completely. This approach is efficient for queries where erroneous statistics or difficult-to-analyze predicates mislead the traditional optimizer. However, it adds unnecessary learning overheads for standard queries where a traditional optimizer would produce reasonable query plans.
\begin{figure}[t]
\includegraphics{arxivMain-figure3.pdf}
\caption{The hybrid approach alternates with increasing timeouts between executing plans proposed by the traditional optimizer (red) and learned plans (blue).\label{hybridFig}}
\end{figure}
We present a hybrid algorithm that combines reinforcement learning with a traditional query optimizer. Instead of using an existing DBMS only as an execution engine, we additionally try benefiting from its query optimizer whenever possible. We do not provide pseudo-code for the hybrid algorithm as it is quick to explain. We iteratively execute the query using the plan chosen by the traditional query optimizer, using a timeout of $2^i$ where $i$ is the number of invocations (for the same input query) and time is measured according to some atomic units (e.g., several tens of milliseconds). In between two traditional optimizer invocations, we execute the learning based algorithm described in the last subsection. We execute it for the same amount of time as the traditional optimizer. We save the state of the UCT search trees between different invocations of the learning approach. Optionally, if a table batch was processed by the latter, we can remove the corresponding tuples before invoking the traditional optimizer. Figure~\ref{hybridFig} illustrates the hybrid approach. As shown in Section~\ref{analysisSec}, the hybrid approach bounds expected regret (compared to the optimal plan) and guarantees a constant factor overhead compared to the original optimizer.
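A sketch of the alternation loop follows (the two callbacks stand in for invoking the DBMS with the optimizer's plan and for running one learning iteration of Section~\ref{genericSub}; all names are ours):

```python
import itertools, time

def hybrid_execute(run_optimizer_plan, run_learning_slice, unit=0.05):
    """Alternate between the traditional optimizer's plan and learned plans,
    doubling the per-phase budget each round (2^i atomic units of `unit`
    seconds). Each callback returns True once the full result is produced."""
    for i in itertools.count():
        budget = unit * 2 ** i
        if run_optimizer_plan(budget):      # optimizer phase (red in Figure 3)
            return "optimizer"
        deadline = time.monotonic() + budget
        while time.monotonic() < deadline:  # learning phase of equal length
            if run_learning_slice():        # UCT trees persist across phases
                return "learning"
```

Whichever side finishes first ends the query; the doubling budgets are what yield the constant-factor overhead relative to the original optimizer.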
\subsection{Customized Execution Engines}
\label{customizedSub}
The algorithms presented in the previous sections can work with any execution engine for SPJ queries. In this section, we present an execution engine that is tailored towards the needs of a learning based join ordering strategy. In addition, we present a variant of the join order learning algorithm that optimally exploits that execution engine.
\begin{comment}
\begin{algorithm}[t]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Advance tuple pointer $p$ for position $i$ in join order $j$}
\State \Comment{for query $q$, considering tuple offsets $o$.}
\Function{Advance}{$q=R_1\Join\ldots\Join R_m,j,o,p,i$}
\State \Comment{Advance tuple pointer for join order position}
\State $p_{j_i}\gets p_{j_i}+1$
\State \Comment{While index exceeds relation cardinality}
\While{$p_{j_i}>|q.R_{j_i}|$ \textbf{and} $i>0$}
\State $p_{j_i}\gets o_{j_i}$
\State $i\gets i-1$
\State $p_{j_i}\gets p_{j_i}+1$
\EndWhile
\State \Return{$\langle p,i\rangle$}
\EndFunction
\vspace{0.15cm}
\State \Comment{Execute join order $j$ for query $q$ starting from}
\State \Comment{tuple pointer $p$ with tuple offsets $o$. Add results}
\State \Comment{to $R$ until time budget $b$ is depleted.}
\Function{Execute}{$q=R_1\Join\ldots\Join R_m,j,o,b,p,R$}
\State $i\gets1$ \Comment{Initialize join order index}
\While{processing time $<b$ \textbf{and} $i>0$}
\State $t\gets\Call{Materialize}{q.R[p_{j_1}]\times\ldots\times q.R[p_{j_i}]}$
\If{$t$ satisfies all newly applicable predicates}
\If{$i=q.n$} \Comment{Is result tuple completed?}
\State $R\gets R\cup\{p\}$ \Comment{Add pointers to result set}
\State $\langle p,i\rangle\gets\Call{Advance}{q,j,o,p,i}$
\Else \Comment{Tuple is incomplete}
\State $i\gets i+1$
\EndIf
\Else \Comment{Tuple violates predicates}
\State $\langle p,i\rangle\gets\Call{Advance}{q,j,o,p,i}$
\EndIf
\EndWhile
\State \Comment{Join order position 0 indicates termination}
\State \Return{$(i<1)$}
\EndFunction
\end{algorithmic}
\end{small}
\caption{Auxiliary functions for regret-bounded query evaluation with customized engine.\label{customizedAuxAlg}}
\end{algorithm}
\end{comment}
\begin{algorithm}[t]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Advance tuple index in state $s$ for table at position $i$}
\State \Comment{in join order $j$ for query $q$, considering tuple offsets $o$.}
\Function{NextTuple}{$q=R_1\Join\ldots\Join R_m,j,o,s,i$}
\State \Comment{Advance tuple index for join order position}
\State $s_{j_i}\gets s_{j_i}+1$
\State \Comment{While index exceeds relation cardinality}
\While{$s_{j_i}>|R_{j_i}|$ \textbf{and} $i>0$}
\State $s_{j_i}\gets o_{j_i}$
\State $i\gets i-1$
\State $s_{j_i}\gets s_{j_i}+1$
\EndWhile
\State \Return{$\langle s,i\rangle$}
\EndFunction
\vspace{0.15cm}
\State \Comment{Execute join order $j$ for query $q$ starting from}
\State \Comment{tuple indices $s$ with tuple offsets $o$. Add results}
\State \Comment{to $R$ until time budget $b$ is depleted.}
\Function{ContinueJoin}{$q=R_1\Join\ldots\Join R_m,j,o,b,s,R$}
\State $i\gets1$ \Comment{Initialize join order index}
\While{processing time $<b$ \textbf{and} $i>0$}
\State $t\gets\Call{Materialize}{R_{j_1}[s_{j_1}]\times\ldots\times R_{j_i}[s_{j_i}]}$
\If{$t$ satisfies all newly applicable predicates}
\If{$i=m$} \Comment{Is result tuple completed?}
\State $R\gets R\cup\{s\}$ \Comment{Add indices to result set}
\State $\langle s,i\rangle\gets\Call{NextTuple}{q,j,o,s,i}$
\Else \Comment{Tuple is incomplete}
\State $i\gets i+1$
\EndIf
\Else \Comment{Tuple violates predicates}
\State $\langle s,i\rangle\gets\Call{NextTuple}{q,j,o,s,i}$
\EndIf
\EndWhile
\State \Comment{Join order position 0 indicates termination}
\State \Return{$(i<1)$}
\EndFunction
\end{algorithmic}
\end{small}
\caption{Multi-way join algorithm supporting fast join order switching.\label{customizedAuxAlg}}
\end{algorithm}
\begin{comment}
\begin{algorithm}[t]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Regret-bounded evaluation of SPJ query $q$,}
\State \Comment{sampling join orders with time budget $b$}
\State \Comment{and using reward function $F_R$.}
\Function{SkinnerC}{$q,b,F_R$}
\State $o\gets \langle 0,\ldots,0\rangle$ \Comment{Initialize tuple offsets}
\State $R\gets\emptyset$ \Comment{Initialize result pointers}
\State $finished\gets\mathbf{false}$ \Comment{Initialize termination flag}
\While{$\neg finished$}
\State \Comment{Choose join order via UCT algorithm}
\State $j\gets\Call{UctChoice}{T}$
\State \Comment{Restore tuple pointers for this join order}
\State $p\gets\Call{RestorePointers}{j}$
\State \Comment{Execute join order for time budget}
\State $finished\gets\Call{ContinueJoin}{q,j,o,b,p,R}$
\State \Comment{Update UCT tree via progress-based rewards}
\State \Call{RewardUpdate}{$T,j,F_R$}
\State \Comment{Backup pointers and update tuple offsets}
\State $o\gets\Call{BackupProgress}{j,p,o}$
\EndWhile
\State \Return{$[\Call{Materialize}{R_1[p_1]\times R_2[p_2]\ldots}|p\in R]$}
\EndFunction
\end{algorithmic}
\end{small}
\caption{Regret-bounded query evaluation using a customized execution engine.\label{customizedAlg}}
\end{algorithm}
\end{comment}
\begin{algorithm}[t]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Regret-bounded evaluation of SPJ query $q$,}
\State \Comment{length of time slices is restricted by $b$.}
\Function{SkinnerC}{$q=R_1\Join\ldots\Join R_m,b$}
\State \Comment{Apply unary predicates and hashing}
\State $q\gets$\Call{PreprocessingC}{$q$}
\State $R\gets\emptyset$ \Comment{Initialize result indices}
\State $finished\gets\mathbf{false}$ \Comment{Initialize termination flag}
\While{$\neg finished$}
\State \Comment{Choose join order via UCT algorithm}
\State $j\gets\Call{UctChoice}{T}$
\State \Comment{Restore execution state for this join order}
\State $s\gets\Call{RestoreState}{j,o,S}; s_{prior}\gets s$
\State \Comment{Execute join order during time budget}
\State $finished\gets\Call{ContinueJoin}{q,j,o,b,s,R}$
\State \Comment{Update UCT tree via progress-based rewards}
\State \Call{RewardUpdate}{$T,j,\textproc{Reward}(s-s_{prior},j)$}
\State \Comment{Backup execution state for join order}
\State $\langle o,S\rangle\gets\Call{BackupState}{j,s,o,S}$
\EndWhile
\State \Return{$[\Call{Materialize}{R_1[s_1]\times R_2[s_2]\ldots}|s\in R]$}
\EndFunction
\end{algorithmic}
\end{small}
\caption{Regret-bounded query evaluation using a customized execution engine.\label{customizedAlg}}
\end{algorithm}
Most execution engines are designed for a traditional approach to query evaluation. They assume that a single join order is executed for a given query (after being generated by the optimizer). Learning optimal join orders while executing a query leads to unique requirements on the execution engine. First, we execute many different join orders for the same query, each one only for a short amount of time. Second, we may even execute the same join order multiple times with many interruptions (during which we try different join orders). This specific scenario leads to (at least) three desirable performance properties for the execution engine. First, the execution engine should minimize overheads when switching join orders. Second, the engine should preserve progress achieved for a given join order even if execution is interrupted. Finally, the engine should allow sharing achieved progress between different join orders as well, to the maximal extent possible. The generic approach realizes the latter point only to a limited extent (by discarding batches processed completely by any join order from consideration by other join orders).
The key towards achieving the first two desiderata (i.e., minimal overhead when switching join orders or interrupting execution) is a mechanism that backs up execution state as completely as possible. Also, restoring prior state when switching join order must be very efficient. By ``state'', we mean the sum of all intermediate results and changes to auxiliary data structures that were achieved during a partial query evaluation for one specific join order. We must keep execution state as small as possible in order to back it up and to restore it efficiently.
Two key ideas enable us to keep execution state small. First, we represent tuples in intermediate results concisely as vectors of tuple indices (each index pointing to one tuple in a base table). Second, we use a multi-way join strategy limiting the number of intermediate result tuples to at most one at any point in time. Next, we discuss both ideas in detail.
Traditional execution engines for SPJ queries produce intermediate results that consist of actual tuples (potentially containing many columns with elevated byte sizes). To reduce the size of the execution state, we materialize tuples only on demand. Each tuple, be it a result tuple or a tuple in an intermediate result, is the result of a join between single tuples in a subset of base tables. Hence, whenever possible, we describe tuples simply by an array of tuple indices (whose length is bounded by the number of tables in the input query). We materialize partial tuples (i.e., only the required columns) temporarily to check whether they satisfy applicable predicates or immediately before returning results to the user. To do that efficiently, we assume a column store architecture (allowing quick access to selected columns) and a main-memory resident data set (reducing the penalty of random data access).
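On-demand materialization from a tuple index vector is straightforward in a column store. The following Python sketch illustrates the idea; the mapping layout and function name are our own, not SkinnerDB's actual data structures:

```python
def materialize(columns, tuple_idx, needed):
    """Materialize only the columns in `needed` for a tuple described
    by an index vector (one row index per base table).

    `columns` maps (table, column) pairs to value lists, mimicking a
    main-memory column store; only the requested cells are touched,
    keeping intermediate state to a small vector of integers."""
    return {(t, c): columns[(t, c)][tuple_idx[t]] for (t, c) in needed}
```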
Most traditional execution engines for SPJ queries process join orders by a sequence of binary join operations. This can generate large intermediate results that would become part of the execution state. We avoid that by a multi-way join strategy whose intermediate result size is restricted to at most one tuple. We describe this strategy first for queries with generic predicates. Later, we discuss an extension for queries with equality join predicates based on hashing.
Intuitively, our multi-way join strategy can be understood as a depth-first search for result tuples. Considering input tables in one specific join order, we fix one tuple in a predecessor table before considering tuples in the successor table. We start with the first tuple in the first table (in join order). Next, we select the first tuple in the second table and verify whether all applicable predicates are satisfied. If that is the case, we proceed to considering tuples in the third table. If not, we consider the next tuple in the second table. Once all tuples in the second table have been considered for a fixed tuple in the first table, we ``backtrack'' and advance the tuple index for the first table by one. Execution ends once all tuples in the first table have been considered.
\tikzstyle{row}=[anchor=south west, draw=black, fill=blue!50, minimum width=1cm, minimum height=0.5cm]
\tikzstyle{joinFlow}=[thick, -stealth, red, dashed]
\tikzstyle{joinOrder}=[red, font=\small]
\begin{figure}[t]
\includegraphics{arxivMain-figure4.pdf}
\caption{Depth-first multi-way join strategy: we increase the join order index once the first tuple satisfying all applicable predicates is found, we decrease it once all tuples in the current table were considered.\label{joinFigure}}
\end{figure}
\begin{example}
Figure~\ref{joinFigure} illustrates the process for a three-table join. Having fixed a tuple in the left-most table (at the left, we start with the first tuple), the join order index is increased. Next, we find the first tuple in the second table satisfying the join condition with the current tuple in the first table. Having found such a tuple, we increase the join order index again. Now, we iterate over tuples in the third table, adding each tuple combination satisfying all applicable conditions to the result. After all tuples in the last table have been considered, we decrease the join order index and consider the next tuple in the second table.
\end{example}
Algorithm~\ref{customizedAuxAlg} implements that approach. Function~\Call{ContinueJoin}{} realizes the execution strategy described before. For a fixed amount of processing time (we use the number of outer while loop iterations as a proxy in our implementation) or until all input data is processed, it either increases the ``depth'' (i.e., the join order index $i$) to further complete a partial tuple satisfying all applicable predicates, or it advances tuple indices using Function~\Call{NextTuple}{}. The latter function increases the tuple index for the current join order position or backtracks if the table cardinality is exceeded. Note that the same result tuple might be added multiple times in invocations of the execution engine for different join orders. However, we add tuple index vectors into a result \textit{set}, avoiding duplicate entries (of course, two different tuple index vectors can represent two result tuples with the same values in each column).
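The depth-first strategy of Algorithm~\ref{customizedAuxAlg} can also be mimicked in executable form. The following Python sketch is a simplified, budget-free analogue of \textproc{ContinueJoin} and \textproc{NextTuple}: it uses 0-based indices, fixes all offsets to zero, and re-checks the full conjunctive predicate on each prefix instead of only newly applicable conditions:

```python
def multiway_join(tables, order, predicate):
    """Depth-first multi-way join over `tables` (lists of rows) in join
    `order`, keeping at most one partial tuple at any point in time.

    `predicate(prefix)` must accept the list of rows fixed so far and
    return True if all applicable conditions hold. Results are returned
    as a set of row index vectors (one index per base table)."""
    m = len(order)
    s = [0] * len(tables)   # tuple index per table
    result = set()
    i = 0                   # join order position (0-based)

    def next_tuple(i):
        # advance the index at position i, backtracking on overflow
        s[order[i]] += 1
        while i >= 0 and s[order[i]] >= len(tables[order[i]]):
            s[order[i]] = 0
            i -= 1
            if i >= 0:
                s[order[i]] += 1
        return i

    while i >= 0:
        prefix = [tables[order[k]][s[order[k]]] for k in range(i + 1)]
        if predicate(prefix):
            if i == m - 1:            # complete result tuple
                result.add(tuple(s))
                i = next_tuple(i)
            else:
                i += 1                # fix current row, descend
        else:
            i = next_tuple(i)
    return result
```

Because results are index vectors keyed by table, any two join orders produce the same result set, matching the duplicate-avoidance argument above.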
We discuss the main function (\Call{SkinnerC}{}) learning optimal join orders using a customized execution engine (see Algorithm~\ref{customizedAlg}). The most apparent difference to the version from Section~\ref{genericSub} is the lack of a dynamic timeout scheme. Instead, we use the same timeout for each invocation of the execution engine. This becomes possible since progress made when executing a specific join order is never lost. By minimizing the size of the execution state, we have enabled an efficient backup and restore mechanism (encapsulated by functions \Call{BackupState}{} and \Call{RestoreState}{} whose pseudo-code we omit) that operates only on a small vector of indices. The number of stored vectors is furthermore proportional to the size of the UCT tree. The fact that we do not lose partial results due to inappropriate timeouts anymore has huge impact from the theoretical perspective (see Section~\ref{analysisSec}) as well as for performance in practice (see Section~\ref{experimentsSec}). Learning overheads are lower than before since we only maintain a single UCT search tree accumulating knowledge from all executions.
In Section~\ref{genericSub}, we used a binary reward function based on whether the current batch was processed. We do not process data batch-wise anymore and must therefore change the reward function (represented as function~\textproc{Reward} in the pseudo-code, which depends on the execution state delta and the join order). For instance, we can use as reward the percentage of tuples processed in the left-most table during the last invocation. This function correlates with execution speed and returns values in the range between 0 and 1 (the standard formulas used for selecting actions by the UCT algorithm are optimized for that case~\cite{Kocsis2006}). SkinnerDB uses a slight refinement: we sum over all tuple index deltas, scaling each one down by the product of the cardinality values of its associated table and the preceding tables in the current join order. Note that the UCT algorithm averages rewards over multiple invocations of the same join order and keeps exploring (i.e., obtaining a reward of zero for one good join order during a single invocation of the execution engine will not exclude that order from further consideration).
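The refined scaling rule is compact enough to state directly. The following Python sketch implements it (the function name and argument layout are our own; `delta` and `cardinalities` are indexed by table):

```python
def reward(delta, order, cardinalities):
    """Progress-based reward from tuple index deltas.

    Each position's delta is scaled down by the product of the
    cardinalities of its table and all preceding tables in the join
    order, so the first term is the fraction of the left-most table
    processed and deeper terms contribute progressively less."""
    r = 0.0
    scale = 1.0
    for table in order:
        scale *= cardinalities[table]
        r += delta[table] / scale
    return r
```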
We have not yet discussed how our approach satisfies the third desideratum (sharing as much progress as possible among different join orders) mentioned at the beginning. In fact, we use several techniques to share progress between different join orders (those techniques are encapsulated in Function~\Call{RestoreState}{}). First, we again use offset counters to exclude, for each table, tuples that have been joined with all other tuples already (vector $o$ in the pseudo-code, which is implicitly initialized to one). In contrast to the version from Section~\ref{genericSub}, offsets are not defined at the granularity of data batches but at the granularity of single tuples. This allows for a more fine-grained sharing of progress between different join orders than before.
Second, we share progress between all join orders with the same prefix. Whenever we restore state for a given join order, we compare execution progress between the current join order and all other orders with the same prefix (iterating over all possible prefix lengths). Comparing execution states $s$ and $s'$ for two join orders $j$ and $j'$ with the same prefix of length $k$ (i.e., the first $k$ tables are identical), the first order is ``ahead'' of the second if there is a join order position $p\leq k$ such that $s_{j_i}\geq s'_{j_i}$ for $i<p$ and $s_{j_p}>s'_{j_p}+1$. In that case, we can ``fast-forward'' execution of the second join order, skipping result tuples that were already generated via the first join order. We do so by executing $j'$ from a merged state $s''$ where $s''_{j'_i}=s_{j'_i}$ for $i<p$, $s''_{j'_p}=s_{j'_p}-1$, and $s''_{j'_i}=o_{j'_i}$ for $i>p$ (since we can only share progress for the common prefix). Progress for different join orders is stored in the data structure represented as $S$ in Algorithm~\ref{customizedAlg}; Function~\textproc{RestoreState} takes care of fast-forwarding (selecting the most advanced execution state among all alternatives).
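The fast-forward rule can be made concrete. The following Python sketch computes the merged state $s''$ using 0-based indices; the actual interface of \textproc{RestoreState} differs in SkinnerDB, so names and conventions here are illustrative:

```python
def fast_forward(order, s_ahead, s_cur, offsets, k):
    """Merge execution states for two join orders sharing the first k
    tables of `order` (states and offsets are indexed by table).

    Returns the merged state from which the second order may resume, or
    None if the first order is not ahead: we scan prefix positions and
    fast-forward at the first position p where s_ahead exceeds s_cur by
    more than one, provided s_ahead dominated s_cur before p."""
    merged = list(s_cur)
    for p in range(k):
        t = order[p]  # same table in both orders within the prefix
        if s_ahead[t] > s_cur[t] + 1:
            for i in range(p):                 # copy indices before p
                merged[order[i]] = s_ahead[order[i]]
            merged[t] = s_ahead[t] - 1         # back up one step at p
            for i in range(p + 1, len(order)): # reset the remainder
                merged[order[i]] = offsets[order[i]]
            return merged
        if s_ahead[t] < s_cur[t]:
            return None  # dominance broken: not ahead
    return None
```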
So far, we described the algorithm for queries with generic predicates. Our actual implementation uses an extended version supporting equality join predicates via hashing. If equality join predicates are present, we create hash tables on all columns subject to equality predicates during pre-processing. Of course, creating hash tables to support all possible join orders creates overheads. However, those overheads are typically small as only tuples satisfying all unary predicates are hashed. We extend Algorithm~\ref{customizedAuxAlg} to benefit from hash tables: instead of incrementing tuple indices always by one (line~5), we ``jump'' directly to the next highest tuple index that satisfies at least all applicable equality predicates with preceding tables in the current join order (this index can be determined efficiently via probing).
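The hash-based jump can be sketched as follows. We assume, purely as an illustration of the idea and not as SkinnerDB's actual data structures, a per-column index mapping each join key to the sorted list of row indices holding it:

```python
from bisect import bisect_left
from collections import defaultdict

def build_hash_index(column):
    """Map each value to the sorted list of row indices holding it,
    built once during pre-processing (rows are appended in increasing
    order, so each list is already sorted)."""
    index = defaultdict(list)
    for row, value in enumerate(column):
        index[value].append(row)
    return index

def jump(index, value, current):
    """Next row index >= current whose join column equals `value`, or
    None if no such row exists; this replaces the +1 increment on the
    tuple index for columns with equality join predicates."""
    rows = index.get(value, [])
    pos = bisect_left(rows, current)
    return rows[pos] if pos < len(rows) else None
```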
\section{Overview}
\label{overviewSec}
\tikzstyle{SkinnerComponent}=[anchor=center, draw=black, fill=blue!15, minimum width=1.6cm, align=center, font=\small]
\tikzstyle{SkinnerComponent2}=[anchor=center, draw=black, rounded corners=0.15cm, fill=red!15, minimum width=1.75cm, align=center, font=\small]
\tikzstyle{SkinnerIO}=[anchor=center, font=\small]
\tikzstyle{SkinnerFlow}=[ultra thick, draw]
\begin{figure}
\includegraphics{arxivMain-figure1.pdf}
\caption{Primary components of SkinnerDB.\label{architectureFig}}
\end{figure}
Figure~\ref{architectureFig} shows the primary components of SkinnerDB. This high-level outline applies to all of the SkinnerDB variants.
The pre-processor is invoked first for each query. Here, we filter base tables via unary predicates. Also, depending on the SkinnerDB variant, we partition the remaining tuples into batches or hash them (to support joins with equality predicates).
Join execution proceeds in small time slices. The join processor consists of several sub-components. The learning optimizer selects a join order to try next at the beginning of each time slice. It uses statistics on the quality of join orders that were collected during the current query execution. Selected join orders are forwarded to the join executor. This component executes the join order until a small timeout is reached. We add result tuples into a result set, checking for duplicate results generated by different join orders. The join executor can be either a generic SQL processor or, for maximal performance, a specialized execution engine. The same join order may get selected repeatedly. The progress tracker keeps track of which input data has been processed already. For Skinner-C, it even tracks execution state for each join order tried so far, and merges progress across join orders. At the start of each time slice, we consult the progress tracker to restore the latest state stored for the current join order. At the end of each time slice, we back up the progress achieved during it. The reward calculator computes a reward value, based on progress achieved during the current time slice. This reward is a measure of how quickly execution proceeds using the chosen join order. It is used as input by the learning optimizer to determine the most interesting join order to try in the next time slice.
Finally, we invoke the post-processor, using the join result tuples as input. Post-processing involves grouping, aggregation, and sorting. In the next section, we describe the algorithms executed within SkinnerDB.
\section{Formal Analysis}
\label{analysisSec}
We prove correctness (see Section~\ref{correctnessSub}) and regret bounds (see Section~\ref{regretSub}) for all Skinner variants.
\begin{comment}
We refer to an extended technical report~\cite{Skinners} for termination proofs.
Next, we introduce terminology and assumptions used throughout this section. \textit{Execution Time}, without further qualification, is total time required by SkinnerDB to execute a query. \textit{Regret} refers to the difference between actual and optimal execution time (i.e., using an optimal join order) for a given query. We express regret as a function of $n$, designating execution time of SkinnerDB. Intuitively, we calculate upper bounds on the expected ratio of execution time that was ``wasted''. Note that $n$ is proportional to the number of iterations of the UCT algorithm for a fixed timeout. We assume a query whose properties, such as the number of tables $m$ and the best average reward $r^*$ achievable by an optimal join order (for Skinner-C), are fixed.
Skinner-C and Skinner-H use a variable \textit{Timeout} that restricts execution time spent on a specific data batch using a specific join order. This timeout is determined by the \textit{Timeout Level} via the pyramid timeout scheme described in Section~\ref{genericSub}. Intuitively, the \textit{Number of Timeout Levels} refers to the ``height'' of the colored area in Figure~\ref{budgetFigure}. The execution time \textit{Allocated} to a specific timeout level is the accumulated time spent executing batches with the associated timeout. Intuitively, it corresponds to the ``width'' of a row (associated with a specific timeout level) in Figure~\ref{budgetFigure}. For Skinner-C, the timeout is static and restricts time spent on achieving progress for a fixed join order during a time slice.
We make several simplifying assumptions to facilitate the regret analysis. First, we assume that data batches are homogeneous. This means that, for a fixed join order, execution time per data batch (for Skinner-G and Skinner-H) and evaluation progress (i.e., delta of left-most tuple index per time unit for Skinner-C), is fixed. Second, we study slightly simplified versions of the algorithms from Section~\ref{algorithmSec}. In particular, we assume that offsets are only applied to exclude tuples for the left-most table in the current join order. This means that no progress is shared between join orders that do not share the left-most table. Third, we pessimistically assume that timeouts are completely exploited by SkinnerDB when using sub-optimal join orders (i.e., execution does not finish before the timeout).
\end{comment}
\begin{comment}
We analyze the three algorithms presented in Section~\ref{algorithmSec}. Skinner-G designates the algorithm that works with any execution engine (see Section~\ref{genericSub}), Skinner-H the hybrid version that mixes learning with traditional query optimization (see Section~\ref{hybridSub}), and Skinner-C the algorithm that is tailored to exploit a customized execution engine (see Section~\ref{customizedSub}). Each of the three algorithms is iterative. We prove termination first.
\begin{theorem}
Skinner-G terminates for each input.
\end{theorem}
\begin{proof}
The algorithm terminates once one of the offsets exceeds the number of batches per table (i.e., it terminates once all batches of one table have been processed). An offset is advanced once a batch is processed successfully within the timeout. The processing time per batch is finite and the algorithm tries higher and higher timeouts. Hence, each batch must eventually be processed successfully once the timeout is high enough. The algorithm terminates since the total number of batches is fixed for a given input.
\end{proof}
\begin{theorem}
Skinner-H terminates for each input.
\end{theorem}
\begin{proof}
This follows directly from the previous theorem as the hybrid algorithm uses Skinner-G as a sub-function. Skinner-H terminates once Skinner-G terminates (or before if the traditional optimizer finishes execution first).
\end{proof}
\begin{theorem}
Skinner-C terminates for each input.
\end{theorem}
\begin{proof}
Assume no termination. The number of join orders is finite. Hence, there must be at least one join order for which the execution engine is invoked infinitely often. We consider one such join order in the following. Before each invocation for that join order, we restore tuple pointers to their state after the last invocation for the same join order. Hence, progress is never lost while a join order is suspended. In each iteration, we either increase $i$ or advance the tuple pointers. As $i$ is upper-bounded by the number of tables $m$, tuples pointers must be advanced at least every $m$ iterations. We make the (reasonable) assumption that the execution time budget is sufficient to allow advancing tuple pointers at least once per executor invocation. When advancing tuple pointers, the pointer of a given table is either increased or the pointer of one of its predecessor tables in the join order. As each pointer is upper-bounded by the corresponding table cardinality, the pointer of the left-most table must eventually reach the cardinality bound after which $i$ reaches value zero. But then, the algorithm terminates.
\end{proof}
\end{comment}
\subsection{Correctness}
\label{correctnessSub}
Next, we prove correctness (i.e., that each algorithm produces a correct query result). We distinguish result tuples (tuples from the result relation joining all query tables) from component tuples (tuples taken from a single table).
\begin{theorem}
Skinner-G produces the correct query result.
\end{theorem}
\begin{proof}
Offsets exclude component tuples from consideration in subsequent joins. We show the following invariant: all result tuples containing excluded component tuples have been generated. This is certainly true at the start, where offsets do not exclude any tuples. Offsets are only advanced if batches have been successfully processed. In that case, all newly excluded component tuples have been joined with tuples from all other tables that are not excluded. But excluded tuples can be neglected according to our invariant. The algorithm terminates only after all tuples from one table have been excluded. In that case, all result tuples have been generated. Still, we need to show that no result tuple is generated more often than in a traditional execution. This is the case since we exclude all component tuples in one table after each successfully processed batch.
\end{proof}
\begin{theorem}
Skinner-H produces the correct query result.
\end{theorem}
\begin{proof}
We assume that executing a query plan produced by the traditional optimizer generates a correct result. The result produced by Skinner-G is correct according to the preceding theorem. This implies that Skinner-H produces a correct result as it returns the result generated by one of the latter two algorithms.
\end{proof}
\begin{theorem}
Skinner-C produces the correct query result.
\end{theorem}
\begin{proof}
Skinner-C does not produce any duplicate result tuples as justified next. Result tuples are materialized only at the very end of the main function. The result set contains tuple index vectors until then. Vectors are unique over all result tuples (as they indicate the component tuples from which they have been formed) and, due to set semantics, no vector will be contained twice in the result. Also, Skinner-C produces each result tuple at least once. This is due to the fact that \textit{i)}~complete tuples are always inserted into the result set, \textit{ii)}~partial tuples (i.e., $i<m$) are completed unless they violate predicates (then they cannot be completed into result tuples), and \textit{iii)}~tuple indices are advanced in a way that covers all combinations of component tuples.
\end{proof}
\begin{comment}
Regret refers to the difference between actual and optimal execution time for a given query. Before we analyze SkinnerDB, we analyze regret bounds of traditional approaches in a worst case.
Assume a query joining $m$ tables ($m$ is a constant) of cardinality $s$ ($s$ is variable). Tables are connected in a chain via join predicates and one unary predicate is placed on each table. All predicates evaluate to true on each tuple with the exception of the unary predicate on table $t^*$, located on one end of the chain. The latter predicate evaluates to false on each tuple. We consider left-deep join orders that avoid Cartesaian products and measure execution cost by the number of tuples read. Clearly, an optimal plan reads tuples from $t^*$ first and terminates in $O(s)$. If we choose to start from the table at the opposite end of the chain, $t^{-}$, compared to $t^*$, execution cost are in $O(s^{m-1})$.
If a traditional optimizer~\cite{Selinger1979} cannot reliably estimate the selectivity of predicates (e.g., because they are user-defined functions or because of data skew), we can consider the choice of the first join order table as random. Assuming for instance a uniform distribution, the expected regret is in $O(s^{m-2})$ (based on a $1/m$ chance of selecting $t^-$). Eddies~\cite{Tzoumas2008} fare slightly better in this extreme case. They operate on small data batches (e.g., tuples) and adapt their join order based on feedback. However, Eddies never discard partial results and join them with all remaining tables. If at least one tuple is selected from table $t^-$, there is no other way of completing it into result tuples than consecutive joins with all remaining tables. However, as the associated join predicates are non-selective, execution time will be in $O(s^{m-2})$. Assuming optimistically that Eddies will learn an optimal order after the very first tuple is selected, and assuming a uniform distribution over tables for the initial tuple choice, we have expected regret in $O(s^{m-3})$ for Eddies.
In the following, we denote by $m$ the number of tables joined by a query ($m$ is fixed) and by $s$ the size of the largest table. Imagine a query connecting all joined tables in a chain via equality join predicates. Further, assume that a unary user-defined function predicate is placed on each table. Assume that all those predicates evaluate to true on each tuple, except for the predicate on table $t^*$. The latter table is located at one end of the chain and its associated unary predicate evaluates always to false. Clearly, an optimal query plan starts joins with table $t^*$, discovering that the query result is empty and terminating in $O(s)$. A left-deep query plan that starts joins at the other end of the chain incurs cost in $O(s^{m-1})$ according to the $C_{out}$ metric. If we start at the other end of the chain, evaluation cost according to the
\end{comment}
\subsection{Regret Bounds}
\label{regretSub}
Regret is the difference between actual and optimal execution time. We denote execution time by $n$ and optimal time by $n^*$. Skinner-G and Skinner-H choose timeout levels (represented by the $y$ axis in Figure~\ref{budgetFigure}) that we denote by $l$. We use the subscript notation (e.g., $n_l$) to denote accumulated execution time spent with a specific timeout level. We study regret for fixed query properties (e.g., the number of joined tables, $m$, or the optimal reward per time slice, $r^*$) for growing amounts of input data (i.e., table size) and execution time. In particular, we assume that execution time, in relation to query size, is large enough to make the impact of transitory regret negligible~\cite{Coquelin2007b}. We focus on regret of the join phase as pre-processing overheads are linear in data and query size (while post-processing overheads are polynomial in query and join result size). We assume that time slices are chosen large enough to make overheads related to learning and join order switching negligible. Specifically for Skinner-G and Skinner-H, we assume that the optimal timeout per time slice applies to all batches. To simplify the analysis, we study slightly simplified versions of the algorithms from Section~\ref{algorithmSec}. In particular, we assume that offsets are only applied to exclude tuples for the left-most table in the current join order. This means that no progress is shared between join orders that do not share the left-most table. For Skinner-C, we assume that the simpler reward function (progress in left-most table only) is used. We base our analysis on the properties of the UCT variant proposed by Kocsis and Szepesvari~\cite{Kocsis2006}.
For a given join order, processing time in SkinnerDB is equivalent to processing time in traditional engines if scaling down the size of the left-most table scales down execution time proportionally (i.e., execution time behaves similarly to the $C_{out}$ cost metric~\cite{Krishnamurthy1986}). If so, our regret bounds hold relative to the execution of an optimal traditional query plan.
Before analyzing Skinner-G, we first prove several properties of the pyramid timeout scheme introduced in Section~\ref{genericSub}.
\begin{lemma}
The number of timeout levels used by Skinner-G is upper-bounded by $\log(n)$.\label{nrLevelsLemma}
\end{lemma}
\begin{proof}
We add a new timeout level $L$ whenever the condition $n_l\geq n_L+2^L$ is satisfied for all $0\leq l<L$ for the first time. As each $n_l$ is a sum of powers of two ($2^l$), and as $n_L=0$ before $L$ is used for the first time, the condition can be tightened to $n_l=2^L$ for all $0\leq l<L$. Hence, we add a new timeout level whenever the total execution time so far can be represented as $L\cdot 2^L$ for some $L\in\mathbb{N}$. Assuming that $n$ is large, specifically $n>1$, the number of levels grows faster if we add a level whenever execution time can be represented as $2^L$ for some $L\in\mathbb{N}$. In that case, the number of levels is bounded by $\log(n)$ (using the binary logarithm).
\end{proof}
\begin{lemma}\label{balancedLevelsLemma}
The total amount of execution time allocated to different (already used) timeout levels cannot differ by more than factor two.
\end{lemma}
\begin{proof}
Assume the allocated time differs by more than factor two between two timeout levels, i.e.\ $\exists l_1,l_2:n_{l_1}>2\cdot n_{l_2}$ (and $n_{l_1},n_{l_2}\neq 0$). Consider the first point in time at which this happens. Since $\forall i:n_i\geq n_{i+1}$, we must have $n_0>2\cdot n_L$ where $L$ is the largest timeout level used so far. As this was not the case previously, we selected either timeout level 0 or a new timeout level $L$ in the last step. If we selected a new timeout level $L$ then $n_l\geq n_L+2^L$ held for all $0\leq l<L$, which can be tightened to $\forall 0\leq l<L:n_l=2^L$ (exploiting that $n_L=0$ previously and that timeouts are powers of two). Hence, selecting a new timeout level cannot increase the maximal ratio of time per level. Assume now that timeout level 0 was selected. Denote by $\delta_{i}=n_i-n_{i+1}$ for $i<L$ the difference in allocated execution time between consecutive levels before the last selection. It is $\delta_i<2^{i+1}$ (otherwise, level $i+1$ or a higher one would have been selected); since $n_{i}$ is increased in steps of size $2^{i}$ and $n_{i+1}$ in steps of size $2^{i+1}$, $\delta_i$ is a multiple of $2^i$ and therefore $\delta_{i}\leq 2^{i}$. Before the last selection, we had $n_0-n_L=\sum_{0\leq i<L}\delta_i\leq \sum_{0\leq i<L}2^i< 2^L$. On the other hand, we had $n_L\geq 2^L$ (as $n_L\neq0$ and since $n_L$ is increased in steps of $2^L$). After $n_0$ is increased by one, it is still $n_0\leq 2\cdot n_L$. The initial assumption therefore always leads to a contradiction.
\end{proof}
We are now ready to provide worst-case bounds on the expected regret when evaluating queries via Skinner-G.
\begin{theorem}\label{skinnerGtheorem}
Expected execution time regret of Skinner-G is upper-bounded by $(1-1/(\log(n)\cdot m\cdot 4))\cdot n+O(\log(n))$.
\end{theorem}
\begin{proof}
Total execution time $n$ is the sum over execution time components $n_l$ that we spent using timeout level $l$, i.e.\ we have $n=\sum_{0\leq l\leq L}n_l$ where $L+1$ is the number of timeout levels used. It is $L+1\leq \log(n)$ due to Lemma~\ref{nrLevelsLemma} and $\forall l_1,l_2\leq L:n_{l_1}\geq n_{l_2}/2$ due to Lemma~\ref{balancedLevelsLemma}. Hence, for any specific timeout level $l$, we have $n_l\geq n/(2\cdot\log(n))$. Denote by $l^*$ the smallest timeout level, tried by the pyramid timeout scheme, that allows processing an entire batch using the optimal join order. It is $n_{l^*}\geq n/(2\cdot\log(n))$. We also have $n_{l^*}=n_{l^*,1}+n_{l^*,0}$ where $n_{l^*,1}$ designates time spent executing join orders with timeout level $l^*$ that resulted in reward $1$, and $n_{l^*,0}$ designates time for executions with reward $0$. UCT guarantees that expected regret grows logarithmically in the number of rounds (which, for a fixed timeout level, is proportional to execution time). Hence, $n_{l^*,0}\in O(\log(n_{l^*}))$ and $n_{l^*,1}\geq n_{l^*}-O(\log(n_{l^*}))$. Denote by $b$ the number of batches per table. The optimal algorithm executes $b$ batches with timeout $l^*$ and the optimal join order. Skinner can execute at most $m\cdot b-m+1\in O(m\cdot b)$ batches with timeout $l^*$ before no batches are left for at least one table, terminating execution. Since timeouts grow geometrically and $l^*$ is the smallest timeout allowing an entire batch to be processed, the time per batch consumed by Skinner-G exceeds the optimal time per batch at most by factor two. Hence, denoting by $n^*$ the time for an optimal execution, it is $n^*\geq n_{l^*,1}/(2\cdot m)$, therefore $n^*\geq (n_{l^*}-O(\log(n)))/(2\cdot m)\geq n_{l^*}/(2\cdot m)-O(\log(n))$ (since $m$ is fixed), which implies $n^*\geq n/(4\cdot m\cdot\log(n))-O(\log(n))$. Hence, the regret $n-n^*$ is upper-bounded by $(1-1/(4\cdot m\cdot\log(n)))\cdot n+O(\log(n))$.
\end{proof}
Next, we analyze regret of Skinner-H.
\begin{theorem}
Expected execution time regret of Skinner-H is upper-bounded by $(1-1/(\log(n)\cdot m\cdot 12))\cdot n+O(\log(n))$.
\end{theorem}
\begin{proof}
Denote by $n_O$ and $n_L$ the time dedicated to executing the traditional optimizer plan and to learned plans, respectively. Assuming pessimistically that optimizer plan executions consume all dedicated time without terminating, it is $n_O=\sum_{0\leq l\leq L}2^l$ for a suitable $L\in\mathbb{N}$ at any point. Also, we have $n_L\geq\sum_{0\leq l<L}2^l$ as time is divided between the two approaches. It is $n_L/n\geq (2^L-1)/(2^{L+1}+2^L-2)$, which converges to $1/3$ as $n$ grows. We obtain the postulated bound from Theorem~\ref{skinnerGtheorem} by dividing the ``useful'' (non-regret) part of execution time by factor three.
\end{proof}
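The convergence claim can be checked numerically; the sketch below simply evaluates the ratio from the proof for growing $L$.

```python
def learning_share(L):
    """Lower bound on the fraction of time given to learned plans once the
    traditional plan's timeout has been doubled up to 2**L, taken from the
    proof: n_L / n >= (2**L - 1) / (2**(L+1) + 2**L - 2)."""
    return (2 ** L - 1) / (2 ** (L + 1) + 2 ** L - 2)
```

The ratio increases monotonically toward $1/3$, i.e., at least a third of execution time eventually benefits from learning.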
The following theorem is relevant if traditional query optimization works well (and learning creates overheads).
\begin{theorem}\label{skinnerHtraditionalTheorem}
The maximal execution time regret of Skinner-H compared to traditional query execution is $n\cdot 4/5$.
\end{theorem}
\begin{proof}
Denote by $n^*$ the execution time of the plan produced by the traditional optimizer. Skinner-H terminates at the latest during the first invocation of the traditional approach whose timeout reaches $n^*$; that invocation takes time $n^*$ and, since timeouts double after each iteration, its timeout is at most $2\cdot n^*$. The accumulated execution time of all prior invocations of the traditional optimizer is therefore upper-bounded by $2\cdot n^*$. At the same time, the time dedicated to learning is upper-bounded by $2\cdot n^*$. Hence, total execution time is bounded by $n\leq 5\cdot n^*$, and the total regret (i.e., added time compared to $n^*$) is upper-bounded by $n\cdot 4/5$.
\end{proof}
Finally, we analyze expected regret of Skinner-C.
\begin{theorem}
Expected execution time regret of Skinner-C is upper-bounded by $(1-1/m)\cdot n+O(\log(n))$.\label{skinnerCadditiveTheorem}
\end{theorem}
\begin{proof}
Regret is the difference between actual execution time, $n$, and optimal execution time, $n^*$. It is $n-n^*=n\cdot(1-n^*/n)$. Denote by $R$ the total reward achieved by Skinner-C during query execution and by $r$ the average reward per time slice. It is $n=R/r$. Denote by $r^*$ the optimal reward per time slice. Reward is calculated as the relative tuple index delta in the left-most table (i.e., the tuple index delta in the left-most table divided by table cardinality). An optimal execution always uses the same join order and therefore terminates once the accumulated reward reaches one. Hence, we obtain $n^*=1/r^*$. We can rewrite regret as $n-n^*=n\cdot(1-(1/r^*)/(R/r))=n\cdot (1-r/(R\cdot r^*))$. The difference between expected and optimal reward is bounded as $r^*-r\in O(\log(n)/n)$~\cite{Kocsis2006}. Substituting $r$ by $r^*-(r^*-r)$, we can upper-bound regret by $n\cdot(1-1/R)+O(\log(n))$. Denote by $R_t\leq R$ the reward accumulated over time slices in which join orders starting with table $t\in T$ were selected. Skinner-C terminates whenever $R_t=1$ for any $t\in T$. Hence, we obtain $R\leq m$ and $n\cdot(1-1/m)+O(\log(n))$ as an upper bound on expected regret.
\end{proof}
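The algebra of the proof can be sanity-checked numerically: substituting the UCT gap $r=r^*-c\cdot\log(n)/n$ and the worst case $R=m$ into $n\cdot(1-r/(R\cdot r^*))$ reproduces the additive bound exactly. The constant $c$ and the sample values in the sketch below are illustrative, not taken from the paper.

```python
import math

def skinner_c_regret(n, m, r_star, c):
    """Exact regret n*(1 - r/(R*r_star)) for r = r_star - c*log(n)/n and
    R = m, together with the additive bound n*(1-1/m) + (c/(m*r_star))*log(n)."""
    r = r_star - c * math.log(n) / n
    regret = n * (1 - r / (m * r_star))
    bound = n * (1 - 1 / m) + (c / (m * r_star)) * math.log(n)
    return regret, bound
```

Expanding the product shows the two expressions agree term by term, which is the step hidden in ``substituting $r$ by $r^*-(r^*-r)$'' above.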
Instead of the (additive) difference between expected and optimal execution time, we can also consider the ratio.
\begin{theorem}
The ratio of expected to optimal execution time for Skinner-C is upper-bounded and that bound converges to $m$ as $n$ grows.
\end{theorem}
\begin{proof}
Let $a=n-n^*$ be the additive regret, i.e.\ the difference between actual and optimal execution time. It is $n^*=n-a$ and, as $a\leq (1-1/m)\cdot n+O(\log(n))$ due to Theorem~\ref{skinnerCadditiveTheorem}, it is $n^*\geq n-(1-1/m)\cdot n-O(\log(n))=n/m-O(\log n)=n\cdot(1/m-O(\log(n))/n)$. Optimal execution time is therefore lower-bounded by a term whose ratio to $n/m$ converges to one as $n$ grows. Hence, the ratio $n/n^*$ is upper-bounded by a term that converges to $m$.
\end{proof}
\section{Related Work}
\label{relatedSec}
Our approach connects to prior work collecting information on predicate selectivity by evaluating them on data samples~\cite{Bruno2002, Chaudhuri2001, Haas1992, Haas2011, Karanasos2014a, Lipton1990a, Markl2013, Wu2016}. We compare in our experiments against a recently proposed representative~\cite{Wu2016}. Most prior approaches rely on a traditional optimizer to select interesting intermediate results to sample. They suffer if the original optimizer generates bad plans. The same applies to approaches for interleaved query execution and optimization~\cite{Aboulnaga2004a, Avnur2000, Babu2005} that repair initial plans at run time if cardinality estimates turn out to be wrong. Robust query optimization~\cite{Alyoubi2015, Alyoubi2016, Babcock2005, D.2008} assumes that predicate selectivity is known within narrow intervals which is often not the case~\cite{El-Helw2009}. Prior work~\cite{Dutt2014a, Dutt2014} on query optimization without selectivity estimation is based on simplifying assumptions (e.g., independent predicates) that are often violated.
Machine learning has been used to estimate cost for query plans whose cardinality values are known~\cite{Akdere2011, Li2012}, to predict query~\cite{Ganapathi} or workflow~\cite{Popescu2013} execution times, result cardinality~\cite{Malik2006, Malik2007}, or interference between query executions~\cite{Duggan2011}. LEO~\cite{Aboulnaga2004a, Stillger2001}, IBM's learning optimizer, leverages past query executions to improve cardinality estimates for similar queries. Ewen et al.~\cite{Ewen2005} use a similar approach for federated database systems. Several recent approaches~\cite{Krishnan2018, Marcus2018} use learning for join ordering. All of the aforementioned approaches learn from past queries for the optimization of future queries. To be effective, new queries must be similar to prior queries and this similarity must be recognizable. Instead, we learn \textit{during} the execution of a query.
Adaptive processing strategies have been explored in prior work~\cite{Avnur2000, Deshpande2004, Deshpande2006a, Quanzhong2007a, Raman2003, Tzoumas2008, Viglas2003}. Our work uses reinforcement learning and is therefore most related to prior work using reinforcement learning in the context of Eddies~\cite{Tzoumas2008}. We compare against this approach in our experiments. Eddies do not provide formal guarantees on the relationship between expected execution time and the optimum. They never discard intermediate results, even if joining them with the remaining tables creates disproportional overheads. Eddies support bushy query plans in contrast to our approach. Bushy plans can in principle decrease execution cost compared to the best left-deep plan. However, optimal left-deep plans typically achieve reasonable performance~\cite{Gubichev2015}. Also, as we show in our experiments, reliably identifying near-optimal left-deep plans can be better than selecting bushy query plans via non-robust optimization.
Our work relates to prior work on filter ordering with regret bounds~\cite{Condon2009a}. Join ordering, however, introduces new challenges compared to filter ordering. In particular, applying more filters can only decrease the size of intermediate results. The relative overhead of a bad filter order, compared to the optimum, therefore grows linearly in the number of filters. The overhead of bad join orders, compared to the optimum, can grow exponentially in the query size. This motivates mechanisms that bound join overheads for single data batches, as well as mechanisms to save progress for partially processed data batches.
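The contrast between linear filter-ordering overhead and exponential join-ordering overhead can be illustrated with a toy cost model: an $m$-table chain join with uniform join fanout $f$, where the table at one end carries a predicate filtering out every tuple. The uniform fanout, the specific numbers, and the cost accounting (scanned input plus intermediate result sizes, in the spirit of $C_{out}$) are simplifications for illustration, not the paper's model.

```python
def c_out_chain(s, f, m, start_at_selective_end):
    """Toy cost (scanned input plus intermediate result sizes) of a
    left-deep plan over an m-table chain join with per-join fanout f,
    where the table at one end filters out every tuple."""
    if start_at_selective_end:
        return s  # scan the filtered table, result is empty, done
    cost, size = s, s
    for _ in range(m - 2):  # join the remaining middle tables first
        size *= f
        cost += size
    return cost  # the final join with the filtered table is then empty
```

Starting at the selective end costs $O(s)$, while starting at the other end costs $O(s\cdot f^{m-2})$, i.e., the gap grows exponentially with the number of joined tables.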
Worst-case optimal join algorithms~\cite{Ngo2012, Veldhuizen2012} bound cost as a function of worst-case query result size. We bound expected execution cost as a function of cost for processing an optimal join order. Further, prior work on worst-case optimal joins focuses on conjunctive queries while we support a broader class of queries, including queries with user-defined function predicates. Our approach applies to SQL with standard semantics while systems for worst-case optimal evaluation typically assume set semantics~\cite{Veldhuizen2012}.
\subsection{Experimental Setup}
Skinner-G(X) is the generic Skinner version (see Section~\ref{genericSub}) on top of database system X in the following. Skinner-H(X) is the hybrid version on system X. We execute Skinner on top of MonetDB (Database Server Toolkit v1.1 (Mar2018-SP1))~\cite{Boncz2008} and Postgres (version 9.5.14)~\cite{Postgres}. We use different mechanisms to force join orders for those systems. Postgres has dedicated knobs to force join orders. For MonetDB, we ``brute-force'' join orders by executing each join as a separate query, generating multiple intermediate result tables. Skinner-C, described in Section~\ref{customizedSub}, uses a specialized execution engine. We set $w=\sqrt{2}$ in the UCT formula for Skinner-G and Skinner-H and $w=10^{-6}$ for Skinner-C. Unless noted otherwise, we use a timeout of $b=500$ loop iterations for Skinner-C (i.e., thousands or even tens of thousands of join order switches per second). For Skinner-G and -H, we must use much higher timeouts, starting from one second. All SkinnerDB-specific components are implemented in Java. Our current Skinner-C version only allows parallelizing the pre-processing step. Extending our approach to parallel join processing is part of our future work. To separate speedups due to join ordering from speedups due to parallelization, we compare a subset of baselines in single- as well as in multi-threaded mode. The following experiments are executed on a Dell PowerEdge R640 server with two Intel Xeon 2.3~GHz CPUs and 256~GB of RAM.
\subsection{Performance on Join Order Benchmark}
\begin{table}[t]
\caption{Performance of query evaluation methods on the join order benchmark - single-threaded.\label{jobTable}}
\begin{tabular}{p{1.75cm}p{1.25cm}p{1.25cm}p{1.25cm}p{1.25cm}}
\toprule[1pt]
\textbf{Approach} & \textbf{Total Time} & \textbf{Total Card.\ } & \textbf{Max.\ Time} & \textbf{Max.\ Card.\ }\\
\midrule[1pt]
Skinner-C & 183 & 112M & 9 & 18M \\
\midrule
Postgres & 726 & 681M & 59 & 177M \\
S-G(PG) & 13,348 & N/A & 840 & N/A \\
S-H(PG) & 2,658 & N/A & 234 & N/A \\
\midrule
MonetDB & 986 & 2,971M & 409 & 1,186M \\
S-G(MDB) & 1,852 & N/A & 308 & N/A\\
S-H(MDB) & 762 & N/A & 114 & N/A\\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Performance of query evaluation methods on the join order benchmark - multi-threaded.\label{jobTableMT}}
\begin{tabular}{p{1.75cm}p{1.25cm}p{1.25cm}p{1.25cm}p{1.25cm}}
\toprule[1pt]
\textbf{Approach} & \textbf{Total Time} & \textbf{Total Card.\ } & \textbf{Max.\ Time} & \textbf{Max.\ Card.\ }\\
\midrule[1pt]
Skinner-C & 135 & 112M & 7 & 18M \\
\midrule
MonetDB & 105 & 2,971M & 26 & 1,186M \\
S-G(MDB) & 1,450 & N/A & 68 & N/A \\
S-H(MDB) &345 & N/A & 86 & N/A \\
\bottomrule[1pt]
\end{tabular}
\end{table}
We evaluate approaches on the join order benchmark~\cite{Gubichev2015}, a benchmark on real, correlated data. We follow the advice of the paper authors and explicitly prevent Postgres from choosing bad plans involving nested loop joins. Tables~\ref{jobTable} and \ref{jobTableMT} compare different baselines in single-threaded mode and, for Skinner and MonetDB, in multi-threaded mode (our server runs Postgres~9.5, which is not multi-threaded). We compare approaches by total and maximal (per query) execution time (in seconds). Also, we calculate the accumulated intermediate result cardinality of executed query plans. This metric is a measure of optimizer quality that is independent of the execution engine. Note that we cannot reliably measure cardinality for Skinner-G and Skinner-H since we cannot know which results were generated by the underlying execution engine before the timeout.
Clearly, Skinner-C performs best for single-threaded performance. Also, its speedups are correlated with significant reductions in intermediate result cardinality values. As verified in more detail later, this suggests join order quality as the reason. For multi-threaded execution on a server with 24 cores, MonetDB slightly beats SkinnerDB. Note that our system is implemented in Java and does not currently parallelize the join execution phase.
When it comes to Skinner on top of existing databases, the results are mixed. For Postgres, we are unable to achieve speedups in this scenario (as shown in the appendix, there are, however, cases involving user-defined predicates where speedups are possible). Postgres exploits memory less aggressively than MonetDB, making it more likely to read data from disk (which makes join order switching expensive). For single-threaded MonetDB, however, the hybrid version reduces total execution time by nearly 25\% and maximal time per query by factor four, compared to the original system. This is due to just a few queries for which the original optimizer selects highly suboptimal plans.
\begin{table}[t]
\caption{Performance of join orders in different execution engines for join order benchmark - single threaded.\label{joinOrdersTable}}
\begin{tabular}{p{1.5cm}p{1.5cm}p{1.75cm}p{1.75cm}}
\toprule[1pt]
\textbf{Engine} & \textbf{Order} & \textbf{Total Time} & \textbf{Max.\ Time} \\
\midrule[1pt]
Skinner & Skinner & 183 & 9 \\
& Optimal & 180 & 7 \\
\midrule
Postgres & Original & 726 & 59 \\
& Skinner & 567 & 14 \\
& Optimal & 555 & 14 \\
\midrule
MonetDB & Original & 986 & 409 \\
& Skinner & 138 & 7 \\
& Optimal & 134 & 6 \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Performance of join orders in different execution engines for join order benchmark - multi-threaded.\label{joinOrdersTableMT}}
\begin{tabular}{p{1.5cm}p{1.5cm}p{1.75cm}p{1.75cm}}
\toprule[1pt]
\textbf{Engine} & \textbf{Order} & \textbf{Total Time} & \textbf{Max.\ Time} \\
\midrule[1pt]
Skinner & Skinner & 135 & 7 \\
& Optimal & 129 & 7 \\
\midrule
MonetDB & Original & 105 & 26 \\
& Skinner & 53 & 2.7 \\
& Optimal & 51 & 2.3 \\
\bottomrule[1pt]
\end{tabular}
\end{table}
To verify whether Skinner-C wins because of better join orders, we executed the final join orders selected by Skinner-C in the other systems. We also used optimal join orders, calculated according to the $C_{out}$ metric. Tables~\ref{joinOrdersTable} and \ref{joinOrdersTableMT} show that Skinner's join orders improve performance uniformly, compared to the original optimizer. Also, Skinner's execution time is very close to that of the optimal order, showing that the theoretical guarantees from the last section are pessimistic.
\subsection{Further Analysis}
\begin{table}
\caption{Impact of replacing reinforcement learning by randomization.\label{randomizationTable}}
\begin{tabular}{llll}
\toprule[1pt]
\textbf{Engine} & \textbf{Optimizer} & \textbf{Time} & \textbf{Max.\ Time}\\
\midrule[1pt]
Skinner-C & Original & 182 & 9 \\
& Random & 2,268 & 332 \\
\midrule
Skinner-H(PG) & Original & 2,658 & 234 \\
& Random & 3,615 & 250 \\
\midrule
Skinner-H(MDB) & Original & 761 & 114 \\
& Random & $\geq$ 5,743 & $\geq$ 3,600 \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Impact of SkinnerDB features.\label{featuresTable}}
\begin{tabular}{p{4.8cm}p{1.25cm}p{1.25cm}}
\toprule[1pt]
\textbf{Enabled Features} & \textbf{Total Time} & \textbf{Max.\ Time} \\
\midrule[1pt]
indexes, parallelization, learning & 135 & 7 \\
parallelization, learning & 162 & 9 \\
learning & 185 & 9 \\
none & 2,268 & 332 \\
\bottomrule[1pt]
\end{tabular}
\end{table}
We experiment with different variants of SkinnerDB. First, we compare learning-based join order selection against randomized selection. Table~\ref{randomizationTable} shows the performance penalty for randomized selection: clearly, join order learning is crucial for performance. In Table~\ref{featuresTable}, we compare the impact of randomization to the impact of parallelizing pre-processing and of adding hash indices on all join columns (which SkinnerDB exploits if the corresponding table is not used in pre-processing). Clearly, join order learning is by far the most performance-relevant feature of SkinnerDB.
\begin{figure}[t]
\subfigure[MonetDB spends most time executing a few expensive queries.]{
\includegraphics{arxivMain-figure5.pdf}
}
\subfigure[SkinnerDB realizes high speedup for two expensive queries.]{
\includegraphics{arxivMain-figure6.pdf}
}
\caption{Analyzing the source of SkinnerDB's speedups compared to MonetDB.\label{monetVsSkinnerFig}}
\end{figure}
We analyze in more detail where the speedups compared to MonetDB come from (all results refer to single-threaded mode). The left-hand side of Figure~\ref{monetVsSkinnerFig} shows the percentage of execution time spent on the top-$k$ most expensive queries ($k$ on the x-axis). MonetDB spends the majority of its execution time on two queries with highly sub-optimal join orders (we reached out to the MonetDB team to make sure that no straightforward optimizations remove the problem). On the right side, we plot the speedup realized by Skinner against MonetDB's query execution time. MonetDB is actually faster for most queries, while SkinnerDB achieves its highest speedups for the two most expensive queries. Since those queries account for a large percentage of total execution time, Skinner-C outperforms MonetDB in single-threaded mode.
\begin{figure}[t]
\subfigure[The growth of the search tree slows down over time.]{
\includegraphics{arxivMain-figure7.pdf}
}
\subfigure[SkinnerDB spends most time executing one or two join orders.]{
\includegraphics{arxivMain-figure8.pdf}
}
\caption{Analysis of convergence of SkinnerDB.\label{convergenceFig}}
\end{figure}
Figure~\ref{convergenceFig} analyzes convergence of Skinner-C to optimal join orders. On the left side, we show that the growth of the search tree slows as execution progresses (a first indication of convergence). On the right side, we show that Skinner-C executes one (with a timeout of $b=10$ per time slice) or two (with a timeout of $b=500$, allowing fewer iterations for convergence) join orders for most of the time.
\begin{figure}[t]
\subfigure[Search tree size is correlated with query size.\label{uctMemFig}]{
\includegraphics{arxivMain-figure9.pdf}
}
\subfigure[Size of join order progress tracker tree.\label{trackerMemFig}]{
\includegraphics{arxivMain-figure10.pdf}
}
\subfigure[Size of final result tuple indices.\label{finalMemFig}]{
\includegraphics{arxivMain-figure11.pdf}
}
\subfigure[Combined size of intermediate results, progress, and tree.\label{allMemFig}]{
\includegraphics{arxivMain-figure12.pdf}
}
\caption{Memory consumption of SkinnerDB.\label{memoryFigure}}
\end{figure}
Finally, we analyze the memory consumption of Skinner-C. Compared to traditional systems, Skinner-C maintains several additional, auxiliary data structures. First, it keeps the UCT search tree. Second, it maintains a tree associating each join order to its last execution state (one tuple index for each base table). Third, it must keep the tuple vectors of all join result tuples in a hash table to eliminate duplicates generated by different join orders. On the other hand, unlike other systems, Skinner-C does not maintain any intermediate results (due to depth-first multiway join execution). Figure~\ref{memoryFigure} shows the maximal sizes of the aforementioned data structures during query execution as a function of query size. Storing result tuple index vectors (Figure~\ref{finalMemFig}) has dominant space complexity, followed by the progress tracker and the UCT search tree. Overall, memory consumption is not excessive compared to traditional execution engines.
\section{Algorithms}
\label{algorithmSec}
We describe several adaptive processing strategies that are implemented in SkinnerDB. In Section~\ref{uctSub}, we introduce the UCT algorithm that all processing strategies are based upon. In Section~\ref{learningSub}, we describe how the UCT algorithm can generally be used to learn optimal join orders. In Section~\ref{genericSub}, we introduce a join order learning approach that can be implemented on top of existing SQL processing engines, in a completely non-intrusive manner. In Section~\ref{hybridSub}, we show how this strategy can integrate plans proposed by the original optimizer. In Section~\ref{customizedSub}, we propose a new query evaluation method that facilitates join order learning and the associated learning strategy.
While we describe the following algorithms only for SPJ queries, it is straightforward to add sorting, grouping, or aggregate calculations in a post-processing step (we do so in our actual implementation). Nested queries can be treated via decomposition~\cite{Neumann}.
\subsection{Background on UCT}
\label{uctSub}
Our method for learning optimal join orders is based on the UCT algorithm~\cite{Kocsis2006}, an algorithm from the area of reinforcement learning. It assumes the following scenario: we repeatedly make choices that result in rewards, and each choice is associated with reward probabilities that we can learn over time. Our goal is to maximize the sum of obtained rewards. To achieve that goal, it can be beneficial to make choices that resulted in large rewards in the past (``exploitation'') or choices about which we have little information (``exploration''), thereby informing future choices. The UCT algorithm balances between exploration and exploitation in a principled manner that results in probabilistic guarantees. More precisely, assuming that rewards are drawn from the interval $[0,1]$, the UCT algorithm guarantees that the expected regret (i.e., the difference between the sum of obtained rewards and the sum of rewards for optimal choices) is in $O(\log(n))$, where $n$ designates the number of choices made~\cite{Kocsis2006}.
We specifically select the UCT algorithm for several reasons. First, UCT has been applied successfully to problems with very large search spaces (e.g., planning Go moves~\cite{Gelly2012}). This is important since the search space for join ordering grows quickly in the query size. Second, UCT provides formal guarantees on cumulative regret (i.e., accumulated regret over all choices made). Other algorithms from the area of reinforcement learning~\cite{Feldman2014} focus, for instance, on minimizing simple regret (i.e., the quality of the final choice). The latter would be more appropriate when separating planning from execution. Our goal is to interleave planning and execution, making the first metric more appropriate. Third, the formal guarantees of UCT do not depend on any instance-specific parameter settings~\cite{Domshlak2013}, distinguishing it from other reinforcement learning algorithms.
We assume that the space of choices can be represented as a search tree. In each round, the UCT algorithm makes a series of decisions that can be represented as a path from the tree root to a leaf. Those decisions result in a reward from the interval $[0,1]$, calculated by an arbitrary, randomized function specific to the leaf node (or as a sum of rewards associated with each path step). Typically, the UCT algorithm is applied in scenarios where materializing the entire tree (in memory) is prohibitively expensive. Instead, the UCT algorithm expands a partial search tree gradually towards promising parts of the search space. The UCT variant used in our system expands the materialized search tree by at most one node per round (adding the first node on the current path that is outside the currently materialized tree).
Materializing search tree nodes allows us to associate statistics with each node. The UCT algorithm maintains two counters per node: the number of times the node was visited and the average reward obtained for paths crossing through that node. If counters are available for all relevant nodes, the UCT algorithm selects at each step the child node $c$ maximizing the formula $r_c+w\cdot \sqrt{\log(v_p)/v_c}$, where $r_c$ is the average reward for $c$, $v_c$ and $v_p$ are the numbers of visits for child and parent node, and $w$ is a weight factor. In this formula, the first term represents exploitation while the second term represents exploration. Their sum is the upper end of a confidence interval on the reward achievable by passing through the corresponding node (hence the name of the algorithm: UCT for Upper Confidence bounds applied to Trees). Setting $w=\sqrt{2}$ is sufficient to obtain bounds on expected regret. It can, however, be beneficial to try different values to optimize performance for specific domains~\cite{Domshlak2013}.
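As an illustration, the per-node counters and the selection rule above can be sketched in Python (a minimal sketch; the node layout and handling of unvisited children are our assumptions, not the actual SkinnerDB implementation):

```python
import math

class UctNode:
    """Node in a partially materialized UCT search tree."""
    def __init__(self):
        self.visits = 0        # number of times this node was visited
        self.reward_sum = 0.0  # accumulated reward of paths through this node
        self.children = {}     # maps an action (next table) to a child node

    def avg_reward(self):
        return self.reward_sum / self.visits if self.visits else 0.0

def select_child(parent, w=math.sqrt(2)):
    """Return the (action, child) pair maximizing r_c + w*sqrt(log(v_p)/v_c)."""
    def ucb(child):
        if child.visits == 0:
            return float('inf')  # unvisited children are tried first
        return child.avg_reward() + w * math.sqrt(
            math.log(parent.visits) / child.visits)
    return max(parent.children.items(), key=lambda kv: ucb(kv[1]))
```

The first term of the bound favors children with high average reward, the second grows for rarely visited children; with $w=\sqrt{2}$ the selection matches the default setting discussed above.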
\subsection{Learning Optimal Join Orders}
\label{learningSub}
Our search space is the space of join orders. We consider all join orders except those that unnecessarily introduce Cartesian product joins. Avoiding Cartesian product joins is a very common heuristic that is used by virtually all optimizers~\cite{Gubichev2015}.
To apply the UCT algorithm to join ordering, we need to represent the search space as a tree. We assume that each tree node represents one decision with regard to the next table in the join order. Tree edges represent the choice of one specific table. The tree root represents the choice of the first table in the join order. All query tables can be chosen since no table has been selected previously. Hence, the root node will have $n$ child nodes where $n$ is the number of tables to join. Nodes in the next layer of the tree (directly below the root) represent the choice of a second table. We cannot select the same table twice in the same join order. Hence, each of those nodes will have at most $n-1$ child nodes associated with the remaining choices. The number of choices depends on the structure of the join graph. If at least one of the remaining tables is connected to the first table via join predicates, only such tables will be considered. If none of the remaining tables is connected, all remaining tables become eligible (since a Cartesian product join cannot be avoided given the initial choice). In total, the search tree has $n$ levels. Each leaf node is associated with a completely specified join order.
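The rules above for determining eligible child nodes can be sketched as follows (a hypothetical helper; representing the join graph as an adjacency dictionary is our assumption):

```python
def eligible_tables(joined, all_tables, join_graph):
    """Tables that may extend a partial join order.

    joined: set of tables already selected in the join order prefix.
    join_graph: dict mapping each table to the set of tables it shares
    a join predicate with. Tables connected to the prefix are preferred;
    a Cartesian product is only allowed when it is unavoidable.
    """
    remaining = all_tables - joined
    if not joined:
        return remaining  # first position: all tables can be chosen
    connected = {t for t in remaining
                 if join_graph.get(t, set()) & joined}
    return connected if connected else remaining
```

For a chain query $R_1 \Join R_2 \Join R_3$, for instance, only $R_2$ extends the prefix $(R_1)$, while a table disconnected from the prefix becomes eligible only if no connected table remains.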
We generally divide the execution of a query into small time slices in which different join orders are tried. For each time slice, the UCT algorithm selects a path through the aforementioned tree, thereby selecting the join order to try next. As discussed previously, only part of the tree will be ``materialized'' (i.e., we keep nodes with node-specific counters in main memory). When selecting a path (i.e., a join order), UCT exploits counters in materialized nodes wherever available to select the next path step. Otherwise, the next step is selected randomly. After a join order has been selected, it is executed during the current time slice. Results from different time slices are merged (while removing overlapping results). We stop once a complete query result is obtained.
Our goal is to translate the aforementioned formal guarantees of UCT, bounding the distance between expected and optimal reward (i.e., the regret), into guarantees on query evaluation speed. To achieve that goal, we must link the reward function to query evaluation progress. The approaches for combined join order learning and execution, presented in the following subsections, define the reward function in different ways. They all have, however, the property that higher rewards correlate with better join orders. After executing the selected join order for a bounded amount of time, we measure evaluation progress and calculate a corresponding reward value. The UCT algorithm updates counters (average reward and number of visits) in all materialized tree nodes on the previously selected path.
The following algorithms use the UCT algorithm as a sub-function. More precisely, we use two UCT-related commands in the following pseudo-code: \Call{UctChoice}{$T$} and \Call{RewardUpdate}{$T,j,r$}. The first one returns the join order chosen by the UCT algorithm when applied to search tree $T$ (some of the following processing strategies maintain multiple UCT search trees for the same query). The second function updates tree $T$ by registering reward $r$ for join order $j$. Sometimes, we will pass a reward function instead of a constant for $r$ (with the semantics that the reward resulting from an evaluation of that function is registered).
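A possible sketch of \Call{RewardUpdate}{$T,j,r$}, propagating a reward along the materialized prefix of a join order's path (the dictionary-based tree layout is our assumption, chosen for brevity):

```python
def new_node():
    """A materialized search tree node with UCT counters."""
    return {'visits': 0, 'reward': 0.0, 'children': {}}

def reward_update(tree, join_order, reward):
    """Register reward r for join order j in tree T.

    Counters are updated in every materialized node on the selected
    path; the walk stops where the path leaves the materialized tree.
    """
    node = tree
    node['visits'] += 1
    node['reward'] += reward
    for table in join_order:
        child = node['children'].get(table)
        if child is None:
            break  # remainder of the path is not materialized yet
        node = child
        node['visits'] += 1
        node['reward'] += reward
```

Average rewards, as used by the selection formula, can be recovered as `reward / visits` per node.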
\subsection{Generic Execution Engines}
\label{genericSub}
In this subsection, we show how we can learn optimal join orders when treating the execution engine as a black box with an SQL interface. This approach can be used on top of an existing DBMS without changing a single line of its code.
A naive approach to learn optimal join orders in this context would be the following. Following the discussion in the last subsection, we divide each table joined by the input query into an equal number of batches (if the input query contains unary predicates in the where clause, we can apply them in the same step). We simplify by assuming that all tables are sufficiently large to contain at least one tuple per batch (otherwise, fewer batches can be used for extremely small tables). We iteratively choose join orders using the UCT algorithm. In each iteration, we use the given join order to process a join between one batch \textit{for the left-most table in the join order} and the remaining, complete tables. We remove each processed batch and add the result of each iteration to a result relation. We terminate processing once all batches are processed for at least one table. As we prove in more detail in Section~\ref{analysisSec}, the result relation contains a complete query result at this point. To process the query as quickly as possible, we feed the UCT algorithm with a reward function that is based on processing time for the current iteration: the lower the execution time, the higher the corresponding reward. Note that reducing the size of the left-most table in a join order (by using only a single batch) tends to reduce the sizes of all intermediate results. If the dominant execution time component is proportional to those intermediate result sizes (e.g., time for generating intermediate result tuples, index lookups, number of evaluated join predicates), execution time for one batch is proportional to execution time for the entire table (with a scaling factor that corresponds to the number of batches per table).
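The naive approach can be sketched as the following driver loop (all callables and the concrete reward definition here are hypothetical stand-ins for the UCT component and the underlying execution engine, not SkinnerDB's actual interfaces):

```python
import time

def naive_skinner(tables, batches, uct_choice, reward_update, execute_join):
    """Naive batch-at-a-time driver loop.

    batches: dict mapping each table to its list of unprocessed batch
    indices. uct_choice() returns a join order (list of table names),
    execute_join(order, batch) joins one batch of the left-most table
    with the remaining complete tables, reward_update(order, reward)
    updates the UCT counters for the tried join order.
    """
    result = []
    # stop once all batches of at least one table are processed
    while all(batches[t] for t in tables):
        order = uct_choice()
        start = time.time()
        batch = batches[order[0]].pop()  # consume a batch of the left-most table
        result.extend(execute_join(order, batch))
        elapsed = time.time() - start
        # lower execution time yields higher reward (one simple choice)
        reward_update(order, 1.0 / (1.0 + elapsed))
    return result
```

As the text notes next, this loop is naive precisely because a single iteration with a bad join order can take arbitrarily long; the timeout scheme below addresses this.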
The reason why we call the latter algorithm naive is the following. In many settings, the reward function for the UCT algorithm is relatively inexpensive to evaluate. In our case, it requires executing a join between one batch and all the remaining tables. The problem is that execution cost can vary strongly as a function of join order. The factor separating execution time of best and worst join order may grow exponentially in the number of query tables. Hence, even a single iteration with a bad join order and a single tuple in the left-most table may lead to an overall execution time that is far from the optimum for the entire query. Hence, we must upper-bound execution time in each iteration.
However, this leads to a new problem: what timeout should we choose per batch in each iteration? Ideally, we would select as timeout the time required by an optimal join order. Of course, we neither know an optimal join order nor its optimal processing time for a new query. Using a timeout that is lower than the optimum prevents us from processing an entire batch before the timeout. This might be less critical if we could back up the state of the processing engine and restore it when trying the same join order again. However, we currently treat the processing engine as a black box and cannot assume access to partial results and internal state. Further, most SQL processing engines execute a series of binary joins and generate potentially large intermediate results. As we may try out many different join orders, already the space required for storing intermediate results for each join order would become prohibitive. So, we must assume that all intermediate results are lost if execution times out before a batch is finished. Using lower timeouts than necessary therefore prevents us from making any progress. On the other hand, choosing a timeout that is too high leads to unnecessary overheads when processing sub-optimal join orders.
\tikzstyle{timeBudget}=[anchor=south west, draw=black, rounded corners=0.15cm, fill=blue!15]
\begin{figure}[t]
\centering
\begin{tikzpicture}
\draw (0,0) -- (7,0);
\draw[->] (0,0) -- (0,2);
\node[timeBudget, minimum width=1cm] at (0,0) {1};
\node[timeBudget, minimum width=1cm] at (1,0) {2};
\node[timeBudget, minimum width=2cm] at (0,0.5) {3};
\node[timeBudget, minimum width=1cm] at (2,0) {4};
\node[timeBudget, minimum width=1cm] at (3,0) {5};
\node[timeBudget, minimum width=2cm] at (2,0.5) {6};
\node[timeBudget, minimum width=4cm] at (0,1) {7};
\node[timeBudget, minimum width=1cm] at (4,0) {8};
\node[timeBudget, minimum width=1cm] at (5,0) {9};
\node[timeBudget, minimum width=2cm] at (4,0.5) {10};
\node[timeBudget, minimum width=1cm] at (6,0) {11};
\node[font=\itshape] at (3.5,-0.5) {Time Units};
\node[font=\itshape, rotate=90, align=center] at (-0.5,1) {Timeout Level};
\foreach \x in {1,2,3,4,5,6,7}{
\draw[draw=black] (\x,0) -- (\x,-0.2);
}
\end{tikzpicture}
\caption{Illustration of time budget allocation scheme: we do not know the optimal time per batch and iterate over different timeouts, allocating higher budgets less frequently.\label{budgetFigure}}
\end{figure}
The choice of a good timeout is therefore crucial, yet we cannot know the best timeout a-priori. The solution lies in an iterative scheme that tries different timeouts in different iterations. We carefully balance allocated execution time over different timeouts, avoiding higher timeouts until lower ones have been tried sufficiently often. More precisely, we present a timeout scheme that ensures that the total execution time allocated per timeout does not differ by more than factor two across different timeouts. Figure~\ref{budgetFigure} gives an intuition for the corresponding timeout scheme (numbers indicate the iteration in which the corresponding timeout is chosen). We use timeouts that are powers of two (we also call the exponent the \textit{Level} of the timeout). We always choose the highest timeout for the next iteration such that the accumulated execution time for that timeout does not exceed the time allocated to any lower timeout. Having fixed a timeout for the current iteration, we assign a reward of one to the selected join order if the batch was processed entirely, and a reward of zero otherwise.
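The timeout selection just described can be sketched as a stand-alone Python function (an illustration of the \Call{NextTimeout}{} logic; the explicit bound on the level search is our addition). Its output reproduces the allocation pattern of Figure~\ref{budgetFigure}, i.e., the timeout sequence $1,1,2,1,1,2,4,1,1,2,1,\ldots$:

```python
from collections import defaultdict

def next_timeout(n):
    """Choose the timeout (a power of two) for the next iteration.

    n maps a timeout level L to the total time units spent with
    timeout 2^L so far. We pick the highest level L such that every
    lower level l satisfies n[l] >= n[L] + 2^L, then charge 2^L time
    units to that level, keeping time balanced across timeouts.
    """
    best = 0
    level = 1
    # a level L can only qualify once every lower level has received
    # at least 2^L time, so the total time spent bounds the search
    while 2 ** level <= sum(n.values()) + 1:
        if all(n[l] >= n[level] + 2 ** level for l in range(level)):
            best = level
        level += 1
    n[best] += 2 ** best
    return 2 ** best
```

Simulating eleven calls on an initially empty counter yields `[1, 1, 2, 1, 1, 2, 4, 1, 1, 2, 1]`, matching the eleven boxes of Figure~\ref{budgetFigure}.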
Algorithm~\ref{nonIntrusiveAlg} presents pseudo-code matching the verbal description. First, tuples are filtered using unary predicates and the remaining tuples are partitioned into $b$ batches per table (we omit pseudo-code for pre-processing). We use function~\Call{DBMS}{} to invoke the underlying DBMS for processing one batch with a timeout. The function accumulates partial results in a result relation if processing finishes before the timeout and returns \textbf{true} in that case. Vector $o_i$ stores for each table an offset, indicating how many of its batches were completely processed (it is implicitly initialized to one for each table). Variable $n_l$ stores for each timeout level $l$ how much execution time was dedicated to it so far (it is implicitly initialized to zero and updated in each invocation of function~\Call{NextTimeout}{}). Note that we maintain a separate UCT tree $T_t$ for each timeout $t$ (implicitly initialized as a single root node representing no joined tables). This prevents, for instance, processing failures for lower timeouts from influencing join ordering decisions for larger timeouts. We prove the postulated properties of the timeout scheme (i.e., balancing time over different timeouts) in Section~\ref{analysisSec}.
\begin{algorithm}[t!]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Returns timeout for processing next batch,}
\State \Comment{based on times $n$ given to each timeout before.}
\Function{NextTimeout}{$n$}
\State \Comment{Choose timeout level}
\State $L\gets\max\{L|\forall l<L:n_l\geq n_L+2^L\}$
\State \Comment{Update total time given to level}
\State $n_L\gets n_L+2^L$
\State \Comment{Return timeout for chosen level}
\State \Return{$2^L$}
\EndFunction
\vspace{0.15cm}
\State \Comment{Process SPJ query $q$ using existing DBMS and}
\State \Comment{by dividing each table into $b$ batches.}
\Procedure{SkinnerG}{$q=R_1\Join\ldots\Join R_m,b$}
\State \Comment{Apply unary predicates and partitioning}
\State $\{R_1^1,\ldots,R_m^b\}\gets$\Call{PreprocessingG}{$q,b$}
\State \Comment{Until we processed all batches of one table}
\While{$\nexists i:o_i>b$}
\State \Comment{Select timeout using pyramid scheme}
\State $t\gets$\Call{NextTimeout}{n}
\State \Comment{Select join order via UCT algorithm}
\State $j\gets$\Call{UctChoice}{$T_t$}
\State \Comment{Process one batch until timeout}
\State $suc\gets$\Call{DBMS}{$R_{j1}^{o_{j1}}\Join R_{j2}^{o_{j2}..b}\ldots\Join R_{jm}^{o_{jm}..b},t$}
\State \Comment{Was entire batch processed successfully?}
\If{$suc$}
\State \Comment{Mark current batch as processed}
\State $o_{j1}\gets o_{j1}+1$
\State \Comment{Store maximal reward in search tree}
\State \Call{RewardUpdate}{$T_t,j,1$}
\Else
\State \Comment{Store minimal reward in search tree}
\State \Call{RewardUpdate}{$T_t,j,0$}
\EndIf
\EndWhile
\EndProcedure
\end{algorithmic}
\end{small}
\caption{Regret-bounded query evaluation using a generic execution engine.\label{nonIntrusiveAlg}}
\end{algorithm}
\subsection{Hybrid Algorithm}
\label{hybridSub}
The algorithm presented in the last subsection uses reinforcement learning alone to order joins. It completely bypasses any join ordering capabilities offered by an existing optimizer. This approach is efficient for queries where erroneous statistics or difficult-to-analyze predicates mislead the traditional optimizer. However, it adds unnecessary learning overheads for standard queries for which a traditional optimizer would produce reasonable query plans.
\begin{figure}[t]
\begin{tikzpicture}
\draw[->] (0,0) -- (7,0);
\node[timeBudget, fill=red!15, minimum width=1cm] at (0,0) {1};
\node[timeBudget, minimum width=1cm] at (1,0) {2};
\node[timeBudget, fill=red!15, minimum width=2cm] at (2,0) {3};
\node[timeBudget, minimum width=2cm] at (4,0) {4};
\node at (6.5,0.25) {\ldots};
\node[font=\itshape] at (3.5,-0.5) {Time};
\foreach \x in {1,2,3,4,5,6}{
\draw[draw=black] (\x,0) -- (\x,-0.2);
}
\end{tikzpicture}
\caption{The hybrid approach alternates with increasing timeouts between executing plans proposed by the traditional optimizer (red) and learned plans (blue).\label{hybridFig}}
\end{figure}
We present a hybrid algorithm that combines reinforcement learning with a traditional query optimizer. Instead of using an existing DBMS only as an execution engine, we additionally try to benefit from its query optimizer whenever possible. We do not provide pseudo-code for the hybrid algorithm as it is quick to explain. We iteratively execute the query using the plan chosen by the traditional query optimizer, using a timeout of $2^i$ where $i$ is the number of invocations (for the same input query) and time is measured in some atomic unit (e.g., several tens of milliseconds). Between two traditional optimizer invocations, we execute the learning-based algorithm described in the last subsection for the same amount of time as the traditional optimizer. We save the state of the UCT search trees between different invocations of the learning approach. Optionally, if a table batch was processed by the latter, we can remove the corresponding tuples before invoking the traditional optimizer. Figure~\ref{hybridFig} illustrates the hybrid approach. As shown in Section~\ref{analysisSec}, the hybrid approach bounds expected regret (compared to the optimal plan) and guarantees a constant factor overhead compared to the original optimizer.
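A minimal sketch of the hybrid control loop (the two callables are hypothetical stand-ins for executing the optimizer's plan and the learning-based executor under a timeout; they return \textbf{true} once the query result is complete):

```python
def hybrid_execute(run_optimizer_plan, run_learning_approach):
    """Alternate between the traditional optimizer's plan and the
    learning-based executor, doubling the per-round timeout.

    Each round i gives 2^i atomic time units first to the optimizer's
    plan (red boxes in the figure), then the same amount to the
    learning approach (blue boxes). Terminates once either finishes.
    """
    i = 0
    while True:
        timeout = 2 ** i
        if run_optimizer_plan(timeout):
            return 'optimizer'
        if run_learning_approach(timeout):
            return 'learner'
        i += 1
```

Since both strategies receive equal time per round, whichever would finish first on its own finishes within at most a constant factor of its stand-alone time, which is the intuition behind the constant-factor guarantee mentioned above.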
\subsection{Customized Execution Engines}
\label{customizedSub}
The algorithms presented in the previous sections can work with any execution engine for SPJ queries. In this section, we present an execution engine that is tailored towards the needs of a learning based join ordering strategy. In addition, we present a variant of the join order learning algorithm that optimally exploits that execution engine.
\begin{comment}
\begin{algorithm}[t]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Advance tuple pointer $p$ for position $i$ in join order $j$}
\State \Comment{for query $q$, considering tuple offsets $o$.}
\Function{Advance}{$q=R_1\Join\ldots\Join R_m,j,o,p,i$}
\State \Comment{Advance tuple pointer for join order position}
\State $p_{j_i}\gets p_{j_i}+1$
\State \Comment{While index exceeds relation cardinality}
\While{$p_{j_i}>|q.R_{j_i}|$ \textbf{and} $i>0$}
\State $p_{j_i}\gets o_{j_i}$
\State $i\gets i-1$
\State $p_{j_i}\gets p_{j_i}+1$
\EndWhile
\State \Return{$\langle p,i\rangle$}
\EndFunction
\vspace{0.15cm}
\State \Comment{Execute join order $j$ for query $q$ starting from}
\State \Comment{tuple pointer $p$ with tuple offsets $o$. Add results}
\State \Comment{to $R$ until time budget $b$ is depleted.}
\Function{Execute}{$q=R_1\Join\ldots\Join R_m,j,o,b,p,R$}
\State $i\gets1$ \Comment{Initialize join order index}
\While{processing time $<b$ \textbf{and} $i>0$}
\State $t\gets\Call{Materialize}{q.R[p_{j_1}]\times\ldots\times q.R[p_{j_i}]}$
\If{$t$ satisfies all newly applicable predicates}
\If{$i=q.n$} \Comment{Is result tuple completed?}
\State $R\gets R\cup\{p\}$ \Comment{Add pointers to result set}
\State $\langle p,i\rangle\gets\Call{Advance}{q,j,o,p,i}$
\Else \Comment{Tuple is incomplete}
\State $i\gets i+1$
\EndIf
\Else \Comment{Tuple violates predicates}
\State $\langle p,i\rangle\gets\Call{Advance}{q,j,o,p,i}$
\EndIf
\EndWhile
\State \Comment{Join order position 0 indicates termination}
\State \Return{$(i<1)$}
\EndFunction
\end{algorithmic}
\end{small}
\caption{Auxiliary functions for regret-bounded query evaluation with customized engine.\label{customizedAuxAlg}}
\end{algorithm}
\end{comment}
\begin{algorithm}[t]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Advance tuple index in state $s$ for table at position $i$}
\State \Comment{in join order $j$ for query $q$, considering tuple offsets $o$.}
\Function{NextTuple}{$q=R_1\Join\ldots\Join R_m,j,o,s,i$}
\State \Comment{Advance tuple index for join order position}
\State $s_{j_i}\gets s_{j_i}+1$
\State \Comment{While index exceeds relation cardinality}
\While{$s_{j_i}>|R_{j_i}|$ \textbf{and} $i>0$}
\State $s_{j_i}\gets o_{j_i}$
\State $i\gets i-1$
\State $s_{j_i}\gets s_{j_i}+1$
\EndWhile
\State \Return{$\langle s,i\rangle$}
\EndFunction
\vspace{0.15cm}
\State \Comment{Execute join order $j$ for query $q$ starting from}
\State \Comment{tuple indices $s$ with tuple offsets $o$. Add results}
\State \Comment{to $R$ until time budget $b$ is depleted.}
\Function{ContinueJoin}{$q=R_1\Join\ldots\Join R_m,j,o,b,s,R$}
\State $i\gets1$ \Comment{Initialize join order index}
\While{processing time $<b$ \textbf{and} $i>0$}
\State $t\gets\Call{Materialize}{R_{j_1}[s_{j_1}]\times\ldots\times R_{j_i}[s_{j_i}]}$
\If{$t$ satisfies all newly applicable predicates}
\If{$i=m$} \Comment{Is result tuple completed?}
\State $R\gets R\cup\{s\}$ \Comment{Add indices to result set}
\State $\langle s,i\rangle\gets\Call{NextTuple}{q,j,o,s,i}$
\Else \Comment{Tuple is incomplete}
\State $i\gets i+1$
\EndIf
\Else \Comment{Tuple violates predicates}
\State $\langle s,i\rangle\gets\Call{NextTuple}{q,j,o,s,i}$
\EndIf
\EndWhile
\State \Comment{Join order position 0 indicates termination}
\State \Return{$(i<1)$}
\EndFunction
\end{algorithmic}
\end{small}
\caption{Multi-way join algorithm supporting fast join order switching.\label{customizedAuxAlg}}
\end{algorithm}
\begin{comment}
\begin{algorithm}[t]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Regret-bounded evaluation of SPJ query $q$,}
\State \Comment{sampling join orders with time budget $b$}
\State \Comment{and using reward function $F_R$.}
\Function{SkinnerC}{$q,b,F_R$}
\State $o\gets \langle 0,\ldots,0\rangle$ \Comment{Initialize tuple offsets}
\State $R\gets\emptyset$ \Comment{Initialize result pointers}
\State $finished\gets\mathbf{false}$ \Comment{Initialize termination flag}
\While{$\neg finished$}
\State \Comment{Choose join order via UCT algorithm}
\State $j\gets\Call{UctChoice}{T}$
\State \Comment{Restore tuple pointers for this join order}
\State $p\gets\Call{RestorePointers}{j}$
\State \Comment{Execute join order for time budget}
\State $finished\gets\Call{ContinueJoin}{q,j,o,b,p,R}$
\State \Comment{Update UCT tree via progress-based rewards}
\State \Call{RewardUpdate}{$T,j,F_R$}
\State \Comment{Backup pointers and update tuple offsets}
\State $o\gets\Call{BackupProgress}{j,p,o}$
\EndWhile
\State \Return{$[\Call{Materialize}{R_1[p_1]\times R_2[p_2]\ldots}|p\in R]$}
\EndFunction
\end{algorithmic}
\end{small}
\caption{Regret-bounded query evaluation using a customized execution engine.\label{customizedAlg}}
\end{algorithm}
\end{comment}
\begin{algorithm}[t]
\renewcommand{\algorithmiccomment}[1]{// #1}
\begin{small}
\begin{algorithmic}[1]
\State \Comment{Regret-bounded evaluation of SPJ query $q$,}
\State \Comment{length of time slices is restricted by $b$.}
\Function{SkinnerC}{$q=R_1\Join\ldots\Join R_m,b$}
\State \Comment{Apply unary predicates and hashing}
\State $q\gets$\Call{PreprocessingC}{$q$}
\State $R\gets\emptyset$ \Comment{Initialize result indices}
\State $finished\gets\mathbf{false}$ \Comment{Initialize termination flag}
\While{$\neg finished$}
\State \Comment{Choose join order via UCT algorithm}
\State $j\gets\Call{UctChoice}{T}$
\State \Comment{Restore execution state for this join order}
\State $s\gets\Call{RestoreState}{j,o,S}; s_{prior}\gets s$
\State \Comment{Execute join order during time budget}
\State $finished\gets\Call{ContinueJoin}{q,j,o,b,s,R}$
\State \Comment{Update UCT tree via progress-based rewards}
\State \Call{RewardUpdate}{$T,j,\textproc{Reward}(s-s_{prior},j)$}
\State \Comment{Backup execution state for join order}
\State $\langle o,S\rangle\gets\Call{BackupState}{j,s,o,S}$
\EndWhile
\State \Return{$[\Call{Materialize}{R_1[s_1]\times R_2[s_2]\ldots}|s\in R]$}
\EndFunction
\end{algorithmic}
\end{small}
\caption{Regret-bounded query evaluation using a customized execution engine.\label{customizedAlg}}
\end{algorithm}
Most execution engines are designed for a traditional approach to query evaluation. They assume that a single join order is executed for a given query (after being generated by the optimizer). Learning optimal join orders while executing a query leads to unique requirements on the execution engine. First, we execute many different join orders for the same query, each one only for a short amount of time. Second, we may even execute the same join order multiple times with many interruptions (during which we try different join orders). This specific scenario leads to (at least) three desirable performance properties for the execution engine. First, the execution engine should minimize overheads when switching join orders. Second, the engine should preserve progress achieved for a given join order even if execution is interrupted. Finally, the engine should allow sharing achieved progress, to the maximal extent possible, between different join orders as well. The generic approach realizes the latter point only to a limited extent (by discarding batches processed completely by any join order from consideration by other join orders).
The key towards achieving the first two desiderata (i.e., minimal overhead when switching join orders or interrupting execution) is a mechanism that backs up execution state as completely as possible. Also, restoring prior state when switching join order must be very efficient. By ``state'', we mean the sum of all intermediate results and changes to auxiliary data structures that were achieved during a partial query evaluation for one specific join order. We must keep execution state as small as possible in order to back it up and to restore it efficiently.
Two key ideas enable us to keep execution state small. First, we represent tuples in intermediate results concisely as vectors of tuple indices (each index pointing to one tuple in a base table). Second, we use a multi-way join strategy limiting the number of intermediate result tuples to at most one at any point in time. Next, we discuss both ideas in detail.
Traditional execution engines for SPJ queries produce intermediate results that consist of actual tuples (potentially containing many columns with elevated byte sizes). To reduce the size of the execution state, we materialize tuples only on demand. Each tuple, be it a result tuple or a tuple in an intermediate result, is the result of a join between single tuples in a subset of base tables. Hence, whenever possible, we describe tuples simply by an array of tuple indices (whose length is bounded by the number of tables in the input query). We materialize partial tuples (i.e., only the required columns) temporarily to check whether they satisfy applicable predicates or immediately before returning results to the user. To do that efficiently, we assume a column store architecture (allowing quick access to selected columns) and a main-memory resident data set (reducing the penalty of random data access).
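As an illustration of on-demand materialization, the following minimal sketch (names and data layout are assumptions for illustration, not SkinnerDB's actual code) fetches only the required columns for a tuple described by one index per base table, as a column store permits:

```python
def materialize(tables, index_vector, needed_columns):
    """Materialize a (partial) tuple on demand.

    tables[t][c] is column c of table t as an array (column store);
    index_vector[t] is the current tuple index into table t; and
    needed_columns lists (table, column) pairs required, e.g., to
    evaluate the newly applicable predicates. Only those cells are
    touched, so the execution state itself stays a small index vector.
    """
    return {
        (t, c): tables[t][c][index_vector[t]]
        for t, c in needed_columns
    }
```

A main-memory, column-oriented layout is assumed so that these point lookups are cheap despite their random access pattern.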
Most traditional execution engines for SPJ queries process join orders by a sequence of binary join operations. This can generate large intermediate results that would become part of the execution state. We avoid that by a multi-way join strategy whose intermediate result size is restricted to at most one tuple. We describe this strategy first for queries with generic predicates. Later, we discuss an extension for queries with equality join predicates based on hashing.
Intuitively, our multi-way join strategy can be understood as a depth-first search for result tuples. Considering input tables in one specific join order, we fix one tuple in a predecessor table before considering tuples in the successor table. We start with the first tuple in the first table (in join order). Next, we select the first tuple in the second table and verify whether all applicable predicates are satisfied. If that is the case, we proceed to considering tuples in the third table. If not, we consider the next tuple in the second table. Once all tuples in the second table have been considered for a fixed tuple in the first table, we ``backtrack'' and advance the tuple indices for the first table by one. Execution ends once all tuples in the first table have been considered.
\tikzstyle{row}=[anchor=south west, draw=black, fill=blue!50, minimum width=1cm, minimum height=0.5cm]
\tikzstyle{joinFlow}=[thick, -stealth, red, dashed]
\tikzstyle{joinOrder}=[red, font=\small]
\begin{figure}[t]
\begin{tikzpicture}
\foreach \y in {1,2,3,4,5}{
\node[row] (R\y) at (0,0.5*\y) {...};
}
\foreach \y in {1,2,3,4,5}{
\node[row] (S\y) at (2.5,0.5*\y) {...};
}
\foreach \y in {1,2,3,4,5}{
\node[row] (T\y) at (5,0.5*\y) {...};
}
\draw[joinFlow] (S5.west) to[out=180, in=180] (S4.west);
\node[joinOrder] at (2,2.5) {2};
\draw[joinFlow] (S4.west) to[out=180, in=180] (S3.west);
\node[joinOrder] at (2,2) {3};
\draw[joinFlow] (R5.east) to (S5.west);
\node[joinOrder] at (1.75,2.9) {1};
\node[joinOrder] at (4,2.5) {4};
\draw[joinFlow] (S3.east) to (T5.west);
\draw[joinFlow] (T5.west) to[out=180, in=180] (T4.west);
\node[joinOrder] at (4.7,2.4) {5};
\draw[joinFlow] (T4.west) to[out=180, in=180] (T3.west);
\node[joinOrder] at (4.7,2) {6};
\draw[joinFlow] (T3.west) to[out=180, in=180] (T2.west);
\node[joinOrder] at (4.7,1.5) {7};
\draw[joinFlow] (T2.west) to[out=180, in=180] (T1.west);
\node[joinOrder] at (4.7,1.1) {8};
\draw[joinFlow] (T1.west) to (S3.east);
\node[joinOrder] at (4,1.75) {9};
\draw[joinFlow] (S3.west) to[out=180, in=180] (S2.west);
\node[joinFlow] at (3.75,1.25) {...};
\node[joinOrder] at (2,1.5) {10};
\end{tikzpicture}
\caption{Depth-first multi-way join strategy: we increase the join order index once the first tuple satisfying all applicable predicates is found, we decrease it once all tuples in the current table were considered.\label{joinFigure}}
\end{figure}
\begin{example}
Figure~\ref{joinFigure} illustrates the process for a three-table join. Having fixed a tuple in the left-most table (at the left, we start with the first tuple), the join order index is increased. Next, we find the first tuple in the second table satisfying the join condition with the current tuple in the first table. Having found such a tuple, we increase the join order index again. Now, we iterate over tuples in the third table, adding each tuple combination satisfying all applicable conditions to the result. After all tuples in the last table have been considered, we decrease the join order index and consider the next tuple in the second table.
\end{example}
Algorithm~\ref{customizedAuxAlg} implements that approach. Function~\Call{ContinueJoin}{} realizes the execution strategy described before. For a fixed amount of processing time (we use the number of outer while loop iterations as a proxy in our implementation) or until all input data is processed, it either increases the ``depth'' (i.e., the join order index $i$) to further complete a partial tuple satisfying all applicable predicates, or it advances tuple indices using Function~\Call{NextTuple}{}. The latter function increases the tuple index for the current join order position or backtracks if the table cardinality is exceeded. Note that the same result tuple might be added multiple times in invocations of the execution engine for different join orders. However, we add tuple index vectors into a result \textit{set}, avoiding duplicate entries (of course, two different tuple index vectors can represent two result tuples with the same values in each column).
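For concreteness, here is a runnable Python transcription of Functions~\textproc{NextTuple} and \textproc{ContinueJoin}, a sketch under simplifying assumptions: tables are in-memory lists, indices are 0-based, and a single `check` callback stands in for evaluating the newly applicable predicates at each join order position:

```python
def next_tuple(tables, j, o, s, i):
    """Advance the tuple index for join order position i (1-based),
    backtracking to the previous position whenever a table's
    cardinality is exceeded; o holds per-table tuple offsets."""
    s = list(s)
    s[j[i - 1]] += 1
    while i > 0 and s[j[i - 1]] >= len(tables[j[i - 1]]):
        s[j[i - 1]] = o[j[i - 1]]  # reset index to table offset
        i -= 1                      # backtrack one position
        if i > 0:
            s[j[i - 1]] += 1
    return s, i


def continue_join(tables, j, o, check, budget, s, result):
    """Execute join order j (a permutation of table indices) for at
    most `budget` loop iterations, starting from index vector s.
    check(j, i, partial) must verify the predicates that become
    applicable once the first i tables of j are joined. Completed
    tuples are added as index vectors to the result *set*, so tuples
    re-discovered via other join orders are deduplicated.
    Returns (finished, s)."""
    i, steps = 1, 0
    while steps < budget and i > 0:
        steps += 1
        partial = tuple(tables[j[k]][s[j[k]]] for k in range(i))
        if check(j, i, partial):
            if i == len(tables):          # result tuple completed
                result.add(tuple(s))
                s, i = next_tuple(tables, j, o, s, i)
            else:                          # descend to next table
                i += 1
        else:                              # predicate violated
            s, i = next_tuple(tables, j, o, s, i)
    return i < 1, s
```

Returning the index vector `s` alongside the termination flag makes the suspend-and-resume behavior explicit: the caller can store `s` per join order and pass it back in on the next invocation.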
We now discuss the main function (\Call{SkinnerC}{}) learning optimal join orders using a customized execution engine (see Algorithm~\ref{customizedAlg}). The most apparent difference from the version in Section~\ref{genericSub} is the lack of a dynamic timeout scheme. Instead, we use the same timeout for each invocation of the execution engine. This becomes possible since progress made when executing a specific join order is never lost. By minimizing the size of the execution state, we have enabled an efficient backup and restore mechanism (encapsulated by functions \Call{BackupState}{} and \Call{RestoreState}{} whose pseudo-code we omit) that operates only on a small vector of indices. The number of stored vectors is furthermore proportional to the size of the UCT tree. The fact that we no longer lose partial results due to inappropriate timeouts has a huge impact from the theoretical perspective (see Section~\ref{analysisSec}) as well as on performance in practice (see Section~\ref{experimentsSec}). Learning overheads are lower than before since we only maintain a single UCT search tree accumulating knowledge from all executions.
In Section~\ref{genericSub}, we used a binary reward function based on whether the current batch was processed. We do not process data batch-wise anymore and must therefore change the reward function (represented as function~\textproc{Reward} in the pseudo-code, which depends on the execution state delta and the join order). For instance, we can use as reward the percentage of tuples processed in the left-most table during the last invocation. This function correlates with execution speed and returns values in the range between 0 and 1 (the standard formulas used for selecting actions by the UCT algorithm are optimized for that case~\cite{Kocsis2006}). SkinnerDB uses a slight refinement: we sum over all tuple index deltas, scaling each one down by the product of the cardinalities of its associated table and the preceding tables in the current join order. Note that the UCT algorithm averages rewards over multiple invocations of the same join order and keeps exploring (i.e., obtaining a reward of zero for one good join order during a single invocation of the execution engine will not exclude that order from further consideration).
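The refined reward amounts to the following short sketch (a direct transcription under the assumption that `delta` holds per-table index deltas accumulated since the last invocation for this join order; the result is approximately, though not strictly, bounded by one):

```python
def reward(delta, j, cardinalities):
    """Sum tuple index deltas, scaling the delta at each join order
    position by the product of the cardinalities of its table and
    all preceding tables in join order j."""
    total, scale = 0.0, 1.0
    for table in j:
        scale *= cardinalities[table]
        total += delta[table] / scale
    return total
```

Intuitively, a step in the left-most table dominates the reward, while deeper positions contribute with geometrically shrinking weight, so the value tracks how far execution has advanced through the (conceptual) tuple combination space per time slice.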
We have not yet discussed how our approach satisfies the third desideratum (sharing as much progress as possible among different join orders) mentioned at the beginning. In fact, we use several techniques to share progress between different join orders (those techniques are encapsulated in Function~\Call{RestoreState}{}). First, we again use offset counters to exclude, for each table, tuples that have already been joined with all other tuples (vector $o$ in the pseudo-code, which is implicitly initialized to one). In contrast to the version from Section~\ref{genericSub}, offsets are not defined at the granularity of data batches but at the granularity of single tuples. This allows for a more fine-grained sharing of progress between different join orders than before.
Second, we share progress between all join orders with the same prefix. Whenever we restore state for a given join order, we compare execution progress between the current join order and all other orders with the same prefix (iterating over all possible prefix lengths). Comparing execution states $s$ and $s'$ for two join orders $j$ and $j'$ with the same prefix of length $k$ (i.e., the first $k$ tables are identical), the first order is ``ahead'' of the second if there is a join order position $p\leq k$ such that $s_{j_i}\geq s'_{j_i}$ for $i<p$ and $s_{j_p}>s'_{j_p}+1$. In that case, we can ``fast-forward'' execution of the second join order, skipping result tuples that were already generated via the first join order. We do so by executing $j'$ from a merged state $s''$ where $s''_{j'_i}=s_{j'_i}$ for $i<p$, $s''_{j'_p}=s_{j'_p}-1$, and $s''_{j'_i}=o_{j'_i}$ for $i>p$ (since we can only share progress for the common prefix). Progress for different join orders is stored in the data structure represented as $S$ in Algorithm~\ref{customizedAlg}; Function~\textproc{RestoreState} takes care of fast-forwarding (selecting the most advanced execution state among all alternatives).
So far, we described the algorithm for queries with generic predicates. Our actual implementation uses an extended version supporting equality join predicates via hashing. If equality join predicates are present, we create hash tables on all columns subject to equality predicates during pre-processing. Of course, creating hash tables to support all possible join orders introduces overheads. However, those overheads are typically small as only tuples satisfying all unary predicates are hashed. We extend Algorithm~\ref{customizedAuxAlg} to benefit from hash tables: instead of incrementing tuple indices always by one (line~5), we ``jump'' directly to the next highest tuple index that satisfies at least all applicable equality predicates with preceding tables in the current join order (this index can be determined efficiently via probing).
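One way to realize the ``jump'' is to map each join-key value to the sorted list of tuple indices holding it; a sketch (this particular index structure is an illustrative choice, not necessarily SkinnerDB's):

```python
from bisect import bisect_left

def build_hash_index(column):
    """Map each join-key value to the sorted list of tuple indices
    holding it (built once per equality column during pre-processing,
    over tuples that passed the unary predicates)."""
    index = {}
    for pos, value in enumerate(column):
        index.setdefault(value, []).append(pos)
    return index

def jump(index, value, current):
    """Next tuple index strictly greater than `current` whose join
    key equals `value`, or None if exhausted; this replaces the
    increment by one when advancing the tuple index."""
    positions = index.get(value)
    if not positions:
        return None
    pos = bisect_left(positions, current + 1)
    return positions[pos] if pos < len(positions) else None
```

Since the per-value index lists are appended in scan order, they are already sorted, and each probe-plus-jump costs a dictionary lookup and a binary search.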
\section{Overview}
\label{overviewSec}
\tikzstyle{SkinnerComponent}=[anchor=center, draw=black, fill=blue!15, minimum width=1.6cm, align=center, font=\small]
\tikzstyle{SkinnerComponent2}=[anchor=center, draw=black, rounded corners=0.15cm, fill=red!15, minimum width=1.75cm, align=center, font=\small]
\tikzstyle{SkinnerIO}=[anchor=center, font=\small]
\tikzstyle{SkinnerFlow}=[ultra thick, draw]
\begin{figure}
\begin{tikzpicture}
\node[SkinnerIO] (query) at (-2,0) {Query};
\node[SkinnerIO] (result) at (-2,-1) {Result};
\node[SkinnerComponent] (preprocessor) at (-0.25,0) {Pre-\\Processor};
\node[SkinnerComponent] (postprocessor) at (-0.25,-1) {Post-\\Processor};
\node[SkinnerComponent, minimum width=4.25cm, minimum height=2.7cm] (joinProcessor) at (3.125,-0.15) {};
\node at (3.125,1) {Join Processor};
\node[SkinnerComponent2] (optimizer) at (2,0.25) {Learning\\Optimizer};
\node[SkinnerComponent2] (executor) at (4.25,0.25) {Join\\Executor};
\node[SkinnerComponent2] (tracker) at (4.25,-1) {Progress\\Tracker};
\node[SkinnerComponent2] (calculator) at (2,-1) {Reward\\Calculator};
\draw[SkinnerFlow, ->] (query) -- (preprocessor);
\draw[SkinnerFlow, ->] (postprocessor) -- (result);
\draw[SkinnerFlow, ->] (optimizer) -- (executor);
\draw[SkinnerFlow, <->] (executor) -- (tracker);
\draw[SkinnerFlow, ->] (tracker) -- (calculator);
\draw[SkinnerFlow, ->] (calculator) -- (optimizer);
\draw[SkinnerFlow, ->] (preprocessor.east) -- ++ (0.4,0);
\draw[SkinnerFlow, ->] (postprocessor.east) ++ (0.4,0) -- ++ (-0.4,0);
\end{tikzpicture}
\caption{Primary components of SkinnerDB.\label{architectureFig}}
\end{figure}
Figure~\ref{architectureFig} shows the primary components of SkinnerDB. This high-level outline applies to all of the SkinnerDB variants.
The pre-processor is invoked first for each query. Here, we filter base tables via unary predicates. Also, depending on the SkinnerDB variant, we partition the remaining tuples into batches or hash them (to support joins with equality predicates).
Join execution proceeds in small time slices. The join processor consists of several sub-components. The learning optimizer selects a join order to try next at the beginning of each time slice. It uses statistics on the quality of join orders that were collected during the current query execution. Selected join orders are forwarded to the join executor. This component executes the join order until a small timeout is reached. We add result tuples into a result set, checking for duplicate results generated by different join orders. The join executor can be either a generic SQL processor or, for maximal performance, a specialized execution engine. The same join order may get selected repeatedly. The progress tracker keeps track of which input data has been processed already. For Skinner-C, it even tracks execution state for each join order tried so far, and merges progress across join orders. At the start of each time slice, we consult the progress tracker to restore the latest state stored for the current join order. At the end of it, we back up progress achieved during the current time slice. The reward calculator calculates a reward value based on progress achieved during the current time slice. This reward is a measure of how quickly execution proceeds using the chosen join order. It is used as input by the optimizer to determine the most interesting join order to try in the next time slice.
Finally, we invoke the post-processor, using the join result tuples as input. Post-processing involves grouping, aggregation, and sorting. In the next section, we describe the algorithms executed within SkinnerDB.
\section{Formal Analysis}
\label{analysisSec}
We prove correctness (Section~\ref{correctnessSub}) and regret bounds (Section~\ref{regretSub}) for all Skinner variants.
\begin{comment}
We refer to an extended technical report~\cite{Skinners} for termination proofs.
Next, we introduce terminology and assumptions used throughout this section. \textit{Execution Time}, without further qualification, is total time required by SkinnerDB to execute a query. \textit{Regret} refers to the difference between actual and optimal execution time (i.e., using an optimal join order) for a given query. We express regret as a function of $n$, designating execution time of SkinnerDB. Intuitively, we calculate upper bounds on the expected ratio of execution time that was ``wasted''. Note that $n$ is proportional to the number of iterations of the UCT algorithm for a fixed timeout. We assume a query whose properties, such as the number of tables $m$ and the best average reward $r^*$ achievable by an optimal join order (for Skinner-C), are fixed.
Skinner-C and Skinner-H use a variable \textit{Timeout} that restricts execution time spent on a specific data batch using a specific join order. This timeout is determined by the \textit{Timeout Level} via the pyramid timeout scheme described in Section~\ref{genericSub}. Intuitively, the \textit{Number of Timeout Levels} refers to the ``height'' of the colored area in Figure~\ref{budgetFigure}. The execution time \textit{Allocated} to a specific timeout level is the accumulated time spent executing batches with the associated timeout. Intuitively, it corresponds to the ``width'' of a row (associated with a specific timeout level) in Figure~\ref{budgetFigure}. For Skinner-C, the timeout is static and restricts time spent on achieving progress for a fixed join order during a time slice.
We make several simplifying assumptions to facilitate the regret analysis. First, we assume that data batches are homogeneous. This means that, for a fixed join order, execution time per data batch (for Skinner-G and Skinner-H) and evaluation progress (i.e., delta of left-most tuple index per time unit for Skinner-C), is fixed. Second, we study slightly simplified versions of the algorithms from Section~\ref{algorithmSec}. In particular, we assume that offsets are only applied to exclude tuples for the left-most table in the current join order. This means that no progress is shared between join orders that do not share the left-most table. Third, we pessimistically assume that timeouts are completely exploited by SkinnerDB when using sub-optimal join orders (i.e., execution does not finish before the timeout).
\end{comment}
\begin{comment}
We analyze the three algorithms presented in Section~\ref{algorithmSec}. Skinner-G designates the algorithm that works with any execution engine (see Section~\ref{genericSub}), Skinner-H the hybrid version that mixes learning with traditional query optimization (see Section~\ref{hybridSub}), and Skinner-C the algorithm that is tailored to exploit a customized execution engine (see Section~\ref{customizedSub}). Each of the three algorithms is iterative. We prove termination first.
\begin{theorem}
Skinner-G terminates for each input.
\end{theorem}
\begin{proof}
The algorithm terminates once one of the offsets exceeds the number of batches per table (i.e., it terminates once all batches of one table have been processed). An offset is advanced once a batch is processed successfully within the timeout. The processing time per batch is finite and the algorithm tries higher and higher timeouts. Hence, each batch must eventually be processed successfully once the timeout is high enough. The algorithm terminates since the total number of batches is fixed for a given input.
\end{proof}
\begin{theorem}
Skinner-H terminates for each input.
\end{theorem}
\begin{proof}
This follows directly from the previous theorem as the hybrid algorithm uses Skinner-G as a sub-function. Skinner-H terminates once Skinner-G terminates (or before if the traditional optimizer finishes execution first).
\end{proof}
\begin{theorem}
Skinner-C terminates for each input.
\end{theorem}
\begin{proof}
Assume no termination. The number of join orders is finite. Hence, there must be at least one join order for which the execution engine is invoked infinitely often. We consider one such join order in the following. Before each invocation for that join order, we restore tuple pointers to their state after the last invocation for the same join order. Hence, progress is never lost while a join order is suspended. In each iteration, we either increase $i$ or advance the tuple pointers. As $i$ is upper-bounded by the number of tables $m$, tuples pointers must be advanced at least every $m$ iterations. We make the (reasonable) assumption that the execution time budget is sufficient to allow advancing tuple pointers at least once per executor invocation. When advancing tuple pointers, the pointer of a given table is either increased or the pointer of one of its predecessor tables in the join order. As each pointer is upper-bounded by the corresponding table cardinality, the pointer of the left-most table must eventually reach the cardinality bound after which $i$ reaches value zero. But then, the algorithm terminates.
\end{proof}
\end{comment}
\subsection{Correctness}
\label{correctnessSub}
Next, we prove correctness (i.e., that each algorithm produces a correct query result). We distinguish result tuples (tuples from the result relation joining all query tables) from component tuples (tuples taken from a single table).
\begin{theorem}
Skinner-G produces the correct query result.
\end{theorem}
\begin{proof}
Offsets exclude component tuples from consideration when executing the following joins. We show the following invariant: all result tuples containing excluded component tuples have been generated. This is certainly true at the start where offsets do not exclude any tuples. Offsets are only advanced if batches have been successfully processed. In that case, all newly excluded component tuples have been joined with tuples from all other tables that are not excluded. But excluded tuples can be neglected according to our invariant. The algorithm terminates only after all tuples from one table have been excluded. In that case, all result tuples have been generated. Still, we need to show that no result tuple has been generated more often than with a traditional execution. This is the case since we exclude all component tuples in one table after each successfully processed batch.
\end{proof}
\begin{theorem}
Skinner-H produces the correct query result.
\end{theorem}
\begin{proof}
We assume that executing a query plan produced by the traditional optimizer generates a correct result. The result produced by Skinner-G is correct according to the preceding theorem. This implies that Skinner-H produces a correct result as it returns the result generated by one of the latter two algorithms.
\end{proof}
\begin{theorem}
Skinner-C produces the correct query result.
\end{theorem}
\begin{proof}
Skinner-C does not produce any duplicate result tuples as justified next. Result tuples are materialized only at the very end of the main function. The result set contains tuple index vectors until then. Vectors are unique over all result tuples (as they indicate the component tuples from which they have been formed) and, due to set semantics, no vector will be contained twice in the result. Also, Skinner-C produces each result tuple at least once. This is due to the fact that \textit{i)}~complete tuples are always inserted into the result set, \textit{ii)}~partial tuples (i.e., $i<m$) are completed unless they violate predicates (then they cannot be completed into result tuples), and \textit{iii)}~tuple indices are advanced in a way that covers all combinations of component tuples.
\end{proof}
\begin{comment}
Regret refers to the difference between actual and optimal execution time for a given query. Before we analyze SkinnerDB, we analyze regret bounds of traditional approaches in a worst case.
Assume a query joining $m$ tables ($m$ is a constant) of cardinality $s$ ($s$ is variable). Tables are connected in a chain via join predicates and one unary predicate is placed on each table. All predicates evaluate to true on each tuple with the exception of the unary predicate on table $t^*$, located on one end of the chain. The latter predicate evaluates to false on each tuple. We consider left-deep join orders that avoid Cartesaian products and measure execution cost by the number of tuples read. Clearly, an optimal plan reads tuples from $t^*$ first and terminates in $O(s)$. If we choose to start from the table at the opposite end of the chain, $t^{-}$, compared to $t^*$, execution cost are in $O(s^{m-1})$.
If a traditional optimizer~\cite{Selinger1979} cannot reliably estimate the selectivity of predicates (e.g., because they are user-defined functions or because of data skew), we can consider the choice of the first join order table as random. Assuming for instance a uniform distribution, the expected regret is in $O(s^{m-2})$ (based on a $1/m$ chance of selecting $t^-$). Eddies~\cite{Tzoumas2008} fare slightly better in this extreme case. They operate on small data batches (e.g., tuples) and adapt their join order based on feedback. However, Eddies never discard partial results and join them with all remaining tables. If at least one tuple is selected from table $t^-$, there is no other way of completing it into result tuples than consecutive joins with all remaining tables. However, as the associated join predicates are non-selective, execution time will be in $O(s^{m-2})$. Assuming optimistically that Eddies will learn an optimal order after the very first tuple is selected, and assuming a uniform distribution over tables for the initial tuple choice, we have expected regret in $O(s^{m-3})$ for Eddies.
In the following, we denote by $m$ the number of tables joined by a query ($m$ is fixed) and by $s$ the size of the largest table. Imagine a query connecting all joined tables in a chain via equality join predicates. Further, assume that a unary user-defined function predicate is placed on each table. Assume that all those predicates evaluate to true on each tuple, except for the predicate on table $t^*$. The latter table is located at one end of the chain and its associated unary predicate evaluates always to false. Clearly, an optimal query plan starts joins with table $t^*$, discovering that the query result is empty and terminating in $O(s)$. A left-deep query plan that starts joins at the other end of the chain incurs cost in $O(s^{m-1})$ according to the $C_{out}$ metric. If we start at the other end of the chain, evaluation cost according to the
\end{comment}
\subsection{Regret Bounds}
\label{regretSub}
Regret is the difference between actual and optimal execution time. We denote execution time by $n$ and optimal time by $n^*$. Skinner-G and Skinner-H choose timeout levels (represented by the $y$ axis in Figure~\ref{budgetFigure}) that we denote by $l$. We use the subscript notation (e.g., $n_l$) to denote accumulated execution time spent with a specific timeout level. We study regret for fixed query properties (e.g., the number of joined tables, $m$, or the optimal reward per time slice, $r^*$) for growing amounts of input data (i.e., table size) and execution time. In particular, we assume that execution time, in relation to query size, is large enough to make the impact of transitory regret negligible~\cite{Coquelin2007b}. We focus on regret of the join phase as pre-processing overheads are linear in data and query size (while post-processing overheads are polynomial in query and join result size). We assume that time slices are chosen large enough to make overheads related to learning and join order switching negligible. Specifically for Skinner-G and Skinner-H, we assume that the optimal timeout per time slice applies to all batches. To simplify the analysis, we study slightly simplified versions of the algorithms from Section~\ref{algorithmSec}. In particular, we assume that offsets are only applied to exclude tuples for the left-most table in the current join order. This means that no progress is shared between join orders that do not share the left-most table. For Skinner-C, we assume that the simpler reward function (progress in left-most table only) is used. We base our analysis on the properties of the UCT variant proposed by Kocsis and Szepesvari~\cite{Kocsis2006}.
For a given join order, processing time in SkinnerDB is equivalent to processing time in traditional engines if scaling down the size of the left-most table scales down execution time proportionally (i.e., execution time behaves similarly to the $C_{out}$ cost metric~\cite{Krishnamurthy1986}). If so, the regret bounds apply compared to an optimal traditional query plan execution.
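For concreteness, the following Python sketch computes the $C_{out}$ value of a left-deep join order under a simplified model with per-join selectivity factors. It is an illustration of the metric only, not SkinnerDB code; the table cardinalities and selectivities are hypothetical.

```python
def c_out(cardinalities, selectivities):
    """Summed cardinality of all intermediate results of a left-deep join
    order (the C_out cost metric).  selectivities[i] is the combined
    selectivity of the predicates linking table i+1 to the join prefix."""
    size = cardinalities[0]
    cost = 0
    for card, sel in zip(cardinalities[1:], selectivities):
        size = size * card * sel   # cardinality after the next join
        cost += size
    return cost

# Starting with the small, selective table keeps intermediate results small:
assert c_out([10, 1000, 1000], [0.001, 0.001]) < \
       c_out([1000, 1000, 10], [0.001, 0.001])
```

Under this model, scaling down the left-most table's cardinality scales down the first intermediate result, and with it all later ones, which is the property assumed above.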
Before analyzing Skinner-G, we first prove several properties of the pyramid timeout scheme introduced in Section~\ref{genericSub}.
\begin{lemma}
The number of timeout levels used by Skinner-G is upper-bounded by $\log(n)$.\label{nrLevelsLemma}
\end{lemma}
\begin{proof}
We add a new timeout level $L$ whenever the equation $n_l\geq n_L+2^L$ is satisfied for all $0\leq l<L$ for the first time. As $n_l$ is generally a sum over powers of two ($2^l$), and as $n_L=0$ before $L$ is used for the first time, the latter condition can be tightened to $2^L=n_l$ for all $0\leq l<L$. Hence, we add a new timeout level whenever the total execution time so far can be represented as $L\cdot 2^L$ for some $L\in\mathbb{N}$. Assuming $n>1$, the number of levels grows faster under a scheme that adds levels whenever execution time can be represented as $2^L$ for some $L\in\mathbb{N}$. In that case, the number of levels is bounded by $\log(n)$ (using the binary logarithm).
\end{proof}
\begin{lemma}\label{balancedLevelsLemma}
The total amount of execution time allocated to different (already used) timeout levels cannot differ by more than factor two.
\end{lemma}
\begin{proof}
Assume the allocated time differs by more than factor two between two timeout levels, i.e.\ $\exists l_1,l_2:n_{l_1}>2\cdot n_{l_2}$ (and $n_{l_1},n_{l_2}\neq 0$). Consider the situation in which this happens for the first time. Since $\forall i:n_i\geq n_{i+1}$, we must have $n_0>2\cdot n_L$ where $L$ is the largest timeout level used so far. This was not the case previously, so we either selected timeout level 0 or a new timeout level $L$ in the last step. If we selected a new timeout level $L$, then $n_l\geq n_L+2^L$ held for all $0\leq l<L$, which can be tightened to $\forall 0\leq l<L:n_l=2^L$ (exploiting that $n_L=0$ previously and that timeouts are powers of two). Hence, selecting a new timeout cannot increase the maximal ratio of time per level. Assume now that timeout level 0 was selected. Denote by $\delta_{i}=n_i-n_{i+1}$ for $i<L$ the difference in allocated execution time between consecutive levels, before the last selection. It is $\delta_{i}\leq 2^{i}$ since $n_{i}$ is increased in steps of size $2^{i}$ and $\delta_i$ is strictly smaller than $2^{i+1}$ (otherwise, level $i+1$ or a higher one would have been selected). It was $n_0-n_L=\sum_{0\leq i<L}\delta_i\leq \sum_{0\leq i<L}2^i< 2^L$. On the other hand, it was $n_L\geq 2^L$ (as $n_L\neq0$ and since $n_L$ is increased in steps of $2^L$). After $n_0$ is increased by one, it is still $n_0\leq 2\cdot n_L$. Hence, the initial assumption always leads to a contradiction.
\end{proof}
\begin{comment}
\begin{proof}
We use induction. The statement holds trivially after executing a first batch with a timeout of one time unit (as only a single timeout level was used so far). Assume that the statement holds after executing up to $i$ batches. This implies that it holds after executing $i+1$ batches as shown next. Designate by $l$ the level used for the next batch and by $L$ the maximal level used before. The total amount of allocated execution time (i.e., $w_l$) can only decrease for increasing levels. Hence, the maximal relative difference in total execution time per level is encountered between the largest and smallest level used so far. Therefore, if the new level is neither of them (i.e, $0<l<L$), the maximal relative difference cannot change. If $l=L$ (i.e., $l$ is the maximal level used so far), the relative distance can only decrease. If $l=L+1$ (i.e., we start using a new level), then $w_L=2^{L+1}$ (since we add new levels as soon as possible) and therefore $w_{L+1}=w_L$. The relative distance between $w_{L+1}$ and $w_0$ is bounded by two since the relative distance between $w_L$ and $w_0$ was bounded by two. If the next level is minimal (i.e., $l=0$) then for all higher levels $h>0$ we must have $w_{h}+2^{h}>w_{h-1}$. The latter implies $w_{h-1}-w_{h}\leq 2^{h-1}$ since $w_{h}$ is increased in steps of $2^h$ while $w_{h-1}$ is increased in steps of $2^{h-1}$. Before increasing $w_0$ due to the newly executed batch, the absolute difference between $w_0$ and $w_L$ is bounded by $\sum_{h=1..L}2^{h-1}=2^L-1$. After increasing $w_0$ by one, the absolute difference is bounded by $2^L$. The induction holds since $w_L$ is increased in steps of size $2^L$ and therefore $w_L\geq 2^L$ if it was used. This implies $w_0/w_L\leq 2$.
\end{proof}
\end{comment}
We are now ready to provide worst-case bounds on the expected regret when evaluating queries via Skinner-G.
\begin{theorem}\label{skinnerGtheorem}
Expected execution time regret of Skinner-G is upper-bounded by $(1-1/(\log(n)\cdot m\cdot 4))\cdot n+O(\log(n))$.
\end{theorem}
\begin{proof}
Total execution time $n$ is the sum over execution time components $n_l$ that we spent using timeout level $l$, i.e.\ we have $n=\sum_{0\leq l\leq L}n_l$ where $L+1$ is the number of timeout levels used. It is $L+1\leq \log(n)$ due to Lemma~\ref{nrLevelsLemma} and $\forall l_1,l_2\leq L:n_{l_1}\geq n_{l_2}/2$ due to Lemma~\ref{balancedLevelsLemma}. Hence, for any specific timeout level $l$, we have $n_l\geq n/(2\cdot\log(n))$. Denote by $l^*$ the smallest timeout, tried by the pyramid timeout scheme, that allows processing an entire batch using the optimal join order. It is $n_{l^*}\geq n/(2\cdot\log(n))$. We also have $n_{l^*}=n_{l^*,1}+n_{l^*,0}$ where $n_{l^*,1}$ designates time spent executing join orders with timeout level $l^*$ that resulted in reward $1$, and $n_{l^*,0}$ designates time for executions with reward $0$. UCT guarantees that expected regret grows as the logarithm of the number of rounds (which, for a fixed timeout level, is proportional to execution time). Hence, $n_{l^*,0}\in O(\log(n_{l^*}))$ and $n_{l^*,1}\geq n_{l^*}-O(\log(n_{l^*}))$. Denote by $b$ the number of batches per table. The optimal algorithm executes $b$ batches with timeout $l^*$ and the optimal join order. Skinner can execute at most $m\cdot b-m+1\in O(m\cdot b)$ batches for timeout $l^*$ before no batches are left for at least one table, terminating execution. Since $l^*$ is the smallest timeout greater than the optimal time per batch, the time per batch consumed by Skinner-G exceeds the optimal time per batch by at most factor two. Hence, denoting by $n^*$ the time for an optimal execution, it is $n^*\geq n_{l^*,1}/(2\cdot m)$, therefore $n^*\geq (n_{l^*}-O(\log(n)))/(2\cdot m)\geq n_{l^*}/(2\cdot m)-O(\log(n))$ (since $m$ is fixed), which implies $n^*\geq n/(4\cdot m\cdot\log(n))-O(\log(n))$. Hence, the regret $n-n^*$ is upper-bounded by $(1-1/(4\cdot m\cdot\log(n)))\cdot n+O(\log(n))$.
\end{proof}
\begin{comment}
\begin{proof}
We first calculate regret due to inappropriate timeouts. We pessimistically assume that there is one optimal timeout level $l^*$ per batch and that no other timeout leads to significant progress. The number of timeout levels used after $n$ units of execution time is upper-bounded by $\log(n)$ according to Lemma~\ref{nrLevelsLemma}. Also, the time allocated to different levels than $l^*$ exceeds the time dedicated to level $l^*$ at most by factor two (according to Lemma~\ref{balancedLevelsLemma}). Hence, the regret due to inappropriate timeouts is upper-bounded by $(1-1/(\log(n)\cdot 2))\cdot n$. According to the guarantees of the UCT algorithm~\cite{Kocsis2006}, the expected regret due to suboptimal choices (join orders in our case) is upper-bounded by $O(\log(n))$. Variable $n$ designates in general the number of UCT iterations. In our case the number of iterations (specifically for the right timeout) is indeed upper-bounded by the number $n$ of time units spent in execution. Skinner-G stops once all batches of one table are processed. We pessimistically assume that there is one optimal choice for the left-most table. Hence, we count regret for processing batches successfully with the right timeout unless they come from the right table. As the number of batches is the same in all tables, this type of regret is upper-bounded by factor $((m-1)/(\log(n)\cdot m\cdot 2))\cdot n$ for $m$ tables. Finally, even when processing batches from the right table with the right timeout, we may still incur regret by fully exploiting the timeout. This is suboptimal if it is possible to process batches within slightly more than half of the time budget (if the distance is higher, a different timeout level would be optimal). This regret is upper-bounded by $((m-1)/(\log(n)\cdot m\cdot 4))\cdot n$. Adding all regret terms yields the postulated bound.
\end{proof}
\end{comment}
Next, we analyze regret of Skinner-H.
\begin{theorem}
Expected execution time regret of Skinner-H is upper-bounded by $(1-1/(\log(n)\cdot m\cdot 12))\cdot n+O(\log(n))$.
\end{theorem}
\begin{proof}
Denote by $n_O$ and $n_L$ the time dedicated to executing the traditional optimizer's plan and learned plans, respectively. Assuming pessimistically that optimizer plan executions consume all dedicated time without terminating, it is $n_O=\sum_{0\leq l\leq L}2^l$ for a suitable $L\in\mathbb{N}$ at any point. Also, we have $n_L\geq\sum_{0\leq l<L}2^l$ as time is divided between the two approaches. It is $n_L/n\geq (2^L-1)/(2^{L+1}+2^L-2)$, which converges to $1/3$ as $n$ grows. We obtain the postulated bound from Theorem~\ref{skinnerGtheorem} by dividing the ``useful'' (non-regret) part of execution time by factor three.
\end{proof}
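The bound on the learning share can be checked numerically; the Python sketch below assumes the worst case in which every traditional-plan invocation consumes its full timeout (an illustration, not SkinnerDB code).

```python
def learning_share(L):
    """Lower bound on the fraction of time spent on learned plans when
    traditional timeouts 2^0, ..., 2^L have all been exhausted."""
    n_O = sum(2 ** l for l in range(L + 1))  # traditional optimizer time
    n_L = sum(2 ** l for l in range(L))      # learned-plan time (lower bound)
    return n_L / (n_O + n_L)

shares = [learning_share(L) for L in range(1, 50)]
assert all(b >= a for a, b in zip(shares, shares[1:]))  # monotone increase
assert abs(shares[-1] - 1 / 3) < 1e-9                   # converges to 1/3
```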
The following theorem is relevant if traditional query optimization works well (and learning creates overheads).
\begin{theorem}\label{skinnerHtraditionalTheorem}
The maximal execution time regret of Skinner-H compared to traditional query execution is upper-bounded by $n\cdot 4/5$.
\end{theorem}
\begin{proof}
Denote by $n^*$ the execution time of the plan produced by the traditional optimizer. Skinner-H terminates, at the latest, once the timeout for the traditional approach reaches $n^*$; since timeouts double after each iteration, that final timeout is at most $2\cdot n^*$. The accumulated execution time of all prior invocations of the traditional optimizer is upper-bounded by $2\cdot n^*$ as well. At the same time, the time dedicated to learning is upper-bounded by $2\cdot n^*$. In total, it is $n\leq 5\cdot n^*$ and the regret (i.e., added time compared to $n^*$) is therefore upper-bounded by $n\cdot 4/5$.
\end{proof}
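The following Python sketch replays the doubling scheme under the proof's pessimistic accounting: timed-out traditional runs waste their full timeout and learning receives an equal share. The model and the sample values of $n^*$ are simplifications for illustration.

```python
def skinner_h_worst_case(n_star):
    """Worst-case total time of Skinner-H when the traditional optimizer's
    plan needs n_star time units.  Model: each timed-out traditional run
    wastes its full timeout and learning receives an equal share."""
    total, timeout = 0, 1
    while timeout < n_star:              # traditional run hits the timeout
        total += 2 * timeout             # wasted timeout + learning share
        timeout *= 2
    # Final run completes in n_star; learning got at most `timeout` more.
    return total + n_star + timeout

for n_star in (1, 7, 100, 12345):
    n = skinner_h_worst_case(n_star)
    assert n <= 5 * n_star               # total time bounded by 5 * n_star
    assert n - n_star <= 0.8 * n         # regret at most 4/5 of total time
```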
Finally, we analyze expected regret of Skinner-C.
\begin{comment}
\begin{theorem}
Expected execution time regret of Skinner-C is upper-bounded by $(1-1/(m\cdot 2))\cdot n+O(\log(n))$.
\end{theorem}
\begin{proof}[Sketch]
The proof follows closely the one of Theorem~\ref{skinnerGtheorem} with the difference that no regret for inappropriate timeouts is incurred.
\end{proof}
\end{comment}
\begin{theorem}
Expected execution time regret of Skinner-C is upper-bounded by $(1-1/m)\cdot n+O(\log(n))$.\label{skinnerCadditiveTheorem}
\end{theorem}
\begin{comment}
\begin{proof}
Denote by $R$ the cumulative reward realized by Skinner-C during the execution of a specific query. It is $R=\sum_{t\in T}R_t$ where $R_t$ denotes rewards accumulated over time slices in which join orders starting with table $t\in T$ were selected. Reward is calculated as the relative tuple index delta in the left-most table (i.e., tuple index delta in left-most table divided by table cardinality). Hence, execution terminates whenever there is a table $t$ such that $R_t=1$. Skinner-C may accumulate more reward than necessary, when partitioning efforts over join orders with different start tables. The reward that can be collected before termination is however bounded by $R\leq m$. Skinner-C may lose time due to sub-optimal choices. However, it achieves near-optimal average rewards (and therefore near-optimal evaluation progress) based on the UCT guarantees~\cite{Kocsis2006}. More precisely, it is $r^*-r\in O(\log(n)/n)$ where $r$ is the expected average reward achieved by Skinner-C, $r^*$ the optimal average reward, and $n$ denotes execution time of Skinner-C (which is proportional to the number of UCT rounds as we use a fixed timeout). We calculate bounds on the optimal execution time $n^*$, required for collecting a reward of at least $R/m$ with optimal average reward per round $r^*$. It is $n^*\leq n\cdot r/(m\cdot r^*)=n\cdot (r^*-O(\log(n)/n))/(m\cdot r^*)$. From that, we obtain bounds on expected regret $n-n^*\leq n\cdot(1-1/m\cdot (r^*-O(\log(n)/n))/r^*)=n\cdot (1-1/m)+O(\log(n))$ (since $r^*$ is considered a constant).
\end{proof}
\end{comment}
\begin{proof}
Regret is the difference between optimal execution time, $n^*$, and actual time, $n$. It is $n-n^*=n\cdot(1-n^*/n)$. Denote by $R$ the total reward achieved by Skinner-C during query execution and by $r$ the average reward per time slice. It is $n=R/r$. Denote by $r^*$ the optimal reward per time slice. Reward is calculated as the relative tuple index delta in the left-most table (i.e., tuple index delta in left-most table divided by table cardinality). An optimal execution always uses the same join order and therefore terminates once the accumulated reward reaches one. Hence, we obtain $n^*=1/r^*$. We can rewrite regret as $n-n^*=n\cdot(1-(1/r^*)/(R/r))=n\cdot (1-r/(R\cdot r^*))$. The difference between expected reward and optimal reward is bounded as $r^*-r\in O(\log(n)/n)$~\cite{Kocsis2006}. Substituting $r$ by $r^*-(r^*-r)$, we can upper-bound regret by $n\cdot(1-1/R)+O(\log(n))$. Denote by $R_t\leq R$ rewards accumulated over time slices in which join orders starting with table $t\in T$ were selected. Skinner-C terminates whenever $R_t=1$ for any $t\in T$. Hence, we obtain $R\leq m$ and $n\cdot(1-1/m)+O(\log(n))$ as upper bound on expected regret.
\end{proof}
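The termination and reward-bound arguments can be illustrated with a toy simulation. The sketch below replaces the UCT choice by uniform random selection and uses hypothetical per-slice progress rates, so it illustrates only the accounting and is not SkinnerDB code.

```python
import random

def run_skinner_c(m, progress_per_slice, seed=0):
    """Toy model of Skinner-C's join phase: each time slice tries a join
    order; its reward is the relative progress in its left-most table.
    Execution ends once R_t reaches 1 for some start table t."""
    rng = random.Random(seed)
    R = [0.0] * m                  # accumulated reward R_t per start table
    slices = 0
    while max(R) < 1.0:
        t = rng.randrange(m)       # stand-in for the UCT join order choice
        R[t] += progress_per_slice[t]
        slices += 1
    return R, slices

progress = [0.05, 0.01, 0.002]     # table 0 admits the fastest progress
R, slices = run_skinner_c(3, progress)
assert max(R) >= 1.0                     # termination condition reached
assert sum(R) <= 3 + max(progress)       # total reward R bounded by m
assert slices >= 20                      # optimal time is n* = 1/r* = 20 slices
```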
Instead of the (additive) difference between expected and optimal execution time, we can also consider the ratio.
\begin{theorem}
The ratio of expected to optimal execution time for Skinner-C is upper-bounded and that bound converges to $m$ as $n$ grows.
\end{theorem}
\begin{proof}
Let $a=n-n^*$ be additive regret, i.e.\ the difference between actual and optimal execution time. It is $n^*=n-a$ and, as $a\leq (1-1/m)\cdot n+O(\log(n))$ due to Theorem~\ref{skinnerCadditiveTheorem}, it is $n^*\geq n-(1-1/m)\cdot n-O(\log(n))=n/m-O(\log n)=n\cdot(1/m-O(\log(n))/n)$. Optimal execution time is therefore lower-bounded by a term that converges to $n/m$ as $n$ grows. Hence, the ratio $n/n^*$ is upper-bounded by a term that converges to $m$.
\end{proof}
\begin{comment}
\begin{equation}
R=R_T+R_P+R_F+R_J
\end{equation}
\begin{lemma}
Regret due to inappropriate timeouts is upper-bounded by $n\cdot(\log(n)-1)/\log(n)$.\label{timeoutRegretLemma}
\end{lemma}
\begin{proof}
We use different timeouts to process data batches. The number of different timeouts grows logarithmically in the total execution time. Further, our pyramid scheme ensures that execution time allocations are balanced over different timeouts (i.e, the total time spent in executing batches with larger timeouts does not exceed the total time spent in executing batches with smaller timeouts). Hence, the execution time allocated to one specific timeout is bounded by $n/\log(n)$ time units. We assume that there is one optimal timeout per batch that is minimal among all timeouts allowing to process batches reliably. We (very pessimistically) assume that invocations with smaller timeouts generally do not allow to make progress. We also assume that invocations with larger timeouts waste a disproportional amount of processing time to make negligible progress. Still, regret is bounded by the ratio of invocations for sub-optimal timeouts which is $n\cdot(\log(n)-1)/\log(n)$.
\end{proof}
\begin{lemma}
Expected regret due to processing failures is upper-bounded by $O(\log(n/\log(n)))$.
\end{lemma}
\begin{proof}
Following the proof of the previous lemma, the execution time fraction allocated to the optimal timeout per batch is $n/\log(n)$. As we use a minimal timeout of one time unit, the number of batches processed with that optimal timeout is upper-bounded by the same number. We use the UCT algorithm to find join orders that process batches reliably with that timeout. Each processed batch corresponds to one sample of the UCT algorithm. Following the proof by Kocsis and Szepesv\`ari~\cite{Kocsis2006}, expected regret is bounded by $O(\log(n/\log(n)))$. It is $\log((n)/\log(n))=\log(n)-\log(\log(n))\in O(\log(n))$.
\end{proof}
\begin{lemma}
Regret due to sub-optimal choice of the first join table is upper-bounded by $((m-1)/m)\cdot n/\log(n)$.
\end{lemma}
\begin{proof}
Following again the proof of Lemma~\ref{timeoutRegretLemma}, the execution time fraction allocated to the optimal timeout per batch is $n/\log(n)$. The number of successfully processed batches using that timeout is trivially upper-bounded by the same number. Not all successfully processed batches share the same left-most table with the optimal join order. As the number of batches is the same over all tables, the number of batches processed from sub-optimal tables is for each single table strictly upper-bounded by the number of batches processed in the optimal table (after processing finishes). Hence, the total number of batches processed for join orders starting with the wrong table is upper bounded by factor $(m-1)/m$.
\end{proof}
\begin{lemma}
Regret due to a sub-optimal join order after the first table is upper-bounded by $1/(2\cdot m)\cdot n/\log(n)$
\end{lemma}
\begin{proof}
We choose timeouts per batch from a geometric progression. The optimal timeout in that progression may exceed the optimal timeout by up to factor two. As the reward function depends only on whether or not a batch was processed successfully, the algorithm does not distinguish between join orders whose processing cost differs by up to factor two. In the worst case, we loose half of the processing time allocated to the optimal timeout per batch and to the optimal left-most table due to such sub-optimal choices.
\end{proof}
\begin{theorem}
Expected regret of Skinner-NI is bounded by $n\cdot(1-1/(2\cdot m\cdot\log(n)))+O(\log(n))$.
\end{theorem}
\end{comment}
\section{Related Work}
\label{relatedSec}
Our approach connects to prior work collecting information on predicate selectivity by evaluating predicates on data samples~\cite{Bruno2002, Chaudhuri2001, Haas1992, Haas2011, Karanasos2014a, Lipton1990a, Markl2013, Wu2016}. In our experiments, we compare against a recently proposed representative~\cite{Wu2016}. Most prior approaches rely on a traditional optimizer to select interesting intermediate results to sample. They suffer if the original optimizer generates bad plans. The same applies to approaches for interleaved query execution and optimization~\cite{Aboulnaga2004a, Avnur2000, Babu2005} that repair initial plans at run time if cardinality estimates turn out to be wrong. Robust query optimization~\cite{Alyoubi2015, Alyoubi2016, Babcock2005, D.2008} assumes that predicate selectivity is known within narrow intervals, which is often not the case~\cite{El-Helw2009}. Prior work~\cite{Dutt2014a, Dutt2014} on query optimization without selectivity estimation is based on simplifying assumptions (e.g., independent predicates) that are often violated.
Machine learning has been used to estimate cost for query plans whose cardinality values are known~\cite{Akdere2011, Li2012}, to predict query~\cite{Ganapathi} or workflow~\cite{Popescu2013} execution times, result cardinality~\cite{Malik2006, Malik2007}, or interference between query executions~\cite{Duggan2011}. LEO~\cite{Aboulnaga2004a, Stillger2001}, IBM's learning optimizer, leverages past query executions to improve cardinality estimates for similar queries. Ewen et al.~\cite{Ewen2005} use a similar approach for federated database systems. Several recent approaches~\cite{Krishnan2018, Marcus2018} use learning for join ordering. All of the aforementioned approaches learn from past queries for the optimization of future queries. To be effective, new queries must be similar to prior queries and this similarity must be recognizable. Instead, we learn \textit{during} the execution of a query.
Adaptive processing strategies have been explored in prior work~\cite{Avnur2000, Deshpande2004, Deshpande2006a, Quanzhong2007a, Raman2003, Tzoumas2008, Viglas2003}. Our work uses reinforcement learning and is therefore most related to prior work using reinforcement learning in the context of Eddies~\cite{Tzoumas2008}. We compare against this approach in our experiments. Eddies do not provide formal guarantees on the relationship between expected execution time and the optimum. They never discard intermediate results, even if joining them with the remaining tables creates disproportional overheads. Eddies support bushy query plans in contrast to our approach. Bushy plans can in principle decrease execution cost compared to the best left-deep plan. However, optimal left-deep plans typically achieve reasonable performance~\cite{Gubichev2015}. Also, as we show in our experiments, reliably identifying near-optimal left-deep plans can be better than selecting bushy query plans via non-robust optimization.
Our work relates to prior work on filter ordering with regret bounds~\cite{Condon2009a}. Join ordering, however, introduces new challenges compared to filter ordering. In particular, applying more filters can only decrease the size of intermediate results. The relative overhead of a bad filter order, compared to the optimum, therefore grows linearly in the number of filters. The overhead of bad join orders, compared to the optimum, can grow exponentially in the query size. This motivates mechanisms that bound join overheads for single data batches, as well as mechanisms to save progress for partially processed data batches.
Worst-case optimal join algorithms~\cite{Ngo2012, Veldhuizen2012} bound cost as a function of worst-case query result size. We bound expected execution cost as a function of cost for processing an optimal join order. Further, prior work on worst-case optimal joins focuses on conjunctive queries while we support a broader class of queries, including queries with user-defined function predicates. Our approach applies to SQL with standard semantics while systems for worst-case optimal evaluation typically assume set semantics~\cite{Veldhuizen2012}.
\subsection{Experimental Setup}
In the following, Skinner-G(X) denotes the generic Skinner version (see Section~\ref{genericSub}) on top of database system X, and Skinner-H(X) the hybrid version on system X. We execute Skinner on top of MonetDB (Database Server Toolkit v1.1 (Mar2018-SP1))~\cite{Boncz2008} and Postgres (version 9.5.14)~\cite{Postgres}. We use different mechanisms to force join orders for those systems. Postgres has dedicated knobs to force join orders. For MonetDB, we ``brute-force'' join orders by executing each join as a separate query, generating multiple intermediate result tables. Skinner-C, described in Section~\ref{customizedSub}, uses a specialized execution engine. We set $w=\sqrt{2}$ in the UCT formula for Skinner-G and Skinner-H and $w=10^{-6}$ for Skinner-C. Unless noted otherwise, we use a timeout of $b=500$ loop iterations for Skinner-C (i.e., thousands or even tens of thousands of join order switches per second). For Skinner-G and -H, we must use much higher timeouts, starting from one second. All SkinnerDB-specific components are implemented in Java. Our current Skinner-C version parallelizes only the pre-processing step; extending our approach to parallel join processing is part of our future work. To separate speedups due to join ordering from speedups due to parallelization, we compare a subset of baselines in single- as well as in multi-threaded mode. The following experiments are executed on a Dell PowerEdge R640 server with 2 Intel Xeon 2.3~GHz CPUs and 256~GB of RAM.
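For reference, the UCT selection rule of Kocsis and Szepesv\`ari~\cite{Kocsis2006}, on which all Skinner variants rely, can be sketched as follows. This is a minimal Python illustration of the standard formula with exploration weight $w$; the actual SkinnerDB implementation is in Java and the sample numbers are hypothetical.

```python
import math

def uct_choice(avg_reward, visits, total_visits, w):
    """Select the action maximizing the UCT score; the exploration weight w
    trades off exploration against exploitation (we use w = sqrt(2) for
    Skinner-G/H and w = 1e-6 for Skinner-C)."""
    def score(a):
        if visits[a] == 0:
            return float('inf')    # always try unvisited actions first
        return avg_reward[a] + w * math.sqrt(math.log(total_visits) / visits[a])
    return max(range(len(visits)), key=score)

# A tiny w makes the choice almost purely greedy on average reward:
assert uct_choice([0.9, 0.5], [10, 10], 20, 1e-6) == 0
# With w = sqrt(2), a rarely tried action wins despite lower average reward:
assert uct_choice([0.9, 0.5], [1000, 1], 1001, math.sqrt(2)) == 1
```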
\subsection{Performance on Join Order Benchmark}
\begin{table}[t]
\caption{Performance of query evaluation methods on the join order benchmark - single-threaded.\label{jobTable}}
\begin{tabular}{p{1.75cm}p{1.25cm}p{1.25cm}p{1.25cm}p{1.25cm}}
\toprule[1pt]
\textbf{Approach} & \textbf{Total Time} & \textbf{Total Card.\ } & \textbf{Max.\ Time} & \textbf{Max.\ Card.\ }\\
\midrule[1pt]
Skinner-C & 183 & 112M & 9 & 18M \\
\midrule
Postgres & 726 & 681M & 59 & 177M \\
S-G(PG) & 13,348 & N/A & 840 & N/A \\
S-H(PG) & 2,658 & N/A & 234 & N/A \\
\midrule
MonetDB & 986 & 2,971M & 409 & 1,186M \\
S-G(MDB) & 1,852 & N/A & 308 & N/A\\
S-H(MDB) & 762 & N/A & 114 & N/A\\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Performance of query evaluation methods on the join order benchmark - multi-threaded.\label{jobTableMT}}
\begin{tabular}{p{1.75cm}p{1.25cm}p{1.25cm}p{1.25cm}p{1.25cm}}
\toprule[1pt]
\textbf{Approach} & \textbf{Total Time} & \textbf{Total Card.\ } & \textbf{Max.\ Time} & \textbf{Max.\ Card.\ }\\
\midrule[1pt]
Skinner-C & 135 & 112M & 7 & 18M \\
\midrule
MonetDB & 105 & 2,971M & 26 & 1,186M \\
S-G(MDB) & 1,450 & N/A & 68 & N/A \\
S-H(MDB) & 345 & N/A & 86 & N/A \\
\bottomrule[1pt]
\end{tabular}
\end{table}
We evaluate approaches on the join order benchmark~\cite{Gubichev2015}, a benchmark on real, correlated data. We follow the advice of the paper authors and explicitly prevent Postgres from choosing bad plans involving nested loop joins. Tables~\ref{jobTable} and \ref{jobTableMT} compare different baselines in single-threaded mode and, for Skinner and MonetDB, in multi-threaded mode (our server runs Postgres~9.5, which is not multi-threaded). We compare approaches by total and maximal (per query) execution time (in seconds). Also, we calculate the accumulated intermediate result cardinality of executed query plans. This metric is a measure of optimizer quality that is independent of the execution engine. Note that we cannot reliably measure cardinality for Skinner-G and Skinner-H since we cannot know which results were generated by the underlying execution engine before the timeout.
Clearly, Skinner-C performs best in single-threaded mode. Also, its speedups are correlated with significant reductions in intermediate result cardinality. As verified in more detail later, this suggests join order quality as the reason. For multi-threaded execution on a server with 24 cores, MonetDB slightly beats SkinnerDB. Note that our system is implemented in Java and does not currently parallelize the join execution phase.
When it comes to Skinner on top of existing databases, the results are mixed. For Postgres, we are unable to achieve speedups in this scenario (though, as shown in the appendix, speedups are possible in cases involving user-defined predicates). Postgres exploits memory less aggressively than MonetDB, making it more likely to read data from disk (which makes join order switching expensive). For single-threaded MonetDB, however, the hybrid version reduces total execution time by nearly 25\% and maximal time per query by factor four, compared to the original system. This is due to just a few queries for which the original optimizer selects highly suboptimal plans.
\begin{table}[t]
\caption{Performance of join orders in different execution engines for join order benchmark - single threaded.\label{joinOrdersTable}}
\begin{tabular}{p{1.5cm}p{1.5cm}p{1.75cm}p{1.75cm}}
\toprule[1pt]
\textbf{Engine} & \textbf{Order} & \textbf{Total Time} & \textbf{Max.\ Time} \\
\midrule[1pt]
Skinner & Skinner & 183 & 9 \\
& Optimal & 180 & 7 \\
\midrule
Postgres & Original & 726 & 59 \\
& Skinner & 567 & 14 \\
& Optimal & 555 & 14 \\
\midrule
MonetDB & Original & 986 & 409 \\
& Skinner & 138 & 7 \\
& Optimal & 134 & 6 \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Performance of join orders in different execution engines for join order benchmark - multi-threaded.\label{joinOrdersTableMT}}
\begin{tabular}{p{1.5cm}p{1.5cm}p{1.75cm}p{1.75cm}}
\toprule[1pt]
\textbf{Engine} & \textbf{Order} & \textbf{Total Time} & \textbf{Max.\ Time} \\
\midrule[1pt]
Skinner & Skinner & 135 & 7 \\
& Optimal & 129 & 7 \\
\midrule
MonetDB & Original & 105 & 26 \\
& Skinner & 53 & 2.7 \\
& Optimal & 51 & 2.3 \\
\bottomrule[1pt]
\end{tabular}
\end{table}
To verify whether Skinner-C wins because of better join orders, we executed the final join orders selected by Skinner-C in the other systems. We also executed optimal join orders, calculated according to the $C_{out}$ metric. Tables~\ref{joinOrdersTable} and \ref{joinOrdersTableMT} show that Skinner's join orders improve performance uniformly, compared to the original optimizer. Also, Skinner's execution time is very close to that of the optimal order, showing that the theoretical guarantees from the last section are pessimistic.
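The $C_{out}$ metric used above sums the cardinalities of all intermediate results produced by a (left-deep) join order. A minimal sketch of how optimal orders can be computed under this metric is shown below; the table names and cardinality values are purely illustrative, and the exhaustive search is only meant for small queries (the actual computation can use dynamic programming):

```python
from itertools import permutations

def c_out(order, card):
    """Sum of intermediate result cardinalities for a left-deep join order.
    `card` maps a frozenset of table names to the cardinality of their join."""
    return sum(card[frozenset(order[:k])] for k in range(2, len(order) + 1))

def optimal_order(tables, card):
    """Exhaustively find a join order minimizing C_out (fine for small queries)."""
    return min(permutations(tables), key=lambda o: c_out(o, card))

# Toy example: three tables with made-up join cardinalities.
card = {
    frozenset("AB"): 1000, frozenset("AC"): 10, frozenset("BC"): 500,
    frozenset("ABC"): 50,
}
best = optimal_order("ABC", card)
print(best, c_out(best, card))
```

Joining A with C first (small intermediate result) minimizes $C_{out}$ in this toy setting, mirroring how the metric rewards orders that keep intermediate results small.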
\subsection{Further Analysis}
\begin{table}
\caption{Impact of replacing reinforcement learning by randomization.\label{randomizationTable}}
\begin{tabular}{llll}
\toprule[1pt]
\textbf{Engine} & \textbf{Optimizer} & \textbf{Total Time} & \textbf{Max.\ Time}\\
\midrule[1pt]
Skinner-C & Original & 182 & 9 \\
& Random & 2,268 & 332 \\
\midrule
Skinner-H(PG) & Original & 2,658 & 234 \\
& Random & 3,615 & 250 \\
\midrule
Skinner-H(MDB) & Original & 761 & 114 \\
& Random & $\geq$ 5,743 & $\geq$ 3,600 \\
\bottomrule[1pt]
\end{tabular}
\end{table}
\begin{table}[t]
\caption{Impact of SkinnerDB features.\label{featuresTable}}
\begin{tabular}{p{4.8cm}p{1.25cm}p{1.25cm}}
\toprule[1pt]
\textbf{Enabled Features} & \textbf{Total Time} & \textbf{Max.\ Time} \\
\midrule[1pt]
indexes, parallelization, learning & 135 & 7 \\
parallelization, learning & 162 & 9 \\
learning & 185 & 9 \\
none & 2,268 & 332 \\
\bottomrule[1pt]
\end{tabular}
\end{table}
We experiment with different variants of SkinnerDB. First, we compare learning-based join order selection against randomized selection. Table~\ref{randomizationTable} shows the performance penalty of randomized selection: clearly, join order learning is crucial for performance. In Table~\ref{featuresTable}, we compare the impact of join order learning to the impact of parallelizing pre-processing and of adding hash indices on all join columns (which SkinnerDB exploits if the corresponding table is not used in pre-processing). Clearly, join order learning is by far the most performance-relevant feature of SkinnerDB.
\begin{figure}[t]
\subfigure[MonetDB spends most time executing a few expensive queries.]{
\begin{tikzpicture}
\begin{axis}[xlabel={Queries}, ylabel={Run Time (\%)}, width=4cm, ylabel near ticks, xlabel near ticks, legend entries={ Skinner, MonetDB}, legend pos=south east, legend style={font=\scriptsize}, ymajorgrids]
\addplot table[x index=0, y index=3, col sep=tab] {plots/runTimeSkinner.txt};
\addplot table[x index=0, y index=3, col sep=tab] {plots/runTimeSkew.txt};
\end{axis}
\end{tikzpicture}
}
\subfigure[SkinnerDB realizes high speedup for two expensive queries.]{
\begin{tikzpicture}
\begin{axis}[xlabel={MonetDB Time (ms)}, ylabel={Speedup}, width=4cm, ylabel near ticks, xlabel near ticks, xmode=log, ymode=log, ymajorgrids]
\addplot[scatter, only marks] table[x index=0, y index=2, col sep=tab] {plots/speedupSkew.txt};
\end{axis}
\end{tikzpicture}
}
\caption{Analyzing the source of SkinnerDB's speedups compared to MonetDB.\label{monetVsSkinnerFig}}
\end{figure}
We analyze in more detail where the speedups compared to MonetDB come from (all results refer to single-threaded mode). Figure~\ref{monetVsSkinnerFig} shows on the left-hand side the percentage of execution time spent on the top-$k$ most expensive queries ($x$ axis). MonetDB spends the majority of its execution time on two queries with highly sub-optimal join orders (we reached out to the MonetDB team to make sure that no straightforward optimizations remove the problem). On the right-hand side, we plot the speedups realized by Skinner against MonetDB's query execution time. MonetDB is actually faster for most queries, while SkinnerDB achieves its highest speedups for the two most expensive queries. Since those queries account for a large percentage of total execution time, Skinner-C outperforms MonetDB in single-threaded mode.
\begin{figure}[t]
\subfigure[The growth of the search tree slows down over time.]{
\begin{tikzpicture}
\begin{axis}[xlabel={Time (scaled)}, ylabel={\#Nodes (scaled)}, width=4cm, ylabel near ticks, xlabel near ticks, ymajorgrids]
\addplot coordinates {(0.25,0.65) (0.5, 0.85) (0.75,0.95) (1,1)};
\end{axis}
\end{tikzpicture}
}
\subfigure[SkinnerDB spends most time executing one or two join orders.]{
\begin{tikzpicture}
\begin{axis}[xlabel={Top-k Orders}, ylabel={Selections (\%)}, width=4cm, ylabel near ticks, xlabel near ticks, legend entries={T:500, T:10}, legend pos=south east, legend style={font=\scriptsize}, xtick=data, ymajorgrids]
\addplot coordinates {(1,0.45) (2,0.65) (3,0.72) (4,0.79) (5,0.81)};
\addplot coordinates {(1,0.65) (2,0.82) (3,0.88) (4,0.89) (5,0.90)};
\end{axis}
\end{tikzpicture}
}
\caption{Analysis of convergence of SkinnerDB.\label{convergenceFig}}
\end{figure}
Figure~\ref{convergenceFig} analyzes convergence of Skinner-C to optimal join orders. On the left side, we show that the growth of the search tree slows as execution progresses (a first indication of convergence). On the right side, we show that Skinner-C spends most of its time executing one (with a timeout of $b=10$ per time slice) or two (with a timeout of $b=500$, allowing fewer iterations for convergence) join orders.
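The convergence behaviour analyzed above stems from the UCT-style selection that Skinner uses to pick join orders. A minimal sketch of the underlying UCB1 rule follows; the exploration weight, the reward bookkeeping, and the table names are simplifications for illustration, not the system's actual implementation:

```python
import math

def ucb1_choice(stats, total_visits, weight=1.0):
    """Pick the child (e.g., next table in the join order) maximizing the
    UCB1 score: average reward plus an exploration bonus that shrinks as a
    child is visited more often. Unvisited children are tried first."""
    def score(child):
        visits, reward_sum = stats[child]
        if visits == 0:
            return float("inf")
        return reward_sum / visits + weight * math.sqrt(math.log(total_visits) / visits)
    return max(stats, key=score)

# Toy example: T2 has the best average reward so far, but T3 is unvisited,
# so the exploration term forces T3 to be selected next.
stats = {"T1": (10, 2.0), "T2": (5, 3.0), "T3": (0, 0.0)}
print(ucb1_choice(stats, 15))
```

Once all options have been sampled and reward estimates stabilize, the exploitation term dominates and the same one or two join orders are selected almost exclusively, matching the behaviour seen in Figure~\ref{convergenceFig}.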
\begin{figure}[t]
\subfigure[Search tree size is correlated with query size.\label{uctMemFig}]{
\begin{tikzpicture}
\begin{axis}[xlabel={\# Joined Tables}, ylabel={\#Nodes}, width=4cm, ylabel near ticks, xlabel near ticks, ymajorgrids, xmode=linear, ymode=log]
\addplot[scatter, only marks] table[x index=0, y index=8, col sep=tab] {plots/treeGrowth.txt};
\end{axis}
\end{tikzpicture}
}
\subfigure[Size of join order progress tracker tree.\label{trackerMemFig}]{
\begin{tikzpicture}
\begin{axis}[xlabel={\# Joined Tables}, ylabel={\#Nodes}, width=4cm, ylabel near ticks, xlabel near ticks, ymajorgrids, xmode=linear, ymode=log]
\addplot[scatter, only marks] table[x index=0, y index=11, col sep=tab] {plots/treeGrowth.txt};
\end{axis}
\end{tikzpicture}
}
\subfigure[Size of final result tuple indices.\label{finalMemFig}]{
\begin{tikzpicture}
\begin{axis}[xlabel={\# Joined Tables}, ylabel={\#Size}, width=4cm, ylabel near ticks, xlabel near ticks, ymajorgrids, xmode=linear, ymode=log]
\addplot[scatter, only marks] table[x index=0, y index=12, col sep=tab] {plots/treeGrowth.txt};
\end{axis}
\end{tikzpicture}
}
\subfigure[Combined size of intermediate results, progress, and tree.\label{allMemFig}]{
\begin{tikzpicture}
\begin{axis}[xlabel={\# Joined Tables}, ylabel={All Data (GB)}, width=4cm, ylabel near ticks, xlabel near ticks, ymajorgrids, xmode=linear, ymode=log]
\addplot[scatter, only marks] table[x index=0, y index=9, col sep=tab] {plots/treeGrowth.txt};
\end{axis}
\end{tikzpicture}
}
\caption{Memory consumption of SkinnerDB.\label{memoryFigure}}
\end{figure}
Finally, we analyze memory consumption of Skinner-C. Compared to traditional systems, Skinner-C maintains several additional, auxiliary data structures. First, it keeps the UCT search tree. Second, it maintains a tree associating each join order with the last execution state (one tuple index for each base table). Third, it must keep the tuple index vectors of all join result tuples in a hash table to eliminate duplicates generated by different join orders. On the other hand, Skinner-C does not materialize any intermediate results, as opposed to other systems (due to depth-first multiway join execution). Figure~\ref{memoryFigure} shows the maximal sizes of the aforementioned data structures during query execution as a function of query size. Storing result tuple index vectors (Figure~\ref{finalMemFig}) has dominant space complexity, followed by the progress tracker and the UCT search tree. Overall, memory consumption is not excessive compared to traditional execution engines.
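The progress tracking and duplicate elimination described above can be sketched as follows. This is a simplified model with illustrative names: each join order maps to one tuple index per base table, and completed result tuples are de-duplicated by their index vectors (the actual system shares progress between join orders with common prefixes via a tree):

```python
class ProgressTracker:
    """Maps each join order (tuple of table IDs) to the execution state at
    which the last time slice stopped: one tuple index per base table."""
    def __init__(self, nr_tables):
        self.nr_tables = nr_tables
        self.state = {}  # join order -> list of tuple indices

    def restore(self, order):
        # Unknown orders start from the first tuple of every table.
        return self.state.get(order, [0] * self.nr_tables)

    def save(self, order, indices):
        self.state[order] = list(indices)

# Result tuples produced by different join orders are de-duplicated
# via their base-table index vectors.
results = set()

def add_result(index_vector):
    results.add(tuple(index_vector))

tracker = ProgressTracker(3)
tracker.save(("A", "B", "C"), [4, 1, 7])
add_result([4, 1, 7])
add_result([4, 1, 7])  # duplicate from another time slice is ignored
print(tracker.restore(("A", "B", "C")), len(results))
```

The hash set of index vectors corresponds to the dominant memory consumer in Figure~\ref{finalMemFig}: it grows with the result size, while the tracker grows with the number of explored join orders.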
\begin{comment}
\begin{figure}
\center
\begin{tikzpicture}
\begin{groupplot}[group style={group size=1 by 1, x descriptions at=edge bottom}, width=8.5cm, height=3cm, ybar=0pt, legend entries={Skinner-C,Eddy,Reoptimizer,Postgres,S-G(Postgres),S-H(Postgres),Com-DB,S-G(Com-DB),S-H(Com-DB)}, legend columns=3, legend to name=b2millisLegend, ymode=log, ymajorgrids, ylabel={Time (ms)}, xlabel={\# Tables}, ylabel near ticks]
\nextgroupplot[bar width=3pt, title={UDF Equality Predicates, 250 tuples/table}, xtick=data]
\addChainPlotTime{plots/mb2s250millisAvg}{30000}
\draw [red, ultra thick] (axis cs:\pgfkeysvalueof{/pgfplots/xmin},30000) -- (axis cs:\pgfkeysvalueof{/pgfplots/xmax},30000);
\end{groupplot}
\end{tikzpicture}
\ref{b2millisLegend}
\caption{Trivial Optimization benchmark.\label{easyFig1}}
\end{figure}
Our primary goal is to achieve robust query evaluation for corner cases. Still, we also consider scenarios where sophisticated optimization only adds overheads. Figure~\ref{easyFig1} shows results for the Trivial Optimization benchmark in which all query plans avoiding Cartesian products are equivalent. We are mostly interested in relative execution times obtained for the same execution engine with different optimization strategies. Clearly, optimizers that avoid any exploration perform best in this scenario. For the four baselines sharing the Java-based execution engine (Optimizer, Re-Optimizer, and Eddy), this is the standard optimizer. For the baselines that are based on existing DBMS, the original optimizer works best in each case. The overhead of the adaptive, non-intrusive strategies is more pronounced for the commercial DBMS than for Postgres. We believe that this is due to the fact that our current implementation of the adaptive strategy for the commercial DBMS creates a new JDBC connection in each iteration for the adaptive strategies, this is not the case for Postgres. Still, execution time obtained by the hybrid execution strategies is in all cases within the constant bounds proven in Section~\ref{analysisSec} of the original. While robustness in corner cases clearly costs peak performance in trivial cases, the overheads are bounded.
\end{comment}
\begin{comment}
\begin{figure}
\center
\begin{tikzpicture}
\begin{groupplot}[group style={group size=3 by 1, x descriptions at=edge bottom, y descriptions at=edge left, horizontal sep=5pt}, width=3.75cm, height=4cm, ybar=0pt, legend entries={Skinner-C,Eddy,Optimizer,Reoptimizer}, legend columns=4, legend to name=b2allGraphsLegend, ymode=log, ymajorgrids, ylabel={Time (ms)}, xlabel={\# Tables}, ylabel near ticks, xtick=data, ymax=10000]
\nextgroupplot[bar width=3pt, title={Chain Graph}]
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CHAINmctsS}] {plots/mb2s1000allGraphsMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CHAINEddy}] {plots/mb2s1000allGraphsMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CHAINOpt}] {plots/mb2s1000allGraphsMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CHAINReopt}] {plots/mb2s1000allGraphsMillisAvg};
\nextgroupplot[bar width=3pt, title={Cycle Graph}]
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CYCLEmctsS}] {plots/mb2s1000allGraphsMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CYCLEEddy}] {plots/mb2s1000allGraphsMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CYCLEOpt}] {plots/mb2s1000allGraphsMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CYCLEReopt}] {plots/mb2s1000allGraphsMillisAvg};
\nextgroupplot[bar width=3pt, title={Star Graph}]
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{STARmctsS}] {plots/mb2s1000allGraphsMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{STAREddy}] {plots/mb2s1000allGraphsMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{STAROpt}] {plots/mb2s1000allGraphsMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{STARReopt}] {plots/mb2s1000allGraphsMillisAvg};
\end{groupplot}
\end{tikzpicture}
\ref{b2allGraphsLegend}
\caption{Quantifying learning and sampling overheads for Easy Optimization benchmark.\label{easyFig2}}
\end{figure}
\end{comment}
\begin{comment}
\begin{figure}
\center
\begin{tikzpicture}
\begin{groupplot}[group style={group size=3 by 1, x descriptions at=edge bottom, y descriptions at=edge left, horizontal sep=5pt}, width=3.75cm, height=4cm, ybar=0pt, legend entries={Skinner-C,Eddy,Optimizer,Reoptimizer}, legend columns=4, legend to name=b2allGraphsLegend, ymode=log, ymajorgrids, ylabel={Time (ms)}, xlabel={\# Tables}, ylabel near ticks, xtick=data]
\nextgroupplot[bar width=3pt, title={Chain Graph}]
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CHAINmctsS}] {plots/mb2sizesMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CHAINEddy}] {plots/mb2sizesMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CHAINOpt}] {plots/mb2sizesMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CHAINReopt}] {plots/mb2sizesMillisAvg};
\nextgroupplot[bar width=3pt, title={Cycle Graph}]
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CYCLEmctsS}] {plots/mb2sizesMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CYCLEEddy}] {plots/mb2sizesMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CYCLEOpt}] {plots/mb2sizesMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{CYCLEReopt}] {plots/mb2sizesMillisAvg};
\nextgroupplot[bar width=3pt, title={Star Graph}]
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{STARmctsS}] {plots/mb2sizesMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{STAREddy}] {plots/mb2sizesMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{STAROpt}] {plots/mb2sizesMillisAvg};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{STARReopt}] {plots/mb2sizesMillisAvg};
\end{groupplot}
\end{tikzpicture}
\ref{b2allGraphsLegend}
\caption{sizes}
\end{figure}
\end{comment}
\begin{comment}
\pgfplotsset{
/pgfplots/bar cycle list/.style={/pgfplots/cycle list={%
{draw=blue,fill=blue,mark=none},%
{draw=blue!50,fill=blue!50,mark=none},%
{draw=cyan,fill=cyan,mark=none},%
{draw=blue!70!red,fill=blue!70!red,mark=none},%
{fill=green,mark=none},%
{fill=green!50,mark=none},%
{fill=green!70!red,mark=none},%
{fill=yellow,mark=none},%
{fill=yellow!50,mark=none},%
{fill=yellow!70!red,mark=none},
{fill=black,mark=none}
}
},
}
\def\addPlot#1{
\nextgroupplot[bar width=1pt]
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{#1}] {plots/mb3skinner};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{#1}] {plots/mb3eddy};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{#1}] {plots/mb3opt};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{#1}] {plots/mb3reopt};
}
\def\addTitledPlot#1#2{
\nextgroupplot[bar width=1pt, title={#2}]
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{#1}] {plots/mb3skinner};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{#1}] {plots/mb3eddy};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{#1}] {plots/mb3opt};
\addplot table[x index=1, y index=2, col sep=comma, discard if not={0}{#1}] {plots/mb3reopt};
}
\begin{figure}[t]
\center
\begin{tikzpicture}
\begin{groupplot}[group style={group size=4 by 7, y descriptions at=edge left, x descriptions at=edge bottom, vertical sep=4pt, horizontal sep=5pt}, width=3.2cm, height=2.5cm, ylabel={T (ms)}, ylabel near ticks, xlabel={Test case}, xlabel near ticks, ybar=0pt, legend to name=mb3chainLegend, legend entries={Skinner, Eddy, Optimizer, Reoptimizer}, legend columns=4, ymode=log, ymax=15000, ymin=1, ylabel style={font=\scriptsize}, xlabel style={font=\scriptsize}, tick label style={font=\scriptsize}, ymajorgrids]
\addTitledPlot{CHAINT4S1000m1}{{S; $m=1$}}
\addTitledPlot{CHAINT4S1000m2}{S; $m=n/2$}
\addTitledPlot{CHAINT4S1000000m1}{L; $m=1$}
\addTitledPlot{CHAINT4S1000000m2}{L; $m=n/2$}
\addPlot{CHAINT5S1000m1}
\addPlot{CHAINT5S1000m2}
\addPlot{CHAINT5S1000000m1}
\addPlot{CHAINT5S1000000m2}
\addPlot{CHAINT6S1000m1}
\addPlot{CHAINT6S1000m3}
\addPlot{CHAINT6S1000000m1}
\addPlot{CHAINT6S1000000m3}
\addPlot{CHAINT7S1000m1}
\addPlot{CHAINT7S1000m3}
\addPlot{CHAINT7S1000000m1}
\addPlot{CHAINT7S1000000m3}
\addPlot{CHAINT8S1000m1}
\addPlot{CHAINT8S1000m4}
\addPlot{CHAINT8S1000000m1}
\addPlot{CHAINT8S1000000m4}
\addPlot{CHAINT9S1000m1}
\addPlot{CHAINT9S1000m4}
\addPlot{CHAINT9S1000000m1}
\addPlot{CHAINT9S1000000m4}
\addPlot{CHAINT10S1000m1}
\addPlot{CHAINT10S1000m5}
\addPlot{CHAINT10S1000000m1}
\addPlot{CHAINT10S1000000m5}
\end{groupplot}
\end{tikzpicture}
\ref{mb3chainLegend}
\caption{Correlation Torture benchmark with four (upper row) up to ten tables (bottom row) and with 1,000 (S) or 1,000,000 tuples (L) per table.\label{mb3perTestcase}}
\end{figure}
\end{comment}
\section{Introduction}
It is a general feature of solutions of partial differential equations (PDE) that spikes occur~\cite{inbook:Wei2008}.
Spikes occur in generic solutions of the Einstein field equations (EFE) of general relativity (GR)~\cite{thesis:Lim2004}.
Indeed, when a self-similar solution of the EFE is unstable, spikes can arise near such solutions.
Spikes, originally found in the context of vacuum orthogonally transitive (OT) $G_2$ models~\cite{art:BergerMoncrief1993,art:RendallWeaver2001,art:Lim2008,art:Limetal2009},
describe a dynamic and spatially inhomogeneous gravitational distortion.
Berger and Moncrief first discovered spikes in their numerical simulations~\cite{art:BergerMoncrief1993}.
Rendall and Weaver~\cite{art:RendallWeaver2001} discovered a composition of two transformations that can map spike-free solutions to solutions with spikes.
Using the Rendall-Weaver transformation, Lim discovered an exact OT $G_2$ spike solution~\cite{art:Lim2008}.
Recently, this solution was generalized to the non-OT $G_2$ case by applying Geroch's transformation on a Kasner seed~\cite{art:Lim2015}.
The new solution contains two more parameters than the OT $G_2$ spike solution.
The mechanism of spike formation is simple: the state-space orbits of nearby worldlines approach a saddle point; if
this collection of orbits straddles the stable manifold of the saddle point, then one
of the orbits becomes stuck on the stable manifold and heads
towards the saddle point, while the neighbouring orbits leave it.
This heuristic argument holds as long as spatial derivative terms have negligible
effect. In the case of spikes, the spatial derivative terms do have a significant
effect: the spike point that initially got stuck eventually leaves the saddle point,
and the spike that formed becomes smooth again.
\subsubsection*{Types of spikes}
In~\cite{art:Limetal2009}, further improved numerical
evidence was presented that spikes in the Mixmaster
regime of $G_2$ cosmologies are transient and recurring,
supporting the conjecture that the generalized Mixmaster
behavior is asymptotically non-local where spikes occur.
It is believed that this recurring violation of BKL locality holds in more general spacetimes.
We have previously shown explicitly that there exist ($G_2$) recurring spikes leading to
inhomogeneities and a small residual in the form of matter perturbations~\cite{art:ColeyLim2012}.
We are also interested in incomplete spikes.
Evolving away from the initial singularity, the oscillatory regime eventually ends when $\Omega$ is no longer negligible;
some of the spikes are then in the middle of transitioning, leaving inhomogeneous imprints on the matter.
The residuals from an incomplete spike might, in principle, be large and thus affect structure formation.
The incomplete spikes associated with Kasner saddle points occur generically in the early Universe.
Both the incomplete spikes and the recurring spikes are potentially of physical importance.
Saddle points, related to self-similar solutions such as the Kasner solutions and FLRW models, may also occur at late times, and may
cause spikes/tilt leading to further matter inhomogeneities, albeit non-generically, possibly
resulting in exceptional structures on large scales.
\subsubsection*{BKL dynamics}
Belinskii, Khalatnikov and Lifshitz (BKL) \cite{art:LK63,art:BKL1970,art:BKL1982,art:BK1981} have conjectured that within GR, the approach to the
generic (past) spacelike singularity is vacuum dominated, local, and oscillatory (i.e., Mixmaster).
Studies of $G_2$ and more general cosmological models have produced numerical evidence that the BKL conjecture generally holds except possibly at isolated points
(surfaces in the three-dimensional space) where spiky structures (``spikes") form \cite{art:Bergeretal2001,art:vEUW2002,art:Garfinkle2004num,art:Anderssonetal2005}.
These spikes become ever narrower as the singularity is approached.
The presence of such spikes violates the local part of the BKL conjecture.
BKL considered the EFE in synchronous coordinates, dropping all spatial derivative terms,
which geometrically corresponds to neglecting the Ricci 3-curvature of the spatial surfaces of the
synchronous coordinate system, as well as all matter terms. This procedure leads to a set of
ordinary differential equations (ODEs) that are identical to those obtained in the vacuum case by
imposing spatial homogeneity with an associated simply transitive Abelian symmetry group, which
results in the vacuum Bianchi type I models, whose solution is the well-known Kasner solution.
In the general inhomogeneous context, however, the constants of integration that appear in the Kasner
solution are replaced by spatially dependent functions, leading to a generalized Kasner solution
(even though it is not a solution to the EFE, it is a building block when one attempts to construct generic asymptotic
solutions).
\subsubsection*{The influence of matter}
In their seminal work, BKL \cite{art:BK1981} studied the influence of matter upon the behavior of the general inhomogeneous
solution of the EFE in the vicinity of the
initial singularity.
In a space filled with a perfect fluid with equation of
state $p=(\gamma-1)\rho$, where $1\leq \gamma<2$,
the oscillatory regime as the singular point
is approached asymptotically remains the same as in vacuum.
However, for the ``stiff matter" equation of state, $\gamma=2$, we have that $p_{\phi}=\rho_{\phi}$ and neither the
Kasner epoch nor an oscillatory regime can exist in the neighborhood of the singularity.
Indeed, it has been shown~\cite{art:BK1981} that the influence of the ``stiff matter'' or a massless scalar field with $\rho_{\phi}=p_{\phi}= -\frac{1}{2}g^{ab}{\phi}_{,a} {\phi}_{,b}$
results in the Jacobs relations \cite{book:Coley2003,art:CarrColey1999}:
\begin{equation}
p_{\left( 1\right) }+p_{\left( 2\right) }+p_{\left( 3\right) }=1,\qquad
p_{\left( 1\right) }^{2}+p_{\left( 2\right) }^{2}+p_{\left( 3\right) }^{2}=1-p^{2}\,,\label{4}
\end{equation}
where $p^{2}$ is an arbitrary time-independent function with $p^{2}<1$, for which the energy density is asymptotically of the form $\rho_{\phi} = \frac{1}{2} \dot{\phi}^2= \frac{1}{2}p^2 t^{-2}$
(where $\phi=p\ln t$ in comoving time).
Therefore, unlike the Kasner relations, it is
possible for all three exponents $p_{\left( a\right) }$ to be positive simultaneously. Consequently, even if the contraction of space
starts with a quasi-Kasner epoch (\ref{4}) in which one of the exponents
$p_{\left( a\right) }$ is negative, the power law asymptotic behavior with
all positive exponents results after a finite number of
oscillations and then persists up to the singular point, and in general the collapse is
described by monotonic (but anisotropic) contraction along all spatial directions \cite{art:BK1981}.
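For example, the isotropic choice $p_{\left(1\right)}=p_{\left(2\right)}=p_{\left(3\right)}=\tfrac13$ satisfies the relations (\ref{4}) with $p^{2}=1-3\cdot\tfrac19=\tfrac23<1$ and all three exponents positive; no such configuration is possible for the vacuum Kasner relations, for which one exponent must be non-positive.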
\subsubsection*{Overview}
In this paper we wish to extend previous results to the massless scalar field/stiff perfect fluid case.
In the next section we briefly review massless scalar fields
and the techniques that have been used to generate exact
stiff fluid solutions. As motivation we first
generalize the OT $G_2$ vacuum spike solution to obtain
a new exact OT $G_2$ stiff fluid spike solution, and
analyse OT $G_2$ stiff fluid spike models both heuristically and numerically~(see Section~\ref{sec:2}).
We then discuss non-OT $G_2$ stiff fluid spike solutions~(see Section~\ref{sec:3}).
We first obtain a new class of exact non-OT $G_2$ stiff fluid spike solutions.
This is achieved, generalizing~\cite{art:Lim2015}, by
applying the stiff fluid version of Geroch's transformation on a Jacobs seed.
The new solution contains two more parameters than the
OT $G_2$ stiff fluid spike solution described earlier.
We discuss these solutions.
We subsequently discuss the non-OT $G_2$ stiff fluid models in full generality~(see Section~\ref{sec:4}).
We extend the analysis to include a second perfect fluid.
We derive the evolution equations using different normalizations and gauge choices.
In particular, the discovery of the exact non-OT $G_2$ stiff fluid spike solution motivates the use of the fluid-comoving gauge.
We briefly discuss some of the qualitative properties of these models (primarily to illustrate any new features of the models)
and discuss their numerical analysis.
In the final section we discuss the physical consequences of a stiff fluid or massless scalar field in the general relativistic generation of spikes.
\section{Massless scalar field}
\label{sec:2}
Scalar fields are ubiquitous in modern theories of the early universe.
In the approach to the singularity it is known that the scalar field is dynamically massless \cite{book:Coley2003,art:CarrColey1999}.
Including massless scalar fields in early universe cosmology is therefore important.
The field equations of a minimally coupled scalar field with timelike gradient
are formally the same as those of an irrotational stiff fluid.
We shall concentrate on showing, within a special class of inhomogeneous models, how spikes generate matter overdensities
in a radiation fluid in general relativity in the initial
regime of general massless scalar field cosmological models.
In the initial oscillatory vacuum regime, we recall that spikes recur.
We also wish to study the residual imprints of the spikes
on matter inhomogeneities in the early universe in scalar field models: as the spike inhomogeneities form,
matter undergoes gravitational instability and begins to collapse to form overdensities.
We shall {\em{normalize}} using a $D$-normalization; when utilizing the exact solutions obtained from a
Geroch transformation, $D$ is chosen to be the scale-dependent determinant of the metric (see Section 3.1).
In the OT case under consideration below, $D$-normalization is equivalent to $\beta$-normalization. Hence using
$\beta$-normalization implies that the normalized stiff fluid density $\Omega_\phi$ ($\sim \rho_\phi \beta^{-2}$) is constant
(and we can then omit its trivial evolution equation).
\subsection{OT $G_2$ spike imprint analysis}
The exact OT $G_2$ stiff fluid spike solution (which can be used as the zeroth order solution in the linearization)
obtained as a simple generalization of \cite{art:Lim2008} is:
\begin{equation}
\label{spike}
(\Sigma_-,N_\times,\Sigma_\times,N_-) =
\left(-c \Sigma_-{}_\text{Taub} -\frac{1}{\sqrt{3}},
sN_-{}_\text{Taub},
c N_-{}_\text{Taub},
-s \Sigma_-{}_\text{Taub}
\right),
\end{equation}
where
\begin{equation}
\label{csf}
c = \frac{f^2-1}{f^2+1},\quad
s = \frac{2f}{f^2+1},\quad
f =w e^\tau \text{sech}(w\tau) x.
\end{equation}
\begin{equation}
\Sigma_-{}_\text{Taub}=\frac{w}{\sqrt{3}}\text{tanh}(w\tau)-\frac{1}{\sqrt{3}},\quad
N_-{}_\text{Taub}=\frac{w}{\sqrt{3}}\text{sech}(w\tau)
\end{equation}
\begin{equation}
\Omega_\phi = \text{const.},\quad v_\phi = 0.
\end{equation}
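Note that the functions in (\ref{csf}) satisfy
\begin{equation}
c^2+s^2=\frac{(f^2-1)^2+(2f)^2}{(f^2+1)^2}=1,
\end{equation}
so the spike solution (\ref{spike}) mixes the Taub seed variables through what is effectively a rotation whose angle depends on position and time only through $f$.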
The variables above are $\beta$-normalized, where $\beta$ is the area expansion
rate of the $(y,z)$ plane and is related to the Hubble scalar by $H = \beta(1-\Sigma_+)$.
$\Sigma_+$, $\Sigma_-$ and $\Sigma_\times$ are components of the $\beta$-normalized rate of shear;
$N_\times$ and $N_-$ are components of the $\beta$-normalized spatial
curvature ($\Sigma_+$, $q$, etc.\ are given in \cite{thesis:Lim2004}).
$\Omega_\phi$ is the $\beta$-normalized stiff fluid density; $v_\phi$ is the
relative stiff fluid velocity (tilt) in the $x$-direction.
More importantly, we note that (here we shall use the sign convention $\beta>0$, as opposed to~\cite{art:ColeyLim2012})
\begin{equation}
\beta = \frac12 \text{sech}(w\tau) e^{[\frac{w^2+7+3\Omega_\phi}{4}]\tau - \frac14 \lambda_2} (f^2+1)^{-\frac12}.
\label{beta}
\end{equation}
The full OT $G_2$ evolution equations, which include both the stiff fluid and another perfect fluid $(\Omega,v)$, are presented in the Appendix (see Appendix D of \cite{thesis:Lim2004}
with $A=0$), from which we can obtain the linearized evolution equations. Here $\gamma$ is the equation of state parameter, with $\gamma=\frac43$ describing the radiation fluid.
The evolution equations for $\beta$ and $\Omega$ are:
\begin{eqnarray}
\partial_\tau \ln\beta &= &\frac34 [1+\Sigma_-^2+\Sigma_\times^2+N_\times^2+N_-^2 + (\gamma -1) \Omega + \Omega_\phi] \label{beta1}\\
\partial_\tau \ln\Omega & = & \frac12 \gamma v E_1{}^1 \partial_x \ln\Omega + \frac12
\gamma E_1{}^1 \partial_x v \nonumber \\
&& - \frac34 (2 - \gamma) [1 + \Sigma_-^2 + \Sigma_\times^2 + N_\times^2 + N_-^2 - \Omega + \Omega_\phi].\label{omega1}
\end{eqnarray}
In the above equations we see that the constant parts of the evolution equations for
$\beta$ and $\Omega$ are ``renormalized'' by the factor $(1+\Omega_\phi)$. Therefore, the numerics will show evidence of spikes and of their influence on
matter perturbations, and the quasi-analytical results will be similar to those in previous papers
on the vacuum case \cite{art:Lim2008,art:ColeyLim2012}, up to this renormalization of the constant
parameters.
\subsubsection*{Heuristics}
In the vacuum case ${\beta} \equiv {\beta}^\text{vac}$ and ${\Omega} \equiv {\Omega}^\text{vac}$
(given in terms of ${\beta}^\text{vac}$) are given by equations (7) and (11) in \cite{art:ColeyLim2012}, respectively.
The stiff fluid equivalent for ${\beta}$ is given by equation~(\ref{beta}), where
\begin{equation}
\beta=\beta^\text{vac}e^{\frac{3}{4}\Omega_\phi \tau},
\end{equation}
and the evolution equations for $\beta$, $\Omega$ are given by equations (\ref{beta1}) -- (\ref{omega1}) above.
Treating the (radiation $\gamma = \frac{4}{3}$) $\Omega$ field as a test fluid (with negligible $\Omega$ and $v$), we obtain
\begin{equation}
\partial_\tau \ln \Omega = \partial_\tau \ln \beta^{-(2-\gamma)}.
\end{equation}
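This relation follows directly from (\ref{beta1}) and (\ref{omega1}): neglecting $\Omega$ and $v$,
\begin{equation}
\partial_\tau \ln\Omega = -\tfrac34 (2-\gamma)\left[1+\Sigma_-^2+\Sigma_\times^2+N_\times^2+N_-^2+\Omega_\phi\right]
= -(2-\gamma)\,\partial_\tau \ln\beta .
\end{equation}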
Again, remarkably, we can integrate this exactly to obtain
\begin{equation}
\Omega=\Omega^\text{vac}e^{-\frac{3}{4} (2 - \gamma) \Omega_\phi \tau}.
\end{equation}
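The integration step can be made explicit by substituting the relation between $\beta$ and $\beta^\text{vac}$ above:

```latex
% Integrate \partial_\tau\ln\Omega = \partial_\tau\ln\beta^{-(2-\gamma)} and use
% \beta = \beta^\text{vac} e^{\frac34\Omega_\phi\tau}:
\begin{equation*}
\Omega \propto \beta^{-(2-\gamma)}
= (\beta^\text{vac})^{-(2-\gamma)}\,\text{e}^{-\frac34(2-\gamma)\Omega_\phi\tau}
\propto \Omega^\text{vac}\,\text{e}^{-\frac34(2-\gamma)\Omega_\phi\tau},
\end{equation*}
```

since the vacuum test fluid satisfies the same equation with $\beta^\text{vac}$, giving $\Omega^\text{vac} \propto (\beta^\text{vac})^{-(2-\gamma)}$; fixing the proportionality constant so that $\Omega$ reduces to $\Omega^\text{vac}$ when $\Omega_\phi=0$ yields the result above.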
We recall that $\tau$ increases towards the singularity, so that
$\tau$ decreases to the future. Therefore, $\Omega$ is amplified to the future (relative to $\Omega^\text{vac}$).
We recall from~\cite{art:ColeyLim2012} that the cumulative effect of a complete spike transition
on the spatial inhomogeneity of $\beta^\text{vac}$ (and hence $\Omega^\text{vac}$) is zero.
In the stiff fluid case, $\beta$ and $\Omega$ above only differ from $\beta^\text{vac}$ and $\Omega^\text{vac}$ by a purely time-dependent factor,
so that a complete spike transition has no permanent inhomogeneous imprint on the test matter in the stiff fluid case either.
We next consider the linearized equations (which can be obtained from the Appendix), where
the zeroth order terms in the linearized equations are satisfied
identically by the exact spike background solution. Assuming a small $\Omega_0$ (and neglecting tilt $v$), we obtain (as above)
\begin{equation}
\Omega_0=\Omega^\text{vac}e^{-\frac{3}{4} (2 - \gamma) \Omega_\phi \tau}.
\end{equation}
For the larger $\Omega$ case, with $\Omega_0 \neq 0$, and writing $\Omega = \Omega_0 (1 + \Omega_1)$
where $\Omega_1$ is treated as a perturbation, we obtain \cite{art:ColeyLim2012}
\begin{equation}
\Omega_1 = \hat{\Omega}_1(x) - \frac{4}{3 \gamma(2-\gamma)} \bigg[ \int \Omega_0 d \tau \bigg]^{-1}
\end{equation}
or
\begin{equation}
\Omega_1 - \hat{\Omega}_1(x) \sim \Omega^\text{vac}_1 e^{\frac{3}{4} (2 - \gamma) \Omega_\phi \tau}.
\end{equation}
Therefore, $\Omega_1$ is damped to the future (relative to $\Omega^\text{vac}_1$ as $\tau$ decreases). However, the overall radiation energy density is amplified to the future.
\begin{figure}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{fig910_cropped.png}}
\caption{Plots of $\Omega$, $v$ (for radiation fluid), $\Omega_\phi$, $v_\phi$ (for stiff fluid), the ratio $\Omega/\Omega_{X=-1.1}$ and the expression $-3N_\times \Sigma_- + 3 N_- \Sigma_\times$ that drives $v$.
The plots are qualitatively the same as Figures 9 and 10 in~\cite{art:LimColey2014},
showing that stiff fluid and vacuum backgrounds are qualitatively the same regarding the effect of the spike on the radiation fluid.
$\Omega_\phi$ tends to a constant, while $v_\phi$ tends to zero.
See Equations~(\ref{IC910_1})--(\ref{IC910_4}) for the initial condition.}
\label{fig:fig910_cropped}
\end{center}
\end{figure}
\subsubsection*{Numerics}
The system of equations in~\cite{art:LimColey2014} has been extended to include a tilted
stiff fluid (with stiff fluid variables $\Omega_\phi$ and $v_\phi$) in the Appendix.
We expect the effect of spikes on the radiation fluid to be qualitatively the same
in a vacuum or stiff fluid background.
We illustrate this by running a numerical simulation using an initial condition very similar to the one in Section 6 of~\cite{art:LimColey2014},
by specifying the initial condition $\Omega_\phi = 10^{-2}$, $v_\phi=0$.
The full initial condition (Equations (30)--(32) in~\cite{art:LimColey2014}, but with $v=-\tanh(X/100)$) is
\begin{gather}
\label{IC910_1}
(\Sigma_-,\ N_\times,\ \Sigma_\times,\ N_-) = (-c \Sigma_-{}_\text{Taub}-\tfrac{1}{\sqrt{3}},\ s N_-{}_\text{Taub},\ c N_-{}_\text{Taub},\ -s \Sigma_-{}_\text{Taub}),
\\
\Omega = 10^{-5},\quad v = -\tanh(X/100),\quad \Omega_\phi = 10^{-2},\quad v_\phi=0,\quad E_1{}^1 = 2,
\end{gather}
where
\begin{equation}
\Sigma_-{}_\text{Taub} = \tfrac{1}{\sqrt{3}}[\tanh(w(T-T_0))-1],\ N_-{}_\text{Taub} = \tfrac{w}{\sqrt{3}}\text{sech}(w(T-T_0)),
\end{equation}
\begin{equation}
\label{IC910_4}
c = \frac{f^2-1}{f^2+1},\ s = \frac{2f}{f^2+1},\ f = w \text{sech}(w(T-T_0)) (X-X_0).
\end{equation}
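For illustration, the initial data above can be assembled numerically as follows (a hedged sketch: the spatial grid and the value of $X_0$ are illustrative choices, and only $\Sigma_-$ and $N_\times$ are shown). Note that $c^2+s^2=1$ identically, so $(c,s)$ parametrize a rotation:

```python
# Hedged sketch of the spike initial data: build the Taub quantities and the
# rotation parameters (c, s) at T = 0 with w = 1.5, T0 = -10. The grid and
# X0 = 0 are illustrative assumptions, not values fixed by the text.
import numpy as np

w, T, T0, X0 = 1.5, 0.0, -10.0, 0.0
X = np.linspace(-2.0, 2.0, 401)            # illustrative spatial grid

Sigma_Taub = (np.tanh(w * (T - T0)) - 1.0) / np.sqrt(3.0)
N_Taub = w / (np.sqrt(3.0) * np.cosh(w * (T - T0)))   # sech = 1/cosh
f = w / np.cosh(w * (T - T0)) * (X - X0)
c = (f**2 - 1.0) / (f**2 + 1.0)
s = 2.0 * f / (f**2 + 1.0)

Sigma_minus = -c * Sigma_Taub - 1.0 / np.sqrt(3.0)
N_cross = s * N_Taub

# (c, s) lie on the unit circle: c^2 + s^2 = ((f^2-1)^2 + 4 f^2)/(f^2+1)^2 = 1
assert np.all(np.abs(c**2 + s**2 - 1.0) < 1e-12)
```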
We use the same parameter values as before: $w=1.5$, $T=0$, $T_0 = -10$.
The results are shown in Figure~\ref{fig:fig910_cropped}. The plots are qualitatively the same as Figures 9 and 10 in~\cite{art:LimColey2014}.
$\Omega_\phi$ quickly tends to a constant value, and $v_\phi$ quickly tends to zero.
\section{New non-OT stiff fluid spike solutions}
\label{sec:3}
We next present a new two-parameter family of non-OT $G_2$ stiff fluid spike solutions,
generalising the vacuum solutions of~\cite{art:Lim2015}. This is achieved by
applying the stiff fluid version of Geroch's transformation~\cite{art:Geroch1971,art:Geroch1972,art:Stephani1988,art:GarfinkleGlassKrisch1997}
on a Jacobs seed.
The new solution contains two more parameters than the OT $G_2$ stiff fluid spike solution described earlier.
Let
$g_{ab} $
be a solution of the stiff perfect fluid EFEs with energy density
$ \rho_\phi $
and pressure
$ p_\phi $, with stiff equation of state $ p_\phi = \rho_\phi $ and fluid
four velocity $ u^a$.
Assume that
$ g_{ab} $
has a Killing vector field (KVF)
$ \xi ^a $.
Define the norm
$ \lambda $
and twist
$ \omega _a $
of
$ \xi ^a $
by
$ \lambda = {\xi ^a} {\xi _a} $
and
\begin{equation}
{\omega _a} = {\epsilon _{abcd}} {\xi ^b} {\nabla ^c} {\xi ^d}.
\end{equation}
We assume that the KVF is orthogonal to the fluid four-velocity and
thus
\begin{equation}
{R_{ab}} {\xi ^b} = 0.
\end{equation}
It then follows that there is a scalar
$ \omega $
such that
$ {\omega _a} = {\nabla _a} \omega $
and that there are forms
$ \alpha _a $
and
$ \beta _a $
satisfying
\begin{gather}
\nabla_a \omega =\varepsilon_{abcd}\xi ^b\nabla^c \xi^d,
\\
\nabla_{[a}\alpha_{b]} =\frac{1}{2}\varepsilon_{abcd} \nabla^c \xi^d,\quad
\xi^a \alpha_a =\omega,
\\
\nabla_{[a}\beta_{b]}=2\lambda \nabla_a \xi_b + \omega \varepsilon_{abcd} \nabla^c \xi^d,
\quad
\xi^a \beta_a =\lambda^2 +\omega^2 -1.
\end{gather}
We solve these for $\omega$, $\alpha_a$ and $\beta_a$.
Next, we define $F$ (or $\tilde{\lambda}$) and $\eta_a$ as
\begin{align}
F = \frac{\lambda}{\tilde{\lambda}}&= (\cos\theta-\omega\sin\theta)^2 +\lambda^2 \sin^2\theta,
\\
\eta_a &=\tilde{\lambda}^{-1} \xi_a +2 \alpha_a \cos\theta\sin\theta-\beta_a \sin^2\theta,
\end{align}
for any constant $\theta$.
Then the new metric is given by
\begin{equation}
\tilde{g}_{a b}=\frac{\lambda}{\tilde{\lambda}}(g_{a b}-\lambda^{-1} \xi_a\xi_b)+\tilde{\lambda} \eta_a \eta_b.
\end{equation}
This new metric is also a solution of the stiff perfect fluid EFEs
with the same KVF. Each non-zero constant value of $\theta$ gives a generally distinct solution
($\theta=0$ gives the trivial transformation $\tilde{g}_{ab} = g_{ab}$),
but varying $\theta$ amounts essentially to adding a constant to $\omega$. So, without loss of generality (keeping an additive constant in $\omega$), for $\omega \neq 0$
we can take
$\theta = \pi/2$.
In general, for
$ \omega \neq 0 $, the (non-tilted) stiff perfect fluid quantities transform as follows:
\begin{equation}
\tilde{\rho}_\phi = \rho_\phi / F,\quad
\tilde{u}_a = \sqrt{F} u_a,
\end{equation}
and the determinant of the metric $g$ transforms as
\begin{equation}
\tilde{g} = F^2 g.
\end{equation}
The most relevant application of the stiff fluid version of Geroch's transformation is to generate the non-OT $G_2$ stiff fluid spike solution.
As in~\cite{art:Lim2015}, we express a metric $g_{ab}$ using the Iwasawa frame~\cite{art:HeinzleUgglaRohr2009}, as follows.
The metric components in terms of $b$'s and $n$'s are given by
\begin{align}
g_{00} &= -N^2
\\
g_{11} &= \text{e}^{-2b_1},\quad g_{12} = \text{e}^{-2b_1} n_1,\quad g_{13} = \text{e}^{-2b_1} n_2
\\
g_{22} &= \text{e}^{-2b_2} + \text{e}^{-2b_1} n_1^2,\quad g_{23} = \text{e}^{-2b_1} n_1 n_2 + \text{e}^{-2b_2} n_3
\\
g_{33} &= \text{e}^{-2b_3} + \text{e}^{-2b_1} n_2^2 + \text{e}^{-2b_2} n_3^2.
\end{align}
The seed is the Jacobs solution (stiff fluid Bianchi type I solution),
parametrized very similarly to the vacuum case (Kasner solution) in~\cite{art:Lim2015}:
\begin{equation}
b_1 = \frac14(w^2-1+4\rho_0)\tau,\quad
b_2 = \frac12(w+1)\tau,\quad
b_3 = -\frac12(w-1)\tau,\quad
N^2 = \text{e}^{-2b_1-2b_2-2b_3} = \text{e}^{-\frac12(w^2+3+4\rho_0)\tau},
\end{equation}
and $n_1=n_2=n_3=0$.
The stiff fluid density $\rho_\phi$ is simply
\begin{equation}
\rho_\phi = \frac{\rho_0}{V^2},
\end{equation}
where $V$ is the spatial volume, given by $V = \text{e}^{-b_1-b_2-b_3}$.
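For the Jacobs seed above this evaluates explicitly, using $b_1+b_2+b_3 = \tfrac14(w^2+3+4\rho_0)\tau$:

```latex
\begin{equation*}
V = \text{e}^{-\frac14(w^2+3+4\rho_0)\tau},\qquad
\rho_\phi = \frac{\rho_0}{V^2} = \rho_0\,\text{e}^{\frac12(w^2+3+4\rho_0)\tau},
\end{equation*}
```

which grows toward the singularity ($\tau\to\infty$), as expected for a stiff fluid.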
[We note that the stable triangular region in the Hubble-normalized $(\Sigma_+,\Sigma_-)$
plane corresponds to $0 < w < 1$, $\rho_0 > \frac14(3-w)(w+1)$.]
Following the same arguments as in~\cite{art:Lim2015}, we make a coordinate change
and end up with a rotated Jacobs solution:
\begin{align}
\label{Jacobs_rotated}
N^2 &= \text{e}^{-\frac12(w^2+3+4\rho_0)\tau}
\\
\text{e}^{-2b_1} &= \text{e}^{(w-1)\tau} + n_{20}^2 \text{e}^{-\frac12(w^2-1+4\rho_0)\tau} + n_{30}^2 \text{e}^{-(w+1)\tau}
\\
\text{e}^{-2b_2} &= \frac{\mathcal{A}^2}{\text{e}^{-2b_1}}
\\
\text{e}^{-2b_3} &= \text{e}^{-\frac12(w^2+3+4\rho_0)\tau} \mathcal{A}^{-2}
\\
n_1 &= \frac{n_{30} \text{e}^{-(w-1)\tau} + n_{10} n_{20} \text{e}^{-\frac12(w^2-1+4\rho_0)\tau}}{\text{e}^{-2b_1}}
\\
n_2 &= \frac{n_{20} \text{e}^{-\frac12(w^2-1+4\rho_0)\tau}}{\text{e}^{-2b_1}}
\\
n_3 &= \text{e}^{-\frac12(w^2-1+4\rho_0)\tau}\mathcal{A}^{-2}\left[n_{30}(n_{10}n_{30}-n_{20})\text{e}^{-(w+1)\tau}+n_{10}\text{e}^{(w-1)\tau} \right]
\\
\label{Jacobs_rotated_n3}
&= \mathcal{A}^{-2}\left[n_{30}(n_{10}n_{30}-n_{20}) \text{e}^{-\frac12[(w+1)^2+4\rho_0]\tau} + n_{10} \text{e}^{-\frac12[(w-1)^2+4\rho_0]\tau} \right],
\intertext{where}
\label{Jacobs_area}
\mathcal{A}^2 &= \text{e}^{-2\tau} + n_{10}^2 \text{e}^{-\frac12[(w-1)^2+4\rho_0]\tau} + (n_{10} n_{30} - n_{20})^2 \text{e}^{-\frac12[(w+1)^2+4\rho_0]\tau}.
\end{align}
We now apply Geroch's transformation to the seed solution (\ref{Jacobs_rotated})--(\ref{Jacobs_rotated_n3}), using the KVF $\partial_x$.
We obtain
\begin{equation}
\lambda = \text{e}^{-2b_1} = \text{e}^{(w-1)\tau} + n_{20}^2 \text{e}^{-\frac12(w^2-1+4\rho_0)\tau} + n_{30}^2 \text{e}^{-(w+1)\tau},\quad
\omega = 2w n_{30} z - K y + \omega_0,
\end{equation}
where the constant $K$ is given by
\begin{equation}
K = \frac12 [(w-1)(w+3)+4\rho_0] n_{20} - 2 w n_{10} n_{30},
\end{equation}
and $\omega$ is determined up to an additive constant $\omega_0$.
We could absorb $\omega_0$ by a translation in the $z$ direction if $w n_{30} \neq 0$, but we shall keep $\omega_0$ for the case $w n_{30} = 0$.
Without loss of generality, we choose $\theta=\frac{\pi}{2}$ in Geroch's transformation, so we do not need $\alpha_a$.
For $\beta_a$ we only need a particular solution.
We assume that $\beta_a$ has a zero $\tau$-component. Its other components are
\begin{align}
\beta_1 &= \omega^2 + \lambda^2 -1
\\
\beta_2 &= n_{10} n_{20}^3 \text{e}^{-(w^2-1+4\rho_0)\tau} + \left[ 2n_{10} n_{20} n_{30}^2\frac{w^2-1+4\rho_0}{(w+1)^2+4\rho_0} + 4n_{20}^2 n_{30}\frac{w+1}{(w+1)^2+4\rho_0} \right] \text{e}^{-\frac12[(w+1)^2+4\rho_0] \tau}
\notag\\
&\quad
+ 2 n_{10} n_{20}\frac{w^2-1+4\rho_0}{(w-1)^2+4\rho_0} \text{e}^{-\frac12[(w-1)^2+4\rho_0]\tau} + (w+1) n_{30} \text{e}^{-2\tau} + n_{30}^3 \text{e}^{-2(w+1)\tau} + F_2(y,z)
\\
\beta_3 &= n_{20}^3 \text{e}^{-(w^2-1+4\rho_0)\tau} + 2 n_{20} n_{30}^2 \frac{w^2-1+4\rho_0}{(w+1)^2+4\rho_0} \text{e}^{-\frac12[(w+1)^2+4\rho_0] \tau}
\notag\\
&\quad
+ 2 n_{20} \frac{w^2-1+4\rho_0}{(w-1)^2+4\rho_0} \text{e}^{-\frac12[(w-1)^2+4\rho_0]\tau} + F_3(y,z)
\end{align}
where $F_2(y,z)$ and $F_3(y,z)$ satisfy the constraint equation
\begin{equation}
- \partial_z F_2 + \partial_y F_3 + 2(w-1)\omega = 0.
\end{equation}
For our purpose, we want $F_3$ to be as simple as possible, so we choose
\begin{equation}
F_3 = 0,\quad F_2 = \int 2(w-1)\omega \text{d} z = 2w(w-1) n_{30} z^2 - 2 (w-1) K y z +2(w-1)\omega_0 z.
\end{equation}
Geroch's transformation now yields the desired metric $\tilde{g}_{ab}$, given by:
\begin{align}
\label{nonOT_spike}
\tilde{N}^2 &= N^2 (\omega^2+\lambda^2)
\\
\text{e}^{-2\tilde{b}_1} &= \frac{\text{e}^{-2b_1}}{\omega^2+\lambda^2}
\\
\text{e}^{-2\tilde{b}_2} &= \text{e}^{-2b_2} (\omega^2+\lambda^2)
\\
\label{nonOT_spike_b3}
\text{e}^{-2\tilde{b}_3} &= \text{e}^{-2b_3} (\omega^2+\lambda^2)
\\
\tilde{n}_1 &= -2w(w-1) n_{30} z^2 + 2 (w-1) K y z - 2(w-1)\omega_0 z
\notag\\
&\quad
+ \frac{\omega^2}{\lambda}(n_{30} \text{e}^{-(w+1)\tau} + n_{10} n_{20} \text{e}^{-\frac12(w^2-1+4\rho_0)\tau})
\notag\\
&\quad -\Bigg[ n_{30} w \text{e}^{-2\tau} + n_{10} n_{20} \frac{(w+3)(w-1)+4\rho_0}{(w-1)^2+4\rho_0} \text{e}^{-\frac12[(w-1)^2+4\rho_0]\tau}
\notag\\
&\quad
+ n_{20} n_{30} (n_{10} n_{30} - n_{20}) \frac{(w-3)(w+1)+4\rho_0}{(w+1)^2+4\rho_0} \text{e}^{-\frac12[(w+1)^2+4\rho_0]\tau} \Bigg]
\\
\tilde{n}_2 &= n_{20} \Bigg[ \frac{\omega^2}{\lambda} \text{e}^{-\frac12(w^2-1+4\rho_0)\tau} - \frac{(w+3)(w-1)+4\rho_0}{(w-1)^2+4\rho_0} \text{e}^{-\frac12[(w-1)^2+4\rho_0]\tau}
\notag\\
&\quad
- n_{30}^2 \frac{(w-3)(w+1)+4\rho_0}{(w+1)^2+4\rho_0} \text{e}^{-\frac12[(w+1)^2+4\rho_0]\tau} \Bigg]
\\
\tilde{n}_3 &= \mathcal{A}^{-2} \left[ n_{10} \text{e}^{-\frac12[(w-1)^2+4\rho_0]\tau} + n_{30} (n_{10} n_{30} - n_{20}) \text{e}^{-\frac12[(w+1)^2+4\rho_0]\tau} \right],
\label{nonOT_spike_n3}
\end{align}
and $\mathcal{A}$, given by (\ref{Jacobs_area}), is the area density~\cite{art:vEUW2002} of the $G_2$ orbits.
The matter density for the stiff spike is
\begin{equation}
\tilde{\rho}_\phi = \frac{\rho_0}{\tilde{N}^2} = \frac{\rho_0 \text{e}^{\frac12(w^2+3+4\rho_0)\tau} }{ (2w n_{30} z - K y + \omega_0)^2 + (\text{e}^{(w-1)\tau} + n_{20}^2 \text{e}^{-\frac12(w^2-1+4\rho_0)\tau} + n_{30}^2 \text{e}^{-(w+1)\tau})^2}.
\end{equation}
We shall focus on the case where $K=0$, or equivalently, where
\begin{equation}
\label{n20choice}
n_{20} = \frac{4w}{(w+3)(w-1)+4\rho_0} n_{10} n_{30},
\end{equation}
which turns off the $R_2$ frame transition (shown to be asymptotically suppressed in~\cite{art:HeinzleUgglaRohr2009}) and eliminates the $y$-dependence.
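As a quick consistency check, the choice~(\ref{n20choice}) can be verified numerically to make $K$ vanish (a hedged sketch; the parameter values are illustrative):

```python
# Hedged sketch: verify that the n20 of Eq. (n20choice) makes
#   K = (1/2)[(w-1)(w+3) + 4*rho0]*n20 - 2*w*n10*n30
# vanish. Parameter values below are illustrative only.
def K(w, rho0, n10, n20, n30):
    return 0.5 * ((w - 1.0) * (w + 3.0) + 4.0 * rho0) * n20 - 2.0 * w * n10 * n30

def n20_choice(w, rho0, n10, n30):
    return 4.0 * w * n10 * n30 / ((w + 3.0) * (w - 1.0) + 4.0 * rho0)

w, rho0, n10, n30 = 1.0 / 3.0, 0.1, 0.001, 1.0
n20 = n20_choice(w, rho0, n10, n30)
assert abs(K(w, rho0, n10, n20, n30)) < 1e-15   # K = 0 to rounding error
```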
The dynamics of the ($K=0$) stiff fluid spike solution is qualitatively different from that of the vacuum spike solution, in that the stiff fluid spike solution can end up with a permanent spike.
To produce a permanent spike, $\lambda$ must tend to zero as $\tau$ tends to infinity. This means $w$ and $\rho_0$ must satisfy
\begin{equation}
1 - 4 \rho_0 < w^2 < 1,
\end{equation}
assuming that $w n_{10} n_{30} \neq 0$.
The vacuum spike solution cannot meet this condition because $\rho_0=0$.
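A minimal numerical sketch of this condition (the parameter values are illustrative and are not tied to the figures):

```python
# Hedged sketch: the permanent-spike condition 1 - 4*rho0 < w**2 < 1
# for a few illustrative (w, rho0) pairs.
def permanent_spike(w, rho0):
    return 1.0 - 4.0 * rho0 < w * w < 1.0

assert permanent_spike(0.5, 0.3)          # 1 - 1.2 = -0.2 < 0.25 < 1
assert not permanent_spike(0.5, 0.0)      # vacuum (rho0 = 0): never satisfied
assert not permanent_spike(1.0 / 3.0, 0.00075)  # rho0 too small: fails
```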
For the exact stiff fluid spike solution, although the $\beta$-normalized $\Omega_\phi$ is independent of $z$, the Hubble-normalized $\Omega_\phi$ and physical $\rho_\phi$ do depend on $z$.
Figure~\ref{fig:rho_stiff_cropped} shows the spatial dependence of the Hubble-normalized $\Omega_\phi$ and
$\ln(\rho_\phi)$ (plotted in coordinate $z$ without zooming).
The inhomogeneity in the Hubble-normalized $\Omega_\phi$ is more pronounced. During the spike transition, $\rho_\phi$ is larger at the spike point.
\subsubsection*{Normalization and gauge}
We can normalize our variables with appropriate powers of a scale-dependent quantity
$D$. From the exact solutions obtained via a Geroch transformation, and the transformation rules for the stiff fluid
quantities above in terms of $F$, we see that an appropriate invariant choice for $D$ is the determinant of the metric,
which is related to the spatial volume $V$. This implies that the $V$-normalized stiff fluid density $\Omega_\phi$ ($\sim \rho_\phi V^2$) is constant
(and we can then omit its trivial evolution equation). In the OT case
for a comoving stiff fluid seed solution (or vacuum), we have that
${{\tilde \beta}^2} = \beta ^2 F^{-1}$, and so in this case $V$-normalization is equivalent to $\beta$-normalization.
In the non-OT case, unfortunately $V$-normalization is not equivalent to $\beta$-normalization, and $V$-normalization fails to present
self-similar solutions as equilibrium points in the $V$-normalized state space. Therefore we abandon $V$-normalization and use either $\beta$-normalization or Hubble-normalization.
\begin{figure}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{rho_stiff_cropped.png}}
\caption{Plots of Hubble-normalized $\Omega_\phi$ and $\ln(\rho_\phi)$ for the exact stiff spike solution, with $w=\frac13$, $n_{10}=0.001$, $n_{30}=1$, $\rho_0 = 0.1$.
In the second row of figures, the solid line is for the spike worldline $z=0$ and the dashed line is for $z=1$.
The plots show the spike structure in the density of the stiff fluid.}
\label{fig:rho_stiff_cropped}
\end{center}
\end{figure}
We shall use the stiff fluid comoving gauge.
In previous work (e.g., see the Appendix) we have used the area time gauge.
Although the exact solution above satisfies both gauges, it is better to use the
stiff fluid comoving gauge because it makes matching with numerical simulations easier;
i.e., it is possible to set $N = -V$ (which matches the exact solution) numerically in the stiff fluid
comoving gauge, whereas in the area time gauge it is only possible to
set $N=-1/\beta$, which does not match the exact solution, since $V$ is not equivalent to $\beta$ in the non-OT case.
The evolution equations in fluid-comoving gauge (and with Hubble-normalization and choosing the lapse
to be the volume) are given in the next section.
\section{Non-OT $G_2$ stiff fluid evolution equations}
\label{sec:4}
The discovery of the non-OT $G_2$ stiff fluid spike solution motivates the use of the fluid-comoving gauge.
For ease of analytical analysis, it is necessary to use a normalization that presents
solutions with a timelike homothetic VF as equilibrium points;
this means using Hubble-normalization or its variant, the $\beta$-normalization.
For the purpose of numerical simulation such a normalization is not required, but
Hubble-normalization is convenient.
There is one requirement for numerical simulation; namely, the system should be first order.
There are two problematic terms.
The first problematic term is $\boldsymbol{\partial}_3 q$ in $\boldsymbol{\partial}_0 r_3$,
which contains $\boldsymbol{\partial}_3 \boldsymbol{\partial}_3 \dot{U}_3$, a second order derivative.
We are forced to specify $r_3$ through the Codazzi constraint, so $r_3$ now contains $\boldsymbol{\partial}_3 \Sigma_+$, which is fine.
The second problematic term is $\boldsymbol{\partial}_3 \dot{U}_3$. We cannot compute $\dot{U}_3$ from its
definition, because doing so would turn $\boldsymbol{\partial}_3 \dot{U}_3$ into a second order derivative,
so we are forced to evolve $\dot{U}_3$ instead. Since there is no ready-made evolution equation for $\dot{U}_3$,
we derive one from its definition, obtaining
\begin{equation}
\boldsymbol{\partial}_0 \dot{U}_3 = (q+2\Sigma_+)\dot{U}_3 + 3 (\dot{U}_3-r_3).
\end{equation}
Then the system is first order.
The process of normalization (while leaving the gauge unspecified) follows Section 2.3 of~\cite{thesis:Lim2004}.
But we will just give the equations in fluid-comoving gauge (and in an Iwasawa frame).
For Hubble-normalization we can use the equations in component form given in~\cite{cal:Elst}.
The shear and $N_{\alpha\beta}$ components are:
\begin{equation}
\Sigma_{\alpha\beta} = \left( \begin{array}{ccc}
\Sigma_+ + \sqrt{3}\Sigma_- & -R_3 & 0 \\
-R_3 & \Sigma_+ - \sqrt{3}\Sigma_- & -R_1 \\
0 & - R_1 & -2\Sigma_+
\end{array} \right)
\quad
N_{\alpha\beta} = \left( \begin{array}{ccc}
N_{11} & N_{12} & 0 \\
N_{12} & 0 & 0 \\
0 & 0 & 0
\end{array} \right)
\end{equation}
For the sake of continuity, we shall use the ``old" variables (except $\Sigma_2$)
where the old variables like $N_-$, $N_\times$, $\Sigma_\times$ are related to new
ones like $N_{11}$, $N_{12}$, $R_3$ by
\begin{equation}
N_{11} = 2\sqrt{3} N_-,\quad N_{12} = \sqrt{3} N_\times,\quad R_3 = - \sqrt{3} \Sigma_\times,\quad R_1 = - \sqrt{3}\Sigma_2,
\end{equation}
and the ``old" normalization (an alternative normalization
is the new conformal normalization~\cite{art:RohrUggla2005}, which differs a bit in the $A$'s, etc.).
The evolution equations in Hubble-normalized variables are then~\cite{cal:Elst}:
\begin{equation}
\label{system1_1}
q = 2(\Sigma_+^2+\Sigma_-^2+\Sigma_\times^2+\tfrac13R_1^2) + 2\Omega_\phi -\tfrac13(\boldsymbol{\partial}_3-r_3+\dot{U}_3-2A_3)\dot{U}_3
\end{equation}
\begin{equation}
N = - V,\quad \boldsymbol{\partial}_0 = -\frac{1}{VH} \partial_\tau
\end{equation}
\begin{equation}
\boldsymbol{\partial}_3 = E_{3}{}^{3} \partial_z
\end{equation}
\begin{align}
\boldsymbol{\partial}_0 E_{3}{}^{3} &= (q+2\Sigma_+) E_{3}{}^{3}
\\
\boldsymbol{\partial}_0 \dot{U}_3 &= (q+2\Sigma_+)\dot{U}_3 + 3 (\dot{U}_3-r_3)
\\
\boldsymbol{\partial}_0 A_3 &= (q+2\Sigma_+)A_3 - (\boldsymbol{\partial}_3-r_3+\dot{U}_3)(1+\Sigma_+)
\\
\boldsymbol{\partial}_0 r_3 &= (q+2\Sigma_+)r_3 + (\boldsymbol{\partial}_3-r_3+\dot{U}_3)(q+1)
\\
\boldsymbol{\partial}_0 \Sigma_+ &= (q-2) \Sigma_+ -2(N_-^2+N_\times^2) +\tfrac13(\boldsymbol{\partial}_3-r_3)A_3 + R_1^2 - \tfrac13(\boldsymbol{\partial}_3-r_3+\dot{U}_3+A_3)\dot{U}_3
\\
\boldsymbol{\partial}_0 \Sigma_- &= (q-2)\Sigma_- -(\boldsymbol{\partial}_3-r_3+\dot{U}_3-2A_3)N_\times + 2\sqrt{3} (\Sigma_\times^2-N_-^2) - \tfrac{1}{\sqrt{3}} R_1^2
\\
\boldsymbol{\partial}_0 \Sigma_\times &= (q-2-2\sqrt{3} \Sigma_-)\Sigma_\times + (\boldsymbol{\partial}_3-r_3+\dot{U}_3-2A_3 - 2\sqrt{3} N_\times)N_-
\\
\boldsymbol{\partial}_0 R_1 &= (q-2-3\Sigma_++\sqrt{3}\Sigma_-)R_1
\\
\boldsymbol{\partial}_0 N_- &= (q+2\Sigma_++2\sqrt{3}\Sigma_-) N_- + (\boldsymbol{\partial}_3-r_3+\dot{U}_3+2\sqrt{3}N_\times) \Sigma_\times
\\
\boldsymbol{\partial}_0 N_\times &= (q+2\Sigma_+) N_\times -(\boldsymbol{\partial}_3-r_3+\dot{U}_3)\Sigma_-
\\
\boldsymbol{\partial}_0 \Omega_\phi &= (2q-4)\Omega_\phi
\end{align}
Gauss constraint:
\begin{equation}
0 = 1 + \tfrac13(2\boldsymbol{\partial}_3-2r_3-3A_3)A_3 - N_-^2 -N_\times^2 -\Sigma_+^2-\Sigma_-^2-\Sigma_\times^2-\tfrac13R_1^2-\Omega_\phi
\end{equation}
Codazzi constraints:
\begin{align}
0 &= -(\boldsymbol{\partial}_3-r_3)R_1 + (3A_3-\sqrt{3}N_\times)R_1
\\
0 &= (\boldsymbol{\partial}_3-r_3)(1+\Sigma_+) -3A_3\Sigma_+ + 3N_-\Sigma_\times - 3N_\times\Sigma_-
\label{system1_n}
\end{align}
For this paper, we have a second tilted perfect fluid with one tilt component $(0,0,v)$.
The equations are extended as follows:
\begin{equation}
q = 2(\Sigma_+^2+\Sigma_-^2+\Sigma_\times^2+\tfrac13R_1^2) + \tfrac12(\Omega+3p) + 2\Omega_\phi -\tfrac13(\boldsymbol{\partial}_3-r_3+\dot{U}_3-2A_3)\dot{U}_3
\end{equation}
\begin{equation}
G_\pm = 1 \pm (\gamma-1)v^2,\quad p = \frac{(\gamma-1)(1-v^2) + \tfrac13\gamma v^2}{G_+} \Omega ,\quad Q_3 = \frac{\gamma v \Omega}{G_+},\quad \pi_{33} = \frac23 \frac{\gamma v^2 \Omega}{G_+}
\end{equation}
\begin{equation}
N = - V,\quad \boldsymbol{\partial}_0 = -\frac{1}{VH} \partial_\tau
\end{equation}
\begin{equation}
\boldsymbol{\partial}_3 = E_{3}{}^{3} \partial_z
\end{equation}
\begin{align}
\boldsymbol{\partial}_0 E_{3}{}^{3} &= (q+2\Sigma_+) E_{3}{}^{3}
\\
\boldsymbol{\partial}_0 \dot{U}_3 &= (q+2\Sigma_+)\dot{U}_3 + 3 (\dot{U}_3-r_3)
\\
\boldsymbol{\partial}_0 A_3 &= (q+2\Sigma_+)A_3 - (\boldsymbol{\partial}_3-r_3+\dot{U}_3)(1+\Sigma_+)
\\
\boldsymbol{\partial}_0 r_3 &= (q+2\Sigma_+)r_3 + (\boldsymbol{\partial}_3-r_3+\dot{U}_3)(q+1)
\\
\boldsymbol{\partial}_0 \Sigma_+ &= (q-2) \Sigma_+ -2(N_-^2+N_\times^2) +\tfrac13(\boldsymbol{\partial}_3-r_3)A_3 + R_1^2 - \tfrac13(\boldsymbol{\partial}_3-r_3+\dot{U}_3+A_3)\dot{U}_3 - \frac{\gamma v^2 \Omega}{G_+}
\\
\boldsymbol{\partial}_0 \Sigma_- &= (q-2)\Sigma_- -(\boldsymbol{\partial}_3-r_3+\dot{U}_3-2A_3)N_\times + 2\sqrt{3} (\Sigma_\times^2-N_-^2) - \tfrac{1}{\sqrt{3}} R_1^2
\\
\boldsymbol{\partial}_0 \Sigma_\times &= (q-2-2\sqrt{3} \Sigma_-)\Sigma_\times + (\boldsymbol{\partial}_3-r_3+\dot{U}_3-2A_3 - 2\sqrt{3} N_\times)N_-
\\
\boldsymbol{\partial}_0 R_1 &= (q-2-3\Sigma_++\sqrt{3}\Sigma_-)R_1
\\
\boldsymbol{\partial}_0 N_- &= (q+2\Sigma_++2\sqrt{3}\Sigma_-) N_- + (\boldsymbol{\partial}_3-r_3+\dot{U}_3+2\sqrt{3}N_\times) \Sigma_\times
\\
\boldsymbol{\partial}_0 N_\times &= (q+2\Sigma_+) N_\times -(\boldsymbol{\partial}_3-r_3+\dot{U}_3)\Sigma_-
\\
\boldsymbol{\partial}_0 \Omega_\phi &= (2q-4)\Omega_\phi
\end{align}
Gauss constraint:
\begin{equation}
0 = 1 + \tfrac13(2\boldsymbol{\partial}_3-2r_3-3A_3)A_3 - N_-^2 -N_\times^2 -\Sigma_+^2-\Sigma_-^2-\Sigma_\times^2-\tfrac13R_1^2-\Omega-\Omega_\phi
\end{equation}
Codazzi constraints:
\begin{align}
0 &= -(\boldsymbol{\partial}_3-r_3)R_1 + (3A_3-\sqrt{3}N_\times)R_1
\\
0 &= (\boldsymbol{\partial}_3-r_3)(1+\Sigma_+) -3A_3\Sigma_+ + 3N_-\Sigma_\times - 3N_\times\Sigma_- - \frac32 \frac{\gamma v \Omega}{G_+}
\end{align}
Evolution equations for perfect fluid variables $\Omega$ and $v$:
\begin{align}
\boldsymbol{\partial}_0 \ln\Omega &= -\frac{\gamma v}{G_+} \boldsymbol{\partial}_3 \ln\Omega - \frac{\gamma G_-}{G_+^2} \boldsymbol{\partial}_3 v
\notag\\
&\quad + G_+^{-1} \left[ 2G_+ q -(3\gamma-2) -(2-\gamma)v^2 + 2\gamma v^2 \Sigma_+ - 2\gamma v(-r_3+\dot{U}_3 -A_3)\right]
\\
\boldsymbol{\partial}_0 v &= -\frac{(\gamma-1)}{\gamma} \frac{(1-v^2)^2}{G_-} \boldsymbol{\partial}_3 \ln \Omega + \frac{[(3\gamma-4)-(\gamma-1)(4-\gamma)v^2]v}{G_+ G_-} \boldsymbol{\partial}_3 v
\notag\\
&\quad + \frac{(1-v^2)}{G_-}\left[2\frac{\gamma-1}{\gamma}(1-v^2)r_3 + (3\gamma-4)v + 2v\Sigma_+ - {G_-}\dot{U}_3-2(\gamma-1)v^2A_3\right]
\end{align}
\subsubsection*{Numerics}
As a first step towards a numerical analysis, we shall simulate the system~(\ref{system1_1})--(\ref{system1_n}), which is without the second perfect fluid.
For the zooming method, we shall use the dynamic zooming introduced in~\cite{art:Clarksonetal2013},
in which the outer boundary travels inward at the speed of light.
As a result the operators $\boldsymbol{\partial}_0$ and $\boldsymbol{\partial}_3$ are replaced by
\begin{equation}
\boldsymbol{\partial}_0 = \mathcal{N}^{-1}\left(\partial_T - \frac{\partial_T z}{\partial_Z z} \partial_Z\right),\quad
\boldsymbol{\partial}_3 = E_3{}^3 \frac{1}{\partial_Z z} \partial_Z,
\end{equation}
where
\begin{equation}
T = \tau,\quad Z=Z(\tau,z)
\end{equation}
are the new coordinates in the zoomed view, with the evolution of unzoomed coordinate $z(T,Z)$ defined by
\begin{equation}
\partial_T z = (\mathcal{N} E_3{}^3)_\text{ob} \frac{Z}{Z_\text{ob}},
\end{equation}
where the subscript $\text{ob}$ denotes evaluation at the outer boundary.
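A minimal forward-Euler sketch of this coordinate update, assuming for illustration a constant boundary value of $\mathcal{N} E_3{}^3$ (in the actual simulation this value is taken from the evolving solution at the outer boundary):

```python
# Hedged sketch of dynamic zooming: the unzoomed coordinate z(T, Z) on the
# fixed zoomed grid Z is advanced by
#   dz/dT = (N * E3^3)_ob * Z / Z_ob,
# so the outer boundary Z = Z_ob moves inward at that coordinate speed.
# NE3_ob is an assumed constant here, for illustration only.
import numpy as np

Z = np.linspace(0.0, 1.0, 1001)   # fixed zoomed grid, Z_ob = Z[-1] = 1
z = 148.417 * Z                   # initial unzoomed coordinates (cf. the text)
dT = 0.01
NE3_ob = -1.0                     # assumed negative: boundary moves inward

for _ in range(100):              # integrate over one unit of T
    z = z + dT * NE3_ob * Z / Z[-1]

# the inner boundary z = 0 stays fixed; the outer boundary has moved inward
assert z[0] == 0.0 and z[-1] < 148.417
```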
The dynamic zooming is more economical than the specified zooming of~\cite{art:Limetal2009} in that the outer boundary travels inward at exactly the speed of light.
This is important in saving numerical resources, as the horizon size grows during an $R_1$ transition.
Due to this growth, spikes appearing after the $R_1$ transition appear wider than those appearing before it.
In order to capture the wider spike, a larger domain of simulation is needed, but the spike appearing before the $R_1$ transition would appear narrower relative to this larger domain,
and therefore more grid points are needed to provide sufficient numerical resolution for the narrower spike.
In order to ensure that the code is correct, we shall simulate the exact non-OT $G_2$ stiff fluid spike solution.
We choose the following values for the parameters:
\begin{equation}
\label{nonOT_param}
w = \frac13,\quad
n_{10} = 0.001,\quad
n_{30} = 1,\quad
K = 0,\quad
\rho_0 = 0.00075,\quad
\omega_0 = 0.
\end{equation}
We use the domain $Z \in [0,1]$ with 1001 uniformly spaced grid points, and fine-tune the initial domain size $z$ to 6 digits by trial and error
(as the horizon size will shrink to about $10^{-6}$ of its initial size by $T=25$):
\begin{equation}
z = 148.417 Z,
\end{equation}
and run the simulation from $T=-5$ to $T=25$. Fine-tuning is needed because if $z$ is too large then one loses numerical
resolution, while if $z$ is too small then the outer boundary hits the inner boundary $z=0$ before $T=25$.
We also limit $T$ to reduce the number of digits required in the fine-tuning.
We plot the numerical results for $\Omega_\phi$, $\Sigma_-$, $N_-$, $R_1$, $A_3$ and $\log_{10}(-\mathcal{N} E_3{}^3)$ in Figure~\ref{fig:nonOT_fig_id7_cropped}.
It can be seen that the spike first forms at about $T=0$ and then becomes wider (in this zoomed view, but becomes narrower in the unzoomed view).
The $R_1$ transition occurs during $T\in[5,10]$, and the spike becomes narrower again,
until it undergoes a transition during $T\in[10,20]$ and resolves.
Figure~\ref{fig:rho_stiff_cropped}, which shows the unzoomed view of the exact non-OT stiff fluid spike solution,
hides the finer details of the spike profile and the width of the spike at different times.
Here we shall comment on the characteristic width of the horizon before and after the $R_1$ transition.
Before the $R_1$ transition, the spike solution is close to the OT solution with $w=\frac13$. Recall from~\cite{art:Lim2008} that the horizon size
decays at the rate $\text{e}^{-\tau}$. After the $R_1$ transition, the horizon size decays at the
rate $\text{e}^{-0.1\tau}$ (see the last panel of Figure~\ref{fig:nonOT_fig_id7_cropped}).
Over the duration of the simulation, the horizon shrinks by a factor of $10^6$; hence the initial $z$ needs to be fine-tuned to 6-digit accuracy.
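A rough back-of-envelope check of this factor, with the transition time an assumed value used purely for illustration:

```python
# Hedged estimate: horizon decay ~ e^{-tau} before the R1 transition and
# ~ e^{-0.1 tau} after. Taking the transition at T ~ 7 (an assumption) over
# the simulated interval T in [-5, 25]:
import math

shrink = math.exp(-(7.0 - (-5.0))) * math.exp(-0.1 * (25.0 - 7.0))
# shrink is of order 10^{-6}, consistent with the 6-digit fine-tuning of z
assert 1e-7 < shrink < 1e-5
```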
We shall leave further numerical analysis to the next paper.
\begin{figure}
\begin{center}
\resizebox{\textwidth}{!}{\includegraphics{nonOT_fig_id7_cropped.png}}
\caption{Plots of $\Omega_\phi$, $\Sigma_-$, $N_-$, $R_1$, $A_3$ and $\log_{10}(-\mathcal{N} E_3{}^3)$ for the numerical simulation of the exact solution with parameters~(\ref{nonOT_param}).}
\label{fig:nonOT_fig_id7_cropped}
\end{center}
\end{figure}
\section{Conclusion}
In this paper we have discussed the role of massless scalar
fields/stiff perfect fluids in the general
relativistic generation of spikes.
We first studied OT $G_2$ stiff fluid spike models
both heuristically and numerically, and
generalized the vacuum solution to obtain
a new exact OT $G_2$ stiff fluid spike solution.
We then discussed non-OT $G_2$ stiff fluid spike models.
We presented a new two-parameter family
of non-OT $G_2$ stiff fluid spike solutions, obtained by the generalization of non-OT $G_2$ vacuum spike solutions \cite{art:Lim2015} to the stiff fluid case
and achieved by applying the stiff fluid version of Geroch's transformation on a Jacobs seed.
The dynamics of the new ($K=0$) non-OT $G_2$ stiff fluid spike solutions is qualitatively
different from that of the vacuum spike solution, in that the matter (stiff fluid) feels the
spike directly and the stiff fluid
spike solutions can end up with a permanent spike.
We next derived the evolution equations of non-OT $G_2$ stiff fluid models, including a second perfect fluid, in full generality.
We discussed the evolution equations using different normalizations and gauge choices,
motivated by the discovery of the exact non-OT stiff spike solutions and in order to be consistent with previous analyses.
We shall return to the issue of different normalization and gauge choices in future work.
We also briefly discussed some of the qualitative properties of these models and their numerical analysis
(and, in particular, the potential problems with numerical resolution).
Qualitatively, we found that the spike imprint in a stiff fluid background is the same as in the previously studied vacuum case.
We shall present a full numerical analysis (of the general equations) in a follow up paper.
We have been particularly interested in how a fluid, and especially a stiff fluid or massless scalar field,
affects the physical consequences of the general relativistic generation of spikes. Let us briefly discuss this further.
\subsubsection*{Discussion}
In previous work \cite{art:ColeyLim2012} we explicitly showed that spikes naturally occur in a class of non-vacuum $G_2$ models and, due to
gravitational instability, leave small residual imprints on matter in the form of matter perturbations.
We have been particularly interested in recurring and complete spikes formed in the oscillatory regime (or recurring spikes for short)~\cite{art:Lim2008,art:Limetal2009}
and incomplete spikes, and their imprint on matter and structure formation.
We have obtained further numerical evidence for the existence of spikes and general relativistic matter perturbations~\cite{art:LimColey2014},
which support the results of \cite{art:ColeyLim2012}.
We have generalized these results to massless scalar field/stiff perfect fluid models in this paper to further illustrate the possible existence of a
general relativistic mechanism for generating matter perturbations of physical interest.
With a tilted fluid, the tilt provides another mechanism for generating matter inhomogeneities, due to the non-negligible divergence term caused by the instability in the tilt.
In~\cite{art:LimColey2014} we investigated the evolution equations of the OT $G_2$ models with a perfect fluid and
we concluded that it is the tilt instability that plays the primary role in the formation of matter inhomogeneities in these models
(while the spike mechanism plays a secondary role in generating matter inhomogeneities).
In the early Universe we have explicitly shown that there exist $G_2$ recurring spikes that lead to
inhomogeneities and a residual in the form of matter perturbations, that these occur
naturally within generic cosmology models within GR, and that they are not local but form
on surfaces and give rise to a distribution of perturbations.
In $G_2$ models these spike surfaces are parallel and do not intersect.
In general spacetimes, however, two spike surfaces may intersect along a curve,
and this curve may intersect with a third spike surface at a point, leading to matter inhomogeneities forming on a web of surfaces, curves and points.
There are tantalising hints that filamentary structures and voids would occur naturally in this scenario.
Inflationary cosmology provides a causal mechanism which
generates the primordial perturbations which were later responsible for the formation of
large scale structures of our Universe due to gravitational collapse.
The density perturbations produced during inflation are due to
quantum fluctuations in the matter and gravitational fields \cite{art:Mukhanovetal1992,book:LythLiddle2009}.
Primeval fluctuations,
which are subsequently amplified
outside the Hubble radius, are then thought to be present at the end of the inflationary
epoch. Previously, we have speculated \cite{art:ColeyLim2012} whether
recurring spikes might be an alternative to the inflationary mechanism for
generating matter perturbations in the early Universe.
Indeed, there are some
similarities with the perturbations and structure formation created in
cosmic string models~\cite{art:LimColey2014};
the inhomogeneities occur on closed circles or infinite lines \cite{art:ColeyLim2012},
similar to what happens in the case of
topological defects.
Saddles, related to Kasner solutions and FLRW models, may also occur at late times, and may also
cause spikes/tilt that might lead to further matter inhomogeneities, albeit non-generically.
Permanent spikes
in LTB models were studied in \cite{art:ColeyLim2014}, which might offer an
alternative general relativistic spike mechanism for naturally generating (a small number of)
exceptional structures at late times.
\enlargethispage{5cm}
\section{Appendix: OT $G_2$ Equations}
The evolution equations for the OT $G_2$ model, with KVFs acting on a plane and with two perfect fluids (one of them stiff),
where the coordinate variable $T$ increases towards the singularity, are \cite{thesis:Lim2004}:
\begin{align}
\partial_T\ln\beta &= - AX \partial_X\ln\beta + \frac32(1-\Sigma_+) - \frac34(2-\gamma)\frac{1-v^2}{G_+}\Omega
\\
\partial_T\ln E_1{}^1 &= - AX \partial_X\ln E_1{}^1 - 1 + \frac34(2-\gamma)\frac{1-v^2}{G_+}\Omega
\\
\partial_T\Sigma_- &= - AX \partial_X\Sigma_-
+ \tfrac12\text{e}^{AT}E_1{}^1 \partial_X N_\times
+ \frac34(2-\gamma)\frac{1-v^2}{G_+}\Omega \Sigma_- - \sqrt{3}(\Sigma_\times^2-N_-^2)
\\
\partial_T N_\times &= - AX \partial_X N_\times
+ \tfrac12\text{e}^{AT}E_1{}^1 \partial_X\Sigma_-
- N_\times
+ \frac34(2-\gamma)\frac{1-v^2}{G_+}\Omega N_\times
\\
\partial_T\Sigma_\times &= - AX \partial_X\Sigma_\times
- \tfrac12\text{e}^{AT}E_1{}^1 \partial_X N_-
+ \frac34(2-\gamma)\frac{1-v^2}{G_+}\Omega \Sigma_\times
+ \sqrt{3}\Sigma_-\Sigma_\times + \sqrt{3}N_\times N_-
\\
\partial_T N_- &= - AX \partial_X N_-
- \tfrac12\text{e}^{AT}E_1{}^1 \partial_X\Sigma_\times
- N_-
+ \frac34(2-\gamma)\frac{1-v^2}{G_+}\Omega N_-
- \sqrt{3}\Sigma_- N_- - \sqrt{3}\Sigma_\times N_\times
\\
\partial_T\ln\Omega &= - AX \partial_X\ln\Omega
- \frac{\gamma v}{2G_+} \text{e}^{AT}E_1{}^1 \partial_X\ln\Omega
+ \frac{\gamma G_-(1-v^2)}{2G_+^2} \partial_X \text{arctanh} v
\notag\\
&\quad- \frac{\gamma}{G_+} \left[ \frac{G_+}{\gamma}(q+1)-\frac12(1-3\Sigma_+)(1+v^2)-1 \right]
\\
\partial_T\text{arctanh} v &= - AX \partial_X\text{arctanh} v
+ \frac{(\gamma-1)(1-v^2)}{2 \gamma G_-} \text{e}^{AT}E_1{}^1 \partial_X\ln\Omega
\notag\\
&\quad- [3\gamma-4-(\gamma-1)(4-\gamma)v^2] \frac{v}{2 G_+ G_-} \text{e}^{AT}E_1{}^1 \partial_X\text{arctanh} v
\notag\\
&\quad+ \frac{1}{2\gamma G_-} \left[ (2-\gamma)G_+ r - \gamma v(3\gamma-4 + 3(2-\gamma)\Sigma_+) \right]
\\
\partial_T\ln\Omega_\phi &= - AX \partial_X\ln\Omega_\phi
- \frac{v_\phi}{1+v_\phi^2} \text{e}^{AT}E_1{}^1 \partial_X\ln\Omega_\phi
+ \frac{(1-v_\phi^2)^2}{(1+v_\phi^2)^2} \partial_X \text{arctanh} v_\phi
\notag\\
&\quad- \frac{2}{1+v_\phi^2} \left[ \frac{1+v_\phi^2}{2}(q+1)-\frac12(1-3\Sigma_+)(1+v_\phi^2)-1 \right]
\\
\partial_T\text{arctanh} v_\phi &= - AX \partial_X\text{arctanh} v_\phi
+ \frac{1}{4} \text{e}^{AT}E_1{}^1 \partial_X\ln\Omega_\phi
\notag\\
&\quad- \frac{v_\phi}{1+v_\phi^2} \text{e}^{AT}E_1{}^1 \partial_X\text{arctanh} v_\phi
- \frac{v_\phi}{1-v_\phi^2}
\end{align}
where $\Sigma_+$, $q$, $r$, $G_\pm$ are given by
\begin{align}
\Sigma_+ &= \frac12\left( 1-\Sigma_-^2-\Sigma_\times^2-N_-^2-N_\times^2-\Omega-\Omega_\phi\right)
\\
q &= 2-3\Sigma_+ - \frac32(2-\gamma)\frac{1-v^2}{G_+}\Omega
\\
r &= -3N_\times\Sigma_- + 3N_-\Sigma_\times - \frac{3\gamma v}{2 G_+} \Omega - \frac{3 v_\phi}{1+v_\phi^2} \Omega_\phi
\\
G_\pm &= 1 \pm (\gamma-1) v^2.
\end{align}
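For a quick consistency check of the algebraic relations above, the auxiliary quantities $\Sigma_+$, $q$, $r$ and $G_\pm$ can be evaluated directly from a given state. The sketch below (in Python; the function name, variable names and the illustrative vacuum test values are ours, not part of the reference equations) transcribes the four expressions verbatim.

```python
def auxiliary_quantities(Sm, Sx, Nm, Nx, Omega, Omega_phi, v, v_phi, gamma):
    """Auxiliary algebraic quantities of the OT G2 system.

    Sm, Sx    : Sigma_-, Sigma_x (shear variables)
    Nm, Nx    : N_-, N_x (spatial-curvature variables)
    Omega     : density parameter of the perfect fluid, with tilt v
    Omega_phi : density parameter of the stiff fluid, with tilt v_phi
    gamma     : equation-of-state parameter of the (non-stiff) fluid
    """
    Gp = 1 + (gamma - 1) * v**2          # G_+
    Gm = 1 - (gamma - 1) * v**2          # G_-
    # Sigma_+ from the Gauss (Hamiltonian) constraint
    Sp = 0.5 * (1 - Sm**2 - Sx**2 - Nm**2 - Nx**2 - Omega - Omega_phi)
    # deceleration parameter q
    q = 2 - 3 * Sp - 1.5 * (2 - gamma) * (1 - v**2) / Gp * Omega
    # shear/curvature and momentum contributions to r
    r = (-3 * Nx * Sm + 3 * Nm * Sx
         - 1.5 * gamma * v / Gp * Omega
         - 3 * v_phi / (1 + v_phi**2) * Omega_phi)
    return Sp, q, r, Gp, Gm
```

In vacuum (all shear, curvature and matter variables zero) this reduces to $\Sigma_+ = 1/2$, $q = 1/2$, $r = 0$ and $G_\pm = 1$, as expected from the formulas.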
\section*{Acknowledgments}
This work was supported, in part, by NSERC of Canada.
\section{Introduction}
The interpretation of quark confinement as the effect of a classical \textit{event horizon} for color degrees of freedom \cite{Casto,Salam} naturally leads to viewing hadronization as quantum tunnelling through such a horizon \cite{K1,K2}. From this point of view, hadron formation is the result of the \textit{Unruh radiation} \cite{Un} associated with the strong force.
More precisely, hadronization is the result of a Unruh phenomenon related with the string breaking/formation mechanism, that is, with the large distances QCD behavior.
The hadronic-Unruh temperature is given by \cite{K2}
\begin{equation}
T_{h} = \frac{a}{2 \pi} \simeq \sqrt{\frac{\sigma}{2 \pi}}
\label{0}
\end{equation}
where $\sigma$ is the string tension, and the acceleration, $a \simeq 2 k_T \simeq \sqrt{2\pi \sigma}$, is the one necessary to bring on shell a quark of transverse momentum $k_T$. In other words, the time given by the characteristic fluctuations determined by the virtuality of the pair is $\Delta \tau = 1/\Delta E \simeq 1/(2 k_T)$.
Moreover, at zero chemical potential, $\mu = 0$, one has $T_{h} = T_c$, the critical temperature of the deconfinement transition, because it is directly related to the string breaking/formation mechanism. This provides a theoretical basis for understanding the production of newly formed hadrons in high energy collisions, and, as shown in \cite{CIS1}, it allows one to predict (at $\mu=0$): a) the hadronic freeze-out conditions \cite{taw,jean1,jean2},
i.e. $s/T_{h}^3 = 3 \pi^2 /4 \simeq 7.4$ in terms of the entropy density $s$, and b) the value $\langle E \rangle / \langle N \rangle = \sqrt{2 \pi \sigma} \simeq 1.09$~GeV
for the average energy per hadron. These predictions are based on the string breaking/formation mechanism, and on the adaptation of the Bekenstein-Hawking (BH) entropy formula \cite{bekensteinEntropy} to the color event horizon \cite{CIS1}. The BH formula was born in the context of black hole physics, however, there is by now a vast literature where its implications are studied in more general contexts, see, e.g.,\cite{bousso}. Some connections between the hadronization as a Unruh phenomenon and the corresponding near horizon black hole scenarios, have been studied in \cite{CGI}.
Although the Unruh hadronization mechanism holds at the string breaking, that is, at the corresponding $T_c$, since the deconfinement transition is a cross-over, one can expect some remnant of confinement slightly above $T_c$. Indeed, the persistence of string-like objects above $T_c$ has been obtained by many different methods: lattice simulations \cite{karsch,cea}, the quasiparticle approach \cite{mannarelli1,mannarelli2}, NJL correlators \cite{beppe,jap}, Mott transitions \cite{david} and confinement mechanisms \cite{eddy}. It is then natural to ask whether small changes in the description of color confinement, slightly above $T_c$, can give information on thermodynamical quantities such as, e.g., the QCD entropy.
In this paper we show that the QCD entropy, evaluated by lattice simulations \cite{olaf}, in the region $T_c < T < 1.3 T_c$, is in reasonable agreement with the picture of a melting color event horizon.
Next we recall more details of the Unruh hadronization mechanism, including the understanding, in terms of string-breaking, of the other freeze-out condition: $n \simeq 0.12$ fm$^{-3}$, where $n$ is the number density. We then discuss temperature effects related to the string-like system near the phase transition, give our results for the entropy and the internal energy as compared with lattice QCD simulations, and draw some conclusions.
\section{The Unruh hadronization}
Although universal, the mechanism is most simply illustrated by hadron production through
$e^+ e^-$ annihilation into a $q \bar q$ pair, as shown in Fig.\ref{anni}.
\begin{figure}[h]
\centerline{\psfig{file=annihil1.eps,width=7cm} }
\caption{Quark formation in $e^+e^-$ annihilation}
\label{anni}
\end{figure}
The attempt to separate the initial $q \bar q$ pair ends at a distance $R$, when both the quark and the antiquark hit the confinement horizon, that is, when they reach the end of the binding string. The separation can now continue only if a further quark-antiquark system, say $q_1 \bar{q}_1$, is excited from the vacuum. Although the new pair $q_1 \bar{q}_1$ is at rest in the overall center of mass, each of its constituents has a transverse momentum $k_T$, determined by the uncertainty relations in terms of the transverse dimension of the string flux tube. String theory gives for the basic thickness \cite{Lue}
\begin{equation}
r_T=\sqrt{2/\pi \sigma},
\label{1}
\end{equation}
leading to
\begin{equation}
k_T=\sqrt{\pi \sigma/2}.
\label{2}
\end{equation}
The maximum separation distance $R$ can thus be obtained from $\sigma R = 2 k_T$, hence, from (\ref{1}) and (\ref{2}) one has
\begin{equation}
R = \sqrt{2 \pi / \sigma} .
\label{4}
\end{equation}
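These relations fix all the scales of the mechanism from the string tension alone. A minimal numerical sketch (Python), assuming the illustrative value $\sigma = 0.16\,$GeV$^2$, which reproduces the $T_h \simeq 160$ MeV used later in the text:

```python
import math

HBARC = 0.1973   # GeV * fm, conversion constant

sigma = 0.16                              # string tension in GeV^2 (assumed value)
r_T = math.sqrt(2 / (math.pi * sigma))    # flux-tube thickness, in GeV^-1
k_T = math.sqrt(math.pi * sigma / 2)      # transverse momentum of the pair
R   = math.sqrt(2 * math.pi / sigma)      # maximum separation (string breaking)
T_h = math.sqrt(sigma / (2 * math.pi))    # hadronic-Unruh temperature

# Note: R = 1/T_h holds identically, since R and T_h are exact reciprocals.
print(k_T, R * HBARC, T_h)  # ~0.50 GeV, ~1.24 fm, ~0.160 GeV
```

The numbers illustrate the orders of magnitude only; the precise value of $\sigma$ is an input taken from phenomenology.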
The entropy associated to a color event horizon is necessarily an entropy of entanglement, between quantum field modes on both sides of the horizon. Its general form is \cite{terashima,entaentropy}
\begin{equation}
\label{entropymatter}
S_{\rm ent} = \alpha \frac{A_h}{r^2} \;,
\end{equation}
where $A_h$ is the area of the event horizon, $r$ the scale of the characteristic quantum fluctuations, and $\alpha$ an undetermined numerical constant. This expression shares its holographic
behavior\footnote{Holography of entanglement entropy is a quite general
result, see \cite{srenidcky,solodukin}.} with the BH entropy formula of a black hole \cite{bekensteinEntropy}
\begin{equation}
S_{\rm BH} = {1 \over 4}~ {A \over r_P^2} ,
\label{10}
\end{equation}
where $A$ denotes the surface area of the hole, which, e.g., in the Schwarzschild case is given by $A = 4 \pi R_S^2$, with $R_S=2GM/c^2$. The quantity $r_P= \sqrt{\hbar G/c^3}$
is the Planck length, the smallest possible fluctuation scale.
As shown in \cite{laflamme}, a formula similar to the Bekenstein-Hawking formula (\ref{10}) holds in the case of the Rindler spacetime of an accelerated observer. On the other hand, it is well known that the Rindler spacetime can be associated to the near-horizon approximation of a black hole spacetime \cite{wald}.
In ref.\ \cite{CIS1}, the above has been applied to the Unruh hadronization mechanism, allowing one to predict the freeze-out conditions within the model.
Indeed, in this case the characteristic scale of the quantum fluctuations is given by Eq.\ (\ref{1}), and we obtain
\begin{equation}
S_h = {1\over 4}~ {A_h \over r_T^2} = {1\over 4}~ {4 \pi R^2 \over r_T^2} = \frac{\pi^2}{2} \sigma R^2
\label{11}
\end{equation}
for the entropy associated to hadron production. The parameter $R$ is given by Eq.\ (\ref{4}), and the smallest
fluctuation scale is the transverse string thickness (\ref{1}). Using Eq.\ (\ref{4}) into Eq.\ (\ref{11}) gives
\begin{equation}
S_h = \pi^3,
\label{S}
\end{equation}
while the entropy {\sl density} divided by $T^3$, evaluated at ${T=T_c}$, gives
\begin{equation}
{s \over T^3} = {S_h \over (4 \pi/3) R^3 T^3} =
{3 \pi^2 \over 4} \simeq 7.4
\label{entrop}
\end{equation}
as freeze-out condition in terms of $s(T)$ and $T$. This result is in agreement with the value obtained for $s/T^3$ from species abundance analyses
in terms of the ideal resonance gas model \cite{Cley,Tawfik}.
Furthermore, one can show that the other freeze-out condition, based on the number density $n$, namely $n \simeq 0.12$ fm$^{-3}$ \cite{foc}, is also directly related to the string breaking.
Indeed, for a single string-breaking the number density is given by
\begin{equation}
n_{sb}= \frac{1}{4\pi R^3/3}
\end{equation}
where $R$ is the string breaking distance, which turns out to be $R=1/T_h$, for massless quarks. For $T_h \simeq 160$ MeV, one obtains $n_{sb} \simeq 0.129$ fm$^{-3}$.
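Both freeze-out conditions follow from the string-breaking scale alone and can be checked numerically. A short sketch (Python), taking $T_h = 160$ MeV as in the text and $\hbar c \simeq 0.197$ GeV$\,$fm as the conversion constant:

```python
import math

HBARC = 0.1973   # GeV * fm
T_h = 0.160      # hadronization temperature in GeV (value used in the text)

# Horizon entropy: S_h = (pi^2/2) * sigma * R^2 with R^2 = 2*pi/sigma  ->  pi^3
S_h = math.pi**3

# Entropy density over T^3 at T = T_c, using R = 1/T:
# s/T^3 = S_h / ((4*pi/3) R^3 T^3) = pi^3 / (4*pi/3) = 3*pi^2/4 ~ 7.4
s_over_T3 = S_h / (4 * math.pi / 3)

# Number density for a single string breaking, with R = 1/T_h converted to fm
R_fm = HBARC / T_h
n_sb = 1 / (4 * math.pi / 3 * R_fm**3)   # ~0.127 fm^-3, close to the quoted 0.129

print(s_over_T3, n_sb)
```

The small difference with respect to the quoted $n_{sb} \simeq 0.129$ fm$^{-3}$ reflects only the rounding of $T_h$ and of the conversion constant.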
From the above, it should be clear that the previous formulae hold strictly at the string breaking, that is, at the hadronization temperature $T_c$. Therefore, comparing results obtained in this approach with lattice data at $T\ne T_c$ requires a more general analysis, which we now present.
\section{Color horizon entropy slightly above $T_c$}
A natural starting point is to generalize Eq.\ (\ref{11}) and write
\begin{equation}
S_h (T) = \frac{\pi^2}{2} \sigma(T) R^2(T) ,
\label{ShT}
\end{equation}
where $\sigma(T)$ is the string tension for $T \ge T_c$, which for a sharp deconfinement transition should be exactly zero, and $R(T)$ has to be interpreted as the effective range of the color field above $T_c$.
\begin{figure}
{{\epsfig{file=adrianofit.eps,height=8.0 true cm,width=7.0 true cm, angle=0}}
\caption{Behavior of the string tension $\sigma(T)$ above $T_c$.
}
\label{Fig:sigmabeyondTc}}
\end{figure}
For the observed crossover between the quark-gluon phase and the hadron phase one should expect that $\sigma(T)$ and $R(T)$ go quickly to zero. Some information about the behaviour of $\sigma$ above $T_c$ can be obtained as follows. The string tension $\sigma$ can be interpreted as the vacuum energy density in a flux tube of transverse area $\pi r_T^2$, i.e.
\begin{equation}
\sigma = \epsilon_v \pi r_T^2 .
\label{vacuumsigma}
\end{equation}
On the other hand, from Eq.\ (\ref{1}) we have $r_T^2 \simeq 1/\sigma$, hence the string tension scales with the square root of the vacuum energy $\sigma \simeq \epsilon_v^{1/2}$.
The behaviour of the vacuum energy density of the chromoelectric field above $T_c$ has been evaluated in lattice QCD in refs.\ \cite{adriano1,adriano2}, where the ratio of the vacuum energy density at high temperature $T>T_c$ to its value for $T<T_c$ is given. From the previous discussion, the behaviour of the string tension above $T_c$ is given by
\begin{equation}
\frac{\sigma(T)}{\sigma(T_c)} = \left( \frac{\epsilon_v(T)}{\epsilon_v(T_c)} \right)^{1/2} ,
\label{sigmaTsigmaTc}
\end{equation}
and it is depicted in Fig.\ref{Fig:sigmabeyondTc}, by using the results in Ref.\cite{adriano2} and a fit (red curve), where $\sigma_0$ is the string tension at $T_c$ (below $T_c$ the chromoelectric field is essentially constant).
Using Eq.\ (\ref{sigmaTsigmaTc}) in Eq.\ (\ref{ShT}), the entropy at large distances can be evaluated by the equation
\begin{equation}
S_h(T) = \pi^3 \frac{\sigma(T)}{\sigma_0} \left( \frac{R(T)}{R_0} \right)^2 ,
\label{ShT2}
\end{equation}
where $R_0 = R(T_c)$. To compare with lattice data one needs the ratio $R(T)/R_0$ which we parametrize as
\begin{equation}
R(T)/R_0= a + b \exp \left[ -c ( T/T_c -1) \right] ,
\end{equation}
where $ a+b=1$ and $a \simeq 0.3$ has been fixed by the $T$-independent gluon field correlation length \cite{adriano1,adriano2} $\ell = a R_0\simeq 0.34$fm with $R_0 \simeq 1.1$ fm.
To compare with lattice data on 2-flavour QCD one has to multiply $S_h$ in Eq.\ (\ref{ShT2}) by $2/3$. The results are in Fig.\ref{Fig:Sh2/3}.
\begin{figure}
{{\epsfig{file=olaf.eps,height=8.0 true cm,width=7.0 true cm, angle=0}}
\caption{Entropy at large distances, $S_h(T)$, vs temperature. Here $R(T)/R_0 = a + b \exp \left[ -c ( T/T_c -1) \right]$, and $a+b=1$ and $a \simeq 0.3$, and $S_h$ multiplied by $2/3$ to compare with lattice data on 2-flavour QCD.}\label{Fig:Sh2/3}}
\end{figure}
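The entropy curve of Eq.\ (\ref{ShT2}) can be sketched numerically. The lattice fit for $\sigma(T)/\sigma_0$ and the decay constant $c$ are not reproduced in the text, so in the sketch below (Python) `sigma_ratio` is left as a user-supplied function and the value of `c` is a placeholder for illustration only:

```python
import math

def S_h(T_over_Tc, sigma_ratio, a=0.3, b=0.7, c=5.0, flavour_factor=2/3):
    """Entropy S_h(T) = pi^3 * (sigma(T)/sigma0) * (R(T)/R0)^2.

    sigma_ratio    : callable returning sigma(T)/sigma0 (e.g. from a lattice fit)
    a, b, c        : parameters of R(T)/R0 = a + b*exp(-c*(T/Tc - 1)), with a + b = 1;
                     a ~ 0.3 is fixed in the text, c = 5.0 is a placeholder here
    flavour_factor : 2/3 to compare with 2-flavour lattice QCD data
    """
    R_ratio = a + b * math.exp(-c * (T_over_Tc - 1))
    return flavour_factor * math.pi**3 * sigma_ratio(T_over_Tc) * R_ratio**2

# At T = Tc both ratios equal 1, so S_h reduces to (2/3) * pi^3 ~ 20.7
print(S_h(1.0, sigma_ratio=lambda t: 1.0))
```

With the actual fitted $\sigma(T)/\sigma_0$ supplied, the same function generates the curve compared with the lattice data in the figure.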
In the proposed framework, a calculation of the internal energy at large distances, $U_\infty$, is possible by considering that $U = F + T S$, and that $F= \sigma(T) R(T)$, see Eq.\ (\ref{vacuumsigma}). Thus
\begin{equation}
U_\infty = F + T S_\infty = \sigma(T) R(T) + T S_\infty .
\end{equation}
From the above it turns out
\begin{eqnarray}
\frac{U_\infty}{T_c} & = & \frac{\sigma(T)}{\sigma_0} \frac{R(T)}{R_0} \nonumber \\
&& \times \left( 2\pi + \pi^3 \frac{T}{T_c} \frac{R(T)}{R_0} \right) .
\end{eqnarray}
and the comparison with the lattice data requires again the factor 2/3. The results are in Fig.\ref{Fig:intener}.
\begin{figure}
{{\epsfig{file=olafu.eps,height=8.0 true cm,width=7.0 true cm, angle=0}}
\caption{Internal energy at large distances, $U_\infty$, vs temperature.}\label{Fig:intener}}
\end{figure}
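As a consistency check of the expression above (written with both ratios as $R(T)/R_0$), note that at $T=T_c$ both ratios reduce to one and the bracket gives $2\pi+\pi^3$. A minimal sketch (Python; the function name and arguments are ours):

```python
import math

def U_over_Tc(T_over_Tc, sigma_ratio, R_ratio, flavour_factor=2/3):
    """Internal energy U_infinity / T_c for given ratios at temperature T.

    sigma_ratio : sigma(T)/sigma0 at the chosen T/Tc
    R_ratio     : R(T)/R0 at the chosen T/Tc
    The flavour factor 2/3 matches the comparison with 2-flavour lattice data.
    """
    return flavour_factor * sigma_ratio * R_ratio * (
        2 * math.pi + math.pi**3 * T_over_Tc * R_ratio)

# At T = Tc all ratios are 1: U/Tc = (2/3)(2*pi + pi^3) ~ 24.9
print(U_over_Tc(1.0, 1.0, 1.0))
```

The constant $2\pi$ in the bracket comes from $\sigma_0 R_0 / T_c = 2\pi$, which follows from $R_0 = 1/T_c$ and $\sigma_0 = 2\pi T_c^2$.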
\section{Conclusions}
The consistency between our results and lattice data on the QCD entropy above the critical temperature suggests the picture of a progressive melting of the
color confinement horizon. This dynamical description is fully consistent with the persistence of string-like structures that survive slightly above $T_c$.
\section*{Acknowledgements}
The authors thank Helmut Satz and Martin Spousta for useful discussions. P.C. gladly acknowledges the kind hospitality of the Institute of Particle and Nuclear Physics, MatFyz, of Charles University, where this work was initiated.
\section{Introduction}
The next-generation (i.e., 6G) communication system is expected to be sustainable, green, cost-effective and secure \cite{saad2019vision}. In particular, secure communication is crucially important in 6G networks, since the communication environment becomes increasingly complicated and the security of
private information is imperative \cite{wang2019energy}. Information security based on cryptographic encryption (in the network layer) is a conventional secure communication technique, which suffers from vulnerabilities such as secret key distribution, protection and management \cite{liao2010qos}. Unlike this network-layer security approach, physical layer security can guarantee good security performance while bypassing such manipulations of the secret key, and is thus more attractive to academia and industry \cite{wu2018survey}. There are various physical-layer secrecy scenarios. The first one is the classical physical-layer secrecy setting where there is one legitimate information receiver (IR) and one eavesdropper (Eve) operating over a single-input-single-output (SISO) channel (i.e., the so-called three-terminal SISO Gaussian wiretap channel) \cite{wyner1975wire,csiszar1978broadcast}. The second one considers physical-layer secrecy with an IR and an Eve operating over a multiple-input-single-output (MISO) channel, which is called the three-terminal MISO Gaussian wiretap channel. The third one is a renewed and timely scenario with one IR and one Eve operating over a multiple-input-multiple-output (MIMO) channel, which is named the three-terminal MIMO Gaussian wiretap channel \cite{khisti2010secure,oggier2011secrecy} and is the focus of this paper. For MIMO systems, a novel idea in physical-layer security is to transmit artificial noise (AN) from the base station (BS) to contaminate the Eve's received signal \cite{mukherjee2009fixed,swindlehurst2009fixed,goel2008guaranteeing}. For these AN-aided methods, a portion of the transmit power is assigned to the artificially generated noise to interfere with the Eve, and this portion should be carefully designed.
For AN-aided secrecy systems, while most of the existing AN-aided design papers focused on the MISO wiretap channel and null-space AN \cite{khisti2010secure,zhou2010secure}, designing the transmit precoding (TPC) matrix together with AN covariance matrix for the MIMO wiretap channel is more challenging \cite{li2013transmit}.
In general, the secrecy rate (SR), given by the mutual information difference between the legitimate IR and the Eve, is limited by the channel difference between the BS-IR link and the BS-Eve link. The AN-aided method can further improve the SR, but it consumes transmit power destined for the legitimate IR. When the transmit power is limited, a performance bottleneck always exists for AN-aided secure communication. To overcome this dilemma, the recently proposed intelligent reflecting surface (IRS) technique can be exploited. Since a higher SR can be achieved by enhancing the channel quality of the BS-IR link and degrading the channel condition of the BS-Eve link, the IRS can serve as a powerful complement to AN-aided secure communication due to its capability of reconfiguring the wireless propagation environment.
The IRS technique has been regarded as a revolutionary technique to control and reconfigure the wireless environment \cite{di2019smart,qingqing2019towards,huang2019holographic}. An IRS comprises an array of reflecting elements, which can reflect the incident electromagnetic (EM) wave passively, and the complex reflection coefficient contains the phase shift and amplitude. In practical applications, the phase shifts of the reflection coefficients are discrete due to the manufacturing cost \cite{wu2019beamforming}. However, many works on IRS-aided wireless communications are based on the assumption of continuous phase shifts \cite{huang2019reconfigurable},\cite{huang2020reconfigurable}. To investigate the potential effect of the IRS on secure communication, we also assume continuous phase shifts to simplify the problem. We evaluate its impact on the
system performance in the simulation section. Theoretically, the reflection amplitude of each IRS element can be adjusted for different purposes \cite{wu2019towards}. However, considering the hardware cost, the reflection amplitude is usually assumed to be 1 for simplicity. Hence, by smartly tuning the phase shifts with a preprogrammed controller, the direct signals from the BS and the reflected signals from the IRS can be combined constructively or destructively according to different requirements. In comparison to the existing related techniques which the IRS resembles, such as the active intelligent surface \cite{hu2018beyond}, traditional reflecting surfaces \cite{ford1984electromagnetic}, backscatter communication \cite{yang2017modulation} and the amplify-and-forward (AF) relay \cite{zhang2009optimal}, the IRS has the advantages of flexible reconfiguration of the phase shifts in real time, minor additional power consumption, easy installation with many reflecting elements, etc. Furthermore, due to its light weight and compact size, the IRS can be integrated into traditional communication systems with minor modifications \cite{pan2019multicell}. Because of these appealing virtues, the IRS has been introduced into various wireless communication systems, including the single-user case \cite{yu2019miso,yang2019intelligent}, the downlink multiuser case \cite{wu2019intelligent,huang2019reconfigurable,guo2019weighted,nadeem2019large,zhou2019intelligent}, mobile edge computing \cite{bai2019latency}, wireless information and power transfer design \cite{pan2019intelligent}, and physical layer security design \cite{yu2019enabling,cui2019secure,shen2019secrecy,chen2019intelligent}.
IRS is promising to strengthen the system security of wireless communication. In \cite{yu2019enabling,shen2019secrecy,feng2019physical}, the authors investigated the problem of maximizing the achievable SR in a secure MISO communication system aided by IRS, where both the legitimate user and eavesdropper are equipped with a single antenna. The TPC matrix at the BS and the phase shifts at the IRS were optimized by an alternate optimization (AO) strategy. To handle the nonconvex unit modulus constraint, the semidefinite relaxation (SDR) \cite{ma2010semidefinite}, majorization-minimization (MM) \cite{huang2019reconfigurable,sun2016majorization}, complex circle manifold (CCM) \cite{absil2009optimization} techniques were proposed to optimize phase shifts. An IRS-assisted MISO secure communication with a single IR and single Eve was also considered in \cite{cui2019secure}, but it was limited to a special scenario, where the Eve has a stronger channel than the IR, and the two channels from BS to Eve and IR are highly correlated. Under this assumption, the transmit beamforming and the IRS reflection beamforming are jointly optimized to improve the SR. Similarly, a secure IRS-assisted downlink MISO broadcast system was considered in \cite{chen2019intelligent}, and it assumes that multiple legitimate IRs and multiple Eves are in the same directions to the BS, which implies that the IR channels are highly correlated with the Eve channels. \cite{feng2019secure} considered the transmission design for an IRS-aided secure MISO communication with a single IR and single Eve, in which the system energy consumption is minimized under two assumptions that the channels of access point (AP)-IRS links are rank-one and full-rank. An IRS-assisted MISO network with cooperative jamming was investigated in \cite{wang2019energy}. The physical layer security in a simultaneous wireless information and power transfer (SWIPT) system was considered with the aid of IRS \cite{shi2019enhanced}. 
However, there is a paucity of papers considering IRS-assisted secure communication with AN. A secure MISO communication system aided by transmit jamming and AN was considered in \cite{guan2019intelligent}, where a large number of Eves exist, and the AN beamforming vector and jamming vector were optimized to reap the additional degrees of freedom (DoF) brought by the IRS. \cite{xu2019resource} investigated the resource allocation problem in an IRS-assisted MISO communication by jointly optimizing the beamforming vectors, the phase shifts of the IRS, and the AN covariance matrix for secrecy rate maximization (SRM), but the direct BS-IR links and direct BS-Eve links are assumed to be blocked.
Although a few papers have studied security enhancement for an AN-aided system through the IRS, the existing papers related to this topic either only studied the MISO scenario or assumed special settings to the channels. The investigation on the MIMO scenario with general channel settings is absent in the existing literature. Hence, we investigate this problem in this paper by employing an IRS in an AN-aided MIMO communication system for the physical layer security enhancement. Specifically, by carefully designing the phase shifts of the IRS, the reflected signals are combined with the direct signals constructively for enhancing the data rate at the IR and destructively for decreasing the rate at the Eve. As a result, the TPC matrix and AN covariance matrix at the BS can be designed flexibly with a higher DoF than the case without IRS. In this work, the TPC matrix, AN covariance matrix and the phase shift matrix are jointly optimized. Since these optimization variables are highly coupled, an efficient algorithm based on the block coordinate descent (BCD) and MM techniques for solving the problem is proposed.
We summarize our main contributions as follows:
\begin{enumerate}
\item This is the first research on exploiting an IRS to enhance security in AN-aided MIMO communication systems. Specifically, an SRM problem is formulated by jointly optimizing the TPC matrix and AN covariance matrix at the BS, together with the phase shifts of the IRS subject to maximum transmit power limit and the unit modulus constraint of the phase shifters. The objective function (OF) of this problem is the difference of two Shannon capacity expressions, thus is not jointly concave over the three highly-coupled variables. To handle it, the popular minimum mean-square error (MMSE) algorithm is used to reformulate the SRM problem.
\item The BCD algorithm is exploited to optimize the variables alternately. Firstly, given the phase shifts of IRS, the optimal TPC matrix and AN covariance matrix are obtained in closed form by utilizing the Lagrangian multiplier method. Then, given the TPC matrix and AN covariance matrix, the optimization problem for IRS phase shifts is transformed by sophisticated matrix manipulations into a quadratically constrained quadratic program (QCQP) problem subject to unit modulus constraints. To solve it, the MM algorithm is utilized, where the phase shifts are derived in closed form iteratively. Based on the BCD-MM algorithm, the original formulated SRM problem can be solved efficiently.
\item The SRM problem is also extended to the more general scenario of multiple legitimate IRs. A new BCD algorithm is proposed to solve it, where the optimal TPC matrix and AN covariance matrix are obtained by solving a QCQP problem, and the unit modulus constraint is handled by the penalty convex-concave procedure (CCP) method.
\item The simulation results confirm that on the one hand, the IRS can greatly enhance the security of an AN-aided MIMO communication system; on the other hand, the phase shifts of IRS should be properly optimized. Simulation results also show that larger IRS element number and more transmit power is beneficial to the security. Moreover, properly-selected IRS location and good channel states of the IRS-related links are important to realize the full potential of IRS.
\end{enumerate}
This paper is organized as follows. Section II provides the signal model of an AN-aided MIMO communication system assisted by an IRS, and the SRM problem formulation. The SRM problem is reformulated in Section III, where the BCD-MM algorithm is proposed to optimize the TPC
matrix, AN covariance matrix and phase shifts of IRS. Section IV extends the SRM problem to a more general scenario of multiple IRs. In Section V, numerical simulations are given to validate the algorithm efficiency and security enhancement. Section VI concludes this paper.
\emph{Notations}: Throughout this paper, boldface lower case, boldface upper case and regular letters are used to denote vectors, matrices, and scalars respectively. ${\bf{X}} \odot {\bf{Y}}$ is the Hadamard product of $\bf X$ and $\bf Y$. ${\rm{Tr}}\left( {\bf{X}} \right)$ and $\left| {\bf{X}} \right|$ denote the trace and determinant of ${\bf{X}}$ respectively. ${{\mathbb{ C}}^{M \times N}}$ denotes the space of $M \times N$ complex matrices. ${\rm{Re}}\{\cdot\}$ and $\arg\{\cdot\}$ denote the real part of a complex value and the extraction of phase information respectively. ${\rm{diag}}\{\cdot\}$ is the operator for diagonalization. ${\cal C}{\cal N}({\bm{\mu}},{\bf{Z}})$ represents a circularly symmetric complex gaussian (CSCG) random vector with mean ${\bm{\mu}}$ and covariance matrix ${\bf{Z}}$. ${\left( \cdot \right)^{\rm{T}}}$, ${\left( \cdot \right)^{\rm{H}}}$ and ${\left( \cdot \right)^{\rm{\ast}}}$ denote the transpose, Hermitian and conjugate operators respectively. $(\cdot)^{ \star }$ stands for the optimal value, and $(\cdot)^{ \dag }$ means the pseudo-inverse. $[\cdot]^{+}$ is the projection onto the non-negative number, i.e, if $y=[x]^{+}$, then $y=\rm{max}\{0,x\}$.
\section{Signal Model and Problem Formulation}\label{system}
\subsection{Signal Model}
We consider an IRS-aided communication network shown in Fig.~\ref{fig1} that consists of a BS, a legitimate IR and an Eve, all of which are equipped with multiple antennas. The number of transmit antennas at the BS is ${{N}_{T}}\ge 1$, and the numbers of receive antennas at the legitimate IR and the Eve are ${{N}_{I}}\ge 1$ and ${{N}_{E}}\ge 1$, respectively. To ensure secure transmission from the BS to the IR, AN is transmitted by the BS to interfere with the eavesdropper and thereby strengthen secrecy.
\begin{figure}[h!]
\centering
\includegraphics[width=3.5in]{SR_system_model.pdf}
\caption{An AN-aided MIMO secure communication system with IRS.}\vspace{-0.8cm}
\label{fig1}
\end{figure}
Under the above assumptions, the BS applies the TPC matrix to transmit the data streams together with the AN. The transmitted signal can be modeled as
\begin{align}
{\bf{x}} = {\bf{Vs}} + {\bf{n}}, \label{eq1t}
\end{align}
where ${\bf{V}}\in {{\mathbb{C}}^{{{N}_{T}}\times d}}$ is the TPC matrix; the number of data streams is $d\le \min ({{N}_{T}},{{N}_{I}})$; the transmitted data towards the IR is $\mathbf{s} \sim \mathcal{C}\mathcal{N}(0,{\mathbf{I}_{d}})$; and $\mathbf{n} \in {\cal C}{\cal N}({\bm{0}},{\bf{Z}})$ represents the AN random vector with zero mean and covariance matrix $\mathbf{Z}$.
Assuming that the wireless signals propagate in a non-dispersive, narrow-band manner, we model the equivalent channels of the BS-IRS link, the BS-IR link, the BS-Eve link, the IRS-IR link and the IRS-Eve link by the matrices $\mathbf{G} \in {{\mathbb{C}}^{M\times {{N}_{T}}}}$, ${{\bf{H}}_{b,I}}\in {{\mathbb{C}}^{{{N}_{I}}\times {{N}_{T}}}}$, ${{\bf{H}}_{b,E}}\in {{\mathbb{C}}^{{{N}_{E}}\times {{N}_{T}}}}$, ${{\bf{H}}_{R,I}}\in {{\mathbb{C}}^{{{N}_{I}}\times M}}$ and ${{\bf{H}}_{R,E}}\in {{\mathbb{C}}^{{{N}_{E}}\times M}}$, respectively. The phase shift coefficients of the IRS are collected in a diagonal matrix defined by ${\bf{\Phi}} ={\rm{diag}}\{ {{\phi }_{1}},\cdots ,{{\phi }_{m}},\cdots ,{{\phi }_{M}}\}$ with ${{\phi }_{m}}={{e}^{j{{\theta }_{m}}}}$, where ${{\theta }_{m}}\in [0,2\pi ]$ denotes the phase shift of the $m$-th reflection element. The signals reflected by the IRS multiple times are assumed to be absorbed or diffracted and are thus neglected. Then, the signal received at the legitimate IR is given by
\begin{equation}
{\bf{y}}_I = ({\bf{H}}_{b,I} + {\bf{H}}_{R,I}{\bf{\Phi}} {\bf{G}}) {\bf{x}} + {\bf{n}}_{I}, \label{eq2t}
\end{equation}
where ${{\bf{n}}_{I}}$ is the random noise vector at IR obeying the distribution ${{\bf{n}}_{I}}\sim \mathcal{C}\mathcal{N}({\bf{0}},\sigma _{I}^{2}{\bf{I}}_{{{N}_{I}}})$. The signal received at the Eve is
\begin{align}
{\bf{y}}_E = ({\bf{H}}_{b,E} + {\bf{H}}_{R,E}{\bf{\Phi}} {\bf{G}}){\bf{x}} + {{\bf{n}}_E}, \label{eq3t}
\end{align}
where ${\bf{n}}_{E}$ is the Eve's noise vector following the distribution ${\bf{n}}_{E}\sim \mathcal{C}\mathcal{N}({\bf{0}},\sigma _{E}^{2}{\bf{I}}_{{{N}_{E}}})$.
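As a concrete illustration of the signal model in \eqref{eq1t}-\eqref{eq3t}, the following Python sketch generates one random realization of the channels, the IRS phase-shift matrix and the received signals. All dimensions, channel statistics and noise powers below are illustrative assumptions, not values used in this paper.

```python
import numpy as np

rng = np.random.default_rng(0)
N_T, N_I, N_E, M, d = 4, 2, 2, 8, 2          # antennas, IRS elements, data streams

def crandn(*shape):
    """i.i.d. CN(0, 1) entries."""
    return (rng.standard_normal(shape) + 1j * rng.standard_normal(shape)) / np.sqrt(2)

# Channel matrices: BS-IRS, BS-IR, BS-Eve, IRS-IR, IRS-Eve
G, H_bI, H_bE = crandn(M, N_T), crandn(N_I, N_T), crandn(N_E, N_T)
H_RI, H_RE = crandn(N_I, M), crandn(N_E, M)

# IRS reflection matrix Phi = diag(e^{j*theta_1}, ..., e^{j*theta_M})
Phi = np.diag(np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, M)))

# Transmit signal x = V s + n, where n is AN with covariance Z = V_E V_E^H
V, V_E = crandn(N_T, d), crandn(N_T, N_T)
s = crandn(d)
x = V @ s + V_E @ crandn(N_T)

# Effective channels and received signals at the IR and the Eve
H_I_hat = H_bI + H_RI @ Phi @ G
H_E_hat = H_bE + H_RE @ Phi @ G
y_I = H_I_hat @ x + crandn(N_I)              # sigma_I = 1 assumed
y_E = H_E_hat @ x + crandn(N_E)              # sigma_E = 1 assumed
```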
Assume that the BS has acquired all the channel state information (CSI). The BS is then responsible for optimizing the IRS phase shifts and feeding them back to the IRS controller through a separate low-rate link, which may be wireless \cite{wu2019towards},\cite{wu2019beamforming} or wired \cite{tan2016increasing}. The assumption of perfect CSI knowledge is idealistic, since CSI estimation for IRS networks is challenging. However, the algorithms developed here allow us to derive the relevant performance upper bounds for realistic scenarios with CSI errors. Recently, we investigated robust and secure transmission design for IRS-aided MISO wireless communication systems in \cite{hong2020robust} by considering a statistical CSI error model for the cascaded eavesdropper channels. Its extension to the MIMO scenario will be studied in our future work.
Upon substituting $\bf{x}$ into \eqref{eq2t}, ${\bf{y}}_{I}$ can be rewritten as
\begin{align}
{\bf{y}}_I = {\hat {\bf{H}}_I}({\bf{V}}{\bf{s}} + {\bf{n}}) + {{\bf{n}}_I}{\rm{ = }}{\hat {\bf{H}}_I}{\bf{V}}{\bf{s}} + {\hat {\bf{H}}_I}{\bf{n}} + {{\bf{n}}_I}, \label{eq4t}
\end{align}
where ${{\hat{\bf{H}}}_{I}}\overset{\triangle}{=} {{\bf{H}}_{b,I}}+{{\bf{H}}_{R,I}}{\bf{\Phi}} {\bf{G}}$ is defined as the equivalent channel spanning from the BS to the legitimate IR. Then, the data rate (bit/s/Hz) achieved by the legitimate IR is given by
\begin{align}
{R_I}({\bf{V}},{\bf{\Phi}} ,{\bf{Z}}) = {\rm{log}}\left| {{\bf{I}} + {{\hat {\bf{H}}}_I}{\bf{V}}{{\bf{V}}^H}\hat {\bf{H}}_I^H{\bf{J}}_I^{ - 1}} \right|, \label{eq5t}
\end{align}
where ${{\bf{J}}_{I}}$ is the interference-plus-noise covariance matrix given by ${{\bf{J}}_{I}}\overset{\triangle}{=} {{\hat{\bf{H}}}_{I}}{\bf{Z}}{{\hat{\bf{H}}}_{I}}^{H}+\sigma _{I}^{2}{{\bf{I}}_{{{N}_{I}}}}$.
Upon substituting $\bf{x}$ into \eqref{eq3t}, ${{\bf{y}}_{E}}$ can be rewritten as
\begin{align}
{\bf{y}}_E = {\hat {\bf{H}}_E}({\bf{Vs}} + {\bf{n}}) + {\bf{n}}_E = {\hat {\bf{H}}_E}{\bf{Vs}} + {\hat {\bf{H}}_E}{\bf{n}} + {\bf{n}}_E,
\label{eq6t}
\end{align}
where ${{\hat{\bf{H}}}_{E}}\overset{\triangle}{=} {{\bf{H}}_{b,E}}+{{\bf{H}}_{R,E}}{\bf{\Phi}} {\bf{G}}$ is defined as the equivalent channel spanning from the BS to the Eve. Then, the data rate (bit/s/Hz) achieved by the Eve is given by
\begin{align}
{R_E}({\bf{V}},{\bf{\Phi}} ,{\bf{Z}}) = {\rm{log}}\left| {{\bf{I}} + {{\hat {\bf{H}}}_E}{\bf{V}}{{\bf{V}}^H}\hat {\bf{H}}_E^H{\bf{J}}_E^{ - 1}} \right|, \label{eq7t}
\end{align}
where ${{\bf{J}}_{E}}$ is the interference-plus-noise covariance matrix given by ${{\bf{J}}_{E}}\overset{\triangle}{=} {{\hat{\bf{H}}}_{E}}{\bf{Z}}{{\hat{\bf{H}}}_{E}}^{H}+\sigma _{E}^{2}{{\bf{I}}_{{{N}_{E}}}}$.
The achievable secrecy rate is given by
\begin{align}\label{CANorign}
{{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{\bf{\Phi}} ,{\bf{Z}})&{\rm{ = }}[{R_I}({\bf{V}},{\bf{\Phi}} ,{\bf{Z}}) - {R_E}({\bf{V}},{\bf{\Phi}} ,{\bf{Z}}){]^ + }\nonumber \\
&= {\rm{log}}\left| {{\bf{I}} + {{\hat {\bf{H}}}_I}{\bf{V}}{{\bf{V}}^H}\hat {\bf{H}}_I^H{\bf{J}}_I^{ - 1}} \right| - {\rm{log}}\left| {{\bf{I}} + {{\hat {\bf{H}}}_E}{\bf{V}}{{\bf{V}}^H}\hat {\bf{H}}_E^H{\bf{J}}_E^{ - 1}} \right| \nonumber \\
&= {\rm{log}}\left| {{\bf{I}} + {{\hat {\bf{H}}}_I}{\bf{V}}{{\bf{V}}^H}\hat {\bf{H}}_I^H{{({{\hat {\bf{H}}}_I}{\bf{Z}}{{\hat {\bf{H}}}_I}^H + \sigma _I^2{{\bf{I}}_{{N_I}}})}^{ - 1}}} \right| \nonumber \\
&\quad - {\rm{log}}\left| {{\bf{I}} + {{\hat {\bf{H}}}_E}{\bf{V}}{{\bf{V}}^H}\hat {\bf{H}}_E^H{{({{\hat {\bf{H}}}_E}{\bf{Z}}{{\hat {\bf{H}}}_E}^H + \sigma _E^2{{\bf{I}}_{{N_E}}})}^{ - 1}}} \right|.
\end{align}
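The secrecy rate in \eqref{CANorign} is straightforward to evaluate numerically. The Python sketch below does so for random channels and a random (unoptimized) TPC matrix and AN covariance; all sizes and noise powers are illustrative assumptions, and the sketch only demonstrates the computation.

```python
import numpy as np

rng = np.random.default_rng(1)
N_T, N_I, N_E, d = 4, 2, 2, 2
sigma_I2 = sigma_E2 = 1.0
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)

# Effective channels \hat{H}_I, \hat{H}_E (random stand-ins here)
H_I, H_E = crandn(N_I, N_T), crandn(N_E, N_T)
V = crandn(N_T, d)                           # TPC matrix
A = crandn(N_T, N_T)
Z = A @ A.conj().T                           # AN covariance, PSD by construction

def rate(H, n_rx, sigma2):
    """log2|I + H V V^H H^H J^{-1}| with J = H Z H^H + sigma2*I."""
    J = H @ Z @ H.conj().T + sigma2 * np.eye(n_rx)
    S = H @ V @ V.conj().T @ H.conj().T
    # log|I + S J^{-1}| = log|J + S| - log|J|; slogdet is numerically stable
    return (np.linalg.slogdet(J + S)[1] - np.linalg.slogdet(J)[1]) / np.log(2.0)

R_I = rate(H_I, N_I, sigma_I2)
R_E = rate(H_E, N_E, sigma_E2)
C_AN = max(R_I - R_E, 0.0)                   # the [x]^+ projection
```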
\subsection{Problem Formulation}
To maximize the SR, the TPC matrix ${\bf{V}}$ at the BS, the AN covariance matrix ${\bf{Z}}$ at the BS, and the phase shift matrix ${\bf{\Phi}}$ at the IRS should be jointly optimized subject to the maximum transmit power constraint and the unit-modulus constraints on the phase shifts. Hence, we formulate the SRM problem as
\begin{subequations} \label{optorig}
\begin{align}
\ &\ \underset{{\bf{V}},{\bf{\Phi}} ,{\bf{Z}}}{\mathop\text{max}} \ \ {{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{\bf{\Phi}} ,{\bf{Z}}{\rm{)}} \label{eq9ta} \\
& \ \ \text{s.t.} \quad \ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^H}{\rm{ + }}{\bf{Z}}{\rm{)}} \le {P_T},\label{eq9tb} \\
& \quad \quad \quad {\bf{Z}}\succeq 0, \label{eq9tc}\\
& \quad \quad \quad \! \left| {{{{\phi}} _m}} \right| = 1,m = 1, \cdots ,M, \label{phaseshifconstrnt}
\end{align}
\end{subequations}
where ${P_T}$ is the maximum transmit power limit. The optimal value of the SR in Problem (\ref{optorig}) is always non-negative, which can be proved by contradiction: if the optimal SR were negative, setting the TPC matrix ${\bf{V}}$ to the zero matrix would yield an SR equal to zero, which is larger than a negative SR.
By variable substitution ${\bf{Z}}={{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}$, where ${{\bf{V}}_{E}}\in {{\mathbb{C}}^{{{N}_{T}}\times {{N}_{T}}}}$, Problem (\ref{optorig}) is equivalent to
\begin{subequations} \label{optorigVE}
\begin{align}
&\ \underset{{\bf{V}} ,{{\bf{V}}_E},{\bf{\Phi}}}{\mathop\text{max}} \ \ {{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{{\bf{V}}_{E}},{\bf{\Phi}} {\rm{)}} \label{optorigVE_a} \\
& \ \ \text{s.t.} \quad \ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^H}{\rm{ + }}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{\rm{)}} \le {P_T},\label{optorigVE_b} \\
& \quad \quad \quad \! \left| {{{{\phi}} _m}} \right| = 1,m = 1, \cdots ,M, \label{optorigVE_c}
\end{align}
\end{subequations}
where the OF in (\ref{optorigVE_a}) is obtained by substituting ${\bf{Z}}={{\bf{V}}_{E}}{{\bf{V}}_{E}}^{H}$ into (\ref{CANorign}). The OF in (\ref{optorigVE_a}) is difficult to tackle, and the variables ${\bf{V}}$, ${\bf{V}}_{E}$ and ${\bf{\Phi}}$ are coupled with each other, which makes Problem (\ref{optorigVE}) difficult to solve. In addition, the unit-modulus constraints imposed on the phase shifts in (\ref{optorigVE_c}) aggravate the difficulty. In the following, we provide a low-complexity algorithm to solve this problem.
\vspace{-0.4cm}\section{A Low-Complexity Algorithm of BCD-MM}\label{algo}
Firstly, the OF of Problem (\ref{optorigVE}) is equivalently reformulated into a more tractable expression. Then, the BCD-MM method is proposed to optimize the TPC matrix ${\bf{V}}$, the matrix ${\bf{V}}_{E}$, and the phase shift matrix ${\bf{\Phi}}$ alternately.
\subsection{Reformulation of the Original Problem}
Firstly, the achievable SR ${{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{{\bf{V}}_E},{\bf{\Phi}}{\rm{)}}$ in (\ref{CANorign}) can be further simplified as
\begin{align} \label{CAN}
{{\rm{C}}_{AN}}\rm{(}{\bf{V}},{{\bf{V}}_E},{\bf{\Phi}}\rm{)}&{\rm{ = log}}\left| {{\bf{I}}_{{N_I}} + {{\hat {\bf{H}}}_I}{\bf{V}}{{\bf{V}}^H}\hat {\bf{H}}_I^H{{({{\hat {\bf{H}}}_I}{\bf{Z}}{{\hat {\bf{H}}}_I}^H + \sigma _I^2{{\bf{I}}_{{N_I}}})}^{ - 1}}} \right|{\rm{ + log}}\left| {{{\hat {\bf{H}}}_E}{\bf{Z}}{{\hat {\bf{H}}}_E}^H + \sigma _E^2{{\bf{I}}_{{N_E}}}} \right| \nonumber \\
&\quad - {\rm{log}}\left| {{{\hat {\bf{H}}}_E}{\bf{Z}}{{\hat {\bf{H}}}_E}^H + \sigma _E^2{{\bf{I}}_{{N_E}}} + {{\hat {\bf{H}}}_E}{\bf{V}}{{\bf{V}}^H}\hat {\bf{H}}_E^H} \right| \nonumber \\
&=\underbrace {{\rm{log}}\left| {{\bf{I}}_{{N_I}} + {{\hat {\bf{H}}}_I}{\bf{V}}{{\bf{V}}^H}\hat {\bf{H}}_I^H{{({{\hat {\bf{H}}}_I}{{\bf{V}}_E}{{\bf{V}}^H_E}{{\hat {\bf{H}}}_I}^H + \sigma _I^2{{\bf{I}}_{{N_I}}})}^{ - 1}}} \right|}_{{f_1}} \nonumber \\
&\quad {\rm{ + }}\underbrace {{\rm{log}}\left| {{{\bf{I}}_{{N_E}}} + {{\hat {\bf{H}}}_E}{{\bf{V}}_E}{{\bf{V}}^H_E}{{\hat {\bf{H}}}_E}^H(\sigma _E^2{{\bf{I}}_{{N_E}}})^{-1}} \right|}_{{f_2}} \nonumber \\
&\quad \underbrace {- {\rm{log}}\left| {{{\bf{I}}_{{N_E}}} + \sigma _E^{-2}{{\hat {\bf{H}}}_E}({\bf{V}}{{\bf{V}}^H}+{{\bf{V}}_E}{{\bf{V}}^H_E})\hat {\bf{H}}_E^H} \right|}_{{f_3}}.
\end{align}
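The rearrangement from \eqref{CANorign} to \eqref{CAN} can be verified numerically: for any problem instance, $R_I - R_E$ equals $f_1 + f_2 + f_3$. The Python check below uses random matrices with illustrative sizes; natural logarithms are used since the identity is base-independent.

```python
import numpy as np

rng = np.random.default_rng(2)
N_T, N_I, N_E, d = 4, 3, 3, 2
sI2, sE2 = 1.0, 1.5
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
logdet = lambda A: np.linalg.slogdet(A)[1]   # log|det(.)|, real for PD arguments

H_I, H_E = crandn(N_I, N_T), crandn(N_E, N_T)
V, V_E = crandn(N_T, d), crandn(N_T, N_T)
Z = V_E @ V_E.conj().T
S_I = H_I @ V @ V.conj().T @ H_I.conj().T
S_E = H_E @ V @ V.conj().T @ H_E.conj().T
J_I = H_I @ Z @ H_I.conj().T + sI2 * np.eye(N_I)
J_E = H_E @ Z @ H_E.conj().T + sE2 * np.eye(N_E)

# Original form R_I - R_E, using log|I + S J^{-1}| = log|J + S| - log|J|
C_orig = (logdet(J_I + S_I) - logdet(J_I)) - (logdet(J_E + S_E) - logdet(J_E))

# Decomposed form f1 + f2 + f3
f1 = logdet(J_I + S_I) - logdet(J_I)
f2 = logdet(np.eye(N_E) + H_E @ Z @ H_E.conj().T / sE2)
f3 = -logdet(np.eye(N_E) + (S_E + H_E @ Z @ H_E.conj().T) / sE2)
```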
The expression $f_1$ represents the data rate of the legitimate IR, which can be reformulated by exploiting the relationship between the data rate and the mean-square error (MSE) for the optimal decoding matrix. Specifically, a linear decoding matrix ${{\bf{U}}_{I}}\in {{\mathbb{C}}^{{{N}_{I}}\times {d}}}$ is applied to obtain the signal estimate $\hat{{\bf{s}}}$ at the legitimate IR, and the MSE matrix of the legitimate IR is given by
\begin{align} \label{MSE_EI}
{{\bf{E}}_I}({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E})& \buildrel \Delta \over = {{\mathbb{E}}_{{\bf{s}},{\bf{n}},{{\bf{n}}_I}}}\left[ {(\hat {\bf{s}}-{\bf{s}} ){{(\hat {\bf{s}}-{\bf{s}} )}^H}} \right] \nonumber \\
&{\rm{ = }}({{\bf{U}}_I}^H{{\hat {\bf{H}}}_I}{\bf{V}}-{\bf{I}}_{d}){({{\bf{U}}_I}^H{{\hat {\bf{H}}}_I}{\bf{V}}-{\bf{I}}_{d} )^H} + {{\bf{U}}_I}^H({{\hat {\bf{H}}}_I}{{\bf{V}}_E}{{\bf{V}}_E}^H{{\hat {\bf{H}}}_I}^H{\rm{ + }}\sigma _I^2{{\bf{I}}_{{N_I}}}){{\bf{U}}_I}.
\end{align}
By introducing an auxiliary matrix ${{\bf{W}}_I}\succeq 0$, ${{\bf{W}}_I}\in {{\mathbb{C}}^{{d}\times {d}}}$, and exploiting fact 3) of Lemma 4.1 in \cite{shi2015secure}, we have
\begin{align} \label{lowboundh1f1}
f_1&{\rm{ = }} \mathop \text{max}\limits_{{{\bf{U}}_I},{{\bf{W}}_I}\succeq 0} h_1({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E},{{\bf{W}}_I})\nonumber \\
&\buildrel \Delta \over = \mathop \text{max}\limits_{{{\bf{U}}_I},{{\bf{W}}_I}\succeq 0} \log \left| {{{\bf{W}}_I}} \right| - {\rm{Tr}}({{\bf{W}}_I}{{\bf{E}}_I}({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E})) + d.
\end{align}
$h_1({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E},{{\bf{W}}_I})$ is concave with respect to (w.r.t.) each of the matrices ${{\bf{U}}_I}$, ${\bf{V}}$, ${{\bf{V}}_E}$ and ${{\bf{W}}_I}$ when the other three are fixed. According to facts 1) and 2) of Lemma 4.1 in \cite{shi2015secure}, the optimal ${{\bf{U}}^ {\star}_{I}}$ and ${{\bf{W}}^ {\star}_I}$ that maximize $h_1({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E},{{\bf{W}}_I})$ are given by
\begin{align}\label{optUI}
{{\bf{U}}^ {\star}_I}{\rm{ = }}\text{arg} \mathop \text{max}\limits_{{{\bf{U}}_I}} h_1({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E},{{\bf{W}}_I}) {\rm{ = }}({\hat {\bf{H}}_I}{{\bf{V}}_E}{{\bf{V}}^H_E}{\hat {\bf{H}}_I}^H{\rm{ + }}\sigma _I^2{{\bf{I}}_{{N_I}}}{\rm{ + }}{\hat {\bf{H}}_I}{\bf{V}}{{\bf{V}}^H}{\hat {\bf{H}}_I}^H)^{-1}{\hat {\bf{H}}_I}{\bf{V}},
\end{align}
\begin{align}\label{optWI}
{{\bf{W}}^ {\star}_I}{\rm{ = }}\text{arg} \mathop \text{max}\limits_{{{\bf{W}}_I}\succeq 0} h_1({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E},{{\bf{W}}_I}) {\rm{ = [}}{{\bf{E}}^ {\star}_I}({{\bf{U}}^ {\star}_I},{\bf{V}},{{\bf{V}}_E}){]^{ - 1}},
\end{align}
where ${{\bf{E}}^ {\star}_I}$ is obtained by plugging the expression of ${{\bf{U}}^ {\star}_I}$ into ${{\bf{E}}_I}({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E})$ as
\begin{align}\label{optEI}
{{\bf{E}}^ {\star}_I}({{\bf{U}}^ {\star}_I},{\bf{V}},{{\bf{V}}_E}){\rm{ = }}( {{\bf{U}}^{{\star}H}_I}{\hat {\bf{H}}_I}{\bf{V}}-{\bf{I}}_{d}){({{\bf{U}}^{{\star}H}_I}{\hat {\bf{H}}_I}{\bf{V}}-{\bf{I}}_{d})^H} + {{\bf{U}}^{{\star}H}_I}({\hat {\bf{H}}_I}{{\bf{V}}_E}{{\bf{V}}^H_E}{\hat {\bf{H}}_I}^H{\rm{ + }}\sigma _I^2{{\bf{I}}_{{N_I}}}){{\bf{U}}^{{\star}}_I}{{\rm{}}}.
\end{align}
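The optimality of \eqref{optUI} and \eqref{optWI} can be checked numerically through the well-known WMMSE identity: with ${\bf{U}}_I^{\star}$ from \eqref{optUI} and ${\bf{W}}_I^{\star}=({\bf{E}}_I^{\star})^{-1}$, the surrogate $h_1$ attains $f_1=-\log|{\bf{E}}_I^{\star}|$. The Python sketch below verifies this for one random instance (natural logarithms; sizes and matrices are illustrative assumptions).

```python
import numpy as np

rng = np.random.default_rng(3)
N_T, N_I, d = 4, 3, 2
sigma_I2 = 1.0
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
logdet = lambda A: np.linalg.slogdet(A)[1]

H_I = crandn(N_I, N_T)                       # effective channel \hat{H}_I
V, V_E = crandn(N_T, d), crandn(N_T, N_T)

J_I = H_I @ V_E @ V_E.conj().T @ H_I.conj().T + sigma_I2 * np.eye(N_I)
S_I = H_I @ V @ V.conj().T @ H_I.conj().T
f1 = logdet(J_I + S_I) - logdet(J_I)         # f1 = log|I + S_I J_I^{-1}|

# MMSE receiver U_I^* and the resulting MSE matrix E_I^*
U_I = np.linalg.inv(J_I + S_I) @ H_I @ V
T = U_I.conj().T @ H_I @ V - np.eye(d)
E_I = T @ T.conj().T + U_I.conj().T @ J_I @ U_I

# Optimal weight W_I^* = (E_I^*)^{-1} and the surrogate value h_1
W_I = np.linalg.inv(E_I)
h1 = logdet(W_I) - np.trace(W_I @ E_I).real + d
```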
Similarly, by introducing the auxiliary matrices ${{\bf{W}}_E}\succeq 0$, ${{\bf{W}}_E}\in {{\mathbb{C}}^{{{N}_{T}}\times {{N}_{T}}}}$ and ${{\bf{U}}_{E}}\in {{\mathbb{C}}^{{{N}_{E}}\times {{N}_{T}}}}$, and exploiting fact 3) of Lemma 4.1 in \cite{shi2015secure}, we have
\begin{align} \label{lowboundh2f2}
f_2&{\rm{ = }} \mathop \text{max}\limits_{{{\bf{U}}_E},{{\bf{W}}_E}\succeq 0} h_2({{\bf{U}}_E},{{\bf{V}}_E},{{\bf{W}}_E}) \nonumber \\
&\buildrel \Delta \over = \mathop \text{max}\limits_{{{\bf{U}}_E},{{\bf{W}}_E}\succeq 0} \log \left| {{{\bf{W}}_E}} \right| - {\rm{Tr}}({{\bf{W}}_E}{{\bf{E}}_E}({{\bf{U}}_E},{{\bf{V}}_E})) + {N}_{T}.
\end{align}
$h_2({{\bf{U}}_E},{{\bf{V}}_E},{{\bf{W}}_E})$ is concave w.r.t. each of the matrices ${{\bf{U}}_E}$, ${{\bf{V}}_E}$ and ${{\bf{W}}_E}$ when the other two matrices are given. According to facts 1) and 2) of Lemma 4.1 in \cite{shi2015secure}, the optimal ${{\bf{U}}^ {\star}_{E}}$ and ${{\bf{W}}^ {\star}_E}$ that maximize $h_2({{\bf{U}}_E},{{\bf{V}}_E},{{\bf{W}}_E})$ are given by
\begin{align} \label{optUE}
{{\bf{U}}^ {\star}_E}{\rm{ = }}\text{arg} \mathop \text{max}\limits_{{{\bf{U}}_E}} h_2({{\bf{U}}_E},{{\bf{V}}_E},{{\bf{W}}_E}) {\rm{ = }}(\sigma _E^2{{\bf{I}}_{{N_E}}}{\rm{ + }}{\hat {\bf{H}}_E}{{\bf{V}}_E}{{{\bf{V}}^H_E}}{\hat {\bf{H}}_E}^H)^{-1}{\hat {\bf{H}}_E}{{\bf{V}}_E},
\end{align}
\begin{align} \label{optWE}
{{\bf{W}}^ {\star}_E}{\rm{ = }}\text{arg} \mathop \text{max}\limits_{{{\bf{W}}_E}\succeq 0} h_2({{\bf{U}}_E},{{\bf{V}}_E},{{\bf{W}}_E}) {\rm{ = [}}{{\bf{E}}^ {\star}_E}({{\bf{U}}^ {\star}_E},{{\bf{V}}_E}){]^{ - 1}},
\end{align}
where ${{\bf{E}}^ {\star}_E}$ is obtained by plugging the expression of ${{\bf{U}}^ {\star}_E}$ into ${{\bf{E}}_E}({{\bf{U}}_E},{{\bf{V}}_E})$ as
\begin{align}\label{optEE}
\begin{array}{l}
{{\bf{E}}^ {\star}_E}({{\bf{U}}^ {\star}_E},{{\bf{V}}_E}) = ({{\bf{U}}_E}^{{\star}H}{{\hat {\bf{H}}}_E}{\bf{V}}_E-{\bf{I}}_{N_{T}}){({{\bf{U}}^{{\star}H}_E}{{\hat {\bf{H}}}_E}{\bf{V}}_E-{\bf{I}}_{N_{T}} )^H} + {{\bf{U}}^{{\star}H}_E}(\sigma _E^2{{\bf{I}}_{{N_E}}}){{\bf{U}}^ {\star}_E}.
\end{array}
\end{align}
By using Lemma 1 in \cite{li2013transmit}, we have
\begin{align} \label{lowboundh3f3}
f_3&{\rm{ = }} \mathop \text{max}\limits_{{{\bf{W}}_X}\succeq 0} h_3({{\bf{V}}},{{\bf{V}}_E},{{\bf{W}}_X}) \nonumber \\
&{\rm{ = }} \mathop \text{max}\limits_{{{\bf{W}}_X}\succeq 0} \log \left| {{{\bf{W}}_X}} \right| - {\rm{Tr}}({{\bf{W}}_X}{{\bf{E}}_X}({{\bf{V}}},{{\bf{V}}_E})) + {N}_{E},
\end{align}
where ${{\bf{W}}_X}\succeq 0$, ${{\bf{W}}_X}\in {{\mathbb{C}}^{{{N}_{E}}\times {{N}_{E}}}}$ is the introduced auxiliary matrix, and
\begin{align} \label{MSE_EX}
\begin{array}{l}
{{\bf{E}}_X}({{\bf{V}}},{{\bf{V}}_E})\buildrel \Delta \over = {{{\bf{I}}_{{N_E}}} + \sigma _E^{-2}{{\hat {\bf{H}}}_E}({\bf{V}}{{\bf{V}}^H}+{{\bf{V}}_E}{{\bf{V}}^H_E})\hat {\bf{H}}_E^H}.
\end{array}
\end{align}
$h_3({{\bf{V}}},{{\bf{V}}_E},{{\bf{W}}_X})$ is concave w.r.t. each of the matrices ${{\bf{V}}}$, ${{\bf{V}}_E}$ and ${{\bf{W}}_X}$ when the other two matrices are given. The optimal ${{\bf{W}}^ {\star}_{X}}$ that maximizes $h_3({{\bf{V}}},{{\bf{V}}_E},{{\bf{W}}_X})$ is
\begin{align} \label{optWX}
{{\bf{W}}^ {\star}_X}{\rm{ = }}\text{arg} \mathop \text{max}\limits_{{{\bf{W}}_X}\succeq 0} h_3({{\bf{V}}},{{\bf{V}}_E},{{\bf{W}}_X}) {\rm{ = [}}{{\bf{E}}_X}({{\bf{V}}},{{\bf{V}}_E}){]^{ - 1}}.
\end{align}
By substituting (\ref{lowboundh1f1}), (\ref{lowboundh2f2}), (\ref{lowboundh3f3}) into (\ref{CAN}), we have
\begin{align}\label{lowboundCAN}
{{\rm{C}}_{AN}}{\rm{(}}{\bf{V}},{{\bf{V}}_E}{\rm{)}}&=\mathop \text{max}\limits_{{{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X}} {{\rm{C}}^{l}_{AN}}({{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X},{\bf{V}},{{\bf{V}}_E}),
\end{align}
where
\begin{align} \label{lowboundCANlow}
{{\rm{C}}^{l}_{AN}}({{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X},{\bf{V}},{{\bf{V}}_E})\buildrel \Delta \over =&h_1({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E},{{\bf{W}}_I})+h_2({{\bf{U}}_E},{{\bf{V}}_E},{{\bf{W}}_E}) \nonumber \\
&+h_3({{\bf{V}}},{{\bf{V}}_E},{{\bf{W}}_X}).
\end{align}
Obviously, ${{\rm{C}}^{l}_{AN}}({{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X},{\bf{V}},{{\bf{V}}_E})$ is a concave function w.r.t. each of the matrices ${{\bf{U}}_I}$, ${{\bf{W}}_I}$, ${{\bf{U}}_E}$, ${{\bf{W}}_E}$, ${{\bf{W}}_X}$, ${\bf{V}}$ and ${{\bf{V}}_E}$ when the other six matrices are given. By substituting (\ref{lowboundCAN}) into Problem (\ref{optorigVE}), we have the following equivalent problem:
\begin{subequations}\label{optorigVElowerbnd}
\begin{align}
\ &\ \underset{{{\bf{U}}_I},{{\bf{W}}_I}\succeq 0,{{\bf{U}}_E},{{\bf{W}}_E}\succeq 0,{{\bf{W}}_X}\succeq 0,{\bf{V}} ,{{\bf{V}}_E},{\bf{\Phi}}}{\mathop\text{max}} \ \ {{\rm{C}}^{l}_{AN}}({{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X},{\bf{V}},{{\bf{V}}_E},{\bf{\Phi}}) \label{optorigVElowerbnda} \\
& \quad\quad\quad\quad\quad\quad \text{s.t.} \quad \ {\rm{Tr(}}{\bf{V}}{{\bf{V}}^H}{\rm{ + }}{{\bf{V}}_{E}}{{\bf{V}}_{E}}^{H}{\rm{)}} \le {P_{T}},\label{optorigVElowerbndb} \\
& \quad\quad\quad\quad\quad\quad \quad\quad \ \ \left| {{{{\phi}} _m}} \right| = 1,m = 1, \cdots ,M. \label{optorigVElowerbndc}
\end{align}
\end{subequations}
To solve Problem \eqref{optorigVElowerbnd}, we apply the BCD method, each iteration of which consists of the following two steps. First, with ${\bf{V}} ,{{\bf{V}}_E},{\bf{\Phi}}$ given, update ${{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X}$ by using \eqref{optUI}, \eqref{optWI}, \eqref{optUE}, \eqref{optWE} and \eqref{optWX}, respectively. Second, with ${{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X}$ given, update ${\bf{V}} ,{{\bf{V}}_E},{\bf{\Phi}}$ by solving the following subproblem:
\begin{subequations} \label{optorigVElowerbndSmp}
\begin{align}
&\ \underset{{\bf{V}} ,{{\bf{V}}_E},{\bf{\Phi}}}{\mathop\text{min}} \ \ -\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}})-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{U}}^{H}_{I}}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V})+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}}\mathbf{V}) \nonumber \\
&\quad \quad \quad \quad -\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}^{H}_{E}}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}^{H}_{E}}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+\text{Tr}({{\mathbf{V}}^{H}_{E}}{{\mathbf{H}}_{VE}}{{\mathbf{V}}_{E}}) \label{optorigVElowerbndSmpa} \\
& \ \ \text{s.t.} \quad {\rm{Tr(}}{\bf{V}}{{\bf{V}}^H}{\rm{ + }}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{\rm{)}} \le P_{T},\label{optorigVElowerbndSmpb} \\
& \quad \quad \quad \! \left| {{{{\phi}} _m}} \right| = 1,m = 1, \cdots ,M, \label{optorigVElowerbndSmpc}
\end{align}
\end{subequations}
where
\begin{align} \label{HV}
{{\mathbf{H}}_{V}}={{\mathbf{\hat{H}}}_{I}}^{H}{{\mathbf{U}}_{I}}{{\mathbf{W}}_{I}}{{\mathbf{U}}^{H}_{I}}{{\mathbf{\hat{H}}}_{I}}+\sigma _E^{-2}\mathbf{\hat{H}}_{E}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}},
\end{align}
\begin{align} \label{HVE}
{{\mathbf{H}}_{VE}}={{\mathbf{\hat{H}}}_{I}}^{H}{{\mathbf{U}}_{I}}{{\mathbf{W}}_{I}}{{\mathbf{U}}^{H}_{I}}{{\mathbf{\hat{H}}}_{I}}+{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{U}}_{E}}{{\mathbf{W}}_{E}}{{\mathbf{U}}^{H}_{E}}{{\mathbf{\hat{H}}}_{E}}+\sigma _E^{-2}{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}}.
\end{align}
Problem (\ref{optorigVElowerbndSmp}) is obtained from Problem \eqref{optorigVElowerbnd} by taking the ${{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X}$ as constant values, and the specific derivations are given in Appendix \ref{firstconstrntDeriv}.
It is obvious that Problem (\ref{optorigVElowerbndSmp}) is much easier to tackle than Problem (\ref{optorigVE}) owing to the convex quadratic OF in (\ref{optorigVElowerbndSmpa}). In the following, we solve Problem (\ref{optorigVElowerbndSmp}) instead of Problem (\ref{optorigVE}), and optimize the matrices ${\bf{V}}$, ${\bf{V}}_{E}$ and the phase shift matrix $\mathbf{\Phi }$.
\vspace{-0.4cm}\subsection{Optimizing the Matrices ${\bf{V}}$ and ${\bf{V}}_{E}$}\label{kodsijcosakpdc}
In this subsection, the TPC matrix ${\bf{V}}$ and the matrix ${\bf{V}}_{E}$ are optimized with $\mathbf{\Phi }$ fixed. Since the phase shifts $\mathbf{\Phi }$ are fixed, the unit-modulus constraint is removed, and Problem (\ref{optorigVElowerbndSmp}) reduces to
\begin{subequations} \label{optorigVElowerbndSmpNOfai}
\begin{align}
& \ \ \underset{{\bf{V}} ,{{\bf{V}}_E}}{\mathop\text{min}} \ \ -\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}})-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{U}}^{H}_{I}}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V})+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}}\mathbf{V}) \nonumber \\
&\quad \quad \quad \quad -\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}^{H}_{E}}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}^{H}_{E}}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+\text{Tr}({{\mathbf{V}}^{H}_{E}}{{\mathbf{H}}_{VE}}{{\mathbf{V}}_{E}}) \label{eq10ta} \\
& \ \ \text{s.t.} \quad {\rm{Tr(}}{\bf{V}}{{\bf{V}}^H}{\rm{ + }}{{\bf{V}}_{E}}{{\bf{V}}^{H}_{E}}{\rm{)}} \le P_{T}.\label{eq10tb}
\end{align} \label{eq10t}
\end{subequations}
The above problem is a convex QCQP, and standard optimization packages such as CVX \cite{grant2014cvx} can be exploited to solve it. However, the computational burden is heavy. To reduce the complexity, near-optimal closed-form expressions of the TPC matrix and the AN covariance matrix are derived by applying the Lagrangian multiplier method.
Since Problem \eqref{optorigVElowerbndSmpNOfai} is convex, Slater's condition is satisfied, and hence the duality gap between Problem \eqref{optorigVElowerbndSmpNOfai} and its dual problem is zero. Thus, Problem \eqref{optorigVElowerbndSmpNOfai} can be solved by addressing its dual problem if the latter is easier. To this end, by introducing the Lagrange multiplier $\lambda$ associated with the power constraint in \eqref{optorigVElowerbndSmpNOfai}, the Lagrangian function of Problem \eqref{optorigVElowerbndSmpNOfai} is obtained as
\begin{align} \label{LagrgnforVVE}
\mathcal{L}\left( {{\bf{V}},{{\bf{V}}_E},\lambda } \right) &\!\buildrel \Delta \over = \!- {\rm{Tr}}\left( {{\bf{W}}_I}{{\bf{V}}^H}{\bf{\hat H}}_I^H{{\bf{U}}_I} \right)\! - \!{\rm{Tr}}\left( {{{\bf{W}}_I}{\bf{U}}_I^H{{{\bf{\hat H}}}_I}{\bf{V}}} \right)\! +\! {\rm{Tr}}\left( {{{\bf{V}}^H}{{\bf{H}}_V}{\bf{V}}} \right) \!-\! {\rm{Tr}}\left( {{{\bf{W}}_E}{\bf{V}}_E^H{\bf{\hat H}}_E^H{{\bf{U}}_E}} \right)\nonumber \\
& \quad- {\rm{Tr}}\left( {{{\bf{W}}_E}{\bf{U}}_E^H{{{\bf{\hat H}}}_E}{{\bf{V}}_E}} \right) + {\rm{Tr}}\left( {{\bf{V}}_E^H{{\bf{H}}_{VE}}{{\bf{V}}_E}} \right)+ \lambda[ {{\rm{Tr}}\left( {{\bf{V}}{{\bf{V}}^H} + {{\bf{V}}_E}{\bf{V}}_E^H} \right)} -P_{T}] \nonumber \\
&= - {\rm{Tr}}\left( {{{\bf{W}}_I}{{\bf{V}}^H}{\bf{\hat H}}_I^H{{\bf{U}}_I}} \right) - {\rm{Tr}}\left( {{{\bf{W}}_I}{\bf{U}}_I^H{{{\bf{\hat H}}}_I}{\bf{V}}} \right) + {\rm{Tr}}\left[ {{{\bf{V}}^H}\left( {{{\bf{H}}_V} + \lambda{\bf{I}}} \right){\bf{V}}} \right] \nonumber \\
&\quad- {\rm{Tr}}\left( {{{\bf{W}}_E}{\bf{V}}_E^H{\bf{\hat H}}_E^H{{\bf{U}}_E}} \right)\! - \!{\rm{Tr}}\left( {{{\bf{W}}_E}{\bf{U}}_E^H{{{\bf{\hat H}}}_E}{{\bf{V}}_E}} \right)\! +\! {\rm{Tr}}\left[ {{\bf{V}}_E^H\left( {{{\bf{H}}_{VE}} \!+ \!\lambda {\bf{I}}} \right){{\bf{V}}_E}} \right] \!-\! \lambda {P_T}.
\end{align}
Then the dual problem of Problem \eqref{optorigVElowerbndSmpNOfai} is
\begin{subequations} \label{DualoptforVVE}
\begin{align}
\mathop {\max }\limits_\lambda \quad \quad {\rm{ }}h\left( \lambda \right)
\label{DualforVVEa} \\
\text{s.t.}\quad \quad {\rm{ }}\lambda \ge 0,
\label{DualforVVEb}
\end{align}
\end{subequations}
where $h\left( \lambda \right)$ is the dual function given by
\begin{align}\label{DualobjforVVE}
h\left( \lambda \right) \buildrel \Delta \over = \mathop {\min }\limits_{{\bf{V}},{{\bf{V}}_E}} {\rm{ }}\mathcal{L}\left( {{\bf{V}},{{\bf{V}}_E},\lambda } \right).
\end{align}
Note that Problem \eqref{DualobjforVVE} is a convex quadratic optimization problem with no constraint, which can be solved in closed form. The optimal solution ${\bf{V}^{\star}},{{\bf{V}^{{\star}}}_E}$ for Problem \eqref{DualobjforVVE} is
\begin{align}\label{optVVE}
[{\bf{V}^{\star}},{{\bf{V}^{{\star}}}_E}]=\text{arg}\mathop {\min }\limits_{{\bf{V}},{{\bf{V}}_E}} {\rm{ }}\mathcal{L}\left( {{\bf{V}},{{\bf{V}}_E},\lambda } \right).
\end{align}
By setting the first-order derivatives of $\mathcal{L}\left( {{\bf{V}},{{\bf{V}}_E},\lambda } \right)$ w.r.t. ${{{\bf{V}}}}$ and ${{{\bf{V}}_E}}$ to the zero matrix, we can obtain the optimal solutions as follows:
\begin{subequations} \label{DerivLagrgnforVVE1L1L2}
\begin{align}
\frac{\partial{\mathcal{L}\left( {{\bf{V}},{{\bf{V}}_E},\lambda } \right)}}{{\partial {\bf{V}}}}=\bf{0}, \label{DerivLagrgnforVVE1L1L2a} \\
\frac{\partial{\mathcal{L}\left( {{\bf{V}},{{\bf{V}}_E},\lambda } \right)}}{{\partial {{\bf{V}}_E}}}=\bf{0}. \label{DerivLagrgnforVVE1L1L2b}
\end{align}
\end{subequations}
The left-hand side of Equation \eqref{DerivLagrgnforVVE1L1L2a} can be expanded as
\begin{align} \label{DerivLagrgnforVVE1L1L2_expand}
\frac{\partial{\mathcal{L}\left( {{\bf{V}},{{\bf{V}}_E},\lambda } \right)}}{{\partial {\bf{V}}}}&=\frac{{\partial {\rm{Tr}}\left[ {{{\bf{V}}^H}\left( {{{\bf{H}}_V} + \lambda {\bf{I}}} \right){\bf{V}}} \right]}}{{\partial {\bf{V}}}}-\left( {{{\bf{W}}_I}{\bf{U}}_I^H{{{\bf{\hat H}}}_I}} \right)^H-\left( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}} \right) \nonumber \\
&=2\left( {{{\bf{H}}_V} + \lambda {\bf{I}}} \right){\bf{V}}-2\left( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}} \right).
\end{align}
Hence, \eqref{DerivLagrgnforVVE1L1L2a} becomes
\begin{align} \label{DerivLagrgnforVVE1L1L2Zero}
\left( {{{\bf{H}}_V} + \lambda {\bf{I}}} \right){\bf{V}}=\left( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}} \right).
\end{align}
Then the optimal solution ${\bf{V}^{\star}}$ for Problem \eqref{optVVE} is
\begin{align} \label{optimalV}
{{\bf{V}}^{\star}}&=\left( {{{\bf{H}}_V} + \lambda {\bf{I}}} \right)^{ \dag }\left( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}} \right) \nonumber \\
&\buildrel \Delta \over= {{\bf{\Theta }}_V}\left( \lambda \right)\left( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}} \right).
\end{align}
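A minimal numerical sketch of the closed-form update \eqref{optimalV}: build ${\bf{H}}_V$ as in \eqref{HV}, form ${\bf{V}}^{\star}=({\bf{H}}_V+\lambda{\bf{I}})^{\dag}\hat{\bf{H}}_I^H{\bf{U}}_I{\bf{W}}_I$, and confirm the stationarity condition \eqref{DerivLagrgnforVVE1L1L2Zero}. All matrices and the multiplier value below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
N_T, N_I, N_E, d = 4, 3, 3, 2
sigma_E2, lam = 1.0, 0.5
crandn = lambda *s: (rng.standard_normal(s) + 1j * rng.standard_normal(s)) / np.sqrt(2)
psd = lambda n: (lambda A: A @ A.conj().T + np.eye(n))(crandn(n, n))

H_I, H_E = crandn(N_I, N_T), crandn(N_E, N_T)   # effective channels
U_I = crandn(N_I, d)                             # current decoding matrix
W_I, W_X = psd(d), psd(N_E)                      # Hermitian positive definite weights

# H_V = H_I^H U_I W_I U_I^H H_I + sigma_E^{-2} H_E^H W_X H_E
H_V = (H_I.conj().T @ U_I @ W_I @ U_I.conj().T @ H_I
       + H_E.conj().T @ W_X @ H_E / sigma_E2)

rhs = H_I.conj().T @ U_I @ W_I                   # right-hand side of the condition
V_star = np.linalg.pinv(H_V + lam * np.eye(N_T)) @ rhs
```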
Similarly, by setting the first-order derivative of $\mathcal{L}\left( {{\bf{V}},{{\bf{V}}_E},\lambda } \right)$ w.r.t. ${{{\bf{V}}_E}}$ to the zero matrix, \eqref{DerivLagrgnforVVE1L1L2b} becomes
\begin{align} \label{DerivLagrgnforVE1Zero1}
2\left( {{{\bf{H}}_{VE}} + \lambda {\bf{I}}} \right){{\bf{V}}_E} - 2{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H = \bf{0}.
\end{align}
Then the optimal solution ${\bf{V}}_E^{\star}$ for Problem \eqref{optVVE} is
\begin{align} \label{optimalVE}
{\bf{V}}_E^{\star} &= \left( {{{\bf{H}}_{VE}} + \lambda {\bf{I}}} \right)^{ \dag }{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H \nonumber \\
&\buildrel \Delta \over ={{\bf{\Theta }}_{VE}}\left( \lambda \right){\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H.
\end{align}
Once the optimal solution ${\lambda}^{\star}$ of Problem \eqref{DualoptforVVE} is found, the final optimal ${\bf{V}^{\star}}$ and ${\bf{V}}_E^{\star}$ can be obtained. The value of ${\lambda}^{\star}$ should be chosen to satisfy the complementary slackness condition
\begin{align} \label{Compslack}
\lambda[ {\rm{Tr(}}{\bf{V}^{\star}}{{\bf{V}}^{{\star}H}}{\rm{ + }}{{\bf{V}}_{E}^{{\star}}}{{\bf{V}}^{{\star}H}_{E}}{\rm{)}}-P_{T}]=0.
\end{align}
We define
\begin{align} \label{Plambda}
P(\lambda)&\buildrel \Delta \over={\rm{Tr(}}{\bf{V}^{\star}}{{\bf{V}}^{{\star}H}}{\rm{ + }}{{\bf{V}}_{E}^{{\star}}}{{\bf{V}}^{{\star}H}_{E}}{\rm{)}} ={\rm{Tr(}}{\bf{V}^{\star}}{{\bf{V}}^{{\star}H}}{\rm{)}}+{\rm{Tr(}}{{\bf{V}}_{E}^{{\star}}}{{\bf{V}}^{{\star}H}_{E}}{\rm{)}},
\end{align}
where
\begin{align}
{\rm{Tr}}\left( {{\bf{V}}^{\star}{{\bf{V}}^{{\star}H}}} \right)&={\rm{Tr}}\left( {{{\bf{\Theta }}_V}\left( \lambda \right)( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} )}( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} )^H{{\bf{\Theta }}^H_V}\left( \lambda \right) \right) \nonumber \\
&={\rm{Tr}}\left( {{{\bf{\Theta }}^H_V}\left( \lambda \right){{\bf{\Theta }}_V}\left( \lambda \right)( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} )} ( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} )^H \right),
\end{align}
\begin{align}
{\rm{Tr}}\left( {{{\bf{V}}_E^{{\star}H}}{\bf{V}}_E^{\star}} \right)&={\rm{Tr}}\left( {{{\bf{\Theta }}_{VE}}\left( \lambda \right)( {{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H} )}( {{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H} )^H{{\bf{\Theta }}^H_{VE}}\left( \lambda \right) \right) \nonumber \\
&={\rm{Tr}}\left( {{{\bf{\Theta }}^H_{VE}}\left( \lambda \right){{\bf{\Theta }}_{VE}}\left( \lambda \right)( {{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H} )} ( {{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H} )^H \right).
\end{align}
Then $P(\lambda)$ becomes
\begin{align} \label{Plambdanew}
P(\lambda)={\rm{Tr}}\left( {{{\bf{\Theta }}^n_V}( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} ) ( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} )^H }\right) +{\rm{Tr}}\left( {{{\bf{\Theta }}^n_{VE}}( {{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H} ) ( {{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H} )^H }\right),
\end{align}
where
\begin{align} \label{ThetaVnew}
{{\bf{\Theta }}^n_V}&={{\bf{\Theta }}^H_V}\left( \lambda \right){{\bf{\Theta }}_V}\left( \lambda \right)=\left( {{{\bf{H}}_V} + \lambda {\bf{I}}} \right)^{ \dag H }\left( {{{\bf{H}}_V} + \lambda {\bf{I}}} \right)^{ \dag},
\end{align}
\begin{align} \label{ThetaVEnew}
{{\bf{\Theta }}^n_{VE}}&={{\bf{\Theta }}^H_{VE}}\left( \lambda \right){{\bf{\Theta }}_{VE}}\left( \lambda \right) =\left( {{{\bf{H}}_{VE}} + \lambda {\bf{I}}} \right)^{ \dag H}\left( {{{\bf{H}}_{VE}} + \lambda {\bf{I}}} \right)^{ \dag }.
\end{align}
To find the optimal ${\lambda^{\star}}\geq0$, we first check whether $\lambda=0$ is the optimal solution or not. If
\begin{align} \label{lambdazero}
P(0)= {\rm{Tr}}\left( {{{\bf{V}}^{{\star}H}}(0){\bf{V}}^{\star}(0)} \right) + {\rm{Tr}}\left( {{\bf{V}}_E^{{\star}H}(0){{\bf{V}}_E}^{\star}(0)} \right)\le {P_{T}},
\end{align}
then the optimal solutions are given by ${{\bf{V}}^{\star}}={{\bf{V}}}(0)$ and ${{\bf{V}}_{E}^{\star}}={{\bf{V}}_{E}}(0)$. Otherwise, the optimal $\lambda^{\star}>0$ is the solution of the equation $P(\lambda)=P_{T}$.
It is easy to verify that ${{\bf{H}}_V}$ and ${{\bf{H}}_{VE}}$ are positive semidefinite matrices. Let us denote the ranks of ${{\bf{H}}_V}$ and ${{\bf{H}}_{VE}}$ by $r_{V}={\rm{rank}}({\bf{H}}_{V})\le N_T$ and $r_{VE}={\rm{rank}}({\bf{H}}_{VE}) \le N_T$, respectively. By applying the singular value decomposition (SVD) to ${{\bf{H}}_V}$ and ${{\bf{H}}_{VE}}$, we have
\begin{equation}\label{svdforHVHVE}
{{\bf{H}}_V} = \left[ {{{\bf{P}}_{V,1}},{{\bf{P}}_{V,2}}} \right]{{\bf{\Sigma }}_V}{\left[ {{{\bf{P}}_{V,1}},{{\bf{P}}_{V,2}}} \right]^{\rm{H}}},{{\bf{H}}_{VE}} = \left[ {{{\bf{P}}_{{VE},1}},{{\bf{P}}_{{VE},2}}} \right]{{\bf{\Sigma }}_{VE}}{\left[ {{{\bf{P}}_{{VE},1}},{{\bf{P}}_{{VE},2}}} \right]^{\rm{H}}},
\end{equation}
where ${\bf{P}}_{V,1}$ comprises the first $r_V$ singular vectors associated with
the $r_V$ positive eigenvalues of ${{\bf{H}}_V}$, and ${\bf{P}}_{V,2}$ includes the last $N_T-r_V$ singular vectors associated with the $N_T-r_V$ zero-valued eigenvalues of ${{\bf{H}}_V}$, ${{\bm{\Sigma}} _V} = {\rm{diag}}\left\{ {{{\bm{\Sigma}} _{V,1}},{{\bf{0}}_{\left( {{N_T} - {r_V}} \right) \times \left( {{N_T} - {r_V}} \right)}}} \right\}$ with ${\bm{\Sigma}} _{V,1}$ representing the diagonal submatrix collecting the first $r_V$ positive eigenvalues. Similarly, the first $r_{VE}$ singular vectors corresponding to
the $r_{VE}$ positive eigenvalues of ${{\bf{H}}_{VE}}$ are contained in ${\bf{P}}_{VE,1}$, while the last $N_T-r_{VE}$ singular vectors corresponding to the $N_T-r_{VE}$ zero-valued eigenvalues of ${{\bf{H}}_{VE}}$ are held in ${\bf{P}}_{VE,2}$. ${{\bm{\Sigma}} _{VE}} = {\rm{diag}}\left\{ {{{\bm{\Sigma}} _{{VE},1}},{{\bf{0}}_{\left( {{N_T} - {r_{VE}}} \right) \times \left( {{N_T} - {r_{VE}}} \right)}}} \right\}$ is a diagonal matrix with ${\bm{\Sigma}} _{{VE},1}$ representing the diagonal submatrix gathering the first $r_{VE}$ positive eigenvalues. By defining ${{\bf{P}}_V} \buildrel \Delta \over = \left[ {{{\bf{P}}_{V,1}},{{\bf{P}}_{V,2}}} \right]$ and ${{\bf{P}}_{VE}} \buildrel \Delta \over = \left[ {{{\bf{P}}_{{VE},1}},{{\bf{P}}_{{VE},2}}} \right]$, and substituting \eqref{svdforHVHVE} into \eqref{ThetaVnew} and \eqref{ThetaVEnew}, $P(\lambda)$ becomes
\begin{align} \label{Plamda}
& P(\lambda) ={\rm {Tr}}\left({[{\left({{{\bf {P}}_{V}}{{\bf {\Sigma}}_{V}}{\bf {P}}_{V}^{H}+\lambda{{\bf {P}}_{V}}{\bf {P}}_{V}^{H}}\right)^{-1}}{\left({{{\bf {P}}_{V}}{{\bf {\Sigma}}_{V}}{\bf {P}}_{V}^{H}+\lambda{{\bf {P}}_{V}}{\bf {P}}_{V}^{H}}\right)^{-1}}]({{\bf {\hat{H}}}_{I}^{H}{{\bf {U}}_{I}}{\bf {W}}_{I}^{H}})({{\bf {\hat{H}}}_{I}^{H}{{\bf {U}}_{I}}{\bf {W}}_{I}^{H}})^{H}}\right)\nonumber \\
& \!\!+\!\!{\rm {Tr}}\!\left(\!{[\!{\left(\!{{{\bf {P}}_{VE}}{{\bf {\Sigma}}_{VE}}{\bf {P}}_{VE}^{H}\!+\!\lambda{{\bf {P}}_{VE}}{\bf {P}}_{VE}^{H}}\!\right)^{-1}}\!\!{\left({{{\bf {P}}_{VE}}{{\bf {\Sigma}}_{VE}}{\bf {P}}_{VE}^{H}\!+\!\lambda{{\bf {P}}_{VE}}{\bf {P}}_{VE}^{H}}\right)^{-1}}\!]\!(\!{{\bf {\hat{H}}}_{E}^{H}{{\bf {U}}_{E}}{\bf {W}}_{E}^{H}}\!)\!({{\bf {\hat{H}}}_{E}^{H}{{\bf {U}}_{E}}{\bf {W}}_{E}^{H}})^{H}}\!\right)\nonumber \\
& ={\rm {Tr}}\left({[{\left({{{\bf {\Sigma}}_{V}}+\lambda{\bf {I}}}\right)^{-2}}]{\bf {Z}}_{V}}\right)+{\rm {Tr}}\left({[{\left({{{\bf {\Sigma}}_{VE}}+\lambda{\bf {I}}}\right)^{-2}}]{\bf {Z}}_{VE}}\right)\nonumber \\
& {=}\sum\limits _{i=1}^{r_{V}}\left[{\frac{{{\left[{{\bf {Z}}_{V}}\right]}_{i,i}}}{{{\left({{{\left[{{\Sigma}_{V}}\right]}_{i,i}}\!+\!\lambda}\right)}^{2}}}}\right]+\sum\limits _{i=1}^{r_{VE}}\left[{\frac{{{\left[{{\bf {Z}}_{VE}}\right]}_{i,i}}}{{{\left({{{\left[{{\Sigma}_{VE}}\right]}_{i,i}}\!+\!\lambda}\right)}^{2}}}}\right]+\sum\limits _{i={r_{V}}+1}^{{N_{T}}}{\left[{\frac{{{\left[{{\bf {Z}}_{V}}\right]}_{i,i}}}{{{\left({\lambda}\right)}^{2}}}}\right]}\!+\!\sum\limits _{i={r_{VE}}+1}^{{N_{T}}}{\left[{\frac{{{\left[{{\bf {Z}}_{VE}}\right]}_{i,i}}}{{{\left({\lambda}\right)}^{2}}}}\right]},
\end{align}
where ${{\bf{Z}}_{V}}={\bf{P}}_V^H( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} ) ( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} )^H {{\bf{P}}_V}$ and ${{\bf{Z}}_{VE}}={\bf{P}}_{VE}^H( {{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H} ) ( {{\bf{\hat H}}_E^H{{\bf{U}}_E}{\bf{W}}_E^H} )^H {{\bf{P}}_{VE}}$. ${\left[ {{{\bf{Z}}_V}} \right]}_{i,i}$, ${\left[ {{{\bf{Z}}_{VE}}} \right]}_{i,i}$, ${\left[ {{{\Sigma}}_{V}} \right]}_{i,i}$, and ${\left[ {{{\Sigma}}_{VE}} \right]}_{i,i}$ represent the $i$th diagonal element of matrices ${{{\bf{Z}}_V}}$, ${{\bf{Z}}_{VE}}$, ${{{\Sigma}}_{V}}$, and ${{{\Sigma}}_{VE}}$, respectively. The first line of (\ref{Plamda}) is obtained by substituting (\ref{svdforHVHVE}) into the expression of $P({\lambda})$ in (\ref{Plambdanew}). It can be verified from the last line of (\ref{Plamda}) that $P({\lambda})$ is a monotonically decreasing function.
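As a numerical sanity check of this simplification, the following sketch verifies on synthetic data that ${\rm{Tr}}\big(({\bf{H}}+\lambda{\bf{I}})^{-2}{\bf{A}}\big)$ equals the eigenvalue-domain sum appearing in the last line of (\ref{Plamda}). The matrices ${\bf{H}}$ and ${\bf{A}}$ are random stand-ins for ${\bf{H}}_V$ and $( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} )( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{\bf{W}}_I^H} )^H$, respectively.

```python
import numpy as np

rng = np.random.default_rng(0)
N_T = 4

# Synthetic stand-ins: H plays the role of H_V (Hermitian PSD),
# A the role of (H_I^H U_I W_I^H)(H_I^H U_I W_I^H)^H.
X = rng.normal(size=(N_T, N_T)) + 1j * rng.normal(size=(N_T, N_T))
H = X @ X.conj().T
Y = rng.normal(size=(N_T, 2)) + 1j * rng.normal(size=(N_T, 2))
A = Y @ Y.conj().T

lam = 0.7
# Direct evaluation: Tr((H + lam*I)^{-2} A).
Minv = np.linalg.inv(H + lam * np.eye(N_T))
direct = np.trace(Minv @ Minv @ A).real

# Eigen-domain evaluation: sum_i [Z]_{ii} / (sigma_i + lam)^2 with Z = P^H A P,
# where H = P diag(sigma) P^H (SVD of a Hermitian PSD matrix).
sigma, P = np.linalg.eigh(H)
Z = P.conj().T @ A @ P
eig_form = sum(Z[i, i].real / (sigma[i] + lam) ** 2 for i in range(N_T))

assert abs(direct - eig_form) < 1e-8
```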
Then, the optimal $\lambda^{\star}$ can be obtained by solving the following equation,
\begin{align}\label{lambdaequat}
\sum\limits_{i = 1}^{r_{V}}\!\left[{\frac{{{\left[ {{{\bf{Z}}_{V}}} \right]}_{i,i}}}{{{{\left( {{{\left[ {{{\Sigma}}_{V}} \right]}_{i,i}} + \lambda} \right)}^2}}}}\right]\!+\!\!\sum\limits_{i = 1}^{r_{VE}}\!\left[{\frac{{{\left[ {{{\bf{Z}}_{VE}}} \right]}_{i,i}}}{{{{\left( {{{\left[ {{{\Sigma}}_{VE}} \right]}_{i,i}} + \lambda} \right)}^2}}}}\right]\!+\!\! \sum\limits_{i = {r_{V}} + 1}^{{N_T}}\! {\left[{\frac{{{\left[ {{{\bf{Z}}_{V}}} \right]}_{i,i}}}{{{{\left( {\lambda} \right)}^2}}}}\right]}\!+ \!\!\sum\limits_{i = {r_{VE}} + 1}^{{N_T}}\! {\left[{\frac{{{\left[ {{{\bf{Z}}_{VE}}} \right]}_{i,i}}}{{{{\left( {\lambda} \right)}^2}}}}\right]}=P_{T}.
\end{align}
To solve it, the bisection search method is utilized. Since $P(\lambda)$ is monotonically decreasing with $P(\infty )=0$ and $P(0)>P_{T}$, the solution to Equation (\ref{lambdaequat}) must exist and is unique. The lower bound of $\lambda^{\star}$ is a positive value approaching zero, while the upper bound of $\lambda^{\star}$ is given by
\begin{equation}\label{xddfcerf}
{\lambda^{\star}} < \sqrt {\frac{{\sum\limits_{i = 1}^{{N_T}} {{{\left[ {{{\bf{Z}}_V}} \right]}_{i,i}}} }+{\sum\limits_{i = 1}^{{N_T}} {{{\left[ {{{\bf{Z}}_{VE}}} \right]}_{i,i}}} }}{{{P_{T}}}}} \buildrel \Delta \over = \lambda^{{\rm{ub}}},
\end{equation}
which can be proved as follows:
\begin{align}\label{asdftg}
{P}(\lambda^{{\rm{ub}}})&=\sum\limits_{i = 1}^{r_{V}} {\frac{{{{\left[ {{{\bf{Z}}_V}} \right]}_{i,i}}}}{{{{\left({{{\left[{{{\Sigma}}_{V}} \right]}_{i,i}} + {\lambda^{{\rm{ub}}}}} \right)}^2}}}}+\sum\limits_{i = 1}^{r_{VE}} {\frac{{{{\left[ {{{\bf{Z}}_{VE}}}\right]}_{i,i}}}}{{{{\left( {{{\left[ {{{\Sigma}}_{VE}} \right]}_{i,i}} + {\lambda^{{\rm{ub}}}}} \right)}^2}}}}+\sum\limits_{i={r_{V}}+1}^{{N_{T}}}{\left[{\frac{{{\left[{{\bf{Z}}_{V}}\right]}_{i,i}}}{{{\left({\lambda^{{\rm{ub}}}}\right)}^{2}}}}\right]}\!+\!\sum\limits _{i={r_{VE}}+1}^{{N_{T}}}{\left[{\frac{{{\left[{{\bf{Z}}_{VE}}\right]}_{i,i}}}{{{\left({\lambda^{{\rm{ub}}}}\right)}^{2}}}}\right]}\nonumber \\
& < \sum\limits_{i = 1}^{{N_T}} {\frac{{{{\left[ {{{\bf{Z}}_V}} \right]}_{i,i}}}}{{{{\left( {\lambda^{{\rm{ub}}}} \right)}^2}}}}+\sum\limits_{i = 1}^{{N_T}} {\frac{{{{\left[ {{{\bf{Z}}_{VE}}} \right]}_{i,i}}}}{{{{\left( {\lambda^{{\rm{ub}}}} \right)}^2}}}} = {P_{T}}.
\end{align}
When the optimal $\lambda^{{\star}}$ is found, the optimal matrices ${{\bf{V}}^{{\star}}}$ and ${{\bf{V}}_{E}^{{\star}}}$ can be obtained by substituting $\lambda^\star$ into (\ref{optimalV}) and (\ref{optimalVE}).
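The bisection procedure described above can be sketched as follows. The eigenvalues and ${\bf{Z}}$-diagonals below are synthetic stand-ins for the quantities in (\ref{lambdaequat}), and the sketch also confirms the upper bound (\ref{xddfcerf}).

```python
import numpy as np

rng = np.random.default_rng(1)
N_T, P_T = 4, 1.0

# Synthetic stand-ins for the eigenvalues of H_V, H_VE (zeros model the
# rank-deficient tails) and the diagonals of Z_V, Z_VE.
sig_V = np.array([2.0, 0.5, 0.0, 0.0])   # rank r_V = 2
sig_VE = np.array([1.5, 0.8, 0.1, 0.0])  # rank r_VE = 3
z_V = rng.uniform(0.1, 1.0, N_T)
z_VE = rng.uniform(0.1, 1.0, N_T)

def P(lam):
    """Transmit power as a function of the multiplier (monotone decreasing)."""
    return np.sum(z_V / (sig_V + lam) ** 2) + np.sum(z_VE / (sig_VE + lam) ** 2)

# Upper bound lambda_ub: by construction P(lambda_ub) < P_T.
lam_ub = np.sqrt((z_V.sum() + z_VE.sum()) / P_T)
assert P(lam_ub) < P_T

# Bisection on (0, lam_ub]; P is continuous and strictly decreasing there.
lo, hi = 1e-12, lam_ub
for _ in range(200):
    mid = 0.5 * (lo + hi)
    if P(mid) > P_T:
        lo = mid
    else:
        hi = mid
lam_star = 0.5 * (lo + hi)
assert abs(P(lam_star) - P_T) < 1e-6
```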
\vspace{-0.4cm}\subsection{Optimizing the Phase Shifts $\mathbf{\Phi }$}\label{hwdi}
In this subsection, the phase shift matrix $\mathbf{\Phi }$ is optimized with ${{\bf{V}}}$ and ${{\bf{V}}_E}$ fixed. The transmit power constraint in Problem \eqref{optorigVElowerbndSmp} only involves ${{\bf{V}}}$ and ${{\bf{V}}_E}$, and is thus removed. Then, the optimization problem for $\mathbf{\Phi }$ reduced from Problem \eqref{optorigVElowerbndSmp} is formulated as
\begin{subequations} \label{optproblemforfaimin}
\begin{align}
&\ \ \underset{{\bf{\Phi}}}{\mathop\text{min}} \ \ {g_{0}}(\mathbf{\Phi })\buildrel \Delta \over=-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}})-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{U}}_{I}}^{H}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V})+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}}\mathbf{V}) \nonumber \\
&\quad \quad \quad \quad \quad \quad \ -\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+\text{Tr}({{\mathbf{V}}_{E}}^{H}{{\mathbf{H}}_{VE}}{{\mathbf{V}}_{E}}) \label{optproblemforfaimina} \\
& \ \ \text{s.t.} \quad \left| {{{{\phi}} _m}} \right| = 1,m = 1, \cdots ,M. \label{optproblemforfaiminb}
\end{align}
\end{subequations}
With the aid of some involved mathematical manipulations, which are detailed in Appendix \ref{newOFDeriv}, Problem (\ref{optproblemforfaimin}) can be transformed into a form that facilitates the MM algorithm. Based on the derivations in Appendix \ref{newOFDeriv}, the OF ${g_{0}}(\mathbf{\Phi })$ can be equivalently transformed into
\begin{align} \label{eqg0forPhi}
{g_{0}}(\mathbf{\Phi })={\rm{Tr}}\left( {{{\bf{\Phi}}^H{\bf{D}}^H}} \right) + {\rm{Tr}}\left( {{\bf{\Phi D}}} \right) + {\rm{Tr}}\left[ {{{\bf{\Phi }}^H}{{\bf{B}}_{VE}}{\bf{\Phi }}{{\bf{C}}_{VE}}} \right] + {\rm{Tr}}\left( {{{\bf{\Phi }}^H}{{\bf{B}}_V}{\bf{\Phi }}{{\bf{C}}_V}} \right)+C_t,
\end{align}
where $C_t$, ${\bf{D}}$, ${{\bf{C}}_{VE}}$, ${{\bf{C}}_{V}}$, ${{\bf{B}}_{VE}}$ and ${{\bf{B}}_{V}}$ are constant with respect to ${\bf{\Phi }}$, and are given in Appendix \ref{newOFDeriv}.
By exploiting the matrix properties in \cite[Eq. (1.10.6)]{zhang2017matrix}, the trace operators can be removed, and the third and fourth terms in (\ref{eqg0forPhi}) become
\begin{subequations}\label{saddewde}
\begin{align}
{\rm{Tr}}\left( {{{\bm{\Phi}} ^{\rm{H}}}{\bf{B}}_{VE}{\bm{\Phi}} {\bf{C}}_{VE}} \right) = {{\bm{\phi}} ^{\rm{H}}}\left( {{\bf{B}}_{VE} \odot {{\bf{C}}_{VE}^{\rm{T}}}} \right){\bm{\phi}}, \\
{\rm{Tr}}\left( {{{\bm{\Phi}} ^{\rm{H}}}{\bf{B}}_{V}{\bm{\Phi}} {\bf{C}}_{V}} \right) = {{\bm{\phi}} ^{\rm{H}}}\left( {{\bf{B}}_{V} \odot {{\bf{C}}_{V}^{\rm{T}}}} \right){\bm{\phi}},
\end{align}
\end{subequations}
where ${\bm{\phi}} \buildrel \Delta \over = {\left[ {{e^{j{\theta _1}}}, \cdots ,{e^{j{\theta _m}}}, \cdots ,{e^{j{\theta _M}}}} \right]^{\rm{T}}}$ is a vector holding the diagonal elements of ${\bm{\Phi}}$.
Similarly, the trace operators can be removed for the first and second terms in (\ref{eqg0forPhi}) as
\begin{equation}\label{sdewf}
{\rm{Tr}}\left( {{{\bm{\Phi}} ^{\rm{H}}}{{\bf{D}}^{\rm{H}}}} \right) = {{\bf{d}}^{\rm{H}}}({{\bm{\phi}}}^*), {\rm{Tr}}\left( {{\bm{\Phi}} {\bf{D}}} \right)={\bm{\phi}}^{\rm{T}}{\bf{d}},
\end{equation}
where ${\bf{d}} = {\left[ {{{\left[ {\bf{D}} \right]}_{1,1}}, \cdots ,{{\left[ {\bf{D}} \right]}_{M,M}}} \right]^{\rm{T}}}$ is a vector gathering the diagonal elements of matrix ${\bf{D}}$.
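The trace identities above can be checked numerically on random matrices; all matrices below are synthetic stand-ins for ${\bf{B}}_V$, ${\bf{C}}_V$ and ${\bf{D}}$.

```python
import numpy as np

rng = np.random.default_rng(2)
M = 5
theta = rng.uniform(0, 2 * np.pi, M)
phi = np.exp(1j * theta)                 # unit-modulus phase vector
Phi = np.diag(phi)

# Synthetic stand-ins: B, C Hermitian PSD, D a generic complex matrix.
X = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
B = X @ X.conj().T
Y = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
C = Y @ Y.conj().T
D = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
d = np.diag(D)

# Tr(Phi^H B Phi C) = phi^H (B o C^T) phi  (Hadamard-product identity)
lhs = np.trace(Phi.conj().T @ B @ Phi @ C)
rhs = phi.conj() @ ((B * C.T) @ phi)
assert abs(lhs - rhs) < 1e-8

# Tr(Phi^H D^H) = d^H phi*  and  Tr(Phi D) = phi^T d
assert abs(np.trace(Phi.conj().T @ D.conj().T) - d.conj() @ phi.conj()) < 1e-8
assert abs(np.trace(Phi @ D) - phi @ d) < 1e-8
```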
Hence, Problem (\ref{optproblemforfaimin}) can be rewritten as
\begin{subequations}\label{optproblemforlittlefaimin}
\begin{align}
&{\mathop {\min }\limits_{{\bm{\phi}}} \quad {{\bm{\phi}} ^{\rm{H}}}{\bm{\Xi} }{\bm{\phi}} + {\bm{\phi}}^{\rm{T}}{\bf{d}} + {{\bf{d}}^{\rm{H}}}({{\bm{\phi}}}^*)}
\\
&\textrm{s.t.}\quad \left| {{\phi _m}} \right| = 1 , m = 1, \cdots ,M,
\end{align}
\end{subequations}
where $\bm{\Xi}={\bf{B}}_{VE} \odot {{\bf{C}}_{VE}^{\rm{T}}}+{\bf{B}}_{V} \odot {{\bf{C}}_{V}^{\rm{T}}} $. $\bm{\Xi}$ is a positive semidefinite matrix, because it is the sum of two positive semidefinite matrices, each of which is the Hadamard product of two positive semidefinite matrices. Specifically, ${\bf{B}}_{VE}$, ${{\bf{C}}_{VE}^{\rm{T}}}$, ${\bf{B}}_{V}$ and ${{\bf{C}}_{V}^{\rm{T}}}$ are positive semidefinite, and hence the Hadamard products ${{\bf{B}}_{VE} \odot {{\bf{C}}_{VE}^{\rm{T}}}}$ and ${{\bf{B}}_{V} \odot {{\bf{C}}_{V}^{\rm{T}}}}$ are positive semidefinite according to Property (9) on Page 104 of \cite{zhang2017matrix}. Problem (\ref{optproblemforlittlefaimin}) can be further simplified as
\begin{subequations}\label{appjig}
\begin{align}
&{\mathop {\min }\limits_{\bm{\phi}} \quad f({\bm{\phi}})\buildrel \Delta \over = {{\bm{\phi}} ^{\rm{H}}}{\bm{\Xi}}{\bm{\phi}} + 2{\rm{Re}}\left\{ {{{\bm{\phi}} ^{\rm{H}}}({{\bf{d}}}^*)} \right\}}
\\
&\textrm{s.t.}\quad \left| {{\phi _m}} \right| = 1 , m = 1, \cdots ,M. \label{dshxsdceur}
\end{align}
\end{subequations}
Problem (\ref{appjig}) could be solved by the SDR technique \cite{wu2019intelligent}, which relaxes the unit-modulus constraint into a rank-one constraint; however, a rank-one solution cannot always be obtained, and the computational complexity of the SDR method is high. Thus, we propose to solve Problem (\ref{appjig}) efficiently by the MM algorithm as in \cite{pan2019multicell}, where a closed-form solution is obtained in each iteration. The details are omitted for brevity.
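For completeness, a minimal sketch of one standard MM iteration for this class of unit-modulus quadratic programs is given below: the quadratic term is majorized via the largest eigenvalue of $\bm{\Xi}$, which yields the closed-form per-iteration update $\bm{\phi}\leftarrow-\exp\left(j\angle{\bf{q}}\right)$ with ${\bf{q}}=(\bm{\Xi}-\lambda_{\max}{\bf{I}})\bm{\phi}+{\bf{d}}^*$. This is a generic sketch on synthetic data and may differ in detail from the scheme of \cite{pan2019multicell}.

```python
import numpy as np

rng = np.random.default_rng(3)
M = 6

# Synthetic problem data: Xi Hermitian PSD, d a complex vector.
X = rng.normal(size=(M, M)) + 1j * rng.normal(size=(M, M))
Xi = X @ X.conj().T
d = rng.normal(size=M) + 1j * rng.normal(size=M)

def f(phi):
    """Objective of the unit-modulus QP: phi^H Xi phi + 2 Re{phi^H d*}."""
    return (phi.conj() @ Xi @ phi + 2 * np.real(phi.conj() @ d.conj())).real

lam_max = np.linalg.eigvalsh(Xi)[-1]     # majorizer constant (one-off cost)
phi = np.exp(1j * rng.uniform(0, 2 * np.pi, M))
vals = [f(phi)]
for _ in range(100):
    q = (Xi - lam_max * np.eye(M)) @ phi + d.conj()
    phi = -np.exp(1j * np.angle(q))      # closed-form surrogate minimizer
    vals.append(f(phi))

# The MM construction guarantees monotone non-increase of the objective.
assert all(vals[t + 1] <= vals[t] + 1e-9 for t in range(len(vals) - 1))
```

Each iteration costs ${\cal O}(M^2)$ after the one-off eigenvalue computation, which is consistent with the $C_{MM}$ term used in the complexity analysis.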
\vspace{-0.4cm}\subsection{Overall Algorithm to Solve Problem (\ref{optorigVE})}
To sum up, the detailed steps of the overall BCD-MM algorithm proposed for solving Problem (\ref{optorigVE}) are provided in Algorithm \ref{bcd}. The MM algorithm is exploited for obtaining the optimal phase shifts ${\bf{\Phi}}^{(n+1)}$ of Problem (\ref{appjig}) in Step 5. The iteration process of the MM algorithm ensures that the OF value of Problem (\ref{appjig}) decreases monotonically. Moreover, the BCD algorithm also guarantees that the OF value of Problem (\ref{optorigVElowerbndSmp}) decreases monotonically in each step and each iteration of Algorithm \ref{bcd}. Since the OF value in (\ref{optorigVElowerbndSmpa}) is bounded from below under the power constraint, the convergence of Algorithm \ref{bcd} is guaranteed.
\begin{algorithm}
\caption{BCD-MM Algorithm}\label{bcd}
\begin{algorithmic}[1]
\STATE Parameter Setting. Set the maximum number of iterations $n_{\rm{max}}$, the initial iteration index $n=1$, and the error tolerance $\varepsilon$.
\STATE Variables Initialization. Initialize the variables ${\bf{V}}^{(1)}$,
${\bf{V}}_{E}^{(1)}$ and ${\bf{\Phi}}^{(1)}$ in the feasible region; Compute the OF value of Problem (\ref{optorigVE}) as ${\rm{OF(}}{{\bf{V}}^{(1)}},{{\bf{V}}^{(1)}_{E}},{\bf{\Phi}}^{(1)}{\rm{)}}$;
\STATE Auxiliary Variables Calculation. Given ${\bf{V}}^{(n)} ,{{\bf{V}}^{(n)}_E}$, ${\bf{\Phi}}^{(n)}$, compute the optimal matrices ${{\bf{U}}^{(n)}_I},{{\bf{W}}^{(n)}_I},{{\bf{U}}^{(n)}_E},{{\bf{W}}^{(n)}_E},{{\bf{W}}^{(n)}_X}$ according to \eqref{optUI}, \eqref{optWI}, \eqref{optUE}, \eqref{optWE}, \eqref{optWX} respectively;
\STATE Matrices Optimization. Given ${{\bf{U}}^{(n)}_I},{{\bf{W}}^{(n)}_I},{{\bf{U}}^{(n)}_E},{{\bf{W}}^{(n)}_E},{{\bf{W}}^{(n)}_X}$, solve the optimal TPC matrix ${\bf{V}}^{(n+1)}$ and equivalent AN covariance matrix ${{\bf{V}}^{(n+1)}_E}$ of Problem (\ref{optVVE}) with the Lagrangian multiplier method;
\STATE Phase Shifts Optimization. Given ${{\bf{U}}^{(n)}_I},{{\bf{W}}^{(n)}_I},{{\bf{U}}^{(n)}_E},{{\bf{W}}^{(n)}_E},{{\bf{W}}^{(n)}_X}$ and ${\bf{V}}^{(n+1)} ,{{\bf{V}}^{(n+1)}_E}$, solve the optimal phase shifts ${\bf{\Phi}}^{(n+1)}$ of Problem (\ref{appjig}) with the MM algorithm;
\STATE Termination Check. If ${{\left| \!{{\rm{OF}}\!(\!{\bf{V}}^{(n+1)}\! ,\!{{\bf{V}}^{(n+1)}_E}\!,\!{\bf{\Phi}}^{(n+1)}\!)\! \!- \!\! {\rm{OF}}\!(\!{\bf{V}}^{(n)}\! ,\!{{\bf{V}}^{(n)}_E}\!,\!{\bf{\Phi}}^{(n)}\!)} \!\right|} / \!{{\rm{OF}}\!(\!{\bf{V}}^{(n+1)} \!,\!{{\bf{V}}^{(n+1)}_E}\!,\!{\bf{\Phi}}^{(n+1)}\!)}}\! < \varepsilon$ or $n\geq n_{\rm{max}}$, terminate. Otherwise, update $n \leftarrow n + 1$ and return to Step 3.
\end{algorithmic}
\end{algorithm}
Based on the algorithm description, the complexity analysis of the proposed BCD-MM algorithm is performed. In Step 3, computing the decoding matrices ${{\bf{U}}^{(n)}_I}$ and ${{\bf{U}}^{(n)}_E}$ costs a complexity of ${\cal O}(N_I^3)+{\cal O}(N_E^3)$, while calculating the auxiliary matrices ${{\bf{W}}^{(n)}_I}$, ${{\bf{W}}^{(n)}_E}$, and ${{\bf{W}}^{(n)}_X}$ consumes a complexity of ${\cal O}(d^3)+{\cal O}(N_T^3)+{\cal O}(N_E^3)$. The complexity of calculating the TPC matrix ${\bf{V}}^{(n+1)} $ and the AN covariance matrix ${{\bf{V}}^{(n+1)}_E}$ in Step 4 can be analyzed according to the specific process of the Lagrangian multiplier method, based on the fact that the complexity of computing the product ${\bf{XY}}$ of complex matrices ${\bf{X}} \in {{\mathbb{C}}^{m \times n}}$ and ${\bf{Y}} \in {{\mathbb{C}}^{n \times p}}$ is ${\cal O}\left( {mnp} \right)$. By assuming that $N_T>N_I({\rm{or \ }} N_E)>d$, the complexity of computing the matrices $\{{{\mathbf{H}}_{V}},{{\mathbf{H}}_{VE}}\}$ in (\ref{HV}) and (\ref{HVE}) is ${\cal O}(N_T^3)+{\cal O}(2N_T^2d)+{\cal O}(2N_T^2N_E)$, while the complexity of calculating ${\bf{V}}^*$ and ${\bf{V}}_E^*$ in (\ref{optimalV}) and (\ref{optimalVE}) is ${\cal O}(2N_T^3)$. The SVD of $\{{{\mathbf{H}}_{V}},{{\mathbf{H}}_{VE}}\}$ requires a complexity of ${\cal O}(2N_T^3)$, while calculating ${\bf{Z}}_V$ and ${\bf{Z}}_{VE}$ requires a complexity of ${\cal O}(N_T^2N_I)+{\cal O}(2N_T^3)$. The complexity of finding the Lagrangian multiplier $\lambda$ is negligible. Thus, the overall complexity for ${\bf{V}}^{(n+1)}$ and ${\bf{V}}_E^{(n+1)}$ is about ${\cal O}({\rm{max}}\{2N_T^3,2N_T^2N_E\})$. In Step 5, obtaining the optimal ${\bf{\Phi}}^{(n+1)}$ by the MM algorithm needs a complexity of $C_{MM}={\cal O}(M^3+T_{MM}M^2)$, where $T_{MM}$ is the number of iterations required for convergence. Based on the complexities required in Steps 3, 4 and 5, the overall complexity $C_{\rm{BCD-MM}}$ of the BCD-MM algorithm can be evaluated as
\begin{equation}\label{aefar}
C_{\rm{BCD-MM}}={\cal O}({\rm{max}}\{2N_T^3,2N_T^2N_E,C_{MM}\}).
\end{equation}
\section{Extension to the Multiple-IRs Scenario}
\subsection{Problem Formulation}
Consider a multicast extension in which there are $L\geq2$ legitimate IRs, all of which intend to receive
the same message. The signal model for the MIMO multi-IR wiretap channel scenario is
\begin{align}
{\bf {y}}_{I,l} ={\hat{{\bf {H}}}_{I,l}}({\bf {V}}{\bf {s}}+{\bf {n}})+{{\bf {n}}_{I,l}},l=1,\cdots,L,
\end{align}
where ${{\hat{\bf{H}}}_{I,l}}\overset{\triangle}{=} {{\bf{H}}_{b,I,l}}+{{\bf{H}}_{R,I,l}}{\bf{\Phi}} {\bf{G}}$. The subscript $l$ indicates the $l$th IR, and the other notations are the same
as (\ref{eq4t}) and (\ref{eq6t}). Under these settings, the achievable
SR is given by \cite{liang2009information}
\begin{alignat}{1}
R_{s}{\rm {(}}{\bf {V}},{{\bf {V}}_{E}},{\bf {\Phi}}{\rm {)}} & =\underset{l=1,\cdots,L}{\min}\{{R_{I,l}}({\bf {V}},{\bf {\Phi}},{\bf {Z}})-{R_{E}}({\bf {V}},{\bf {\Phi}},{\bf {Z}})\},
\end{alignat}
where ${R_{I,l}}({\bf {V}},{\bf {\Phi}},{\bf {Z}}) ={\rm {log}}\left|{{\bf {I}}+{{\hat{{\bf {H}}}}_{I,l}}{\bf {V}}{{\bf {V}}^{H}}\hat{{\bf {H}}}_{I,l}^{H}{\bf {J}}_{I,l}^{-1}}\right|$ and ${{\bf {J}}_{I,l}} \overset{\triangle}{=}{{\hat{{\bf {H}}}}_{I,l}}{\bf {Z}}{{\hat{{\bf {H}}}}_{I,l}}^{H}+\sigma_{I,l}^{2}{{\bf {I}}_{{{N}_{I}}}}$.
Then the multicast counterpart of the AN-aided SRM
problem (\ref{optorigVE}) is formulated as \begin{subequations} \label{multicastoptorigVE}
\begin{align}
& \ \underset{{\bf {V}},{{\bf {V}}_{E}},{\bf {\Phi}}}{\mathop{\text{max}}}\ \ {R_{s}}{\rm {(}}{\bf {V}},{{\bf {V}}_{E}},{\bf {\Phi}}{\rm {)}}\label{multicastoptorigVE_a}\\
& \ \ \text{s.t.}\quad\ {\rm {Tr(}}{\bf {V}}{{\bf {V}}^{H}}{\rm {+}}{{\bf {V}}_{E}}{{\bf {V}}_{E}^{H}}{\rm {)}}\le{P_{T}},\label{multicastoptorigVE_b}\\
& \quad\quad\quad\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M.\label{multicastoptorigVE_c}
\end{align}
\end{subequations}
The objective function of Problem (\ref{multicastoptorigVE}) can
be rewritten as \begin{subequations}\label{OFRs}
\begin{alignat}{1}
R_{s}{\rm {(}}{\bf {V}},{{\bf {V}}_{E}},{\bf {\Phi}}{\rm {)}} & =\underset{l=1,\cdots,L}{\min}\{\underbrace{{\rm {log}}\left|{{\bf {I}}_{{N_{I}}}+{{\hat{{\bf {H}}}}_{I,l}}{\bf {V}}{{\bf {V}}^{H}}\hat{{\bf {H}}}_{I,l}^{H}{{({{\hat{{\bf {H}}}}_{I,l}}{{\bf {V}}_{E}}{{\bf {V}}_{E}^{H}}{{\hat{{\bf {H}}}}_{I,l}}^{H}+\sigma_{I,l}^{2}{{\bf {I}}_{{N_{I}}}})}^{-1}}}\right|}_{{f_{1,l}}}\}\nonumber \\
& \quad{\rm {+}}\underbrace{{\rm {log}}\left|{{{\bf {I}}_{{N_{E}}}}+{{\hat{{\bf {H}}}}_{E}}{{\bf {V}}_{E}}{{\bf {V}}_{E}^{H}}{{\hat{{\bf {H}}}}_{E}}^{H}(\sigma_{E}^{2}{{\bf {I}}_{{N_{E}}}})^{-1}}\right|}_{{f_{2}}}\nonumber \\
& \quad\underbrace{-{\rm {log}}\left|{{{\bf {I}}_{{N_{E}}}}+\sigma_{E}^{-2}{{\hat{{\bf {H}}}}_{E}}({\bf {V}}{{\bf {V}}^{H}}+{{\bf {V}}_{E}}{{\bf {V}}_{E}^{H}})\hat{{\bf {H}}}_{E}^{H}}\right|}_{{f_{3}}},\\
& =\underset{l=1,\cdots,L}{\min}\{\mathop{\text{max}}\limits _{{{\bf {U}}_{I,l}},{{\bf {W}}_{I,l}}\succeq0}h_{1,l}({{\bf {U}}_{I,l}},{\bf {V}},{{\bf {V}}_{E}},{{\bf {W}}_{I,l}})\}+\mathop{\text{max}}\limits _{{{\bf {U}}_{E}},{{\bf {W}}_{E}}\succeq0}h_{2}({{\bf {U}}_{E}},{{\bf {V}}_{E}},{{\bf {W}}_{E}})\nonumber \\
& \quad+\mathop{\text{max}}\limits _{{{\bf {W}}_{X}}\succeq0}h_{3}({\bf {V}},{{\bf {V}}_{E}},{{\bf {W}}_{X}}).\label{eq:OFRsminmax}
\end{alignat}
\end{subequations}
The lower bound to the first term of (\ref{eq:OFRsminmax}) can be found as \begin{subequations}\label{RIminmax}
\begin{alignat}{1}
& \underset{l=1,\cdots,L}{\min}\{\mathop{\text{max}}\limits _{{{\bf {U}}_{I,l}},{{\bf {W}}_{I,l}}\succeq0}h_{1,l}({{\bf {U}}_{I,l}},{\bf {V}},{{\bf {V}}_{E}},{{\bf {W}}_{I,l}})\}\label{Rspart1} \\
& \geq \mathop{\text{max}}\limits _{\{{{\bf {U}}_{I,l}},{{\bf {W}}_{I,l}}\succeq0\}_{l=1}^{L}}\{\underset{l=1,\cdots,L}{\min}h_{1,l}({{\bf {U}}_{I,l}},{\bf {V}},{{\bf {V}}_{E}},{{\bf {W}}_{I,l}})\},\label{eq:RIminmaxfinal}
\end{alignat}
\end{subequations}
where \eqref{eq:RIminmaxfinal} holds due to the fact that $\underset{x}{\min}\ \underset{y}{\max}f(x,y)\geq\underset{y}{\max}\ \underset{x}{\min}f(x,y)$
for any function $f(x,y)$. Here by exchanging the positions of $\mathop{\text{max}}\limits _{\{{{\bf {U}}_{I,l}},{{\bf {W}}_{I,l}}\succeq0\}_{l=1}^{L}}$
and $\underset{l=1,\cdots,L}{\min}$ in \eqref{Rspart1},
we can find a lower bound to $R_{s}{\rm {(}}{\bf {V}},{{\bf {V}}_{E}},{\bf {\Phi}}{\rm {)}}$ as
\begin{alignat}{1}
& f_{ms}({\bf {V}},{{\bf {V}}_{E}},\{{{\bf {U}}_{I,l}},{{\bf {W}}_{I,l}}\}_{l=1}^{L},{{\bf {U}}_{E}},{{\bf {W}}_{E}},{{\bf {W}}_{X}}) \nonumber \\
& \triangleq\mathop{\text{max}}\limits _{{\bf {V}},{{\bf {V}}_{E}},\{{{\bf {U}}_{I,l}},{{\bf {W}}_{I,l}}\succeq0\}_{l=1}^{L},{{\bf {U}}_{E}},{{\bf {W}}_{E}}\succeq0,{{\bf {W}}_{X}}\succeq0}\{\underset{l=1,\cdots,L}{\min}h_{1,l}({{\bf {U}}_{I,l}},{\bf {V}},{{\bf {V}}_{E}},{{\bf {W}}_{I,l}})\nonumber \\
& \quad+h_{2}({{\bf {U}}_{E}},{{\bf {V}}_{E}},{{\bf {W}}_{E}})+h_{3}({\bf {V}},{{\bf {V}}_{E}},{{\bf {W}}_{X}})\}. \label{fms}
\end{alignat}
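The max--min inequality invoked in \eqref{eq:RIminmaxfinal} can be illustrated numerically by tabulating an arbitrary $f(x,y)$ on finite grids:

```python
import numpy as np

rng = np.random.default_rng(4)
# f(x, y) tabulated on finite grids: rows index x, columns index y.
F = rng.normal(size=(50, 40))

min_max = F.max(axis=1).min()   # min_x max_y f(x, y)
max_min = F.min(axis=0).max()   # max_y min_x f(x, y)
assert min_max >= max_min       # max-min inequality
```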
We simplify Problem (\ref{multicastoptorigVE}) by maximizing a lower bound to its original objective as follows,
\begin{subequations} \label{multicastoptorigVEfms}
\begin{align}
& \ \underset{{\bf {V}},{{\bf {V}}_{E}},{\bf {\Phi}},\{{{\bf {U}}_{I,l}},{{\bf {W}}_{I,l}}\succeq0\}_{l=1}^{L},{{\bf {U}}_{E}},{{\bf {W}}_{E}}\succeq0,{{\bf {W}}_{X}}\succeq0}{\mathop{\text{max}}}f_{ms}({\bf {V}},{{\bf {V}}_{E}},\{{{\bf {U}}_{I,l}},{{\bf {W}}_{I,l}}\}_{l=1}^{L},{{\bf {U}}_{E}},{{\bf {W}}_{E}},{{\bf {W}}_{X}})\label{multicastoptorigVEfms_a}\\
& \ \ \text{s.t.}\quad\ {\rm {Tr(}}{\bf {V}}{{\bf {V}}^{H}}{\rm {+}}{{\bf {V}}_{E}}{{\bf {V}}_{E}^{H}}{\rm {)}}\le{P_{T}},\label{multicastoptorigVEfms_b}\\
& \quad\quad\quad\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M.\label{multicastoptorigVEfms_c}
\end{align}
\end{subequations}
To solve the multicast AN-aided SRM problem in (\ref{multicastoptorigVEfms}), a BCD-QCQP-CCP algorithm is proposed.
\subsection{BCD Iterations for Problem (\ref{multicastoptorigVEfms})}
The equivalent SRM problem in (\ref{multicastoptorigVEfms}) provides a desirable formulation for the BCD algorithm. In particular, one can
show that Problem (\ref{multicastoptorigVEfms}) is convex w.r.t. each of the blocks ${\bf {V}},{{\bf {V}}_{E}}$, ${\bf {\Phi}}$, and $\{{{\bf {U}}_{I,l}}$, ${{\bf {W}}_{I,l}}\}_{l=1}^{L}$, ${{\bf {U}}_{E}}$, ${{\bf {W}}_{E}}$, ${{\bf {W}}_{X}}$. By fixing ${\bf {\Phi}}$, the iteration process for Problem (\ref{multicastoptorigVEfms}) is as follows. Let ${\bf {V}}^{n}$, ${{\bf {V}}_{E}^{n}}$, $\{{{\bf {U}}_{I,l}^{n}},{{\bf {W}}_{I,l}^{n}}\}_{l=1}^{L},{{\bf {U}}_{E}^{n}},{{\bf {W}}_{E}^{n}},{{\bf {W}}_{X}^{n}}$ denote the BCD iterates at the $n$th iteration. The BCD iterates are generated via \begin{subequations}\label{BCD-iteration}
\begin{alignat}{1}
& {{\bf {U}}_{I,l}^{n}}=\text{arg}\mathop{\text{max}}\limits _{{{\bf {U}}_{I,l}}}h_{1,l}({{\bf {U}}_{I,l}},{\bf {V}}^{n},{{\bf {V}}_{E}^{n}},{{\bf {W}}_{I,l}^{n}})\nonumber \\
& \qquad\:\:{\rm {=}}({\hat{{\bf {H}}}_{I,l}}{{\bf {V}}_{E}^{n}}{{\bf {V}}_{E}^{nH}}{\hat{{\bf {H}}}_{I,l}}^{H}{\rm {+}}\sigma_{I,l}^{2}{{\bf {I}}_{{N_{I}}}}{\rm {+}}{\hat{{\bf {H}}}_{I,l}}{\bf {V}}^{n}{{\bf {V}}^{nH}}{\hat{{\bf {H}}}_{I,l}}^{H})^{-1}{\hat{{\bf {H}}}_{I,l}}{\bf {V}}^{n}, \label{BCD-iteration_a} \\
& {{\bf {W}}_{I,l}^{n}}=\text{arg}\mathop{\text{max}}\limits _{{{\bf {W}}_{I,l}}\succeq0}h_{1,l}({{\bf {U}}_{I,l}^{n}},{\bf {V}}^{n},{{\bf {V}}_{E}^{n}},{{\bf {W}}_{I,l}}){\rm {=[}}{{\bf {E}}_{I,l}}({{\bf {U}}_{I,l}^{n}},{\bf {V}}^{n},{{\bf {V}}_{E}^{n}}){]^{-1}}\nonumber \\
& {\rm {=}}\text{[}({{\bf {U}}_{I,l}^{{n}H}}{\hat{{\bf {H}}}_{I,l}}{\bf {V}}^{n}-{\bf {I}}_{d}){({{\bf {U}}_{I,l}^{{n}H}}{\hat{{\bf {H}}}_{I,l}}{\bf {V}}^{n}-{\bf {I}}_{d})^{H}}+{{\bf {U}}_{I,l}^{{n}H}}({\hat{{\bf {H}}}_{I,l}}{{\bf {V}}_{E}^{n}}{{\bf {V}}_{E}^{nH}}{\hat{{\bf {H}}}_{I,l}}^{H}{\rm {+}}\sigma_{I,l}^{2}{{\bf {I}}_{{N_{I}}}}){{\bf {U}}_{I,l}^{{n}}}]^{-1}, \label{BCD-iteration_b} \\
& {{\bf {U}}_{E}^{n}}{\rm {=}}\text{arg}\mathop{\text{max}}\limits _{{{\bf {U}}_{E}}}h_{2}({{\bf {U}}_{E}},{{\bf {V}}_{E}^{n}},{{\bf {W}}_{E}^{n}})\nonumber \\
& \qquad\:\:{\rm {=}}(\sigma_{E}^{2}{{\bf {I}}_{{N_{E}}}}{\rm {+}}{\hat{{\bf {H}}}_{E}}{{\bf {V}}_{E}^{n}}{{\bf {V}}_{E}^{nH}}{\hat{{\bf {H}}}_{E}}^{H})^{-1}{\hat{{\bf {H}}}_{E}}{{\bf {V}}_{E}^{n}}, \label{BCD-iteration_c} \\
& {{\bf {W}}_{E}^{n}}{=}\text{arg}\mathop{\text{max}}\limits _{{{\bf {W}}_{E}}\succeq0}h_{2}({{\bf {U}}_{E}^{n}},{{\bf {V}}_{E}^{n}},{{\bf {W}}_{E}}){\rm {=[}}{{\bf {E}}_{E}}({{\bf {U}}_{E}^{n}},{{\bf {V}}_{E}^{n}}){]^{-1}}\nonumber \\
& \qquad\:\:{\rm {=}}[({{\bf {U}}_{E}}^{{n}H}{{\hat{{\bf {H}}}}_{E}}{\bf {V}}_{E}^{n}-{\bf {I}}_{N_{T}}){({{\bf {U}}_{E}^{{n}H}}{{\hat{{\bf {H}}}}_{E}}{\bf {V}}_{E}^{n}-{\bf {I}}_{N_{T}})^{H}}+{{\bf {U}}_{E}^{{n}H}}(\sigma_{E}^{2}{{\bf {I}}_{{N_{E}}}}){{\bf {U}}_{E}^{n}}]^{-1}, \label{BCD-iteration_d} \\
& {{\bf {W}}_{X}^{n}}{\rm {=}}\text{arg}\mathop{\text{max}}\limits _{{{\bf {W}}_{X}}\succeq0}h_{3}({\bf {V}}^{n},{{\bf {V}}_{E}^{n}},{{\bf {W}}_{X}}){\rm {=[}}{{\bf {E}}_{X}}({\bf {V}}^{n},{{\bf {V}}_{E}^{n}}){]^{-1}}\nonumber \\
& \qquad\:\:{\rm {=}}[{{{\bf {I}}_{{N_{E}}}}+\sigma_{E}^{-2}{{\hat{{\bf {H}}}}_{E}}({\bf {V}}^{n}{{\bf {V}}^{nH}}+{{\bf {V}}_{E}^{n}}{{\bf {V}}_{E}^{nH}})\hat{{\bf {H}}}_{E}^{H}}]^{-1}. \label{BCD-iteration_e}
\end{alignat}
\end{subequations}
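The update \eqref{BCD-iteration_a} is the linear MMSE receiver. The following sketch, with random synthetic matrices standing in for the effective channel and precoders, checks that this closed form indeed minimizes the MSE, so that any perturbation of the receiver increases it.

```python
import numpy as np

rng = np.random.default_rng(5)
N_I, N_T, d = 4, 4, 2
sigma2 = 0.5

# Synthetic stand-ins for the effective channel H_I,l and precoders V, V_E.
H = rng.normal(size=(N_I, N_T)) + 1j * rng.normal(size=(N_I, N_T))
V = rng.normal(size=(N_T, d)) + 1j * rng.normal(size=(N_T, d))
VE = rng.normal(size=(N_T, N_T)) + 1j * rng.normal(size=(N_T, N_T))

N = H @ VE @ VE.conj().T @ H.conj().T + sigma2 * np.eye(N_I)  # AN + noise cov.
J = N + H @ V @ V.conj().T @ H.conj().T                       # total rx cov.
U_opt = np.linalg.inv(J) @ H @ V                              # update (a)

def mse(U):
    """MSE Tr(E) with E = (U^H H V - I)(.)^H + U^H N U."""
    E = U.conj().T @ H @ V - np.eye(d)
    return np.trace(E @ E.conj().T + U.conj().T @ N @ U).real

# The closed-form update is the LMMSE receiver: perturbations raise the MSE.
for _ in range(20):
    dU = 0.1 * (rng.normal(size=(N_I, d)) + 1j * rng.normal(size=(N_I, d)))
    assert mse(U_opt) <= mse(U_opt + dU) + 1e-10
```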
The parameters ${\bf {V}}^{n}$, ${{\bf {V}}_{E}^{n}}$, ${\bf {\Phi}}^{n}$ are obtained
by solving the following problem \begin{subequations} \label{multicastoptorigVEfai}
\begin{align}
& \ \underset{{\bf {V}},{{\bf {V}}_{E}},{\bf {\Phi}}}{\mathop{\text{max}}}\ \ \underset{l=1,\cdots,L}{\min}\{h_{1,l}({{\bf {U}}_{I,l}},{\bf {V}},{{\bf {V}}_{E}},{{\bf {W}}_{I,l}})\}\nonumber \\
& \quad\quad\quad\quad\quad\quad+h_{2}({{\bf {U}}_{E}},{{\bf {V}}_{E}},{{\bf {W}}_{E}})+h_{3}({\bf {V}},{{\bf {V}}_{E}},{{\bf {W}}_{X}})\label{multicastoptorigVEfai_a}\\
& \ \ \text{s.t.}\quad\ {\rm {Tr(}}{\bf {V}}{{\bf {V}}^{H}}{\rm {+}}{{\bf {V}}_{E}}{{\bf {V}}_{E}^{H}}{\rm {)}}\le{P_{T}},\label{multicastoptorigVEfai_b}\\
& \quad\quad\quad\!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M.\label{multicastoptorigVEfai_c}
\end{align}
\end{subequations}
\subsection{Optimizing the Matrices ${\bf {V}}$ and ${{\bf {V}}_{E}}$}
By fixing ${\bf {\Phi}}$, Problem (\ref{multicastoptorigVEfai})
can be written more compactly as \begin{subequations} \label{multicastoptorigVVEfaiconvx}
\begin{align}
& \ \underset{{\bf {V}},{{\bf {V}}_{E}}}{\mathop{\text{min}}}\ \ \underset{l=1,\cdots,L}{\max}\{-\text{Tr}({{\bf {W}}_{I,l}}{{\bf {V}}^{H}}{{\hat{{\bf {H}}}}_{I,l}}^{H}{{\bf {U}}_{I,l}})-\text{Tr}({{\bf {W}}_{I,l}}{{\bf {U}}_{I,l}}^{H}{{\hat{{\bf {H}}}}_{I,l}}{\bf {V}})\nonumber \\
& \quad\quad\quad\quad\quad\quad\quad+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V,l}^{i}}\mathbf{V})+\text{Tr}({{\mathbf{V}}_{E}^{H}}{{\mathbf{H}}_{VE,l}^{i}}{{\mathbf{V}}_{E}})-{C}_{l}\}\nonumber \\
& \quad\quad\quad\quad\quad\quad\quad-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}{{\mathbf{V}}_{E}})\nonumber \\
& \quad\quad\quad\quad\quad\quad\quad+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}^{e}}\mathbf{V})+\text{Tr}({{\mathbf{V}}_{E}^{H}}{{\mathbf{H}}_{VE}^{e}}{{\mathbf{V}}_{E}})\label{multicastoptorigVVEfaiconvx_a}\\
& \ \ \ \ \ \text{s.t.}\quad\ {\rm {Tr(}}{\bf {V}}{{\bf {V}}^{H}}{\rm {+}}{{\bf {V}}_{E}}{{\bf {V}}_{E}^{H}}{\rm {)}}\le{P_{T}},\label{multicastoptorigVVEfaiconvx_b}
\end{align}
\end{subequations}
where \begin{subequations}
\begin{alignat}{1}
& {{\mathbf{H}}_{V}^{e}}\text{(}\mathbf{\Phi}\text{)}=\sigma_{E}^{-2}\mathbf{\hat{H}}_{E}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}},\\
& {{\mathbf{H}}_{VE}^{e}}\text{(}\mathbf{\Phi}\text{)}={{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{U}}_{E}}{{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}+\sigma_{E}^{-2}{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}},\\
& {{\mathbf{H}}_{V,l}^{i}}\text{(}\mathbf{\Phi}\text{)}={{\mathbf{\hat{H}}}_{I,l}}^{H}{{\mathbf{U}}_{I,l}}{{\mathbf{W}}_{I,l}}{{\mathbf{U}}_{I,l}^{H}}{{\mathbf{\hat{H}}}_{I,l}},\\
& {{\mathbf{H}}_{VE,l}^{i}}\text{(}\mathbf{\Phi}\text{)}={{\mathbf{\hat{H}}}_{I,l}}^{H}{{\mathbf{U}}_{I,l}}{{\mathbf{W}}_{I,l}}{{\mathbf{U}}_{I,l}^{H}}{{\mathbf{\hat{H}}}_{I,l}},\\
& {C}_{l}=\log\left|{{\bf {W}}_{I,l}}\right|+d-\text{Tr}({{\bf {W}}_{I,l}}+\sigma_{I,l}^{2}{{\bf {W}}_{I,l}}{{\bf {U}}_{I,l}}^{H}{{\bf {U}}_{I,l}}).
\end{alignat}
\end{subequations}
Problem (\ref{multicastoptorigVVEfaiconvx}) is a convex QCQP problem; hence, its
optimal solution can be obtained using a general-purpose convex
optimization solver.
\subsection{Optimizing the Phase Shifts $\mathbf{\Phi}$}
By fixing ${\bf {V}},{{\bf {V}}_{E}}$, the optimization problem for
the phase shift matrix $\mathbf{\Phi}$ reduced from Problem (\ref{multicastoptorigVVEfaiconvx})
is formulated as
\begin{subequations} \label{multicastoptorigfaiconvx}
\begin{align}
& \ \underset{\mathbf{\Phi}}{\mathop{\text{min}}}\ \ {g_{0}}(\mathbf{\Phi})\triangleq\underset{l=1,\cdots,L}{\max}\{-\text{Tr}({{\bf {W}}_{I,l}}{{\bf {V}}^{H}}{{\hat{{\bf {H}}}}_{I,l}}^{H}{{\bf {U}}_{I,l}})-\text{Tr}({{\bf {W}}_{I,l}}{{\bf {U}}_{I,l}}^{H}{{\hat{{\bf {H}}}}_{I,l}}{\bf {V}})\nonumber \\
& \quad\quad\quad\quad\quad\quad\quad+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V,l}^{i}}\mathbf{V})+\text{Tr}({{\mathbf{V}}_{E}^{H}}{{\mathbf{H}}_{VE,l}^{i}}{{\mathbf{V}}_{E}})-{C}_{l}\}\nonumber \\
& \quad\quad\quad\quad\quad\quad\quad-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}^{H}}{{\mathbf{\hat{H}}}_{E}}{{\mathbf{V}}_{E}})\nonumber \\
& \quad\quad\quad\quad\quad\quad\quad+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}^{e}}\mathbf{V})+\text{Tr}({{\mathbf{V}}_{E}^{H}}{{\mathbf{H}}_{VE}^{e}}{{\mathbf{V}}_{E}})\label{multicastoptorigfaiconvx_a}\\
& \ \ \ \ \ \text{s.t.}\quad\ \!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M.\label{multicastoptorigfaiconvx_b}
\end{align}
\end{subequations}
After some tedious but straightforward mathematical manipulations, the OF ${g_{0}}(\mathbf{\Phi})$
can be equivalently rewritten as
\begin{alignat}{1}
{g_{0}}(\mathbf{\Phi}) & \triangleq\underset{l=1,\cdots,L}{\max}\{{g_{0,l}^{i}}(\mathbf{\Phi})\}+{g_{0}^{e}}(\mathbf{\Phi}),
\end{alignat}
where
\begin{subequations}
\begin{alignat}{1}
{g_{0,l}^{i}}(\mathbf{\Phi}) & ={\rm {Tr}}\left({{\bf {\Phi}}^{H}{\bf {D}}_{l}^{iH}}\right)+{\rm {Tr}}\left({\bf {\Phi}}{\bf {D}}_{l}^{i}\right)+{\rm {Tr}}\left[{{\bf {\Phi}}{{\bf {C}}_{VE}}{{\bf {\Phi}}^{H}}{{\bf {B}}_{VE,l}^{i}}}\right]+{\rm {Tr}}\left({{\bf {\Phi}}{{\bf {C}}_{V}}{{\bf {\Phi}}^{H}}{{\bf {B}}_{V,l}^{i}}}\right)+C_{l}^{i},\\
{g_{0}^{e}}(\mathbf{\Phi}) & ={\rm {Tr}}\left({{\bf {\Phi}}^{H}{\bf {D}}^{eH}}\right)+{\rm {Tr}}\left({\bf {\Phi}}{\bf {D}}^{e}\right)+{\rm {Tr}}\left[{{\bf {\Phi}}{{\bf {C}}_{VE}}{{\bf {\Phi}}^{H}}{{\bf {B}}_{VE}^{e}}}\right]+{\rm {Tr}}\left({{\bf {\Phi}}{{\bf {C}}_{V}}{{\bf {\Phi}}^{H}}{{\bf {B}}_{V}^{e}}}\right)+C^{e},
\end{alignat}
\end{subequations}
and
\begin{subequations}
\begin{alignat}{1}
{\bf {D}}_{l}^{i} & ={\bf {G}}{{\bf {V}}_{X}}{\bf {H}}_{b,I,l}^{H}{{\bf {M}}_{I,l}}{{\bf {H}}_{R,I,l}}-{\bf {GV}}{{\bf {W}}_{I,l}}{\bf {U}}_{I,l}^{H}{{\bf {H}}_{R,I,l}},\\
{{\bf {C}}_{VE}} & ={\bf {G}}{{\bf {V}}_{E}}{\bf {V}}_{E}^{H}{{\bf {G}}^{H}},\\
{{\bf {C}}_{V}} & ={\bf {G}}{\bf {V}}{\bf {V}}^{H}{{\bf {G}}^{H}},\\
{{\bf {B}}_{VE,l}^{i}} & =\left({{\bf {H}}_{R,I,l}^{H}{{\bf {U}}_{I,l}}{{\bf {W}}_{I,l}}{\bf {U}}_{I,l}^{H}{{\bf {H}}_{R,I,l}}}\right),\\
{{\bf {B}}_{V,l}^{i}} & =\left({{\bf {H}}_{R,I,l}^{H}{{\bf {U}}_{I,l}}{{\bf {W}}_{I,l}}{\bf {U}}_{I,l}^{H}{{\bf {H}}_{R,I,l}}}\right),\\
C_{l}^{i} & ={\rm {Tr}}\left[{{\bf {H}}_{b,I,l}}{{\bf {V}}_{X}}{\bf {H}}_{b,I,l}^{H}{{\bf {M}}_{I,l}}\right]\!+\!{\rm {Tr}}\left[{{{\bf {U}}_{I,l}}{\bf {W}}_{I,l}^{H}{{\bf {V}}^{H}}{\bf {H}}_{b,I,l}^{H}}\right]\!+\!{\rm {Tr}}\left[{{{\bf {H}}_{b,I,l}}{\bf {V}}{{\bf {W}}_{I,l}}{\bf {U}}_{I,l}^{H}}\right]-{C}_{l},\\
{\bf {M}}_{I,l} & ={{\bf {U}}_{I,l}}{{\bf {W}}_{I,l}}{\bf {U}}_{I,l}^{H},\\
{\bf {D}}^{e} & =\sigma_{E}^{-2}{\bf {G}}{{\bf {V}}_{X}}{\bf {H}}_{b,E}^{H}{{\bf {W}}_{X}}{{\bf {H}}_{R,E}}+{\bf {G}}{{\bf {V}}_{E}}{\bf {V}}_{E}^{H}{\bf {H}}_{b,E}^{H}{{\bf {M}}_{E}}{{\bf {H}}_{R,E}}-{\bf {G}}{{\bf {V}}_{E}}{{\bf {W}}_{E}}{\bf {U}}_{E}^{H}{{\bf {H}}_{R,E}},\\
{{\bf {B}}_{VE}^{e}} & =\left({\sigma_{E}^{-2}{\bf {H}}_{R,E}^{H}{{\bf {W}}_{X}}{{\bf {H}}_{R,E}}+{\bf {H}}_{R,E}^{H}{{\bf {U}}_{E}}{{\bf {W}}_{E}}{\bf {U}}_{E}^{H}{{\bf {H}}_{R,E}}}\right),\\
{{\bf {B}}_{V}^{e}} & =\left({\sigma_{E}^{-2}{\bf {H}}_{R,E}^{H}{{\bf {W}}_{X}}{{\bf {H}}_{R,E}}}\right),\\
C^{e} & =\sigma_{E}^{-2}{\rm {Tr}}\left[{{\bf {H}}_{b,E}}{{\bf {V}}_{X}}{\bf {H}}_{b,E}^{H}{{\bf {W}}_{X}}\right]+{\rm {Tr}}\left[{{\bf {H}}_{b,E}}{{\bf {V}}_{E}}{\bf {V}}_{E}^{H}{\bf {H}}_{b,E}^{H}{{\bf {M}}_{E}}\right] \nonumber \\
& +{\rm {Tr}}\left[{{{\bf {U}}_{E}}{\bf {W}}_{E}^{H}{\bf {V}}_{E}^{H}{\bf {H}}_{b,E}^{H}}\right]+{\rm {Tr}}\left[{{{\bf {H}}_{b,E}}{{\bf {V}}_{E}}{{\bf {W}}_{E}}{\bf {U}}_{E}^{H}}\right].
\end{alignat}
\end{subequations}
Similarly, Problem (\ref{multicastoptorigfaiconvx}) can be further simplified as
\begin{subequations}\label{findfaiprepareforMM}
\begin{alignat}{1}
\underset{{\bm{\phi}}}{\mathop{\text{min}}} & \underset{l=1,\cdots,L}{\max}\{{{\bm{\phi}}^{{\rm {H}}}}{\bm{\Xi}_{l}^{i}}{\bm{\phi}}+2\textrm{Re}[{\bm{\phi}}^{{\rm {H}}}{\bf {d}}_{l}^{i*}]+C_{l}^{i}\}+{{\bm{\phi}}^{{\rm {H}}}}{\bm{\Xi}^{e}}{\bm{\phi}}+2\textrm{Re}[{\bm{\phi}}^{{\rm {H}}}{\bf {d}}^{e*}]+C^{e}\\
\text{s.t.} & \quad\quad\ \!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M,
\end{alignat}
\end{subequations}
where
\begin{subequations}
\begin{alignat}{1}
{\bm{\Xi}_{l}^{i}} & ={\bf {B}}_{VE,l}^{i}\odot{{\bf {C}}_{VE}^{{\rm {T}}}}+{\bf {B}}_{V,l}^{i}\odot{{\bf {C}}_{V}^{{\rm {T}}}},\\
{\bm{\Xi}^{e}} & ={\bf {B}}_{VE}^{e}\odot{{\bf {C}}_{VE}^{{\rm {T}}}}+{\bf {B}}_{V}^{e}\odot{{\bf {C}}_{V}^{{\rm {T}}}},\\
{\bf {d}}_{l}^{i} & ={\left[{{{\left[{\bf {D}}_{l}^{i}\right]}_{1,1}},\cdots,{{\left[{\bf {D}}_{l}^{i}\right]}_{M,M}}}\right]^{{\rm {T}}}},\\
{\bf {d}}^{e} & ={\left[{{{\left[{\bf {D}}^{e}\right]}_{1,1}},\cdots,{{\left[{\bf {D}}^{e}\right]}_{M,M}}}\right]^{{\rm {T}}}}.
\end{alignat}
\end{subequations}
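The passage from the trace form to the quadratic form in ${\bm{\phi}}$ rests on the identities ${\rm{Tr}}({\bf{\Phi}}{\bf{C}}{\bf{\Phi}}^{H}{\bf{B}})={\bm{\phi}}^{H}({\bf{B}}\odot{\bf{C}}^{T}){\bm{\phi}}$ and ${\rm{Tr}}({\bf{\Phi}}^{H}{\bf{D}}^{H})+{\rm{Tr}}({\bf{\Phi}}{\bf{D}})=2{\rm{Re}}[{\bm{\phi}}^{H}{\bf{d}}^{*}]$, which hold for any ${\bf{\Phi}}={\rm{diag}}({\bm{\phi}})$ with ${\bf{d}}$ the diagonal of ${\bf{D}}$. A minimal NumPy check, with random matrices standing in for the channel-dependent quantities:

```python
import numpy as np

rng = np.random.default_rng(0)
M = 6

# Random unit-modulus phase vector and the corresponding diagonal matrix.
phi = np.exp(1j * rng.uniform(0, 2 * np.pi, M))
Phi = np.diag(phi)

# Random complex matrices standing in for B (e.g. B_V^e) and C (e.g. C_V).
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
C = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))

# Quadratic term: Tr(Phi C Phi^H B) == phi^H (B ∘ C^T) phi.
lhs_quad = np.trace(Phi @ C @ Phi.conj().T @ B)
Xi = B * C.T                       # Hadamard product B ∘ C^T
rhs_quad = phi.conj() @ Xi @ phi

# Linear term: Tr(Phi^H D^H) + Tr(Phi D) == 2 Re(phi^H d*), d = diag(D).
D = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
d = np.diag(D)
lhs_lin = np.trace(Phi.conj().T @ D.conj().T) + np.trace(Phi @ D)
rhs_lin = 2 * np.real(phi.conj() @ d.conj())

print(np.allclose(lhs_quad, rhs_quad), np.allclose(lhs_lin, rhs_lin))  # → True True
```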
By using Lemma 1 in \cite{pan2019multicell}, Problem (\ref{findfaiprepareforMM}) can be recast as \begin{subequations}\label{findfaiMMformorg}
\begin{alignat}{1}
\underset{{\bm{\phi}}}{\mathop{\max}} & \underset{l=1,\cdots,L}{\min}\{2\textrm{Re}[{{\bm{\phi}}^{{\rm {H}}}}{\bf {q}}_{l}^{i,t}]-C_{q,l}^{i,t}\}+2\textrm{Re}[{{\bm{\phi}}^{{\rm {H}}}}{\bf {q}}^{e,t}]-C_{q}^{e,t}\\
\text{s.t.} & \quad\quad\ \!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M,
\end{alignat}
\end{subequations}
where
\begin{subequations}
\begin{alignat}{1}
{\bf {q}}_{l}^{i,t} & =\left({{\lambda_{l,{\rm {\max}}}^{i}}{{\bf {I}}_{M}}-{\bm{\Xi}_{l}^{i}}}\right){{\bm{\phi}}^{t}}-{\bf {d}}_{l}^{i*},\\
C_{q,l}^{i,t} & =2M{\lambda_{l,{\rm {\max}}}^{i}}-{\left({{\bm{\phi}}^{t}}\right)^{{\rm {H}}}}\left({\bm{\Xi}_{l}^{i}}\right){{\bm{\phi}}^{t}}+C_{l}^{i},\\
{\bf {q}}^{e,t} & =\left({{\lambda_{{\rm {\max}}}^{e}}{{\bf {I}}_{M}}-{\bm{\Xi}^{e}}}\right){{\bm{\phi}}^{t}}-{\bf {d}}^{e*},\\
C_{q}^{e,t} & =2M{\lambda_{{\rm {\max}}}^{e}}-{\left({{\bm{\phi}}^{t}}\right)^{{\rm {H}}}}\left({\bm{\Xi}^{e}}\right){{\bm{\phi}}^{t}}+C^{e},
\end{alignat}
\end{subequations}
where ${\lambda_{l,{\rm {\max}}}^{i}}$ is the maximum eigenvalue
of ${\bm{\Xi}_{l}^{i}}$, and ${\lambda_{{\rm {\max}}}^{e}}$ is the
maximum eigenvalue of ${\bm{\Xi}^{e}}$. By defining ${\bf {q}}_{l}^{ie,t} \triangleq {\bf {q}}_{l}^{i,t}+{\bf {q}}^{e,t}$ and $C_{q,l}^{ie,t}\triangleq C_{q,l}^{i,t}+C_{q}^{e,t}$, Problem (\ref{findfaiMMformorg})
can be rewritten as \begin{subequations}\label{findfaiMMform2}
\begin{alignat}{1}
\underset{{\bm{\phi}}}{\mathop{\max}} & \underset{l=1,\cdots,L}{\min}\{2\textrm{Re}[{{\bm{\phi}}^{{\rm {H}}}}{\bf {q}}_{l}^{ie,t}]-C_{q,l}^{ie,t}\}\\
\text{s.t.} & \quad\quad\ \!\left|{{\phi}_{m}}\right|=1,m=1,\cdots,M,
\end{alignat}
\end{subequations}
which is equivalent to the following problem
\begin{subequations}
\begin{alignat}{1}
\underset{{\bm{\phi}},z}{\mathop{\max}} & \quad z\\
\text{s.t.}\ \ & 2\textrm{Re}[{{\bm{\phi}}^{{\rm {H}}}}{\bf {q}}_{l}^{ie,t}]-C_{q,l}^{ie,t}\geq z, l=1,\cdots,L,\\
& \left|{{\phi}_{m}}\right|=1,m=1,\cdots,M. \label{unitmodulusCCP}
\end{alignat}
\end{subequations}
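The surrogate used above is a valid MM majorizer: on the unit-modulus set, $C_{q}^{t}-2\textrm{Re}[{\bm{\phi}}^{\rm{H}}{\bf{q}}^{t}]$ upper-bounds the minimized quadratic objective and is tight at the current iterate ${\bm{\phi}}^{t}$. This can be checked numerically; the sketch below uses a random Hermitian matrix and vector as stand-ins for the channel-dependent $\bm{\Xi}$ and $\bf{d}$ quantities of the text:

```python
import numpy as np

rng = np.random.default_rng(1)
M = 8

def rand_unit_modulus(rng, M):
    return np.exp(1j * rng.uniform(0, 2 * np.pi, M))

# Random Hermitian Xi and complex d stand in for the paper's Xi and d.
A = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
Xi = (A + A.conj().T) / 2
d = rng.standard_normal(M) + 1j * rng.standard_normal(M)
C0 = 0.7  # arbitrary constant term

lam = np.max(np.linalg.eigvalsh(Xi))    # maximum eigenvalue of Xi
phi_t = rand_unit_modulus(rng, M)       # current iterate phi^t

# Surrogate parameters, mirroring q^t and C_q^t in the text.
q = (lam * np.eye(M) - Xi) @ phi_t - d.conj()
C_q = 2 * M * lam - np.real(phi_t.conj() @ Xi @ phi_t) + C0

def g(phi):   # original (minimized) objective
    return np.real(phi.conj() @ Xi @ phi) + 2 * np.real(phi.conj() @ d.conj()) + C0

def u(phi):   # MM upper bound; maximizing 2Re[phi^H q] - C_q minimizes u
    return -(2 * np.real(phi.conj() @ q) - C_q)

# Tight at phi^t, and an upper bound everywhere on the unit-modulus set.
print(np.isclose(g(phi_t), u(phi_t)))                                   # → True
checks = [g(p) <= u(p) + 1e-9
          for p in (rand_unit_modulus(rng, M) for _ in range(200))]
print(all(checks))                                                      # → True
```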
We note that the above problem is still non-convex due to the unit-modulus
constraints. To deal with these non-convex constraints, the penalty
CCP method is applied. Following the penalty CCP framework, constraint
(\ref{unitmodulusCCP}) is first equivalently rewritten as $1\leq\left|{{\phi}_{m}}\right|^{2}\leq1,m=1,\cdots,M$.
The non-convex part $\left|{{\phi}_{m}}\right|^{2}\geq1$ of the resulting constraints is then linearized
around the current point ${{\phi}_{m}^{(n)}}$ as $\left|{{\phi}_{m}^{(n)}}\right|^{2}-2\textrm{Re}({{\phi}_{m}^{(n)*}}{{\phi}_{m}})\leq-1,m=1,\cdots,M$.
We finally obtain the following convex subproblem for ${\bm{\phi}}$
as
\begin{subequations}\label{ccpfai}
\begin{alignat}{1}
\underset{{\bm{\phi}},z}{\mathop{\max}} & \quad z-\lambda^{(t)}\left\Vert \mathbf{b}\right\Vert _{1}\\
\text{s.t.} & \ 2\textrm{Re}[{{\bm{\phi}}^{{\rm {H}}}}{\bf {q}}_{l}^{ie,t}]-C_{q,l}^{ie,t}\geq z, l=1,\cdots,L,\\
& \left|{{\phi}_{m}^{(n)}}\right|^{2}-2\textrm{Re}({{\phi}_{m}^{(n)*}}{{\phi}_{m}})\leq b_{m}-1,m=1,\cdots,M,\\
& \left|{{\phi}_{m}}\right|^{2}\leq1+b_{M+m},m=1,\cdots,M,
\end{alignat}
\end{subequations}
where $\mathbf{b}=[b_{1},\cdots,b_{2M}]^{T}$ is the vector of slack variables
imposed on the equivalent linear forms of the unit-modulus
constraints, and $\left\Vert \mathbf{b}\right\Vert _{1}$ is the penalty
term in the OF, scaled by the regularization factor $\lambda^{(t)}$ to control
the feasibility of the constraints. The detailed steps of the penalty CCP method can be found in \cite{zhou2020framework}.
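The two facts underpinning this linearization step, namely that convexity of $|\phi_m|^2$ yields a global under-estimator (so the linearized constraint is a safe inner approximation of $|\phi_m|^2\geq1$), and that the linearized constraint is tight at a unit-modulus point, can be verified with a short NumPy sketch:

```python
import numpy as np

rng = np.random.default_rng(2)

# Convexity of |x|^2 gives the global under-estimator used by penalty CCP:
#   |x|^2 >= 2*Re(conj(x_t)*x) - |x_t|^2   for all x, x_t.
x = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
x_t = rng.standard_normal(1000) + 1j * rng.standard_normal(1000)
lower = 2 * np.real(np.conj(x_t) * x) - np.abs(x_t) ** 2
ok_bound = bool(np.all(np.abs(x) ** 2 >= lower - 1e-12))

# Hence |x_t|^2 - 2*Re(conj(x_t)*x) <= -1 implies |x|^2 >= 1, and at
# x = x_t with |x_t| = 1 the linearized constraint holds with equality.
phi_t = np.exp(1j * rng.uniform(0, 2 * np.pi, 5))
tight = bool(np.allclose(
    np.abs(phi_t) ** 2 - 2 * np.real(np.conj(phi_t) * phi_t), -1.0))

print(ok_bound, tight)   # → True True
```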
\section{Simulation Results}\label{simlresult}
In this section, numerical simulations are carried out to evaluate the security enhancement that the IRS brings to the AN-aided MIMO secure communication system. We focus on the standard three-terminal MIMO Gaussian wiretap channel shown in Fig.~\ref{fig2add}, which consists of one BS, one legitimate IR and one Eve, each equipped with multiple antennas. The distance from the BS to the IRS is $d_{BR}=50$ m. We assume that the line connecting the IR and the Eve is parallel to the line connecting the BS and the IRS, and that the vertical distance between the two lines is $d_{v}=2$ m.
\begin{figure}
\centering
\includegraphics[width=3.4in]{SRsimmodel.pdf}
\caption{The three-terminal MIMO communication scenario in simulation.}\vspace{-0.8cm}
\label{fig2add}
\end{figure}
The large-scale path loss is modeled as ${\rm{PL}} = {\rm{P}}{{\rm{L}}_0} - 10\alpha {{\log }_{10}}\left( {\frac{d}{{{d_0}}}} \right)$, where ${\rm{P}}{{\rm{L}}_0}$ is the path loss at the reference distance $d_0=1$ m, $\alpha$ is the path loss exponent, and $d$ is the link distance. In our simulations, we set ${\rm{P}}{{\rm{L}}_0}=-30$ dB. The path loss exponents of the links from the BS to the Eve, from the BS to the IR, from the IRS to the Eve and from the IRS to the IR are ${\alpha _{{\rm{BE}}}}=3.5$, ${\alpha _{{\rm{BI}}}}=3.5$, ${\alpha _{{\rm{RE}}}}=2.5$ and ${\alpha _{{\rm{RI}}}}=2.5$, respectively. The path-loss exponent of the link from the BS to the IRS is set to ${\alpha _{{\rm{BR}}}}= 2.2$, since the IRS is assumed to be well located so that the path loss of this link is small.
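The path loss model can be written as a one-line helper (a sketch; the function name and defaults are ours):

```python
import math

def path_loss_db(d, alpha, pl0_db=-30.0, d0=1.0):
    """Large-scale path loss PL = PL0 - 10*alpha*log10(d/d0), in dB."""
    return pl0_db - 10.0 * alpha * math.log10(d / d0)

# Example: the BS-IRS link (alpha_BR = 2.2) at d_BR = 50 m with PL0 = -30 dB.
print(round(path_loss_db(50.0, 2.2), 2))   # → -67.38
```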
For the direct channels from the BS to the Eve and the IR, the small-scale fading is assumed to be Rayleigh fading due to rich scattering. However, for the IRS-related channels, the small-scale fading is assumed to be Rician fading. Specifically, the small-scale channel can be modeled as
\begin{alignat}{1}
\tilde{\mathbf{H}} & =\left(\sqrt{\frac{\beta}{1+\beta}}\tilde{\mathbf{H}}^{LOS}+\sqrt{\frac{1}{1+\beta}}\tilde{\mathbf{H}}^{NLOS}\right),
\end{alignat}
where $\beta$ is the Rician factor, $\tilde{\mathbf{H}}^{LOS}$ denotes
the deterministic line-of-sight (LoS) component of the IRS-related
channel, and $\tilde{\mathbf{H}}^{NLOS}$ denotes the non-LoS (NLoS)
component, which is modeled as Rayleigh fading. By assuming that the
antennas at the BS, IRS, Eve and IR are arranged in uniform linear
arrays (ULAs), $\tilde{\mathbf{H}}^{LOS}$ can be modeled as
$\tilde{\mathbf{H}}^{LOS}=\mathbf{a}_{r}\mathbf{a}_{t}^{H}$,
where $\mathbf{a}_{t}$ and $\mathbf{a}_{r}$ are the steering vectors
of the transmit and receive arrays, respectively, defined as
\begin{subequations}\label{steeringvectr}
\begin{alignat}{1}
\mathbf{a}_{t} & =\left[\begin{array}{cccc}
1, & \exp(j2\pi\frac{d_{t}}{\lambda}\sin\varphi_{t}), & \cdots, & \exp(j2\pi\frac{d_{t}}{\lambda}(N_{t}-1)\sin\varphi_{t})\end{array}\right]^{T},\\
\mathbf{a}_{r} & =\left[\begin{array}{cccc}
1, & \exp(j2\pi\frac{d_{r}}{\lambda}\sin\varphi_{r}), & \cdots, & \exp(j2\pi\frac{d_{r}}{\lambda}(N_{r}-1)\sin\varphi_{r})\end{array}\right]^{T}.
\end{alignat}
\end{subequations}
In \eqref{steeringvectr}, $\lambda$ is the wavelength; $d_{t}$
and $d_{r}$ are the element spacings of the transmit and receive
arrays; $\varphi_{t}$ and $\varphi_{r}$ are the angle of departure
and the angle of arrival; and $N_{t}$ and $N_{r}$ are the numbers of
antennas/elements at the transmitter and receiver, respectively. We
set $\frac{d_{t}}{\lambda}=\frac{d_{r}}{\lambda}=0.5$, $\varphi_{t}=\tan^{-1}(\frac{y_{r}-y_{t}}{x_{r}-x_{t}})$
and $\varphi_{r}=\pi-\varphi_{t}$, where $(x_{t},y_{t})$ is the location
of the transmitter and $(x_{r},y_{r})$ is the location of the receiver.
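The channel model described above can be sketched in a few lines of NumPy; the unit-variance normalization of the NLoS entries and the function names are our assumptions, not specified in the text:

```python
import numpy as np

def steering(n, d_over_lambda, angle):
    """ULA steering vector [1, e^{j2π(d/λ)sinθ}, ..., e^{j2π(d/λ)(n-1)sinθ}]."""
    k = np.arange(n)
    return np.exp(1j * 2 * np.pi * d_over_lambda * k * np.sin(angle))

def rician_channel(n_r, n_t, beta, theta_t, theta_r, rng, d_over_lambda=0.5):
    """Small-scale Rician channel:
    sqrt(beta/(1+beta))*H_LOS + sqrt(1/(1+beta))*H_NLOS, H_LOS = a_r a_t^H."""
    h_los = np.outer(steering(n_r, d_over_lambda, theta_r),
                     steering(n_t, d_over_lambda, theta_t).conj())
    h_nlos = (rng.standard_normal((n_r, n_t))
              + 1j * rng.standard_normal((n_r, n_t))) / np.sqrt(2)
    return np.sqrt(beta / (1 + beta)) * h_los + np.sqrt(1 / (1 + beta)) * h_nlos

rng = np.random.default_rng(3)
# E.g. a BS-IRS channel draw: M = 50 IRS elements, N_T = 4 BS antennas.
H = rician_channel(n_r=50, n_t=4, beta=3.0,
                   theta_t=0.3, theta_r=np.pi - 0.3, rng=rng)
print(H.shape)                                            # → (50, 4)
print(np.allclose(np.abs(steering(8, 0.5, 0.3)), 1.0))    # → True
```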
If not specified otherwise, the simulation parameters are set as follows. The noise powers at the IR and the Eve are $\sigma _{I}^{2}=-75$ dBm and $\sigma _{E}^{2}=-75$ dBm. The numbers of BS antennas, IR antennas and Eve antennas are $N_T=4$, $N_I=2$ and $N_E=2$, respectively. There are $d=2$ data streams and $M=50$ IRS reflection elements. The transmit power limit is $P_{T}=15$ dBm, and the error tolerance is $\varepsilon=10^{-6}$. The horizontal distance between the BS and the Eve is $d_{BE}=44$ m, and the horizontal distance between the BS and the IR is selected from the interval $d_{BI}\in[10 \ \text{m},70 \ \text{m}]$. All simulation results are averaged over 200 independent channel realizations.
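As a small bookkeeping aid for the dBm-valued parameters above, power in dBm converts to Watts as $P_{\rm{W}}=10^{(P_{\rm{dBm}}-30)/10}$; a minimal helper (names ours):

```python
def dbm_to_watt(p_dbm):
    """Convert a power level in dBm to Watts: P_W = 10^((P_dBm - 30)/10)."""
    return 10.0 ** ((p_dbm - 30.0) / 10.0)

# Defaults from the text: noise power -75 dBm, transmit power limit 15 dBm.
print(f"{dbm_to_watt(-75):.3e}")   # → 3.162e-11
print(f"{dbm_to_watt(15):.4f}")    # → 0.0316
```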
\subsection{Convergence Analysis}
The convergence performance of the proposed BCD-MM algorithm is investigated first. The iterations of the BCD algorithm are termed outer-layer iterations, while the iterations of the MM algorithm are termed inner-layer iterations. Fig.~\ref{fig1simu} shows three examples of the convergence behaviour for $M=10$, 20 and 40 IRS phase shifts. In Fig.~\ref{fig1simu}, the SR increases with the iteration number and finally reaches a stable value. The algorithm converges quickly, within about 20 iterations, which demonstrates its efficiency. Moreover, a larger converged SR value is reached with a larger $M$, which means that better security can be obtained by using more IRS elements. However, more IRS elements entail a heavier computational burden, which manifests in Fig.~\ref{fig1simu} as a slower convergence speed for larger numbers of phase shifts.
\begin{figure}
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{outlayer_50.pdf}\vspace{-0.6cm}
\caption{Convergence behaviour of the BCD algorithm.}
\label{fig1simu}
\end{minipage}%
\hfill
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{innerlayer.pdf}\vspace{-0.6cm}
\caption{Convergence behaviour of the MM algorithm.}
\label{fig2simu}
\end{minipage}\vspace{-0.7cm}
\end{figure}
Specifically, we evaluate the convergence of the MM algorithm used for finding the optimal IRS phase shifts. The inner-layer iterative process of the MM algorithm within the first iteration of the BCD algorithm is shown in Fig.~\ref{fig2simu}. The SR value increases with the iteration number and finally converges to a stable value. Consistent with the convergence behaviour of the outer-layer iterations, a similar conclusion can be drawn for the inner-layer iterations: a higher converged SR value can be obtained with more phase shifts, but at the cost of a lower convergence speed. The reason for the lower convergence speed with a larger $M$ is that more optimization variables are introduced, which increases the computational complexity.
\subsection{Performance Evaluation}
In this subsection, our proposed algorithm is evaluated by comparing it with the following three benchmark schemes:
\begin{enumerate}
\item \textbf{RandPhase}: The phase shifts of the IRS are randomly selected from $[0,2\pi]$. In this scheme, the MM algorithm is skipped, and only the TPC matrix and AN covariance matrix are optimized.
\item \textbf{No-IRS}: Without the IRS, the channel matrices of the IRS-related links become zero matrices, i.e., ${\bf{H}}_{R,I}={\bf{0}}$, ${\bf{H}}_{R,E}={\bf{0}}$ and ${{\bf{G}}}={\bf{0}}$. This scheme results in a conventional AN-aided communication system, where only the TPC matrix and the AN covariance matrix need to be optimized.
\item \textbf{BCD-QCQP-SDR}: The BCD algorithm is utilized. However, the TPC matrix and the AN covariance matrix are optimized by tackling Problem (\ref{optorigVElowerbndSmpNOfai}) as a QCQP problem, which is solved by general-purpose CVX solvers, e.g., SeDuMi or MOSEK. The phase shifts of the IRS are optimized by solving Problem (\ref{appjig}) with the SDR technique.
\end{enumerate}
\subsubsection{Impact of Transmit Power}
\begin{figure}
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{SRVSpowCompared.pdf}\vspace{-0.6cm}
\caption{Achievable SR versus the transmit power limit.}
\label{fig3simu}
\end{minipage}%
\hfill
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{SRVSMCompared.pdf}\vspace{-0.6cm}
\caption{Achievable SR versus the number of phase shifts $M$.}
\label{fig4simu}
\end{minipage}\vspace{-0.7cm}
\end{figure}
To evaluate the impact of the transmit power limit $P_T$, the average SRs versus the transmit power limit for the various schemes are shown in Fig.~\ref{fig3simu}, which demonstrates that the achieved SRs of all schemes increase with the power limit $P_T$. It is observed that the BCD-MM algorithm significantly outperforms the three benchmark schemes over the entire range of transmit power limits. By comparing the RandPhase scheme with the No-IRS scheme, we find that the RandPhase scheme achieves a higher SR, and that the SR gap widens with the power limit $P_T$. The reason is that, in the RandPhase scheme, the IR is closer to the IRS than the Eve is, so the IR acquires more signal power from the IRS than the Eve does, whereas in the No-IRS scheme, the IR is farther from the BS than the Eve is, so the IR acquires less signal power from the BS than the Eve does. This comparison signifies that even when the phase shifts of the IRS are random, the IRS can enhance the system security. In comparison with the No-IRS scheme, the SR gain achieved by the proposed algorithm is substantial and increases markedly with the power limit $P_T$, which confirms the effectiveness and benefits of employing the IRS. By comparing the proposed scheme with the RandPhase scheme, we find that the security gain obtained by the proposed scheme is much greater. This is because the phase shifts of the IRS are properly designed to combine the signal received at the IR more constructively and the signal received at the Eve more destructively. This comparison emphasizes that optimizing the phase shifts of the IRS is important and necessary. Finally, by comparing the proposed BCD-MM algorithm with the BCD-QCQP-SDR algorithm, we observe that the proposed BCD-MM algorithm achieves a better SR performance, with a gain that increases with $P_T$.
Moreover, the proposed BCD-MM algorithm is much more computationally efficient than the BCD-QCQP-SDR algorithm, which further validates its superiority.
\subsubsection{Impact of the Phase Shifts Number}
The average SR performance of the four schemes with various numbers of phase shifts $M$ is shown in Fig.~\ref{fig4simu}, which demonstrates that the proposed BCD-MM algorithm is significantly superior to the other three schemes. We observe that the SR achieved by the BCD-MM scheme increases markedly with $M$, while the RandPhase scheme only shows a slight improvement as $M$ increases, and the No-IRS scheme yields very low SRs irrespective of $M$. The larger the number $M$ of IRS elements, the more significant the performance gain obtained by the proposed algorithm. For example, when $M$ is as small as $M=10$, the SR gain of the BCD-MM scheme over the No-IRS scheme is only 1.3 bit/s/Hz, while this gain grows to 9.5 bit/s/Hz when $M$ increases to $M=100$. The performance gain of the proposed algorithm originates from two aspects. On the one hand, a higher array gain can be obtained by increasing $M$, since more signal power is collected at the IRS with a larger $M$. On the other hand, a higher reflect-beamforming gain can be obtained by increasing $M$: by appropriately designing the phase shifts, the coherently combined signal reflected by the IRS elements grows with $M$. However, only the array gain can be exploited by the RandPhase scheme; thus its SR increases very slowly and remains at much lower values than that of the proposed algorithm. These results further confirm that greater security improvements can be achieved by using a larger IRS with more reflecting elements and properly optimized phase shifts, albeit at the cost of increased computational complexity. In comparison with the BCD-QCQP-SDR algorithm, the proposed BCD-MM algorithm achieves a higher SR, and the SR gap increases with $M$.
\subsubsection{Impact of the relative location of IRS}
\begin{figure}
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{SMVSBoblocaCompared.pdf}\vspace{-0.6cm}
\caption{Achievable SR versus the location of the IR $d_{\rm{BI}}$.}
\label{fig5simu}
\end{minipage}%
\hfill
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{SRVSafaCompared.pdf}\vspace{-0.6cm}
\caption{Achievable SR versus the path loss exponent of IRS-related links.}
\label{fig6simu}
\end{minipage}\vspace{-0.7cm}
\end{figure}
Fig.~\ref{fig5simu} illustrates the achieved SRs of the four schemes for various BS-IR horizontal distances $d_{BI}$, where the BS-Eve distance is fixed at $d_{BE} = 44$ m. It is observed that the proposed BCD-MM algorithm achieves the highest SR among the four schemes. When the IR moves away from the BS, the SRs of all four schemes decrease; however, the SRs achieved by the RandPhase, BCD-MM and BCD-QCQP-SDR schemes increase greatly when the IR approaches the IRS. The SRs of the RandPhase scheme and the No-IRS scheme are almost the same at all BS-IR distances except for $d_{BI} \in [40 \ \text{m}, 50 \ \text{m}]$, in which case the IRS brings a prominent
security enhancement when the IR is close to it, even with random IRS phase shifts. Similarly, the proposed BCD-MM algorithm and the BCD-QCQP-SDR algorithm achieve almost the same SRs except for $d_{BI} \in [40 \ \text{m}, 50 \ \text{m}]$, in which case the IR is close to the IRS and the proposed BCD-MM algorithm is superior. For the other BS-IR distances, where the IR is far from the IRS, the SRs of the RandPhase scheme are similar to those of the No-IRS scheme because the potential of the IRS is not fully exploited. By optimizing the phase shifts of the IRS, the SRs are enhanced at all BS-IR distances, and the SR gain of the proposed BCD-MM algorithm over the RandPhase scheme increases when the IR moves close to the IRS ($d_{BI} \in [40 \ \text{m}, 50 \ \text{m}]$). This signifies that as long as the IRS is deployed close to the IR, a significant security enhancement can be achieved by the IRS in an AN-aided MIMO communication system. Moreover, it is highly recommended to optimize the IRS phase shifts so as to prevent the system security from degrading to the level of the No-IRS scheme.
\subsubsection{Impact of the Path Loss Exponent of IRS-related Links}
In the above simulations, the path loss exponents of the IRS-related links (the BS-IRS, IRS-IR and IRS-Eve links) are set to low values by assuming that the IRS is properly located so as to obtain clean channels without heavy blockage. In practice, such settings may not always hold. Thus, it is necessary to investigate the security gain brought by the IRS and the proposed algorithm for higher values of the IRS-related path loss exponents. For ease of analysis, we assume that the path-loss exponents of the BS-IRS, IRS-IR and IRS-Eve links are identical, i.e., ${\alpha _{{\rm{BR}}}}={\alpha _{{\rm{RI}}}}={\alpha _{{\rm{RE}}}} \buildrel \Delta \over = {\alpha _{{\rm{IRS}}}}$. The achieved SRs versus the path-loss exponent ${\alpha _{{\rm{IRS}}}}$ of the IRS-related links are then shown in Fig.~\ref{fig6simu}, which demonstrates that the SR obtained by the BCD-MM algorithm decreases as ${\alpha _{{\rm{IRS}}}}$ increases, eventually dropping to the same SR value as that achieved by the RandPhase, BCD-QCQP-SDR and No-IRS schemes. The reason is that a larger ${\alpha _{{\rm{IRS}}}}$ means more severe signal attenuation in the IRS-related links, so that the signal received and reflected by the IRS is weaker. In comparison with the BCD-QCQP-SDR algorithm, the proposed BCD-MM algorithm achieves a higher SR when the IRS-related channels are in good condition, i.e., when ${\alpha _{{\rm{IRS}}}}$ is low, and almost the same SR when ${\alpha _{{\rm{IRS}}}}$ is large. Similarly, the performance gains of the proposed algorithm over the RandPhase and No-IRS schemes are significant for a small ${\alpha _{{\rm{IRS}}}}$. Specifically, for ${\alpha _{{\rm{IRS}}}}=2$ (nearly ideal channels), the security gain is up to 9.6 bit/s/Hz over the No-IRS scheme and 6.8 bit/s/Hz over the RandPhase scheme.
Therefore, the security gain of IRS-assisted systems depends on the channel conditions of the IRS-related links. This suggests that the IRS should preferably be deployed with few obstacles around it, in which case the performance gain brought by the IRS can be fully exploited. Fig.~\ref{fig6simu} also shows that when ${\alpha _{{\rm{IRS}}}}$ is small, the RandPhase scheme obtains a security gain over the No-IRS scheme, but this gain decreases to zero as ${\alpha _{{\rm{IRS}}}}$ becomes large. However, the SR gain of the RandPhase scheme over the No-IRS scheme is almost negligible in comparison with that of the proposed scheme, which demonstrates the necessity of jointly optimizing the TPC matrix, the AN covariance matrix and the phase shifts at the IRS.
\subsubsection{Impact of the Number of Data Streams}
Compared with the MISO scenario, a significant advantage of the MIMO scenario is that multiple data streams can be transmitted to the users. To evaluate the impact of the number of data streams on the SR, the average SRs versus the transmit power limit for various numbers of data streams are given in Fig.~\ref{figdatastreams}.
\begin{figure}
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{SRVSpow_d_badEveChannl.pdf}\vspace{-0.6cm}
\caption{Achievable SR versus the transmit power limit for various numbers of data streams.}
\label{figdatastreams}
\end{minipage}%
\hfill
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{SRVSAmplitude.pdf}\vspace{-0.6cm}
\caption{Achievable SR versus the reflection amplitude $\eta$.}
\label{figSRvsAmplitude}
\end{minipage}\vspace{-0.7cm}
\end{figure}
The number of transmit antennas is $N_T=4$, and Rician fading channels are used. The path loss exponents are ${\alpha _{{\rm{BR}}}}= 2.2$, ${\alpha _{{\rm{BE}}}}=3.5$, ${\alpha _{{\rm{BI}}}}=2.5$, ${\alpha _{{\rm{RE}}}}=3.5$ and ${\alpha _{{\rm{RI}}}}=2.5$, respectively. The Rician factor is $\beta=3$, and the number of phase shifts is $M=50$. As shown in Fig.~\ref{figdatastreams}, the SR increases with the transmit power limit, and a larger number of data streams results in a higher SR. When the transmit power limit is low, only marginal performance gains
are achieved by increasing the number $d$ of data streams; when the transmit power limit is high, the gains become significant. This means that a greater number of data streams ensures a higher SR, and the performance gain expands with the transmit power limit. For the case of $d=1$, the SR performance of $N_I=N_E=4$ and $N_I=N_E=1$ is further compared. It is revealed that the SR obtained with four receive antennas is higher than that obtained with a single receive antenna when the transmit power limit is relatively low. As the transmit power limit increases, the SR gain brought by multiple receive antennas decreases. When the transmit power limit is high enough, the SR performance saturates, and the SR performance of multiple receive antennas and of a single receive antenna becomes the same.
\subsubsection{Impact of the Reflection Amplitude}
Due to manufacturing and hardware limitations, the signals reflected by the IRS may be attenuated. In Fig.~\ref{figSRvsAmplitude}, we therefore study the impact of the reflection amplitude on the security performance. The transmit power limit is 10 dBm. We assume that the reflection amplitudes of all the IRS elements are identical and equal to $\eta$, so that the phase shift matrix of the IRS becomes ${\bf{\Phi}} =\eta\,\text{diag}\{{{\phi }_{1}},\cdots ,{{\phi }_{m}},\cdots ,{{\phi }_{M}}\}$. As expected, the SR achieved by the IRS-aided scheme increases with $\eta$ owing to the reduced power loss. As $\eta$ increases, the superiority of the proposed BCD-MM algorithm over the other algorithms becomes more obvious. The reflection amplitude thus has a great impact on the security performance: when $\eta$ increases from 0.2 to 1, the SR of the proposed BCD-MM algorithm increases by more than 3.6 bit/s/Hz.
\subsubsection{Impact of the Discrete Phase Shifts}
In practice, it is difficult to realize continuous phase shifts at the reflecting elements of the IRS due to the high manufacturing cost. It is more cost-effective to implement only discrete phase shifts with a small number of control bits per element, e.g., 1 bit for two-level (0 or $\pi$) phase shifts. Thus, the impact of the number of control bits $b$ of the discrete phase shifts on the security performance is investigated in Fig.~\ref{figSRvsDiscretePhasebits}. The transmit power limit is 10 dBm. It is shown that the SR with continuous IRS phase shifts is higher than that with discrete phase shifts; the limited discrete phase shifts inevitably cause an SR performance degradation. The SR of the IRS with discrete phase shifts increases with the number of control bits $b$ and saturates when $b\ge4$, which means that some SR loss remains even when the number of control bits $b$ is high. For the proposed BCD-MM algorithm, the maximum SR gap between the continuous and the discrete phase shifts is 1.4 bit/s/Hz.
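For reference, a simple way to model $b$-bit phase shifts is to project each continuous phase onto the nearest of $2^b$ uniform levels; the projection rule sketched below is a common choice, not necessarily the one used to produce Fig.~\ref{figSRvsDiscretePhasebits}:

```python
import numpy as np

def quantize_phases(phi, b):
    """Project unit-modulus phases onto the nearest of the 2^b uniform levels
    {0, 2*pi/2^b, ..., 2*pi*(2^b - 1)/2^b} (an assumed projection rule)."""
    step = 2 * np.pi / 2 ** b
    theta = np.mod(np.angle(phi), 2 * np.pi)
    return np.exp(1j * step * np.round(theta / step))

rng = np.random.default_rng(4)
phi = np.exp(1j * rng.uniform(0, 2 * np.pi, 64))

# 1-bit quantization only allows the two levels 0 and pi, i.e. values +/-1.
q1 = quantize_phases(phi, 1)
print(np.allclose(q1 ** 2, 1.0))          # → True

# The worst-case phase error is at most pi/2^b, shrinking as b grows.
errs = [np.max(np.abs(np.angle(quantize_phases(phi, b) * phi.conj())))
        for b in (1, 2, 3, 4)]
print(all(e <= np.pi / 2 ** b + 1e-9 for e, b in zip(errs, (1, 2, 3, 4))))  # → True
```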
\begin{figure}
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{SRVSdiscretPhasebits.pdf}\vspace{-0.6cm}
\caption{Achievable SR versus the discrete phase bits $b$.}
\label{figSRvsDiscretePhasebits}
\end{minipage}%
\hfill
\begin{minipage}[t]{0.495\linewidth}
\centering
\includegraphics[width=2.6in]{SRVSpowMultiIRs.pdf}\vspace{-0.6cm}
\caption{Achievable SR versus the transmit power limit for multiple IRs.}
\label{figSRvspowMultIRs}
\end{minipage}\vspace{-0.7cm}
\end{figure}
\subsubsection{Multiple IRs Scenario}
Finally, we consider the multiple-IR scenario to investigate the security enhancement brought by the IRS to AN-aided MIMO communication systems. The horizontal distances between the BS and the two IRs are set to $d_{BI,1}=47$ m and $d_{BI,2}=49$ m. Considering the heavy computational load, the number of IRS elements is set to $M=20$. The proposed BCD-QCQP-CCP algorithm is used to jointly optimize the TPC matrix, the AN covariance matrix and the phase shifts of the IRS. The achieved SRs of the proposed algorithm, the random-IRS scheme and the No-IRS scheme are shown in Fig.~\ref{figSRvspowMultIRs}. Compared with the random-IRS scheme and the No-IRS scheme, the proposed BCD-QCQP-CCP algorithm optimizes the phase shifts of the IRS and thus achieves a higher SR, and the SR gain increases with the power limit $P_T$. However, the performance gain is not as large as that in Fig.~\ref{fig3simu}. On the one hand, the number of IRS elements is set lower than in Fig.~\ref{fig3simu} due to the heavy computational load. On the other hand, it is more difficult for the IRS to adjust its phase shifts to guarantee higher SRs for more legitimate IRs.
\section{Conclusions}\label{conclu}
In this paper, we proposed to enhance the security of AN-aided MIMO secure communication systems by exploiting an IRS. To exploit the IRS efficiently, we formulated an SRM problem that jointly optimizes the TPC matrix at the BS, the AN covariance matrix and the phase shifts at the IRS, subject to the transmit power limit and the unit-modulus constraints on the phase shifts. To solve this non-convex problem, we proposed the BCD algorithm to decouple the optimization variables and optimize them iteratively. The optimal TPC matrix and AN covariance matrix were obtained in semi-closed form by the Lagrange multiplier method, and the phase shifts at the IRS were obtained in closed form by an efficient MM algorithm. Extensive simulations validated that significant security gains can be achieved by the proposed algorithm with the aid of the IRS, and useful suggestions for choosing and deploying the IRS were provided.
\begin{appendices}
\vspace{-0.5cm}\section{Derivation of the Problem \eqref{optorigVElowerbndSmp}}\label{firstconstrntDeriv}
By substituting $h_1({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E},{{\bf{W}}_I})$ of (\ref{lowboundh1f1}), $h_2({{\bf{U}}_E},{{\bf{V}}_E},{{\bf{W}}_E})$ of (\ref{lowboundh2f2}) and $h_3({{\bf{V}}},{{\bf{V}}_E},{{\bf{W}}_X})$ of (\ref{lowboundh3f3}) into (\ref{lowboundCANlow}), we have
\begin{align} \label{lowCAN}
&{{\rm{C}}^{l}_{AN}}({{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X},{\bf{V}},{{\bf{V}}_E},{\bf{\Phi}})=\log \left| {{{\bf{W}}_I}} \right| - {\rm{Tr}}({{\bf{W}}_I}{{\bf{E}}_I}({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E})) + \log \left| {{{\bf{W}}_E}} \right| \nonumber \\
& \qquad \qquad \qquad - {\rm{Tr}}({{\bf{W}}_E}{{\bf{E}}_{E}}({{\bf{U}}_E},{{\bf{V}}_E})) + \log \left| {{{\bf{W}}_X}} \right| - {\rm{Tr}}({{\bf{W}}_X}{{\bf{E}}_{X}}({{\bf{V}}},{{\bf{V}}_E}))+d+N_t+N_E \nonumber \\
&\qquad=C_{g_{0}}-{\underbrace{ {\rm{Tr}}({{\bf{W}}_I}{{\bf{E}}_I}({{\bf{U}}_I},{\bf{V}},{{\bf{V}}_E}))}_{g_{1}}}-{\underbrace{ {\rm{Tr}}({{\bf{W}}_E}{{\bf{E}}_{E}}({{\bf{U}}_E},{{\bf{V}}_E}))}_{g_{2}}}- {\underbrace{{\rm{Tr}}({{\bf{W}}_X}{{\bf{E}}_{X}}({{\bf{V}}},{{\bf{V}}_E}))}_{g_{3}}},
\end{align}
where $C_{g_{0}}\buildrel \Delta \over =\log \left| {{{\bf{W}}_I}} \right|+ \log \left| {{{\bf{W}}_E}} \right|+ \log \left| {{{\bf{W}}_X}} \right|+d+N_t+N_E$.
$C_{g_{0}}$ collects the constant terms that are independent of ${\bf{V}},{{\bf{V}}_E},{\bf{\Phi}}$. By substituting the matrix functions ${{\bf{E}}_I}$, ${{\bf{E}}_E}$ and ${{\bf{E}}_X}$ into \eqref{lowCAN}, we expand ${g_{1}}$, ${g_{2}}$, and ${g_{3}}$ as follows.
(1) ${{g}_{1}}$ can be reformulated as
\begin{align}
{{g}_{1}}&=\text{Tr}({{\bf{W}}_{I}}[({\bf{I}}-{{\bf{U}}_{I}}^{H}{{{\hat{\bf{H}}}}_{I}}{\bf{V}}){{({\bf{I}}-{{\bf{U}}_{I}}^{H}{{{\hat{\bf{H}}}}_{I}}{\bf{V}})}^{H}}+{{\bf{U}}_{I}}^{H}({{{\hat{\bf{H}}}}_{I}}{{\bf{V}}_{E}}{{\bf{V}}_{E}}^{H}{{{\hat{\bf{H}}}}_{I}}^{H}+\sigma _{I}^{2}{{\bf{I}}_{{{N}_{I}}}}){{\bf{U}}_{I}}])\nonumber \\
&=\text{Tr}({{\bf{W}}_{I}}[({\bf{I}}-{{\bf{V}}^{H}}{{{\hat{\bf{H}}}}_{I}}^{H}{{\bf{U}}_{I}}-{{\bf{U}}_{I}}^{H}{{{\hat{\bf{H}}}}_{I}}{\bf{V}}+{{\bf{U}}_{I}}^{H}{{{\hat{\bf{H}}}}_{I}}{\bf{V}}{{\bf{V}}^{H}}{{{\hat{\bf{H}}}}_{I}}^{H}{{\bf{U}}_{I}}) \nonumber \\
&\quad +({{\bf{U}}_{I}}^{H}{{{\hat{\bf{H}}}}_{I}}{{\bf{V}}_{E}}{{\bf{V}}_{E}}^{H}{{{\hat{\bf{H}}}}_{I}}^{H}{{\bf{U}}_{I}}\text{+}{{\bf{U}}_{I}}^{H}\sigma _{I}^{2}{{\bf{I}}_{{{N}_{I}}}}{{\bf{U}}_{I}})]). \label{eq29t}
\end{align}
By gathering the constant terms related to ${{\bf{W}}_{I}}$ and ${{\bf{U}}_{I}}$ into ${{C}}_{g_{1}}$, ${{g}_{1}}$ can be simplified as
\begin{align}
{{g}_{1}}&=-\text{Tr}({{\bf{W}}_{I}}{{\bf{V}}^{H}}{{{\hat{\bf{H}}}}_{I}}^{H}{{\bf{U}}_{I}})-\text{Tr}({{\bf{W}}_{I}}{{\bf{U}}_{I}}^{H}{{{\hat{\bf{H}}}}_{I}}{\bf{V}})+\text{Tr}({{\bf{V}}^{H}}{{{\hat{\bf{H}}}}_{I}}^{H}{{\bf{U}}_{I}}{{\bf{W}}_{I}}{{\bf{U}}_{I}}^{H}{{{\hat{\bf{H}}}}_{I}}{\bf{V}})\nonumber \\ & \quad +\text{Tr}({{\bf{V}}_{E}}^{H}{{{\hat{\bf{H}}}}_{I}}^{H}{{\bf{U}}_{I}}{{\bf{W}}_{I}}{{\bf{U}}_{I}}^{H}{{{\hat{\bf{H}}}}_{I}}{{\bf{V}}_{E}})+{{C}}_{g_{1}}, \label{eq30t}
\end{align}
where ${{C}}_{g_{1}}\buildrel \Delta \over =\text{Tr}({{\bf{W}}_{I}}+\sigma _{I}^{2}{{\bf{W}}_{I}}{{\bf{U}}_{I}}^{H}{{\bf{U}}_{I}})$.
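As an aside (not part of the original derivation), identities such as \eqref{eq29t}--\eqref{eq30t} follow purely from expanding the matrix products and applying the cyclic property of the trace, so they can be spot-checked numerically. The following Python sketch does so with random complex matrices; all dimensions and variable names are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
Nt, NI, d = 4, 3, 2  # illustrative sizes only

def crand(*s):
    """Random complex matrix of the given shape."""
    return rng.standard_normal(s) + 1j * rng.standard_normal(s)

H = crand(NI, Nt)   # plays the role of \hat H_I
U = crand(NI, d)    # U_I
W = crand(d, d)     # W_I
V = crand(Nt, d)    # V
VE = crand(Nt, Nt)  # V_E
s2 = 0.7            # sigma_I^2

# g1 in the form of (29): Tr(W[(I - U^H H V)(I - U^H H V)^H + U^H(H V_E V_E^H H^H + s2 I)U])
A = np.eye(d) - U.conj().T @ H @ V
g1 = np.trace(W @ (A @ A.conj().T
                   + U.conj().T @ (H @ VE @ VE.conj().T @ H.conj().T
                                   + s2 * np.eye(NI)) @ U))

# g1 in the simplified form of (30), with the constant C_{g_1}.
Cg1 = np.trace(W + s2 * W @ U.conj().T @ U)
g1_simple = (-np.trace(W @ V.conj().T @ H.conj().T @ U)
             - np.trace(W @ U.conj().T @ H @ V)
             + np.trace(V.conj().T @ H.conj().T @ U @ W @ U.conj().T @ H @ V)
             + np.trace(VE.conj().T @ H.conj().T @ U @ W @ U.conj().T @ H @ VE)
             + Cg1)

assert np.allclose(g1, g1_simple)
```

The same kind of check applies verbatim to the simplifications of ${g_2}$ and ${g_3}$ below.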
(2) ${{g}_{2}}$ can be reformulated as
\begin{align}
{{g}_{2}}&=\text{Tr}({{\mathbf{W}}_{E}}[(\mathbf{I}-{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}}){{(\mathbf{I}-{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})}^{H}}+\sigma _{E}^{2}{{\mathbf{U}}_{E}}^{H}{{\mathbf{U}}_{E}}]) \nonumber \\
&=\text{Tr}({{\mathbf{W}}_{E}}[(\mathbf{I}-{{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}}-{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}}+{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}}{{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}}\nonumber \\
&\quad +\sigma _{E}^{2}{{\mathbf{U}}_{E}}^{H}{{\mathbf{U}}_{E}}]). \label{eq31t}
\end{align}
By gathering the constant terms related to ${{\mathbf{W}}_{E}}$ and ${{\mathbf{U}}_{E}}$ into ${{C}}_{g_{2}}$, ${{g}_{2}}$ can be simplified as
\begin{align}
{{g}_{2}}=-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+\text{Tr}({{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}}{{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+{{C}}_{g_{2}}, \label{eq32t}
\end{align}
where ${{C}}_{g_{2}}\buildrel \Delta \over =\text{Tr}({{\bf{W}}_{E}}+\sigma _{E}^{2}{{\bf{W}}_{E}}{{\bf{U}}_{E}}^{H}{{\bf{U}}_{E}})$.
(3) ${{g}_{3}}$ can be reformulated as
\begin{align}
{{g}_{3}}=\text{Tr}({{\mathbf{W}}_{X}}({{{\bf{I}}_{{N_E}}} + \sigma _E^{-2}{{\hat {\bf{H}}}_E}({\bf{V}}{{\bf{V}}^H}+{{\bf{V}}_E}{{\bf{V}}_E}^H)\hat {\bf{H}}_E^H})).
\label{eq33t}
\end{align}
By gathering the constant terms related to ${{\mathbf{W}}_{X}}$ into ${{C}}_{g_{3}}$, ${{g}_{3}}$ can be simplified as
\begin{align}
{{g}_{3}}=\sigma _E^{-2}\text{Tr}({{\mathbf{V}}^{H}}\mathbf{\hat{H}}_{E}^{H}{{\mathbf{W}}_{X}}{{{\mathbf{\hat{H}}}}_{E}}\mathbf{V})+\sigma _E^{-2}\text{Tr}({{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{W}}_{X}}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+{{C}}_{g_{3}},
\label{eq34t}
\end{align}
where ${{C}}_{g_{3}}\buildrel \Delta \over =\text{Tr}({{\bf{W}}_{X}})$.
By substituting \eqref{eq30t}, \eqref{eq32t} and \eqref{eq34t} into \eqref{lowCAN}, we have
\begin{align} \label{lowCANwithconstants}
&{{\rm{C}}^{l}_{AN}}({{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X},{\bf{V}},{{\bf{V}}_E},{\bf{\Phi}})=\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}})
+\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{U}}_{I}}^{H}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V})\nonumber \\
& -\text{Tr}({{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}}{{\mathbf{W}}_{I}}{{\mathbf{U}}_{I}}^{H}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V}) -\text{Tr}({{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}}{{\mathbf{W}}_{I}}{{\mathbf{U}}_{I}}^{H}{{{\mathbf{\hat{H}}}}_{I}}{{\mathbf{V}}_{E}}) +\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})\nonumber \\
& +\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}}) -\text{Tr}({{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}}{{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})
-\sigma _E^{-2}\text{Tr}({{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}}_{E}^{H}{{\mathbf{W}}_{X}}{{{\mathbf{\hat{H}}}}_{E}}\mathbf{V})\nonumber \\
& -\sigma _E^{-2}\text{Tr}({{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{W}}_{X}}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+C_{g},
\end{align}
where $C_{g}\buildrel \Delta \over =C_{g_{0}}-C_{g_{1}}-C_{g_{2}}-C_{g_{3}}$.
Equation \eqref{lowCANwithconstants} can be rewritten more compactly as
\begin{align}
& {{\rm{C}}^{l}_{AN}}({{\bf{U}}_I},{{\bf{W}}_I},{{\bf{U}}_E},{{\bf{W}}_E},{{\bf{W}}_X},{\bf{V}},{{\bf{V}}_E},{\bf{\Phi}})=C_{g}+\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}})+\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{U}}_{I}}^{H}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V})\nonumber \\
&\quad-\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}}\mathbf{V}) +\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})+\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})-\text{Tr}({{\mathbf{V}}_{E}}^{H}{{\mathbf{H}}_{VE}}{{\mathbf{V}}_{E}}),
\label{eq37t}
\end{align}
where
\begin{align}
{{\mathbf{H}}_{V}}={{\mathbf{\hat{H}}}_{I}}^{H}{{\mathbf{U}}_{I}}{{\mathbf{W}}_{I}}{{\mathbf{U}}_{I}}^{H}{{\mathbf{\hat{H}}}_{I}}+\sigma _E^{-2}\mathbf{\hat{H}}_{E}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}}.
\label{eq38t}
\end{align}
\begin{align}
{{\mathbf{H}}_{VE}}={{\mathbf{\hat{H}}}_{I}}^{H}{{\mathbf{U}}_{I}}{{\mathbf{W}}_{I}}{{\mathbf{U}}_{I}}^{H}{{\mathbf{\hat{H}}}_{I}}+{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{U}}_{E}}{{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}}^{H}{{\mathbf{\hat{H}}}_{E}}+\sigma _E^{-2}{{\mathbf{\hat{H}}}_{E}}^{H}{{\mathbf{W}}_{X}}{{\mathbf{\hat{H}}}_{E}}.
\label{eq39t}
\end{align}
By substituting \eqref{eq37t} into Problem \eqref{optorigVElowerbnd} and removing the constant term $C_{g}$, we arrive at Problem \eqref{optorigVElowerbndSmp}.
\vspace{-0.5cm}\section{Derivation of the new OF form in \eqref{eqg0forPhi}}\label{newOFDeriv}
The objective function of Problem \eqref{optproblemforfaimin} is
\begin{align} \label{firstconstraint}
{g_{0}}(\mathbf{V},{{\mathbf{V}}_{E}},\mathbf{\Phi })=&-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{V}}^{H}}{{{\mathbf{\hat{H}}}}_{I}}^{H}{{\mathbf{U}}_{I}})-\text{Tr}({{\mathbf{W}}_{I}}{{\mathbf{U}}_{I}}^{H}{{{\mathbf{\hat{H}}}}_{I}}\mathbf{V})+\text{Tr}({{\mathbf{V}}^{H}}{{\mathbf{H}}_{V}}\mathbf{V}) \nonumber \\
&-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{V}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}^{H}{{\mathbf{U}}_{E}})-\text{Tr}({{\mathbf{W}}_{E}}{{\mathbf{U}}_{E}}^{H}{{{\mathbf{\hat{H}}}}_{E}}{{\mathbf{V}}_{E}})+\text{Tr}({{\mathbf{V}}_{E}}^{H}{{\mathbf{H}}_{VE}}{{\mathbf{V}}_{E}}).
\end{align}
The third term of \eqref{firstconstraint} is
\begin{align} \label{thirdoffirstconst}
{\rm{Tr}}\left( {{{\bf{V}}^H}{{\bf{H}}_V}{\bf{V}}} \right)& = {\rm{Tr}}\left[ {{{\bf{V}}^H}\left( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H{{{\bf{\hat H}}}_I} + \sigma _E^{-2}{\bf{\hat H}}_E^H{{\bf{W}}_X}{{{\bf{\hat H}}}_E}} \right){\bf{V}}} \right] \nonumber \\
&= {\rm{Tr}}\left[ {{{{\bf{\hat H}}}_I}{\bf{V}}{{\bf{V}}^H}{\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H} \right] +\sigma _E^{-2} {\rm{Tr}}\left[ {{{{\bf{\hat H}}}_E}{\bf{V}}{{\bf{V}}^H}{\bf{\hat H}}_E^H{{\bf{W}}_X}} \right].
\end{align}
The sixth term of \eqref{firstconstraint} is
\begin{align} \label{sixoffirstconst}
{\rm{Tr}}\left( {{\bf{V}}_E^H{{\bf{H}}_{VE}}{{\bf{V}}_E}} \right) =& {\rm{Tr}}\left[ {{\bf{V}}_E^H\left( {{\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H{{{\bf{\hat H}}}_I} + {\bf{\hat H}}_E^H{{\bf{U}}_E}{{\bf{W}}_E}{\bf{U}}_E^H{{{\bf{\hat H}}}_E} +\sigma _E^{-2} {\bf{\hat H}}_E^H{{\bf{W}}_X}{{{\bf{\hat H}}}_E}} \right){{\bf{V}}_E}} \right] \nonumber \\
=& {\rm{Tr}}\left[ {{{{\bf{\hat H}}}_I}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H} \right] + {\rm{Tr}}\left[ {{{{\bf{\hat H}}}_E}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{\hat H}}_E^H{{\bf{U}}_E}{{\bf{W}}_E}{\bf{U}}_E^H} \right] \nonumber \\
&+ \sigma _E^{-2} {\rm{Tr}}\left[ {{{{\bf{\hat H}}}_E}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{\hat H}}_E^H{{\bf{W}}_X}} \right].
\end{align}
The sum of \eqref{thirdoffirstconst} and \eqref{sixoffirstconst} is
\begin{align} \label{third_sixoffirstconst}
{\rm{Tr}}\left( {{{\bf{V}}^H}{{\bf{H}}_V}{\bf{V}}} \right) + {\rm{Tr}}\left( {{\bf{V}}_E^H{{\bf{H}}_{VE}}{{\bf{V}}_E}} \right) =& {\rm{Tr}}\left[ {{{{\bf{\hat H}}}_I}\left( {{\bf{V}}{{\bf{V}}^H} + {{\bf{V}}_E}{\bf{V}}_E^H} \right){\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H} \right] \nonumber \\
& + \sigma _E^{-2}{\rm{Tr}}\left[ {{{{\bf{\hat H}}}_E}\left( {{\bf{V}}{{\bf{V}}^H} + {{\bf{V}}_E}{\bf{V}}_E^H} \right){\bf{\hat H}}_E^H{{\bf{W}}_X}} \right] \nonumber \\
&+ {\rm{Tr}}\left[ {{{{\bf{\hat H}}}_E}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{\hat H}}_E^H{{\bf{U}}_E}{{\bf{W}}_E}{\bf{U}}_E^H} \right].
\end{align}
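The combined identity \eqref{third_sixoffirstconst} can likewise be verified numerically. The sketch below (an illustration, not part of the paper) builds ${\bf H}_V$ and ${\bf H}_{VE}$ as in \eqref{eq38t} and \eqref{eq39t} from random complex matrices with arbitrary illustrative sizes and checks that both sides agree.

```python
import numpy as np

rng = np.random.default_rng(0)
Nt, NI, NE, d = 4, 3, 3, 2  # illustrative sizes only

def crand(*shape):
    """Random complex matrix of the given shape."""
    return rng.standard_normal(shape) + 1j * rng.standard_normal(shape)

H_I, H_E = crand(NI, Nt), crand(NE, Nt)   # \hat H_I, \hat H_E
U_I, U_E = crand(NI, d), crand(NE, d)
W_I, W_E = crand(d, d), crand(d, d)
W_X = crand(NE, NE)
V, V_E = crand(Nt, d), crand(Nt, Nt)
sigE2 = 0.5                               # sigma_E^2

# H_V and H_VE as defined in (38) and (39).
H_V = (H_I.conj().T @ U_I @ W_I @ U_I.conj().T @ H_I
       + H_E.conj().T @ W_X @ H_E / sigE2)
H_VE = H_V + H_E.conj().T @ U_E @ W_E @ U_E.conj().T @ H_E

lhs = np.trace(V.conj().T @ H_V @ V) + np.trace(V_E.conj().T @ H_VE @ V_E)

# Right-hand side of the combined identity, with V_X = V V^H + V_E V_E^H.
V_X = V @ V.conj().T + V_E @ V_E.conj().T
rhs = (np.trace(H_I @ V_X @ H_I.conj().T @ U_I @ W_I @ U_I.conj().T)
       + np.trace(H_E @ V_X @ H_E.conj().T @ W_X) / sigE2
       + np.trace(H_E @ V_E @ V_E.conj().T @ H_E.conj().T
                  @ U_E @ W_E @ U_E.conj().T))

assert np.allclose(lhs, rhs)
```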
By defining ${{\bf{V}}_X}=\left( {{\bf{V}}{{\bf{V}}^H} + {{\bf{V}}_E}{\bf{V}}_E^H} \right)$ and ${{\bf{M}}_I}={{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H$, {the first part} of \eqref{third_sixoffirstconst} can be derived as
\begin{align} \label{Part1_third_sixoffirstconst}
&{\rm{Tr}}\left[ {{{{\bf{\hat H}}}_I}\left( {{\bf{V}}{{\bf{V}}^H} + {{\bf{V}}_E}{\bf{V}}_E^H} \right){\bf{\hat H}}_I^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H} \right]\nonumber \\
&={\rm{Tr}}\left[ {{{{\bf{\hat H}}}_I}{{\bf{V}}_X}{\bf{\hat H}}_I^H{{\bf{M}}_I}} \right] \nonumber \\
&= {\rm{Tr}}\left[ {\left( {{{\bf{H}}_{b,I}} + {{\bf{H}}_{R,I}}{\bf{\Phi G}}} \right){{\bf{V}}_X}\left( {{\bf{H}}_{b,I}^H + {{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,I}^H} \right){{\bf{M}}_I}} \right]\nonumber \\
&= {\rm{Tr}}\left[ {\left( {{{\bf{H}}_{b,I}}{{\bf{V}}_X}{\bf{H}}_{b,I}^H + {{\bf{H}}_{b,I}}{{\bf{V}}_X}{{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,I}^H + {{\bf{H}}_{R,I}}{\bf{\Phi G}}{{\bf{V}}_X}{\bf{H}}_{b,I}^H + {{\bf{H}}_{R,I}}{\bf{\Phi G}}{{\bf{V}}_X}{{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,I}^H} \right){{\bf{M}}_I}} \right]\nonumber \\
&= {\rm{Tr}}[ {{\bf{H}}_{b,I}}{{\bf{V}}_X}{\bf{H}}_{b,I}^H{{\bf{M}}_I} + {{\bf{H}}_{b,I}}{{\bf{V}}_X}{{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,I}^H{{\bf{M}}_I} + {{\bf{H}}_{R,I}}{\bf{\Phi G}}{{\bf{V}}_X}{\bf{H}}_{b,I}^H{{\bf{M}}_I} \nonumber \\
&\quad+ {{\bf{H}}_{R,I}}{\bf{\Phi G}}{{\bf{V}}_X}{{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,I}^H{{\bf{M}}_I} ].
\end{align}
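The expansion in \eqref{Part1_third_sixoffirstconst} is simply the bilinear expansion of $\hat{\bf H}_I = {\bf H}_{b,I} + {\bf H}_{R,I}{\bf \Phi}{\bf G}$ inside the trace. As a quick numerical illustration (sizes and variable names are assumptions, not the paper's values), one can check it with random matrices and a random unit-modulus diagonal ${\bf \Phi}$:

```python
import numpy as np

rng = np.random.default_rng(3)
Nt, NI, M = 4, 3, 5  # illustrative sizes only

def crand(*s):
    return rng.standard_normal(s) + 1j * rng.standard_normal(s)

Hb = crand(NI, Nt)   # direct BS-IR channel H_{b,I}
HR = crand(NI, M)    # IRS-IR channel H_{R,I}
G = crand(M, Nt)     # BS-IRS channel
Phi = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, M)))  # unit-modulus phases
VX = crand(Nt, Nt)
VX = VX @ VX.conj().T          # Hermitian PSD stand-in for V_X
MI = crand(NI, NI)             # stand-in for M_I = U_I W_I U_I^H

Hhat = Hb + HR @ Phi @ G       # \hat H_I = H_{b,I} + H_{R,I} Phi G
lhs = np.trace(Hhat @ VX @ Hhat.conj().T @ MI)

# The four-term expansion in (Part1).
rhs = (np.trace(Hb @ VX @ Hb.conj().T @ MI)
       + np.trace(Hb @ VX @ G.conj().T @ Phi.conj().T @ HR.conj().T @ MI)
       + np.trace(HR @ Phi @ G @ VX @ Hb.conj().T @ MI)
       + np.trace(HR @ Phi @ G @ VX @ G.conj().T @ Phi.conj().T
                  @ HR.conj().T @ MI))

assert np.allclose(lhs, rhs)
```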
The derivation in \eqref{Part1_third_sixoffirstconst} carries over directly to the second and third parts of \eqref{third_sixoffirstconst}.
Accordingly, {the second part} of \eqref{third_sixoffirstconst} can be derived as
\begin{align} \label{Part2_third_sixoffirstconst}
&\sigma _E^{-2}{\rm{Tr}}\left[ {{{{\bf{\hat H}}}_E}\left( {{\bf{V}}{{\bf{V}}^H} + {{\bf{V}}_E}{\bf{V}}_E^H} \right){\bf{\hat H}}_E^H{{\bf{W}}_X}} \right] \nonumber \\
&=\sigma _E^{-2}{\rm{Tr}}[ {{\bf{H}}_{b,E}}{{\bf{V}}_X}{\bf{H}}_{b,E}^H{{\bf{W}}_X}+{{\bf{H}}_{b,E}}{{\bf{V}}_X}{{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,E}^H{{\bf{W}}_X}+ {{\bf{H}}_{R,E}}{\bf{\Phi G}}{{\bf{V}}_X}{\bf{H}}_{b,E}^H{{\bf{W}}_X}\nonumber \\
&\quad+ {{\bf{H}}_{R,E}}{\bf{\Phi G}}{{\bf{V}}_X}{{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,E}^H{{\bf{W}}_X}].
\end{align}
Similarly, by defining ${{\bf{M}}_E}={{\bf{U}}_E}{{\bf{W}}_E}{\bf{U}}_E^H$, {the third part} of \eqref{third_sixoffirstconst} can be derived as
\begin{align}\label{Part3_third_sixoffirstconst}
&{\rm{Tr}}\left[ {{{{\bf{\hat H}}}_E}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{\hat H}}_E^H{{\bf{U}}_E}{{\bf{W}}_E}{\bf{U}}_E^H} \right]\nonumber \\
&={\rm{Tr}}\left[ {{{{\bf{\hat H}}}_E}\left( {{{\bf{V}}_E}{\bf{V}}_E^H} \right){\bf{\hat H}}_E^H{{\bf{M}}_E}} \right] \nonumber \\
&= {\rm{Tr}}[ {{\bf{H}}_{b,E}}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{H}}_{b,E}^H{{\bf{M}}_E}+ {{\bf{H}}_{b,E}}{{\bf{V}}_E}{\bf{V}}_E^H{{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,E}^H{{\bf{M}}_E}+ {{\bf{H}}_{R,E}}{\bf{\Phi G}}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{H}}_{b,E}^H{{\bf{M}}_E} \nonumber \\
&\quad + {{\bf{H}}_{R,E}}{\bf{\Phi G}}{{\bf{V}}_E}{\bf{V}}_E^H{{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,E}^H{{\bf{M}}_E}].
\end{align}
By adding \eqref{Part1_third_sixoffirstconst}, \eqref{Part2_third_sixoffirstconst} and \eqref{Part3_third_sixoffirstconst}, and gathering the constant terms that are independent of ${\bf{\Phi}}$, Equation \eqref{third_sixoffirstconst} becomes
\begin{align}\label{third_sixoffirstconst_form2}
&{\rm{Tr}}\left( {{{\bf{V}}^H}{{\bf{H}}_V}{\bf{V}}} \right) + {\rm{Tr}}\left( {{\bf{V}}_E^H{{\bf{H}}_{VE}}{{\bf{V}}_E}} \right) \nonumber \\
&= {\rm{Tr}}\left[ {{{\bf{\Phi }}^H}\left( {{\bf{H}}_{R,I}^H{{\bf{M}}_I}{{\bf{H}}_{b,I}}{{\bf{V}}_X}{{\bf{G}}^H} + \sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{b,E}}{{\bf{V}}_X}{{\bf{G}}^H} + {\bf{H}}_{R,E}^H{{\bf{M}}_E}{{\bf{H}}_{b,E}}{{\bf{V}}_E}{\bf{V}}_E^H{{\bf{G}}^H}} \right)} \right] \nonumber \\
&\quad+{\rm{Tr}}\left[ {{\bf{\Phi }}\left( {{\bf{G}}{{\bf{V}}_X}{\bf{H}}_{b,I}^H{{\bf{M}}_I}{{\bf{H}}_{R,I}} + \sigma _E^{-2} {\bf{G}}{{\bf{V}}_X}{\bf{H}}_{b,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}} + {\bf{G}}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{H}}_{b,E}^H{{\bf{M}}_E}{{\bf{H}}_{R,E}}} \right)} \right] \nonumber \\
& \quad + {\rm{Tr}}\left[ {{\bf{\Phi G}}{{\bf{V}}_X}{{\bf{G}}^H}{{\bf{\Phi }}^H}\left( {{\bf{H}}_{R,I}^H{{\bf{M}}_I}{{\bf{H}}_{R,I}} + \sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}}} \right)} \right] \nonumber \\
&\quad+{\rm{Tr}}\left[ {{\bf{\Phi G}}{{\bf{V}}_E}{\bf{V}}_E^H{{\bf{G}}^H}{{\bf{\Phi }}^H}{\bf{H}}_{R,E}^H{{\bf{M}}_E}{{\bf{H}}_{R,E}}} \right]+C_{{t}_1},
\end{align}
where
\begin{align} \label{Ct1}
C_{{t}_1}={\rm{Tr}}\left[ {{\bf{H}}_{b,I}}{{\bf{V}}_X}{\bf{H}}_{b,I}^H{{\bf{M}}_I}\right]+\sigma _E^{-2}{\rm{Tr}}\left[{{\bf{H}}_{b,E}}{{\bf{V}}_X}{\bf{H}}_{b,E}^H{{\bf{W}}_X}\right]+{\rm{Tr}}\left[ {{\bf{H}}_{b,E}}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{H}}_{b,E}^H{{\bf{M}}_E}\right].
\end{align}
The first term of ${g_{0}}(\mathbf{V},{{\mathbf{V}}_{E}},\mathbf{\Phi })$ is derived as
\begin{align} \label{firstoffirstconst}
{\rm{Tr}}\left( {{{\bf{W}}_I}{{\bf{V}}^H}{\bf{\hat H}}_I^H{{\bf{U}}_I}} \right) \!=\! {\rm{Tr}}\left( {{{\bf{U}}_I}{\bf{W}}_I^H{{\bf{V}}^H}{\bf{\hat H}}_I^H} \right)\! = \!\underbrace { {\rm{Tr}}\left[ {{{\bf{U}}_I}{\bf{W}}_I^H{{\bf{V}}^H}{\bf{H}}_{b,I}^H} \right]}_{C_{{t}_2}(\text{constant for }\mathbf{\Phi})}\! +\! {\rm{Tr}}\left[ {{\bf{H}}_{R,I}^H{{\bf{U}}_I}{\bf{W}}_I^H{{\bf{V}}^H}{{\bf{G}}^H}{{\bf{\Phi }}^H}} \right].
\end{align}
The second term of ${g_{0}}(\mathbf{V},{{\mathbf{V}}_{E}},\mathbf{\Phi })$ is derived as
\begin{align} \label{secondoffirstconst}
&{\rm{Tr}}\left( {{{\bf{W}}_I}{\bf{U}}_I^H{{{\bf{\hat H}}}_I}{\bf{V}}} \right) = {\rm{Tr}}\left( {{{{\bf{\hat H}}}_I}{\bf{V}}{{\bf{W}}_I}{\bf{U}}_I^H} \right) = {\rm{Tr}}\left[ {\left( {{{\bf{H}}_{b,I}} + {{\bf{H}}_{R,I}}{\bf{\Phi G}}} \right){\bf{V}}{{\bf{W}}_I}{\bf{U}}_I^H} \right] \nonumber \\
&=\underbrace { {\rm{Tr}}\left[ {{{\bf{H}}_{b,I}}{\bf{V}}{{\bf{W}}_I}{\bf{U}}_I^H} \right]}_{C_{{t}_3}(\text{constant for }\mathbf{\Phi})} + {\rm{Tr}}\left[ {{\bf{\Phi GV}}{{\bf{W}}_I}{\bf{U}}_I^H{{\bf{H}}_{R,I}}} \right].
\end{align}
The fourth term of ${g_{0}}(\mathbf{V},{{\mathbf{V}}_{E}},\mathbf{\Phi })$ is derived as
\begin{align} \label{fourthoffirstconst}
&{\rm{Tr}}\left( {{{\bf{W}}_E}{\bf{V}}_E^H{\bf{\hat H}}_E^H{{\bf{U}}_E}} \right)={\rm{Tr}}\left( {{{\bf{U}}_E}{{\bf{W}}_E^H}{\bf{V}}_E^H{\bf{\hat H}}_E^H} \right) \nonumber \\
& =\underbrace { {\rm{Tr}}\left[ {{{\bf{U}}_E}{\bf{W}}_E^H{\bf{V}}_E^H{\bf{H}}_{b,E}^H} \right]}_{C_{{t}_4}(\text{constant for }\mathbf{\Phi})} + {\rm{Tr}}\left[ {{\bf{H}}_{R,E}^H{{\bf{U}}_E}{\bf{W}}_E^H{\bf{V}}_E^H{{\bf{G}}^H}{{\bf{\Phi }}^H}} \right].
\end{align}
The fifth term of ${g_{0}}(\mathbf{V},{{\mathbf{V}}_{E}},\mathbf{\Phi })$ is derived as
\begin{align} \label{fifthoffirstconst}
&{\rm{Tr}}\left( {{{\bf{W}}_E}{\bf{U}}_E^H{{{\bf{\hat H}}}_E}{{\bf{V}}_E}} \right) = {\rm{Tr}}\left( {{{{\bf{\hat H}}}_E}{{\bf{V}}_E}{{\bf{W}}_E}{\bf{U}}_E^H} \right) \nonumber \\
&= \underbrace { {\rm{Tr}}\left[ {{{\bf{H}}_{b,E}}{{\bf{V}}_E}{{\bf{W}}_E}{\bf{U}}_E^H} \right]}_{C_{{t}_5}(\text{constant for }\mathbf{\Phi})} + {\rm{Tr}}\left[ {{\bf{\Phi G}}{{\bf{V}}_E}{{\bf{W}}_E}{\bf{U}}_E^H{{\bf{H}}_{R,E}}} \right].
\end{align}
By combining the first term in \eqref{firstoffirstconst}, the second term in \eqref{secondoffirstconst}, the fourth term in \eqref{fourthoffirstconst}, the fifth term in \eqref{fifthoffirstconst}, and the sum of the third and sixth terms in \eqref{third_sixoffirstconst_form2} of ${g_{0}}(\mathbf{V},{{\mathbf{V}}_{E}},\mathbf{\Phi })$, and gathering the constant terms that are independent of ${\bf{\Phi}}$, we have
\begin{align}
&{g_{0}}(\mathbf{\Phi }) = -\rm{Equation \ }\eqref{firstoffirstconst}-\rm{Equation \ }\eqref{secondoffirstconst}-\rm{Equation \ }\eqref{fourthoffirstconst}-\rm{Equation \ }\eqref{fifthoffirstconst}+\rm{Equation \ }\eqref{third_sixoffirstconst_form2} \nonumber \\
&={\rm{Tr}}\left[ {{{\bf{\Phi }}^H}\left( \begin{array}{l}
{\bf{H}}_{R,I}^H{{\bf{M}}_I}{{\bf{H}}_{b,I}}{{\bf{V}}_X}{{\bf{G}}^H} +\sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{b,E}}{{\bf{V}}_X}{{\bf{G}}^H} + {\bf{H}}_{R,E}^H{{\bf{M}}_E}{{\bf{H}}_{b,E}}{{\bf{V}}_E}{\bf{V}}_E^H{{\bf{G}}^H} \nonumber \\
-{\bf{H}}_{R,I}^H{{\bf{U}}_I}{\bf{W}}_I^H{{\bf{V}}^H}{{\bf{G}}^H} - {\bf{H}}_{R,E}^H{{\bf{U}}_E}{\bf{W}}_E^H{\bf{V}}_E^H{{\bf{G}}^H}
\end{array} \right)} \right]\nonumber \\
&\quad + {\rm{Tr}}\left[ {{\bf{\Phi }}\left( \begin{array}{l}
{\bf{G}}{{\bf{V}}_X}{\bf{H}}_{b,I}^H{{\bf{M}}_I}{{\bf{H}}_{R,I}} +\sigma _E^{-2} {\bf{G}}{{\bf{V}}_X}{\bf{H}}_{b,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}} + {\bf{G}}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{H}}_{b,E}^H{{\bf{M}}_E}{{\bf{H}}_{R,E}}\nonumber \\
- {\bf{GV}}{{\bf{W}}_I}{\bf{U}}_I^H{{\bf{H}}_{R,I}} - {\bf{G}}{{\bf{V}}_E}{{\bf{W}}_E}{\bf{U}}_E^H{{\bf{H}}_{R,E}}
\end{array} \right)} \right]\nonumber \\
&\quad + {\rm{Tr}}\left[ {{\bf{\Phi G}}{{\bf{V}}_E}{\bf{V}}_E^H{{\bf{G}}^H}{{\bf{\Phi }}^H}\left( {{\bf{H}}_{R,I}^H{{\bf{M}}_I}{{\bf{H}}_{R,I}} + \sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}} + {\bf{H}}_{R,E}^H{{\bf{M}}_E}{{\bf{H}}_{R,E}}} \right)} \right] \nonumber \\
&\quad + {\rm{Tr}}\left[ {{\bf{\Phi GV}}{{\bf{V}}^H}{{\bf{G}}^H}{{\bf{\Phi }}^H}\left( {{\bf{H}}_{R,I}^H{{\bf{M}}_I}{{\bf{H}}_{R,I}} + \sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}}} \right)} \right]+C_t\nonumber \\
& ={\rm{Tr}}\left[ {{{\bf{\Phi }}^H}\left( \begin{array}{l}
{\bf{H}}_{R,I}^H{{\bf{M}}_I}{{\bf{H}}_{b,I}}{{\bf{V}}_X}{{\bf{G}}^H} +\sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{b,E}}{{\bf{V}}_X}{{\bf{G}}^H} + {\bf{H}}_{R,E}^H{{\bf{M}}_E}{{\bf{H}}_{b,E}}{{\bf{V}}_E}{\bf{V}}_E^H{{\bf{G}}^H} \nonumber \\
-{\bf{H}}_{R,I}^H{{\bf{U}}_I}{\bf{W}}_I^H{{\bf{V}}^H}{{\bf{G}}^H} - {\bf{H}}_{R,E}^H{{\bf{U}}_E}{\bf{W}}_E^H{\bf{V}}_E^H{{\bf{G}}^H}
\end{array} \right)} \right]\nonumber \\
& \quad+ {\rm{Tr}}\left[ {{\bf{\Phi }}\left( \begin{array}{l}
{\bf{G}}{{\bf{V}}_X}{\bf{H}}_{b,I}^H{{\bf{M}}_I}{{\bf{H}}_{R,I}} +\sigma _E^{-2} {\bf{G}}{{\bf{V}}_X}{\bf{H}}_{b,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}} + {\bf{G}}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{H}}_{b,E}^H{{\bf{M}}_E}{{\bf{H}}_{R,E}}\nonumber \\
- {\bf{GV}}{{\bf{W}}_I}{\bf{U}}_I^H{{\bf{H}}_{R,I}} - {\bf{G}}{{\bf{V}}_E}{{\bf{W}}_E}{\bf{U}}_E^H{{\bf{H}}_{R,E}}
\end{array} \right)} \right]\nonumber \\
& \quad+ {\rm{Tr}}\left[ {{\bf{\Phi G}}{{\bf{V}}_E}{\bf{V}}_E^H{{\bf{G}}^H}{{\bf{\Phi }}^H}\left( {{\bf{H}}_{R,I}^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H{{\bf{H}}_{R,I}} +\sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}} + {\bf{H}}_{R,E}^H{{\bf{U}}_E}{{\bf{W}}_E}{\bf{U}}_E^H{{\bf{H}}_{R,E}}} \right)} \right]\nonumber \\
&\quad+ {\rm{Tr}}\left[ {{\bf{\Phi GV}}{{\bf{V}}^H}{{\bf{G}}^H}{{\bf{\Phi }}^H}\left( {{\bf{H}}_{R,I}^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H{{\bf{H}}_{R,I}} +\sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}}} \right)} \right]+C_t,
\end{align}
where
\begin{align}
C_t=C_{{t}_1}+C_{{t}_2}+C_{{t}_3}+C_{{t}_4}+C_{{t}_5}.
\end{align}
Then ${g_{0}}(\mathbf{\Phi })$ becomes
\begin{align}
{g_{0}}(\mathbf{\Phi })&={\rm{Tr}}\left( {{{\bf{\Phi}}^H{\bf{D}}^H}} \right) + {\rm{Tr}}\left( {{\bf{\Phi D}}} \right) + {\rm{Tr}}\left[ {{\bf{\Phi }}{{\bf{C}}_{VE}}{{\bf{\Phi }}^H}{{\bf{B}}_{VE}}} \right] + {\rm{Tr}}\left( {{\bf{\Phi }}{{\bf{C}}_V}{{\bf{\Phi }}^H}{{\bf{B}}_V}} \right)+C_t \nonumber \\
&={\rm{Tr}}\left( {{{\bf{\Phi}}^H{\bf{D}}^H}} \right) + {\rm{Tr}}\left( {{\bf{\Phi D}}} \right) + {\rm{Tr}}\left[ {{{\bf{\Phi }}^H}{{\bf{B}}_{VE}}{\bf{\Phi }}{{\bf{C}}_{VE}}} \right] + {\rm{Tr}}\left( {{{\bf{\Phi }}^H}{{\bf{B}}_V}{\bf{\Phi }}{{\bf{C}}_V}} \right)+C_t,\label{eq82t}
\end{align}
where
\begin{subequations}
\begin{align}
{\bf{D}}&={\bf{G}}{{\bf{V}}_X}{\bf{H}}_{b,I}^H{{\bf{M}}_I}{{\bf{H}}_{R,I}} + \sigma _E^{-2} {\bf{G}}{{\bf{V}}_X}{\bf{H}}_{b,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}} + {\bf{G}}{{\bf{V}}_E}{\bf{V}}_E^H{\bf{H}}_{b,E}^H{{\bf{M}}_E}{{\bf{H}}_{R,E}}\nonumber \\
&\quad - {\bf{GV}}{{\bf{W}}_I}{\bf{U}}_I^H{{\bf{H}}_{R,I}} - {\bf{G}}{{\bf{V}}_E}{{\bf{W}}_E}{\bf{U}}_E^H{{\bf{H}}_{R,E}}, \label{eq83ta} \\
{{\bf{C}}_{VE}}&={\bf{G}}{{\bf{V}}_E}{\bf{V}}_E^H{{\bf{G}}^H}, \label{eq83tb} \\
{{\bf{C}}_{V}}&={\bf{G}}{{\bf{V}}}{\bf{V}}^H{{\bf{G}}^H}, \label{eq83tc} \\
{{\bf{B}}_{VE}}&=\left( {{\bf{H}}_{R,I}^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H{{\bf{H}}_{R,I}} + \sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}} + {\bf{H}}_{R,E}^H{{\bf{U}}_E}{{\bf{W}}_E}{\bf{U}}_E^H{{\bf{H}}_{R,E}}} \right), \label{eq83td} \\
{{\bf{B}}_{V}}&=\left( {{\bf{H}}_{R,I}^H{{\bf{U}}_I}{{\bf{W}}_I}{\bf{U}}_I^H{{\bf{H}}_{R,I}} + \sigma _E^{-2} {\bf{H}}_{R,E}^H{{\bf{W}}_X}{{\bf{H}}_{R,E}}} \right). \label{eq83te}
\end{align} \label{eq83t}
\end{subequations}
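The rearrangement between the two lines of \eqref{eq82t} uses ${\rm Tr}({\bf \Phi}{\bf C}{\bf \Phi}^H{\bf B}) = {\rm Tr}({\bf \Phi}^H{\bf B}{\bf \Phi}{\bf C})$, again a direct consequence of trace cyclicity. A minimal numerical check (the size $M$ is an illustrative assumption):

```python
import numpy as np

rng = np.random.default_rng(1)
M = 6  # illustrative number of IRS elements

# Unit-modulus diagonal phase-shift matrix Phi = diag(e^{j theta_m}).
Phi = np.diag(np.exp(1j * rng.uniform(0, 2 * np.pi, M)))
B = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))
C = rng.standard_normal((M, M)) + 1j * rng.standard_normal((M, M))

# Tr(Phi C Phi^H B) = Tr(Phi^H B Phi C) by the cyclic property of the trace.
t1 = np.trace(Phi @ C @ Phi.conj().T @ B)
t2 = np.trace(Phi.conj().T @ B @ Phi @ C)
assert np.allclose(t1, t2)
```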
\end{appendices}
\
\
\vspace{-0.5cm}
\bibliographystyle{IEEEtran}
\section{Introduction}
Markov chain Monte Carlo (MCMC) has become a widely used approach
for simulation from an arbitrary distribution of interest, typically
a Bayesian posterior distribution, known as the target distribution.
MCMC really represents a family of sampling methods. Generally speaking,
any new sampler that can be shown to preserve the ergodicity of the
Markov chain such that it converges to the target distribution is
a member of the family and can be combined with other samplers as
part of a valid MCMC kernel. The key to MCMC's success is its
simplicity and applicability. In practice, however, it sometimes needs
a lot of non-trivial tuning to work well \citep{haario2005componentwise}.
To deal with this problem, many adaptive MCMC algorithms have been
proposed \citep{gilks1998adaptive,haario2001adaptive,andrieu2001controlled,sahu2003self}.
These allow parameters of the MCMC kernel to be automatically
tuned based on previous samples. This breaks the Markovian property
of the chain so has required special schemes and proofs that the resulting
chain will converge to the target distribution \citep{andrieu2003ergodicity,atchade2005adaptive,andrieu2006efficiency}.
Under some weaker and easily verifiable conditions, namely ``diminishing
adaptation'' and ``containment'', \citet{rosenthal2007coupling}
proved ergodicity of adaptive MCMC and proposed many useful samplers.
It is important to realize, however, that such adaptive MCMC samplers
address only a small aspect of a much larger problem. A typical adaptive MCMC
sampler will approximately optimize performance
given the kind of sampler chosen in the first place, but it will not
optimize among the variety of samplers that could have been chosen.
For example, an adaptive random walk Metropolis-Hastings sampler will
adapt the scale of its proposal distribution, but that adaptation
won't reveal whether an altogether different kind of sampler would
be more efficient. In many cases it would, and the exploration of
different sampling strategies often remains a human-driven trial-and-error
affair.
Here we present a method for a higher level of MCMC adaptation. The
adaptation explores a potentially large space of valid MCMC kernels
composed of different samplers. One starts with an arbitrary set of
candidate samplers for each dimension or block of dimensions in the
target distribution. The main idea is to iteratively try different
candidates that compose a valid MCMC kernel, run them for a relatively
short time, generate the next set of candidates based on the results
thus far, and so on. Since relative performance of different samplers
is specific to each model and even to each computing environment,
it is doubtful whether there is a universally optimal kind of sampler.
Hence we view the choice of efficient samplers for a particular problem
as well-suited to empirical determination via computation.
The goal of computationally exploring valid sampler combinations in
search of an efficient model-specific MCMC kernel raises a number
of challenges. First, one must prove that the samples collected as
the algorithm proceeds indeed converge to the target distribution,
even when some of the candidate samplers are internally adaptive,
such as conventional adaptive random walk samplers. We provide such
a proof for a general framework.
Second, one must determine efficient methods for exploring the very
large, discrete space of valid sampler combinations. This is complicated
by a combinatorial explosion, which is exacerbated by the fact that
any multivariate samplers can potentially be used for arbitrary blocks
of model dimensions. Here we take a practical approach to this problem,
setting as our goal only to show basic schemes that can yield substantial
improvements in useful time frames. Future work can aim to develop improvements within
the general framework presented here. We also limit ourselves to relatively simple candidate samplers,
but the framework can accommodate many more choices.
Third, one must determine how to measure the efficiency of a particular
MCMC kernel for each dimension and for the entire model, in order to
have a metric to seek to optimize. As a first step, it is vital to
realize that there can be a tradeoff between good mixing and computational
speed. When considering adaptation within one kind of sampler, say
adaptive random walk, one can roughly assume that computational cost
does not depend on the proposal scale, and hence mixing measured by
integrated autocorrelation time, or the related effective sample size,
is a sensible measure of efficiency. But when comparing two samplers
with very different computational costs, say adaptive random walk
and slice samplers, good mixing may or may not be worth its computational
cost. Random walk samplers may mix more slowly than slice samplers
on a per iteration basis, but they do so at higher computational speed
because slice samplers can require many evaluations of model density functions. Thus
the greater number of random walk iterations per
unit time could outperform the slice sampler. An additional issue
is that different dimensions of the model may mix at different rates,
and often the slowest-mixing dimensions limit the validity of all
results \citep{turek2016automated}. In view of these considerations, we define MCMC efficiency
as the effective sample size per computation time and use that as
a metric of performance per dimension. Performance of an MCMC kernel
across all dimensions is defined as the minimum efficiency among all
dimensions.
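To make this metric concrete, the following sketch (an illustration, not part of the original text) estimates the effective sample size of each dimension via a simple truncated-autocorrelation rule and reports ESS per unit computation time, taking the minimum across dimensions. The AR(1) toy chains and the crude ESS estimator are illustrative assumptions; production code would use a more careful estimator.

```python
import numpy as np

def ess(x):
    """Crude effective sample size: n / tau, with tau from a truncated
    sum of sample autocorrelations (stop at first non-positive lag)."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    x = x - x.mean()
    acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
    tau = 1.0
    for k in range(1, n):
        if acf[k] <= 0:
            break
        tau += 2.0 * acf[k]
    return n / tau

def mcmc_efficiency(samples, runtime_seconds):
    """Per-dimension ESS / computation time, and the minimum across
    dimensions (the overall efficiency of the kernel)."""
    per_dim = [ess(samples[:, j]) / runtime_seconds
               for j in range(samples.shape[1])]
    return min(per_dim), per_dim

# Toy example: two AR(1) chains with different mixing rates.
rng = np.random.default_rng(0)
n = 5000
chains = np.empty((n, 2))
chains[0] = 0.0
for t in range(1, n):
    chains[t, 0] = 0.5 * chains[t - 1, 0] + rng.standard_normal()   # fast mixing
    chains[t, 1] = 0.95 * chains[t - 1, 1] + rng.standard_normal()  # slow mixing

worst, per_dim = mcmc_efficiency(chains, runtime_seconds=1.0)
assert per_dim[1] < per_dim[0]  # the slow dimension limits overall efficiency
```

With two kernels of different computational cost, one would divide by each kernel's actual runtime rather than a fixed value, which is exactly where the mixing-versus-speed tradeoff described above enters.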
The rest of the paper is organized as follows. Section \ref{theory} begins with a general theoretical framework
for Auto Adapt MCMC, then extends these methods to a specific Auto Adapt algorithm
involving block MCMC updating. Section \ref{method} presents an example algorithm
that fits within the framework, and provides some explanations on its
details. Section \ref{example} then outlines some numerical examples comparing
the example algorithm with existing algorithms for a variety of benchmark
models. Finally, section \ref{discussion} concludes and discusses some future research
directions.
\section{A general Auto Adapt MCMC}\label{theory}
In this section, we present a general Auto Adapt MCMC algorithm and
give theoretical results establishing its correctness.
Let $\mathbf{\mathcal{X}}$ be a state space and $\pi$ the probability
distribution on $\mathbf{\mathcal{X}}$ that we wish to sample from.
Let $\mathbf{\mathcal{I}}$ be a countable set (this set indexes the
discrete set of MCMC kernels we wish to choose from). For $\iota\in\mathbf{\mathcal{I}}$,
let $\Theta_{\iota}$ be a parameter space (for all practical purposes,
we can assume that this is some subset of some Euclidean space $\mathbb{R}^{m}$).
For $\iota\in\mathbf{\mathcal{I}}$ and $\theta\in\Theta_{\iota}$,
let $P_{\iota,\theta}$ denote a Markov kernel on $\mathbf{\mathcal{X}}$
with invariant distribution $\pi$. We set ${\displaystyle \bar{\Theta}=\bigcup_{\iota\in\mathbf{\mathcal{I}}}\{\iota\}\times\Theta_{\iota}}$
the adaptive MCMC parameter space. We want to build a stochastic process
(an adaptive Markov chain) $\{(X_{n},\ \iota_{n},\ \theta_{n}),\ n\ \geq0\}$
on $\mathbf{\mathcal{X}}\times\bar{\Theta}$ such that as $n\rightarrow\infty$,
the distribution of $X_{n}$ converges to $\pi$, and a law of large
numbers holds. We call $\iota$ the external adaptation parameter
and $\theta$ the internal adaptation parameter.
We will follow the general adaptive MCMC recipe of \citet{roberts2009examples}.
Assume that any internal adaptation on $\Theta_{\iota}$ is
done using a function $H_{\iota}$ : $\Theta_{\iota}\times\mathbf{\mathcal{X}}\rightarrow\Theta_{\iota}$,
and an ``internal clock'' sequence $\{\gamma_{n},\ n\ \geq0\}$
such that $\lim_{n\rightarrow\infty}\gamma_{n}=0$. The function $H_{\iota}$
updates the internal adaptation parameters, with step sizes governed by the sequence $\{\gamma_{n}\}$.
Also, let $\{p_{k},\ k\geq1\}$
be a sequence of numbers $p_{k}\in(0,1)$ such that $\lim_{k\rightarrow\infty}p_{k}=0$.
$p_{k}$ will be the probability of performing external adaptation
at external iteration $k$. During the algorithm
we will also keep track of two variables: $\kappa_{n}$, the number
of external adaptations performed up to step $n$; and $\tau_{n}$,
the number of iterations between $n$ and the last time an external
adaptation is performed. These two variables are used to manage the
internal clock based on external iterations, which in most situations
can simply be the number of adaptation steps. We build the stochastic
process $\{(X_{n},\ \iota_{n},\ \theta_{n}),\ n\ \geq0\}$ on $\mathbf{\mathcal{X}}\times\bar{\Theta}$
as follows.
\begin{enumerate}
\item We start with $\kappa_{0}=\tau_{0}=0$. We start also with some $X_{0}\in\mathbf{\mathcal{X}},\ \iota_{0}\in\mathbf{\mathcal{I}}$,
and $\theta_{0}\in\Theta_{\iota_{0}}$.
\item At the $n$-th iteration, given $\mathcal{F}_{n}\overset{def}{=}\sigma\{(X_{k},\ \iota_{k},\ \theta_{k}),\ k\leq n\}$,
and given $\kappa_{n},\ \tau_{n}$:
\begin{enumerate}
\item Draw $X_{n+1}\sim P_{\iota_{n},\theta_{n}}(X_{n},\ \cdot)$.
\item Independently of $\mathcal{F}_{n}$ and $X_{n+1}$, draw $B_{n+1}\sim$
Bern$(p_{n+1})\in\{0,1\}$.
\begin{enumerate}
\item If $B_{n+1}=0$, there is no external adaptation: $\iota_{n+1}=\iota_{n}$.
We update $\kappa_{n}$ and $\tau_{n}$:
\begin{equation}
\kappa_{n+1}=\kappa_{n},\ \tau_{n+1}=\tau_{n}+1.\label{eq:1}
\end{equation}
Then we perform an internal adaptation: set $c_{n+1}=\kappa_{n+1}+\tau_{n+1}$,
and compute
\begin{equation}
\theta_{n+1}=\theta_{n}+\gamma_{c_{n+1}}H_{\iota_{n}}(\theta_{n},\ X_{n+1}).\label{eq:2}
\end{equation}
Note that the internal adaptation interval could vary between iterations.
\item If $B_{n+1}=1$, then we do an external adaptation: we choose a new
$\iota_{n+1}$. And we choose a new value $\theta_{n+1}\in\Theta_{\iota_{n+1}}$
based on $\mathcal{F}_{n}$ and $X_{n+1}$. Then we update $\kappa_{n}$
and $\tau_{n}$.
\begin{equation}
\kappa_{n+1}=\kappa_{n}+1,\ \tau_{n+1}=0.
\end{equation}
\end{enumerate}
\end{enumerate}
\end{enumerate}
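The construction above can be made concrete with a short sketch. The following Python code runs the stochastic process of steps 1--2 on an assumed toy target (a standard normal), with two random-walk kernel families playing the role of $\mathcal{I}$; the target, the sequences $p_{n}=n^{-0.7}$ and $\gamma_{c}=(1+c)^{-0.6}$, and the $0.44$ acceptance-rate target inside $H_{\iota}$ are illustrative assumptions, not prescribed by the theory.

```python
import math
import random

# Toy target: standard normal log-density (an assumed example).
def log_pi(x):
    return -0.5 * x * x

# Two candidate kernel families indexed by iota: random-walk Metropolis
# with normal (iota = 0) or uniform (iota = 1) proposals; theta is the
# log proposal scale, i.e. the internal adaptation parameter.
def rwm_step(x, iota, theta, rng):
    scale = math.exp(theta)
    step = rng.gauss(0.0, scale) if iota == 0 else rng.uniform(-scale, scale)
    y = x + step
    accepted = rng.random() < math.exp(min(0.0, log_pi(y) - log_pi(x)))
    return (y if accepted else x), (1.0 if accepted else 0.0)

def auto_adapt(n_iters, seed=1):
    rng = random.Random(seed)
    x, iota, theta = 0.0, 0, 0.0
    kappa, tau = 0, 0             # external adaptations; time since the last one
    stored = {0: 0.0, 1: 0.0}     # per-kernel internal parameters
    chain = []
    for n in range(1, n_iters + 1):
        x, acc = rwm_step(x, iota, theta, rng)   # X_{n+1} ~ P_{iota_n, theta_n}
        if rng.random() < 1.0 / n ** 0.7:        # B_{n+1} ~ Bern(p_{n+1}), p_n -> 0
            kappa, tau = kappa + 1, 0            # external adaptation
            stored[iota] = theta
            iota = 1 - iota                      # choose a new kernel family
            theta = stored[iota]                 # restore its stored parameters
        else:
            tau += 1
            c = kappa + tau                      # internal clock c_{n+1}
            gamma = 1.0 / (1 + c) ** 0.6         # gamma_c -> 0
            theta += gamma * (acc - 0.44)        # H_iota: steer the acceptance rate
        chain.append(x)
    return chain

chain = auto_adapt(20000)
```

Because the stored parameters are restored whenever a kernel family is revisited, the internal adaptation of each family keeps shrinking, which is what the diminishing adaptation argument below relies on.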
For this Auto Adapt MCMC algorithm to be valid we must show that it
satisfies three assumptions:
\begin{enumerate}
\item For each $(\iota,\ \theta)\in\bar{\Theta},\ P_{\iota,\theta}$ has
invariant distribution $\pi$.
\item (diminishing adaptation):
\[
\triangle_{n+1}\overset{_{def}}{=}\sup_{x\in\mathcal{X}}\Vert P_{\iota_{n},\theta_{n}}(x,\ \cdot)-P_{\iota_{n+1},\theta_{n+1}}(x,\ \cdot)\Vert_{\mathrm{TV}}
\]
converges in probability to zero, as $n\rightarrow\infty$,
\item (containment): For all $\epsilon>0$, the sequence $\{M_{\epsilon}(\iota_{n},\ \theta_{n},\ X_{n})\}$
is bounded in probability, where
\[
M_{\epsilon}(\iota,\ \theta,\ x)\overset{_{def}}{=}\inf\{n\ \geq1:\Vert P_{\iota,\theta}^{n}(x,\ \cdot)-\pi\Vert_{\mathrm{TV}}\leq\epsilon\}.
\]
\end{enumerate}
\begin{remark} Here the first assumption holds by construction. We
will show that, by design, our Auto Adapt algorithm satisfies
diminishing adaptation.\end{remark}
For $\iota\in\mathbf{\mathcal{I}},\ \theta,\ \theta'\in\Theta_{\iota}$,
define
\[
D_{\iota}(\theta,\ \theta')\ \overset{_{def}}{=}\ \sup_{x\in\mathbf{\mathcal{X}}}\Vert P_{\iota,\theta}(x,\ \cdot)-P_{\iota,\theta'}(x,\ \cdot)\Vert_{\mathrm{TV}}.
\]
\begin{proposition} Suppose that $\mathbf{\mathcal{I}}$ is finite,
and for any $\iota\in\mathcal{\mathbf{\mathcal{I}}}$, the adaptation
function $H_{\iota}$ is bounded, and there exists $C<\infty$ such
that
\[
D_{\iota}(\theta,\ \theta')\leq C\Vert\theta-\theta'\Vert.
\]
Then the diminishing adaptation holds. \end{proposition} \begin{proof}
We have
\begin{eqnarray*}
\mathrm{E}(\triangle_{n+1}) & = & p_{n+1}\mathrm{E}(\triangle_{n+1}|B_{n+1}=1)+(1-p_{n+1})\mathrm{E}(\triangle_{n+1}|B_{n+1}=0),\\
& \leq & 2p_{n+1}+\mathrm{E}(\triangle_{n+1}|B_{n+1}=0),\\
& = & 2p_{n+1}+\mathrm{E}\ [D_{\iota_{n}}(\theta_{n},\ \theta_{n+1})],\\
& \leq & 2p_{n+1}+C\mathrm{E}\ [\Vert\theta_{n+1}-\theta_{n}\Vert],\\
& \leq & 2p_{n+1}+C_{1}\gamma_{c_{n+1}},
\end{eqnarray*}
where $c_{n+1}=\kappa_{n+1}+\tau_{n+1}$. It is easy to see that $c_{n}\rightarrow\infty$
as $n\rightarrow\infty$. The result follows since ${\displaystyle \lim_{n\rightarrow\infty}p_{n}=\lim_{n\rightarrow\infty}\gamma_{n}=0.}$
\end{proof}
In general, the containment condition is more challenging than the
diminishing adaptation condition. This technical assumption can be
harder to verify and may not even be necessary in some settings \citep{rosenthal2007coupling}.
Nevertheless, we rely on simultaneous uniform ergodicity, a sufficient
condition for containment, to simplify the theory and to concentrate on
designing an efficient algorithm.
\begin{definition}
The family $\{P_{\iota,\theta}:(\iota,\theta)\in{\bar{\Theta}}\}$
has simultaneous uniform ergodicity (SUE) if for all $\epsilon>0$,
there is $N=N(\epsilon)\in\mathrm{\mathbb{N}}$ so
that $\Vert P_{\iota,\theta}^{N}(x,\ \cdot)-\pi(\cdot)\Vert_{\mathrm{TV}}\leq\epsilon$
for all $x\in\mathcal{X}$ and $(\iota,\theta)\in {\bar{\Theta}}.$
\end{definition}
\begin{proposition} (Theorem 1 of \citet{rosenthal2007coupling}). SUE implies containment.
\end{proposition}
\begin{remark} It is possible to use a weaker condition such as simultaneous
geometric ergodicity or simultaneous polynomial ergodicity instead
of simultaneous uniform ergodicity to imply the containment condition, but for the
purpose of introducing the Auto Adapt algorithm, we do not pursue
them here.\end{remark}
\section{Example algorithms}\label{method}
We present one specific approach as an example of an Auto Adapt algorithm.
Our approach to ``outer adaptation'' will be to identify the ``worst-mixing dimension''
(i.e., some parameter or latent state of the statistical model) and update the kernel by
assigning different sampler(s) for that dimension. To explain the method, we
will give some terminology for describing our algorithm. In particular, we will define a valid kernel,
MCMC efficiency, and worst-mixing dimension. We will define a set of candidate samplers for
a given dimension, which could include scalar samplers or block samplers.
In either case, a sampler may also have internal adaptation for each parameter or combination.
To implement the internal clock of each sampler ($c_{n}$ of the general algorithm), we need to
formulate all internal adaptation in the framework using equation \ref{eq:2}.
We use $P$ (without subscripts) in this section to represent $P_{\iota, \theta}$
of the general theory, so the kernel and parameters are implicit.
\subsection{Valid kernel}\label{sec:valid}
Assume our model of interest is $\mathcal{M}$, which could be represented
as a graphical model where vertices or nodes represent states or data
while edges represent dependencies among them. Here we are using ``state''
as Bayesians do to mean any dimension of the model to be sampled by
MCMC, including model parameters and latent states. We denote the
set of all dimensions of the target distribution as $\mathcal{X}=\{\mathcal{X}_{1},\ldots,\mathcal{X}_{m}\}$.
Since we will construct a new MCMC kernel as an ordered set of samplers
at each outer iteration, it is useful to define requirements for a
kernel to be valid. We require that each kernel, if used on its own,
would be a valid MCMC to sample from the target distribution $\pi(X),\, X\in\mathcal{X}$
(typically defined from Bayes' Rule as the conditional distribution
of states given the data). This is the case if the kernel leaves $\pi$
invariant, $\pi P=\pi$, as holds in particular when each sampler satisfies detailed balance.
In more detail, we need to ensure that a new MCMC kernel does not
omit some subspace of $\mathcal{X}$ from mixing. Write the kernel
$P$ as a composition of samplers $P_{i},\, i=1,\ldots,j,$
such that $P=P_{j}P_{j-1}\cdots P_{1}$. With some abuse of language,
$P$ is a valid kernel if each sampler $P_{i}$ operates on a non-empty
subset $b_{i}$ of $\mathcal{X}$, satisfying ${\displaystyle \bigcup_{i=1}^{j}b_{i}=\mathcal{X}}$.
At iteration $n$, assume the kernel is $P^{(n)}$ and the samples
are $X_{n}=(X_{n,1},\ \ldots,\ X_{n,m})$ where the set of initial
values is $X_{0}$. For each dimension $\mathcal{X}_{k}$, $k=1,\ldots,m$
let $\mathbf{X}_{k}=\{X_{0,k},\ X_{1,k},\ \ldots\}$ be the scalar
chain of samples of $\mathcal{X}_{k}$.
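As a minimal illustration of this validity requirement, the following Python helper (a hypothetical utility, not part of any MCMC package) checks that a proposed set of sampler blocks $b_{1},\ldots,b_{j}$ consists of non-empty subsets whose union covers every dimension:

```python
def is_valid_kernel(blocks, dims):
    """Check the validity condition: every sampler's block b_i is a
    non-empty subset of the model dimensions, and the blocks jointly
    cover all of them (their union is the full set of dimensions)."""
    dims = set(dims)
    blocks = [set(b) for b in blocks]
    return (all(blocks)                              # each b_i is non-empty
            and all(b <= dims for b in blocks)       # each b_i is a subset
            and set().union(*blocks) == dims)        # the union covers X
```

Note that overlapping blocks are allowed; only full coverage and non-emptiness are required.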
\subsection{Worst mixing state and MCMC efficiency}\label{sec:worst}
We define MCMC efficiency for state $\mathcal{X}_{k}$ from a sample
of size $N$ from kernel $P$ as effective sample size per computation
time
\[
\omega_{k}(N,P)=\frac{N/\tau_{k}(P)}{t(N,P)},
\]
where $t(N,P)$ is the computation time for kernel $P$ to run $N$
iterations (often $t(N,P)\approx Nt(1,P)$) and $\tau_{k}(P)$ is
the integrated autocorrelation time for chain $\mathbf{X}_{k}$ defined
as
\[
\tau_{k}=1+2\sum_{i=1}^{\infty}\mathrm{cor}(X_{0,k},X_{i,k}),
\]
\citet{straatsma1986estimation}. The ratio $N/\tau_{k}$ is the effective
sample size (ESS) for state $\mathcal{X}_{k}$ \citep{roberts2001optimal}.
Note that $t(N,P)$ is computation time for the entire kernel, not
just samplers that update $\mathcal{X}_{k}$. $\tau_{k}$ can be interpreted
as the number of effective samples per actual sample. The worst-mixing
state is defined as the state with minimum MCMC efficiency among all
states. Let $k_{min}$ be the index of the worst-mixing state, that
is
\[
k_{min}=\arg\min_{k}\tau_{k}^{-1}.
\]
Since the worst mixing dimension will limit the validity of the entire
posterior sample \citep{thompson2010graphical}, we define the efficiency
of a MCMC algorithm as $\omega_{k_{min}}(N,P)$, the efficiency of
the worst-mixing state of model $\mathcal{M}$.
There are several ways to estimate ESS, but we use the $\texttt{effectiveSize}$
function in the R coda package \citep{plummer2006coda, turek2016automated}, since this
function provides a stable estimate of ESS. This method,
which is based on the spectral density at frequency zero,
has been shown to have the highest convergence rate, thus giving more accurate
and stable results \citep{thompson2010graphical}.
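A simple sketch of these quantities is given below; the truncated autocorrelation sum is a crude stand-in for coda's spectral estimator, used here only to illustrate the definitions of $\tau_{k}$ and $\omega_{k}(N,P)$:

```python
import random

def integrated_autocorr_time(chain):
    """Estimate tau = 1 + 2 * sum_i cor(X_0, X_i), summing sample
    autocorrelations until the first non-positive one.  This is a
    simple stand-in for coda::effectiveSize's spectral estimator."""
    n = len(chain)
    mean = sum(chain) / n
    var = sum((x - mean) ** 2 for x in chain) / n
    if var == 0.0:
        return 1.0
    tau = 1.0
    for lag in range(1, n // 3):
        cov = sum((chain[i] - mean) * (chain[i + lag] - mean)
                  for i in range(n - lag)) / n
        rho = cov / var
        if rho <= 0.0:          # truncate at the first non-positive term
            break
        tau += 2.0 * rho
    return tau

def mcmc_efficiency(chain, runtime_seconds):
    """omega_k(N, P) = (N / tau_k) / t(N, P): effective samples per second."""
    return (len(chain) / integrated_autocorr_time(chain)) / runtime_seconds

# Synthetic chains for a quick sanity check: an i.i.d. chain (tau near 1)
# versus a slowly mixing AR(1)-style chain (tau well above 1).
rng = random.Random(0)
iid_chain = [rng.gauss(0, 1) for _ in range(2000)]
x, ar_chain = 0.0, []
for _ in range(2000):
    x = 0.9 * x + rng.gauss(0, 1)
    ar_chain.append(x)
```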
\subsection{Candidate Samplers}\label{sec:candidate}
A set of candidate samplers $\{P_{j},\: j\in\mathcal{S}\}$ is a list
of all possible samplers that could be used for a parameter of the model
$\mathcal{M}$. These may differ depending on the parameter's characteristics
and role in the model (e.g., whether there is a valid Gibbs sampler,
or whether it is restricted to $[0,\infty)$). In addition to univariate
candidate samplers, nodes can also be sampled by block samplers. Denote
by $|b|$ the number of elements of $b$. If $|b_{i}|>1$, $P^{(n)}_{i}$,
the sampler applied to block $b_{i}$ at iteration $n$, is called
a block sampler; otherwise it is a univariate or scalar sampler.
In the examples below we considered up to four univariate candidate
samplers and three kinds of block samplers. The univariate samplers
included adaptive random walk (ARW), adaptive random walk on a log
scale (ARWLS) for states taking only positive real values, Gibbs samplers
for states with a conjugate prior-posterior pairing, and slice samplers.
The block samplers included adaptive random walk with multivariate
normal proposals, automated factor slice sampler \citep{tibbits2014automated} (slice samplers in
a set of orthogonal rotated coordinates), and automated factor random
walk (univariate random walks in a set of orthogonal rotated coordinates).
These choices are by no means exhaustive but serve to illustrate the
algorithms here.
\subsubsection{Block samplers and how to block}\label{sec:block}
\citet{turek2016automated} suggested different ways to block the states
efficiently: (a) based on correlation clustering, or (b) based on model
structure. Here we use the first method.
At each iteration, we use the generated samples to create the empirical
posterior correlation matrix. To stabilize the estimation, all of the samples
are used to compute a correlation
matrix $\rho_{d\times d}$. This in turn is used to make a distance
matrix $D_{d\times d}$ where $D_{i,j}=1-|\rho_{i,j}|$ for $i\neq j$
and $D_{i,i}=0$ for every $i$, $j$ in $1,\ldots,d$. To control
the minimum absolute pairwise correlation within clusters, we construct a
hierarchical cluster tree from the distance matrix $D$ \citep[chapter 4]{everitt2011hierarchical}. Given a selected
height, we cluster the hierarchical tree into distinct groups of states.
Different parts of the tree may have different optimal
heights for forming blocks. Instead of using a global height to cut the tree, we only choose a block that
contains the worst-mixing state from the cut and keep the other
nodes intact. Adaptively, at each outer iteration, the algorithm will try
to obtain a less correlated cluster for a chosen block sampler to
improve on the efficiency. In our implementation, we use the R function $\texttt{hclust}$
to build the hierarchical clustering tree with ``complete linkage''
from the distance matrix
$D$. By construction, the absolute correlation between states within
each group is at least $1-h$ for $h$ in $[0,1]$. We then use the R
function $\texttt{cutree}$ to choose a block that contains the worst-mixing state.
This process is justified in the sense that the partitioning
adapts according to the model structure through the posterior correlation.
The details and validity of the block sampling in our general framework are provided in Appendix A.
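The following Python sketch mirrors the hclust/cutree procedure described above (our implementation uses R; the function below is an illustrative analogue that performs complete-linkage agglomeration on $D_{ij}=1-|\rho_{ij}|$, cuts at a chosen height, and returns only the cluster containing the worst-mixing state):

```python
import numpy as np

def block_for_worst_state(samples, k_min, height=0.5):
    """Return the block of states to sample jointly with the
    worst-mixing state k_min.  `samples` is an (N, d) array of
    posterior draws.  Complete linkage with a cut at `height` mimics
    R's hclust/cutree; the cut height 0.5 is an assumed default."""
    rho = np.corrcoef(np.asarray(samples), rowvar=False)
    D = 1.0 - np.abs(rho)                    # distance D_ij = 1 - |rho_ij|
    d = D.shape[0]
    clusters = [{j} for j in range(d)]
    while len(clusters) > 1:
        # complete linkage: cluster distance is the maximum pairwise D
        pairs = [(max(D[i, j] for i in a for j in b), ia, ib)
                 for ia, a in enumerate(clusters)
                 for ib, b in enumerate(clusters) if ia < ib]
        dist, ia, ib = min(pairs)
        if dist > height:                    # cut the tree at `height`
            break
        clusters[ia] |= clusters[ib]
        del clusters[ib]
    return sorted(next(c for c in clusters if k_min in c))

# Demo: two strongly correlated states and one independent state.
rng = np.random.default_rng(0)
x0 = rng.normal(size=500)
samples = np.column_stack([x0, x0 + 0.05 * rng.normal(size=500),
                           rng.normal(size=500)])
```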
\subsection{How to choose new samplers}\label{sec:choice}
To choose new samplers to compose a new kernel, we determine the worst-mixing
state and choose randomly
from candidate samplers to replace whatever sampler was updating it
in the previous kernel while keeping other samplers the same. There
are some choices to make when considering a block sampler. If the worst-mixing parameter is $x$,
and the new kernel will use a block sampler for $x$ together with one or more parameters $y$,
we can either keep the current sampler(s) used for $y$ or remove them from the kernel.
Future work can consider other schemes such as changing group of samplers together
based on model structure.
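A schematic version of this replacement step, with purely illustrative sampler names (not tied to any particular MCMC package), might look as follows:

```python
import random

def propose_new_kernel(kernel, k_min, candidates, block=None,
                       keep_old=True, rng=random):
    """Sketch of the sampler-replacement step.  `kernel` maps each
    state (or tuple of states) to a sampler name; `candidates[k]`
    lists the candidate samplers for state k.  If `block` is given, a
    block sampler covering k_min and the other block members is added,
    and their previous samplers are kept or dropped per `keep_old`."""
    new = dict(kernel)
    if block is None:
        # scalar case: redraw the sampler for the worst-mixing state
        new[k_min] = rng.choice(candidates[k_min])
    else:
        new[tuple(sorted(block))] = "block_adaptive_RW"
        new.pop(k_min, None)                 # k_min now sampled in the block
        if not keep_old:
            for j in block:
                if j != k_min:
                    new.pop(j, None)         # drop the old samplers for y
    return new

kernel = {0: "ARW", 1: "slice", 2: "ARW"}
```

Note that either choice leaves the kernel valid in the sense of Section \ref{sec:valid}: every state is still covered by at least one sampler.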
\subsection{Internal clock variables}
\label{sec:internal} In Algorithm \ref{alg1},
$\theta$ represents the internal adaptation parameter of a particular
sampler and $c$ represents its internal clock. In general, an internal
clock variable is defined as a variable used in a sampler to determine
the size of internal adaptation steps such that any internal adaptation
would converge in a typical MCMC setting. An example of an internal clock
variable is a number of internal iterations that have occurred. To
use a sampler in the general framework, we need to establish what
are its internal adaptation and clock variables. A few examples of
internal adaptation variables of different samplers are summarized
as follows:
\begin{itemize}
\item For adaptive random walk: proposal scale is used.
\item For block adaptive random walk: proposal scale and covariance matrix
are used.
\item For automated factor slice sampler: covariance matrix (or equivalent,
i.e. coordinate rotation) is used.
\item For automated factor random walk: covariance matrix (ditto) and proposal
scales for each rotated coordinate axis are used.
\end{itemize}
These internal adaptation variables are set to default initial values when a sampler is first used.
After that, they are retained along with the internal clock variables so that whenever
we revisit a sampler,
we use the stored values to set it up.
This setting guarantees the diminishing adaptation property,
which is essential for the convergence of the algorithm.
Pseudo-code for Auto Adapt MCMC is given in Algorithm \ref{alg1}.
\begin{algorithm}[htp]
\caption{Auto Adapt MCMC} \label{alg1}
\small
\begin{algorithmic}[1]
\INPUT
\Statex Bayesian model with initial state (including latent variables) ${X}_{0}$
\Statex $\{p_{n}, n \in \mathbb{N} |p_n\in(0,1),\lim_{n}p_{n}=0\}$, maximum iteration $M$
\Statex Candidate samplers $\{P_{j}, j \in \mathcal{S}\}$
\Statex $P_{\iota_0, \theta_0}:=$ ordered set of initial samplers $\{P^{(0)}_j\}_{j\in \mathcal{S}}$ from Bayesian model
\Ensure
\Statex An ordered set of samplers $\{P_{i^*}\}_{i^*\in \mathcal{S}}$ with the best MCMC efficiency so far
\State \; Initialize $\mathrm{EFF}$, $\mathrm{EFF_{best}}$, $n$, $\kappa_0$, $\tau_0$, $c_0$ to $0$ \Comment Denote MCMC efficiency $\mathrm{EFF}$
\While {($\mathrm{EFF}$ $\ge$ $\mathrm{EFF_{best}}$) or ($n < M$)}
\State Sample $N$ samples from the current sampler set $\{P_j^{(n)}\}_{j\in \mathcal{S}}$
\State Store internal clocks $c_n$ and adaption variables $\theta_{n}$ for each sampler \Comment Section \ref{sec:internal}
\State Compute $\mathrm{EFF_k}=\omega_{k}(N,P)=\frac{N/\tau_{k}(P)}{t(N,P)}$ \Comment $k$ is an index of parameters
\State Identify $k_{min}=\arg\min_{k}\tau_{k}^{-1}$, $\mathrm{EFF}=\mathrm{EFF_{k_{min}}}$ \Comment See Section \ref{sec:worst}
\If {($\mathrm{EFF}$ $\ge$ $\mathrm{EFF_{best}}$)}
\State Set $\{P_{i^*}\}_{i^*\in \mathcal{S}}=\{P_i^{(n)}\}_{i\in \mathcal{S}}$
\State Set $\mathrm{EFF_{best}}=\mathrm{EFF}$
\Else
\State Set $\{P_i^{(n)}\}_{i\in \mathcal{S}}=\{P_{i^*}\}_{i^*\in \mathcal{S}}$
\EndIf
\State Draw $B_{n+1} \sim \mathrm{Bern}(p_{n+1})\in \{0,1\}$
\If {$B_{n+1}=0$}
\State $\kappa_{n+1}= \kappa_{n}$, $\tau_{n+1}=\tau_n+1$, $c_{n+1}=\kappa_{n+1}+\tau_{n+1}$
\Else
\State $\kappa_{n+1}= \kappa_{n}+1$, $\tau_{n+1}=0$, $c_{n+1}=\kappa_{n+1}+\tau_{n+1}$
\State Set $P^{(n+1)}_{i}=P^{(n)}_{i}, i \neq k_{\mathrm{min}}$, choose $P^{(n+1)}_{k_{\mathrm{min}}}$ from candidate samplers \Comment See Section \ref{sec:choice}
\If {($P^{(n+1)}_{k_{\mathrm{min}}}$ has been used before)}
\State Use $c_n$, $\theta_{n}$ to set up the sampler $P^{(n+1)}_{k_{\mathrm{min}}}$
\Else
\State Use default internal adaptation value of $P^{(n+1)}_{k_{\mathrm{min}}}$ \Comment Section \ref{sec:internal}
\EndIf
\EndIf
\State Set $n = n+1$
\EndWhile
\end{algorithmic}
\end{algorithm}
\section{Examples}\label{example}
In this section, we evaluate our algorithm on some benchmark examples
and compare it with several existing methods. In particular, we compare
our approach to the following MCMC algorithms.
\begin{itemize}
\item All Scalar algorithm: Every dimension is sampled using an adaptive
scalar normal random walk sampler.
\item All Blocked algorithm: All dimensions are sampled in one adaptive
multivariate normal random walk sampler.
\item Default algorithm: Groups of parameters arising from multivariate
distributions are sampled using adaptive multivariate normal random
walk samplers while parameters arising from univariate distributions
are sampled using adaptive scalar normal random walk samplers. In
addition, whenever the structure of model $\mathcal{M}$ permits,
we assign conjugate samplers instead of scalar normal random walk
samplers.
\item Auto Block algorithm: The Auto Block method
\citep{turek2016automated} searches blocking schemes based on
hierarchical clustering from posterior correlations to determine a
highly efficient (but not necessarily optimal) set of blocks that
are sampled with multivariate normal random-walk samplers. Thus,
Auto Block uses only either scalar or multivariate adaptive
random walk, concentrating more on partitioning the correlation
matrix than trying different sampling methods. Note that the
initial sampler of both the Auto Block algorithm and our
proposed algorithm is the All Scalar algorithm.
\end{itemize}
All experiments were carried out using the NIMBLE package \citep{nimble2017}
for R \citep{R2013} on a cluster using $32$ cores of Intel Xeon E5-2680
$2.7$ Ghz with $256$ GB memory. Models are coded using NIMBLE's
version of the BUGS model
declaration language \citep{lunn2000winbugs,lunn2012bugs}. All MCMC
algorithms are written in NIMBLE, which provides user-friendly interfaces
in R and efficient execution in custom-generated C++, including matrix operations in the C++ Eigen
library \citep{guennebaud2010eigen}.
To measure the performance of an MCMC algorithm, we use MCMC
efficiency. MCMC efficiency depends on ESS, estimates of which
can have high variance for a short Markov chain. This presents
a tuning-parameter tradeoff for the Auto Adapt method: Is it better to
move cautiously (in sampler space) by running long chains for each outer adaptation in
order to gain an accurate measure of efficiency, or is it better to
move adventurously by running short chains, knowing that some
algorithm decisions about samplers will be based on noisy efficiency
comparisons? In the latter case, the final samplers may be less optimal,
but that may be compensated by the saved computation time. To explore this
tradeoff, we try our Auto Adapt algorithm with different sample
sizes in each outer adaptation and label results accordingly. For
example, Auto Adapt 10K will refer to the Auto Adapt method with
samples of 10,000 per outer iteration.
We present algorithm comparisons in terms of time spent in an
adaptation phase, final MCMC efficiency achieved, and the time
required to obtain a fixed effective sample size (e.g., 10,000). Only
Auto Block and Auto Adapt have adaptation phases. An important
difference is that Auto Block did not come with a proof of valid
adaptive MCMC convergence (it could be modified to work in the current
framework, but we compare to the published version). Therefore,
samples from its adaptation phase are not normally included in the
final samples, while the adaptation samples of Auto Adapt can be
included.
To measure final MCMC efficiency, we conducted a single long run of
length $N$ with the final kernel of each method solely for the purpose
of obtaining an accurate ESS estimate. One would not normally do such
a run in a real application. The calculation of time to obtain a
fixed effective sample size incorporates both adaptation time and
efficiency of the final samplers. We placed Auto Adapt and Auto Block
on a level playing field by assuming for this
calculation that samples are not retained from the adaptation phase,
making the results conservative.
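The calculation is simple enough to state in code; the sketch below reproduces two rows of Table \ref{table:1} from the reported efficiencies and adaptation times:

```python
def time_to_effective_samples(n_eff, efficiency, adapt_time=0.0):
    """Seconds needed to reach n_eff effective samples: n_eff/efficiency,
    plus the adaptation time for the adaptive methods (assumed,
    conservatively, to contribute no retained samples)."""
    return adapt_time + n_eff / efficiency

# Reproducing two rows of Table 1 (litters model):
static_time = time_to_effective_samples(10000, 0.5855)          # All Blocked
auto_block = time_to_effective_samples(10000, 12.1205, 21.97)   # Auto Block
```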
For all comparisons, we used $20$ independent runs of each method and
present the average results from these runs. To show the variation in
runs, we present boxplots of efficiency in relation to computation
time from the $20$ runs of Auto Adapt. The final (right-most) boxplot
in each such figure shows the $20$ final efficiency estimates from
larger runs. Not surprisingly, these can be lower than obtained by
shorter runs. These final estimates are reported in the tables.
A public Github repository containing scripts for reproducing our
results may be found at https://github.com/nxdao2000/AutoAdaptMCMC.
Some additional experiments are also provided there.
\subsection{Toy example: A random effect model}
We consider the ``litters'' model, which is an original example
model provided with the MCMC package WinBUGS. This model is chosen
because of its notoriously slow mixing, which is due to the strong
correlation between parameter pairs. It
is desirable to show how much improvement can be achieved compared to other approaches
on this benchmark example. The purpose of using a simple example is to establish the
potential utility of the Auto Adapt approach, while saving more advanced applications for future work.
In this case, we show that our algorithm
indeed outperforms the other approaches by a significant margin. The
model specification, following \citet{deely1981bayes} and
\citet{kass1989approximate}, is as follows.
Suppose we observe data in two groups. In each group, the data
$y_{ij}$, $j=1,\ldots,n$, are conditionally independent given
the parameters $p_{ij}$, with the observation density
\[
y_{ij}\sim \mathrm{Bin}(n_{ij},p_{ij}).
\]
In addition, assume that $p_{ij}$ for fixed $i$ are conditionally
independent given the ``hyperparameters'' $\alpha_{i}$, $\beta_{i}$, with conjugate density
\[
p_{ij}\sim \mathrm{Beta}(\alpha_{i},\beta_{i}).
\]
Assume that the $\alpha_{i}$, $\beta_{i}$ follow the prior densities,
\[
\alpha_{1}\sim \mathrm{Gamma}(1,0.001),
\]
\[
\beta_{1}\sim \mathrm{Gamma}(1,0.001),
\]
\[
\alpha_{2}\sim \mathrm{Uniform}(0,100),
\]
\[
\beta_{2}\sim \mathrm{Uniform}(0,50).
\]
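For concreteness, the hierarchy can be forward-simulated as below; the litter and trial counts are assumptions for illustration (the paper fits the actual WinBUGS litters data), and Gamma$(1,0.001)$ is taken with the rate parameterization, i.e. an Exponential$(0.001)$:

```python
import random

def simulate_litters(n_litters=16, n_trials=10, seed=1):
    """Forward simulation of the litters hierarchy, purely to
    illustrate its structure; the sizes here are assumptions."""
    rng = random.Random(seed)
    # hyperpriors: group 1 Gamma(1, 0.001) = Exponential(0.001),
    # group 2 uniform, as in the model statement
    alpha = [rng.expovariate(0.001), rng.uniform(0, 100)]
    beta = [rng.expovariate(0.001), rng.uniform(0, 50)]
    y = []
    for i in range(2):                                   # two groups
        row = []
        for _ in range(n_litters):
            p = rng.betavariate(alpha[i], beta[i])       # p_ij | alpha_i, beta_i
            row.append(sum(rng.random() < p              # Binomial(n_trials, p_ij)
                           for _ in range(n_trials)))
        y.append(row)
    return y

y = simulate_litters()
```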
\begin{table}{}
\begin{center}
\caption{Summary results of different MCMC algorithms for the litters model. Runtime is given
in seconds, and efficiency is in units of effective samples produced
per second of algorithm runtime. Time to $N$ effective samples is computed as $N/\mathrm{efficiency}$ for
static algorithms, and as that quantity plus adaptation time for the Auto Block and Auto Adapt algorithms.}
\label{table:1}
\vspace{2mm}
\begin{tabular}{lrrrrr}
\hline
\hline
Algorithms& \rule[-1.5mm]{0mm}{4mm}\hspace{2mm} Adapt time
& \hspace{2mm} Efficiency & \hspace{2mm} Time to 10000 effective samples\\
\hline
All Blocked & 0.00 & 0.5855 & 17079\\
Default & 0.00 & 1.8385 & 5439\\
All Scalar & 0.00 & 1.6870 & 5928\\
Auto Block & 21.97 & 12.1205 & 847 \\
Auto Adapt 10K & 1.14 & 9.6393 & 1038 \\
Auto Adapt 20K & 2.33 & 11.5717 & 866 \\
Auto Adapt 50K & 6.03 & 14.3932 & 701 \\
\hline
\end{tabular}
\end{center}
\end{table}
Following the setup of \citet{rue2005gaussian,turek2016automated},
we could jointly sample the top-level parameters and conjugate latent
states, since the beta-binomial conjugacy relationships allow the use
of what \citet{turek2016automated} call cross-level sampling; for demonstration purposes, however, we do not
include this here.
Since the litters model mixes poorly, we run a large
number of iterations ($N=300000$) to produce stable estimates of
final MCMC efficiency. We start both the Auto Block and Auto
Adapt algorithms from All Scalar and adaptively explore the
space of all given candidate samplers. We run Auto Adapt with
either 10000, 20000 or 50000 iterations per outer adaptation.
Results (Table \ref{table:1}) show that Auto Block generates
samples with MCMC efficiency about seven-fold, six-fold and
twenty-fold that of the All Scalar, Default and All
Blocked methods, respectively. We can also see that as the outer
adaptation sample size increases, the performance of Auto Adapt
improves. Final MCMC efficiencies of Auto Adapt 10K, Auto Adapt
20K and Auto Adapt 50K are 80\%, 95\% and 118\% of MCMC
efficiency of Auto Block, respectively. In addition, the
adaptation time for all cases of Auto Adapt is much shorter than
for Auto Block. Combining adaptation time and final efficiency
into the resulting time to 10000 effective samples, we see that in this
case, larger samples in each outer iteration are worth their
computational cost.
Figure \ref{fig:toy1} shows boxplots computed from 20 independent
runs of the All Blocked, All Scalar, Auto Block, Default and Auto
Adapt 50K algorithms on the litters model. The left panel confirms
that the MCMC efficiency of Auto Block clearly dominates that of the
other static algorithms. The right panel shows that the MCMC
efficiency of Auto Adapt 50K gradually improves with time. The
right-most boxplot verifies that the MCMC efficiency of the samplers
selected by the Auto Adapt algorithm (computed from a large run) is
slightly better than that of the Auto Block algorithm. Last but not
least, the Auto Adapt algorithms have a further practical advantage
over Auto Block: every sample can be retained, whereas the Auto Block
algorithm discards the samples from its adaptation phase.
\begin{figure}
\begin{centering}
\vspace{-0.5cm}
\includegraphics[width=8cm,height=8cm]{litterAutoBlockEfficiency.pdf}\includegraphics[width=8cm,height=8cm]{litter50000BoxplotEfficiency.pdf}
\par\end{centering}
\caption{MCMC efficiencies of different methods for the litters model. The left panel shows box-plots of the
MCMC efficiencies of the All Blocked, All Scalar, Auto Block and Default algorithms, computed from 20 replications.
The right panel shows box-plots of the MCMC efficiencies of the Auto Adapt 50K algorithm computed from 20 replications at each outer adaptation. The last (right-most) box-plot is
computed from the chain of 300000 samples generated. The time axis shows the average computational time over the 20 replications.}
\label{fig:toy1}
\end{figure}
\subsection{Generalized linear mixed models}
Many MCMC algorithms do not scale well to high-dimensional problems.
To test the capabilities of our algorithm in such situations, we consider
a relatively large generalized linear mixed model (GLMM) \citep{gelman2006data}.
We make use of the Minnesota Health Plan dataset \citep{waller1997log}
for this model following the setup of \citet{zipunnikov2006monte}.
Specifically, let $y_{ikl}$ be the count for subject $i$ ($i=1,\ldots,121$
senior-citizens), event $k$ (either visited or called), period $l$
(one of four 6-month periods). Assume that
\[
y_{ikl}|u_{i}^{\Sigma}\sim \mathrm{Poisson}(\mu_{ikl})
\]
with log link,
\[
\log\mu_{ikl}=a_{0}+a_{k}+b_{l}+c_{kl}+\gamma_{i}+v_{ik}+\omega_{il},\ k=1,2,\:\mathrm{and}\: l=1,2,3,4.
\]
Here the fixed effect coefficients are $a_{k}$, $b_{l}$ and $c_{kl}$. To
achieve identifiability, we set $a_{2}=b_{4}=c_{14}=c_{21}=c_{22}=c_{23}=c_{24}=0$.
Priors for the non-zero parameters, $\beta=(a_{0},\ a_{1},\ b_{1},\
b_{2},\ b_{3},\ c_{11},\ c_{12},\ c_{13})$, are:
\[
a_{0}\sim \mathrm{N}(0,0.001)
\]
\[
a_{1}\sim \mathrm{N}(0,0.001)
\]
\[
b_{l}\sim \mathrm{N}(0,0.001)\;\mathrm{for\: l}=1,2,3
\]
\[
c_{1l}\sim \mathrm{N}(0,0.001)\;\mathrm{for\: l}=1,2,3.
\]
The random effect variables are $\gamma_{i}$, $v_{ik},$
$\omega_{il}$. Their distributions are:
\[
\sigma_{\gamma}^{2}\sim \mathrm{N}(0,10)
\]
\[
\gamma_{i}\sim \mathrm{N}(0,\sigma_{\gamma})
\]
\[
\sigma_{v}^{2}\sim \mathrm{N}(0,10)
\]
\[
v_{ik}\sim \mathrm{N}(0,\sigma_{v})
\]
\[
\sigma_{\omega}^{2}\sim \mathrm{N}(0,10)
\]
\[
\omega_{il}\sim \mathrm{N}(0,\sigma_{\omega})
\]
\begin{table}{}
\begin{center}
\caption{Summary results of different MCMC algorithms for the GLMM model. Runtime is given
in seconds, and efficiency is in units of effective samples produced
per second of algorithm runtime. Time to $N$ effective samples is computed as $N/\mathrm{efficiency}$ for
static algorithms, and as that quantity plus adaptation time for the Auto Block and Auto Adapt algorithms.}
\label{table:2}
\vspace{2mm}
\begin{tabular}{lrrrrr}
\hline
\hline
Algorithms& \rule[-1.5mm]{0mm}{4mm}\hspace{2mm} Adapt time
& \hspace{2mm} Efficiency & \hspace{2mm} Time to 10000 effective samples\\
\hline
All Blocked & 0.00 & 0.0031 & 6451613\\
Default & 0.00 & 0.4641 & 43094\\
All Scalar & 0.00 & 0.4672 & 42808\\
Auto Block & 1019.35 & 0.8420 & 12896\\
Auto Adapt 5K & 247.15 & 0.5289 & 19154 \\
Auto Adapt 10K & 465.92 & 0.7349 & 14072 \\
Auto Adapt 20K & 1017.38 & 0.8594 & 12652 \\
\hline
\end{tabular}
\end{center}
\end{table}
It should be noted that the GLMM model is by far the largest example
considered, containing nearly $2000$ stochastic model components,
which include both observations and a large number of independent
random effects. Since this example is rather
computationally intensive, we run our Auto Adapt algorithm with
fewer iterations per outer adaptation, and fewer iterations to
estimate final efficiency, than we did
for the litters model. Specifically, we used sample sizes of 5000, 10000
and 20000 per outer adaptation, and $N=50000$ for computing
final efficiency.
In this example (Table \ref{table:2}), All Scalar sampling produces
MCMC efficiency of about $0.47$, while the All Blocked algorithm,
which consists of a single block sampler of
dimension $858$, has MCMC efficiency of approximately $0.003$. In this
case, All Blocked samples all $858$ dimensions jointly, which
requires computation time roughly three times that of All Scalar
and yields only a rather low ESS. The Default algorithm performs
similarly to All Scalar but they both perform much worse
than Auto Block and Auto Adapt. In this example, it is clear that all Auto Adapt and
Auto Block methods have dramatic improvements even when we take
into account the adaptation time. Amongst these Auto methods,
Auto Block performs slightly worse than Auto Adapt 20K in both
computational time and MCMC efficiencies. Overall, Auto Adapt 20K
appears to be the most efficient method in terms of time to 10000 effective
samples. One interpretation is that Auto Adapt 20K trades off well between
adaptation time and MCMC efficiency in this model.
Figure \ref{fig:GLMM} shows that the Auto Adapt algorithm is very
competitive with the Auto Block algorithm. This comes both from the
flexibility to trade off the number of outer adaptations against the
adaptation time needed to reach a good sampler, and from the larger
space of kernels being explored. Since MCMC efficiency is highly
dependent upon hierarchical model structure, using scalar and
multivariate normal random walks alone, as done by the Auto Block
algorithm, can be quite limiting. Auto Adapt can overcome this
limitation with the flexibility to choose different types of samplers.
We will see this more strongly in the next example, where the model is
more complex.
\begin{figure}
\begin{centering}
\vspace{-0.5cm}
\includegraphics[width=8cm,height=8cm]{GLMMAutoBlockEfficiency.pdf}\includegraphics[width=8cm,height=8cm]{GLMM20000BoxplotEfficiency.pdf}
\par\end{centering}
\caption{MCMC efficiencies of different methods for the GLMM model. The left panel shows box-plots of
MCMC efficiencies of the All Blocked, All Scalar, Auto Block and Default algorithms computed from 20 replications.
The right panel shows box-plots of MCMC efficiencies of the Auto Adapt 20K algorithm computed from 20 replications at each outer adaptation. The last (rightmost) box-plot is
computed from the chain of 50000 samples generated. The time axis shows the average computational time over 20 replications.}
\label{fig:GLMM}
\end{figure}
\subsection{Spatial model}
In this section, we consider a hierarchical spatial model as the final
example, using the classical scallops dataset. This dataset is chosen
because we want to compare our approach with other standard approaches
in the presence of spatial dependence. The data record scallop
abundance at 148 locations along the coastline from New York to New
Jersey in 1993, surveyed by the Northeast Fisheries Science Center of
the National Marine Fisheries Service and made publicly available at
http://www.biostat.umn.edu/\textasciitilde{}brad/data/myscallops.txt.
It has been analyzed many times, for example by
\citet{ecker1994geostatistical,ecker1997bayesian,banerjee2014hierarchical}
and references therein. Following
\citet{banerjee2014hierarchical}, we assume the log-abundance
$\mathbf{g}=(g_{1},\ldots,g_{N})$ follows a multivariate normal
distribution with mean $\bm{\mu}$ and covariance matrix $\mathbf{\Sigma}$,
defined by covariances that decay exponentially as a function of distance. Specifically,
let $y_{i}$ be the measured scallop abundance at site $i$,
$d_{i,j}$ be the distance between sites $i$ and $j$, and $\rho$ be the
range parameter governing the decay of correlation with distance. Then
\[
\mathbf{g}\sim\mathrm{N}\left(\bm{\mu},\mathbf\Sigma\right),
\]
where each component $\Sigma_{ij}=\sigma^{2}\exp(-d_{i,j}/\rho)$.
We model observations as $y_{i} \sim
\mathrm{Poisson}(\exp(g_{i}))$. Priors for $\sigma$ and
$\rho$ are uniform over a large range of interest.
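A minimal sketch of the resulting (unnormalized) log posterior is below; the uniform priors are reduced to positivity checks, and all names and dimensions are illustrative rather than taken from the authors' code.

```python
import numpy as np
from scipy.stats import multivariate_normal, poisson

def log_posterior(g, mu, sigma, rho, y, dist):
    """Unnormalized log posterior for the spatial model:
    g ~ N(mu, Sigma) with Sigma_ij = sigma^2 * exp(-d_ij / rho),
    y_i ~ Poisson(exp(g_i)); flat priors on sigma and rho are
    reduced here to positivity checks."""
    if sigma <= 0 or rho <= 0:
        return -np.inf
    Sigma = sigma**2 * np.exp(-dist / rho)
    lp = multivariate_normal.logpdf(g, mean=mu, cov=Sigma)
    lp += poisson.logpmf(y, np.exp(g)).sum()
    return lp

# Toy data at 5 sites (the scallops data has 148 sites).
rng = np.random.default_rng(0)
coords = rng.uniform(0, 10, size=(5, 2))
dist = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
g_true = rng.normal(1.0, 0.5, size=5)
y = rng.poisson(np.exp(g_true))
lp = log_posterior(g_true, np.ones(5), 1.0, 2.0, y, dist)
```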
The parameters in the posterior distribution are expected to be correlated,
as the covariance structure induces a trade-off between $\sigma$
and $\rho$. This posterior can be sampled well by the Auto Block algorithm,
and we would like to show that our approach can achieve even higher
efficiency with a lower computational cost of adaptation.
This spatial model is computationally expensive
to estimate. Therefore, we use the Auto Adapt 5K, Auto Adapt
10K and Auto Adapt 20K algorithms for comparison and run
$N=50000$ iterations for estimating final efficiency.
As can be seen from Table \ref{table:3}, the All Blocked and
Default algorithms mix very poorly, resulting in extremely low
efficiencies of 0.01 and 0.002, respectively. The All Scalar
algorithm, while achieving a higher ESS, runs slowly because large matrix
calculations are needed for every univariate sampler. The Auto Block
algorithm, on the other hand, selects an optimal threshold to cut the
entire hierarchical clustering tree into different groups, increasing
the ESS about 3 times. With a few small blocks, the computational cost of
Auto Block is somewhat cheaper than that of the All Scalar algorithm. As a
result, its mean efficiency is about 3.5 times that of All
Scalar. Meanwhile, our Auto Adapt 5K, 10K and 20K algorithms perform
best. It should be noted that the Auto Adapt algorithms achieve
good mixing with adaptation times that are only 15.5\%, 32.5\% and
59\% of the adaptation time of Auto Block. In Figure \ref{fig:spatial},
while the left panel shows a distinction between Auto Block and
the static algorithms, the right panel shows that Auto Adapt 20K
surpasses Auto Block in just a few outer iterations, indicating
substantial improvements in some models.
\begin{table}{}
\begin{center}
\caption{Summary results of different MCMC algorithms for the spatial model. Adaptation time and runtime are in seconds, and efficiency is in units of effective samples produced
per second of algorithm runtime. Time to $N$ effective samples is computed as $N/\mathrm{efficiency}$ for
static algorithms, and as that plus adaptation time for the Auto Block and Auto Adapt algorithms.}
\label{table:3}
\vspace{2mm}
\begin{tabular}{lrrr}
\hline
\hline
Algorithms& \rule[-1.5mm]{0mm}{4mm}\hspace{2mm} Adapt time
& \hspace{2mm} Efficiency & \hspace{2mm} Time to 10000 effective samples\\
\hline
All Blocked & 0.00 & 0.0100 & 1000000\\
Default & 0.00 & 0.0020 & 5000000\\
All Scalar & 0.00 & 0.1150 & 86956\\
Auto Block & 19094.89 & 0.3565 & 47145\\
Auto Adapt 5K & 2967.56 & 0.4420 & 25592 \\
Auto Adapt 10K & 6221.61 & 0.4565 & 28127 \\
Auto Adapt 20K & 11278.78 & 0.4948 & 31488 \\
\hline
\end{tabular}
\end{center}
\end{table}
\begin{figure}
\begin{centering}
\vspace{-0.5cm}
\includegraphics[width=8cm,height=8cm]{spatialAutoBlockEfficiency.pdf}\includegraphics[width=8cm,height=8cm]{spatial5000BoxplotEfficiency.pdf}
\par\end{centering}
\caption{MCMC efficiencies of different methods for the spatial model. The left panel shows box-plots of
MCMC efficiencies of the All Blocked, All Scalar, Auto Block and Default algorithms computed from 20 replications.
The right panel shows box-plots of MCMC efficiencies of the Auto Adapt 20K algorithm computed from 20 replications at each outer adaptation. The last (rightmost) box-plot is
computed from the chain of 50000 samples generated. The time axis shows the average computational time over 20 replications.}
\label{fig:spatial}
\end{figure}
\section{Discussion}\label{discussion}
We have proposed a general Auto Adapt MCMC
algorithm. Our algorithm traverses a space of valid MCMC kernels to
find an efficient algorithm automatically. There is only one previous
approach, namely Auto Block sampling, of this kind that we are
aware of. We have shown that our approach can substantially outperform
Auto Block in some cases, and that both outperform simple static
approaches. On benchmark models, we observe that our
approach can yield orders-of-magnitude improvements.
The comparisons presented have deliberately used fairly simple
samplers as options for Auto Adapt in order to avoid comparisons among
vastly different computational implementations. A major feature of
our framework is that it can incorporate almost any sampler as a
candidate and almost any strategy for choosing new kernels from
compositions of samplers based on results so far. Samplers to be
explored in the future could include auxiliary variable algorithms such
as slice sampling or derivative-based sampling algorithms such as
Hamiltonian Monte Carlo \citep{duane1987hybrid}. Now that the basic
framework is established and shown to be useful in simple cases, it
merits extension to more advanced cases.
The Auto Adapt method can be viewed as a generalization of the
Auto Block method. It is more general in the sense that it can use
more kinds of samplers and explore the space of samplers more
generally than the correlation-clustering of Auto Block. Thus, our framework
can be considered to provide a broad class of automated kernel construction
algorithms that use a wide range of sampling algorithms as components.
If block sampling is included in the space of the candidate samplers,
choosing optimal blocks is important and can greatly increase the
efficiency of the algorithm. For this reason, we extended the cutting
of a hierarchical cluster tree to allow different cut heights on
different branches (different parts of the model). This differs from
Auto Block, which forms all blocks by cutting the entire tree at the
same height. We also include multivariate adaptive samplers other than
the multivariate normal random walk, such as the automated factor
slice sampler and the automated factor random walk sampler. With these
extensions, the final efficiency achieved by our algorithm,
specifically among blocking schemes, is often substantially better and
is found in a shorter time.
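The branch-specific tree cutting can be illustrated with standard hierarchical clustering tools. In the sketch below, the `keep_together` rule uses a per-branch merge-height threshold as a simple stand-in for the efficiency-driven decisions our algorithm actually makes; the toy correlation matrix is hypothetical.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, to_tree

def cut_tree_variable(node, keep_together):
    """Recursively cut a hierarchical clustering tree, allowing
    different cut heights on different branches. keep_together(node)
    returns True if all leaves under node should form one block."""
    if node.is_leaf() or keep_together(node):
        return [node.pre_order()]            # one block of leaf indices
    return (cut_tree_variable(node.left, keep_together)
            + cut_tree_variable(node.right, keep_together))

# Toy posterior correlation matrix for 6 parameters:
# parameters 0-2 strongly correlated, 3-4 moderately, 5 independent.
corr = np.eye(6)
for i, j, r in [(0, 1, .9), (0, 2, .9), (1, 2, .9), (3, 4, .6)]:
    corr[i, j] = corr[j, i] = r
dist = 1.0 - np.abs(corr)

# Condensed distance vector for average-linkage clustering.
iu = np.triu_indices(6, k=1)
Z = linkage(dist[iu], method="average")
root = to_tree(Z)

# Branch-specific rule: keep a subtree together if it merged below 0.5.
blocks = cut_tree_variable(root, lambda nd: nd.dist < 0.5)
```

A single-height cut (as in Auto Block) corresponds to the special case where the same threshold is applied from the root down every branch.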
Beyond hierarchical clustering, there are other approaches one might
consider to find efficient blocking schemes. One such approach
would be to use the structure of the graph instead of posterior
correlations to form blocks. This would allow conservation of
calculations that are shared by some parts of the graph, whether or
not they are correlated. Another future direction could be to improve
how a new kernel is determined from previous results, essentially to
determine an effective strategy for exploring the very
high-dimensional kernel space. Finally, the tradeoff
between computational cost and the accuracy of effective sample
size estimates is worth further exploration.
\bibliographystyle{chicago}